Denise Persson, Laura Langdon & Scott Holden
>> Hello, everyone, we're here at the Data Cloud Summit and we have a real treat for you. I call it the CMO Power Panel. We're going to explore how data is transforming marketing, branding and promotion, and with me are three phenomenal marketing pros and chief marketing officers: Denise Persson, the CMO of Snowflake, Scott Holden of ThoughtSpot, and Laura Langdon of Wipro. Folks, great to see you. Thanks so much for coming on the Cube. >> Great to be with you, David. >> Awesome. Denise, let's start with you. I want to talk about the changing role of the CMO. It's changed a lot, you know, of course, with all this data, but I wonder what you're experiencing, and can you share with us why marketing, especially, is being impacted by data? >> Well, data is really what has helped us marketers turn ourselves into revenue drivers instead of cost centers, and that's definitely a much better place to be. We can today measure things that were never possible before. What I'm personally most excited about is the real-time access to data we have today. In the past, we used to get stale reports, you know, weeks after a marketing program was over. Today we get data in real time as our campaigns are up and running, and this is really what enables us to make those real-time adjustments to our investments. And that really has a profound impact on the results we're having. And also today, you know, more than ever, adaptability is truly the superpower of marketing, and data is really what allows us to adapt to our customers' preferences in real time. And that's really critical at this time. >> That's interesting what you say because, you know, in tough times it used to be, okay, sales and engineering, put a brick wall around those, and, you know, marketing, say okay, cut. But now it's like you go to marketing and say, okay, what does the data say? How do we have to pivot? And Scott,
I wonder what data and cloud have really brought to the modern marketer that you might not have had before this modern era? >> Well, this era, I don't think there's ever been a better time to be a marketer than there is right now, and the primary reason is that we have access to data and insights like we've never had. And I'm not exaggerating when I say that I have 100 times more access to data than I had a decade ago. It's just phenomenal when you look at the power of cloud, search, AI, these new consumer experiences for analytics. We can do things in seconds that used to take days, and so it's become, as Denise said, a superpower for us to have access to so much data. And, you know, COVID has been hard. A lot of our marketing teams have never worked harder, making this pivot from the physical world to the virtual world. But they're, you know, at least we're working. And the other part of it is that digital has created this phenomenal opportunity for us, because the beauty of digital and digital transformation is that everything now is trackable, which makes it measurable, and means that we can actually get insights that we can act on in a smarter way. And, you know, it's worth giving an example. If you just look at this show, right, like this event that we're doing: in a physical world, all of you watching at home, you'd be in front of us in a room, and we'd be able to know if you're in the room, right? We'd track it with the scanners when you walked in. But that's basically it. At that point, we don't really get a good sense for how much you like what we're saying. Maybe you filled out a survey, but only 5 to 10% of people ever do that. In the digital world, we know how long you stick around. And as a result, like, it's easy, people can just, with a click, you know, change the channel. And so the bar for content has gone way up as we do these events, but we know how long people are sticking around, and that's what's so special about it.
You know, Denise and her team, as the hosts of this show, they're going to know how long people watch this segment, and that knowing is powerful. I mean, it's as simple as using a product like ThoughtSpot: you could just ask a question, how many, you know, what's the average view time by session, and boom, a chart pops up. You're going to know what's working and what's not, and that's something that you can take and act on in the future. And that's what our customers are doing. So, you know, Snowflake and ThoughtSpot, we share a customer with Hulu, and they're tracking programs, so what people are watching at home, how long they're watching, what they're watching next, and they're able to do that in a super granular way and improve their content as a result. And that's the power of this new world we live in, that's made the cloud and data so accessible to folks like us. >> Well, thank you for that. And I want to come back to that notion to understand how you're bringing data into your marketing office. But I want to bring in Laura. Laura, Wipro, you guys partner with a lot of brands, a lot of companies around the world, I mean, thousands of partners, and obviously Snowflake and ThoughtSpot are two. How are you using data to optimize these co-marketing relationships? You know, specifically, what are the trends that you're seeing around things like customer experience? >> So, you know, we use data for all of our marketing decisions, our own as well as with our partners. And I think what's really been interesting about partner marketing data is we can feed that back to our sales team, right? So it's very directional for them as well in their efforts moving forward. So I think that's a place where, specifically to partners, it's really powerful. We can also use our collective data to go out to customers to better effect. And then, you know, regarding these trends, we just did a survey on the state of the intelligent enterprise.
We interviewed 300 companies, US and UK, and there were three interesting, I thought, statistics relevant to this. Only 22% of the companies that we interviewed felt that their marketing was where it needed to be from an automation standpoint. So lots of room for us to grow, right? Lots of space for us to play. And 61% of them believed that it was critical that they implement this technology to become a more intelligent enterprise. But when they ranked readiness by function, marketing came in sixth, right? So HR, R&D, and finance were all ahead of marketing. It was followed by sales, you know. And then the final data point that I think was interesting was 40% of those agreed that while the technology was the most important thing, thought leadership was critical, you know? And I think that's where marketers really can bring, you know, our tried-and-true experience to bear and merge it with this technology. >> Great, thank you. So, Denise, I've been getting the Kool-Aid injection this week around the Data Cloud and have been pushing people on it. But now that I have the CMO in front of me, I want to ask about the Data Cloud and what it means specifically for customers, and what are some of the learnings, maybe, that you've experienced that can support some of the things that Laura and Scott were just discussing. >> Yeah, as Scott said before, right, he has 100 times more data than he ever had before. And again, if you look at all the companies we talk to around the world, it's not the amount of data that they have that is the problem, it's the ability to access that data. That data, for most companies, is trapped across silos across the organization. It's in data applications, systems of record. Some of that data sits with your partners that you want access to, and that's really where the Data Cloud comes in. The Data Cloud is really mobilizing that data for you.
It brings all that data together for you in one place, so you can finally access that data and really provide ubiquitous access to that data to everyone in your organization that needs it, and can truly unlock the value of that data. And from a marketing perspective, I mean, we are responsible for the customer experience, you know, we provide to our customers. And if you have access to all the data on your customers, that's when you have that customer 360 that we've all been talking about for so many years. And if you have all that data, you can truly, you know, look at their buying behaviors, put all those dots together and create those exceptional customer experiences. You can do things such as the retailers do in terms of personalization, for instance, right? And those are the types of experiences our customers are expecting today. They are expecting a 100% personalized experience for them, all the time. And if you don't have all the data, you can't really put those experiences together at scale. And that is really where the Data Cloud comes in. Again, the Data Cloud is not only about mobilizing your own data within your enterprise. It's also about having access to data from your partners, or extending access to your own data in a secure way to your partners within your ecosystems. >> Yeah, so I'm glad you mentioned a couple of things. I've been writing about this a lot, and particularly that customer 360 that we've been dying for but haven't really been able to tap. I didn't call it the Data Cloud, I don't have a marketing gene, I had another sort of boring name for it, but I think there are, you know, similar vectors there. So I appreciate that. Scott, I want to come back to this notion of building data DNA in your marketing, you know, fluency, and how you put data at the core of your marketing ops. I've been working with a lot of folks in banking and manufacturing and other industries that are struggling to do this. How are you doing it?
What are some of the challenges that you can share, and maybe some advice for your peers out there? >> Yeah, sure. Well, you brought up this concept of data fluency, and it's an important one. There's been a lot of talk in the industry about data literacy and being able to read data, but I think it's more important to be able to speak data, to be fluent. And as marketers, we're all storytellers, and when you combine data with storytelling, magic happens. So getting to data fluency is a great goal for us to have for all of the people in our companies. And to that end, I think one of the things that's happening is that people are hiring wrong, and they're making some mistakes. A couple of things come to mind, especially when I look at marketing teams that I'm familiar with. They're hiring a lot of data analysts and data scientists, and those folks are amazing, and every team needs them. But if you go too big on that, you do yourself a disservice. The second key thing is that you're basically giving your front lines, your marketing managers or people on the front lines, an excuse not to get involved with data. And I think that's a big mistake, because it used to be really hard, but with the technologies available to us now, these new consumer-like experiences for data analytics, anybody can do it. And so we as leaders have to encourage them to do it. And I'll give you just, you know, an example. I've got about 32 people on my marketing team, and I don't have any data analysts on my team. Across our entire company, we have a couple of analysts and a couple of data engineers, and what's happening is the world is changing, where those folks are enablers: they architect the system, they bring in the different data sources, they use technologies like Snowflake, which has been so great at making it easier for people to hook technology together, and they get data out of it quickly.
But they're pulling it together, and then for simple things like, hey, I just want to see this weekly instead of monthly, you don't need to waste your expensive data science talent. Gartner puts a stat out there that 50% of data scientists are doing basic visualization work. That's not a good use of their time. The products are easy enough now that everyday marketing managers can do that. And when you have a marketing manager come to you and say, you know, I just figured out this campaign, which looks great on the surface, is doing poorly, from our perspective, that's a magic moment. And so we all need to coach our teams to get there. And I would say, you know, lead by example, give them an opportunity to access data and turn it into a story, that's really powerful. And then, lastly, praise people who do it. Using it as something to celebrate inside our companies is a great way to kind of get this initiative going. >> I love it. You're talking about democratizing data, making it self-service, people feel ownership. You know, Laura, Denise started talking about the ecosystem, and you're kind of the ecosystem pro here. How does the ecosystem help marketers succeed? Maybe you could talk about the power of many versus the resources of one. >> Sure, you know, I think it's a game changer, and it will continue to be. And I think it's really the next level for marketers to harness this power that's out there and use it. You know, it's something that's important to us, but it's also something we're starting to see our customers demand. You know, we went from a one-size-fits-all solution to, they want to bring the best in class to their organization. We all need to be really agile and flexible right now, and I think this ecosystem allows that. You know, you think about the power of a Snowflake mining data for you, and then a ThoughtSpot really giving you the dashboard to have what you want.
And then, of course, an implementation partner like a Wipro coming in and really being able to plug in whatever else you need to deliver. And I think it's really super powerful. And I think it gives us, you know, it just gives us so much to play with and so much room to grow as marketers. >> Thank you. Denise, why don't you bring us home? We're almost out of time here, but marketing, art, science, both, what are your thoughts? >> Definitely both. And I think that's the exciting part about marketing. It is a balancing act between art and science. Clearly, it's probably more science today than it used to be, but the art part is really about inspiring change. It's about changing people's behavior and challenging the status quo, right? That's the art part. The science part, that's about making the right decisions all the time, right, making sure we are truly investing in what's going to drive revenue for us. >> Guys, thanks so much for coming on the Cube. Great discussion, really appreciate it. Thank you for watching, everybody. We're here at the Data Cloud Summit. A lot of great content, so keep it right there. We'll be right back right after this short break.
Holden Karau, Google | Flink Forward 2018
>> Narrator: Live from San Francisco, it's the Cube, covering Flink Forward, brought to you by Data Artisans. (tech music) >> Hi, this is George Gilbert. We're at Flink Forward, the user conference for the Apache Flink community, sponsored by Data Artisans. We are in San Francisco. This is the second Flink Forward conference here in San Francisco. And we have a very eminent guest with a long pedigree, Holden Karau, formerly of IBM, and of Apache Spark fame, putting Apache Spark and Python together. >> Yes. >> And now, Holden is at Google, focused on the Beam API, which is an API that makes it possible to write portable stream processing applications across Google's Dataflow, as well as Flink and other stream processors. >> Yeah. >> And Holden has been working on integrating it with the Google TensorFlow framework, also open-sourced. >> Yes. >> So, Holden, tell us about the objective of putting these together. What type of use cases.... >> So, I think it's really exciting. And it's still very early days, I want to be clear. If you go out there and run this code, you are going to get a lot of really weird errors, but please tell us about the errors you get. The goal is really, and we see this in Spark with the pipeline APIs, that most of our time in machine learning is spent doing data preparation. We have to get our data in a format where we can do our machine learning on top of it. And the tricky thing about the data preparation is that we also often have to have a lot of the same preparation code available to use when we're making our predictions. And what this means is that a lot of people essentially end up having to write, like, a stream-processing job to do their data preparation, and they have to write a corresponding online serving job to do similar data preparation when they want to make real predictions.
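[Editor's note: a minimal, pure-Python sketch of the train/serve skew problem Holden describes here — this is not Beam or tf.Transform code, and every name in it is made up for illustration. The point is that one preprocessing function, shared by the training job and the serving path, removes the drift that comes from writing the same prep logic twice.]

```python
# Editor's sketch (hypothetical, illustrative only): the same preprocessing
# function is reused by the batch/stream training job and by the online
# serving path, so the two can never disagree on tokenization or hashing.

def prepare(record):
    """Tokenize a raw text record and feature-hash it into a fixed space.

    The bucket count (1024) is arbitrary for this sketch.
    """
    tokens = record.lower().split()
    return [hash(tok) % 1024 for tok in tokens]

# Training time: the pipeline applies prepare() to historical records.
training_features = [prepare(r) for r in ["Spark ML rocks", "beam is portable"]]

# Serving time: the *same* function runs on each incoming record, so new
# inputs are hashed exactly the way the model saw them during training.
serving_features = prepare("Spark ML rocks")

assert serving_features == training_features[0]  # no train/serve skew
```

Because `prepare` is the single source of truth, a change to tokenization or hashing automatically applies to both paths — the mistake Holden mentions, changing one variable in one place and forgetting the other, can't happen.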
And by integrating tf.Transform and things like this into the Beam ecosystem, the idea is that people can write their data preparation in a simple, uniform way that can be taken from training time into online serving time, without having to rewrite their code, removing the potential for mistakes where we, like, change one variable slightly in one place and forget to update it in another, and just really simplifying the deployment process for these models. >> Okay, so help us tie that back to, in this case, Flink. >> Yes. >> And also to clarify, that data prep.... My impression was data prep was a different activity. It was like design time, and serving was run time. But you're saying that they can be better integrated? >> So, there are different types of data prep. Some types of data prep would be things like removing invalid records, and if I'm doing that, I don't have to do that at serving time. But one of the classic examples of data prep would be tokenizing my inputs, or performing some kind of hashing transformation. And if I do that, when I get new records to predict, they won't be in a pre-tokenized form, or they won't be hashed correctly, and my model won't be able to serve on these sort of raw inputs. So I have to re-create the data prep logic that I created for training at serving time. >> So, by having the common Beam API and the common provider underneath it, like Flink and TensorFlow, it's the repeatable activities for transforming data to make it ready to feed to a machine-learning model that you want.... It would be ideal to have those transformation activities be common in your prep work and then in the production serving. >> Yes, very true. >> So, tell us what type of customers want to write to the Beam API and have that portability? >> Yeah, so that's a really good question.
So, there's a lot of people who really want portability outside of Google Cloud, and that's one group of people: essentially, people who want to adopt different Google Cloud technologies, but they don't want to be locked into Google Cloud forever, which is completely understandable. There are other people who are more interested in being able to switch streaming engines, like, they want to be able to switch between Spark and Flink. And those are people who want to try out different streaming engines without having to rewrite their entire jobs. >> Does Spark Structured Streaming support the Beam API? >> So, right now, the Spark support for Beam is limited. It's in the old DStream API, it's not on top of the Structured Streaming API. It's a thing we're actively discussing on the mailing list, how to go about doing it, because there are a lot of intricacies involved in bringing new APIs in line. And since it already works there, there's less of a pressure, but it's something that we should look at more. Where was I going with this? So the other one that I see is, like, Flink is a wonderful API, but it's very Java-focused. And so, Java's great, everyone loves it, but a lot of cool things that are being done nowadays are being built in Python, like TensorFlow. There's a lot of really interesting machine learning and deep learning stuff happening in Python. Beam gives a way for people to work with Python across these different engines. Flink supports Python, but it's maybe not a first-class citizen. And the Beam Python support is still a work in progress. We're working to get it to be better, but it's.... You can see the demos this afternoon, although if you're not here, you can't see the demo, but you can see the work happening in GitHub. And there's also work being done to support Go. >> To support Go. >> Which is a little out of left field.
>> So, would it be fair to say that the value of Beam, for potential Flink customers, is that they can start on Google Cloud Platform, or they can start on one of several stream processors, they can move to another one later, and they also inherit the better language support, or bindings, from the Beam API? >> I think that's very true. The better language support, it's better for some languages, it's probably not as good for others. It's somewhat subjective, what better language support is. But I think definitely for Go, it's pretty clear. This stuff is all in the master branch, it's not released today. But if people are looking to play with it, I think it's really exciting. They can go and check it out from GitHub and build it locally. >> So, what type of customers do you see who have moved into production with machine learning? >> So the.... >> And the streaming pipelines? >> The biggest customer that's in production, obviously, or not obviously, is Spotify. One of them is Spotify. They give a lot of talks about it. Because I didn't know we were going to be talking today, I didn't have a chance to go through my customer list and see who's okay with us mentioning them publicly, so I'll just stick with Spotify. >> Without the names, the sort of use cases and the general industry.... >> I don't want to get in trouble. >> Okay. >> I'm just going to ... sorry. >> Okay. So then, let's talk about, does Google view Dataflow as their sort of strategic successor to MapReduce? >> Yes, so.... >> And is that a competitor then to Flink? >> I think Flink and Dataflow can be used in some of the same cases, but I think they're more complementary. Flink is something you can run on-prem, you can run it with different vendors. And Dataflow is very much like, "I can run this on Google Cloud."
And part of the idea with Beam is to make it so that people who want to write Dataflow jobs, but maybe want the flexibility to go back to something else later, can still have that. Yeah, we couldn't swap in Flink for the Dataflow execution engine if we're on Google Cloud, but.... How do I put it nicely? Provided people are running this stuff, they're burning CPU cycles, I don't really care if they're running Dataflow or Flink as the execution engine. Either way, it's a party for me, right? >> George: Okay. >> It's probably one of those sort of friendly competitions, where we both push each other to do better and add more features to the respective projects. >> Okay, 30-second question. >> Cool. >> Do you see people building stream processing applications with machine learning as part of it to extend existing apps, or for ground-up new apps? >> Totally. I mostly see it as extending existing apps. This is obviously, possibly a bias, just from the people that I talk to. But going ground-up with both streaming and machine learning at the same time, like, starting both of those projects fresh, is a really big hurdle to get over. >> George: For skills. >> For skills. It's really hard to pick up both of those at the same time. It's not impossible, but it's much more likely you'll build something ... maybe you'll build a batch machine learning system, realize you want to productionize your results more quickly, or you'll build a streaming system and then want to add some machine learning on top of it. Those are the two paths that I see. I don't see people jumping head-first into both at the same time. But this could change. Batch has been king for a long time, and streaming is getting its day in the sun. So we could start seeing people becoming more adventurous and doing both at the same time. >> Holden, on that note, we'll have to call it a day. That was most informative. >> It's really good to see you again. >> Likewise. So this is George Gilbert.
We're on the ground at Flink Forward, the Apache Flink user conference, sponsored by Data Artisans. And we will be back in a few minutes after this short break. (tech music)
Holden Karau, IBM Big Data SV 17 #BigDataSV #theCUBE
>> Announcer: Big Data Silicon Valley 2017. >> Hey, welcome back, everybody, Jeff Frick here with The Cube. We are live at the historic Pagoda Lounge in San Jose for Big Data SV, which is associated with Strata + Hadoop World across the street, as well as Big Data Week, so everything big data is happening in San Jose. We're happy to be here, love the new venue. If you're around, stop by, back of the Fairmont, Pagoda Lounge. We're excited to be joined in this next segment by, who's now become a regular — any time we're at a big data event, a Spark event, Holden always stops by. Holden Karau, she's a principal software engineer at IBM. Holden, great to see you. >> Thank you, it's wonderful to be back yet again. >> Absolutely, so the big data meme just keeps rolling. Google Cloud Next was last week, a lot of talk about AI and ML, and of course you're very involved in Spark, so what are you excited about these days? What are you, I'm sure you've got a couple presentations going on across the street. >> Yeah, so my two presentations this week, oh wow, I should remember them. So the one that I'm doing today is with my co-worker Seth Hendrickson, also at IBM, and we're going to be focused on how to use structured streaming for machine learning. And I think that's really interesting, because streaming machine learning is something a lot of people seem to want to do but aren't yet doing in production, so it's always fun to talk to people before they've built their systems. And then tomorrow I'm going to be talking with Joey on how to debug Spark, which is something that, you know, a lot of people ask questions about, but I tend to not talk about, because it tends to scare people away, and so I try to keep the happy going. >> Jeff: Bugs are never fun. >> No, no, never fun.
Just picking up on that structured streaming and machine learning, there's this issue of, as we move more and more towards the industrial internet of things, having to process events as they come in and make a decision. There's a range of latency that's required. Where do structured streaming and ML fit today, and where might that go? >> So structured streaming, for today, latency-wise, is probably not something I would use for something like that right now. It's in the sub-second range, which is nice, but it's not what you want for, like, live serving of decisions for your car, right? That's just not going to be feasible. But I think it certainly has the potential to get a lot faster. We've seen a lot of renewed interest in MLlib-local, which is really about making it so that we can take the models that we've trained in Spark and really push them out to the edge, and sort of serve them at the edge, and apply our models on end devices. So I'm really excited about where that's going. To be fair, part of my excitement is that someone else is doing that work, so I'm very excited that they're doing this work for me. >> Let me clarify on that, just to make sure I understand. So there's a lot of overhead in Spark, because it runs on a cluster, because you have an optimizer, because you have the high availability or the resilience. And so you're saying we can preserve the predict, and maybe serve, part and carve out all the other overhead for running in a very small environment. >> Right, yeah. So I think for a lot of these IoT devices and stuff like that, it actually makes a lot more sense to do the predictions on the device itself, right? These models generally are megabytes in size, and we don't need a cluster to do predictions on these models, right? We really need the cluster to train them, but I think for a lot of cases, pushing the prediction out to the edge node is actually a pretty reasonable use case.
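[Editor's note: a minimal sketch of why on-device serving is cheap, as Holden argues above. This is illustrative pure Python, not MLlib-local code; the weights and bias are made-up values standing in for a model exported from an offline training job.]

```python
# Editor's sketch (hypothetical values): a trained linear model is just a
# small bag of numbers, and prediction is a dot product plus a sigmoid.
# No cluster, optimizer, or Spark runtime is needed on the device itself.

import math

# Weights and bias as exported from an offline training job (made up here).
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = 0.1

def predict(features):
    """Logistic-regression-style score for one event, computed on-device."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability in (0, 1)

score = predict([1.0, 0.2, 3.0])
print(round(score, 3))  # a probability; no cluster was involved
```

The cluster's job is producing `WEIGHTS` in the first place; once trained, a megabytes-scale (here, bytes-scale) model serves happily on an edge node.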
And so I'm really excited that we've got some work going on there. >> Taking that one step further, we've talked to a bunch of people, both at GE, at their Minds and Machines show, and IBM's Genius of Things, where you want to be able to train the models up in the cloud, where you're getting data from all the different devices, and then push the retrained model out to the edge. Can that happen in Spark, or do we have to have something else orchestrating all that? >> So actually pushing the model out isn't something that I would do in Spark itself, I think that's better served by other tools. Spark is not really well suited to large amounts of internet traffic, right. But it's really well suited to the training, and I think with MLlib local we'll essentially be able to provide both sides of it, and the copy part will be left up to whoever it is that's doing the work, because if you're copying over a cell network you need to do something very different than if you're broadcasting over XM or something like that — for satellite you need to do something very different. >> If you're at the edge on a device, would you be actually running, like you were saying earlier, structured streaming, with the prediction? >> Right, I don't think you would use structured streaming per se on the edge device, but essentially there would be a lot of code shared between structured streaming and the code that you'd be using on the edge device. And it's being factored out now so that we can have this code sharing in Spark machine learning. And you would use structured streaming maybe on the training side, and then on the serving side you would use your custom local code. >> Okay, so tell us a little more about Spark ML today and how we can democratize machine learning, you know, for a bigger audience. >> Right, I think machine learning is great, but right now you really need a strong statistical background to be able to apply it effectively.
And we probably can't get rid of that for all problems, but I think for a lot of problems, doing things like hyperparameter tuning can actually give really powerful tools to regular engineering folks who are smart, but maybe don't have a strong machine learning background. And Spark's ML pipelines make it really easy to construct multiple stages and then just say: okay, I don't know what these parameters should be, I want you to do a search over what these different parameters could be for me. And that makes it really easy to do this as a regular engineer with less of an ML background. >> Would that be, just for those of us who don't know what hyperparameter tuning is, the knobs, the variables? >> Yeah, it's going to spin the knobs on, say, our regularization parameter on our regression, and it can also spin some knobs on maybe the n-gram sizes that we're using on the inputs to something else, right. And it can compare how these knobs interact with each other, because often you can tune one knob, but you actually have six different knobs that you want to tune, and if you just explore each one individually, you're not going to find the best setting for them working together. >> So this would make it easier for, as you're saying, someone who's not a data scientist to set up a pipeline that lets you predict. >> I think so, very much. I think it brings a lot of the benefits from the SciPy world to the big data world. And SciPy is really wonderful about making machine learning really accessible, but it's just not ready for big data, and I think this does a good job of bringing the same concepts, if not the code, to big data. >> SciPy, if I understand, is it a notebook that would run essentially on one machine? >> SciPy can be put in a notebook environment, and generally it would run on, yeah, a single machine.
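The point about knobs interacting — tuning each one individually missing the best joint setting — can be shown with a toy example. This is plain Python, not Spark's grid-search API, and the validation-loss function is entirely invented to create an interaction between the two knobs:

```python
from itertools import product

# Made-up validation loss where the knobs interact:
# the best regularization depends on which n-gram size you picked.
def validation_loss(reg, ngram):
    return (reg - 0.1 * ngram) ** 2 + 0.01 * (ngram - 3) ** 2

regs = [0.0, 0.1, 0.2, 0.3]
ngrams = [1, 2, 3]

# Tune each knob individually, holding the other at a default...
best_reg = min(regs, key=lambda r: validation_loss(r, ngrams[0]))
best_ngram = min(ngrams, key=lambda n: validation_loss(regs[0], n))
one_at_a_time = validation_loss(best_reg, best_ngram)

# ...versus searching the full grid jointly, as a pipeline grid search would.
joint = min(product(regs, ngrams), key=lambda p: validation_loss(*p))
joint_loss = validation_loss(*joint)

print(one_at_a_time, joint_loss)  # the joint search does strictly better here
```

The one-at-a-time search gets stuck because each knob looks best relative to the other knob's default; the joint search finds the combination where both line up.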
>> And so to make that sit on Spark means that you could then run it on a cluster-- >> So this isn't actually taking SciPy and distributing it, this is just stealing the good concepts from SciPy and making them available for big data people. Because SciPy's done a really good job of making a very intuitive machine learning interface. >> So just to put a fine qualifier on one thing: if you're doing the internet of things and you have Spark at the edge and you're running the model there, it's the programming model — structured streaming is one way of programming Spark, but if you don't have structured streaming at the edge, would you just be using the core batch Spark programming model? >> So at the edge you wouldn't even be using batch, right, because you're trying to predict individual events, so you'd just be calling predict with every new event that you're getting in. And you might have a queue mechanism of some type. But essentially if we had this batching, we would be adding additional latency, and at the edge — the reason we're moving the models to the edge is to avoid the latency. >> So just to be clear then, the programming model: it wouldn't be structured streaming, and we're taking out all the overhead that forced us to use batch with Spark. The reason I'm trying to clarify is a lot of people have had this question for a long time, which is: are we going to have a different programming model at the edge from what we have at the center? >> Yeah, that's a great question. And I don't think the answer is finished yet, but I think the work is being done to try and make it look the same.
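The per-event serving loop described above — no batching, just calling predict on each event as it arrives off a queue — can be sketched with the stdlib queue. The threshold model and the event fields are invented for illustration:

```python
from queue import Queue

# Hypothetical edge-side model: the threshold was "trained" elsewhere.
THRESHOLD = 10.0

def predict(event):
    # One event in, one decision out -- no micro-batch, no added latency.
    return "alert" if event["reading"] > THRESHOLD else "ok"

events = Queue()
for reading in [3.2, 14.7, 9.9]:
    events.put({"reading": reading})

decisions = []
while not events.empty():
    decisions.append(predict(events.get()))

print(decisions)  # ['ok', 'alert', 'ok']
```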
Of course, you know, trying to make it look the same — this is Boosh, she's not actually barking at us right now; even though she looks like a dog, she's stuffed — there will always be things which are a little bit different from the edge to your cluster, but I think Spark has done a really good job of making things look very similar in the single node case and the multi node case, and I think we can probably bring the same thing to ML. >> Okay, so it's almost like we're coming back around: Spark took us from single machine to cluster, and now we have to essentially bring it back for an edge device that's really lightweight. >> Yeah, I think at the end of the day, just from a latency point of view, that's what we have to do for serving. For some models, not for everyone. If you're building a website with a recommendation system, you don't need to serve that model on the edge node, that's fine, but if you've got a car device, we can't depend on cell latency, right — you have to serve that in the car. >> So what are some of the other things that IBM is contributing to the ecosystem that you see having a big impact over the next couple years? >> So there's a lot of really exciting things coming out of IBM. And I'm obviously pretty biased. I spend a lot of time focused on Python support in Spark, and one of the most exciting things is coming from my co-worker Brian — I'm not going to say his last name in case I get it wrong, but Brian is amazing — and he's been working on integrating Arrow with Spark, and this can make it a lot easier to interoperate between JVM languages and Python and R, so I'm really optimistic about the Python and R interfaces improving a lot in Spark and getting a lot faster as well. And in addition to the Arrow work, we've got some work around making it a lot easier for people in R and Python to get started.
The R stuff is mostly actually the Microsoft people — thanks Felix, you're awesome. I don't actually know which camera I should have done that to, but that's okay. >> I think you got it! >> But Felix is amazing, and the other people working on R are too. But I think we've both been pursuing making it so that people who are in the R or Python spaces can just use pip install, conda install, or whatever tool it is they're used to working with, to bring Spark onto their machine really easily, just like they would any other software package that they're using. Because right now, for someone getting started in Spark, if you're in the Java space it's pretty easy, but if you're in R or Python you have to do a lot of weird setup work, and it's worth it, but if we can get rid of that friction, I think we can get a lot more people in these communities using Spark. >> Let me see, just as a scenario: R Server is getting fairly well integrated into SQL Server, so would you be able to use R as the language, with a Spark execution engine, and somehow integrate it into SQL Server as an execution engine for doing the machine learning and predicting? >> You definitely — well, I shouldn't say definitely — you probably could do that. I don't necessarily know if that's a good idea, but that's the kind of stuff that this would enable, right. It'll make it so that people that are making tools in R or Python can just use Spark as another library, and it doesn't have to be this really special setup. It can just be a library; they point it at the cluster and it can do whatever work they want it to do. That being said, with the SQL Server R integration, if you find yourself using that to do distributed computing, you should probably take a step back and rethink what you're doing. >> George: Because it's not really scale out. >> It's not really set up for that.
And you might be better off connecting your Spark cluster to your SQL Server instance using JDBC or a special driver and doing it that way, but you definitely could do it in the other, inverted sort of way. >> So last question from me: if you look out a couple years, how will we make machine learning accessible to a bigger and bigger audience? And I know you touched on the tuning of the knobs, hyperparameter tuning — what will it look like ultimately? >> I think ML pipelines are probably what things are going to end up looking like. But I think the other part that we'll see is a lot more examples of how to work with certain kinds of data, because right now, I know what I need to do when I'm ingesting some textual data, but I know that because I spent a week trying to figure out what the hell I was doing once, right. And I didn't bother to write it down, and it looks like no one else bothered to write it down. So really I think we'll see a lot of tools that look very similar to the tools we have today — they'll have more options and they'll be a bit easier to use — but I think the main thing that we're really lacking right now is good documentation, good books, and just good resources for people to figure out how to use these tools. Now of course, I'm biased, because I work on these tools, so I'm like, yeah, they're pretty great. So there might be other people who are like, Holden, no, you're wrong, we need to rethink everything. But I think we can go very far with the pipeline concept. >> And that's good, right? The democratization of these things opens it up to more people; you get more creative people solving more different problems, and that makes the whole thing go.
>> You can install Spark easily, you can, you know, set up an ML pipeline, you can train your model, you can start doing predictions. People that haven't been able to do machine learning at scale can get started super easily, and build a recommendation system for their small little online shop and be like, hey, you bought this, you might also want to buy Boosh — she's really cute, but you can't have this one. No no no, not this one. >> Such a tease! >> Holden: I'm sorry, I'm sorry. >> Well Holden, with that, we'll say goodbye for now. I'm sure we will see you in June in San Francisco at the Spark Summit, and we look forward to the update. >> Holden: I look forward to chatting with you then. >> Absolutely, and break a leg this afternoon at your presentation. >> Holden: Thank you. >> She's Holden Karau, I'm Jeff Frick, he's George Gilbert, you're watching The Cube, we're at Big Data SV, thanks for watching. (upbeat music)
Holden Karau, IBM - #BigDataNYC 2016 - #theCUBE
>> Narrator: Live from New York, it's the CUBE from Big Data New York City 2016. Brought to you by headline sponsors, Cisco, IBM, Nvidia, and our ecosystem sponsors. Now, here are your hosts: Dave Vellante and Peter Burris. >> Welcome back to New York City, everybody. This is the CUBE, the worldwide leader in live tech coverage. Holden Karau is here, principal software engineer with IBM. Welcome to the CUBE. >> Thank you for having me. It's nice to be back. >> So, what's with Boo? >> So, Boo is my stuffed dog that I bring-- >> You've got to hold Boo up. >> Okay, yeah. >> Can't see Boo. >> So, this is Boo. Boo comes with me to all of my conferences in case I get stressed out. And she also hangs out normally on the podium while I'm giving the talk as well, just in case people get bored. You know, they can look at Boo. >> So, Boo is not some new open source project. >> No, no, Boo is not an open source project. But Boo is really cute. So, that counts for something. >> All right, so, what's new in your world of Spark and machine learning? >> So, there's a lot of really exciting things, right. Spark 2.0.0 came out, and that's really exciting because we finally got to get rid of some of the chunkier APIs, and datasets are becoming the core base of everything going forward in Spark. This is bringing the Spark SQL engine to all sorts of places, right. So, the machine learning APIs are built on top of the dataset API now. The streaming APIs are being built on top of the dataset APIs. And this is starting to actually make it a lot easier for people to work together, I think. And that's one of the things that I really enjoy — when we can have people from different sorts of profiles or roles work together. And so this support for datasets being everywhere in Spark now lets people with more of a SQL background still write stuff that's going to be used directly in a production pipeline.
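That division of labor — an analyst contributes a SQL expression, an engineer drops it into production code — can be sketched with the stdlib sqlite3 module standing in for the Spark SQL engine. The table, rows, and filter expression below are all invented for illustration:

```python
import sqlite3

# The analyst only supplies a SQL expression (hypothetical filter);
# in real code you would validate it before interpolating it into a query.
ANALYST_EXPR = "amount > 100 AND region = 'west'"

# Engineer-side pipeline code wraps it in a production query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(250.0, "west"), (80.0, "west"), (400.0, "east")])

row = conn.execute(f"SELECT COUNT(*) FROM orders WHERE {ANALYST_EXPR}").fetchone()
print(row[0])  # 1: only the 250.0 / 'west' order matches
```

Neither side has to learn the other's tools: the expression language is the shared interface.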
And the engineers can build whatever production-ready stuff they need on top of the SQL expressions from the analysts, and do some really cool stuff there. >> So, chunky API, what does that mean to a layperson? >> Sure, um, it means, for example, there's this thing in Spark where one of the things you want to do is shuffle a whole bunch of data around and then look at all of the records associated with a given key, right? But, you know, when the APIs were first made, they were made by university students. Very smart university students, but you know, it started out as like a grad school project, right? And so finally with 2.0, we were able to get rid of things like the places where we use traits like iterables rather than iterators. And because of these minor little chunky things, we had to keep supporting this old API, because you can't break people's code in a minor release. But when you do a big release like Spark 2.0, you can actually go: okay, you need to change your stuff now to start using Spark 2.0. And as a result of changing that in this one place, we're actually able to better support spilling to disk. And this is for people who have too much data to fit in memory even on the individual executors. So, being able to spill to disk more effectively is really important from a performance point of view. So, there's a lot of cleanup — getting rid of things which were holding us back performance-wise. >> So, the value is there. Enough value to break the-- >> Yeah, enough value to break the APIs. And 1.6 will continue to be updated for people that are not ready to migrate right today. But for the people that are looking at it, it's definitely worth it, right? You get a bunch of really cool optimizations. >> One of the themes of this event over the last couple of years has been complexity.
You guys wrote an article recently in SiliconANGLE about some of the broken promises of open source — really, the root of it being complexity. So, Spark addresses that to a large degree. >> I think so. >> Maybe you could talk about that and explain to us sort of how, and what the impact could be for businesses. >> So, I think Spark does a really good job of being really user-friendly, right? It has a SQL engine for people that aren't comfortable with writing Scala or Java or Python code. But then on top of that, there's a lot of analysts that are really familiar with Python, and Spark actually exposes Python APIs and is working on exposing R APIs. And this is making it so that if you're working on Spark, you don't have to understand the internals in a lot of depth, right? There's some other streaming systems where, to make them perform really well, you have to have a really deep mental model of what you're doing. But with Spark, it's much simpler and the APIs are cleaner, and they're exposed in the ways that people are already used to working with their data. And because it's exposed in ways that people are used to working with their data, they don't have to relearn large amounts of complexity. They just have to learn it in the few cases where they run into problems, right? Because it will work most of the time just with the sort of techniques that they're used to using. So, I think that it's really cool. Especially structured streaming, which is new in Spark 2.0. And structured streaming makes it so that you can write sort of arbitrary SQL expressions on streaming data, which is really awesome. Like, you can do aggregations without having to sit around and think about how to effectively do an aggregation over different microbatches. That's not a problem for you to worry about. That's a problem for the Spark developers to worry about. Which, unfortunately, is sometimes a problem for me to worry about, but you know, not too often.
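What the engine worries about on the user's behalf is, roughly, keeping running state per key so that each micro-batch updates the answer instead of recomputing everything. A minimal sketch of that idea in plain Python — this is not the structured streaming engine, just the shape of the bookkeeping it hides:

```python
from collections import defaultdict

# Running aggregation state, maintained across micro-batches.
counts = defaultdict(int)

def process_microbatch(state, batch):
    for key in batch:
        state[key] += 1
    return dict(state)  # the continuously updated result table

batches = [["a", "b", "a"], ["b", "c"], ["a"]]
for batch in batches:
    result = process_microbatch(counts, batch)

# Same answer as a one-shot batch aggregation over all the data.
print(result)  # {'a': 3, 'b': 2, 'c': 1}
```

The user just writes `COUNT(*) ... GROUP BY key`; the incremental-update machinery is the engine's problem.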
Boo helps out whenever it gets too stressful. >> First of all, a lot to learn. But there's been some great research done in places like Cornell and Penn and others about how the open source community collaborates and works together. And I'm wondering is the open source community that's building things like Spark, especially in a domain like Big Data, which the use cases themselves are so complex and so important. Are we starting to take some of the knowledge in the contributors, or developing, on how to collaborate and how to work together. And starting to find that way into the tools so that the whole thing starts to collaborate better? >> Yeah, I think, actually, if you look at Spark, you can see that there's a lot of sort of tools that are being built on top of Spark, which are also being built in similar models. I mean, the Apache Software Foundation is a really good tool for managing projects of a certain scale. You can see a lot of Spark-related projects that have also decided that become part of Apache Foundation is a good way to manage their governance and collaborate with different people. But then there's people that look at Spark and go like wow, there's a lot of overhead here. I don't think I'm going to have 500 people working on this project. I'm going to go and model my project after something a bit simpler, right? And I think that both of those are really valid ways of building open source tools on Spark. But it's really interesting seeing there's a Spark components page, essentially, a Spark packages list, for community to publish the work that they're doing on top of Spark. And it's really interesting to see all of the collaborations that are happening there. Especially even between vendors sometimes. You'll see people make tools, which help everyone's data access go faster. And it's open source. so you'll see it start to get contributed into other people's data access layers as well. 
>> So, the pedagogy of how the open source community works is starting to find its way into the tools, so people who aren't in the community, but are focused on the outcomes, are now able to not only gain experience about how big data works, but also how people working on complex outcomes need to work. >> I think that's definitely happening. And you can see that a lot with the collaboration layers that different people are building on top of Spark, like the different notebook solutions, which are all very focused on enabling collaboration, right? Because if you're an analyst and you're writing some Python code on your local machine, you're probably not going to set up a GitHub repo to share that with everyone, right? But if you have a notebook, you can just send the link to your friends and be like, hey, what's up, can you take a look at this? You can share your results more easily and you can also work together a lot more collaboratively. And so Databricks is doing some great things. IBM as well. I'm sure there's other companies building great notebook solutions who I'm forgetting. But the notebooks, I think, are really empowering people to collaborate in ways that we haven't traditionally seen in the big data space before. >> So, collaboration, to stay on that theme. We had eight data scientists on a panel the other night, and collaboration came up, and the question is specifically from an application developer standpoint: as data becomes, you know, the new development kit, how much of a data scientist do you have to become, or are you becoming, as a developer? >> Right, so, my role is very different, right? Because I focus just on tools, mostly. So, my data science is mostly to make sure that what I'm doing is actually useful to other people. Because a lot of the people that consume my stuff are data scientists. So, for me, personally, the answer is not a whole lot.
But for a lot of my friends that are working in more traditional sort of data engineering roles, where they're empowering specific use cases, they find themselves working really closely with data scientists, often asking: okay, what are your requirements? What data do I need to be able to get to you so you can do your job? And, you know, sometimes if they find themselves blocking on the data scientists, they're like, how hard could it be? And it turns out, you know, statistics is actually pretty complicated. But sometimes, you know, they go ahead and pick up some of the tools on their own. And we get to see really cool things with really, really ugly graphs. 'Cause they do not know how to use graphing libraries. But, you know, it's really exciting. >> Machine learning is another big theme in this conference. Maybe you could share with us your perspectives on ML and what's happening there. >> So, I really think machine learning is very powerful. And I think machine learning in Spark is also super powerful. The traditional thing is you down-sample your data, and you train a bunch of your models, and then, eventually, you're like, okay, I think this is the model that I want to build for real. And then you go and you get your engineer to help you train it on your giant data set. But Spark, and the notebooks that are built on top of it, actually mean that it's entirely reasonable for data scientists to take the tools which are traditionally used by the data engineering roles, and just start directly applying them during their exploration phase. And so we're seeing a lot of really more interesting models come to life, right? Because if you're always working with down-sampled data, it's okay, right? Like you can do reasonable exploration on down-sampled data. But once you're working with your full data set, you can find some really cool features that you wouldn't normally find, right?
'Cause you're just not going to have that show up in your down-sampled data. And I think also streaming machine learning is a really interesting thing, right? Because we see there's a lot of IoT devices and stuff like that. And the traditional machine learning thing is: I'm going to build a model and then I'm going to deploy it. And then a week later, I'll maybe consider building a new model. And then I'll deploy it. And so very much it looks like the old software release processes, as opposed to the more agile software release processes. And I think that streaming machine learning can look a lot more like the agile software development processes, where it's like: cool, I've got a bunch of labeled data from our contractors, I'm going to integrate that right away, and if I don't see any regression on my cross-validation set, we're just going to go ahead and deploy that today. And I think it's really exciting. I'm obviously a little biased, because some of my work right now is on enabling machine learning with structured streaming in Spark. So, I obviously think my work is useful — otherwise I would be doing something else. But it's entirely possible, you know, everyone will be like, Holden, your work is terrible. But I hope not. I hope people find it useful. >> Talking about sampling. At our first Hadoop World in 2010, Abhi Mehta — he stopped by again today, of course — made the statement then: Sampling's dead. It's dead. Is sampling dead? >> Sampling didn't quite die. I think we're getting really close to killing sampling. Sampling will only be dead once all of the data scientists in the organization have access to the same tools that the data engineers have been using, right? 'Cause otherwise you'll still be sampling. You'll still be implicitly doing your model selection on down-sampled data. And we'll still probably always find an excuse to sample data, because I'm lazy and sometimes I just want to develop on my laptop.
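Why model selection on down-sampled data goes wrong is easy to demonstrate: a rare-but-informative category may barely register in a laptop-sized sample. A toy sketch with invented data:

```python
import random

random.seed(0)
# Invented event stream: 1% of events are a rare, informative category.
data = ["common"] * 990 + ["rare"] * 10

sample = random.sample(data, 20)  # the laptop-sized down-sample

full_rate = data.count("rare") / len(data)
sample_rate = sample.count("rare") / len(sample)

# A 20-row sample can only estimate the rate in steps of 5%,
# so the 1% phenomenon is either invisible or wildly overstated.
print(full_rate, sample_rate)
```

Whatever the seed, the sample's estimate of a 1% rate is off by at least a full percentage point — the feature either vanishes or looks five times more common than it is.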
But, you know, I think we're getting close to killing a lot more of sampling. >> Do you see an opportunity to start utilizing many of these tools to actually improve the process of building models, finding data sources, identifying individuals that need access to the data? Are we going to start turning big data on the problem of big data? >> No, that's really exciting. And so, okay, so this is something that I find really enjoyable. So, one of the things that traditionally, when everyone's doing their development on their laptop, right? You don't get to collect a lot of metrics about what they're doing, right? But once you start moving everyone into a sort of more integrated notebook environment, you can be like, okay, like, these are data sets that these different people are accessing. Like these are the things that I know about them. And you can actually train a recommendation algorithm on the data sets to recommend other data sets to people. And there are people that are starting to do this. And I think it's really powerful, right? Because it's like in small companies, maybe not super important, right? Because I'll just go an ask my coworker like hey, what data sets do I want to use? But if you're at a company like Google or IBM scale or even like a 500 person company, you're not going to know all of the data sets that are available for you to work with. And the machine will actually be able to make some really interesting recommendations there. >> All right, we have to leave it there. We're out of time. Holden, thanks very much. >> Thank you so much for having me and having Boo. >> Pleasure. All right, any time. Keep right there everybody. We'll be back with our next guest. This is the CUBE. We're live from New York City. We'll be right back.
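The dataset-recommendation idea Holden closes with — suggest to a user the data sets most often used alongside the ones they already touch — can be sketched with simple co-occurrence counting. A real system would likely use a trained recommender (for example ALS in Spark MLlib); the access logs below are entirely made up:

```python
from collections import Counter

# Hypothetical access logs: which data sets each person has used.
accesses = {
    "ana":   {"sales", "web_logs"},
    "bo":    {"sales", "web_logs", "inventory"},
    "carol": {"sales", "inventory"},
}

def recommend(user, logs):
    mine = logs[user]
    counts = Counter()
    for other, theirs in logs.items():
        if other != user and mine & theirs:  # overlapping interests
            counts.update(theirs - mine)     # suggest what they have and I don't
    return [name for name, _ in counts.most_common()]

print(recommend("ana", accesses))  # ['inventory']
```

At a small company you would just ask a coworker; at Google or IBM scale, this kind of signal is how the machine answers the same question.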
Keynote Analysis with Zeus Kerravala | VeeamON 2022
>>Hello, everybody. Welcome to VeeamON 2022, the live version. Yes, we're finally back live. The last time we did VeeamON live was 2019; of course, we did the two subsequent years virtual. My name is Dave Vellante, and we've got two days of wall-to-wall coverage of VeeamON. As usual, Veeam has brought together a number of customers, but it's really doing something different this year. Like many companies, they have a big hybrid event. It's close to 40,000 people online, and that's sort of driving the actual program, where the content is actually different for the virtual viewers versus the onsite attendees. There's the VIP event going on, they've got the keynotes. Veeam is a company that ascended during the VMware rise. They brought in a new way of doing data protection. They didn't use agents; they protected at the hypervisor level. >>That changed the way that people did things. They're now doing it again in cloud, in SaaS, in containers, and in ransomware. And so we're gonna dig into that. My co-host is Dave Nicholson this week, and we've got a special guest, Zeus Kerravala, who is the principal at ZK Research. He's an extraordinary analyst. Zeus, great to see you. Thanks for coming on. >>Absolutely, good to see you, Dave. Great to be here. >>Yeah. We've done VeeamON live before, and things have changed so dramatically. I mean the focus on ransomware, it's now a whole new TAM, the adjacency to security and data protection. Zeus, it's a whole new ballgame, isn't it? >>Well, it is. And in fact, during the keynote they mentioned that they're now tied at number one for, you know, backup and recovery, which I think it's safe to say Veeam does really well. >>I think that's tied with Dell. Right. Although I don't think they mentioned Dell by name. >>And, you know, they've been rising while Dell EMC's been falling. And so I think >>Somebody said Dell lost 10 points of share in the IDC data.
It's not a big surprise. I mean, they haven't really invested a whole lot. >>I think so, anyway. >>Anyways, I think from a Veeam perspective, the question is, now that they've kind of hit that number one spot, or close to it, what do they do next? This company, they mentioned, and I was talking to the CTO yesterday, is holding an exabyte of customer data. That is a lot of data. Right? And so they do backup and recovery really well. They do it arguably better than anybody. And so how do they take that data and then move into other adjacent markets to create not just a backup and recovery company, but a true data management platform company that has relevancy in cyber and analytics and artificial intelligence and data warehousing? Right? All those other areas, I think, are really open territory for this company right now. >>You know, Dave, you were a CTO at EMC when you saw a lot of the acquisitions that the company made, and, you know, they really never had a singular focus on data protection. They had a big data protection business, but that's the differentiator with Veeam: that's all it does. And you see that shine through from a CTO's perspective. How do you see this market changing and evolving, and what's your sense as to how Veeam is doing here? >>I think a lot of it's being driven by, unfortunately, evil genius out in the market space. Yeah. I know we're gonna be hearing a lot about ransomware, a lot about some concepts that we didn't really talk about outside of maybe the defense industry: air gapping, logical air gapping. Zeus, you mentioned, you know, this question of what do you do when you have so many petabytes of data under management. Exabytes now. Exabytes, I'm sorry. Yeah, see, I'm already falling behind. One thing you could do is you could encrypt it all and then ask for Bitcoin in exchange for access to that data. >>Yes. That is what happens a
So we're, we're getting, we're getting so much of the evil genius stuff headed our way. You start, you start thinking in those ways, but yet to, to your point, uh, dedicated backup products, don't address the scale and scope and variety of threats, not just from operational, uh, uh, you know, mishaps, uh, but now from so many bad actors coming in from the outside, it it's a whole new world. >>See us as analysts. We get inundated with ransomware solutions. Everybody's talking about it across the spectrum. The thing that interested me about what's happening here at VEON is they're, they're sort of trotting out this study that they do Veeam does some serious research, you know, thousands of customers that got hit by ransomware that they dug into. And then a, a larger study of all companies, many of whom didn't realize or said they hadn't been hit by ransomware, but they're really trying to inject thought leadership into the equation. You saw some of that in the analyst session this morning, it's now public. Uh, so we could talk about it. What were your thoughts on that data? >>Yeah, that was, uh, really fascinating data cuz it shows the ransomware industry, the response to it is largely reactive, right? We wait to get breach. We wait to, to uh, to get held at ransom I suppose. And then we, a lot of companies paid out. In fact, I thought there's one hospital in Florida, they're buying lots and lots of Bitcoin simply to pay out ransomware attacks. They didn't even really argue with them. They just pay it out. And I think Veeam's trying to change that mentality a little bit. You know, if you have the right strategy in place to be more preventative, you can do that. You can protect your data and then restore it right when you want to. So you don't have to be in that big bucket of companies that frankly pay and actually don't get their data back. Right. >>And like a third, I think roughly >>It's shocking amount of companies that get hit by that. 
And for a lot of companies, that's the end of their business. >>You know, a lot of the recovery process is manual. As a technologist, you understand that that's not the ideal way to go. In fact, it's probably a way to fail. >>Well, recovery's always the problem. When I was in corporate, we used to joke that we were the best at backup, terrible at recovery. Well, you know, that's not atypical. >>My friend Fred Moore, who was the vice president of strategy at a company called StorageTek, Storage Technology Corporation, had a great saying: backup is one thing, recovery is everything. And he said that 30 years ago. But orchestration, and automating that orchestration, is really vital. We saw in the study that a lot of organizations are using scripts, and scripts are fragile. They break. Right? >>Yeah, no, absolutely. Unfortunately, the idea of the red runbook on the shelf is still with us. And, you know, scripting does not necessarily equal automation; in every case, there's still gonna be a lot of manual steps in the process. But what I hope we get to talk about during the next couple of days is some of the factors that go into this. We've got zero-day exploits that have already been uncovered, that are stockpiled and tucked away, and it's inevitable that they're gonna hit. So whether it's a manual recovery process or some level of automation, if you don't have something that is air-gapped and cut off from the rest of the world in a physical or logical way, you can't guarantee recovery. >>The problem with manual processes and scripting is that even if you can set it up today, the environment changes so fast, right? With shadow IT, and business units buying their own services, and users storing things, you know, wherever, you can't keep up with scripts and manual processes. Automation must be the way, and I don't care what part of IT
you work in, whether it's networking, communications, whatever: automation must be the way. I think prior to the pandemic I saw a lot of resistance from IT pros in the area of automation. Since the pandemic, I've seen a lot of warming up to it, because I think IT pros have realized they can't do their jobs without it. >>You don't think that edge devices lend themselves to manual recovery processes? >>No. In fact, I think that's one of the things they didn't talk about. Edge is gonna be huge. Every retailer I talk to, oil and gas companies that have been using it for a long time, manufacturing organizations: they're all looking at edge as a way to put more data in more places to improve experiences, 'cause you're moving the data closer. But we're creating a world where the fragmentation of data, you think it's bad now? Just wait a couple of years until the edge comes a little more to life. And I think you ain't seen nothing yet. This world of data everywhere is truly becoming that. And the thing with edge is there's no one definition of edge. You've got IoT edge, cellular edge, campus edge, right? You look at hotels; they have their own edge. I talked to Major League Baseball, right? Every stadium's got its own edge server in it. So we're moving into a world where we're putting more data in more places. It's more fragmented than ever, and we need better ways of managing and securing that data, but then also of being able to recover when things happen. >>I was having that conversation with Danny Allan. He used the term that we coined, "supercloud," in the analyst meeting today. And that's a metaphor for this new layer of cloud that's developing, to your point, whether it's on-prem, in a hybrid, across clouds, not just running on the cloud, but actually abstracting away the complexity of the underlying primitives and APIs.
And then eventually, to your point, going out to the edge. I don't know of anyone who has a more aggressive edge strategy. Veeam, to its credit, has gone well beyond just virtualization: they've gone to bare metal, into cloud, they were early with containers, they were first at SaaS. They acquired Kasten, who was a partner of theirs; they'd tried to acquire them earlier, but there were some government issues, and, you know, that whole thing got cleaned up, and now they own Kasten. And I think the edge is next. I mean, it's gotta be; there's gonna be so much data at the edge. I guess the question is, where is it today? How much of that is actually persisted? How much goes back to the cloud? I don't think people really have a good answer for that yet. >>No. In fact, a lot of edge services will be very ephemeral in nature. So it's not like with cloud, where we'll take data and store it there forever. With the edge, we're gonna take data and store it there for the point in time we need it. But I think one of the interesting things about Veeam is that because they're decoupled from the underlying hardware, and they can run virtual machines and containers, porting Veeam to whatever platform you have next actually isn't all that difficult. Right? And so if you need to be able to go back to a certain point in time, they can do that instantly. It's a fascinating way to do backup. >>You remember the signs up and down, you know, near the EMC facility right outside of Southborough: "no hardware agenda." That was Jeremy Burton, back in the day. Of course, they had a little hardware agenda. But Veeam doesn't. Veeam is, you know, friendly with all the hardware players, a pure-play software company. A couple of other stats on them: they're a billion-dollar company, and they've now started to talk about their ARR growth.
They grew 27% last year in annual recurring revenue, and 25% in the most recent quarter, and the vast majority of their business is subscription. I think they said 73% is now subscription-based, so they've really transitioned that business. The other thing about Veeam is they've come up with a licensing model that's very friendly.
So it's, it's, it's well enough to have a niche product that addresses a certain segment of the market, but to be able to go in and say all data everywhere, it doesn't matter where it lives. We have you covered. Um, that's a powerful message. And we were talking earlier. I think they, they stand a really good shot at taking market share, you know, on an ongoing basis. >>Yeah. The interesting thing about this market, Dave is they're, you know, although, you know, they're tied to number one with Dell now, they're, it's 12%, right? This reminds me of the security industry five, six years ago, where it's so fragmented. There's so many vendors, no one really stood out right. Then what happened in security? It's a little company called Palo Alto networks came around, they created a platform story. They moved into adjacent markets like SDWAN, they did a lot of smart acquisitions and they took off. I think vem is at that similar point where they've now, you know, that 12% number they've got some capital. Now they could go do some acquisitions that they want do. There's lots of adjacent markets as they talk about this company could be the Palo Alto of the data management market, if you know, and based on good execution. But there's certainly the opportunities there with all the data that they're holding. >>That's a really interesting point. I wanna stay that in a second. So there's obviously, there's, there's backup, there's recovery, there's data protection, there's ransomware protection, there's SAS data protection. And now all of a sudden you're seeing even a company like Rubrik is kind of repositioning as a security play. Yeah. Which I'm not sure that's the right move for a company that's really been focused on, on backup to really dive into that fragmented market. But it's clearly an adjacency and we heard Anan the new CEO today in the analyst segment, you know, we asked him, what's your kinda legacy gonna look like? 
And he said, I want to, I want to, defragment this market he's looking at. Yeah. He wants 25 to 45% of the market, which I think is really ambitious. I love that goal now to your point, agree, he, he sure. But that doubles yeah. >>From today or more, and he gets there to your point, possibly through acquisitions, they've made some really interesting tuck-ins with Castin. They certainly bought an AWS, uh, cloud play years ago. But my, my so, uh, Veeam was purchased by, uh, private equity inside capital inside capital in January of 2020, just before COVID for 5 billion. And at the time, then COVID hit right after you were like uhoh. And then of course the market took off so great acquisition by insight. But I think an IPO is in their future and that's, uh, Zs when they can start picking up some of these adjacent markets through every day. >>And I think one of the challenges for them is now that the Holden XAB bited data, they need to be able to tell customers things they, the customer doesn't know. Right. And that's where a lot of the work they're doing in artificial intelligence machine learning comes into play. Right. And, and nobody does that better than AWS, right? AWS is always looking at your data and telling you things you don't know, which makes you buy more. And so I think from a Veeam perspective, they need to now take all this, this huge asset they have and, and find a way to monetize it. And that's by revealing these key insights to customers that the customers don't even know they have. And >>They've got that monitor monitoring layer. Um, it's if you called it, Danny, didn't like to use the term, but he called it an AI. It's really machine learning that monitors. And then I think makes recommendations. I want to dig into that a little bit with it. >>Well, you can see the platform story starting to build here. Right. And >>Here's a really good point. Yeah. Because they really have been historically a point product company. 
This notion of super cloud is really a platform play. >>Right. And if you look in the software industry, look across any, any segment of the software industry, those companies that were niche that became big became platforms, Salesforce, SAP, Oracle. Right. And, and they find a way to allow others to build on their platform. You know, companies, they think like a Citrix, they never did that. Yeah. And they kind of taped, you know, petered out at a certain level of growth and had to, you know, change. They're still changing their business model, in fact. But I think that's Veeam's at that inflection point, right. They either build a platform story, enable others to do more on their platform or they stagnate >>HP software is another good example. They never were able to get that platform. And we're not able bunch of spoke with it, a non used to work there. Why is it so important Dave, to have a platform over a product? >>Well, cynical, Dave says, uh, you have a platform because it attracts investment and it makes you look cooler than maybe you really are. Um, but, uh, but really for longevity, you have, you, you, you have to be a platform. So what's >>The difference. How do you know when you have platform versus it? APIs? Is it, yeah. Brett, is it ecosystem? >>Some of it is. Some of it is semantics. Look at when, when I'm worried about my critical assets, my data, um, I think of a platform, a portfolio of point solutions for backing up edge data stuff. That's in the cloud stuff that exists in SAS. I see that holistically. And I think guys, you're doing enough. This is good. Don't, don't dilute your efforts. Just keep focusing on making sure that you can back up my data wherever it lives and we'll both win together. So whenever I hear a platform, I get a little bit, a little bit sketchy, >>Well platform, beats products, doesn't >>It? Yeah. To me, it's a last word. You said ecosystem. Yes. 
When you think of the big platform players, everybody B in the customer, uh, experience space builds to build for Salesforce. First, if you're a small security vendor, you build for Palo Alto first, right? Right. If you're in the database, you build for Oracle first and when you're that de facto platform, you create an ecosystem around you that you no longer have to fund and build yourself. It just becomes self-fulfilling. And that drives a level of stickiness that can't be replicated through product. >>Well, look at the ecosystem that, that these guys are forming. I mean, it's clear. Yeah. So are they becoming in your view >>Of platform? I think they are becoming a platform and I think that's one of the reasons they brought on and in, I think he's got some good experience doing that. You could argue that ring kind of became that. Right. The, when, you know, when he was ring central. >>Yeah. >>Yeah. And, uh, so I think some, some of his experiences and then moving into adjacencies, I think is really the reason they brought him in to lead this company to the next level. >>Excellent guys, thanks so much for setting up VEON 20, 22, 2 days of coverage on the cube. We're here at the area. It's a, it's a great venue. I >>Love the area. >>Yeah. It's nice. It's a nice intimate spot. A lot of customers here. Of course, there's gonna be a big Veeam party. They're famous for their parties, but, uh, we'll, we'll be here to cover it and, uh, keep it right there. We'll be back with the next segment. You're watching the cube VEON 20, 22 from Las Vegas.
Mobilizing Data for Marketing: Transforming the Role of the CMO
>>Hello, everyone. We're here at the Data Cloud Summit, and we have a real treat for you. I call it the CMO Power Panel. We're gonna explore how data is transforming marketing, branding, and promotion, and with me are three phenomenal marketing pros and chief marketing officers: Denise Persson, the CMO of Snowflake, Scott Holden of ThoughtSpot, and Laura Langdon of Wipro. Folks, great to see you. Thanks so much for coming on theCUBE. >>Great to be here with you, David. >>Awesome. Denise, let's start with you. I want to talk about the changing role of the CMO. It's changed a lot, of course, with all this data, but I wonder what you're experiencing. Can you share with us why marketing, especially, is being impacted by data? >>Well, data is really what has helped turn us marketers into revenue drivers instead of cost centers, and it's clearly a much better place to be. What I'm personally most excited about is the real-time access we have to data today. In the past, I used to get a stale report a few weeks after a marketing program was over, and by then we couldn't make any changes; the investments were already made. Today we get data in the midst of running a program, so we can reallocate investments while a program is up and running, and that's really profound. I would say that adaptability has truly become the superpower of marketing today, and data is really what enables us to adapt at scale. We can adapt to customers' behavior and preferences at scale, and that's truly a profound new way of working. >>That's interesting what you say, because in tough times it used to be, okay, sales and engineering, put a brick wall around those, and marketing? Okay, cut. But now it's like you go to marketing and say, okay, what does the data say? How do we have to pivot? And Scott,
I wonder, what have data and cloud really brought to the modern marketer that you might not have had before this modern era? >>Well, in this era, I don't think there's ever been a better time to be a marketer than there is right now, and the primary reason is that we have access to data and insights like we've never had. And I'm not exaggerating when I say that I have 100 times more access to data than I had a decade ago. It's just phenomenal. When you look at the power of cloud, search, AI, these new consumer experiences for analytics, we can do things in seconds that used to take days. And so it's become, as Denise said, a superpower for us to have access to so much data. And, you know, COVID has been hard. A lot of our marketing teams have never worked harder, making this pivot from the physical world to the virtual world. But at least we're working. And the other part of it is that digital has created this phenomenal opportunity for us, because the beauty of digital and digital transformation is that everything now is trackable, which makes it measurable, and means we can actually get insights that we can act on in a smarter way. It's worth giving an example. Just look at this show, this event that we're doing. In the physical world, all of you watching at home would be in front of us in a room, and we'd be able to know if you're in the room, right? We'd track you with the scanners when you walked in. But that's basically it. Beyond that, we don't really get a good sense for how much you like what we're saying. Maybe you filled out a survey, but only 5 to 10% of people ever do that. In the digital world, we know how long you stick around. And as a result, since people can change the channel with a click, the bar for content has gone way up as we do these events. But we know how long people are sticking around, and that's what's so special about it.
You know, Denise and her team, as the hosts of this show, are going to know how long people watch this segment, and that knowing is powerful. It's as simple as, using a product like ThoughtSpot, you just ask a question, you know, what's the average view time by session, and boom, a chart pops up. You're gonna know what's working and what's not, and that's something you can take and act on in the future. And that's what our customers are doing. Snowflake and ThoughtSpot share a customer in Hulu, and they're tracking programs: what people are watching at home, how long they're watching, what they're watching next. They're able to do that in a super granular way and improve their content as a result. And that's the power of this new world we live in that's made the cloud and data so accessible to folks like us. >>Well, thank you for that. And I want to come back to that notion, to understand how you're bringing data into your marketing office. But I want to bring in Laura. Laura, Wipro partners with a lot of brands, a lot of companies around the world, thousands of partners; obviously Snowflake and ThoughtSpot are partners, too. How are you using data to optimize these co-marketing relationships? Specifically, what are the trends that you're seeing around things like customer experience? >>So, you know, we use data for all of our marketing decisions, our own as well as with our partners. And I think what's really been interesting about partner marketing data is that we can feed it back to our sales team, right? So it's very directional for them as well in their efforts moving forward. I think that's a place where, specific to partners, it's really powerful. We can also use our collective data to go out to customers to better effect. And then, regarding these trends, we just did a survey on the state of the intelligent enterprise.
We interviewed 300 companies, US and UK, and there were three statistics I thought were interesting and relevant to this. Only 22% of the companies that we interviewed felt that their marketing was where it needed to be from an automation standpoint. So lots of room for us to grow, right? Lots of space for us to play. And 61% of them believed that it was critical that they implement this technology to become a more intelligent enterprise. But when they ranked readiness by function, marketing came in sixth, right? So HR, R&D and finance were all ahead of marketing, which was followed by sales, you know. And then the final data point that I thought was interesting was that 40% agreed that while the technology was the most important thing, thought leadership was critical, you know? And I think that's where we marketers can really bring our tried and true experience to bear and merge it with this technology. >>Great, thank you. So, Denise, I've been getting the Kool-Aid injection this week around the Data Cloud. I've been pushing people, but now that I have the CMO in front of me, I want to ask about the Data Cloud and what it means specifically for customers, and what are some of the learnings, maybe, that you've experienced that can support some of the things that Laura and Scott were just discussing? >>Yeah, as Scott said before, right, he has 100 times more data than he ever had before. And again, if you look at all the companies we talk to around the world, it's not the amount of data that they have that is the problem; it's the ability to access that data. For most companies, that data is trapped in silos across the organization. It sits in data applications and systems of record. Some of that data sits with partners that you want access to. And that's really where the Data Cloud comes in. The Data Cloud is really mobilizing that data for you.
It brings all that data together for you in one place, so you can finally access it, provide ubiquitous access to everyone in your organization who needs it, and truly unlock its value. And from a marketing perspective, I mean, we are responsible for the customer experience, you know, that we provide to our customers. And if you have access to all the data on your customers, that's when you have that customer 360 that we've all been talking about for so many years. If you have all the data, you can truly, you know, look at their buying behaviors, put all those dots together, and create those exceptional customer experiences. You can do things such as the retailers do in terms of personalization, for instance, right? And those are the types of experiences our customers are expecting today. They are expecting a 100% personalized experience, you know, all the time. And if you don't have all the data, you can't really put those experiences together at scale. And that is really where the Data Cloud comes in. Again, the Data Cloud is not only about mobilizing your own data within your enterprise. It's also about having access to data from your partners, or extending access to your own data in a secure way to your partners within your ecosystems. >>Yeah, so I'm glad you mentioned a couple of things. I've been writing about this a lot, and particularly that customer 360 that we were dying for but haven't really been able to tap. I didn't call it the Data Cloud, I don't have the marketing gene, I had another sort of boring name for it, but I think there are, you know, similar vectors there. So I appreciate that. Scott, I want to come back to this notion of building data DNA in your marketing, you know, fluency, and how you put data at the core of your marketing ops. I've been working with a lot of folks in banking and manufacturing and other industries that are struggling to do this.
How are you doing it? What are some of the challenges that you can share, and maybe some advice for your peers out there? >>Yeah, sure. Well, you brought up this concept of data fluency, and it's an important one. There's been a lot of talk in the industry about data literacy, about being able to read data, but I think it's more important to be able to speak data, to be fluent. And as marketers, we're all storytellers, and when you combine data with storytelling, magic happens. So getting to data fluency is a great goal for us to have for all of the people in our companies. And to that end, I think one of the things that's happening is that people are hiring wrong, and they're making some mistakes. A couple of things come to mind, especially when I look at marketing teams that I'm familiar with. They're hiring a lot of data analysts and data scientists, and those folks are amazing, and every team needs them. But if you go too big on that, you do yourself a disservice. The second key thing is that you're basically giving your frontline folks, your marketing managers or people on the front lines, an excuse not to get involved with data. And I think that's a big mistake, because it used to be really hard, but with the technologies available to us now, these new consumer-like experiences for data analytics, anybody can do it. And so we as leaders have to encourage them to do it. And I'll give you just, you know, an example. I've got about 32 people on my marketing team, and I don't have any data analysts on my team; across our entire company, we have a couple of analysts and a couple of data engineers. And what's happening is the world is changing, where those folks are enablers: they architect the system, they bring in the different data sources.
Technologies like Snowflake have been so great at making it easier for people to pull data together and get access to it quickly. But those folks pull it together, and then simple things like, hey, I just want to see this weekly instead of monthly, shouldn't waste your expensive data science talent. You know, Gartner puts a stat out there that 50% of data scientists are doing basic visualization work. That's not a good use of their time. The products are easy enough now that everyday marketing managers can do that. And when you have a marketing manager come to you and say, you know, I just figured out this campaign, which looks great on the surface, is doing poorly from an ROI perspective, that's a magic moment. And so we all need to coach our teams to get there. And I would say, you know, lead by example. Give them an opportunity to access data and turn it into a story; that's really powerful. And then, lastly, praise people who do it; using it as something to celebrate inside our companies is a great way to drive this initiative. >>I love it. You talk about democratizing data, making it self-service, so people feel ownership. You know, Laura, Denise started talking about the ecosystem, and you're kind of the ecosystem pro here. How does the ecosystem help marketers succeed? Maybe you could talk about the power of many versus the resources of one. >>So, you know, I think it's a game changer, and it will continue to be. And I think it's really the next level for marketers to harness this power that's out there and use it. Um, you know, it's something that's important to us, but it's also something we're starting to see our customers demand. You know, we went from a one-size-fits-all solution to customers wanting to bring best-in-class to their organizations. We all need to be really agile and flexible right now.
And I think this ecosystem allows that. You know, you think about the power of Snowflake, Snowflake mining data for you, and then a ThoughtSpot really giving you the dashboard to have what you want, and then, of course, an implementation partner like a Wipro coming in and really being able to plug in whatever else you need to deliver. And I think it's really super powerful, and I think it gives us, you know, it just gives us so much to play with and so much room to grow as marketers. >>Thank you. Denise, why don't you bring us home? We're almost out of time here, but marketing: art, science, or both? What are your thoughts? >>Definitely both. I think that's the exciting part about marketing: it is a balancing act between art and science. Clearly, it's probably more science today than it used to be, but the art part is really about inspiring change. It's about changing people's behavior and challenging the status quo, right? That's the art part. The science part, that's about making the right decisions all the time, right? It's making sure we are truly investing in what's going to drive revenue for us. >>Guys, thanks so much for coming on theCUBE. Great discussion. Really appreciate it. Okay, and thank you for watching. Keep it right there: wall-to-wall coverage of the Snowflake Data Cloud Summit on theCUBE.
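The kind of question Scott describes, "what's the average view time by session," is at heart just a group-and-average over viewing events. The sketch below is an editor's illustration under assumed inputs, not ThoughtSpot's or Snowflake's actual interface; the event records and field names are hypothetical.

```python
from collections import defaultdict

def average_view_time_by_session(events):
    """Average seconds watched per session, from (session_id, seconds) pairs.

    The event shape is hypothetical; a real pipeline would read these
    records from an event store or warehouse table.
    """
    by_session = defaultdict(list)
    for session_id, seconds_watched in events:
        by_session[session_id].append(seconds_watched)
    # Mean watch time per session, keyed by session id
    return {sid: sum(secs) / len(secs) for sid, secs in by_session.items()}

# Hypothetical viewing records: (session_id, seconds watched)
events = [
    ("cmo-panel", 1200), ("cmo-panel", 600),
    ("keynote", 1800), ("keynote", 900), ("keynote", 300),
]
print(average_view_time_by_session(events))
# {'cmo-panel': 900.0, 'keynote': 1000.0}
```

In practice the grouping would run as a query against the event store; the point of the panel's discussion is that a question like this no longer requires a data scientist.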
SUMMARY :
I call it the CMO Power great to be here with you, David. It's changed a lot, you know, I suppose, of course, with all this data, but I wonder what you're experiencing And can What I'm personally most excited about is the real time access we have of data and cloud really brought to the modern marketer that you might not have had before And you know, it's worth giving an example. And I want to come back to that notion to understand how you're bringing data into your marketing And then, you know, regarding these trends, we just did a survey on I've been pushing people, but now that I have the CMO in front of me, I wanna ask about the Data Cloud and what it means And that's again if you look at all the companies we talked to around the world, What are some of the challenges that you can And I would say, you know, lead by example, you know, Laura, Denise powerful, and I think it gives us, you know, it just gives us so much to play with and so Denise, why don't you bring us home? But the art part is really about inspiring change. And thank you for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Scott | PERSON | 0.99+ |
Laura | PERSON | 0.99+ |
David | PERSON | 0.99+ |
Denise | PERSON | 0.99+ |
100 times | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
40% | QUANTITY | 0.99+ |
100 times | QUANTITY | 0.99+ |
300 companies | QUANTITY | 0.99+ |
61% | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
UK | LOCATION | 0.99+ |
22% | QUANTITY | 0.99+ |
US | LOCATION | 0.99+ |
Laura Langdon | PERSON | 0.99+ |
six | QUANTITY | 0.99+ |
COVID | OTHER | 0.98+ |
Denise Person | PERSON | 0.98+ |
today | DATE | 0.98+ |
Both | QUANTITY | 0.98+ |
Snowflake Data Cloud Summit | EVENT | 0.98+ |
second key | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
grams | QUANTITY | 0.98+ |
Data Cloud Summit | EVENT | 0.97+ |
Kool Aid | ORGANIZATION | 0.97+ |
5 | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
Gartner | ORGANIZATION | 0.97+ |
10% | QUANTITY | 0.96+ |
one place | QUANTITY | 0.95+ |
3 60 | OTHER | 0.94+ |
Thio | PERSON | 0.89+ |
a decade ago | DATE | 0.86+ |
one size | QUANTITY | 0.82+ |
about 32 people | QUANTITY | 0.81+ |
ThoughtSpot | ORGANIZATION | 0.81+ |
thousands of partners | QUANTITY | 0.8+ |
three phenomenal marketing pros | QUANTITY | 0.72+ |
Data Cloud | ORGANIZATION | 0.68+ |
Holden | ORGANIZATION | 0.63+ |
Cube | COMMERCIAL_ITEM | 0.6+ |
CMO Snowflakes | ORGANIZATION | 0.57+ |
H R | ORGANIZATION | 0.55+ |
Hulu | ORGANIZATION | 0.53+ |
couple | QUANTITY | 0.51+ |
officers | QUANTITY | 0.47+ |
Roh | PERSON | 0.44+ |
Mobilizing Data for Marketing - Transforming the Role of the CMO | Snowflake Data Cloud Summit