MarTech Market Landscape | Investor Insights w/ Jerry Chen, Greylock | AWS Startup Showcase S2 E3


 

>> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. MarTech is the focus, and this is all about the emerging cloud scale customer experience. This is season two, episode three of the ongoing series covering the exciting, fast-growing startups from the AWS cloud ecosystem, to talk about the future, what's available now, and where the action is. I'm your host, John Furrier. Today we're joined by CUBE alumni Jerry Chen, partner at Greylock. Jerry, great to see you. Thanks for coming on. >> John, thanks for having me back. I appreciate the welcome for season two as a, as a guest star. >> <laugh> You know, hey, season two, it's not a one and done, it's continued coverage. We've got the episodic CUBE-flix model going here. >> Well, you know, congratulations, the coverage on this ecosystem around AWS has been impressive, right? I think you and I have talked a long time about AWS and the ecosystem building. It just continues to grow. And so the coverage you did last season, and all the events of this season, is pretty amazing, from data to security to now marketing. So it's great to watch. >> And 12 years now theCUBE has been running. I remember 2013, when we first met you on theCUBE. You had just left VMware, just getting into the venture business. And we were just riffing on AWS; no one really knew how big it would be. But we kind of had a sense, and now it's happening. So now you start to see every vertical kind of explode with digital transformation and disruption, where you see new brands replace the incumbents, the old guard. And now MarTech is ripe for disruption, because web two has gone on to web 2.5, 3, 4, 5, cookies are going away, you've got more governance and privacy challenges, and there's a slew of ad tech baggage, but yet lots of new data opportunities. Jerry, this is a huge thing. What's your take on this whole MarTech cloud scale market? >> I think, to your point, John, first, the trends are correct. In the bad old days, or the good old days, MarTech was really about your webpage and then email. Email was the only channel, and the webpage was the only real estate and technology to care about. Fast forward 10 years, you have webpages, mobile apps, VR experiences, car experiences, your Alexa home experiences. Let's not even get to web three, web 18, whatever it is. Plus you've got text messages, WhatsApp, Messenger, email is still great, et cetera. So I think what we've seen is both an explosion in data and an explosion of channels. Sources of data have increased, and the routes where you can reach your customers, text, email, phone calls, et cetera, have exploded too. The previous generation created big companies in response: Eloqua and ExactTarget, which got acquired by Oracle and Salesforce, and then companies like MailChimp that got acquired as well, by Intuit. Now you're seeing a new generation of companies for this new stack. So I think it's exciting. >> Yeah. And you mentioned all those things about the different channels and stuff, but the key point is the generational shift going on now, not just the technical generation of platforms and tools; it's the people. They're younger. They don't do email. They have, you know, ProtonMail accounts, a zillion Gmail accounts just to get the freebies.
They'll do subscriptions, but not a lot. So the generational piece on the human side is huge. And then you've got the standards bodies throwing away things like cookies. So all of this makes for a complicated, messy situation. Out of this has to come a billion-dollar startup, in my mind. >> I think multiple billion dollars, but I think you're right in the sense of how we want to engage with a company or brand, either consumer brands or business brands. No one wants to pick up a phone anymore, right? Everybody wants to either chat or DM people on Twitter. So number one, the way we engage is different, both how, like chat or phone, and where, like on a mobile device, but also when: the moment when we need to talk to a company or brand, be it at the store when I'm shopping in real life, or in my car, or at the airport. We want to reach the brands, and the brands wanna reach us, at the point of decision, the point of support, the point of contact. And then you layer on top of that the playing field, John, of privacy and security, right? All these data silos in the cloud. The game has changed and become even more complicated for the startups. So the startups that are gonna win will, you know, collect all the data, keep it secure and private, but then reach your customers when and where they want, and how they want it. >> So I gotta ask you, because you had a great podcast published just this week, and Snowflake had their event going on around the data cloud, there's a new kind of SaaS platform vibe going on. You're starting to see it play out. And one of the things I noticed on your podcast with the president of HashiCorp, who was on, and people should listen to that podcast, it's on Greymatter, which is Greylock's podcast, plug for you guys, is he mentions the open source dynamic, right? And I like what he said: the software business has changed forever. Those are my words; he said infrastructure, but I'm saying software in general, more broadly. Infrastructure and software as a category is all open source. Won. Game over, no debate. Right? You agree? >> I think you said infrastructure specifically, but I would say open source has won, more or less, because open source is in every bit of software, right? From your operating system to your car to your mobile phone. Not necessarily open source as a business model, or whatever, we can talk about that, but open source as a way to build software, distribute software, and consume software has won, right? It is everywhere. So regardless of how you make money on it, how you build software, the open source community has won. >> Okay, so let's just agree. That's cool, I agree with that. Let's take it to the next level. I'm a founder starting a company to sell to big companies who pay. I gotta have a proprietary advantage. There's gotta be a way, and there is. I know you've talked about it, but I have my opinion: there needs to be a way to be proprietary in a way that allows for that growth, whether it's integration; it's not gonna be on software license, or maybe support, or a new open source model. But how do startups in MarTech, and this area in general, when they disrupt or change the category, get value creation going? What's your take on building? >> You can still build proprietary software on top of open source, right?
There are many companies out there. You know, a company called Rockset uses heavily open source technology, like RocksDB, under the hood, but they're running a cloud database that's proprietary. Snowflake, you talked about them today, is not an open source technology company, but they use open source software, I'm sure, under the hood. And then there are open source companies like Databricks. So let's not confuse the two: you can still build proprietary software; there are just components of open source wherever we go. So number one, you can still build proprietary IP. Number two, you can get proprietary data sources, right? So I think increasingly you're seeing companies fight for what I call the system of intelligence, by getting proprietary data to train your algorithms, to train your recommendations, to train your applications. You can still collect data that other competitors don't have. >> And then you can use the data differently, right? The system of intelligence. And when you apply that system of intelligence to the end user, you can create value, right? And ultimately, especially in marketing tech, at the highest level there's what we call the system of engagement, right? The chatbot, the mobile UI, the phone, the voice app, et cetera. If you own the system of engagement, be it Slack or the operating system for a phone, you can also win. So there are still multiple levels to play, John, and multiple ways to build proprietary advantage. You just gotta own the system of record, the system of intelligence, or the system of engagement. Easy, right? >> Oh, so easy. Well, the good news is the cloud scale, and the CapEx is funded there. I mean, look at Amazon, they've got a ton of open storage. You mentioned Snowflake, but they're getting proprietary value. So I need to ask you about MarTech in particular. That means it's a data business, which you pointed out, and we agree. MarTech will be about the data and the workflows. How do you get those workflows, what's changing, and how are these companies gonna be building? What's your take on it? Because it's gonna be one of those things where it might be innovation on a source of data, or how you handle two parties exchanging encrypted data sets. I don't know, maybe it's a special encryption tool; we don't know what it is. What's your outlook on this area? >> I think that last point you just said is super interesting, super genius: it's integration of multiple data sources. So I think, number one, if it's a data business, do you have proprietary data? Number two, with the data you do have, proprietary or not, how do you enrich it? Do you enrich the data with a public data set or a third-party data set? This could be cookies; it could be Dun & Bradstreet or ZoomInfo information. How do you enrich the data? Number three, do you have machine learning models or some other IP, so that once you've collected the data and enriched the data, you know what to do with it? And then number four, once you have that model of the data, the customer, or the business, what do you do with it? Do you email? Do you do a text?
So like I said before, it was a website to an email go to website. You know, we have a cookie fill out a form. Yeah. I send you an email later. I think now you, you can't just do a website to email, it's a website plus mobile apps, plus, you know, in real world interaction to text message, chat, phone, call Twitter, a whatever, you know, it's >>Like, it's like, they're playing checkers in web two and you're talking 3d chess. <laugh>, I mean, there's a level, there's a huge gap between what's coming. And this is kind of interesting because now you mentioned, you know, uh, machine learning and data, and AI is gonna factor into all this. You mentioned, uh, you know, rock set. One of your portfolios has under the hood, you know, open source and then use proprietary data and cloud. Okay. That's a configuration, that's an architecture, right? So architecture will be important in terms of how companies posture in this market, cuz MarTech is ripe for innovation because it's based on these old technologies, but there's tons of workflows, but you gotta have the data. Right. And so if I have the best journey map from a client that goes to a website, but then they go and they do something in the organic or somewhere else. If I don't have that, what good is it? It's like a blind spot. >>Correct. So I think you're seeing folks with the data BS, snowflake or data bricks, or an Amazon that S three say, Hey, come to my data cloud. Right. Which, you know, Snowflake's advertising, Amazon will say the data cloud is S3 because all your data exists there anyway. So you just, you know, live on S3 data. Bricks will say, S3 is great, but only use Amazon tools use data bricks. Right. And then, but on top of that, but then you had our SaaS companies like Oracle, Salesforce, whoever, and say, you know, use our qua Marketo, exact target, you know, application as a system record. And so I think you're gonna have a battle between, do I just work my data in S3 or where my data exists or gonna work my data, some other application, like a Marketo Ella cloud Z target, um, or, you know, it could be a Twilio segment, right. Was combination. So you'll have this battle between these, these, these giants in the cloud, easy, the castles, right. Versus, uh, the, the, the, the contenders or the, or the challengers as we call >>'em. Well, great. Always chat with the other. We always talk about castles in the cloud, which is your work that you guys put out, just an update on. So check out greylock.com. They have castles on the cloud, which is a great thesis on and a map by the way ecosystem. So you guys do a really good job props to Jerry and the team over at Greylock. Um, okay. Now I gotta ask kind of like the VC private equity sure. Market question, you know, evaluations. Uh, first of all, I think it's a great time to do a startup. So it's a good time to be in the VC business. I think the next two years, you're gonna find some nice gems, but also you gotta have that cleansing period. You got a lot of overvaluation. So what happened with the markets? So there's gonna be a lot of M and a. So the question is what are some of the things that you see as challenges for product teams in particular that might have that killer answer in MarTech, or might not have the runway if there's no cash, um, how do people partner in this modern era, cuz scale's a big deal, right? Mm-hmm <affirmative> you can measure everything. So you get the combination of a, a new kind of M and a market coming, a potential growth market for the right solution. 
Again, value's gotta be be there. What's your take on this market? >>I, I, I think you're right. Either you need runway, so cash to make it through, through this next, you know, two, three years, whatever you think the market Turmo is or two, you need scale, right? So if you're at a company of scale and you have enough data, you can probably succeed on your own. If not, if you're kind of in between or early to your point, either one focus, a narrower wedge, John, just like we say, just reduce the surface area. And next two years focus on solving one problem. Very, very well, or number two in this MarTech space, especially there's a lot of partnership and integration opportunities to create a complete solution together, to compete against kind of the incumbents. Right? So I think they're folks with the data, they're folks doing data, privacy, security, they're post focusing their workflow or marketing workflows. You're gonna see either one, um, some M and a, but I definitely can see a lot of Coopers in partnership. And so in the past, maybe you would say, I'm just raise another a hundred million dollars and do what you're doing today. You might say, look, instead of raising more money let's partner together or, or merge or find a solution. So I think people are gonna get creative. Yeah. Like said scarcity often is good. Yeah. I think forces a lot more focus and a lot more creativity. >>Yeah. That's a great point. I'm glad you brought that up up. Cause I didn't think you were gonna go there. I was gonna ask that biz dev activity is going to be really fundamental because runway combined with the fact that, Hey, you know, if you know, get real or you're gonna go under is a real issue. So now people become friends. They're like, okay, if we partner, um, it's clearly a good way to go if you can get there. So what advice would you give companies? Um, even most experienced, uh, founders and operators. This is a different market, right? It's a different kind of velocity, obviously architectural data. You mentioned some of those key things. What's the posture to partner. What's your advice? What's the combat man manual to kind of compete in this new biz dev world where some it's a make or break time, either get the funding, get the customers, which is how you get funding or you get a biz dev deal where you combine forces, uh, go to market together or not. What's your advice? >>I, I think that the combat manual is either you're partnering for one or two things, either one technology or two customers or sometimes both. So it would say which partnerships, youre doing for technology EG solution completers. Like you have, you know, this puzzle piece, I have this puzzle piece data and data privacy and let's work together. Um, or number two is like, who can help you with customers? And that's either a, I, they can be channel for you or, or vice versa or can share customers and you can actually go to market together and find customers jointly. So ideally you're partner for one, if not the other, sometimes both. And just figure out where in your life cycle do you need? Um, friends. >>Yeah. Great. My final question, Jerry, first of all, thanks for coming on and sharing your in insight as usual. Always. Awesome final question for the folks watching that are gonna be partnering and buying product and services from these startups. 
There's a select few great ones here, and obviously every other episode as well, and you've got a bunch you're investing in. It's actually a good market for the ones that are lean, companies that are lean and mean and have value, and the cloud scale does provide that. So a lot of companies are getting it right, and they're gonna break through, so they're clearly gonna be getting customers. On the buyer side, how should buyers be looking through the lens right now and evaluating these companies? What should they look for? And they like to take chances when they see that value. So it's not so much that the startups gotta be vetted, but, you know, how do buyers know the winners from the pretenders? >> You know, I think the customers are always smart. In the past, in MarTech especially, they often had a budget to experiment with. Now the customers, the buyers of technology, are looking for a hard ROI, a return on investment. Before, I think they might have experimented more, but now they're saying, hey, are you gonna help me save money or increase revenue, or move some hardcore metric that they care about? So I think the startups that actually have a strong ROI, like saving money or increasing revenue, and can point empirically to how they do that, will rise to the top of the MarTech landscape. And customers will see that. The customers are smart, right? They're savvy buyers. They can smell good from bad, and they're gonna see the strong ROI. >> Yeah. And the other thing too I like to point out, and I'd love to get your reaction real quick, is that a lot of these companies have DNA in open source, or they have some community track record, where community is now part of the vetting. I mean, are they really good people? >> Yeah, I think open source, like you said, and the community in general, especially all these communities that live on Slack or Discord or something else. For sure, just going through all those forums, Slack communities, or Discord communities, you can see what's a good product versus a bad one. Don't go to the other sites; these communities will tell you what's working. >> Well, we've got a Discord channel on theCUBE now. We had 14,000 members; now it's down to six, losing people left and right. We need a moderator to get on. If you know anyone on Discord, or anyone watching wants to volunteer to be the CUBE Discord moderator, we could use some help there. Love Discord. Jerry, great to see you. Thanks for coming on. What's new at Greylock? What are some of the things happening? Give a quick plug for the firm. What are you guys working on? I know there have been some cool things happening, new investments, people moving. >> Yeah, look, we're Greylock Partners, a seed and Series A firm. I focus on enterprise software, and I have a team with me that also does consumer investing as well as crypto investing, like all firms. But we're seed, Series A, occasionally later-stage growth. So if you're interested, find me on Twitter or at greylock.com. Thank you, John. >> Great stuff, Jerry. Thanks for coming on. This is theCUBE's presentation of the AWS Startup Showcase. MarTech is the series this time: emerging cloud scale customer experience, where the integration and the data matter. This is season two, episode three of the ongoing series covering the hottest cloud startups from the AWS ecosystem. I'm John Furrier, thanks for watching.

Published Date : Jun 29 2022


Rachel Obstler, Heap | AWS Startup Showcase S2 E3


 

>> Hello, everyone. Welcome to theCUBE presentation of the AWS Startup Showcase, MarTech, emerging cloud scale customer experience. This is season two, episode three of the ongoing series covering the exciting startups from the AWS ecosystem, talking about the data analytics, all the news, and all the hot stories. I'm John Furrier, your host of theCUBE. And today we're excited to be joined by Rachel Obstler, VP of product at Heap, Heap.io, here to talk about from what to why, the future of digital insights. Great to see you, thanks for joining us today. >> Thanks for having me, John. Thanks for having me back. >> Well, we had a great conversation prior to the event here, a lot going on, you guys acquired Auryc. You kind of teased that out last time. Talk about this, the news here, and why is it important? And first give a little setup on Heap and then the acquisition with Auryc. >> Yeah. So Heap is a digital insights platform. So as you mentioned, it's all about analytics, and so Heap really excels at helping you understand what your users and customers are doing in your digital application at scale. So when it comes to Auryc, what we really saw was a broken workflow. Maybe I would even call it a broken market, where a lot of customers had an analytics tool like Heap. So they're using Heap on one hand to figure out what is happening at scale with their users. But on the other hand, they were also using a session replay tool separately, to look at individual sessions and see exactly what was happening. And no one was very effective at using these tools together. They didn't connect at all. And so as a result, neither one of them could really be fully leveraged. And so with this acquisition, we're able to put these two tools together, so that users can both understand the what at scale, and then really see the why, immediately, together in one place. >> You know, I love that word why, because there's always that, you know, that famous motivational video on the internet, "you got to know your why", you know, it's a very motivational thing, but now you're getting to more practicality. What and why is the lens you want, right? So I totally see that. And again, you teased that out in our last interview we did. But I want to understand what's under the covers of the acquisition. What was the big thesis behind it? Why the joint forces? What does this all mean? Why is this so important, to understand this new what and why, and the acquisition specifically? >> Yeah, so let me give you an example of a couple of use cases, that's really helpful for understanding this. So imagine that you are a product manager or maybe a growth marketer, but you're someone who owns a digital experience. And what you're trying to do, of course, is make that digital experience amazing for your users so that they get value. And that may mean that they're using it more, it may mean that new features are easily discoverable, that you can upsell things on your own. There are all sorts of different things that may mean, but it's all about making it easy to use, discoverable, understandable, and as self-service as possible too. And so most of these digital builders, we call 'em digital builders sometimes, are trying to figure out when the application is not working the way that it should be working, where people are getting stuck, where they're not getting the value, and figure out how to fix that.
And so one really great use case is, I just want to understand en masse, like, let's say I have a flow, where are people dropping off? Right, so I see that I have a four step funnel and between step three and four people are dropping off. Heap is great for getting very detailed on exactly what action they're taking, where they're dropping off. But then the second you find what that action is, quantitatively, you want to watch it, you want to see what they did exactly before it. You want to see what they did after it. You want to understand why they're getting stuck, what they're confused at, are they mousing over two things, like you kind of want to watch their session. And so what this acquisition allows us to do is to put those things together seamlessly: you find the point of friction, you watch a bunch of examples, very easily. In the past, this would take you at least hours, if you could do it at all. And then there are other use cases in the other direction. So I think of it as the max to the min, or maybe it's the macro to micro. And then there's the other direction as well, the micro to macro, which is you have one user that had a problem. Maybe they send in a support ticket. Well, you can validate the problem. You can watch it in the session, but then you want to know, did this only happen to them? Did this happen to a lot of users? And is this really worth fixing, because all these customers are having the same problem? That's the micro to macro flow that you can do as well. >> Yeah. That's like the quantitative and qualitative, the what and the why. I truly see the value there and I liked the way you explained that, good call out. The question I have for you, because a lot of people have these tools. "I got someone who does that." "I got someone over here that does the quantitative." "I don't need to have one company do it, or do I?" So the question I have for you, what does having a single partner or vendor providing both the quantitative and the qualitative mean for your customers? >> So it's all because now it's immediate. So today with the two tools being separate, you may find something quantitatively. But then to find the sessions that you want to watch that are relevant to that quantitative data point is very difficult. At the least it takes hours to do so. And a lot of times people just give up and they don't bother. The other way is also true: you can watch sessions, you can watch as many sessions as you want, you can spend hours doing it, and you may never find anything of interest, right? So it just ends up being something that users don't do. And actually we've interviewed a lot of customers, and they have a lot of guilt about this. A lot of product managers feel like they should be spending all this time, but they just don't have the time to spend. And so it not only brings them together, but it brings them together with immediacy. So you can immediately find the issue, find exactly where it is, and watch it. And this is a big deal, because if you think about, I guess, today's economic conditions, you don't have a lot of money to waste. You don't have a lot of time to waste. You have to be very impactful with what you're doing and with your spending of development resources. >> Yeah, totally. And I think one of the things is that immediacy is key, because it allows you to connect dots faster. And we have the aha moments all the time.
If you miss that, the consequences can be quantified in a bad product experience and lost customers. So, totally see that. Zooming out now, I want to get your thoughts on this, cause you're bringing, we're going down this road of essentially every company is digital now, right? So digitization, digital transformation. What do you want to call it? Data is digital. This video is an experience. It's also data as well. You're talking, we're going to share this and people are going to experience that. So every website that's kind of old school is now becoming essentially a digital native application or eCommerce platform. All the things that were once preserved for the big guys, the hyper-scalers and the categories, the big budgets, now are coming down to every company. Every company is a digital company. What challenges do they have to transition from? I got a website, I got a marketing team. Now I got to look like a world class, product, eCommerce, multifaceted, application with developers, with change, with agility? >> Well, so I think that last thing you said is a really important part of it, the agility. So, these products, when you're going from a, just a website to a product, they're a lot more complex. Right? And so maybe I can give an example. We have a customer, it's an insurance company. So they have this online workflow. And if you can imagine signing up for insurance online, it's a pretty long complicated workflow. I mean, Hey, better to do it online than to have to call someone and wait on, you know, on the phone. And so it's a good experience, but it's still fraught with like opportunities of people getting stuck and never coming back. And so one of the things that Heap allowed this customer to do was figure out something that wasn't working in their workflow. And so if you think about traditional analytics tools, typically what you're doing is you're writing tracking code and you're saying, "Hey, I'm going to track this funnel, this process." And so maybe it has, you know, five different forms or pages that you have to go through. And so what you're doing when you track it is you say, did you submit the first one? Did you submit the second one? Did you submit the third one? So you know, like where they're falling off. You know where they're falling off, but you don't know why, you don't know which thing got them stuck because each one of these pages has multiple inputs and it has maybe multiple steps that you need to do. And so you're completely blind to exactly what's happening. Well, it turned out because Heap collects all this data, that on one of these pages where users were dropping off, it was because they were clicking on a FAQ, there was a link to a FAQ, and because this was a big company, the FAQ took them to a completely different application. Didn't know how to get back from there and they just lost people. And imagine if you are doing this with traditional means today, right? You don't have any visibility into what's happening on that page, you just know that they fell off. You might think about what do I do to fix this? How do I make this flow work better? And you might come up with a bunch of ideas. One of your ideas could be, let's break it into multiple pages. Maybe there's too much stuff on this page. One of your ideas may have been, let's try a FAQ. They're getting stuck, let's give them some more help. That would be a very bad idea, right? Because that was actually the reason why they were leaving and never coming back. 
So the point I'm making is that if you don't know exactly where people are getting stuck and you can't see exactly what is happening, then you're going to make a lot of very bad decisions. You're going to waste a lot of resources trying things that make no sense. It is hard enough as a digital builder, and all the product managers and growth marketers and marketers out there can attest to this, it's hard enough when you know exactly what the problem is to figure out a good solution. Right? That's still hard. But if you don't know the problem, it's impossible.
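As an editorial aside, the handoff Rachel keeps describing, from a quantitative drop-off point to the handful of sessions worth watching, can be pictured with a small, hypothetical TypeScript sketch. The types and functions below are invented for illustration only and are not Heap's actual API.

```typescript
// Illustrative only: not Heap's API, just the shape of the quant-to-qual handoff.

interface FunnelStep {
  name: string;
  entered: number;          // users who reached this step
  completed: number;        // users who finished it
  stalledUserIds: string[]; // users who reached it but never completed it
}

interface SessionReplay {
  userId: string;
  replayUrl: string; // link to watch the recorded session
}

// Quantitative side: which step loses the most users? (assumes a non-empty funnel)
function worstStep(funnel: FunnelStep[]): FunnelStep {
  return funnel.reduce((worst, step) =>
    step.entered - step.completed > worst.entered - worst.completed ? step : worst
  );
}

// Qualitative side: pull replays for exactly the users who stalled at that step.
function replaysForStep(step: FunnelStep, allReplays: SessionReplay[]): SessionReplay[] {
  const stalled = new Set(step.stalledUserIds);
  return allReplays.filter((r) => stalled.has(r.userId));
}

// Usage: go straight from the biggest drop-off point to watchable sessions.
// const step = worstStep(signupFunnel);
// const sessions = replaysForStep(step, replays);
```

The point of the sketch is only the joining step: once both sides share a user identity, the jump from "where people drop off" to "watch exactly what they did" is a filter, not an hours-long hunt.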
We by default will not collect information, like what do people put into forms, right? Because that's a obvious source of PII. The other thing is that, there's just so much data. So you kind of alluded to this, with this idea of data science. So first of all, you're collecting data compliantly, you're making sure that you have all the data of what your user actions are doing, compliantly, but then it's so much data that it like, how do you know where to start? Right? You want to know, you want to get to that specific point that users are dropping off, but there's so many different options out there. And so that's where Heap is applying data science, to automatically find those points of friction and automatically surface them to users, so that you don't have to guess and check and constantly guess at what the problem is, but you can see it in the product surface right for you. >> You know, Rachel, that's a great point. I want to call that out because I think a lot of companies don't underestimate, they may underestimate what you said earlier, capturing in compliance way means, you're opting in to say, not to get the data, to unwind it later, figure it out. You're capturing it in a compliant way, which actually reduces the risk and operational technical debt you might have to deploy to get it fixed on compliance. Okay, that's one thing, I love that. I want to make sure people understand that value. That's a huge value, especially for people that don't have huge teams and diverse platforms or other data sources. The other thing you mentioned is owning their own data. And that first party data is a strategic advantage, mainly around personalization and targeted customer interaction. So the question is, with the new data, I own the data, you got the comp- capture with compliance. How do you do personalization and targeted customer interactions, at the same time while being compliant? It just seems, it seems like compliance is restrictive and kind of forecloses value, but open means you can personalization and targeted interactions. How do you guys connect the dots there by being compliant, but yet being valuable on the personalization and targeted? >> Well, it all depends on how the customer is managing their information, but imagine that you have a logged in user, well, you know, who the logged in user is, right? And so all we really need is an ID. Doesn't have, we don't need to know any of the user information. We just need an ID and then we can serve up the information about like, what have they done, if they've done these three actions, maybe that means that this particular offer would be interested to them. And so that information is available within Heap, for our customers to use it as they want to, with their users. >> So you're saying you can enable companies to own their data, be compliant and then manage it end to end from a privacy standpoint. >> Yes. >> That's got to be a top seller right there. >> Well, it's not just a top seller, it's a necessity. >> It's a must have. I mean, think about it. I mean, what are people, what are the, what are people who don't do this? What do they face? What's the alternative? If you don't keep, get the Heap going immediately, what's the alternative? I'm going through logs, I got to have to get request to forget my data. All these things are all going on, right? Is, what's the consequence of not doing this? >> Well, there's a couple consequences. 
So one is, and I kind of alluded to it earlier that, you're just, you're blind to what your users are doing, which means that you're making investments that may not make sense, right? So you can, you can decide to add all the cool features in the world, but if the customers don't perceive them as being valuable or don't find them or don't understand them, it doesn't, it doesn't serve your business. And so, this is one of like the rule number one of being a product manager, is you're trying to balance what your customers need, with what is also good for your business. And both of those have to be in place. So that's basically where you are, is that you'll be making investments that just won't be hitting the mark and you won't be moving the needle. And as I mentioned, it's more important now in this economic climate than ever to make sure that the investments you're making are targeted and impactful. >> Yeah and I think the other thing to point out, is that's a big backlash against the whole, Facebook, you're the product, you're getting used, the users being used for product, but you're, you guys have a way to make that happen in a way that's safe for the user. >> Yes. Safe and compliant. So look, we're all about making sure that we certainly don't get our customers into trouble and we recommend that they follow all compliance rules, because the last thing you want to be is on the, on the wrong side of a compliance officer. >> Well, there's also the user satisfaction problem of, and the fines. So a lot going on there, great product. I got to ask you real quick before we kind of wrap up here. What's the reaction been to the acquisition? Quantitative, qualitative. What's been the vibe? What are some, what are people saying about it? >> We've got a lot of interest. So, I mentioned earlier that this is really a broken workflow in the market. And when users see the two products working together, they just love it because they have not been able to leverage them being separate before. And so it just makes it so much easier for these digital builders to figure out, what do I invest in because they know exactly where people are having trouble. So it's been really great, we've had a lot of reach outs already asking us how they can use it, try it, not quite available yet. So it's going to be available later this summer, but great, great response so far. >> Awesome. Well, I love the opportunity. Love the conversation, I have to ask you now, looking forward, what does the future look like for companies taking advantage of your platform and tool? What can they expect in terms of R and D investments, area moves you're making? You're the head of product, you get the keys to the kingdom. What's the future look like? What's coming next? >> Yeah, so other than pulling the qual and the quant together, you actually hinted at it earlier when you're asking me about data science, but continuing to automate as much of the analysis as we can. So, first of all, analysis, analytics, it should be easy for everyone. So we're continue to invest in making it easy, but part of making it easy is, like we can automate analysis. We can, we can see that your website has a login page on it and build a funnel for you automatically. So that's some of the stuff that we're working on, is how do we both automate getting up to speed and getting that initial analysis done easily, without any work. And then also, how do we automate more complex analysis? 
So, typically a lot of companies have a data science team, and they end up doing a lot of analysis that's a little bit more complex. I'm not saying data science teams will go away; they will be around forever. There's tons of very complex analysis that they're probably not even getting time to do. We're going to start chipping away at that, so we can help product managers do more and more of that self-service and then free up the data science team to do even more interesting things. >> I really like how you use the words product managers, product builders, digital builders, because while I've got you, I want to get your thought on this, because it's a real industry shift. You're talking about it directly here, about websites going to eCommerce. CMOs, the C-suite, they generally observe that websites are old technology but not going away, because the next level of abstraction builds on top of it. What are the new capabilities? Because for the CMOs and the C-suites and the product folks out there, they're not building webpages, they're building applications. So what is it about this new world that's different from the old web architecture? How would you talk to a CMO or a leader when they ask, what's this new opportunity to take my website, 'cause maybe it's not getting enough traffic and people are consuming out in the organic channels? What's the new expectation, and what does the new product manager environment look like, if it's not the web, so to speak? >> Well, there's a couple things. So one is, and you alluded to it a bit, the websites are also getting more complex and you need to start thinking of your website as a product. Now, it may not be the product that you sell, but for eCommerce it's the place where you get access to the product, and for B2B SaaS it is the window to the product. It's a place where you can learn about the product. And you need to think about not just what pieces of content are being used, but you need to understand the user flow through the application. So that's how it's a lot more like a product. >> Rachel, thanks so much for coming on theCUBE here for this presentation. Final word, put a plug in for the company. What are you guys up to? What are you looking for? Take a minute to explain what's going on. How do people contact you? Give us the value proposition; put a plug in for the company. >> Yeah, well, if you want to up-level your product experience or website experience, and you want to be able to drive impact quickly, try Heap. You can go to Heap.io, you can try it for free. We have a free trial, we have a free product even. And then if you have any questions, or you want to talk to a live person, you can do that too, at sales@Heap.io. >> Rachel, thanks so much. Customer scale experiences with the cloud. This is season two, episode three of the ongoing series. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Jun 29 2022


Michelle Lerner, Branch.io | AWS Startup Showcase S2 E3


 

(gentle music) >> Hey everyone. Welcome to theCUBE's coverage of the AWS Startup Showcase. Season two, episode three. This is about MarTech, emerging cloud scale customer experience. This is our ongoing series that you know and love hopefully that feature a great number of AWS ecosystem partners. I'm your host, Lisa Martin. Got a great guest here from Branch. Michelle Lerner joins me, the senior director of business development. She's going to be talking about Branch but also about one of your favorite brands, Peet's, yep, the coffee place, and how they supercharged loyalty and app adoption with Branch. Michelle, it's great to have you on the program. >> Yeah. Great to be here. Thank you so much for having me. >> Tell us a little bit about Branch, what you guys do for the modern mobile marketer. >> Yeah, absolutely. So you can think about Branch as a mobile linking platform. So what that means is we offer a seamless deep linking experience and insightful campaign measurement across every single marketing channel and platform on mobile. We exist so that we can break down walled gardens to help our customers engage with their customers in the most optimal way across any device and from every marketing channel. Our products are specifically designed to help create an amazing user experience, but also provide full picture holistic downstream measurement across any paid, owned, and earned channels so that brands can actually see what's working. So what that really means is that we make it really easy to scale our links across every single marketing channel, which then route the users to the right place at any device through even past install so that they can get to the context that they expect for a seamless experience. We then provide that cross channel analytics back to the brand so that they could see what's working and they can make better business decisions. So kind of summing it up, our industry leading mobile linking actually powers those deep links, also supports that measurement so that brands can build a sophisticated experience that actually delight their users but also improve their metrics and conversion rates. >> Those two things that you said are key. We expected to be delighted with whatever experience we're having and we also want to make sure, and obviously, the brands want to make sure that they're doing that but also that from an attribution perspective, from a campaign conversion perspective, that they can really understand the right tactics and the right strategic elements that are driving those conversions. That's been a challenge for marketers for a long time. Speaking of challenges, we've all been living through significant challenges. There's no way to say it nicely. The last two years, every industry completely affected by the pandemic talk. We're going to talk about Peet's Coffee. And I want to understand some of the challenges that you saw in the quick service restaurant or QSR industry at large. Talk to me about those industry challenges and then we'll dig into the Peet's story. >> Yeah, absolutely. So obviously the pandemic changed so much in our lives whether it's going to work or commuting or taking our kids to school or even getting our morning coffee. So when you think about Peet's, specifically within the QSR industry, they knew that they needed to innovate in order to make sure that they could provide their customers with their daily cups of coffee in a really safe and effective way. 
So they thought really quickly on their feet, they engaged us at Branch to help launch their order ahead messaging across their online and offline channels. They really wanted to maintain their commitment to an excellent customer experience but in a way that obviously would be safe and effective. >> That was one of the things that I missed the very most in the very beginning of the pandemic was going to my local Peet's. I missed that experience. Talk me about, you mentioned the online and offline, I'm very familiar with the online as an app user, mobile app user, but what were some of the challenges that they were looking to Branch to resolve on the offline experiences? People were queuing outside or for those folks that were they trying to get folks to convert to using the mobile app that maybe weren't users already? What was that online and offline experience? What were some of the challenges they were looking to resolve? >> Yeah, absolutely. The modern marketer is really both, like you said, online and offline, there is a heavy focus within the app and Peet's kind of wanted to bridge those two by pushing users into the app to provide a better experience there. So what they ended up doing was they used our deep linking capabilities to seamlessly route their customers to their loyalty program and their rewards catalog and other menu offerings within the app so that they could actually get things done in real time, but also in real time was the ability to then measure across those different campaigns so that they had visibility, Peet's, into kind of the way that they could optimize that campaign performance but also still give that great experience to their users. And they actually saw higher loyalty adoption, order values, and attributed purchases when they were able to kind of see in real time where these users were converting. But another thing that we're actually seeing across the board and Peet's did a great job of this was leveraging Branch power QR codes where we are seeing like the rebirth of the QR code. They're back, they're here to stay. They actually used that across multiple channels. So they used it with their in-store signage. You might have even seen it on their to go cups, coffee cards that were handed out by baristas. They were all encouraging customers to go order ahead using the Peet's coffee app. But that was kind of just the beginning for them. The creation of unique links for those QR codes actually spread for them to create Branch links across everything from emails to ads on Instagram. So before long, most of Peet's retail marketing were actually Branch links just because of the ease of creation and reliability, but more so again, going back to that customer experience, it really provided that good experience for the customers to make sure that they were getting within the mobile app so that they can take action and order their coffee. Another way that Branch kind of bridges the different platforms is actually between mobile web and app. Peet used Branch Journeys and that's a product of ours. It's a way that they can convert their mobile web users into app users. So they used deferred deep links with the ultimate goal of then converting those users into high value app users. So the Peet's team actually tested different creative and interstitials across the mobile site which would then place those users into the key pages, like either the homepage or the store locator, or the menu pages within the app. 
So that also helped them kind of build up not just their mobile app ordering but also their delivery business, so they could drive new trials of seasonal beverages. They could pair them with a free delivery offering. So they knew that they were able to leverage that at scale across multiple initiatives. >> I love those kinds of stories where it's kind of like a land and expand, where there was obviously a global massive problem. They saw that, recognized that our customers are still going to be as demanding, maybe even more than they were before, with "I want my coffee, I want it now." You mentioned real time. I think one of the things we learned during the pandemic is access to real-time data isn't a nice to have anymore. We expect it as consumers even in our business lives, but the ability to be able to measure, course correct, but then see, wow, this is driving average order value up, we're getting more folks using our mobile app, maybe using delivery. Expanding the usage of Branch across what we're doing in marketing can really help transform our marketing organization and the business at the brand level. >> Absolutely. And it also helps protect that brand loyalty. Because like you said, we as consumers expect that brands are going to kind of follow us where we are in our life cycle as consumers, and if you don't do that, then you're going to be left in the dust unfortunately. >> I think one of the memories that will always stick with me, Michelle, during the last couple years is that first cup of Peet's that I didn't have to make at home myself. Just finally getting the courage to go back in, use the app, go in there, but oh man, that was probably the best taste of coffee I probably will ever have. You mentioned some of the products, you mentioned Journeys, and that allows them to do AB testing, looking at different CTAs, being able to kind of course correct and adjust campaigns in real time. >> Yeah, absolutely. So Journeys, what it does is it's basically a banner or a full page interstitial that is populated on the mobile web. So if you go to, let's say, Peets.com, you could get served as a user different creative depending on where you are location-wise; you could be in the store, maybe there's a promotion. So it's triggered by all these different targeting capabilities. And so what that does is it takes me as a user, I can click that and go into the app where, like we said before, we have higher order value, higher lifetime value of a customer. And all my credit card information is saved. It just makes it so much more seamless for me to convert as a user within the app. And obviously Peet's likes that as well because then their conversion rates are actually higher. There's also kind of fun ways to play around with it. So if I am already a loyal customer and I have the app, you probably would target different creative for me than you would for someone who doesn't have the app. So you could say, hey, download our app, get $5 off of your next mobile order. Things like that you could play around with, and you can see it really does help increase that loyalty. They're also experimenting with geotargeted Journeys in different key markets with different Peet's locations, and that was helping ultimately get their reinstalls growing. So for customers who maybe had the app before but needed to reinstall it because now there's such a bigger focus, they saw it both on the acquisition and the re-engagement side as well.
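To make the linking mechanics described above concrete, here is a minimal sketch of how a deferred deep link and a Journeys-style interstitial decision might work conceptually. It is a hypothetical illustration, not Branch's actual API: the link format, function names, and targeting rules are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Visitor:
    """A mobile-web visitor hitting a marketing link (hypothetical model)."""
    has_app_installed: bool
    is_loyalty_member: bool
    near_store: bool

def build_link(campaign: str, destination: str) -> str:
    """Create a tracking link that carries campaign context into the app.
    A real linking platform generates and resolves these for you; here it is
    just a URL with query parameters for illustration."""
    return f"https://example.app.link/{destination}?campaign={campaign}"

def choose_interstitial(visitor: Visitor) -> str:
    """Pick which creative a Journeys-style banner would show on mobile web."""
    if not visitor.has_app_installed:
        return "Download our app and get $5 off your next mobile order"
    if visitor.near_store:
        return "You're near a store: order ahead in the app and skip the line"
    if visitor.is_loyalty_member:
        return "Open the app to check your rewards"
    return "Open the app to see this week's seasonal drinks"

def route_click(visitor: Visitor, link: str) -> str:
    """Route the click: straight into the app if installed, otherwise send the
    user to the app store and defer the deep link until first launch."""
    if visitor.has_app_installed:
        return f"open_app:{link}"
    return f"app_store_then:{link}"  # deferred deep link

if __name__ == "__main__":
    link = build_link(campaign="qr_togo_cup", destination="menu/seasonal")
    new_user = Visitor(has_app_installed=False, is_loyalty_member=False, near_store=True)
    print(choose_interstitial(new_user))  # download prompt with the $5 offer
    print(route_click(new_user, link))    # app store first, then the deep link
```

The point of the sketch is the routing decision, not the strings: the same link resolves to the right in-app page whether or not the app was installed at click time, which is what makes the web-to-app handoff feel seamless.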
>> So Branch has been pretty transformative, in my estimation, not just to Peet's marketing, but to Peet's as a business. I'm hearing customer loyalty, revenue obviously impacted, brand loyalty, brand reputation. These are things that really kind of boil up to the top of the organization. So we're not just talking about benefits to the marketing and the sales folks. These are the overall massive business outcomes that you guys are enabling organizations like Peet's to generate. >> Yeah, definitely. And that's kind of what we tell our customers when they come to Branch. We want them to think about what their overall business objectives are, versus thinking just campaign by campaign. Okay, that's fine, but ultimately what are we trying to achieve? How could we help the bottom line? And then how can we also kind of help integrate with other mobile marketing technology or the modern tech stack that they're using? How do we integrate into that and actually provide not just a seamless experience for their end user, but for their marketing orgs, their product orgs, whoever's kind of touching the business as well? >> Have you noticed along those lines in the last couple of years, as things like customer delight, seamless experience, the ability to translate, if I start on my iPad and I go to my laptop and then I finish a transaction on my phone, have you noticed your customer conversations increasing up to the C-suite level? Is this much more of a broad organizational objective around we've got to make sure that we have a really strong digital user experience? >> Yeah, absolutely. Like we were talking about before, it really does affect the bottom line when you're providing a great experience. With Branch being a mobile linking platform, our links just work. We outperform everybody else in the space, and it might sound really simple, okay, a link is working, getting me from point A to point B, but doing it the right way and being consistent actually will increase performance over time of all these campaigns. So in addition to providing that experience, you're seeing those key business results every single time. >> Talk about attribution for a minute, because I've been in marketing for a long time in the tech industry. And one of the challenges is always that we want to know what lever the customer pulled that converted them from an opportunity to a lead and so on. Talk about the ability for Branch, from an attribution perspective, to really tell those marketers and the organization exactly, down to the tactical level, this is what's working, this is what's not working. Even if it's a color combination, for example. That science is critical. >> Yeah, absolutely. Because we are able to cover the entire marketing life cycle in which they're trying to reach their customers. We cover off on email. We have mobile web to app. We have organic, we have search. No matter what, you can look at it all under a Branch lens. So we are providing not just accurate attribution down to the post-install and what happens after that, but also a more holistic view of everything that's happening on mobile. So then you can stitch all that together and really look at which ones are actually performing, so you could see exactly which campaigns attributed directly to what amount of spend, or which campaigns helped us understand the true long-term lifetime value of customers, let's say in this case those who ordered delivery or pickup. So down to the customer persona, it really helped.
And also, because of our attribution, Peet's was actually able to see a four-and-a-half-times increase in attributed purchases at the peak of the pandemic. And even since then, they're still seeing a three times increase in monthly attributed purchases. So because they actually have the view across everything that they're doing, we're able to provide that insight. >> That insight is so critical these days, like we mentioned earlier when talking about real-time data. We expect the experiences to be real time. And I expect that when I go back on the app they're going to know what I ordered last time. Maybe I want that again. Maybe I want to be able to change that, but I want them to know enough about me in a non-creepy way. Give me that seamless experience that I'm expecting, because of course that drives me to come back over and over again and spend way too much money there, which I'm guilty of, guilty as charged. >> Coffee is totally fine. >> Right? Thank you. Thank you so much for validating that. I appreciate that. But talk to me about, as we are kind of wrapping things up here, the brick and mortars. It was such a challenge globally, especially for the mom and pops, to convert quickly and figure out how do we reach a digital audience? How do we get our customers to be loyal? What's some of the advice that you have for the brick and mortars, or those quick service restaurants like Peet's, who've been navigating this the last couple years, now that we're in this interesting, I would like to believe, semi-post-pandemic world? >> Yeah, we're getting there slowly but surely, but yeah, it's really important for them to adapt as we kind of move into this semi-post-pandemic world. We're kind of in the middle of a hybrid of online and offline: are we in stores, are we ordering online? These brand and customer relationships are super complex. I think the mobile app is just one part of that. Customers really shouldn't have any problems getting to the content or item they're looking for, no matter if they're in the store, if they're in the app, if they're on the desktop, if they're checking their email, if they're perusing TikTok. The best customer relationships really are omnichannel in nature. So what I would say is, the need for providing a stellar customer experience isn't going to go away. It's actually really key. Whether it's driving users from their mobile properties to the app or providing a great in-store experience, like the QR codes, customers are expecting a lot more than they did before the pandemic. So they're not really seeing these brand touch points as little silos. They're seeing one brand. So it really should feel like one brand. You should speak to the customers as if it's one brand across every single device, channel, and platform, and really unify that experience for them. >> Absolutely. I think for so many different brands, whether it's a brick and mortar or a QSR, that's going to be one of the defining competitive advantages. If they can give their end users a single brand experience across channels, and you mentioned TikTok, those channels are only going to grow. As are, I think, our expectations. I don't think anybody's going to go back to wanting less than they did two years ago, right? >> Absolutely. Absolutely.
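Pulling together the attribution thread from the conversation above: a minimal sketch of how purchase events might be stitched to prior touches across channels to produce per-campaign attributed revenue. The event fields and the last-touch rule are simplifying assumptions for illustration, not a description of Branch's actual measurement model.

```python
from collections import defaultdict

# Hypothetical cross-channel event log: touches (email, QR code, social ads)
# and purchases, keyed by an anonymized user id and ordered by timestamp.
events = [
    {"user": "u1", "type": "touch", "channel": "email", "campaign": "free_delivery", "ts": 1},
    {"user": "u1", "type": "touch", "channel": "qr_code", "campaign": "order_ahead", "ts": 5},
    {"user": "u1", "type": "purchase", "value": 8.50, "ts": 6},
    {"user": "u2", "type": "touch", "channel": "instagram", "campaign": "order_ahead", "ts": 2},
    {"user": "u2", "type": "purchase", "value": 12.00, "ts": 9},
]

def last_touch_attribution(events):
    """Credit each purchase to the user's most recent prior touch."""
    touches = defaultdict(list)   # user -> touches seen so far
    totals = defaultdict(float)   # campaign -> attributed revenue
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "touch":
            touches[e["user"]].append(e)
        elif e["type"] == "purchase" and touches[e["user"]]:
            campaign = touches[e["user"]][-1]["campaign"]
            totals[campaign] += e["value"]
    return dict(totals)

print(last_touch_attribution(events))
# {'order_ahead': 20.5}
```

A real platform would support more than last-touch (multi-touch models, view-through windows, post-install events), but even this toy version shows why a unified event stream matters: the QR code and the Instagram ad can only be compared because their touches land in the same log as the purchases.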
>> Well this has been great, Michelle, thank you so much for joining me, talking about Branch, what you guys are doing, mobile linking platform, mobile measurement platform, the deep links, what you were able to do with Peet's Coffee, a beloved brand since the 60s and so many others. We appreciate your insights, your time and the story that you shared. >> Thank you so much, Lisa. I hope you have a great rest of your day. >> You as well. For Michelle Lerner, I'm Lisa Martin. You're watching theCUBE's coverage of the AWS Showcase. Keep it right here. More great content coming up from theCUBE, the leader in live tech coverage. (gentle music)

Published Date : Jun 29 2022

Manyam Mallela, Blueshift | AWS Startup Showcase S2 E3


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase. The topic is MarTech: Emerging Cloud-Scale Experience. This is season two, episode three of the ongoing series covering the exciting startups from the AWS ecosystem. We talk about their value proposition and their company and all the good stuff that's going on. I'm your host, John Furrier. And today we're excited to be joined by Manyam Mallela, who's the co-founder and head of AI at Blueshift. Great to have you on here to talk about Blueshift: Intelligent Customer Engagement, Made Simple. Thanks for joining us today. >> Thank you, John. Thank you for having me. >> So last time we did our intro video. We put it out on the web. Got great feedback. One of the things that we talked about, which is resonating out there in the viral Twitter sphere and in the thought leadership circles, is this concept that you mentioned called the 10X marketer. That idea that you have a solution that can provide 10X value. Kind of a riff on the 10X engineer in the DevOps cloud world. What does it mean? And how does someone get there? >> Yeah, fantastic. I think that's a great way to start our discussion. I think a lot of organizations, especially in this current economic environment, are looking to say, I have limited resources, limited budgets, how do I actually achieve digital and customer engagement that helps move the needle for my key metrics, whether it's average revenue per user, lifetime value of the user, and frequent interactions. Above all, the more frequently a brand is able to interact with their customers, the better they understand them, the better they can actually engage them. And that usually leads to long-term good outcomes for both the customer and the brand and the organizations. So the way I see the 10X marketer is that you need to have tools that give you that speed and agility without hindering your ability to activate any of the campaigns or experiences that you want to create. And I see the roadblocks for many organizations are usually threefold. One is your data silos. Usually data that is on your sites does not talk to your app data, does not talk to your social data, does not talk to your CRM data, and so forth. So how do I break those silos? The second is channel silos. I actually have customers who are only engaging on email, or some are on email and mobile apps. Some are on email and mobile apps and maybe the OTT TV in a Roku or one of the connected TV experiences, or maybe, in the future, other Web3 environments. How do I actually break those channel silos so that I get a comprehensive view of the customer and my marketing team can engage with all of them with respect to the channel? So break the channel silos. And the last part, which is little talked about, is what I call the insight silo, which is that not only do you need to have the data, but you also have to have a common language to share and talk about within your organizations. What are we learning from our customers? How do we translate our learning and insight on this common data platform or fabric into an action? And that requires the shared language of how do I actually know my customers and what do I do with them. That's the insight silo. I think a lot of times organizations get into this habit where each one speaks their own language, but they aren't actually talking a common language about what we actually know about the real customer.
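As a rough sketch of what breaking those first two silos can look like in practice, here is a toy example of folding per-channel records into one shared customer profile that every team reads from. The field names and the use of email as the join key are assumptions for illustration, not Blueshift's actual data model.

```python
from collections import defaultdict

# Hypothetical siloed records from different channels.
web_events  = [{"email": "a@example.com", "page_views": 14}]
app_events  = [{"email": "a@example.com", "sessions": 9, "push_opt_in": True}]
crm_records = [{"email": "a@example.com", "plan": "premium"},
               {"email": "b@example.com", "plan": "free"}]

def unify(*sources):
    """Fold channel-specific records into one profile per customer."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            key = record["email"]  # simplistic identity resolution
            profiles[key].update({k: v for k, v in record.items() if k != "email"})
    return dict(profiles)

print(unify(web_events, app_events, crm_records))
# {'a@example.com': {'page_views': 14, 'sessions': 9, 'push_opt_in': True, 'plan': 'premium'},
#  'b@example.com': {'plan': 'free'}}
```

The third silo, the insight silo, is the harder one: once the profiles exist, the teams still need shared definitions of what they mean (what counts as an engaged customer, what counts as churn risk) so that everyone acts on the same read of the same data.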
>> Yeah, and I think that's a great conversation because there's two, when you hear 10X marketer or 10X conversations, it implies a couple things. One is you're breaking an old way and bringing in something new. And the new is a force multiplier, in this case, 10X marketer. But this is the cloud scale so marketing executives, chiefs, staffs, chiefs of staffs of CMOs and their staffs. They want to get that scale. So marketing at scale is now the table stakes. Now budget constraints are there as well. So you're starting to see, okay, I need to do more with less. Now the big question comes up is ROI. So I want to have AI. I want to have all these force multipliers. What do I got to do with the old? How do I handle that? How do I bring the new in and operationalize it? And if that's the case, I'm making a change. So I have to ask you, what's your view on the ROI of AI marketing, because this is a key component 'cause you've got scale factor here. You've got to force multiplier opportunity. How do you get that ROI on the table? >> I think that as you rightly said, it's table stakes. And I think the ROI of AI marketing starts with one very key simple premise that today some of the tools allow you to do things one at a time. So I can actually say, "can I run this campaign today?" And you can scramble your team, hustle your way, get everybody involved and run that campaign. And then tomorrow I'd say like, Hey, I looked at the results. Can I do this again? And they're like, oh, we just asked for all of us to get that done. How do I do it tomorrow? How do I do it next week? How do I do it for every single week for the rest of the year? That's where I think the AI marketing is essentially taking your insight, taking your creativity, and creating a platform and a tool that allows you to run this every single day. And that's agility at scale. That is not only a scale of the customer base, but scale across time. And that AI-based automation is the key ROI piece for a lot of AI marketing practitioners. So Forrester, for example, did a comprehensive total economic impact study with our customers. And what they found out was actually the 781% ROI that they reported in that particular report is based on three key factors. One is being able to do experiences that are intelligent at scale, day in and day out. So do your targeting, do your recommendations. Not just one day, but do it every single day. And don't hold back yourself on being able to do that. >> I think they got to get the return. They got to get the sales too. This is the numbers. >> That's right. They actually have real dollars, real numbers attached to it. They have a calculator. You can actually go in and plug your own numbers and get what you might expect from your existing customer base. The second is that once you have a unified platform like ours, the 10X marketer that we're talking about is actually able to do more. It's sometimes actually, it's kind of counterintuitive to think that a smaller team does more. But in reality, what we have seen, that is the case. When you actually have the right tools, the smaller teams actually achieve more. And that's the redundant operations, conflicting insights that go away into something more coherent and comprehensive. And that's the second insight that they found. And the third is just having reporting and all of the things in one place means that you can amplify it. You can amplify it across your paid media channels. 
You can amplify it across your promotions programs and other partnerships that you're running. >> That's the key thing about platforms that people don't understand: you have a platform and it enables a lot of value. In this case, force multiplier value. It enables more value than you pay for it. But the key is it enables customers to do things without a line of code, meaning it's a platform. They're innovating on top of it. And that's, I think, where the ROI comes in, and this leads me to the next question I wanted to ask you. Not to throw a wet blanket on the MarTech industry, but I got to think, when I hear marketing automation, I kind of think old. I think old, inadequate, antiquated technologies. I think email blasting and just some boring stuff that just gets siloed or is bespoke from something else. Are marketing automation tools created equal? Does something like what you guys are doing with SmartHub change that? Can you just talk about that, 'cause it's not going to go away. It's just another level that's going to be abstracted away under the covers. >> Yeah, great question. Certainly, email marketing has been practiced for two or three decades now in some form or another. I think we went from essentially what people call list-based marketing: I have a list, let me keep blasting the same message to everybody and then hopefully something will come out of it. Then it moved a little bit toward, okay, maybe now I have a CRM database and I can do database marketing, which they will call like, "Hey, Hi John. Hi Manyam," which is the first name. And they think that's all it takes to get the customer excited, because you'll call them by name, which is certainly helpful, but not enough. The new age that we live in now is what we call graph-based marketing. And the way we materialize that is that every single user is interacting with a brand and its offerings. That interaction graph, happening across millions of customers and across thousands of content articles, videos, shows, products, and items, actually has much richer knowledge of what the customer wants than first names or lists. So I think for the next evolution of marketing automation, even though the industry has been there a while, there is a step change in what can actually be done at scale, which is taking that interaction graph and making it a part of the experience for the customer, and that's what we enable. That's why we do think of that as a big step change from how people have been practicing list-based marketing. And within that, certainly there is a maturity curve as to how people approach AI marketing, and they are at different points on that spectrum. Some people are still at list-based marketing. Some people are at database marketing. And hopefully we'll move them to this new interaction graph-based marketing. >> Yeah, and I think the context is key. I like how you bring up the graph angle on this, because graph databases imply there's a lot of different optionality around what's happened contextually, both over time and currently, and it adds to it. Makes it smarter. It's not just siloed, just one dimensional. It feels like it's got a lot there. This is clearly something I'm a big fan of, and I think this is the way to go. As you get more personalization, you get more data. Graph databases make a lot of sense.
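As a rough illustration of the graph idea (not Blueshift's actual model), each interaction can be stored as an edge between a user and an item, and simple recommendations can be read off the graph by walking to neighbors of neighbors. The items and scoring below are hypothetical.

```python
from collections import defaultdict

# Hypothetical user-item interaction edges (user, item).
interactions = [
    ("ana", "show_1"), ("ana", "article_7"),
    ("ben", "show_1"), ("ben", "show_4"),
    ("cara", "article_7"), ("cara", "podcast_2"),
]

user_items = defaultdict(set)
item_users = defaultdict(set)
for user, item in interactions:
    user_items[user].add(item)
    item_users[item].add(user)

def recommend(user):
    """Suggest items this user hasn't seen, scored by how often they co-occur
    with the user's items elsewhere in the graph."""
    scores = defaultdict(int)
    for item in user_items[user]:
        for neighbor in item_users[item]:        # other users who share the item
            for candidate in user_items[neighbor]:
                if candidate not in user_items[user]:
                    scores[candidate] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))  # e.g. ['show_4', 'podcast_2']
```

Compared with a flat list or a first-name merge field, the graph carries the co-occurrence structure, which is the richer knowledge the conversation refers to; a production system would layer recency, channel, and predictive models on top of this.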
So I have to ask you, since this is a really cutting edge value proposition, who are the primary buyers and users in the organizations that you guys are working with? >> Yeah, great question. So we typically have CMO organizations approaching us with this problem, and they usually talk to their CIO organizations, their counterparts. The chief information officers have been investing in data fabrics, data lakes, and data warehouses for the better part of the last decade or two, and have some very cutting edge technology that goes into organizing all this data. But that still doesn't solve the problem of how do I take this data and make a meaningful, relevant, authentic experience for the customer. That's the CMO problem. And CMOs are now challenged with creating a product-level experience with every interaction, and that's where we come in. So the CMOs are the buyers of our SmartHub CDP platform. And we're looking at consolidating the hundreds of tools that they had in the past so that one or two channel marketers, actually the 10X marketers that we talk about, can do that work. And you need the right tool on top of your data lakes and data warehouses to be able to do that. So CMOs are also the real drivers of using this technology.
The other half certainly uses both AI modules that we provide and are actually augmented with things that they've already built. And we do not have a fight in that ring. But we do acknowledge and we do provide the right hooks for getting the data out of our system and bringing their AI back into our system. And we think that at the end of the day, if you want agility for the CMO, there should not be any barriers. >> It's like they're in the data business and that's the focus. So I think with what I hear you saying is that with your technology and platform, you're enabling to get them to be in the data business as fast as possible. >> That's right. >> Versus algorithm business, which they could add to over time. >> Certainly they could add to. But I think the bulk of competencies for the CMO are on the creative side. And certainly wrangling with data pipelines day in and day out and wondering what actually happened to a pipeline in the middle of the night is not probably what they would want to focus on. >> Not their core confidence. Yeah, I got that. >> That's right. >> You can do all the heavy lifting. I love that. I got to ask you on the Blueshift side on customer experience consumption. how can someone experience the product before buying? Is there a trial or POC? What's the scale and scope of operationalizing and getting the Blueshift value proposition in them? >> Yeah, great. So we actually recently released a fantastic way to experience our product. So if you go to our website, there's only one call-to-action saying, explore Blueshift. And if you click on that, without asking, anything other than your business email address, you're shown the full product. You're given a guided tour of all the possibilities. So you can actually experience what your marketing team would be doing in the product. And they call it Project Rover. We launched it very recently and we are seeing fantastic reception to that. I think a lot of times, as you said, there is that question mark of like, I have a marketing team that is already doing X, Y, Z. Now you are asking me to implement Blueshift. How would they actually experience the product? And now they can go in and experience the product. It's a great way to get the gist of the product in 10 clicks. Much more than going through any number of videos or articles. I think people really want to say, let me do those 10 clicks. And I know what impression that I can get from platform. So we do think that's a great way to experience the product and it's easily available from the main website. >> It's in the value proposition. It isn't always a straight line. And you got that technology. And I got to ask from between your experience with the customers that you're talking to, prospects, and customers, where do you see yourself winning deals on Customer Engagement, Made Simple because the word customer engagement's been around for a while, and it's become, I won't say cliche, but there's been different generational evolutions of technology that made that possible. Obviously, we're living in an era of high velocity Omni-Channel, a lot of data, the graph databases you mentioned are in there, big part of it. Where are you winning deals? Where are customers pain points where you are solving that specifically? >> Yeah, great question. So the organizations that come to us usually have one of the dimensions of either they have offering complexity, which is what catalog of content or videos or items do they offer to the customers. 
And on the data complexity on the other side is to what the scale of customer base that I usually target. And that problem has not gone away. I think the customer engagement, even though has been around for a while, the problem of engaging those customers at scale hasn't gone away and it only is getting harder and harder and organizations that have, especially on what we call the business-to-consumer side where the bulk of what marketing organizations in a B2C segments are doing. I have tens to millions of customers and how do I engage them day in and day out. And I think that all that problem is only getting harder because consumer preferences keeps shifting all the time. >> And where's your sweet spot for your customer? What size? Can you just share the target organization? Is it medium enterprise, large B2C, B2B2C? What's the focus area? >> Yeah, great question. So we have seen like startups that are in Silicon Valley. I have now half a million monthly active users, how do I actually engage them to customers and clients like LendingTree and PayPal and Discovery and BBC who have been in the business for multiple decades, have tens of millions of customers that they're engaging with. So that's kind of our sweet spot. We are certainly not maybe for small shop with maybe a hundred plus customers. But as you reach the scale of tens of thousands of customers, you start seeing this problem. And then you start to look out for solutions that are beyond, especially list-based marketing and email blast. >> So as the scale, you can dial up and down, but you have to have some enough scale to get the data pattern. >> That's right. >> If I can connect the dots there. >> I would probably say, looking at a hundred thousand or more monthly active customer base, and then you're trying to ramp up your own growth based on what you're learning and to engage those customers. >> It's like a bulldozer. You need the heavy equipment. Great conversation. For the last minute we have here Manyam, give you a plug for the company. What's going on? What are you guys doing? What's new? Give some success stories, your latest achievements. Take a minute to give a plug for the company. >> Yeah, great. We have been recognized by Deloitte as the fastest growth startup two years in a row and continuing to be on that streak. We have released currently integrations with AWS partners and Snowflake partners and data lake partners that allow implementing Blueshift a much streamlined experience with bidirectional integrations. We have now hundred plus data connectors and data integrations in our system and that takes care of many of our needs. And now, I think organizations that have been budget constraint and are trying to achieve a lot with a small team are actually going to look at these solutions and say, "Can I get there?" and "Can I become that 10X marketing organization? And as you have said, agility at scale is very, very hard to achieve. Being able to take your marketing team and achieve 10X requires the right platform and the right solution. We are ready for it. >> And every company's in the data business that's the asset. You guys make that sing for them. It's good stuff. Love the 10X. Love the scale. Manyam Mallela, thanks for coming on. Co-founder, Head of AI at Blueshift. This is the AWS Startup Showcase season two, episode three of the ongoing series covering the exciting startups from the AWS ecosystem. I'm John Furrier, your host. Thanks for watching. >> Thank you, John. (upbeat music)

Published Date : Jun 29 2022

Christian Wiklund, unitQ | AWS Startup Showcase S2 E3


 

(upbeat music) >> Hello, everyone. Welcome to the theCUBE's presentation of the AWS Startup Showcase. The theme, this showcase is MarTech, the emerging cloud scale customer experiences. Season two of episode three, the ongoing series covering the startups, the hot startups, talking about analytics, data, all things MarTech. I'm your host, John Furrier, here joined by Christian Wiklund, founder and CEO of unitQ here, talk about harnessing the power of user feedback to empower marketing. Thanks for joining us today. >> Thank you so much, John. Happy to be here. >> In these new shifts in the market, when you got cloud scale, open source software is completely changing the software business. We know that. There's no longer a software category. It's cloud, integration, data. That's the new normal. That's the new category, right? So as companies are building their products, and want to do a good job, it used to be, you send out surveys, you try to get the product market fit. And if you were smart, you got it right the third, fourth, 10th time. If you were lucky, like some companies, you get it right the first time. But the holy grail is to get it right the first time. And now, this new data acquisition opportunities that you guys in the middle of that can tap customers or prospects or end users to get data before things are shipped, or built, or to iterate on products. This is the customer feedback loop or data, voice of the customer journey. It's a gold mine. And it's you guys, it's your secret weapon. Take us through what this is about now. I mean, it's not just surveys. What's different? >> So yeah, if we go back to why are we building unitQ? Which is we want to build a quality company. Which is basically, how do we enable other companies to build higher quality experiences by tapping into all of the existing data assets? And the one we are in particularly excited about is user feedback. So me and my co-founder, Nik, and we're doing now the second company together. We spent 14 years. So we're like an old married couple. We accept each other, and we don't fight anymore, which is great. We did a consumer company called Skout, which was sold five years ago. And Skout was kind of early in the whole mobile first. I guess, we were actually mobile first company. And when we launched this one, we immediately had the entire world as our marketplace, right? Like any modern company. We launch a product, we have support for many languages. It's multiple platforms. We have Android, iOS, web, big screens, small screens, and that brings some complexities as it relates to staying on top of the quality of the experience because how do I test everything? >> John: Yeah. >> Pre-production. How do I make sure that our Polish Android users are having a good day? And we found at Skout, personally, like I could discover million dollar bugs by just drinking coffee and reading feedback. And we're like, "Well, there's got to be a better way to actually harness the end user feedback. That they are leaving in so many different places." So, you know what, what unitQ does is that we basically aggregate all different sources of user feedback, which can be app store reviews, Reddit posts, Tweets, comments on your Facebook ads. It can be better Business Bureau Reports. We don't like to get to many of those, of course. But really, anything on the public domain that mentions or refers to your product, we want to ingest that data in this machine, and then all the private sources. 
So you probably have a support system deployed, a Zendesk or an Intercom. You might have a chatbot like an Ada, and so forth. And your end user is going to leave a lot of feedback there as well. So we take all of these channels, plug them into the machine, and then we're able to take this qualitative data. And I actually think, when an end user leaves a piece of feedback, it's an act of love. They took time out of the day, and they're going to tell you, "Hey, this is not working for me," or, "Hey, this is working for me," and they're giving you feedback. But how do we package this very messy, multi-channel, multi-language, all-over-the-place data? How can we distill it into something that's quantifiable? Because I want to be able to monitor these different signals. So I want to turn user feedback into time series. 'Cause with time series, I can now treat this the same way as Datadog treats machine logs. I want to be able to see anomalies, and I want to know when something breaks. So what we do here is break down your data into something called quality monitors, which are basically machine learning models that can aggregate the same type of feedback data into these very fine-grained and discrete buckets. And we deploy up to a thousand of these quality monitors per product. And so we can get down to the root cause. Let's say the password reset link is not working. And it's at that root cause granularity that we see companies take action on the data. And I think historically, the workflow between marketing and support, and engineering and product, has been a bit broken. They've been siloed from a data perspective. They've been siloed from a workflow perspective, where support will get a bunch of tickets around some issue in production, and they're trained to copy and paste some examples, throw it over the wall, file a Jira ticket, and then they don't know what happens. So what we see with the platform we built is that these teams are able to rally around a single source of truth: yes, the password reset link seems to have broken. This is not a user error. It's not a "fix later" or "I can't reproduce." We're looking at the data, and yes, something broke. We need to fix it. >> I mean, the data silos are a huge issue. Different channels, omnichannel. Now, there's more and more channels that people are talking in. So that's huge. I want to get to that. But also, you said that it's a labor of love to leave a comment or feedback. But also, I remember from my early days, breaking into the business at IBM and Hewlett-Packard, where I worked: people who complain are the most loyal customers, if you service them. So it's complaints. >> Christian: Yeah. >> It's leaving feedback. And then, there's also reading between the lines with app errors or potentially what's going on under the covers that people may not be complaining about, but they're leaving maybe gesture data or some sort of digital trail. >> Yeah. >> So this is the confluence of the multitude of data sources. And then you got the siloed locations. >> Siloed locations. >> It's a complicated problem. >> It's very complicated. And when you think about it, so I started, I came to the Bay Area in 2005. My dream was to be a quant analyst on Wall Street, and I ended up in QA at VMware. So I started at VMware in Palo Alto, and didn't have a driver's license. I had to bike around, which was super exciting. And we were shipping box software, right?
This was literally a box with a DVD that's been burned, and if that DVD had bugs in it, guess what it'll be very costly to then have to ship out, and everything. So I love the VMware example because the test cycles were long and brutal. It was like a six month deal to get through all these different cases, and they couldn't be any bugs. But then as the industry moved into the cloud, CI/CD, ship at will. And if you look at the modern company, you'll have at least 20 plus integrations into your product. Analytics, add that's the case, authentication, that's the case, and so forth. And these integrations, they morph, and they break. And you have connectivity issues. Is your product working as well on Caltrain, when you're driving up and down, versus wifi? You have language specific bugs that happen. Android is also quite a fragmented market. The binary may not perform as well on that device, or is that device. So how do we make sure that we test everything before we ship? The answer is, we can't. There's no company today that can test everything before the ship. In particular, in consumer. And the epiphany we had at our last company, Skout, was that, "Hey, wait a minute. The end user, they're testing every configuration." They're sitting on the latest device, the oldest device. They're sitting on Japanese language, on Swedish language. >> John: Yeah. >> They are in different code paths because our product executed differently, depending on if you were a paid user, or a freemium user, or if you were certain demographical data. There's so many ways that you would have to test. And PagerDuty actually had a study they came out with recently, where they said 51% of all end user impacting issues are discovered first by the end user, when they serve with a bunch of customers. And again, like the cool part is, they will tell you what's not working. So now, how do we tap into that? >> Yeah. >> So what I'd like to say is, "Hey, your end user is like your ultimate test group, and unitQ is the layer that converts them into your extended test team." Now, the signals they're producing, it's making it through to the different teams in the organization. >> I think that's the script that you guys are flipping. If I could just interject. Because to me, when I hear you talking, I hear, "Okay, you're letting the customers be an input into the product development process." And there's many different pipelines of that development. And that could be whether you're iterating, or geography, releases, all kinds of different pipelines to get to the market. But in the old days, it was like just customer satisfaction. Complain in a call center. >> Christian: Yeah. >> Or I'm complaining, how do I get support? Nothing made itself into the product improvement, except for slow moving, waterfall-based processes. And then, maybe six months later, a small tweak could be improved. >> Yes. >> Here, you're taking direct input from collective intelligence. Okay. >> Is that have input and on timing is very important here, right? So how do you know if the product is working as it should in all these different flavors and configurations right now? How do you know if it's working well? And how do you know if you're improving or not improving over time? And I think the industry, what can we look at, as far as when it relates to quality? So I can look at star ratings, right? So what's the star rating in the app store? Well, star ratings, that's an average over time. 
So that's something that you may have a lot of issues in production today, and you're going to get dinged on star ratings over the next few months. And then, it brings down the score. NPS is another one, where we're not going to run NPS surveys every day. We're going to run it once a quarter, maybe once a month, if we're really, really aggressive. That's also a snapshot in time. And we need to have the finger on the pulse of product quality today. I need to know if this release is good or not good. I need to know if anything broke. And I think that real time aspect, what we see as stuff sort of bubbles up the stack, and not into production, we see up to a 50% reduction in time to fix these end user impacting issues. And I think, we also need to appreciate when someone takes time out of the day to write an app review, or email support, or write that Reddit post, it's pretty serious. It's not going to be like, "Oh, I don't like the shade of blue on this button." It's going to be something like, "I got double billed," or "Hey, someone took over my account," or, "I can't reset my password anymore. The CAPTCHA, I'm solving it, but I can't get through to the next phase." And we see a lot of these trajectory impacting bugs and quality issues in these work, these flows in the product that you're not testing every day. So if you work at Snapchat, your employees probably going to use Snapchat every day. Are they going to sign up every day? No. Are they going to do passive reset every day? No. And these things are very hard to instrument, lower in the stack. >> Yeah, I think this is, and again, back to these big problems. It's smoke before fire, and you're essentially seeing it early with your process. Can you give an example of how this new focus or new mindset of user feedback data can help customers increase their experience? Can you give some examples, 'cause folks watching and be like, "Okay, I love this value. Sell me on this idea, I'm sold. Okay, I want to tap into my prospects, and my customers, my end users to help me improve my product." 'Cause again, we can measure everything now with data. >> Yeah. We can measure everything. we can even measure quality these days. So when we started this company, I went out to talk to a bunch of friends, who are entrepreneurs, and VCs, and board members, and I asked them this very simple question. So in your board meetings, or on all hands, how do you talk about quality of the product? Do you have a metric? And everyone said, no. Okay. So are you data driven company? Yes, we're very data driven. >> John: Yeah. Go data driven. >> But you're not really sure if quality, how do you compare against competition? Are you doing as good as them, worse, better? Are you improving over time, and how do you measure it? And they're like, "Well, it's kind of like a blind spot of the company." And then you ask, "Well, do you think quality of experience is important?" And they say, "Yeah." "Well, why?" "Well, top of fund and growth. Higher quality products going to spread faster organically, we're going to make better store ratings. We're going to have the storefronts going to look better." And of course, more importantly, they said the different conversion cycles in the product box itself. That if you have bugs and friction, or an interface that's hard to use, then the inputs, the signups, it's not going to convert as well. So you're going to get dinged on retention, engagement, conversion to paid, and so forth. And that's what we've seen with the companies we work with. 
It is that poor quality acts as a filter function for the entire business, if you're a product-led company. So if you think about a product-led company, where the product is really the centerpiece, if it performs really, really well, then it allows you to hire more engineers, and you can spend more on marketing. Everything is fed by this product in the middle, and quality can make that thing perform worse or better. And we developed a metric actually called the unitQ Score. So if you go to our website, unitq.com, we have indexed the 5,000 largest apps in the world. And we're able to then, on a daily basis, update the score. Because the score is not something you do once a month or once a quarter. It's something that changes continuously. So now, you can get a score between zero and 100. If you get the score 100, that means that our AI doesn't find any quality issues reported in that data set. And if your score is 90, that means that 10% will be a quality issue. So now you can do a lot of fun stuff. You can start benchmarking against competition. So you can see, "Well, I'm Spotify. How do I rank against Deezer, or SoundCloud, or others in my space?" And what we've seen is that as the score goes up, we see this real big impact on KPIs such as conversion, organic growth, retention, and ultimately, revenue, right? And so that was very satisfying for us when we launched it. Quality actually still really, really matters. >> Yeah. >> And I think we all agree on that, but how do we make a science out of it? And that's what we've done. And we were very lucky early on to get some incredible brands that we work with. So Pinterest is a big customer of ours. We have Spotify. We just signed the neobank Chime. We even signed BetterHelp recently, and the world's largest Bible app. So when you look at the types of businesses that we work with, it's truly universal, a very broad field, where if you have digital exhaust or feedback, I can guarantee you there are insights in there that are being neglected. >> John: So Chris, I got to. >> So these manual workflows. Yeah, please go ahead. >> I got to ask you, because this is a really great example of this new shift, right? The new shift of leveraging data, flipping the script. Everything's flipping the script here, right? >> Yeah. >> So you're talking about what the value proposition is. Hey, the board example's a good one: how do you measure quality? There's no KPI for that. So it's almost category creating in its own way, in that it's a net new thing. It's okay to be new, it's just new. So the question is, if I'm a customer, I buy it. I can see my product teams engaging with this. I can see how it changes my marketing and customer experience teams. How do I operationalize this? Okay. So what do I do? Do I reorganize my marketing team? So take me through the impact to the customer that you're seeing. What are they resonating towards? Obviously, getting that data is key, and that's the holy grail, we all know that. But what do I got to do to change my environment? What's my operationalization piece of it? >> Yeah, and that's one of the coolest parts I think, and that is, let's start with your user base. We're not going to ask your users to do something differently. They're already producing this data every day. They are tweeting about it. They're putting in app reviews. They're emailing support. They're engaging with your support chatbot. They're already doing it.
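To ground the two mechanics described in this exchange, the per-bucket quality monitors tracked as time series and the 0-100 score, here is a minimal sketch under the stated interpretation (score = 100 minus the percentage of feedback flagged as a quality issue). The monitor names, the toy threshold alert, and the data are hypothetical, not unitQ's actual implementation.

```python
from collections import Counter

# Hypothetical daily feedback stream, already classified into fine-grained
# "quality monitor" buckets; None means no quality issue was detected.
feedback = [
    {"day": "2022-06-27", "monitor": None},
    {"day": "2022-06-27", "monitor": "password_reset_broken"},
    {"day": "2022-06-27", "monitor": None},
    {"day": "2022-06-28", "monitor": "double_billing"},
    {"day": "2022-06-28", "monitor": "double_billing"},
    {"day": "2022-06-28", "monitor": None},
    {"day": "2022-06-28", "monitor": None},
]

def daily_scores(feedback):
    """Turn feedback into a per-day time series with a 0-100 quality score."""
    totals, issues = Counter(), Counter()
    for item in feedback:
        totals[item["day"]] += 1
        if item["monitor"] is not None:
            issues[item["day"]] += 1
    return {day: 100 * (1 - issues[day] / totals[day]) for day in totals}

def alert_on_spike(feedback, monitor, threshold=2):
    """Flag days where one monitor crosses a simple volume threshold,
    a stand-in for the anomaly detection described above."""
    per_day = Counter(i["day"] for i in feedback if i["monitor"] == monitor)
    return [day for day, count in per_day.items() if count >= threshold]

print(daily_scores(feedback))                      # approx {'2022-06-27': 66.7, '2022-06-28': 50.0}
print(alert_on_spike(feedback, "double_billing"))  # ['2022-06-28']
```

In the toy data, a score of 50 means half of that day's feedback mapped to some quality monitor; the spike in the double-billing bucket is the kind of signal that would page an engineer or tag the affected support tickets.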
And every day that you're not leveraging that data, the data that was produced today is less valuable tomorrow. And in 30 days, I would argue, it's probably useless. >> John: Unless it's same guy commenting. >> Yeah. (Christian and John laughing) The first, we need to make everyone understand. Well, yeah, the data is there, and we don't need to do anything differently with the end user. And then, what we do is we ask the customer to tell us, "Where should we listen in the public domain? So do you want the Reddit post, the Trustpilot? What channels should we listen to?" And then, our machine basically starts ingesting that data. So we have integration with all these different sites. And then, to get access to private data, it'll be, if you're on Zendesk, you have to issue a Zendesk token, right? So you don't need any engineering hours, except your IT person will have to grant us access to the data source. And then, when we go live. We basically build up this taxonomy with the customers. So we don't we don't want to try and impose our view of the world, of how do you describe the product with these buckets, these quality monitors? So we work with the company to then build out this taxonomy. So it's almost like a bespoke solution that we can bootstrap with previous work we've done, where you don't have these very, very fine buckets of where stuff could go wrong. And then what we do is there are different ways to hook this into the workflow. So one is just to use our products. It's a SaaS product as anything else. So you log in, and you can then get this overview of how is quality trending in different markets, on different platforms, different languages, and what is impacting them? What is driving this unitQ Score that's not good enough? And all of these different signals, we can then hook into Jira for instance. We have a Jira integration. We have a PagerDuty integration. We can wake up engineers if certain things break. We also tag tickets in your support system, which is actually quite cool. Where, let's say, you have 200 people, who wrote into support, saying, "I got double billed on Android." It turns out, there are some bugs that double billed them. Well, now we can tag all of these users in Zendesk, and then the support team can then reach out to that segment of users and say, "Hey, we heard that you had this bug with double billing. We're so sorry. We're working on it." And then when we push fix, we can then email the same group again, and maybe give them a little gift card or something, for the thank you. So you can have, even big companies can have that small company experience. So, so it's groups that use us, like at Pinterest, we have 800 accounts. So it's really through marketing has vested interest because they want to know what is impacting the end user. Because brand and product, the lines are basically gone, right? >> John: Yeah. >> So if the product is not working, then my spend into this machine is going to be less efficient. The reputation of our company is going to be worse. And the challenge for marketers before unitQ was, how do I engage with engineering and product? I'm dealing with anecdotal data, and my own experience of like, "Hey, I've never seen these type of complaints before. I think something is going on." >> John: Yeah. >> And then engineering will be like, "Ah, you know, well, I have 5,000 bugs in Jira. Why does this one matter? When did it start? Is this a growing issue?" >> John: You have to replicate the problem, right? >> Replicate it then. 
>> And then it goes on and on and on. >> And a lot of times, reproducing bugs is really hard, because it works on my device. Because you don't sit on the device that it happened on. >> Yup. >> So now, marketing can come with indisputable data and say, "Hey, something broke here." And we see the same with support. Product engineering, of course - for them, we talk about, "Hey, listen, you've invested a lot in observability of your stack, haven't you?" "Yeah, yeah, yeah." "So you have a Datadog at the bottom?" "Absolutely." "And you have an AppD on the client?" "Absolutely." "Well, what about the last mile, how the product manifests itself? Shouldn't you monitor that as well, using machines?" They're like, "Yeah, that'd be really cool." (John laughs) And we see this. There's no way to instrument everything lower in the stack to capture these bugs that leak out. So it resonates really well there. And even for the engineer who's going to fix it. >> Yeah. >> I call it like empathy data. >> Yup. >> Where I get assigned a bug to fix. Well, now, I can read all the feedback. I can actually see it, and I can see the feedback coming in. >> Yeah. >> Oh, there are users out there suffering from this bug. And then when I fix it and I deploy the fix, and I see the trend go down to zero, then I can celebrate it. So that whole feedback loop is (indistinct). >> And that's real time. It's usually missed too. This is the power of user feedback. You guys got a great product, unitQ. Great to have you on. Founder and CEO, Christian Wiklund. Thanks for coming on and sharing in the showcase. >> Thank you, John. >> For the last 30 seconds, the minute we have left, put a plug in for the company. What are you guys looking for? Give a quick pitch for the company, real quick, for the folks out there. Looking for more people, funding status, number of employees. Give a quick plug. >> Yes. So we raised our A round from Google, and then we raised our B from Accel that we closed late last year. So we're not raising money. We are hiring across go-to-market and engineering. And we love to work with people who are passionate about quality and data. We're always, of course, looking for customers who are interested in upping their game. And hey, listen, competing with features is really hard, because you can copy features very quickly. Competing with content - content is a commodity. You're going to get the same movies more or less on all these different providers. And competing on price, we're not willing to do. You're going to pay 10 bucks a month for music. So how do you compete today? If your competitor has a better fine-tuned piano, then your competitor will have better efficiencies, and they're going to retain customers and users better. And you don't want to lose on quality, because it is actually a deterministic and fixable problem. So yeah, come talk to us if you want to up the game there. >> Great stuff. The iteration lean startup model, some say, took craft out of building the product. But this is now bringing the craftsmanship into the product cycle, when you can get that data from customers and users. >> Yeah. >> Who are going to be happy that you fixed it, that you're listening. >> Yeah. >> And that the product got better. So it's a flywheel of loyalty, quality, brand - all of it, you can figure it out. It's the holy grail. >> I think it is. It's a gold mine. And every day you're not leveraging this asset - the user feedback that's there - is a missed opportunity. >> Christian, thanks so much for coming on.
Congratulations to you and your startup. You guys are back together. The band is back together, up and to the right, doing well. >> Yeah. >> We'll check in with you later. Thanks for coming on this showcase. Appreciate it. >> Thank you, John. Appreciate it very much. >> Okay. AWS Startup Showcase. This is season two, episode three, the ongoing series. This one's about MarTech; cloud experiences are scaling. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Jun 29 2022


James Fang, mParticle | AWS Startup Showcase S2 E3


 

>> Hey everyone, welcome to theCUBE's coverage of the AWS startup showcase. This is season two, episode three of our ongoing series featuring AWS and its big ecosystem of partners. This particular season is focused on MarTech, emerging cloud scale customer experiences. I'm your host, Lisa Martin, and I'm pleased to be joined by James Fang, the VP of product marketing at mparticle. James, welcome to the program. Great to have you on. >> Thanks for having me. >> Tell us a little bit about mparticle, what is it that you guys do? >> Sure, so we're mparticle, we were founded in 2013, and essentially we are a customer data platform. What we do is we help brands collect and organize their data. And their data could be coming from web apps, mobile apps, existing data sources like data warehouses, data lakes, et cetera. And we help them help them organize it in a way where they're able to activate that data, whether it's to analyze it further, to gather insights or to target them with relevant messaging, relevant offers. >> What were some of the gaps in the market back then as you mentioned 2013, or even now, that mparticle is really resolving so that customers can really maximize the value of their customer's data. >> Yeah. So the idea of data has actually been around for a while, and you may have heard the buzzword 360 degree view of the customer. The problem is no one has really been actually been able to, to achieve it. And it's actually, some of the leading analysts have called it a myth. Like it's a forever ending kind of cycle. But where we've kind of gone is, first of all customer expectations have really just inflated over the years, right? And part of that was accelerated due to COVID, and the transformation we saw in the last two years, right. Everyone used to, you know, have maybe a digital footprint, as complimentary perhaps to their physical footprint. Nowadays brands are thinking digital first, for obvious reasons. And the data landscape has gotten a lot more complex, right? Brands have multiple experiences, on different screens, right? And, but from the consumer perspective, they want a complete end to end experience, no matter how you're engaging with the brand. And in order to, for a brand to deliver that experience they have to know, how the customers interacted before in each of those channels, and be able to respond in as real time as possible, to those experiences. >> So I can start an interaction on my iPad, maybe carry it through or continue it on my laptop, go to my phone. And you're right, as a, as a consumer, I want the experience across all of those different media to be seamless, to be the same, to be relevant. You talk about the customer 360, as a marketer I know that term well. It's something that so many companies use, interesting that you point out that it's really been, largely until companies like mparticle, a myth. It's one of those things though, that everybody wants to achieve. Whether we're talking about healthcare organization, a retailer, to be able to know everything about a customer so that they can deliver what's increasingly demanded that personalized, relevant experience. How does mparticle fill some of the gaps that have been there in customer 360? And do you say, Hey, we actually deliver a customer 360. >> Yeah, absolutely. 
So, so the reason it's been a myth is for the most part, data has been- exists either in silos, or it's kind of locked behind this black box that the central data engineering team or sometimes traditionally referred to as IT, has control over, right? So brands are collecting all sorts of data. They have really smart people working on and analyzing it. You know, being able to run data science models, predictive models on it, but the, the marketers and the people who want to draw insights on it are asking how do I get it in, in my hands? So I can use that data for relevant targeting messaging. And that's exactly what mparticle does. We democratize access to that data, by making it accessible in the very tools that the actual business users are are working in. And we do that in real time, you don't have to wait for days to get access to data. And the marketers can even self-service, they're able to for example, build audiences or build computed insights, such as, you know, average order value of a customer within the tool themselves. The other main, the other main thing that mparticle does, is we ensure the quality of that data. We know that activation is only as as good, when you can trust that data, right? When there's no mismatching, you know, first name last names, identities that are duplicated. And so we put a lot of effort, not only in the identity resolution component of our product but also being able to ensure that the consistency of that data when it's being collected meets the standard that you need. >> So give us a, a picture, kind of a topology of a, of a customer data platform. And what are some of the key components that it contains, then I kind of want to get into some of the use cases. >> Yeah. So at, at a core, a lot of customer data platforms look similar. They're responsible first of all for the collection of data, right? And again, that could be from web mobile sources, as well as existing data sources, as well as third party apps, right? For example, you may have e-commerce data in a Shopify, right. Or you may have, you know, a computer model from a, from a warehouse. And then the next thing is to kind of organize it somehow, right? And the most common way to do that is to unify it, using identity resolution into this idea of customer profiles, right. So I can look up everything that Lisa or James has done, their whole historical record. And then the third thing is to be able to kind of be able to draw some insights from that, whether to be able to build an audience membership on top of that, build a predictive model, such as the churn risk model or lifetime value of that customer. And finally is being able to activate that data, so you'll be able to push that data again, to those relevant downstream systems where the business users are actually using that data to, to do their targeting, or to do more interesting things with it. >> So for example, if I go to the next Warrior's game, which I predict they're going to win, and I have like a mobile app of the stadium or the team, how, and I and I'm a season ticket holder, how can a customer data platform give me that personalized experience and help to, yeah, I'd love to kind of get it in that perspective. >> Yeah. So first of all, again, in this modern day and age consumers are engaging with brands from multiple devices, and their attention span, frankly, isn't that long. So I may start off my day, you know, downloading the official warriors app, right. 
And I may be, you know, browsing from my mobile phone, but I could get distracted. I've got to go join a meeting at work, drop off my kids or whatever, right? But later in the day, I have it in my mind that I may be interested in purchasing tickets or buying that Warriors jersey. So I may return to the website, or even the physical store, right, if I happen to be in the area. And what the customer data platform is doing in the background is associating and connecting all those online and offline touchpoints to that user profile. And now, let's say I'm a marketer for the Golden State Warriors. And I see that, you know, this particular user has looked at my website, even added a Warriors jersey to their cart. I'm now able to say, hey, here's a $5 promotional coupon. Also, here's a special, limited edition - we just won, you know, the Western Conference finals, and you can pre-book, you know, the Warriors championship jersey, cross your fingers - and target that particular user with that promotion. And it's much more likely, because we have that contextual data, that that user's going to convert, than just blasting them on a Facebook or something like that. >> Right. Which all of us these days are getting less and less patient with - those broad blasts through social media and things like that. I love that example. That was a great example. You talked about timing. One of the things I think we've learned is in very short supply in the last couple of years is people's patience and tolerance. We now want things in nanoseconds. So, the ability to glean insights from data and act on it in real time is no longer really a nice to have; that's really table stakes for any type of organization. Talk to us about how mparticle facilitates that real time data, from an insights perspective and from an activation standpoint. >> Yeah. You bring up a good point. And this is actually one of the core differentiators of mparticle compared to the other CDPs: our architecture from the ground up is built for real time. And the way we do that is, we use essentially a real time streaming architecture backend. Essentially all the data points that we collect and send to those downstream destinations - that happens in milliseconds, right? So the moment that that user, again, clicks a button or adds something to their shopping cart, or even abandons that shopping cart, that downstream tool, whether it's a marketer, whether it's a business analyst looking at that data for intelligence, they get that data within milliseconds. And our audience computations also happen within seconds. So again, if you have a targeted list for a targeted campaign, those updates happen in real time. >> You gave an- you ran with the Warriors example that I threw at you, which I love, absolutely. Talk to me. You must have, though, a favorite real world customer example of mparticle's that you think really articulates the value to organizations, whether it's to marketers or operators, and has some nice, tangible business outcomes. Share with me, if you will, a favorite customer story. >> Yeah, definitely one of mine, and probably one of our most well known, is we were actually behind the scenes of the Whopper Jr. campaign.
So a couple of years ago, Burger King ran this really creative ad where the, effectively their goal was to get their mobile app out, as well as to train, you know, all of us back before COVID days, how to order on our mobile devices and to do things like curbside checkout. None of us really knew how to do that, right. And there was a challenge of course that, no one wants to download another app, right? And most apps get downloaded and get deleted right out away. So they ran this really creative promotion where, if you drove towards a McDonald's, they would actually fire off a text message saying, Hey, how about a Whopper for 99 cents instead? And you would, you would, you would receive a text message personalized just for you. And you'd be able to redeem that at any burger king location. So we were kind of the core infrastructure plumbing the geofencing location data, to partner of ours called radar, which handles you geofencing, and then send it back to a marketing orchestration vendor to be able to fire that targeted message. >> Very cool. I, I, now I'm hungry. You, but there's a fine line there between knowing that, okay, Lisa's driving towards McDonald's let's, you know, target her with an ad for a whopper, in privacy. How do you guys help organizations in any industry balance that? Cause we're seeing more and more privacy regulations popping up all over the world, trying to give consumers the ability to protect either the right to forget about me or don't use my data. >> Yeah. Great question. So the first way I want to respond to that is, mparticle's really at the core of helping brands build their own first party data foundation. And what we mean by that is traditionally, the way that brands have approached marketing is reliant very heavily on second and third party data, right? And most that second-third party data is from the large walled gardens, such as like a Facebook or a TikTok or a Snapchat, right? They're they're literally just saying, Hey find someone that is going to, you know fit our target profile. And that data is from people, all their activity on those apps. But with the first party data strategy, because the brand owns that data, we- we can guarantee that or the brands can guarantee to their customers it's ethically sourced, meaning it's from their consent. And we also help brands have governance policies. So for example, if the user has said, Hey you're allowed to collect my data, because obviously you want to run your business better, but I don't want any my information sold, right? That's something that California recently passed, with CPRA. Then brands can use mparticle data privacy controls to say, Hey, you can pass this data on to their warehouses and analytics platforms, but don't pass it to a platform like Facebook, which potentially could resell that data. >> Got it, Okay. So you really help put sort of the, the reigns on and allow those customers to make those decisions, which I know the mass community appreciates. I do want to talk about data quality. You talked about that a little bit, you know, and and data is the lifeblood of an organization, if it can really extract value from it and act on it. But how do you help organizations maintain the quality of data so that what they can do, is actually deliver what the end user customer, whether it's a somebody buying something on a, on a eCommerce site or or, a patient at a hospital, get what they need. >> Yeah. 
So on the data quality front, first of all I want to highlight kind of our strengths and differentiation in identity resolution. So we run a completely deterministic algorithm, but it's actually fully customizable by the customer depending on their needs. A lot of other customer data platform providers out there do offer identity resolution, but it's almost like a black box. You don't know what happens. And they could be doing a lot of fuzzy matching, right, which is, you know, probabilistic or predictive. And the problem with that is, let's say, you know, Lisa, your email changed over the years, and the CDP platform may match you with someone that's completely not you. And now all of a sudden you're getting ads that completely don't fit you, or worse yet, that brand is violating privacy laws, because your personal data is being used to target another user, which obviously should not happen, right? So because we're giving our customers complete control, it's not a black box, it's transparent. And they have the ability to customize it, such as they can specify which identifiers matter more to them, whether they want to match on email address first, or they might want to draw on a higher confidence identifier like a hashed credit card number or even a customer ID. They have that choice. The second part about ensuring data quality is we actually built in schema management. So as those events are being collected, you could say that, for example, when it's an add to cart event, I require the item color. I require the size. Let's say it's fashion apparel - I require the size of it and the type of apparel, right? And if data comes in with missing fields, or perhaps with fields that don't match the expectation - let's say you're expecting small, medium, large and you get a Q, you know, Q is meaningless data, right? We can then enforce that and flag it as a data quality violation, and brands can completely correct that mistake to make sure, again, all the data that's flowing through is of value to them.
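The two mechanisms James describes above - consent-aware forwarding of data to downstream destinations, and schema enforcement on incoming events - can be sketched roughly as below. The consent flags, destination names, and field rules are assumptions for illustration; mparticle's actual data privacy controls and schema tooling are configured in its platform rather than hand-written like this, and the deterministic identity matching discussed above is omitted for brevity.

```typescript
// Illustrative consent-based forwarding rule plus a minimal schema check.

interface Consent {
  allowCollection: boolean;
  allowSaleOfData: boolean; // e.g. a CPRA-style "do not sell" opt-out when false
}

type Destination = "warehouse" | "analytics" | "ad_platform";

// Destinations that could resell data are withheld when the user opted out of sale.
function allowedDestinations(consent: Consent): Destination[] {
  if (!consent.allowCollection) return [];
  const base: Destination[] = ["warehouse", "analytics"];
  return consent.allowSaleOfData ? [...base, "ad_platform"] : base;
}

// Hypothetical schema rule for an add-to-cart event in a fashion catalog.
interface AddToCartEvent {
  itemColor?: string;
  size?: string;
  apparelType?: string;
}

function validateAddToCart(e: AddToCartEvent): string[] {
  const violations: string[] = [];
  if (!e.itemColor) violations.push("missing itemColor");
  if (!e.apparelType) violations.push("missing apparelType");
  if (!["S", "M", "L"].includes(e.size ?? "")) violations.push(`unexpected size: ${e.size}`);
  return violations; // a size of "Q" gets flagged instead of silently flowing downstream
}

console.log(allowedDestinations({ allowCollection: true, allowSaleOfData: false }));
// -> ["warehouse", "analytics"]
console.log(validateAddToCart({ itemColor: "navy", size: "Q" }));
// -> ["missing apparelType", "unexpected size: Q"]
```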
You can swap that out, right again with an mparticle place, a marketer can or essentially any business user can flip the switch. And within the mparticle interface, simply disconnect their existing tool and connect a new tool with a couple of button clicks and bam, the data's now flowing into the new tool. So it mparticle really, because we kind of sit in the middle of all these tools and we have over 300 productized prebuilt integrations allows you to move away from kind of a locked in, you know a strategy where you're committed to a vendor a hundred percent to more of a best of breed, agile strategy. >> And where can customers that are interested, go what's your good and market strategy? How does that involve AWS? Where can folks go and actually get and test out this technology? >> Yeah. So first of all, we are we are AWS, a preferred partner. and we have a couple of productized integrations with AWS. The most obvious one is for example, being able to just export data to AWS, whether it's Redshift or an S3 or a kinesis stream, but we also have productized integrations with AWS, personalized. For example, you can take events, feed em to personalize and personalize will come up with the next best kind of content recommendation or the next best offer available for the customer. And mparticle can ingest that data back and you can use that for personalized targeting. In fact, Amazon personalize is what amazon.com themselves use to populate the recommended for use section on their page. So brands could essentially do the same. They could have a recommended for you carousel using Amazon technology but using mparticle to move the data back and forth to, to populate that. And then on top of that very, very soon we'll be also launching a marketplace kind of entry. So if you are a AWS customer and you have credits left over or you just want to transact through AWS, then you'll have that option available as well. >> Coming soon to the AWS marketplace. James, thank you so much for joining me talking about mparticle, how you guys are really revolutionizing the customer data platform and allowing organizations and many industries to really extract value from customer data and use it wisely. We appreciate your insights and your time. >> Thank you very much, Lisa >> For James Fang, I'm Lisa Martin. You're watching theCube's coverage of the AWS startup showcase season three, season two episode three, leave it right here for more great coverage on theCube, the leader in live tech coverage.

Published Date : Jun 29 2022


Daisy Urfer, Algolia & Jason Ling, Apply Digital | AWS Startup Showcase S2 E3


 

(introductory riff) >> Hey everyone. Welcome to theCUBE's presentation of the "AWS Startup Showcase." This is Season 2, Episode 3 of our ongoing series that features great partners in the massive AWS partner ecosystem. This series is focused on, "MarTech, Emerging Cloud-Scale Customer Experiences." I'm Lisa Martin, and I've got two guests here with me to talk about this. Please welcome Daisy Urfer, Cloud Alliance Sales Director at Algolia, and Jason Lang, the Head of Product for Apply Digital. These folks are here to talk with us today about how Algolia's Search and Discovery enables customers to create dynamic realtime user experiences for those oh so demanding customers. Daisy and Jason, it's great to have you on the program. >> Great to be here. >> Thanks for having us. >> Daisy, we're going to go ahead and start with you. Give the audience an overview of Algolia, what you guys do, when you were founded, what some of the gaps were in the market that your founders saw and fixed? >> Sure. It's actually a really fun story. We were founded in 2012. We are an API first SaaS solution for Search and Discovery, but our founders actually started off with a search tool for mobile platforms, so just for your phone and it quickly expanded, we recognize the need across the market. It's been a really fun place to grow the business. And we have 11,000 customers today and growing every day, with 30 billion searches a week. So we do a lot of business, it's fun. >> Lisa: 30 billion searches a week and I saw some great customer brands, Locost, NBC Universal, you mentioned over 11,000. Talk to me a little bit about some of the technologies, I see that you have a search product, you have a recommendation product. What are some of those key capabilities that the products deliver? 'Cause as we know, as users, when we're searching for something, we expect it to be incredibly fast. >> Sure. Yeah. What's fun about Algolia is we are actually the second largest search engine on the internet today to Google. So we are right below the guy who's made search of their verb. So we really provide an overall search strategy. We provide a dashboard for our end users so they can provide the best results to their customers and what their customers see. Customers want to see everything from Recommend, which is our recommended engine. So when you search for that dress, it shows you the frequently bought together shoes that match, things like that, to things like promoted items and what's missing in the search results. So we do that with a different algorithm today. Most in the industry rank and they'll stack what you would want to see. We do kind of a pair for pair ranking system. So we really compare what you're looking for and it gives a much better result. >> And that's incredibly critical for users these days who want results in milliseconds. Jason, you, Apply Digital as a partner of Algolia, talk to us about Apply Digital, what it is that you guys do, and then give us a little bit of insight on that partnership. >> Sure. So Apply Digital was originally founded in 2016 in Vancouver, Canada. And we have offices in Vancouver, Toronto, New York, LA, San Francisco, Mexico city, Sao Paulo and Amsterdam. And we are a digital experiences agency. So brands and companies, and startups, and all the way from startups to major global conglomerates who have this desire to truly create these amazing digital experiences, it could be a website, it could be an app, it could be a full blown marketing platform, just whatever it is. 
And they lack either the experience or the internal resources, or what have you, then they come to us. And and we are end-to-end, we strategy, design, product, development, all the way through the execution side. And to help us out, we partner with organizations like Algolia to offer certain solutions, like an Algolia's case, like search recommendation, things like that, to our various clients and customers who are like, "Hey, I want to create this experience and it's going to require search, or it's going to require some sort of recommendation." And we're like, "Well, we highly recommend that you use Algolia. They're a partner of ours, they've been absolutely amazing over the time that we've had the partnership. And that's what we do." And honestly, for digital experiences, search is the essence of the internet, it just is. So, I cannot think of a single digital experience that doesn't require some sort of search or recommendation engine attached to it. So, and Algolia has just knocked it out of the park with their experience, not only from a customer experience, but also from a development experience. So that's why they're just an amazing, amazing partner to have. >> Sounds like a great partnership. Daisy, let's point it back over to you. Talk about some of those main challenges, Jason alluded to them, that businesses are facing, whether it's e-commerce, SaaS, a startup or whatnot, where search and recommendations are concerned. 'Cause we all, I think I've had that experience, where we're searching for something, and Daisy, you were describing how the recommendation engine works. And when we are searching for something, if I've already bought a tent, don't show me more tent, show me things that would go with it. What are some of those main challenges that Algolia solution just eliminates? >> Sure. So I think, one of the main challenges we have to focus on is, most of our customers are fighting against the big guides out there that have hundreds of engineers on staff, custom building a search solution. And our consumers expect that response. You expect the same search response that you get when you're streaming video content looking for a movie, from your big retailer shopping experiences. So what we want to provide is the ability to deliver that result with much less work and hassle and have it all show up. And we do that by really focusing on the results that the customers need and what that view needs to look like. We see a lot of our customers just experiencing a huge loss in revenue by only providing basic search. And because as Jason put it, search is so fundamental to the internet, we all think it's easy, we all think it's just basic. And when you provide basic, you don't get the shoes with the dress, you get just the text response results back. And so we want to make sure that we're providing that back to our customers. What we see average is even, and everybody's going mobile. A lot of times I know I do all my shopping on my phone a lot of the time, and 40%-50% better relevancy results for our customers for mobile users. That's a huge impact to their use case. >> That is huge. And when we talked about patients wearing quite thin the last couple of years. But we have this expectation in our consumer lives and in our business lives if we're looking for SaaS or software, or whatnot, that we're going to be able to find what we want that's relevant to what we're looking for. 
And you mentioned revenue impact, customer churn, brand reputation, those are all things that if search isn't done well, to your point, Daisy, if it's done in a basic fashion, those are some of the things that customers are going to experience. Jason, talk to us about why Algolia, what was it specifically about that technology that really led Apply Digital to say, "This is the right partner to help eliminate some of those challenges that our customers could face?" >> Sure. So I'm in the product world. So I have the wonderful advantage of not worrying about how something's built, that is left, unfortunately, to the poor, poor engineers that have to work with us, mad scientist, product people, who are like, "I want, make it do this. I don't know how, but make it do this." And one of the big things is, with Algolia is the lift to implement is really, really light. Working closely with our engineering team, and even with our customers/users and everything like that, you kind of alluded to it a little earlier, it's like, at the end of the day, if it's bad search, it's bad search. It just is. It's terrible. And people's attention span can now be measured in nanoseconds, but they don't care how it works, they just want it to work. I push a button, I want something to happen, period. There's an entire universe that is behind that button, and that's what Algolia has really focused on, that universe behind that button. So there's two ways that we use them, on a web experience, there's the embedded Search widget, which is really, really easy to implement, documentation, and I cannot speak high enough about documentation, is amazing. And then from the web aspect, I'm sorry, from the mobile aspect, it's very API fort. And any type of API implementation where you can customize the UI, which obviously you can imagine our clients are like, "No we want to have our own front end. We want to have our own custom experience." We use Algolia as that engine. Again, the documentation and the light lift of implementation is huge. That is a massive, massive bonus for why we partnered with them. Before product, I was an engineer a very long time ago. I've seen bad documentation. And it's like, (Lisa laughing) "I don't know how to imple-- I don't know what this is. I don't know how to implement this, I don't even know what I'm looking at." But with Algolia and everything, it's so simple. And I know I can just hear the Apply Digital technology team, just grinding sometimes, "Why is a product guy saying that (mumbles)? He should do it." But it is, it just the lift, it's the documentation, it's the support. And it's a full blown partnership. And that's why we went with it, and that's what we tell our clients. It's like, listen, this is why we chose Algolia, because eventually this experience we're creating for them is theirs, ultimately it's theirs. And then they are going to have to pick it up after a certain amount of time once it's theirs. And having that transition of, "Look this is how easy it is to implement, here is all the documentation, here's all the support that you get." It just makes that transition from us to them beautifully seamless. >> And that's huge. We often talk about hard metrics, but ease of use, ease of implementation, the documentation, the support, those are all absolutely business critical for the organization who's implementing the software, the fastest time to value they can get, can be table stakes, and it can be on also a massive competitive differentiator. 
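For a sense of what the "light lift" Jason describes typically looks like in code, here is a minimal query against an Algolia index using the JavaScript API client (v4-style). The application ID, API key, index name, and attributes are placeholders, and a production build would more likely use the embedded InstantSearch widgets mentioned above than raw queries like this.

```typescript
import algoliasearch from "algoliasearch/lite"; // search-only client

// Placeholder credentials: use a search-only API key, never an admin key, in client code.
const client = algoliasearch("YOUR_APP_ID", "YOUR_SEARCH_ONLY_API_KEY");
const index = client.initIndex("products"); // hypothetical index name

async function searchDresses(): Promise<void> {
  // Typo tolerance, ranking, and faceting are configured on the index itself;
  // the query simply asks for matching records.
  const { hits } = await index.search("summer dress", {
    hitsPerPage: 10,
    filters: "inStock:true", // assumes an inStock boolean attribute on each record
  });
  for (const hit of hits) {
    console.log((hit as { name?: string }).name);
  }
}

searchDresses().catch(console.error);
```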
Daisy, I want to go back to you in terms of hard numbers. Algolia has a recent Forrester Total Economic Impact, or TEI, study that really has some compelling stats. Can you share some of those insights with us? >> Yeah. Absolutely. I think this is one of the most fun numbers to share. We have a recent report that came out, and it shared that there's a 382% Return on Investment across three years by implementing Algolia. So that's increased revenue, increased conversion rate, increased time on your site - a 382% Return on Investment for the purchase. So we know our pricing's right, we know we're providing for our customers. We know that we're giving them the results that they need. I've been in the search industry for long enough to know that those are some amazing stats, and I'm really proud to work for them and be behind them. >> That can be transformative for a business. I think we've all had that experience of trying to search on a website and not finding anything of relevance. And sometimes I scratch my head, "Why is this experience still like this? If I could churn, I would." So having that ability to easily implement, have the documentation that makes sense, and get such high ROI in a short time period is hugely differentiated for businesses. And I think we all know, as Jason said, we measure response time in nanoseconds; that's how much patience and tolerance we all have, on the business side and on the consumer side. So having that, not just the fast search, but the contextual search, is table stakes for organizations these days. I'd love for you guys, and either one of you can take this, to share a customer example or two that really shows the value of the Algolia product, and then also maybe the partnership. >> So I'll go. We have a couple of partners in two vastly different industries, but both use Algolia as a solution for search. One of them is a, best way to put this, multinational biotech health company. We built for them this internal portal for all of their healthcare practitioners, their HCPs, so that they could access information, data, reports, wikis, the whole thing. And it's basically almost their version of Wikipedia, but it's all internal, and you can imagine the level of data security that it has to have, because this is biotech and healthcare. So we implemented Algolia as an internal search engine for them. And the three main reasons why we recommended Algolia, and why we implemented Algolia, were: one, HIPAA compliance. That's the first one - it's like, if that's a no, we're not playing. So HIPAA compliance, again, the ease of search, the whole contextual search, and then the recommendations and things like that. It wasn't just like a halfhearted implementation of an internal search engine to look for files; it is a full blown search engine, specifically for the data that they want. And I think we're averaging, if I remember the numbers correctly, north of 200,000 searches a month, just on this internal portal, specifically for the employees in their company. And it's amazing, it's absolutely amazing. And then conversely, we work with a pretty high level adventure clothing brand - standard, traditional e-commerce, stable mobile application, Lisa, what you were saying earlier. It's like, "I buy everything on my phone." And so that's what we did. We built and we support their mobile application. And for search, they wanted to do a couple of things, which was really interesting.
They wanted do traditional search, search catalog, search skews, recommendations, so forth and so on, but they also wanted to do a store finder, which was kind of interesting. So, we'd said, all right, we're going to be implementing Algolia because the lift is going to be so much easier than trying to do everything like that. And we did, and they're using it, and massively successful. They are so happy with it, where it's like, they've got this really contextual experience where it's like, I'm looking for a store near me. "Hey, I've been looking for these items. You know, I've been looking for this puffy vest, and I'm looking for a store near me." It's like, "Well, there's a store near me but it doesn't have it, but there's a store closer to me and it does have it." And all of that wraps around what it is. And all of it was, again, using Algolia, because like I said earlier, it's like, if I'm searching for something, I want it to be correct. And I don't just want it to be correct, I want it to be relevant. >> Lisa: Yes. >> And I want it to feel personalized. >> Yes. >> I'm asking to find something, give me something that I am looking for. So yeah. >> Yeah. That personalization and that relevance is critical. I keep saying that word "critical," I'm overusing it, but it is, we have that expectation that whether it's an internal portal, as you talked about Jason, or it's an adventure clothing brand, or a grocery store, or an e-commerce site, that what they're going to be showing me is exactly what I'm looking for, that magic behind there that's almost border lines on creepy, but we want it. We want it to be able to make our lives easier whether we are on the consumer side, whether we on the business side. And I do wonder what the Go To Market is. Daisy, can you talk a little bit about, where do customers go that are saying, "Oh, I need to Algolia, and I want to be able to do that." Now, what's the GTM between both of these companies? >> So where to find us, you can find us on AWS Marketplace which another favorite place. You can quickly click through and find, but you can connect us through Apply Digital as well. I think, we try to be pretty available and meet our customers where they are. So we're open to any options, and we love exploring with them. I think, what is fun and I'd love to talk about as well, in the customer cases, is not just the e-commerce space, but also the content space. We have a lot of content customers, things about news, organizations, things like that. And since that's a struggle to deliver results on, it's really a challenge. And also you want it to be relevant, so up-to-date content. So it's not just about e-commerce, it's about all of your solution overall, but we hope that you'll find us on AWS Marketplace or anywhere else. >> Got it. And that's a great point, that it's not just e-commerce, it's content. And that's really critical for some industry, businesses across industries. Jason and Daisy, thank you so much for joining me talking about Algolia, Apply Digital, what you guys are doing together, and the huge impact that you're making to the customer user experience that we all appreciate and know, and come to expect these days is going to be awesome. We appreciate your insights. >> Thank you. >> Thank you >> For Daisy and Jason, I'm Lisa Martin. You're watching "theCUBE," our "AWS Startup Showcase, MarTech Emerging Cloud-Scale Customer Experiences." Keep it right here on "theCUBE" for more great content. We're the leader in live tech coverage. (ending riff)

Published Date : Jun 29 2022


John Kim, Sendbird & Luiz Fernando Diniz, PicPay Social | AWS Startup Showcase S2 E3


 

>>Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase: marketing technology, emerging cloud scale customer experiences. This is season two, episode three of the ongoing series covering the exciting startups from the AWS ecosystem, to talk about all the top trends and also featuring the key customers. I'm your host, John Furrier. Today we're joined by Luiz Fernando Diniz, vice president of PicPay Social, and John Kim, the CEO of Sendbird, to learn about the future of what's going on in fostering deeper customer relationships. Gentlemen, thanks for joining us in the cube showcase. >>Excited to be here. >>So John, talk about Sendbird real quick, set the table for us. What you guys do, you got a customer here to highlight some of the key things you're doing with customers, the value proposition, what's Sendbird and what's the showcase about? >>Yeah, I'm really excited to be here. Uh, I'm John, founder and CEO of Sendbird. So Sendbird is the world's leading conversations platform for mobile applications. We can power user to user conversations in mobile applications, as well as the brand to user conversations such as marketing, sales and support. So, uh, today we power over a quarter billion users on a monthly basis. Uh, we have, you know, over 300 employees across seven different countries around the world. We work with some of the world's leading, uh, customers such as PicPay, that we are going to showcase today, along with other, uh, wonderful customers like DoorDash, Reddit, <inaudible> sports and so forth. We have collectively raised over 200 million in funding. Um, so that's kind of where we are today.
One of the most frequently used applications in the world today are messaging applications, across any country, any region, any culture. If you look at the most frequently used and longest used applications, they are usually some form of a messaging application. Now the end users or the customers in the world are so used to using, uh, such a, you know, frictionless, very responsive, modern experience on those messaging applications. What we want to help with is that the businesses around the world, 99.9% of the businesses around the world, don't have that real tech knowledge or user experience expertise in messaging. So we want to help our businesses, help our customers be able to harness the power of modern messaging capabilities and then be able to embed it in their own business, so that they can retain their users on their platform, engage with them in the context of what their business is about, so that they can not only, uh, control or provide a better user experience, but also be able to, uh, understand their users better, uh, understand what they're doing on their businesses, be able to own and, uh, control the data in a more secure and safe way. So really, uh, we're like the Robin Hood of the world, trying to give superpowers, yeah, back to the businesses. >>Yeah. Steal from the rich idea, the messaging scale. Bring that to everybody else. I love that. Uh, and you got kind of this double entendre: Robin Hood, kind of new for the new generation finance. This is about taking the advantage of scalable platforms, monopolies, right, and giving the entrepreneur an opportunity to have that same capability, feature rich. Luiz, PicPay, you guys use Sendbird together. You have to level up, you gotta compete with those big monopolies to provide scalable conversations. Okay. How did you engage this? What did your success path look like? >>Yeah. When we look to the majority of the bigger chat apps that we have nowadays in the market, Brazilians are using them in their daily course, but Brazilians are paying every day millions and millions of payments. And these chat apps are not, uh, able to deal with these payments. So what we are doing here is, uh, providing a solution where every conversation that is going to happen before, during, or after a payment between the people, they would, uh, have a nice platform that could afford all of their emotions and discussions that they have to do before or after the payment. So we are putting together the chat platform with the payment platform. So that's what we are doing now. >>Okay. So just so I get this right. You're using Sendbird, essentially integrated into your mobile payment experience. Okay. Which is your app - you're using Sendbird to bring that scalability into the social application, into the app itself. Is that right? >>Yes. Perfect. Integrated with the payment journey. So everybody who is going to pay, they need to find the one they want to pay, and then they can chat and conclude the payment through the platform. Yeah. >>I mean, why not have it right there at point of, uh, transaction. Right. Um, why did you, um, decide to, um, to use conversations in your mobile wallet? Just curious. >>So it's important to say that we were born social. We were born in 2012. So our main product was peer to peer payments, so everybody was sending money to a friend, requesting or charging their family, or a service provider.
And we started as a social platform; in that period, in that moment, we were just focusing on likes, comments, and public interactions, and the world became more private. And as soon as we understood this situation, we decided to move from a public feed to a private interaction. So that's when the conversational space was the solution for that - moving from a public interaction to a private interaction, between the peers which are involved in the transaction. So that's why we are providing the chat solution integrated with payments. >>That's a great call. John, just give some context here, again, for the folks watching: this is now expected, this integrated experience. What's your - how would you talk to folks out there? I mean, first of all, I see it clearly: you've got an app, you gotta have all this integration, and you need it to scale and be rich with features. Talk about your view on that. Is that what's happening here? What's the real dynamic here? What's the big trend? >>Yeah. One thing that's, uh, super interesting about, uh, like the messaging experience in general: if you think about any kind of conversations that are happening, uh, digitally between human beings, more and more conversations, just like what Luiz mentioned earlier, are happening in a private setting. Even on applications, whether it be Slack or other forms of communication, uh, more conversations happen through either one-on-one conversations or in private small group settings, because people feel more secure, uh, safe to have, uh, more intimate conversations. So even when you're making transactions, there's a higher trust, and, uh, people tend to engage, uh, far better on platforms through these kinds of private conversations. That's where we kind of come in, whether you want to set up one-on-one conversations or a group conversation. And then ultimately, if you want to take it public in a large group setting, you can also support, you know, thousands, if not, you know, hundreds of thousands of people, uh, engaging in a public forum as well. So all of those capabilities can be implemented using Sendbird. But again, the world right now - the businesses and how the users are interacting with each other - is all happening through digital conversations. And we're seeing more and more of that happening, uh, throughout the life cycle of our company. >>Yeah, just as a sidebar, I was just talking to a venture capitalist in San Francisco the other day, and we were talking about the future of security and SaaS and cloud scale. And, you know, the conversation went to more of, is it SaaS? Is it platform as a service? Luiz, I wanna get your thoughts, because, you know, you're seeing more and more needs for customization, low code, no code. You're seeing these trends. You gotta build in security. So, you know, the old SaaS model was software as a service, but now everything in the cloud is software as a service. So, but you need to have that platform kind of vibe for scale, customization, maybe some developer integration, cuz apps are becoming the touchpoint. So can you walk us through what your vision was when you decided to integrate chat into your app, and how did you see chat changing the customer experience for payments and across your user journey? Cause, I mean, it's obvious now looking at it, but it might not have been for some. What was your, what was your vision?
And when you had to do that? >>When you look at the Brazilian reality, you can see that payment apps are all focused on the transactional moment. As soon as we started to think about how our journey could be better and more pleasant than the others, and make people want to be here, to use and open our app every day, it came down to making the interaction with peers easier, whether that peer is a merchant or a friend. So the main point of our first step was just to connect all the users among themselves through payments. The second step, which we are providing now, is using the chat platform, the Sendbird platform, as a platform for PicPay. We are going to provide better information, and we are going to provide a better customer experience through support and everything else. This connection, this partnership with Sendbird, is going to unlock a new level of service for our users and, at the same time, a more pleasant journey while they are using the app, whether for a simple payment, a group objective, maybe crowdfunding in the future, or a group deciding to pay for something. We are unlocking a new level of interaction between the peers, the people who are involved in that payment or that simple transaction. We are making it more conversational. >>Yeah, you're making the application more valuable. We're going to get to that in the next segment about the future of apps. One and done: you see a lot of sports apps for a big tournament, and then you use it and never use it again until next year; very time-specific apps. Now you guys are smart to build this in. But I've got to ask you a question, because a lot of developers and companies out there always have this buy-versus-build decision. Why did you decide to use Sendbird versus building it in house? It's always the big trade-off. >>Yeah. First of all, it would take a long, long time for us to reach the maturity of a platform like Sendbird, and we are not a chat platform; we are going to use this social interaction to improve the payment platform that we have. When we looked at the market and found Sendbird, we thought, okay, these guys are a real platform, and through our conversations we saw that their roadmap works in synergy with ours, so we could start to deliver value to our users in the fastest way. Could you imagine spending two, three, four years to develop something like Sendbird? And even when we got to that point, our solution would probably be weaker than Sendbird's. So it was a no-brainer, because we want to improve the payment journey, not to build a chat-only platform. That's why we are working together. >>Really, you start to see these plugins; look at Stripe for payments, for instance, and the success they've had. People want to plug in for services. So John, I've got to ask you about the complexity that goes into it, the trust required that they have to place in you. You have to do the heavy lifting; you've got to provide the confidence that your service is going to scale, the compliance. Talk about that. What do you guys do under the covers that makes this easy? Again, great business model, heavy lifting done by you.
Seamless integration provides that value. That's why business is good, but there's a lot going on; share what's happening under the covers. >>Yeah. Before getting into the technical intricacy of what we do, a little background context on why we even started this business: this is my second startup. My first company was a gaming company, and we had built chat three or four times just for our own games, so we felt like we were reinventing the wheel. Then we actually went on a buyer's journey when we were building a social application for our own community; we tried to be a buyer, to see if we could find a solution we wanted to use. It turned out there weren't a lot of sophisticated, top-notch, modern chat experiences we could build using third-party solutions, so we had to build all of that ourselves, which became the foundation for Sendbird today. What we realized is that for most companies, building the most sophisticated chat is probably not going to be their highest priority. In the case of PicPay, that priority will be financial transactions and all the other business that can be built on and hosted by a platform like PicPay. Building the most top-notch chat experience would be a priority for a company like, say, WhatsApp or Telegram, but it will probably not be the priority for major gaming companies, food delivery companies, or finance companies; chat is not their highest priority. That's where we come in, because chat is the highest priority for us. We also have the privilege of working with some of the world's industry leaders, and from that collective experience we get technological superiority: we're able to scale to hundreds of millions of users on a monthly basis. We also get the security and compliance that comes from working with some of the largest commercial banks and some of the largest fintech applications across the globe. So the security, the compliance, all the industry best practices are built in, and all the new top-notch user experience we are building with other customers can also be utilized by a customer like PicPay. You get this collective, almost evolutionary benefit by working with a company like us. >>You get a lot of economies of scale. Would you mind sharing the URL for the company, so folks watching can go do a deep dive? Because you guys have a lot of certifications under the covers, a lot of things you do. Mind sharing the URL real quick? >>Yeah. You can find everything about our company at sendbird.com. Think carrier pigeon: you're sending a bird to send a message. So, sendbird.com. >>All right, so let's get to the application, because this is really interesting. Chat is table stakes now, but things are evolving beyond chat. You've got to integrate that user experience, it's data, and now you've got to have scale. People who want to roll their own chat will find out there are a lot of client-side and backend scale issues, right? You can have a tsunami of messages like on Twitch chat; you've got client-side issues, data scale, <laugh> right, and you've got the backend.
Luiz, talk about that dynamic, because as you start to scale, you want to be able to rely on that. Talk about how apps are now integrating all these new features. Are apps going to become more multifunctional? Do you see apps as one and done? How do you guys see this app world playing out, and where does Sendbird fit in? >>Just let me make sure I understand, John: about the performance, or about... >>Oh, let's go with performance. Performance is huge, right? Nobody wants lag on chat. >>Okay. So at PicPay, when we look at payments, we have millions of payments happening every second, and what we are doing now is moving the payments through a conversation, so the payment always happens inside the conversation. From the first moment, every second counts to convert that client, and from the first moment we never saw any issue with that on Sendbird. Even when we have a question or something we need to improve, the teams work together. Those are the points that are making us work together and making things move pretty fast. When we look at the users who use chat, their intention to transact is three times better than users who are not making payments through chat, and their average spend is three times higher too. They are making more connections, they are chatting with their friends, their friends are here, so the network effect is stronger. But if they are going to pay and they need to wait one or two more seconds to conclude the payment, they will probably not choose to pay through the chat again; they will use only the wallet, only the code, only the user's handle. So it is very important for us to perform really, really fast, and that is what we are finding with the Sendbird integration. >>And what's interesting is that the buy-versus-build conversation we had a minute ago kind of plays in here. You get the benefits of Sendbird, but now your transactional fidelity is in the chat <laugh> that you don't build, that you rely on them for. So again, that's an interesting dynamic. This is the future of apps, John, this is where it matters: the engagement. This is the new digital experience. Who would have thought that five, ten years ago? Chat was just, hey, what's going on, a direct message. Now it's an integral part of the app. What's your read? >>Yeah, we're seeing that across the board. To Luiz's point, it's not just transactions: marketing messages are now being sent through chat, so marketing is no longer just about pushing discount codes; you can actually re-engage with the brand. Support is also becoming more real time through chat, so you're actually building a relationship, and the support agents have better context about the previous conversations and transactions. Sales conversations, alerts, notifications, all of those things are now happening through conversations. And that's a better way for customers to engage with the brand, because you're building a better relationship and the customer can trust the brand more, because there is a channel for them to communicate and be seen and be heard by the brand.
So we do believe that's the future of the business and of how more and more brands will be building relationships with their customers. >>Yeah, I love your business model, I think it's really critical, and I think that stickiness is a real callout point there, along with the co-branding and branding capability. But really quickly, in the last minute we have, John and Luiz, if you don't mind talking about security: I can't go a day now without getting an SMS scam text, you're seeing it now on WhatsApp, and I don't even use Telegram anymore. This is now a problem; the old way has been infiltrated with spam and security issues. Security has to be there, the trust and security. Real quick, Luiz, go ahead. >>Just to say how important that is: we are not only a chat platform, we are a payment platform, so there is money in the transaction. Here in Brazil we have all the security layers we have built into our app, and then we have the security layers provided by Sendbird. When we look at the features, Sendbird provides a lot of features that help users feel safer, like verified profiles and announcements, where there is an official PicPay profile users can recognize: this is PicPay talking with me, not a user trying to use PicPay's name to talk with me. These issues are something we really care about here, because we are not only a chat platform; as I said before, we are a payment platform, a fintech, a digital bank. We need to take a lot of care, and we don't have any complaints, because Sendbird understood that and from the first moment provided the right solutions and the user interface to make it simple for users to recognize that it is PicPay chatting with them, not a user with bad intentions. >>Great insight, Luiz, thanks for sharing that. John, really appreciate you guys coming on, great showcase. Real final word, John, we'll give you the final word: for folks watching out there, how do they engage with Sendbird? I want to integrate, I want to use your chat service. What do I do? Do I connect to it as a managed service? Is it a line of code? What do I do to get Sendbird? >>Yeah. If you're a developer building a mobile application, simply come visit our website. We have open documentation and an SDK you can download and plug into your application, and you can have a chat experience up and running in a matter of minutes, if not hours, using our UI kit. We want to make it as easy as possible for all the builders in the world to harness the superpower of digital conversations. >>All right, great. Congratulations, John, on your success and all the growth, and Luiz, thanks for coming in and sharing the customer perspective and great insight. Thanks for coming on the showcase, really appreciate it. Thanks for your time. >>Yeah, thank you for having me. >>Okay, the AWS Startup Showcase, season two, episode three. I'm John Furrier, your host. Thanks for watching.
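To make the integration path John Kim describes a little more concrete, here is a minimal, hypothetical sketch of how a wallet app might attach a private chat channel to each payment through a managed chat provider's REST API. The endpoint paths, payload fields, and the `ChatClient` wrapper are illustrative assumptions rather than Sendbird's documented API; in practice the vendor SDK and UI kit would replace most of this plumbing.

```python
# Hypothetical sketch: attaching a chat channel to a payment flow via a
# managed chat provider's REST API. Endpoint paths, field names, and the
# ChatClient wrapper are illustrative assumptions, not a real vendor API.
import requests


class ChatClient:
    def __init__(self, app_id: str, api_token: str):
        # Base URL shape is an assumption for illustration only.
        self.base_url = f"https://api.example-chat.com/v3/apps/{app_id}"
        self.headers = {"Api-Token": api_token}

    def create_payment_channel(self, payer_id: str, payee_id: str, payment_id: str) -> dict:
        """Create a private 1:1 channel scoped to a single payment."""
        payload = {
            "user_ids": [payer_id, payee_id],
            "is_distinct": False,                  # one channel per payment
            "custom_type": "payment",
            "data": {"payment_id": payment_id},
        }
        resp = requests.post(f"{self.base_url}/channels", json=payload, headers=self.headers)
        resp.raise_for_status()
        return resp.json()

    def send_payment_message(self, channel_url: str, sender_id: str, amount: str) -> dict:
        """Post a structured 'payment requested' message into the channel."""
        payload = {
            "user_id": sender_id,
            "message": f"Payment request: {amount}",
            "custom_type": "payment_request",
        }
        resp = requests.post(f"{self.base_url}/channels/{channel_url}/messages",
                             json=payload, headers=self.headers)
        resp.raise_for_status()
        return resp.json()


# Usage: wire the chat step into the existing payment journey.
# client = ChatClient(app_id="YOUR_APP_ID", api_token="YOUR_TOKEN")
# channel = client.create_payment_channel("payer_123", "payee_456", "pay_789")
# client.send_payment_message(channel["channel_url"], "payer_123", "R$ 50,00")
```

The point of the design is that the payment object and the conversation share an identifier, so support, receipts, and follow-up messages all land in the same thread.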

Published Date : Jun 29 2022

Tim Barnes, AWS | AWS Startup Showcase S2 E3


 

(upbeat music) >> Hello, everyone, welcome to theCUBE's presentation of the AWS Startup Showcase. We're in Season two, Episode three, and this is the topic of MarTech and the Emerging Cloud-Scale Customer Experiences, the ongoing coverage of AWS's ecosystem of large scale growth and new companies and growing companies. I'm your host, John Furrier. We're excited to have Tim Barnes, Global Director, General Manager of Advertiser and Marketing at AWS here doing the keynote cloud-scale customer experience. Tim, thanks for coming on. >> Oh, great to be here and thank you for having me. >> You've seen many cycles of innovation, certainly in the ad tech platform space around data, serving consumers and a lot of big, big scale advertisers over the years as the Web 1.0, 2.0, now 3.0 coming, cloud-scale, roll of data, all big conversations changing the game. We see things like cookies going away. What does this all mean? Silos, walled gardens, a lot of new things are impacting the applications and expectations of consumers, which is also impacting the folks trying to reach the consumers. And this is kind of creating a kind of a current situation, which is challenging, but also an opportunity. Can you share your perspective of what this current situation is, as the emerging MarTech landscape emerges? >> Yeah, sure, John, it's funny in this industry, the only constant has changed and it's an ever-changing industry and never more so than right now. I mean, we're seeing with whether it's the rise of privacy legislation or just breach of security of data or changes in how the top tech providers and browser controllers are changing their process for reaching customers. This is an inflection point in the history of both ad tech and MarTech. You hit the nail on the head with cookie deprecation, with Apple removing IDFA, changes to browsers, et cetera, we're at an interesting point. And by the way, we're also seeing an explosion of content sources and ability to reach customers that's unmatched in the history of advertising. So those two things are somewhat at odds. So whether we see the rise of connected television or digital out of home, you mentioned Web 3.0 and the opportunities that may present in metaverse, et cetera, it's an explosion of opportunity, but how do we continue to connect brands with customers and do so in a privacy compliant way? And that's really the big challenge we're facing. One of the things that I see is the rise of modeling or machine learning as a mechanism to help remove some of these barriers. If you think about the idea of one-to-one targeting, well, that's going to be less and less possible as we progress. So how am I still as a brand advertiser or as a targeted advertiser, how am I going to still reach the right audience with the right message in a world where I don't necessarily know who they are. And modeling is a really key way of achieving that goal and we're seeing that across a number of different angles. >> We've always talked about on the ad tech business for years, it's the behemoth of contextual and behavioral, those dynamics. And if you look at the content side of the business, you have now this new, massive source of new sources, blogging has been around for a long time, you got video, you got newsletters, you got all kinds of people, self-publishing, that's been around for a while, right? So you're seeing all these new sources. Trust is a big factor, but everyone wants to control their data. 
So this walled garden perpetuation of value, I got to control my data, but machine learning works best when you expose data, so this is kind of a paradox. Can you talk about the current challenge here and how to overcome it because you can't fight fashion, as they say, and we see people kind of going down this road as saying, data's a competitive advantage, but I got to figure out a way to keep it, own it, but also share it for the machine learning. What's your take on that? >> Yeah, I think first and foremost, if I may, I would just start with, it's super important to make that connection with the consumer in the first place. So you hit the nail on the head for advertisers and marketers today, the importance of gaining first party access to your customer and with permission and consent is paramount. And so just how you establish that connection point with trust and with very clear directive on how you're going to use the data has never been more important. So I would start there if I was a brand advertiser or a marketer, trying to figure out how I'm going to better connect with my consumers and get more first party data that I could leverage. So that's just building the scale of first party data to enable you to actually perform some of the types of approaches we'll discuss. The second thing I would say is that increasingly, the challenge exists with the exchange of the data itself. So if I'm a data control, if I own a set of first party data that I have consent with consumers to use, and I'm passing that data over to a third party, and that data is leaked, I'm still responsible for that data. Or if somebody wants to opt out of a communication and that opt out signal doesn't flow to the third party, I'm still liable, or at least from the consumer's perspective, I've provided a poor customer experience. And that's where we see the rise of the next generation, I call it of data clean rooms, the approaches that you're seeing, a number of customers take in terms of how they connect data without actually moving the data between two sources. And we're seeing that as certainly a mechanism by which you can preserve accessibility data, we call that federated data exchange or federated data clean rooms and I think you're seeing that from a number of different parties in the industry. >> That's awesome, I want to get into the data interoperability because we have a lot of startups presenting in this episode around that area, but why I got you here, you mentioned data clean room. Could you define for us, what is a federated data clean room, what is that about? >> Yeah, I would simply describe it as zero data movement in a privacy and secure environment. To be a little bit more explicit and detailed, it really is the idea that if I'm a party A and I want to exchange data with party B, how can I run a query for analytics or other purposes without actually moving data anywhere? Can I run a query that has accessibility to both parties, that has the security and the levels of aggregation that both parties agree to and then run the query and get those results sets back in a way that it actually facilitates business between the two parties. And we're seeing that expand with partners like Snowflake and InfoSum, even within Amazon itself, AWS, we have data sharing capabilities within Redshift and some of our other data-led capabilities. 
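A minimal sketch of the zero-data-movement pattern Tim describes might look like the following: each party keeps its raw records local, matching happens on hashed identifiers rather than raw PII, and only aggregates above an agreed threshold ever leave the clean-room boundary. The table layouts, the threshold value, and the in-memory data structures here are assumptions for illustration; real deployments push this logic down into the data-sharing services Tim mentions.

```python
# Illustrative sketch of a federated clean-room query: raw records stay with
# each party, the join key is a hashed identifier, and only thresholded
# aggregates cross the boundary. Field names and threshold are assumptions.
import hashlib
from collections import defaultdict

K_THRESHOLD = 50  # both parties agree: never release groups smaller than this


def hashed_key(email: str) -> str:
    """Normalize and hash an identifier so neither party sees raw PII."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()


def clean_room_overlap(marketer_rows, publisher_rows):
    """Return audience overlap counts by segment, suppressing small cells."""
    marketer_keys = {hashed_key(r["email"]): r["segment"] for r in marketer_rows}
    counts = defaultdict(int)
    for row in publisher_rows:
        key = hashed_key(row["email"])
        if key in marketer_keys:
            counts[(marketer_keys[key], row["content_category"])] += 1
    # Only aggregated, thresholded results leave the clean-room boundary.
    return {group: n for group, n in counts.items() if n >= K_THRESHOLD}
```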
And we're just seeing explosion of demand and need for customers to be able to share data, but do it in a way where they still control the data and don't ever hand it over to a third party for execution. >> So if I understand this correctly, this is kind of an evolution to kind of take away the middleman, if you will, between parties that used to be historically the case, is that right? >> Yeah, I'd say this, the middleman still exists in many cases. If you think about joining two parties' data together, you still have the problem of the match key. How do I make sure that I get the broadest set of data to match up with the broadest set of data on the other side? So we have a number of partners that provide these types of services from LiveRamp, TransUnion, Experian, et cetera. So there's still a place for that so-called middleman in terms of helping to facilitate the transaction, but as a clean room itself, I think that term is becoming outdated in terms of a physical third party location, where you push data for analysis, that's controlled by a third party. >> Yeah, great clarification there. I want to get into this data interoperability because the benefits of AWS and cloud scales we've seen over the past decade and looking forward is, it's an API based economy. So APIs and microservices, cloud native stuff is going to be the key to integration. And so connecting people together is kind of what we're seeing as the trend. People are connecting their data, they're sharing code in open source. So there's an opportunity to connect the ecosystem of companies out there with their data. Can you share your view on this interoperability trend, why it's important and what's the impact to customers who want to go down this either automated or programmatic connection oriented way of connecting data. >> Never more important than it has been right now. I mean, if you think about the way we transact it and still too today do to a certain extent through cookie swaps and all sorts of crazy exchanges of data, those are going away at some point in the future; it could be a year from now, it could be later, but they're going away. And I think that that puts a great amount of pressure on the broad ecosystem of customers who transact for marketers, on behalf of marketers, both for advertising and marketing. And so data interoperability to me is how we think about providing that transactional layer between multiple parties so that they can continue to transact in a way that's meaningful and seamless, and frankly at lower cost and at greater scale than we've done in the past with less complexity. And so, we're seeing a number of changes in that regard, whether that's data sharing and data clean rooms or federated clean rooms, as we described earlier, whether that's the rise of next generation identity solutions, for example, the UID 2.0 Consortium, which is an effort to use hashed email addresses and other forms of identifiers to facilitate data exchange for the programmatic ecosystem. These are sort of evolutions based on this notion that the old world is going away, the new world is coming, and part of that is how do we connect data sources in a more seamless and frankly, efficient manner. >> It's almost interesting, it's almost flipped upside down, you had this walled garden mentality, I got to control my data, but now I have data interoperability. So you got to own and collect the data, but also share it. 
This is going to kind of change the paradigm around my identity platforms, attributions, audience, as audiences move around, and with cookies going away, this is going to require a new abstraction, a new way to do it. So you mentioned some of those standards. Is there a path in this evolution that changes it for the better? What's your view on this? What do you see happening? What's going to come out of this new wave? >> Yeah, my father was always fond of telling me, "The customer, my customers is my customer." And I like to put myself in the shoes of the Marc Pritchards of the world at Procter & Gamble and think, what do they want? And frankly, their requirements for data and for marketing have not changed over the last 20 years. It's, I want to reach the right customer at the right time, with the right message and I want to be able to measure it. In other words, summarizing, I want omnichannel execution with omnichannel measurement, and that's become increasingly difficult as you highlighted with the rise of the walled gardens and increasingly data living in silos. And so I think it's important that we, as an industry start to think about what's in the best interest of the one customer who brings virtually 100% of the dollars to this marketplace, which is the CMO and the CMO office. And how do we think about returning value to them in a way that is meaningful and actually drives its industry forward. And I think that's where the data operability piece becomes really important. How do we think about connecting the omnichannel channels of execution? How do we connect that with partners who run attribution offerings with machine learning or partners who provide augmentation or enrichment data such as third party data providers, or even connecting the buy side with the sell side in a more efficient manner? How do I make that connection between the CMO and the publisher in a more efficient and effective way? And these are all challenges facing us today. And I think at the foundational layer of that is how do we think about first of all, what data does the marketer have, what is the first party data? How do we help them ethically source and collect more of that data with proper consent? And then how do we help them join that data into a variety of data sources in a way that they can gain value from it. And that's where machine learning really comes into play. So whether that's the notion of audience expansion, whether that's looking for some sort of cohort analysis that helps with contextual advertising, whether that's the notion of a more of a modeled approach to attribution versus a one-to-one approach, all of those things I think are in play, as we think about returning value back to that customer of our customer. >> That's interesting, you broke down the customer needs in three areas; CMO office and staff, partners ISV software developers, and then third party services. Kind of all different needs, if you will, kind of tiered, kind of at the center of that's the user, the consumer who have the expectations. So it's interesting, you have the stakeholders, you laid out kind of those three areas as to customers, but the end user, the consumer, they have a preference, they kind of don't want to be locked into one thing. They want to move around, they want to download apps, they want to play on Reddit, they want to be on LinkedIn, they want to be all over the place, they don't want to get locked in. So you have now kind of this high velocity user behavior. 
How do you see that factoring in, because with cookies going away and kind of the convergence of offline-online, really becoming predominant, how do you know someone's paying attention to what and when attention and reputation. All these things seem complex. How do you make sense of it? >> Yeah, it's a great question. I think that the consumer as you said, finds a creepiness factor with a message that follows them around their various sources of engagement with content. So I think at first and foremost, there's the recognition by the brand that we need to be a little bit more thoughtful about how we interact with our customer and how we build that trust and that relationship with the customer. And that all starts with of course, opt-in process consent management center but it also includes how we communicate with them. What message are we actually putting in front of them? Is it meaningful, is it impactful? Does it drive value for the customer? I think we've seen a lot of studies, I won't recite them that state that most consumers do find value in targeted messaging, but I think they want it done correctly and there in lies the problem. So what does that mean by channel, especially when we lose the ability to look at that consumer interaction across those channels. And I think that's where we have to be a little bit more thoughtful with frankly, kind of going back to the beginning with contextual advertising, with advertising that perhaps has meaning, or has empathy with the consumer, perhaps resonates with the consumer in a different way than just a targeted message. And we're seeing that trend, we're seeing that trend both in television, connected television as those converge, but also as we see about connectivity with gaming and other sort of more nuanced channels. The other thing I would say is, I think there's a movement towards less interruptive advertising as well, which kind of removes a little bit of those barriers for the consumer and the brand to interact. And whether that be dynamic product placement, content optimization, or whether that be sponsorship type opportunities within digital. I think we're seeing an increased movement towards those types of executions, which I think will also provide value to both parties. >> Yeah, I think you nailed it there. I totally agree with you on the contextual targeting, I think that's a huge deal and that's proven over the years of providing benefit. People, they're trying to find what they're looking for, whether it's data to consume or a solution they want to buy. So I think that all kind of ties together. The question is these three stakeholders, the CMO office and staff you mentioned, and the software developers, apps, or walled gardens, and then like ad servers as they come together, have to have standards. And so, I think to me, I'm trying to squint through all the movement and the shifting plates that are going on in the industry and trying to figure out where are the dots connecting? And you've seen many cycles of innovation at the end of the day, it comes down to who can perform best for the end user, as well as the marketers and advertisers, so that balance. What's your view on this shift? It's going to land somewhere, it has to land in the right area, and the market's very efficient. I mean, this ad market's very efficient. 
>> Yeah, I mean, in some way, so from a standards perspective, I support and we interact extensively with the IB and other industry associations on privacy enhancing technologies and how we think about these next generations of connection points or identifiers to connect with consumers. But I'd say this, with respect to the CMO, and I mentioned the publisher earlier, I think over the last 10 years with the rise of programmatic, certainly we saw the power reside mostly with the CMO who was able to amass a large pool of cookies or purchase a large sort of cohort of customers with cookie based attributes and then execute against that. And so almost a blind fashion to the publisher, the publisher was sort of left to say, "Hey, here's an opportunity, do you want to buy it or not?" With no real reason why the marketer might be buying that customer? And I think that we're seeing a shift backwards towards the publisher and perhaps a healthy balance between the two. And so, I do believe that over time, that we're going to see publishers provide a lot more, what I might almost describe as mini walled gardens. So the ability, great publisher or a set of publishers to create a cohort of customers that can be targeted through programmatic or perhaps through programmatic guaranteed in a way that it's a balance between the two. And frankly thinking about that notion of federated data clean rooms, you can see an approach where publishers are able to share their first party data with a marketer's first party data, without either party feeling like they're giving up something or passing all their value over to the other. And I do believe we're going to see some significant technology changes over the next three to four years. That really rely on that interplay between the marketer and the publisher in a way that it helps both sides achieve their goals, and that is, increasing value back to the publisher in terms of higher CPMs, and of course, better reach and frequency controls for the marketer. >> I think you really brought up a big point there we can maybe follow up on, but I think this idea of publishers getting more control and power and value is an example of the market filling a void and the power log at the long tail, it's kind of a straight line. Then it's got the niche kind of communities, it's growing in the middle there, and I think the middle of the torso of that power law is the publishers because they have all the technology to measure the journeys and the click throughs and all this traffic going on their platform, but they just need to connect to someone else. >> Correct. >> That brings in the interoperability. So, as a publisher ourselves, we see that long tail getting really kind of fat in the middle where new brands are going to emerge, if they have audience. I mean, some podcasts have millions of users and some blogs are attracting massive audience, niche audiences that are growing. >> I would say, just look at the rise of what we might not have considered publishers in the past, but are certainly growing as publishers today. Customers like Instacart or Uber who are creating ad platforms or gaming, which of course has been an ad supported platform for some time, but is growing immensely. Retail as a platform, of course, amazon.com being one of the biggest retail platforms with advertising supported models, but we're seeing that growth across the board for retail customers. And I think that again, there's never been more opportunities to reach customers. 
We just have to do it the right way, in the way that it's not offensive to customers, not creepy, if you want to call it that, and also maximizes value for both parties and that be both the buy and the sell side. >> Yeah, everyone's a publisher and everyone's a media company. Everyone has their own news network, everyone has their own retail, it's a completely new world. Tim, thanks for coming on and sharing your perspective and insights on this key note, Tim Barnes, Global Director, General Manager of Advertiser and Market at AWS here with the Episode three of Season two of the AWS Startup Showcase. I'm John Furrier, thanks for watching. (upbeat music)
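One way to picture the modeling Tim mentioned earlier, audience expansion in particular, is a simple propensity model trained on a consented first-party seed audience and then used to score a broader prospect pool. The file names, feature columns, and scikit-learn setup below are assumptions; this is a sketch of the general approach, not any specific vendor's implementation.

```python
# Minimal audience-expansion ("lookalike") sketch: train on a consented
# first-party seed audience, then score a broader pool of prospects.
# Feature names and file paths are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# seed_audience.csv: consented first-party users labeled 1 (converted) or 0
seed = pd.read_csv("seed_audience.csv")
features = ["sessions_30d", "avg_order_value", "content_category_affinity"]

X_train, X_test, y_train, y_test = train_test_split(
    seed[features], seed["converted"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score the prospect pool and keep the highest-propensity users as the
# expanded audience; no one-to-one identity match is required downstream.
prospects = pd.read_csv("prospect_pool.csv")
prospects["propensity"] = model.predict_proba(prospects[features])[:, 1]
expanded_audience = prospects.nlargest(100_000, "propensity")
```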

Published Date : Jun 29 2022

Jay Marshall, Neural Magic | AWS Startup Showcase S3E1


 

(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." 
This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, again, who've gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-$100 billion parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you just were on a customer call. I know you got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning, people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What are the scale? Can you scope the kind of use cases and problems that are emerging that people are calling you for? >> Absolutely, so I think if I break it down to kind of, like, your startup, or I maybe call 'em AI native to kind of steal from cloud native years ago, that group, it's pretty much, you know, part and parcel for how that group already runs. So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to expensive proprietary hardware accelerators to have to run them. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, AI curious, right? Like, they want to do it, but they don't really know where to start, and so for there, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about do I have a hardware accelerator available? How do I integrate that into my application stack? 
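The sparsification Jay describes is easiest to see with unstructured magnitude pruning, where the smallest weights in each layer are zeroed out. The sketch below uses PyTorch's built-in pruning utilities on a toy stand-in model; it is a simplified illustration of the core idea, since production pipelines like the ones described here interleave pruning with retraining and quantization to hold accuracy.

```python
# Simplified sketch of unstructured magnitude pruning: zero out the smallest
# weights in each linear layer, then check the resulting sparsity. Real
# sparsification pipelines retrain between pruning steps to recover accuracy.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(  # stand-in for a much larger transformer
    nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768)
)

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Remove 90% of weights by magnitude (L1 criterion).
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # make the zeros permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.1%}")  # roughly 90% of weights are now zero
```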
If I don't already know how to build this into my infrastructure, does my ITOps teams, do they know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born in AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there a existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff? 'Cause we're a lot of people take some of these tools and apply it to either existing stuff almost, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, and then starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think that probably, where I'd probably pull back to kind of allow kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they kind of been in this, right? They're, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now. Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. 
And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting and pre-processing, you've got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're AI native yet, as you're making your way on that journey, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming: data meshes were talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, your customers and the people that are prospects for you guys are probably asking a lot of questions, because the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model, now I want to really optimize that model, and then on the runtime side, when you want to deploy it, we run that optimized model. And so that's where we're able to provide value. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, partners, which are always kind of key to this. So you are spot on. I think with what's happened here, they talk about a hockey stick; this is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native, and that's definitely where the partner ecosystem comes in so hard, 'cause there just isn't anybody or any team out there that literally goes from "Here's my blank database" to "I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model-to-delivery piece. >> Well, you guys are obviously a featured company in this space. Talk about the expertise. A lot of companies are, I won't say faking it till they make it, but you can't really fake security and you can't really fake AI, right? So there's going to be a learning curve. There'll be a few startups who'll come out of the gate early, and you guys are one of 'em. Talk about what you guys have as expertise as a company, why you're successful, and what problems you solve for customers. >> No, appreciate that. Yeah, we actually, we love to tell the story of our founder, Nir Shavit.
So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s, and the impetus for this whole technology, has a great talk on YouTube about it, where he talks about the fact that his work there, he kind of realized that the way neural networks encode and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So we're on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So our team, which one of the co-founders, also an ex-MIT, that was kind of the birth of why can't we leverage this super-performance CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So it is a lot of amazing, like, talks and stuff that show kind of the magic, if you will, a part of the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So for a one-to-one perspective, two-to-one, business leaders usually like that math, right? So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So we're trying to do, I need to just dumb it down to better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview, delivers ML models through the software so the hardware allows for a decoupling, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, it's also probably from a deployment standpoint it must be easier. Can you share the benefits? Is it a cost side? Is it more of a deployment? What are the benefits of the DeepSparse when you guys decouple the software from the hardware on the ML models? >> No you actually, you hit 'em both 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app and a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. 
So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features. So when you think about that kind of a world where you have everything from real-time inferencing to kind of after hours batch processing inferencing, the fact that you can auto scale that hardware up and down and it's CPU based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost, and again, many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect, Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely. So I think in general, any neural network, you know, the process I mentioned before called sparsification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparsified. So I think if it's a deep-learning neural network type model, if you're trying to get AI into production, you have cost concerns, even performance-wise. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network, it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale performant deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned about YOLO, the YOLO family of computer vision models, or natural language processing models like BERT.
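Tying back to the containerized deployment described above (ECS or EKS with auto-scaling CPU nodes), a client call to such a service might look like the sketch below; the host, port, and endpoint path are assumptions for illustration rather than a documented interface.

```python
# Hypothetical client call to a DeepSparse runtime running as a container
# behind a service endpoint. Host, port, and path are illustrative assumptions.
import requests

response = requests.post(
    "http://inference.internal:5543/predict",   # assumed service endpoint
    json={"sequences": ["Is this workload latency-sensitive or batch?"]},
    timeout=5,
)
response.raise_for_status()
print(response.json())
```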
If you have a data science team or even developers, some even regular, I used to call myself a nine to five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There's developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety, that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute. I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumni, and I say to my team, "Let's AI-ify, sparsify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea, or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use, or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model, would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. Transfer learning is a very popular method of doing training with existing models. So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass, you're not going to ask it those kinds of questions like ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the AI bot, you know, from our previous guests. >> Well, and I will tell you using that as an example.
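As a rough illustration of the pre-processing path described a moment ago, here is a small sketch that turns raw interview transcripts into a (text, label) dataset a BERT-style model could be fine-tuned on; the directory layout and the "unlabeled" placeholder label are made up for illustration.

```python
# Sketch: converting raw transcript files into a CSV a BERT-style
# fine-tuning pipeline could consume. Paths and labels are hypothetical.
import csv
from pathlib import Path

def transcripts_to_dataset(transcript_dir: str, out_csv: str) -> None:
    rows = []
    for path in Path(transcript_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        # Split on the ">>" speaker marker used in these transcripts
        for chunk in (seg.strip() for seg in text.split(">>") if seg.strip()):
            rows.append({"text": chunk, "label": "unlabeled"})  # labels assigned later
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["text", "label"])
        writer.writeheader()
        writer.writerows(rows)

transcripts_to_dataset("transcripts/", "train.csv")
```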
So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparsify that over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting. And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And so again, that paradigm, although for some folks seems obvious, again, if you've been around for 20 years, that whole all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparsified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had, last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in like, you know, the vein of like an AWS, like main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure. But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variants are very compelling, both cost performance-wise and also obviously with Edge. And wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got to work and, you know, it's a hard problem to solve 'cause the instruction set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec.
But working really hard, the engineering team's been at it and we are happy to announce here at the "AWS Startup Showcase," that DeepSparse inference now has, or inference runtime now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM, and that obviously also opens up the door to Edge and further out the stack, so that "optimize once, run anywhere" we're now going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AI ops now with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much. So yeah, join us at neuralmagic.com, you know, part of what we didn't spend a lot of time here, our optimization tools, we are doing all of that in the open source. It's called SparseML and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS marketplace. So push button, deploy, come try us out and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development at Neural Magic here, talking about performant, cost effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)

Published Date : Mar 9 2023


Luis Ceze & Anna Connolly, OctoML | AWS Startup Showcase S3 E1


 

(soft music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. AI and Machine Learning: Top Startups Building Foundational Model Infrastructure. This is season 3, episode 1 of the ongoing series covering the exciting stuff from the AWS ecosystem, talking about machine learning and AI. I'm your host, John Furrier and today we are excited to be joined by Luis Ceze who's the CEO of OctoML and Anna Connolly, VP of customer success and experience OctoML. Great to have you on again, Luis. Anna, thanks for coming on. Appreciate it. >> Thank you, John. It's great to be here. >> Thanks for having us. >> I love the company. We had a CUBE conversation about this. You guys are really addressing how to run foundational models faster for less. And this is like the key theme. But before we get into it, this is a hot trend, but let's explain what you guys do. Can you set the narrative of what the company's about, why it was founded, what's your North Star and your mission? >> Yeah, so John, our mission is to make AI sustainable and accessible for everyone. And what we offer customers is, you know, a way of taking their models into production in the most efficient way possible by automating the process of getting a model and optimizing it for a variety of hardware and making cost-effective. So better, faster, cheaper model deployment. >> You know, the big trend here is AI. Everyone's seeing the ChatGPT, kind of the shot heard around the world. The BingAI and this fiasco and the ongoing experimentation. People are into it, and I think the business impact is clear. I haven't seen this in all of my career in the technology industry of this kind of inflection point. And every senior leader I talk to is rethinking about how to rebuild their business with AI because now the large language models have come in, these foundational models are here, they can see value in their data. This is a 10 year journey in the big data world. Now it's impacting that, and everyone's rebuilding their company around this idea of being AI first 'cause they see ways to eliminate things and make things more efficient. And so now they telling 'em to go do it. And they're like, what do we do? So what do you guys think? Can you explain what is this wave of AI and why is it happening, why now, and what should people pay attention to? What does it mean to them? >> Yeah, I mean, it's pretty clear by now that AI can do amazing things that captures people's imaginations. And also now can show things that are really impactful in businesses, right? So what people have the opportunity to do today is to either train their own model that adds value to their business or find open models out there that can do very valuable things to them. So the next step really is how do you take that model and put it into production in a cost-effective way so that the business can actually get value out of it, right? >> Anna, what's your take? Because customers are there, you're there to make 'em successful, you got the new secret weapon for their business. >> Yeah, I think we just see a lot of companies struggle to get from a trained model into a model that is deployed in a cost-effective way that actually makes sense for the application they're building. I think that's a huge challenge we see today, kind of across the board across all of our customers. >> Well, I see this, everyone asking the same question. I have data, I want to get value out of it. I got to get these big models, I got to train it. What's it going to cost? 
So I think there's a reality of, okay, I got to do it. Then no one has any visibility on what it costs. When they get into it, this is going to break the bank. So I have to ask you guys, the cost of training these models is on everyone's mind. OctoML, your company's focus on the cost side of it as well as the efficiency side of running these models in production. Why are the production costs such a concern and where specifically are people looking at it and why did it get here? >> Yeah, so training costs get a lot of attention because normally a large number, but we shouldn't forget that it's a large, typically one time upfront cost that customers pay. But, you know, when the model is put into production, the cost grows directly with model usage and you actually want your model to be used because it's adding value, right? So, you know, the question that a customer faces is, you know, they have a model, they have a trained model and now what? So how much would it cost to run in production, right? And now without the big wave in generative AI, which rightfully is getting a lot of attention because of the amazing things that it can do. It's important for us to keep in mind that generative AI models like ChatGPT are huge, expensive energy hogs. They cost a lot to run, right? And given that model usage growth directly, model cost grows directly with usage, what you want to do is make sure that once you put a model into production, you have the best cost structure possible so that you're not surprised when it's gets popular, right? So let me give you an example. So if you have a model that costs, say 1 to $2 million to train, but then it costs about one to two cents per session to use it, right? So if you have a million active users, even if they use just once a day, it's 10 to $20,000 a day to operate that model in production. And that very, very quickly, you know, get beyond what you paid to train it. >> Anna, these aren't small numbers, and it's cost to train and cost to operate, it kind of reminds me of when the cloud came around and the data center versus cloud options. Like, wait a minute, one, it costs a ton of cash to deploy, and then running it. This is kind of a similar dynamic. What are you seeing? >> Yeah, absolutely. I think we are going to see increasingly the cost and production outpacing the costs and training by a lot. I mean, people talk about training costs now because that's what they're confronting now because people are so focused on getting models performant enough to even use in an application. And now that we have them and they're that capable, we're really going to start to see production costs go up a lot. >> Yeah, Luis, if you don't mind, I know this might be a little bit of a tangent, but, you know, training's super important. I get that. That's what people are doing now, but then there's the deployment side of production. Where do people get caught up and miss the boat or misconfigure? What's the gotcha? Where's the trip wire or so to speak? Where do people mess up on the cost side? What do they do? Is it they don't think about it, they tie it to proprietary hardware? What's the issue? >> Yeah, several things, right? So without getting really technical, which, you know, I might get into, you know, you have to understand relationship between performance, you know, both in terms of latency and throughput and cost, right? So reducing latency is important because you improve responsiveness of the model. 
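Stepping back to the serving-cost example a moment ago, the arithmetic is worth writing out; a quick sketch using the figures quoted in the conversation (a one-time training run of roughly $1M to $2M, one to two cents per session, and a million users at one session a day):

```python
# Training vs. serving cost, using the rough figures quoted above.
training_cost = 1_500_000        # one-time, roughly $1M-$2M
cost_per_session = 0.015         # roughly one to two cents
daily_active_users = 1_000_000
sessions_per_user_per_day = 1

daily_serving_cost = daily_active_users * sessions_per_user_per_day * cost_per_session
print(f"Serving cost: ${daily_serving_cost:,.0f} per day")   # ~$10k-$20k per day
print(f"Serving passes the training bill in ~{training_cost / daily_serving_cost:.0f} days")
```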
But it's really important to keep in mind that it often leads diminishing returns. Below a certain latency, making it faster won't make a measurable difference in experience, but it's going to cost a lot more. So understanding that is important. Now, if you care more about throughputs, which is the time it takes for you to, you know, units per period of time, you care about time to solution, we should think about this throughput per dollar. And understand what you want is the highest throughput per dollar, which may come at the cost of higher latency, which you're not going to care about, right? So, and the reality here, John, is that, you know, humans and especially folks in this space want to have the latest and greatest hardware. And often they commit a lot of money to get access to them and have to commit upfront before they understand the needs that their models have, right? So common mistake here, one is not spending time to understand what you really need, and then two, over-committing and using more hardware than you actually need. And not giving yourself enough freedom to get your workload to move around to the more cost-effective choice, right? So this is just a metaphoric choice. And then another thing that's important here too is making a model run faster on the hardware directly translates to lower cost, right? So, but it takes a lot of engineers, you need to think of ways of producing very efficient versions of your model for the target hardware that you're going to use. >> Anna, what's the customer angle here? Because price performance has been around for a long time, people get that, but now latency and throughput, that's key because we're starting to see this in apps. I mean, there's an end user piece. I even seeing it on the infrastructure side where they're taking a heavy lifting away from operational costs. So you got, you know, application specific to the user and/or top of the stack, and then you got actually being used in operations where they want both. >> Yeah, absolutely. Maybe I can illustrate this with a quick story with the customer that we had recently been working with. So this customer is planning to run kind of a transformer based model for tech generation at super high scale on Nvidia T4 GPU, so kind of a commodity GPU. And the scale was so high that they would've been paying hundreds of thousands of dollars in cloud costs per year just to serve this model alone. You know, one of many models in their application stack. So we worked with this team to optimize our model and then benchmark across several possible targets. So that matching the hardware that Luis was just talking about, including the newer kind of Nvidia A10 GPUs. And what they found during this process was pretty interesting. First, the team was able to shave a quarter of their spend just by using better optimization techniques on the T4, the older hardware. But actually moving to a newer GPU would allow them to serve this model in a sub two milliseconds latency, so super fast, which was able to unlock an entirely new kind of user experience. So they were able to kind of change the value they're delivering in their application just because they were able to move to this new hardware easily. So they ultimately decided to plan their deployment on the more expensive A10 because of this, but because of the hardware specific optimizations that we helped them with, they managed to even, you know, bring costs down from what they had originally planned. 
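The "throughput per dollar" framing, and the T4-versus-A10 trade-off in the story above, can be made concrete with a tiny comparison; the prices and throughput numbers below are illustrative placeholders, not benchmark results.

```python
# Comparing deployment options by throughput per dollar.
# All numbers are illustrative placeholders, not real benchmarks.
options = {
    "T4 (older GPU)":  {"price_per_hour": 0.53, "requests_per_hour": 400_000},
    "A10 (newer GPU)": {"price_per_hour": 1.01, "requests_per_hour": 1_100_000},
}

for name, o in options.items():
    per_dollar = o["requests_per_hour"] / o["price_per_hour"]
    print(f"{name}: {per_dollar:,.0f} requests per dollar")
```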
And so if you extend this kind of example to everything that's happening with generative AI, I think the story we just talked about was super relevant, but the scale can be even higher, you know, it can be tenfold that. We were recently conducting kind of this internal study using GPT-J as a proxy to illustrate the experience of just a company trying to use one of these large language models with an example scenario of creating a chatbot to help job seekers prepare for interviews. So if you imagine kind of a conservative usage scenario where the model generates just 3000 words per user per day, which is, you know, pretty conservative for how people are interacting with these models. It costs 5 cents a session and if you're a company and your app goes viral, so from, you know, beginning of the year there's nobody, at the end of the year there's a million daily active active users in that year alone, going from zero to a million. You'll be spending about $6 million a year, which is pretty unmanageable. That's crazy, right? >> Yeah. >> For a company or a product that's just launching. So I think, you know, for us we see the real way to make these kind of advancements accessible and sustainable, as we said is to bring down cost to serve using these techniques. >> That's a great story and I think that illustrates this idea that deployment cost can vary from situation to situation, from model to model and that the efficiency is so strong with this new wave, it eliminates heavy lifting, creates more efficiency, automates intellect. I mean, this is the trend, this is radical, this is going to increase. So the cost could go from nominal to millions, literally, potentially. So, this is what customers are doing. Yeah, that's a great story. What makes sense on a financial, is there a cost of ownership? Is there a pattern for best practice for training? What do you guys advise cuz this is a lot of time and money involved in all potential, you know, good scenarios of upside. But you can get over your skis as they say, and be successful and be out of business if you don't manage it. I mean, that's what people are talking about, right? >> Yeah, absolutely. I think, you know, we see kind of three main vectors to reduce cost. I think one is make your deployment process easier overall, so that your engineering effort to even get your app running goes down. Two, would be get more from the compute you're already paying for, you're already paying, you know, for your instances in the cloud, but can you do more with that? And then three would be shop around for lower cost hardware to match your use case. So on the first one, I think making the deployment easier overall, there's a lot of manual work that goes into benchmarking, optimizing and packaging models for deployment. And because the performance of machine learning models can be really hardware dependent, you have to go through this process for each target you want to consider running your model on. And this is hard, you know, we see that every day. But for teams who want to incorporate some of these large language models into their applications, it might be desirable because licensing a model from a large vendor like OpenAI can leave you, you know, over provision, kind of paying for capabilities you don't need in your application or can lock you into them and you lose flexibility. So we have a customer whose team actually prepares models for deployment in a SaaS application that many of us use every day. 
And they told us recently that without kind of an automated benchmarking and experimentation platform, they were spending several days each to benchmark a single model on a single hardware type. So this is really, you know, manually intensive and then getting more from the compute you're already paying for. We do see customers who leave money on the table by running models that haven't been optimized specifically for the hardware target they're using, like Luis was mentioning. And for some teams they just don't have the time to go through an optimization process and for others they might lack kind of specialized expertise and this is something we can bring. And then on shopping around for different hardware types, we really see a huge variation in model performance across hardware, not just CPU vs. GPU, which is, you know, what people normally think of. But across CPU vendors themselves, high memory instances and across cloud providers even. So the best strategy here is for teams to really be able to, we say, look before you leap by running real world benchmarking and not just simulations or predictions to find the best software, hardware combination for their workload. >> Yeah. You guys sound like you have a very impressive customer base deploying large language models. Where would you categorize your current customer base? And as you look out, as you guys are growing, you have new customers coming in, take me through the progression. Take me through the profile of some of your customers you have now, size, are they hyperscalers, are they big app folks, are they kicking the tires? And then as people are out there scratching heads, I got to get in this game, what's their psychology like? Are they coming in with specific problems or do they have specific orientation point of view about what they want to do? Can you share some data around what you're seeing? >> Yeah, I think, you know, we have customers that kind of range across the spectrum of sophistication from teams that basically don't have MLOps expertise in their company at all. And so they're really looking for us to kind of give a full service, how should I do everything from, you know, optimization, find the hardware, prepare for deployment. And then we have teams that, you know, maybe already have their serving and hosting infrastructure up and ready and they already have models in production and they're really just looking to, you know, take the extra juice out of the hardware and just do really specific on that optimization piece. I think one place where we're doing a lot more work now is kind of in the developer tooling, you know, model selection space. And that's kind of an area that we're creating more tools for, particularly within the PyTorch ecosystem to bring kind of this power earlier in the development cycle so that as people are grabbing a model off the shelf, they can, you know, see how it might perform and use that to inform their development process. >> Luis, what's the big, I like this idea of picking the models because isn't that like going to the market and picking the best model for your data? It's like, you know, it's like, isn't there a certain approaches? What's your view on this? 'Cause this is where everyone, I think it's going to be a land rush for this and I want to get your thoughts. >> For sure, yeah. 
So, you know, I guess I'll start with saying the one main takeaway that we got from the GPT-J study is that, you know, having a different understanding of what your model's compute and memory requirements are, very quickly, early on helps with the much smarter AI model deployments, right? So, and in fact, you know, Anna just touched on this, but I want to, you know, make sure that it's clear that OctoML is putting that power into user's hands right now. So in partnership with AWS, we are launching this new PyTorch native profiler that allows you with a single, you know, one line, you know, code decorator allows you to see how your code runs on a variety of different hardware after accelerations. So it gives you very clear, you know, data on how you should think about your model deployments. And this ties back to choices of models. So like, if you have a set of choices that are equally good of models in terms of functionality and you want to understand after acceleration how are you going to deploy, how much they're going to cost or what are the options using a automated process of making a decision is really, really useful. And in fact, so I think these events can get early access to this by signing up for the Octopods, you know, this is exclusive group for insiders here, so you can go to OctoML.ai/pods to sign up. >> So that Octopod, is that a program? What is that, is that access to code? Is that a beta, what is that? Explain, take a minute and explain Octopod. >> I think the Octopod would be a group of people who is interested in experiencing this functionality. So it is the friends and users of OctoML that would be the Octopod. And then yes, after you sign up, we would provide you essentially the tool in code form for you to try out in your own. I mean, part of the benefit of this is that it happens in your own local environment and you're in control of everything kind of within the workflow that developers are already using to create and begin putting these models into their applications. So it would all be within your control. >> Got it. I think the big question I have for you is when do you, when does that one of your customers know they need to call you? What's their environment look like? What are they struggling with? What are the conversations they might be having on their side of the fence? If anyone's watching this, they're like, "Hey, you know what, I've got my team, we have a lot of data. Do we have our own language model or do I use someone else's?" There's a lot of this, I will say discovery going on around what to do, what path to take, what does that customer look like, if someone's listening, when do they know to call you guys, OctoML? >> Well, I mean the most obvious one is that you have a significant spend on AI/ML, come and talk to us, you know, putting AIML into production. So that's the clear one. In fact, just this morning I was talking to someone who is in life sciences space and is having, you know, 15 to $20 million a year cloud related to AI/ML deployment is a clear, it's a pretty clear match right there, right? So that's on the cost side. But I also want to emphasize something that Anna said earlier that, you know, the hardware and software complexity involved in putting model into production is really high. So we've been able to abstract that away, offering a clean automation flow enables one, to experiment early on, you know, how models would run and get them to production. 
And then two, once they are into production, gives you an automated flow to continuously updating your model and taking advantage of all this acceleration and ability to run the model on the right hardware. So anyways, let's say one then is cost, you know, you have significant cost and then two, you have an automation needs. And Anna please compliment that. >> Yeah, Anna you can please- >> Yeah, I think that's exactly right. Maybe the other time is when you are expecting a big scale up in serving your application, right? You're launching a new feature, you expect to get a lot of usage or, and you want to kind of anticipate maybe your CTO, your CIO, whoever pays your cloud bills is going to come after you, right? And so they want to know, you know, what's the return on putting this model essentially into my application stack? Am I going to, is the usage going to match what I'm paying for it? And then you can understand that. >> So you guys have a lot of the early adopters, they got big data teams, they're pushed in the production, they want to get a little QA, test the waters, understand, use your technology to figure it out. Is there any cases where people have gone into production, they have to pull it out? It's like the old lemon laws with your car, you buy a car and oh my god, it's not the way I wanted it. I mean, I can imagine the early people through the wall, so to speak, in the wave here are going to be bloody in the sense that they've gone in and tried stuff and get stuck with huge bills. Are you seeing that? Are people pulling stuff out of production and redeploying? Or I can imagine that if I had a bad deployment, I'd want to refactor that or actually replatform that. Do you see that too? >> Definitely after a sticker shock, yes, your customers will come and make sure that, you know, the sticker shock won't happen again. >> Yeah. >> But then there's another more thorough aspect here that I think we likely touched on, be worth elaborating a bit more is just how are you going to scale in a way that's feasible depending on the allocation that you get, right? So as we mentioned several times here, you know, model deployment is so hardware dependent and so complex that you tend to get a model for a hardware choice and then you want to scale that specific type of instance. But what if, when you want to scale because suddenly luckily got popular and, you know, you want to scale it up and then you don't have that instance anymore. So how do you live with whatever you have at that moment is something that we see customers needing as well. You know, so in fact, ideally what we want is customers to not think about what kind of specific instances they want. What they want is to know what their models need. Say, they know the SLA and then find a set of hybrid targets and instances that hit the SLA whenever they're also scaling, they're going to scale with more freedom, right? Instead of having to wait for AWS to give them more specific allocation for a specific instance. What if you could live with other types of hardware and scale up in a more free way, right? So that's another thing that we see customers, you know, like they need more freedom to be able to scale with whatever is available. >> Anna, you touched on this with the business model impact to that 6 million cost, if that goes out of control, there's a business model aspect and there's a technical operation aspect to the cost side too. You want to be mindful of riding the wave in a good way, but not getting over your skis. 
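Picking up Luis's point about knowing the SLA and then scaling with whatever instances are available, a simple sketch of that selection logic could look like this; the instance names, latencies, and prices are invented for illustration.

```python
# Picking the cheapest hardware option that still meets a latency SLA.
# Instance names, latencies, and prices are invented for illustration.
benchmarks = [
    {"instance": "c6i.4xlarge", "p95_latency_ms": 38, "price_per_hour": 0.68},
    {"instance": "g5.xlarge",   "p95_latency_ms": 9,  "price_per_hour": 1.01},
    {"instance": "c7g.4xlarge", "p95_latency_ms": 33, "price_per_hour": 0.58},
]

SLA_MS = 40
eligible = [b for b in benchmarks if b["p95_latency_ms"] <= SLA_MS]
best = min(eligible, key=lambda b: b["price_per_hour"])
print(f"Cheapest option meeting the {SLA_MS} ms SLA: {best['instance']}")
```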
So that brings up the point around, you know, confidence, right? And teamwork. Because if you're in production, there's probably a team behind it. Talk about the team aspect of your customers. I mean, they're dedicated, they go put stuff into production, they're developers, there're data. What's in it for them? Are they getting better, are they in the beach, you know, reading the book. Are they, you know, are there easy street for them? What's the customer benefit to the teams? >> Yeah, absolutely. With just a few clicks of a button, you're in production, right? That's the dream. So yeah, I mean I think that, you know, we illustrated it before a little bit. I think the automated kind of benchmarking and optimization process, like when you think about the effort it takes to get that data by hand, which is what people are doing today, they just don't do it. So they're making decisions without the best information because it's, you know, there just isn't the bandwidth to get the information that they need to make the best decision and then know exactly how to deploy it. So I think it's actually bringing kind of a new insight and capability to these teams that they didn't have before. And then maybe another aspect on the team side is that it's making the hand-off of the models from the data science teams to the model deployment teams more seamless. So we have, you know, we have seen in the past that this kind of transition point is the place where there are a lot of hiccups, right? The data science team will give a model to the production team and it'll be too slow for the application or it'll be too expensive to run and it has to go back and be changed and kind of this loop. And so, you know, with the PyTorch profiler that Luis was talking about, and then also, you know, the other ways we do optimization that kind of prevents that hand-off problem from happening. >> Luis and Anna, you guys have a great company. Final couple minutes left. Talk about the company, the people there, what's the culture like, you know, if Intel has Moore's law, which is, you know, doubling the performance in few years, what's the culture like there? Is it, you know, more throughput, better pricing? Explain what's going on with the company and put a plug in. Luis, we'll start with you. >> Yeah, absolutely. I'm extremely proud of the team that we built here. You know, we have a people first culture, you know, very, very collaborative and folks, we all have a shared mission here of making AI more accessible and sustainable. We have a very diverse team in terms of backgrounds and life stories, you know, to do what we do here, we need a team that has expertise in software engineering, in machine learning, in computer architecture. Even though we don't build chips, we need to understand how they work, right? So, and then, you know, the fact that we have this, this very really, really varied set of backgrounds makes the environment, you know, it's say very exciting to learn more about, you know, assistance end-to-end. But also makes it for a very interesting, you know, work environment, right? So people have different backgrounds, different stories. Some of them went to grad school, others, you know, were in intelligence agencies and now are working here, you know. So we have a really interesting set of people and, you know, life is too short not to work with interesting humans. You know, that's something that I like to think about, you know. 
>> I'm sure your off-site meetings are a lot of fun, people talking about computer architectures, silicon advances, the next GPU, the big data models coming in. Anna, what's your take? What's the culture like? What's the company vibe and what are you guys looking to do? What's the customer success pattern? What's up? >> Yeah, absolutely. I mean, I, you know, second all of the great things that Luis just said about the team. I think one that I, an additional one that I'd really like to underscore is kind of this customer obsession, to use a term you all know well. And focus on the end users and really making the experiences that we're bringing to our user who are developers really, you know, useful and valuable for them. And so I think, you know, all of these tools that we're trying to put in the hands of users, the industry and the market is changing so rapidly that our products across the board, you know, all of the companies that, you know, are part of the showcase today, we're all evolving them so quickly and we can only do that kind of really hand in glove with our users. So that would be another thing I'd emphasize. >> I think the change dynamic, the power dynamics of this industry is just the beginning. I'm very bullish that this is going to be probably one of the biggest inflection points in history of the computer industry because of all the dynamics of the confluence of all the forces, which you mentioned some of them, I mean PC, you know, interoperability within internetworking and you got, you know, the web and then mobile. Now we have this, I mean, I wouldn't even put social media even in the close to this. Like, this is like, changes user experience, changes infrastructure. There's going to be massive accelerations in performance on the hardware side from AWS's of the world and cloud and you got the edge and more data. This is really what big data was going to look like. This is the beginning. Final question, what do you guys see going forward in the future? >> Well, it's undeniable that machine learning and AI models are becoming an integral part of an interesting application today, right? So, and the clear trends here are, you know, more and more competitional needs for these models because they're only getting more and more powerful. And then two, you know, seeing the complexity of the infrastructure where they run, you know, just considering the cloud, there's like a wide variety of choices there, right? So being able to live with that and making the most out of it in a way that does not require, you know, an impossible to find team is something that's pretty clear. So the need for automation, abstracting with the complexity is definitely here. And we are seeing this, you know, trends are that you also see models starting to move to the edge as well. So it's clear that we're seeing, we are going to live in a world where there's no large models living in the cloud. And then, you know, edge models that talk to these models in the cloud to form, you know, an end-to-end truly intelligent application. >> Anna? >> Yeah, I think, you know, our, Luis said it at the beginning. Our vision is to make AI sustainable and accessible. And I think as this technology just expands in every company and every team, that's going to happen kind of on its own. And we're here to help support that. And I think you can't do that without tools like those like OctoML. 
>> I think it's going to be an era of massive invention, creativity, a lot of the format heavy lifting is going to allow the talented people to automate their intellect. I mean, this is really kind of what we see going on. And Luis, thank you so much. Anna, thanks for coming on this segment. Thanks for coming on theCUBE and being part of the AWS Startup Showcase. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Mar 9 2023


Ez Natarajan & Brad Winney | AWS re:Invent 2022 - Global Startup Program


 

(upbeat music) >> Hi everybody. Welcome back to theCUBE as we continue our coverage here at AWS re:Invent '22. We're in the Venetian. Out in Las Vegas, it is Wednesday. And the PaaS is still happening. I can guarantee you that. We continue our series of discussions as part of the "AWS Startup Showcase". This is the "Global Startup Program", a part of that showcase. And I'm joined by two gentlemen today who are going to talk about what CoreStack is up to. One of them is Ez Natarajan, who is the Founder and CEO. Good to have you- (simultaneous chatter) with us today. We appreciate it. Thanks, EZ. >> Nice to meet you, John. >> And Brad Winney who is the area Sales Leader for startups at AWS. Brad, good to see you. >> Good to see you, John. >> Thanks for joining us here on The Showcase. So Ez, first off, let's just talk about CoreStack a little bit for people at home who might not be familiar with what you do. It's all about obviously data, governance, giving people peace of mind, but much deeper than that. I'll let you take it from there. >> So CoreStack is a governance platform that helps customers maximize their cloud usage and get governance at scale. When we talk about governance, we instill confidence through three layers: solving the problems of the CIO, solving the problems of the CTO, solving the problems of the CFO, together with a single pane of glass,- >> John: Mm-hmm. >> which helps them achieve continuous holistic automated outcomes at any given time. >> John: Mm-hmm. So, Brad, follow up on that a little bit- >> Yeah. because Ez touched on it there that he's got a lot of stakeholders- >> Right. >> with a lot of different needs and a lot of different demands- >> Mm-hmm. >> but the same overriding emotion, right? >> Yeah. >> They all want confidence. >> They all want confidence. And one of the trickiest parts of confidence is the governance issue, which is policy. It's how do we determine who has access to what, and how we do that at scale, not only at the start but as an ongoing process. This is a huge concern, especially as we talked a lot about cutting costs as the overriding driver for 2023. >> John: Mm-hmm. >> The economic compression being what it is, you still have to do this in a secure way and in as riskless a way as possible. And so companies like CoreStack really offer core, no pun intended, (Ez laughs) function there where you abstract out a lot of the complexity of governance and you make governance a much more simple process. And that's why we're big fans of what they do. >> So we think governance from a three dimensional standpoint, right? (speaks faintly) How do we help customers be more compliant, secure, achieve the best performance and operations with increased availability? >> John: Mm-hmm. >> At the same time do the right spend from a cost standpoint. >> Interviewer: Mm-hmm. So when all three dimensions are connected, the business velocity increases and the customer's ability to cater to their customers increases. So our governance tenets come from these three pillars of finance operations, security operations, and cloud operations. >> Yeah. And... Yeah. Please, go ahead. >> Can I (indistinct)? >> Oh, I'm sorry. Just- >> No, that's fine. >> So part of what's going on here, which is critical for AWS, is if you notice a lot of (indistinct) language is at the business value with key stakeholders of the CTO, the CSO and so on. And we're doing a much better job of speaking business value on top of AWS services.
But the AWS partners, again, like CoreStack have such great expertise- >> John: Mm-hmm. >> in that level of dialogue. That's why it's such a key part for us, why we're really interested partnering with them. >> How do you wrestle with this, wrestle may not be the right word, but because you do have, as we just went through these litany, these business parts of your business or a business that need access- >> Ez: Mm-hmm. >> and that you need to have policies in place, but they change, right? I mean, and somebody maybe from the financial side should have a window into data and other slices of their business. There's a lot of internal auditing. >> Man: Mm-hmm. >> Obviously, it's got to be done, right? And so just talk about that process a little bit. How you identify the appropriate avenues or the appropriate gateways for people to- >> Sure. >> access data so that you can have that confidence as a CTO or CSO, that it's all right. And we're not going to let too much- >> out to the wrong people. >> Sure. >> Yeah. So there are two dimensions that drive the businesses to look for that kind of confidence building exercise, right? One, there are regulatory external requirements that say that I know if I'm in the financial industry, I maybe need to following NIST, PCI, and sort of compliances. Or if I'm in the healthcare industry, maybe HIPAA and related compliance, I need to follow. >> John: Mm-hmm. >> That's an external pressure. Internally, the organizations based on their geographical presence and the kind of partners and customers they cater to, they may have their own standards. And when they start adopting cloud; A, for each service, how do I make sure the service is secure and it operates at the best level so that we don't violate any of the internal or external requirements. At the same time, we get the outcome that is needed. And that is driven into policies, that is driven into standards which are consumable easily, like AWS offers well-architected framework that helps customers make sure that I know I'm architecting my application workloads in a way that meets the business demands. >> John: Mm-hmm. >> And what CoreStack has done is taken that and automated it in such a way it helps the customers simplify that process to get that outcome measured easily so they get that confidence to consume more of the higher order services. >> John: Okay. And I'm wondering about your relationship as far with AWS goes, because, to me, it's like going deep sea fishing and all of a sudden you get this big 4, 500 pound fish. Like, now what? >> Mm-hmm. >> Now what do we do because we got what we wanted? So, talk about the "Now what?" with AWS in terms of that relationship, what they're helping you with, and the kind of services that you're seeking from them as well. >> Oh, thanks to Brad and the entire Global Startup Ecosystem team at AWS. And we have been part of AWS Ecosystem at various levels, starting from Marketplace to ISV Accelerate to APN Partners, Cloud Management Tools Competency Partner, Co-Sell programs. The team provides different leverages to connect to the entire ecosystem of how AWS gets consumed by the customers. Customers may come through channels and partners. And these channels and partners maybe from WAs to MSPs to SIs to how they really want to use each. >> John: Mm-hmm. >> And the ecosystem that AWS provides helps us feed into all these players and provide this higher order capability which instills confidence to the customers end of the day. >> Man: Absolutely. Right. 
>> And this can be taken through an MSP. This can be taken through a GSI. This can be taken to the customer through a VAR. And that's how we play our expansion into the larger AWS customer base. >> Brad: Yeah. >> Brad, from your side of the fence. >> Brad: No, it's... This is where the economies of scale come to benefit our partners. And AWS has easily the largest ecosystem. >> John: Mm-hmm. >> Whether or not it's partners, customers, and the like. And so... And then, all the respective teams and programs bring all those resources to bear for startups. Your analogy of catching a big fish off the coast, I actually have a house in Florida. I spend a lot of time there. >> Interviewer: Okay. >> I've yet to catch a big 500 pound fish. But... (interviewer laughs) >> But they're out there. >> But they're definitely out there. >> Yeah. >> And so, in addition to the formalized programs like the Global Partner Network Program, the APN and Marketplace, we really break our activities down with the CoreStacks of the world into two major kinds of processes: "Sell to" and "Sell with". And when we say "Sell to", what we're really doing is helping them architect for the future. And so, that pays dividends for their customers. So what do we mean by that? We mean helping them take advantage of all the latest serverless technologies: the latest chipsets like Graviton, things like that. So that has the added benefit of just lowering the overall cost of deployment and expense. And that's... And we focus on that really extensively. So we don't ever want to lose that part of the picture of what we do. >> Mm-hmm. >> And the "Sell with" is what he just mentioned, which is, our teams out in the field complement these programs like APN and Marketplace with person-to-person relationship development for core key opportunities in things like FinTech and Retail and so on. >> Interviewer: Mm-hmm. >> We have significant industry groups and business units- >> Interviewer: Mm-hmm. >> in the enterprise level that our teams work with day in and day out to help foster those relationships. And to help CoreStack continue to develop and grow that business. >> Yeah. We've talked a lot about cost, right? >> Yeah. >> But there's a difference between reducing costs and optimizing your spend, right? I mean there- >> Brad: Right. >> Right. There's a... They're very different prisms. So in terms of optimizing and what you're doing in the data governance world, what kind of conversations, discussions are you having with your clients? And how is that relationship with AWS allowing you to go with confidence into those discussions and be able to sell optimization of how they're going to spend maybe more money than they had planned on originally? >> So today, because of the external macro-market conditions, every single customer that we talk to is wanting to take a posture assessment of, "Hey, where are we today? How are we using the cloud? Are we in an optimized state?" >> Interviewer: Mm-hmm. >> And when it comes to optimization, again, the larger customers that we talk to are really bothered about the business outcome and how their services and ability to cater to their customers, right? >> Interviewer: Mm-hmm. >> They don't want to compromise on that just because they want to optimize on the spend. That conversation trickled down to taking a posture assessment first, and then are you using the right set of services within AWS? Are the right set of services being optimized for various requirements? >> Interviewer: Mm-hmm.
>> And AWS helps in terms of catering to the segment of customers who need that kind of a play through the partner ecosystem. >> John: Mm-hmm. Yeah. We've talked a lot about confidence too, cloud with confidence. >> Brad: Yeah. Yeah. >> What does that mean to different people, you think? I mean, (Brad laughing) because don't you have to feel them out and say "Okay. What's kind of your tolerance level for certain, not risks, but certain measures that you might need to change"? >> I actually think it's flipped the other way around now. I think the risk factor- >> Okay. >> is more on your on-prem environment. And all that goes with that. 'Cause you... Because the development of the cloud in the last 15 years has been profound. It's gone from... That's been the risky proposition now. With all of the infrastructure, all the security and compliance guardrails we have built into the cloud, it's really more about transition and risk of transition. And that's what we see a lot of. And that's why, again, where governance comes into play here, which is how do I move my business from on-prem, in a fairly insecure environment relatively speaking, to the secure cloud? >> Interviewer: Sure. >> How do I do that without disrupting business? How do I do that without putting my business at risk? And that's a key piece. I want to come back, if I may, to something on cost-cutting. >> Interviewer: Sure. >> We were talking about this on the way up here. Cost-cutting, it's the bonfire of the vanities in that everybody is talking about cost-cutting. And so in doing that we're perpetuating the very problem that we kind of want to avoid, which is our big cost-cutting. (laughs) So... And I say that because in the venture capital community, what's happening is two things: One is, everybody's being asked to extend their runways as much as possible, but they are not letting them off the hook on growth. And so what we're seeing a lot of is a more nuanced conversation of where you trim your costs, the non-essential spend, but reinvest. Especially if you've got good strong product market fit, reinvest that for growth. And so that's... So if I think about our playbook for 2023, it's to help good strong startups. Either tune their market fit or, now that they do have good market fit, really run and develop their business. So growth is not off the hook for 2023.
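(To make the "posture assessment" idea concrete: the snippet below is an illustrative starting point only, not how CoreStack or AWS implements it. It pulls monthly unblended cost grouped by service from the Cost Explorer API, which is often the first cut of a spend-posture review; the date range is arbitrary and chosen just for the example.)

```python
import boto3

def spend_by_service(start, end):
    """Sum unblended cost per AWS service between two dates (YYYY-MM-DD)."""
    ce = boto3.client("ce")  # Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    totals = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return totals

if __name__ == "__main__":
    # Arbitrary one-month window chosen purely for the example.
    for service, amount in sorted(spend_by_service("2022-11-01", "2022-12-01").items(),
                                  key=lambda kv: kv[1], reverse=True):
        print(f"{service:45s} ${amount:,.2f}")
```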
And how do we solve it together so that like the reimbursement that happens, in fact healthcare customers that we repeatedly talk to, even in the current market conditions, they don't want to save. They want to optimize and re-spend their savings using more cloud. >> Interviewer: Mm-hmm. >> So that's the partnership that is mutually enriching. >> Absolutely. >> Yeah. To me, this is easy. I think the reason why a lot of us are here at AWS, especially the startup world, is that our business interests are completely aligned. So I run a pretty significant business unit in a startup neighbor. But a good part of my job and my team's job is to go help cut costs. >> Interviewer: Mm-hmm. >> So tell me... Show me a revenue responsibility position where part of your job is to go cut cost. >> Interviewer: Right. >> It's so unique and we're not a non-profit. We just have a very good long-term view, right? Which is, if we help companies reduce costs and conserve capital and really make sure that that capital is being used the right way, then their long-term viability comes into play. And that's where we have a chance to win more of that business over time. >> Interviewer: Mm-hmm. >> And so because those business interests are very congruent and we come in, we earn so much trust in the process. But I think that... That's why I think we being AWS, are uniquely successful startups. Our business interests are completely aligned and there's a lot of trust for that. >> It's a great success story. It really is. And thank you for sharing your little slice of that and growing slice of that too- >> Yeah. Absolutely. >> from all appearances. Thank you both. >> Thank you, John. >> Thank you very much, John. >> Appreciate your time. >> This is part of the AWS Startup Showcase. And I'm John Walls. You're watching theCUBE here at AWS re:Invent '22. And theCUBE, of course, the leader in high tech coverage.

Published Date : Nov 30 2022

Denise Hayman, Sonrai Security | AWS re:Inforce 2022


 

(bright music) >> Welcome back everyone to the live Cube coverage here in Boston, Massachusetts for AWS re:Inforce 22, with a great guest here, Denise Hayman, CRO, Chief Revenue Officer of Sonrai Security. Sonrai's a featured partner of Season Two, Episode Four of the upcoming AWS Startup Showcase, coming in late August, early September. Security-themed, startup-focused event, check it out. awsstartups.com is the site. We're on Season Two. A lot of great startups, go check them out. Sonrai's in there, now for the second time. Denise, it's great to see you. Thanks for coming on. >> Ah, thanks for having me. >> So you've been around the industry for a while. You've seen the waves of innovation. We heard encrypt everything today on the keynote. We heard a lot of cloud native. They didn't say shift left but they said don't bolt on security after the fact, be in the CI/CD pipeline or the DevStream. All that's kind of top of line, Amazon's talking cloud native all the time. This is kind of what you guys are in the middle of. I've covered your company, you've been on theCUBE before. Your, not you, but your teammates have. You guys have a unique value proposition. Take a minute to explain for the folks that don't know, we'll dig into it, but what you guys are doing. Why you're winning. What's the value proposition. >> Yeah, absolutely. So, Sonrai is, I mean what we do is it's, we're a total cloud solution, right. Obviously, right, this is what everybody says. But what we're dealing with is really, our superpower has to do with the data and identity pieces within that framework. And we're tying together all the relationships across the cloud, right. And this is a unique thing because customers are really talking to us about being able to protect their sensitive data, protect their identities. And not just people identities but the non-people identity piece is the hardest thing for them to rein in. >> Yeah. >> So, that's really what we specialize in. >> And you guys are doing good, and some good reports on good sales, and good meetings happening here. Here at the show, the big theme to me, and again, listening to the keynotes, you hear, you can see what was and wasn't talked about. >> Mm-hmm. >> Ransomware wasn't talked about much. They didn't talk about air-gapped. They mentioned ransomware I think once. You know normal stuff, teamwork, encryption everywhere. But identity was sprinkled in everywhere. >> Mm-hmm. >> And I think one of the, my favorite quotes was, I wrote it down, "Weave security into the development cycle, CI/CD," they didn't say shift left. Don't bolt on any of that. Now, that's not new information. We know that don't bolt, >> Right. >> has been around for a while. He said, lessons learned, this is Stephen Schmidt, who's the CSO, top dog on security, who has access to what and why; over-permissive environments create chaos. >> Absolutely. >> This is what you guys rein in. >> It is. >> Explain, explain that. >> Yeah, I mean, we just did a survey actually with AWS and Forrester around what are all the issues in this area that, that customers are concerned about and, and clouds in particular. One of the things that came out of it is like 95% of clouds are, what's called, over-privileged. Which means that there's access running amok, right. I mean, it, it is, is a crazy thing. And if you think about the, the whole value proposition of security it's to protect sensitive data, right. So if, if it's permissive out there and then sensitive data isn't being protected, I mean that, that's where we really rein it in.
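(A rough way to see the "access running amok" problem Denise is describing in your own account: the sketch below is illustrative only and is not Sonrai's product. It counts IAM users, which are mostly human, against IAM roles, which are mostly non-human, and flags roles with the AdministratorAccess managed policy attached as one crude over-privilege signal. It ignores resource policies, permission boundaries, and cross-account access, which is where much of the real complexity lives.)

```python
import boto3

iam = boto3.client("iam")

def human_vs_machine_identities():
    """Count IAM users (mostly human) versus IAM roles (mostly non-human)."""
    users = [u["UserName"]
             for page in iam.get_paginator("list_users").paginate()
             for u in page["Users"]]
    roles = [r["RoleName"]
             for page in iam.get_paginator("list_roles").paginate()
             for r in page["Roles"]]
    return users, roles

def roles_with_admin(roles):
    """Flag roles with the AdministratorAccess managed policy attached."""
    flagged = []
    for role in roles:
        attached = iam.list_attached_role_policies(RoleName=role)["AttachedPolicies"]
        if any(p["PolicyName"] == "AdministratorAccess" for p in attached):
            flagged.append(role)
    return flagged

if __name__ == "__main__":
    users, roles = human_vs_machine_identities()
    print(f"{len(users)} IAM users vs {len(roles)} IAM roles")
    for role in roles_with_admin(roles):
        print(f"over-privileged? role {role} has AdministratorAccess attached")
```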
>> You know, it's interesting. I zoom out, I just put my historian hat on going back to the early days of my career in late eighties, early nineties. There's always, when you have these inflection points, there's always these problems that are actually opportunities. And DevOps, infrastructure as code was all about APS, all about the developer. And now open source is booming, open source is the software industry. Open source is it in the world. >> Right. >> That's now the software industry. Cloud scale has hit and now you have the Devs completely in charge. Now, what suffers now is the Ops and the Sec, Second Ops. Now Ops, DevOps. Now, DevSecOps is where all the action is. >> Yep. >> So the, the, the next thing to do is build an abstraction layer. That's what everyone's trying to do, build tools and platforms. And so that's where the action is here. This is kind of where the innovation's happening because the networks aren't the, aren't in charge anymore either. So, you now have this new migration up to higher level services and opportunities to take the complexity away. >> Mm-hmm. >> Because what's happened is customers are getting complexity. >> That's right. >> They're getting it shoved in their face, 'cause they want to do good with DevOps, scale up. But by default their success is also their challenge. >> Right. >> 'Cause of complexity. >> That's exactly right. >> This is, you agree with that. >> I do totally agree with that. >> If you, you believe that, then what's next. What happens next? >> You know, what I hear from customers has to do with two specific areas is they're really trying to understand control frameworks, right. And be able to take these scenarios and build them into something that they, where they can understand where the gaps are, right. And then on top of that building in automation. So, the automation is a, is a theme that we're hearing from everybody. Like how, how do they take and do things like, you know it's what we've been hearing for years, right. How do we automatically remediate? How do we automatically prioritize? How do we, how do we build that in so that they're not having to hire people alongside that, but can use software for that. >> The automation has become key. You got to find it first. >> Yes. >> You guys are also part of the DevCycle too. >> Yep. >> Explain that piece. So, I'm a developer, I'm an organization. You guys are on the front end. You're not bolt-on, right? >> We can do either. We prefer it when customers are willing to use us, right. At the very front end, right. Because anything that's built in the beginning doesn't have the extra cycles that you have to go through after the fact, right. So, if you can build security right in from the beginning and have the ownership where it needs to be, then you're not having to, to deal with it afterwards. >> Okay, so how do you guys, I'm putting my customer hat on for a second. A little hard, hard question, hard problem. I got active directory on Azure. I got, IM over here with AWS. I wanted them to look the same. Now, my on-premises, >> Ah. >> Is been booming, now I got cloud operations, >> Right. >> So, DevOps has moved to my premise and edge. So, what do I do? Do I throw everything out, do a redo. How do you, how do you guys talk about, talk to customers that have that chance, 'cause a lot of them are old school. >> Right. >> ID. >> And, and I think there's a, I mean there's an important distinction here which is there's the active directory identities right, that customers are used to. 
But then there's this whole other area of non-people identities, which is compute power and privileges and everything that gets going when you get you know, machines working together. And we're finding that it's about five-to-one in terms of how many identities are non-human identities versus human identity. >> Wow. >> So, so you actually have to look at, >> So, programmable access, basically. >> Yeah. Yes, absolutely. Right. >> Wow. >> And privileges and roles that are, you know accessed via different ways, right. Because that's how it's assigned, right. And people aren't really paying that close attention to it. So, from that scenario, like the AD thing of, of course that's important, right. To be able to, to take that and lift it into your cloud but it's actually even bigger to look at the bigger picture with the non-human identities, right. >> What about the CISOs out there that you talk to. You're in the front lines, >> Yep. >> talking to customers and you see what's coming on the roadmap. >> Yep. >> So, you kind of get the best of both worlds. See what they, what's coming out of engineering. What's the biggest problem CISOs are facing now? Is it the sprawl of the problems, the hacker space? Is it not enough talent? What, I mean, I see the fear, what are, what are they facing? How do you, how do you see that, and then what's your conversations like? >> Yeah. I mean the, the answer to that is unfortunately yes, right. They're dealing with all of those things. And, and here we are at the intersection of, you know, this huge complex thing around cloud that's happening. There's already a gap in terms of resources nevermind skills that are different skills than they used to have. So, I hear that a lot. The, the bigger thing I think I hear is they're trying to take the most advantage out of their current team. So, they're again, worried about how to operationalize things. So, if we bring this on, is it going to mean more headcount. Is it going to be, you know things that we have to invest in differently. And I was actually just with a CISO this morning, and the whole team was, was talking about the fact that bringing us on means they have, they can do it with less resource. >> Mm-hmm. >> Like this is a a resource help for them in this particular area. So, that that was their value proposition for us, which I loved. >> Let's talk about Adrian Cockcroft who retired from AWS. He was at Netflix before. He was a big DevOps guy. He talks about how agility's been great because from a sales perspective the old model was, he called it the, the big Indian wedding. You had to get everyone together, do a POC, you know, long sales cycles for big tech investments, proprietary. Now, open sources like speed dating. You can know what's good quickly and and try things quicker. How is that, how is that impacting your sales motions. Your customer engagements. Are they fast? Are they, are they test-tried before they buy? What's the engagement model that you, you see happening that the customers like the best. >> Yeah, hey, you know, because of the fact that we're kind of dealing with this serious part of the problem, right. With the identities and, and dealing with data aspects of it it's not as fast as I would like it to be, right. >> Yeah, it's pretty important, actually. >> They still need to get in and understand it. And then it's different if you're AWS environment versus other environments, right. We have to normalize all of that and bring it together. And it's such a new space, >> Yeah. 
>> that they all want to see it first. >> Yeah. >> Right, so. >> And, and the consequences are pretty big. >> They're huge. >> Yeah. >> Right, so the, I mean, the scenario here is we're still doing, in some cases we'll do workshops instead of a POV or a POC. 90% of the time though we're still doing a POV. >> Yeah, you got to. >> Right. So, they can see what it is. >> They got to get their hands on it. >> Yep. >> This is one of those things they got to see in action. What is the best-of-breed? If you had to say best-of-breed in identity looks like blank. How would you describe that from a customer's perspective? What do they need the most? Is it robustness? What's some of the things that you guys see as differentiators for having a best-of-breed solution like you guys have. >> A best-of-breed solution. I mean, for, for us, >> Or a relevant solution for that matter, for the solution. >> Yeah. I mean, for us, this, again, this identity issue it, for us, it's depth and it's continuous monitoring, right. Because the issue in the cloud is that there are new privileges that come out every single day, like to the tune of like 35,000 a year. So, even if at this exact moment, it's fine. It's not going to be in another moment, right. So, having that continuous monitoring in there, and, and it solves this issue that we hear from a lot of customers also around lateral movement, right. Because like a piece of compute can be on and off, >> Yeah, yeah, yeah. >> within a few seconds, right. So, you can't use any of the old traditional things anymore. So to me, it's the continuous monitoring I think that's important. >> I think that, and the lateral movement piece, >> Yep. >> that you guys have is what I hear the most of the biggest fears. >> Mm-hmm. >> Someone gets in here and can move around, >> That's right. >> and that's dangerous. >> Mm-hmm. And, and no traditional tools will see it. >> Yeah. Yeah. >> Right. There's nothing in there unless you're instrumented down to that level, >> Yeah. >> which is what we do. You're not going to see it. >> I mean, when someone has a firewall, a perimeter based system, yeah, I'm in the castle, I'm moving around, but that's not the case here. This is built for full observability, >> That's right. >> Yet there's so many vulnerabilities. >> It's all open. Mm-hmm, yeah. And, and our view too, is, I mean you bring up vulnerabilities, right. It, it is, you know, a little bit of the darling, right. People start there. >> Yep. >> And, and our belief in our view is that, okay, that's nice. But, and you do have to do that. You have to be able to see everything right, >> Yep. >> to be able to operationalize it. But if you're not dealing with the sensitive data pieces right, and the identities and stuff that's at the core of what you're trying to do >> Yeah. >> then you're not going to solve the problem. >> Yeah. Denise, I want to ask you. Because you make what was it, five-to-one was the machine to humans. I think that's actually might be low, on the low end. If you could imagine. If you believe that's true. >> Yep. >> I believe that's true by the way If microservices continues to be the, be the wave. >> Oh, it'll just get bigger. >> Which it will. It's going to much bigger. >> Yeah. >> Turning on and off, so, the lateral movement opportunities are going to be greater. >> Yep. >> That's going to be a bigger factor. Okay, so how do I protect myself. Now, 'cause developer productivity is also important. >> Mm-hmm. >> 'Cause, I've heard horror stories like, >> Yep. 
>> Yeah, my Devs are cranking away. Uh-oh, something's out there. We don't know about it. Everyone has to stop, have a meeting. They get pulled off their task. It's kind of not agile. >> Right. Right. >> I mean, >> Yeah. And, and, in that vein, right. We have built the product around what we call swim lanes. So, the whole idea is we're prioritizing based on actual impact and context. So, if it's a sandbox, it probably doesn't matter as much as if it's like operational code that's out there where customers are accessing it, right. Or it's accessing sensitive data. So, we look at it from a swim lane perspective. When we try to get whoever needs to solve it back to the person that is responsible for it. So we can, we can set it up that way. >> Yeah. I think that, that's key insight into operationalizing this. >> Yep. >> And remediation is key. >> Yes. >> How, how much, how important is the timing of that. When you talk to your customer, I mean, timing is obviously going to be longer, but like seeing it's one thing, knowing what to do is another. >> Yep. >> Do you guys provide that? Is that some of the insights you guys provide? >> We do, it's almost like, you know, us. The, and again, there's context that's involved there, right? >> Yeah. >> So, some remediation from a priority perspective doesn't have to be immediate. And some of it is hair on fire, right. So, we provide actually, >> Yeah. >> a recommendation per each of those situations. And, and in some cases we can auto remediate, right. >> Yeah. >> If, it depends on what the customer's comfortable with, right. But, when I talk to customers about what is their favorite part of what we do it is the auto remediation. >> You know, one of the things on the keynotes, not to, not to go off tangent, one second here but, Kurt who runs platforms at AWS, >> Mm-hmm. >> went on his little baby project that he loves was this automated, automatic reasoning feature. >> Mm-hmm. >> Which essentially is advanced machine learning. >> Right. >> That can connect the dots. >> Yep. >> Not just predict stuff but like actually say this doesn't belong here. >> Right. >> That's advanced computer science. That's heavy duty coolness. >> Mm-hmm. >> So, operationalizing that way, the way you're saying it I'm imagining there's some future stuff coming around the corner. Can you share how you guys are working with AWS specifically? Is it with Amazon? You guys have your own secret sauce for the folks watching. 'Cause this remediation should, it only gets harder. You got to, you have to be smarter on your end, >> Yep. >> with your engineers. What's coming next. >> Oh gosh, I don't know how much of what's coming next I can share with you, except for tighter and tighter integrations with AWS, right. I've been at three meetings already today where we're talking about different AWS services and how we can be more tightly integrated and what's things we want out of their APIs to be able to further enhance what we can offer to our customers. So, there's a lot of those discussions happening right now. >> What, what are some of those conversations like? Without revealing. >> I mean, they have to do with, >> Maybe confidential privilege. >> privileged information. I don't mean like privileged information. >> Yep. I mean like privileges, right, >> Right. >> that are out there. >> Like what you can access, and what you can't. >> What you can, yes. And who and what can access it and what can't. And passing that information on to us, right. 
To be able to further remediate it for an AWS customer. That's, that's one. You know, things like other AWS services like CloudTrail and you know some of the other scenarios that they're talking about. Like we're, you know, we're getting deeper and deeper and deeper with the AWS services. >> Yeah, it's almost as if Amazon over the past two years in particular has been really tightly integrating as a strategy to enable their partners like you guys >> Mm-hmm. >> to be successful. Not trying to land grab. Is that true? Do you get that vibe? >> I definitely get that vibe, right. Yesterday, we spent all day in a partnership meeting where they were, you know talking about rolling out new services. I mean, they, they are in it to win it with their ecosystem. Not on, not just themselves. >> All right, Denise it's great to have you on theCUBE here as part of re:Inforce. I'll give you the last minute or so to give a plug for the company. You guys hiring? What are you guys looking for? Potential customers that are watching? Why should they buy you? Why are you winning? Give a, give the pitch. >> Yeah, absolutely. So, so yes we are hiring. We're always hiring. I think, right, in this startup world. We're growing and we're looking for talent, probably in every area right now. I know I'm looking for talent on the sales side. And, and again, the, I think the important thing about us is the, the fullness of our solution but the superpower that we have, like I said before around the identity and the data pieces and this is becoming more and more the reality for customers that they're understanding that that is the most important thing to do. And I mean, if they're that, Gartner says it, Forrester says it, like we are one of the, one of the best choices for that. >> Yeah. And you guys have been doing good. We've been following you. Thanks for coming on. >> Thank you. >> And congratulations on your success. And we'll see you at the AWS Startup Showcase in late August. Check out Sonrai Systems at AWS Startup Showcase late August. Here at theCUBE live in Boston getting all the coverage. From the keynotes, to the experts, to the ecosystem, here on theCUBE, I'm John Furrier your host. Thanks for watching. (bright music)

Published Date : Jul 26 2022

Ed Bailey, Cribl | AWS Startup Showcase S2 E2


 

(upbeat music) >> Welcome everyone to theCUBE presentation of the AWS Startup Showcase, the theme here is Data as Code. This is season two, episode two of our ongoing series covering the exciting startups from the AWS ecosystem. And talk about the future of data, future of analytics, the future of development and all kind of cool stuff in Multicloud. I'm your host, John Furrier. Today we're joined by Ed Bailey, Senior Technical Evangelist at Cribl. Thanks for coming on theCUBE here. >> I thank you for the invitation, thrilled to be here. >> The theme of this session is the observability lake, which I love, by the way; I'm getting into that in a second. A breach investigation's best friend, which is a great topic. Couple of things, one, I like the breach investigation angle, but I also like this observability lake positioning, because I think this is a teaser of what's coming, more and more data usage where it's actually being applied specifically for things, here it's the observability lake. So first, what is an observability lake? Why is it important? >> Why it's important is technology professionals, especially security professionals, need data to make decisions. They need data to drive better decisions. They need data to understand, just to achieve understanding. And that means they need everything. It's not just what they can afford to store. It's not just what a vendor is going to let them store. They need everything. And I think that's the point of the observability lake, because you couple an observability pipeline with the lake to bring in your enterprise data, to make it accessible for analytics, to be able to use it, to be able to get value from it. And I think that's one of the things that's missing right now in the enterprises. Admins are being forced to make decisions about, okay, we can't afford to keep this, we can afford to keep this, and they're missing things. They're missing parts of the picture. And by being able to bring it together, to be able to have your cake and eat it too, where I can get what I need and I can do it affordably, I think that's the future, and it just drives value for everyone. >> And it just makes a lot of sense, the data lake or the earlier concept, throw everything into the lake, and you can figure it out, you can query it, you can take action on it real time, you can stream it. You can do all kinds of things with it. Observability is important because it's the most critical thing people are doing right now for all kinds of things from QA, administration, security. So this is where the breach piece comes in. I like that part of the talk because the breach investigation's best friend, it implies that you've got the secret sauce behind it, right? So, what is the state of the breach investigation today? What's going on with that? Because we know breaches, we see 'em out there, but like, why is this the best friend of a breach investigator? >> Well, and this is unfortunate, but typically there's an enormous delay between breach and detection. And right now, there's an IBM study, I think it's 287 days, from the actual breach to detection and containment. It's an enormous amount of time. And the key is, so when you do detect a breach, you're bringing in your incident response team, and typically without an observability lake, without Cribl solutions around the observability pipeline, you're going to have an incomplete picture. The incident response team has to first understand what's the scope of the breach.
Is it one server? Is it three servers? Is it all the servers? You got to understand what's been compromised, what's the extent, what's the impact? How did the breach occur in the first place? And they need all the data to stitch that together, and they need it quickly. The more time it takes to get that data, the more time it takes for them to finish their analysis and contain the breach. I mean, hence the, I think, 87 to 90 days to contain a breach. And so by being able to remove the friction, by being able to make it easier to achieve these goals, what shouldn't be hard, by removing that friction, you speed up the containment and resolution time. Not to mention, for many system administrators, they simply don't have the data because they can't afford to store the data in their SIEM. Or they have to go to their backup team to get a restore which can take days. And so that's-- It's just so many obstacles to getting resolution right now. >> I mean, it's just, you're crawling through glass there, right? Because you think about it like just the timing aspect. Where is the data? Where is it stored and relevant and-- >> And do you have it at all? >> And you have it at all, and then, you know, that person doesn't work there anymore, they changed jobs. I mean, who is keeping track of all this? You guys have now, this capability where you can come in and do the instrumentation with the observability lake without a lot of change to the environment, which is not the way it used to be. Used to be, buy a tool, build a platform. Cribl has a solution that eases the struggles with the enterprise. What specifically is that pain point? And what do you guys do specifically? >> Well, I'll start out with kind of an example, what drew me to Cribl, so back in 2018, I'm running the Splunk team for a very large multinational. The complexity of that, we were dealing with the complexity of the data, the demands we were getting from security and operations were just an enormous issue to overcome. I had vendors come to me all the time that will solve your problems, but that means you got to move to our platform, where you have to get rid of Splunk or you have to do this, and I'm losing something. And what Cribl Stream brought in was I could put it between my sources and my destinations and manage my data. And I would have flow control over the data. I don't have to lose anything. I could keep continuing to use our existing analytics tools, and that sense of power and control, and I don't have to lose anything. I was like, there's something wrong here. This is too good to be true. And so what we're talking about now in terms of breach investigation, is that with Cribl Stream, I can create a clone of my data to an object store. So this is in, this is almost any object store. So it can be AWS, it could be the other vendor object stores. It could be on-prem object stores. And then I can house my data, I can house all my data at the cheapest possible price. So instead of eating up my most expensive storage, I put all my data in my object store. And I only put the data I need for the detections in my SIEM. So if, and hopefully never, but if you do have a breach, LogStream has a wonderful UI that makes it trivial to then pick my data out of my object store and restore it back into my SIEM so that my IR team can develop a complete picture of how the breach happened. What's the scope? What was their lateral movement? And answer those questions. And it just, it takes the friction away.
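(The clone-and-replay workflow Ed describes can be sketched roughly as follows. This is not Cribl's API or product code; it assumes, purely for illustration, that events were cloned to S3 as newline-delimited JSON under date-based key prefixes, and that each event carries src_ip and dest_ip fields. The bucket name and IP address are made up.)

```python
import json
import boto3

s3 = boto3.client("s3")

def replay(bucket, date_prefix, suspect_ip):
    """Yield archived events under a date prefix that mention a suspect IP.

    Assumes events were cloned to S3 as newline-delimited JSON; a real
    pipeline would restore the filtered slice back into the SIEM rather
    than just yielding it here.
    """
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=date_prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
            for line in body.iter_lines():
                if not line:
                    continue
                event = json.loads(line)
                if suspect_ip in (event.get("src_ip"), event.get("dest_ip")):
                    yield event

if __name__ == "__main__":
    # Hypothetical bucket, key prefix, and documentation IP, for illustration only.
    for event in replay("my-observability-lake", "logs/2022/11/28/", "203.0.113.7"):
        print(event)
```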
Just like you said, just no more crawling over glass. You're running to your solution. >> You mentioned object store, and you're streaming that in. You talk about the Cribble stream tool. I'm assuming there when you're streaming the pipeline stuff, but is there a schema involved? Is there database challenges? What, how do you guys look at that? I know you're vendor agnostic. I like that piece, you plug in and you leverage all the tools that are out there, Splunk, Datadog, whatever. But how about on the database side, what's the impact there? >> Well, so I'm assuming you're talking about the object store itself, so we don't have to apply the schema. We can fit the data to whichever the object store is. We structure the data so it makes it easier to understand. For example, if I want to see communications from one IP to another IP, we structure it to make it easier to see that and query that, but it is just, we're-- Yeah, it's completely vendor neutral and this makes it so simple, so simple to enable, I think-- >> So no pre-defined schema needed. >> No, not at all. And this, it made it so much easier. I think we enabled this for the enterprise. I think it took us three hours to do, and we were able to then start, I mean, start cutting our retention costs dramatically. >> Yeah, it's great when you get that kind of value, time to value critical and all the skeptics fall to the sides pretty quickly. (chuckles) I got to ask you, well, go ahead. >> So I say, I mean, previously, I would have to go to our backup team. We'd have to open up a ticket, we'd have to have a bridge, then we'd have to go through the process of pulling tape and being, it could take, you know, hours, hours if not days to restore the amount of data we needed. And just it, you know, we were able to run to our goals, and solve business problems instead of focusing on the process steps of getting things done. >> Right, so take me through the architecture here and some customer examples, 'cause you have the Cribble streaming there, observability pipeline. That's key, you mentioned that. >> Yes. >> And then they build out these observability lakes from that. So what is the impact of that? Can you share the customers that are using that solution? What are they seeing for benefits? What are some of the impact? Can you give us some specifics? >> I mean, I can't share with all the exact customer names. I can definitely give you some examples. Like referenceable conference would be TransUnion, so that I came from TransUnion. I was one of the first customers and it solved enormous number of problems for us. Autodesk is another great example. The idea that we're able to automate and data practices. I mean, just for example, what we were talking about with backups. We'd have to, you have to put a lot of time into managing your backups in your inner analytics platforms, you have to. And then you're locked into custom database schemas, you're locked into vendors. And it's also, it's still, it's expensive. So being able to spend a few hours, dramatically cut your costs, but still have the data available, and that's the key. I didn't have to make compromises, 'cause before I was having to say, okay, we're going to keep this, we're going to just drop this and hope for the best. And we just don't, we just didn't have to do that anymore. 
I think for the same thing for TransUnion and Autodesk, the idea that we're going to lower our cost, we're going to make it easier for our administrators to do their job and so they can spend more time on business value fundamentals, like responding to a breach. You're going to spend time working with your teams, getting value observability solutions and stop spending time on writing custom solutions using to open source tools. 'Cause your engineering time is the most precious asset for any enterprise and you got to focus your engineering time on where it's needed the most. >> Yeah, and they can't underestimate the hassle and cost of ownership, of swapping out pre-existing stuff, just for the sake of having a functionality. I mean that's a big-- >> It's pain and that's a big thing about lock stream is that being vendor neutral is so important. If you want to use the Splunk universal forwarder, that's great. If you want to use Beats, that's awesome. If you want to use Fluentd, even better. If you want to use all three, you can do that too. It's the customer choice and we're saying to people, use what suits your needs. And if you want to write some of your data to elastic, that's great. Some of your data to Splunk, that's even better. Some of it to, pick your pick, fine as well or Exabeam. You have the choices to put together, put your own solutions together and put your data where you need it to be. We're not asking you only in our ecosystem to work with only our partners. We're letting you pick and choose what suits your business. >> Yeah, you know, that's the direction I was just talking about the Amazon folks around their serverless. You know, you can use any tool, you know, you can, they have that core architecture for everything, the S3 and then pick whatever you want to use. SageMaker, just that other thing. This is the new way. That's the way it has to be to be effective. How do you guys handle that? What's been the reaction from customers? Do they like, roll their eyes and doubt you guys, or can you do it? Are they skeptical? How fast can you convert 'em over? (chuckles) >> Right, and that's always the challenge. And that's, I mean, the best part of my day is talking to customers. I love hearing and feedback, what they like, what they don't and what they need. And of course I was skeptical. I didn't believe it when I first saw it because I was like this, you know, because I'm, I was used to being locked in. I was used to having to put a lot of effort, a lot of custom code, like, what do you mean? It's this easy? I believe I did the first, this is 2018, and I did our first demos, like 30 minutes in, and I cut about 1/2 million dollars out of our license in the first 30 minutes in our first demo. And I was stunned because I mean, it's like, this is easy. >> Yeah, I mean-- >> Yeah, exactly. I mean, this is, and then this is the future. And then for example, we needed to bring in so like the security team wanted to bring in a UBA solution that wasn't part of the vendor ecosystem that we were in. And I was like, not a problem. We're going to use log stream. We're going to clone a copy of our data to the UBA solution. We were able to get value from this UBA solution in weeks. What typically is a six month cycle to start getting value. And it just, it was just too easy and the best part of it. And the thing is, it just struck me was my engineers can now spend their time on delivering value instead of integrations and moving data around. 
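(The vendor-neutral routing Ed keeps coming back to, one stream in and different slices out to the SIEM, a UBA tool, the object store, and so on, boils down to evaluating per-destination rules against each event. The toy sketch below shows only that pattern; it is not Cribl's engine or configuration syntax, and the destination names and rules are invented for the example.)

```python
# Toy routing table: each destination receives the events whose rule returns True.
# Illustrates the pattern only; a real pipeline adds batching, retries, transforms,
# and vendor-specific senders for Splunk, Elastic, the UBA tool, and so on.

ROUTES = {
    "siem":          lambda e: e.get("severity") in ("high", "critical"),
    "object_store":  lambda e: True,                        # full-fidelity copy of everything
    "uba":           lambda e: e.get("sourcetype") == "auth",
    "metrics_store": lambda e: e.get("type") == "metric",
}

def route(event):
    """Return the destinations an event should be cloned to."""
    return [dest for dest, matches in ROUTES.items() if matches(event)]

if __name__ == "__main__":
    sample = {"sourcetype": "auth", "severity": "high", "user": "alice"}
    print(route(sample))  # ['siem', 'object_store', 'uba']
```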
>> Yeah, and also we can spend more time preventing breaches. But what's interesting and counterintuitive here is that, if you, as you add more flexibility and choice, you'd think it'd be harder to handle a breach, right? So, now let's go back to the scenario. Now you guys, say an organization has a breach, and they have the observability pipeline, they've got the lake in place, your observability lake, take me through the investigation. How easy is it, what happens? How do they start it, what goes on? >> So, once your SOC detects a breach, then they bring in the IR. Typically you're going to bring in your incident response team. So what we did, and this is one more way that we removed that friction, we cleaned up the glass, is we delegate to the incident response team the ability to restore, we call it-- So, Cribl calls it replay, we replay data out of our object store back into your SIEM. There's a very nice UI that gives you the ability to say, "I want data from this time period to this time period, I want it to be all the data." Or the ability to filter and say, "I want this, just this IP." For example, if I detected, okay, this IP has been breached, then I'm going to pull all the data that mentions this IP in this timeframe, hit a button and it just starts. And then it's going to restore as fast as the IOPS are for your solution. And then it's back in your tool, it's back in your tool. One of the things I also want to mention is we have an amazing enrichment capability. So one of the things that we would do is we would have pipelines so as the data comes out of the object store, it hits the pipeline, and then we enrich it. We use GeoIP information, reverse DNS. It gets processed through a threat intel feed. So the data's already enriched and ready for the incident response people to do their job. And so it just, it removes the friction of getting to the point where I can start doing my job. >> You know, the theme, this episode for this showcase is about Data as Code. And which is, you know, I've been saying this on theCUBE since around 13 years ago, that developers are going to be dealing with data like they deal with software code, and you're starting to see, you mentioned enrichment. Where do you see Data as Code going? How relevant is it now, because we're really talking about, when you add machine learning in here, that has to be enriched, and iterated on too. We're talking about taking things off a branch and putting it back into the core. This is a data discussion, this isn't software, but it sounds the same. >> Right, and this is something that, the irony is that, I remember the first time saying it to an auditor. I was constantly working with auditors, and that's what I described is I'm going to show you the code that manages the data. This is the data's code that's going to show you how we transform it, how we secure it, where the data goes, how it's enriched. So you can see the whole story, the data life cycle in one place. And that's how we handled our audits. And I think that is enormously, you know, positive because it's so easy to be confused. It's so easy to have complexity get in the way of progress. And by being able to represent your Data as Code, it's a step forward 'cause the amount of data and the complexity of data, it's not getting simpler, it's getting more complex. So we need to come up with better ways to handle it. >> Now you've been on both sides of the fence.
You've been in the trenches as customer, now you're a supplier with Great Solution. What are people doing with this data engineering roles? Because it's not enough data engineering. I mean, 'cause if you say Data as Code, if you believe that to be true and many people do, we do. And you looked at the history of infrastructure risk code that enabled DevOps, AIOps, MLOps, DataOps, it's happening, right? So data stack ops is coming. Obviously security is huge in this. How does that data engineering role evolve? Because it just seems more and more that there's going to be a big push towards an SRE version of data, right? >> I completely agree. I was working with a customer yesterday, and I spent a large part of our conversation talking about implementing development practices for administrators. It's a new role. It's a new way to think of things 'cause traditionally your Splunk or elastic administrators is talking about operating systems and memory and talking about how to use proprietary tools in the vendor, that's just not quite the same. And so we started talking about, you need to have, you need to start getting used to code reviews. Yeah, the idea of getting used to making sure everything has a comment, was one thing I told him was like, you know, if you have a function has to have a comment, just by default, just it has to. Yeah, the standards of how you write things, how you name things all really start to matter. And also you got to start adding, considering your skillset. And this is some mean probably one of the best hire I ever made was I hired a guy with a math degree, because I needed his help to understand how do machine learning works, how to pick the best type of algorithm. And I think this is going to evolve, that you're going to be just away from the gray bearded administrator to some other gray bearded administrator with a math degree. >> It's interesting, it's a step function. You have a data engineer who's got that kind of capabilities, like what the SRA did with infrastructure. The step function of enablement, the value creation from really good data engineering, puts the democratization playback on the table, and changes, >> Thank you very much John. >> And changes that entire landscape. How do you, what's your reaction to that? >> I completely agree 'cause so operational data. So operational security data is the most volatile data in the enterprise. It changes on a whim, you have developers who change things. They don't tell you what happens, vendor doesn't tell you what happened, and so that idea, that life cycle of managing data. So the same types of standards of disciplines that database administrators have done for years is going to have, it has to filter down into the operational areas, and you need tooling that's going to give you the ability to manage that data, manage it in flight in real time, in order to drive detections, in order to drive response. All those business value things we've been talking about. >> So I got to ask you the larger role that you see with observability lakes we were talking before we came on camera live here about how exciting this kind of concept is, and you were attracted to the company because of it. I love the observability lake concept because it puts all that data in one spot, you can manage it. But you got machine learning in AI around the corner that also can help. How has all this changed in the landscape of data security and things because it makes a lot of sense, and I can only see it getting better with machine learning. 
>> Yeah, definitely does. >> Totally, and so the core issue, and I don't want to say, so when you talk about observability, most people have assumptions around observability is only an operational or an application support process. It's also security process. The idea that you're looking for your unknown, unknowns. This is what keeps security administrators up at night is I'm being attacked by something I don't know about. How do you find those unknown? And that's where your machine learning comes in. And that's where that you have to understand there's so many different types of machine learning algorithms, where the guy that I hired, I mean, had started educating me about the umpteen number of algorithms and how it applies to different data and how you get different value, how you have to test your data constantly. There's no such thing as the magical black box of machine learning that gives you value. You have to implement, but just like the developer practices to keep testing and over and over again, data scientists, for example. >> The best friend of a machine learning algorithm is data, right? You got to keep feeding that data, and when the data sets are baked and secure and vetted, even better, all cool. Had great stuff, great insight. Congratulations Cribl, Great Solution. Love the architecture, love the pipelining of the observability data and streaming that in to a lake. Great stuff. Give a plug for the company where you guys are at, where people can get information. I know you guys got a bunch of live feeds on YouTube, Twitch, here in theCUBE. Where else can people find you? Give the plug. >> Oh, please, please join our slack community, go to cribl.io/community. We have an amazing community. This was another thing that drew me to the company is have a large group of people who are genuinely excited about data, about managing data. If you want to try Cribl out, we have some great tool. Try Cribl tools out. We have a cloud platform, one terabyte up free data. So go to cribl.io/cloud or cribl.cloud, sign up for, you know, just never times out. You're not 30 day, it's forever up to one terabyte. Try out our new products as well, Cribl Edge. And then finally come watch Nick Decker and I, every Thursday, 2:00 PM Eastern. We have live streams on Twitter, LinkedIn and YouTube live. And so just my Twitter handle is EBA 1367. Love to have, love to chat, love to have these conversations. And also, we are hiring. >> All right, good stuff. Great team, great concepts, right? Of course, we're theCUBE here. We got our video lake coming on soon. I think I love this idea of having these video. Hey, videos data too, right? I mean, we've got to keep coming to you. >> I love it, I love videos, it's awesome. It's a great way to communicate, it's a great way to have a conversation. That's the best thing about us, having conversations. I appreciate your time. >> Thank you so much, Ed, for representing Cribl here on the Data as Code. This is season two episode two of the ongoing series covering the hottest, most exciting startups from the AWS ecosystem. Talking about the future data, I'm John Furrier, your host. Thanks for watching. >> Ed: All right, thank you. (slow upbeat music)

Published Date : Apr 26 2022

SUMMARY :

Ed Bailey of Cribl joins John Furrier to talk about treating data as code. Drawing on his time in the trenches as an enterprise customer before joining the vendor, he describes streaming observability and security data through Cribl into an observability lake, applying software development practices such as code reviews and commenting standards to administrators, and using machine learning to surface the unknown unknowns that keep security teams up at night. The conversation closes with pointers to Cribl's community, its free cloud tier of up to one terabyte, its weekly live streams, and a note that the company is hiring.


Venkat Venkataramani, Rockset & Doug Moore, Command Alkon | AWS Startup Showcase S2 E2


 

(upbeat music) >> Hey everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. This is Data as Code, The Future of Enterprise Data and Analytics. This is also season two, episode two of our ongoing series with exciting partners from the AWS ecosystem who are here to talk with us about data and analytics. I'm your host, Lisa Martin. Two guests join me, one, a cube alumni. Venkat Venkataramani is here CEO & Co-Founder of Rockset. Good to see you again. And Doug Moore, VP of cloud platforms at Command Alkon. You're here to talk to me about how Command Alkon implemented real time analytics in just days with Rockset. Guys, welcome to the program. >> Thanks for having us. >> Yeah, great to be here. >> Doug, give us a little bit of a overview of Command Alkon, what type of business you are? what your mission is? That good stuff. >> Yeah, great. I'll pref it by saying I've been in this industry for only three years. The 30 years prior I was in financial services. So this was really exciting and eye opening. It actually plays into the story of how we met Rockset. So that's why I wanted to preface that. But Command Alkon is in the business, is in the what's called The Heavy Building Materials Industry. And I had never heard of it until I got here. But if you think about large projects like building buildings, cities, roads anything that requires concrete asphalt or just really big trucks, full of bulky materials that's the heavy building materials industry. So for over 40 years Command Alkon has been the north American leader in providing software to quarries and production facilities to help mine and load these materials and to produce them and then get them to the job site. So that's what our supply chain is, is from the quarry through the development of these materials, then out to the to a heavy building material job site. >> Got it, and now how historically in the past has the movement of construction materials been coordinated? What was that like before you guys came on the scene? >> You'll love this answer. So 'cause, again, it's like a step back in time. When I got here the people told me that we're trying to come up with the platform that there are 27 industries studied globally. And our industry is second to last in terms of automation which meant that literally everything is still being done with paper and a lot of paper. So when one of those, let's say material is developed, concrete asphalt is produced and then needs to get to the job site. They start by creating a five part printed ticket or delivery description that then goes to multiple parties. It ends up getting touched physically over 50 times for every delivery. And to give you some idea what kind of scale it is there are over 330 million of these type deliveries in north America every year. So it's really a lot of favor and a lot of manual work. So that was the state of really where we were. And obviously there are compelling reasons certainly today but even 3, 4, 5 years ago to automate that and digitize it. >> Wow, tremendous potential to go nowhere but up with the amount of paper, the lack of, of automation. So, you guys Command Alkon built a platform, a cloud software construction software platform. Talk to me of about that. Why you built it, what was the compelling event? I mean, I think you've kind of already explained the compelling event of all the paper but give us a little bit more context. >> Yeah. That was the original. 
And then we'll get into what happened two years ago which has made it even more compelling but essentially with everything on premises there's really in a huge amount of inefficiency. So, people have heard the enormous numbers that it takes to build up a highway or a really large construction project. And a lot of that is tied up in these inefficiencies. So we felt like with our significant presence in this market, that if we could figure out how to automate getting this data into the cloud so that at least the partners in the supply chain could begin sharing information. That's not on paper a little bit closer to real time that we could make has an impact on everything from the timing it takes to do a project to even the amount of carbon dioxide that's admitted, for example from trucks running around and being delayed and not being coordinated well. >> So you built the connect platform you started on Amazon DynamoDB and ran into some performance challenges. Talk to us about the, some of those performance bottlenecks and how you found Venkat and Rockset. >> So from the beginning, we were fortunate, if you start building a cloud three years ago you're you have a lot of opportunity to use some of the what we call more fully managed or serverless offerings from Amazon and all the cloud vendors have them but Amazon is the one we're most familiar with throughout the past 10 years. So we went head first into saying, we're going to do everything we can to not manage infrastructure ourselves. So we can really focus on solving this problem efficiently. And it paid off great. And so we chose dynamo as our primary database and it still was a great decision. We have obviously hundreds of millions of billions of these data points in dynamo. And it's great from a transactional perspective, but at some point you need to get the data back out. And what plays into the story of the beginning when I came here with no background basically in this industry, is that, and as did most of the other people on my team, we weren't really sure what questions were going to be asked of the data. And that's super, super important with a NoSQL database like dynamo. You sort of have to know in advance what those usage patterns are going to be and what people are going to want to get back out of it. And that's what really began to strain us on both performance and just availability of information. >> Got it. Venkat, let's bring you into the conversation. Talk to me about some of the challenges that Doug articulated the, is industry with such little automation so much paper. Are you finding that still out there for in quite a few industries that really have nowhere to go but up? >> I think that's a very good point. We talk about digital transformation 2.0 as like this abstract thing. And then you meet like disruptors and innovators like Doug, and you realize how much impact, it has on the real world. But now it's not just about disrupting, and digitizing all of these records but doing it at a faster pace than ever before, right. I think this is really what digital transformation in the cloud really enable tools you do that, a small team in a, with a very very big mission and responsibility like what Doug team have been, shepherding here. They're able to move very, very, very fast, to be able to kind of accelerate this. And, they're not only on the forefront of digitizing and transforming a very big, paper-heavy kind of process, but real-time analytics and real time reporting is a requirement, right? 
Nobody's wondering where is my supply chain three days ago? Are my, one of the most important thing in heavy construction is to keep running on a schedule. If you fall behind, there's no way to catch up because there's so many things that falls apart. Now, how do you make sure you don't fall behind, realtime analytics and realtime reporting on how many trucks are supposed to be delivered today? Halfway through the day, are they on track? Are they getting behind? And all of those things is not just able to manage the data but also be able to get reporting and analytics on that is a extremely important aspect of this. So this is like a combination of digital transformation happening in the cloud in realtime and realtime analytics being in the forefront of it. And so we are very, very happy to partner with digital disruptors like Doug and his team to be part of this movement. >> Doug, as Venkat mentioned, access to real time data is a requirement that is just simple truth these days. I'm just curious, compelling event wise was COVID and accelerator? 'Cause we all know of the supply chain challenges that we're all facing in one way or the other, was that part of the compelling event that had you guys go and say, we want to do DynamoDB plus Rockset? >> Yeah, that is a fantastic question. In fact, more so than you can imagine. So anytime you come into an industry and you're going to try to completely change or revolutionize the way it operates it takes a long time to get the message out. Sometimes years, I remember in insurance it took almost 10 years really to get that message out and get great adoption and then COVID came along. And when COVID came along, we all of a sudden had a situation where drivers and the foreman on the job site didn't want to exchange the paperwork. I heard one story of a driver taping the ticket for signature to the foreman on a broomstick and putting it out his windows so that he didn't get too close. It really was that dramatic. And again, this is the early days and no one really has any idea what's happening and we're all working from home. So we launched, we saw that as an opportunity to really help people solve that problem and understand more what this transformation would mean in the long term. So we launched internally what we called Project Lemonade obviously from, make lemonade out of lemons, that's the situation that we were in and we immediately made some enhancements to a mobile app and then launched that to the field. So that basically there's now a digital acceptance capability where the driver can just stay in the vehicle and the foreman can be anywhere, look at the material say it's acceptable for delivery and go from there. So yeah, it made a, it actually immediately caused many of our customers hundreds to begin, to want to push their data to the cloud for that reason just to take advantage of that one capability >> Project lemonade, sounds like it's made a lot of lemonade out of a lot of lemons. Can you comment Doug on kind of the larger trend of real time analytics and logistics? >> Yeah, obviously, and this is something I didn't think about much either not knowing anything about concrete other than it was in my driveway before I got here. And that it's a perishable product and you've got that basically no more than about an hour and a half from the time you mix it, put it in the drum and get it to the job site and pour it. And then the next one has to come behind it. 
And I remember I, the trend is that we can't really do that on paper anymore and stay on top of what has to be done we'll get into the field. So a foreman, I recall saying that when you're in the field waiting on delivery, that you have people standing around and preparing the site ready to make a pour that two minutes is an eternity. And so, working a real time is all always a controversial word because it means something different to anyone, but that gave it real, a real clarity to mean, what it really meant to have real time analytics and how we are doing and where are my vehicles and how is this job performing today? And I think that a lot of people are still trying to figure out how to do that. And fortunately, we found a great tool set that's allowing us to do that at scale. Thankfully, for Rockset primarily. >> Venkat talk about it from your perspective the larger trend of real time analytics not just in logistics, but in other key industries. >> Yeah. I think we're seeing this across the board. I think, whether, even we see a huge trend even within an enterprise different teams from the marketing team to the support teams to more and more business operations team to the security team, really moving more and more of their use cases from real time. So we see this, the industries that are the innovators and the pioneers here are the ones for whom real times that requirement like Doug and his team here or where, if it is all news, it's no news, it's useless, right? But I think even within, across all industries, whether it is, gaming whether it is, FinTech, Bino related companies, e-learning platforms, so across, ed tech and so many different platforms, there is always this need for business operations. Some, certain aspects certain teams within large organizations to, have to tell me how to win the game and not like, play Monday morning quarterback after the game is over. >> Right, Doug, let's go back at you, I'm curious with connects, have you been able to scale the platform since you integrated with Rockset? Talk to us about some of the outcomes that you've achieved so far? >> Yeah, we have, and of course we knew and we made our database selection with dynamo that it really doesn't have a top end in terms of how much information that we can throw at it. But that's very, very challenging when it comes to using that information from reporting. But we've found the same thing as we've scaled the analytics side with Rockset indexing and searching of that database. So the scale in terms of the number of customers and the amount of data we've been able to take on has been, not been a problem. And honestly, for the first time in my career, I can say that we've always had to add people every time we add a certain number of customers. And that has absolutely not been the case with this platform. >> Well, and I imagine the team that you do have is far more, sorry Venkat, far more strategic and able to focus on bigger projects. >> It, is, and, you've amazed at, I mean Venkat hit on a couple of points that it's in terms of the adoption of analytics. What we found is that we are as big a customer of this analytic engine as our customers are because our marketing team and our sales team are always coming to us. Well how many customers are doing this? How many partners are connected in this way? Which feature flags are turned on the platform? And the way this works is all data that we push into the platform is automatically just indexed and ready for reporting analytics. 
So we really it's no additional ad of work, to answer these questions, which is really been phenomenal. >> I think the thing I want to add here is the speed at which they were able to build a scalable solution and also how little, operational and administrative overhead that it has cost of their teams, right. I think, this is again, realtime analytics. If you go and ask hundred people, do you want fast analytics on realtime data or slow analytics on scale data, people, no one would say give me slow and scale. So, I think it goes back to again our fundamental pieces that you have to remove all the cost and complexity barriers for realtime analytics to be the new default, right? Today companies try to get away with batch and the pioneers and the innovators are forced to solve, I know, kind of like address some of these realtime analytics challenges. I think with the platforms like the realtime analytics platform, like Rockset, we want to completely flip it on its head. You can do everything in real time. And there may be some extreme situations where you're dealing with like, hundreds of petabytes of data and you just need an analyst to generate like, quarterly reports out of that, go ahead and use some really, really good batch base system but you should be able to get anything, and everything you want without additional cost or complexity, in real time. That is really the vision. That is what we are really enabling here. >> Venkat, I want to also get your perspective and Doug I'd like your perspective on this as well but that is the role of cloud native and serverless technologies in digital disruption. And what do you see there? >> Yeah, I think it's huge. I think, again and again, every customer, and we meet, Command Alkon and Doug and his team is a great example of this where they really want to spend as much time and energies and calories that they have to, help their business, right? Like what, are we accomplishing trying to accomplish as a business? How do we enable, how do we build better products? How do we grow revenue? How do we eliminate risk that is inherent in the business? And that is really where they want to spend all of their energy not trying to like, install some backend software, administer build IDL pipelines and so on and so forth. And so, doing serverless on the compute side of that things like AWS lambda does and what have you. And, it's a very important innovation but that isn't, complete the story or your data stack also have to become serverless. And, that is really the vision with Rockset that your entire realtime analytics stack can be operating and managing. It could be as simple as managing a serverless stack for your compute environments like your APS servers and what have you. And so I think that is going to be a that is for here to stay. This is a path towards simplicity and simplicity scales really, really well, right? Complexity will always be the killer that'll limit, how far you can use this solution and how many problems can you solve with that solution? So, simplicity is a very, very important aspect here. And serverless helps you, deliver that. >> And Doug your thoughts on cloud native and serverless in terms of digital disruption >> Great point, and there are two parts to the scalability part. The second one is the one that's more subtle unless you're in charge of the budget. 
And that is, with enough effort and enough money that you can make almost any technology scale whether it's multiple copies of it, it may take a long time to get there but you can get there with most technologies but what is least scalable, at least that I as I see that this industry is the people, everybody knows we have a talent shortage and these other ways of getting the real time analytics and scaling infrastructure for compute and database storage, it really takes a highly skilled set of resources. And the more your company grows, the more of those you need. And that is what we really can't find. And that's actually what drove our team in our last industry to even go this way we reached a point where our growth was limited by the people we could find. And so we really wanted to break out of that. So now we had the best of both scalable people because we don't have to scale them and scalable technology. >> Excellent. The best of both worlds. Isn't it great when those two things come together? Gentlemen, thank you so much for joining me on "theCUBE" today. Talking about what Rockset and Command Alkon are doing together better together what you're enabling from a supply chain digitization perspective. We appreciate your insights. >> Great. Thank you. >> Thanks, Lisa. Thanks for having us. >> My pleasure. For Doug Moore and Venkat Venkatramani, I'm Lisa Martin. Keep it right here for more coverage of "theCUBE", your leader in high tech event coverage. (upbeat music)
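The access-pattern constraint Doug describes can be sketched with a small, hypothetical example (the table, keys, and attributes are invented here, not Command Alkon's schema). A lookup by a known key is exactly what DynamoDB is designed for; an ad-hoc analytical question that nobody designed the keys for degenerates into a paginated scan, which is what pushes teams toward an external, SQL-queryable index like the one discussed above.

```python
# Hypothetical illustration of the DynamoDB access-pattern trade-off.
# Table name, keys, and attributes are invented for the example.
import boto3
from boto3.dynamodb.conditions import Attr, Key

dynamodb = boto3.resource("dynamodb")
deliveries = dynamodb.Table("deliveries")

def get_ticket(ticket_id):
    """Fast path: the access pattern was known up front, so the key design supports it."""
    resp = deliveries.query(KeyConditionExpression=Key("ticket_id").eq(ticket_id))
    return resp["Items"]

def late_deliveries_in_region(region):
    """Slow path: an ad-hoc question the keys were never designed for.
    This scans and paginates through the whole table, which is fine at small
    scale and painful with hundreds of millions of items."""
    items, start_key = [], None
    while True:
        kwargs = {
            "FilterExpression": Attr("region").eq(region) & Attr("status").eq("LATE")
        }
        if start_key:
            kwargs["ExclusiveStartKey"] = start_key
        resp = deliveries.scan(**kwargs)
        items.extend(resp["Items"])
        start_key = resp.get("LastEvaluatedKey")
        if not start_key:
            return items
```

Keeping the transactional writes in DynamoDB and answering the second kind of question from a real-time index is the division of labor the interview describes.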

Published Date : Mar 30 2022

SUMMARY :

Lisa Martin talks with Venkat Venkataramani of Rockset and Doug Moore of Command Alkon about bringing real-time analytics to the heavy building materials industry, one of the least automated industries studied. Doug explains how paper delivery tickets touched more than 50 times per delivery are being replaced by the company's cloud platform, how COVID-driven contactless delivery ("Project Lemonade") accelerated adoption, and how DynamoDB handled transactional scale but strained under ad-hoc reporting. Pairing it with Rockset's indexing gave the team real-time analytics without adding headcount, and both guests close on the role of serverless, cloud-native simplicity in digital disruption.


Venkat Venkataramani, Rockset | CUBE Conversation


 

(upbeat music) >> Hello, welcome to this CUBE Conversation featuring Rockset CEO and co-founder Venkat Venkataramani who selected season two of the AWS Startup Showcase featured company. Before co-founding Rockset Venkat was the engineering director at Facebook, infrastructure team responsible for all the data infrastructure, storing all there at Facebook and he's here to talk real-time analytics. Venkat welcome back to theCUBE for this CUBE Conversation. >> Thanks John. Thanks for having me again. It's a pleasure to be here. >> I'd love to read back and I know you don't like to take a look back but Facebook was huge hyperscale data at scale, really a leading indicator of where everyone is kind of in now so this is about real-time analytics moving from batch to theme here. You guys are at the center, we've talked about it before here on theCUBE, and so let's get in. We've a couple different good talk tracks to dig into but first I want to get your reaction to this soundbite I read on your blog post. Fast analytics on fresh data is better than slow analytics on stale data, fresh beats stale every time, fast beats slow in every space. Where does that come from obviously it makes a lot of sense nobody wants slow data, no one wants to bail data.(giggles) >> Look, we live in the information era. Businesses do want to track, ask much information as possible about their business and want to use data driven decisions. This is now like motherhood and apple pie, no business would say that is not useful because there's more information than what can fit in one person's head that the businesses want to know. You can either do Monday morning quarterback or in the middle of the third quarter before the game is over, you're maybe six points down, you look at what plays are working today, you look at who's injured in your team and who's injured in your opponent and you try to come up with plays that can change the outcome of the game. You still need Monday morning quarterbacking that's not going anywhere, that's batch analytics, that's BI, classic BI, and what the world is demanding more and more is operational intelligence like help me run my business better, don't just gimme a great report at the end of the quarter. >> Yeah, this is the whole trend. Looking back is key to post more like all that good stuff but being present to make future decisions is a lot more mainstream now than ever was you guys are the center of it, and I want to get your take on this data driven culture because the showcase this year for this next episode of the showcase for Startup says, cloud stuff says, data as code something I'm psyched for because I've been saying in theCUBE for many years, data as code is almost as important as infrastructure as code. Because when you think about the application of data in real-time, it's not easy, it's a hard problem and two, you want to make it easy so this is the whole point of this data driven culture that you're on right now. Can you talk about how you see that because this is really one of the most important stories we've seen since the last inflection point. >> Exactly right. What is data driven culture which basically means you stop guessing. You look at the data, you look at what the data says and you try to come up with hypothesis it's still guardrail, it's a guiding light it's not going to tell you what to do, but you need to be able to interrogate your data. 
If every time you ask a question it takes 20 minutes to get an answer from your favorite Alexa device or what have you, you are probably never going to use that device, you will not try to be data driven, and you can't really build that culture. So it's not just about visibility, it's not just about looking back and getting analytics on how the business is doing; you need to be able to interrogate your data in real time in an interactive fashion, and that I think is what real-time analytics gives you. This is what we mean when we say fast analytics on real-time data: as you make changes to your business over the course of your day-to-day work, your week-to-week work, what changes are working? How much impact are they having? If something isn't working you have more questions to figure out why, and being able to answer all of that is how you really build the data-driven culture. It isn't really going to come from just looking at static reports at the end of the week or at the end of the quarter. >> Talk about the latency aspect of the term and how it relates to where it could be a false flag, in the sense that you could say, well, we have low latency, but you're not getting all the data. You've got to get the data, you've got to ingest it, make it addressable, query it, represent it. These are huge things when you factor in every single piece of data; when you're not guessing, latency is a factor. Can you unpack what this new definition is all about, and how do people understand whether they got it right or not? >> A great question. A lot of people say, is five minutes real-time? Because I used to run my thing every six hours. Now for us, if it's more than two seconds behind in terms of your data latency, your data freshness, it's too old. When does the present become the past, and the future hasn't arrived yet? We think it's about one to two seconds. And so everything we do at Rockset, we only call it real-time if it can be within one to two seconds, 'cause that's the present, that's what's happening now. If it's five minutes ago, it's already past tense. So if you kind of break it down, you're absolutely right that you have to be able to bring data into a system in real time without sacrificing freshness, and you store it in a way where you can get fast analytics out of it. Rockset is the only real-time analytics platform with built-in connectors, and this is why we have built-in connectors: without writing a single line of code, you can bring in data in real time from wherever you happen to be managing it today. And when data comes into Rockset, the latency is now about query processing. What is the point of bringing in data in real time if every question you're going to ask is still going to take 20 minutes to come back? Well, then you might as well batch the data in order to load it. So there we have converged indexing, a real-time indexing technology that allows data, as it comes in, to be organized in real time, and we have a distributed SQL engine on top of that. So as long as you can frame your question as a SQL query, you can ask any question on your real-time data and expect sub-second response times.
So that I think is the the combination of the latency having two parts to it, one is how fresh is your data and how fast is your analytics, and you need both, with the simplicity of the cloud for you to really unlock and make real-time analytics to default, as opposed to let me try to do it and batch and see if I can get away with it, but if you really need real-time you have to be able to do both cut down and control your data latency on how fresh your data is, and also make it fast. >> You talk about culture, can you talk about the people you're working with and how that translates into your next topic which is business observability, the next play on words obviously observability if you can measure everything, there shouldn't be any questions that you can't ask. But it's important this culture is shifting from hardcore data engineering to business value kind of coming together at scale. This is kind of where you see the hardcore data folks really bringing that into the business can you talk about this? The people you're working with, and how that's translating to this business observability. >> Absolutely. We work with the world's probably largest Buy Now Pay Later company maybe they're in the top three, they have hundreds of millions of users 300,000+ merchants, working in so many different countries so many different payment methods and there's a very simple problem they have. Some part of their product, some part of their payment system is always down at any given point in time or it has a very high chance of not working. It's not the whole thing is down but, for this one merchant in Switzerland, Apple Pay could be not working and so all of those kinds of transactions might not be processing, and so they had a very classic cloud data warehouse based solution, accumulate all these payments, every six hours they would kind of process and look for anomalies and say, hey, these things needs to be investigated and a response team needs to be tackling these. The business was growing so fast. Those analytical jobs that would run every six hours in batch mode was taking longer than six hours to run and so that was a dead end. They came to Rockset, simply using SQL they're able to define all the metrics they care about across all of their dimensions and they're all accurate up to the second, and now they're able to run their models every minute. And in sort of six hours, every minute they're able find anomalies and run their statistical models, so that now they can protect their business better and more than that, the real side effect of that is they can offer much better quality of a product, much better quality of service to their customer so that the customers are very sticky because now they're getting into the state where they know something is wrong with one of their more merchants, even before the merchants realize that, and that allows them to build a much better product to their end users. So business observability is all about that. It's about do you know really what's happening in your business and can you keep tabs on it, in real-time, as you go about your business and this is what we call operational intelligence, businesses are really demanding operational intelligence a lot more than just traditional BI. >> And we're seeing it in every aspect of a company the digital transformation affects every single department. 
Sales use data to get big sales better, make the product better people use data to make product usage whether it's A/B testing whatnot, risk management, OPS, you name it data is there to drill down so this is a huge part of real-time. Are you finding that the business observability is maturing faster now or where do you put the progress of companies with respect to getting on board with the idea that this wave is here. >> I think it's a very good question. I would say it has gone mainstream primarily because if you look at technologies like Apache Kafka, and you see Confluent doing really really well, those technologies have really enabled now customers and business units, business functions across the spectrum, to be able to now acquire really really important business data in real-time. If you didn't have those mechanisms to acquire the data in real-time, well, you can't really do analytics and get operational intelligence on that. And so the majority is getting there and things are growing very fast as those kinds of technologies get better and better. SaaSification also is a very big component to it which is like more and more business apps are basically becoming SaaS apps. Now that allows everything to be in the cloud and being interconnected and now when all of those data systems are all interconnected, you can now have APIs that make data flow from one system to another all in happening in real-time, and that also unlocks a lot more potential for again, getting better operational intelligence for your enterprise, and there's a subcategory to this which is like B2B SaaS companies also having to build real-time interactive analytics embedded as part of their offering otherwise people wouldn't even want to buy it and so that it's all interconnected. I think the market is emerging, market is growing but it is gone mainstream I would say predominantly because, Kafka, Confluent, and these kinds of real-time data collection and aggregation kind of systems have gone mainstream and now you actually get to dream about operational intelligence which you couldn't even think about maybe five or 10 years ago. >> They're getting all their data together. So to close it out, take us through the bottom line real-time business observability, great for companies collecting their data, but now you got B2B, you got B2C, people are integrating partnerships where APIs are connecting, it could be third party business relationships, so the data collection is not just inside the company it's also outside. This is more value. This is the more of what's going on. >> Exactly. So more and more, instead of going to your data team and demanding real-time analytics what a lot of business units are doing is, they're going to the product analytics platform, the SaaS app they're using for covering various parts of their business, they go to them and demand, either this is my recruiting software, sales software, customer support, gimme more real-time insights otherwise it's not really that useful. And so there is really a huge uptake on all these SaaS companies now building real-time infrastructure powered by Rockset in many cases that actually ends up giving a lot of value to their end customers and that I think is kind of the proof of value for a SaaS product, all the workflows are all very, very important absolutely but almost every amazing SaaS product has an analytics tab and it needs to be fast, interactive and it needs to be real-time. 
It needs you talking about fresh insights that are happening and that is often in a B2B SaaS, application developers always comes and tell us that's the proof of value that we can show how much value that that particular SaaS application is creating for their customer. So I think it's all two sides of the same coin, large enterprises want to build it themselves because now they get more control about how exactly the problem needs to be solved and then there are also other solutions where you rely on a SaaS application, where you demand that particular application gives you. But at the end of the day, I think the world is going real-time and we are very, very happy to be part of this moment, operational intelligence. For every classic BI use case I think there are 10 times more operational intelligence use cases. As Rockset we are on a mission to eliminate all cost and complexity barriers and really really provide fast analytics on real-time data with the simplicity of the cloud and really be part of this moment. >> You guys having some fun right now these days through in the middle of all the action. >> Absolutely. I think we're growing very fast, we're hiring, we are onboarding as many customers as possible and really looking forward to being part of this moment and really accelerate this moment from business intelligence to operational intelligence. >> Well, Venkat great to see you. Thanks for coming on theCUBE as part of this CUBE Conversation, you're in the class of AWS Startup Showcase season two, episode two. Thanks for coming on. Keep it right there everyone watch more action from theCUBE. Your leader in tech coverage, I'm John Furrier your host. Thanks for watching. (upbeat music)
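Venkat's definition splits real time into two measurable budgets: how fresh the data is when it lands, and how fast a question over it comes back. A generic sketch of checking both might look like this (the two-second thresholds mirror the framing in the conversation; the query callable is a placeholder, not a Rockset API).

```python
# Generic sketch of the two latencies discussed above: data freshness and
# query speed. Thresholds follow the "one to two seconds" framing; the
# run_query callable stands in for whatever engine actually answers the question.
import time

FRESHNESS_BUDGET_S = 2.0  # data older than this is already "the past"
QUERY_BUDGET_S = 2.0      # interactive questions should come back about this fast

def data_freshness_seconds(event_timestamps):
    """Age of the newest ingested event, given epoch-second timestamps."""
    return time.time() - max(event_timestamps)

def timed_query(run_query):
    """Run a query callable and report how long it took."""
    start = time.perf_counter()
    result = run_query()
    return result, time.perf_counter() - start

def is_realtime(event_timestamps, run_query):
    fresh_enough = data_freshness_seconds(event_timestamps) <= FRESHNESS_BUDGET_S
    _, elapsed = timed_query(run_query)
    return fresh_enough and elapsed <= QUERY_BUDGET_S
```

Tracking both numbers separately makes it obvious whether a slow dashboard is an ingestion problem or a query problem.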

Published Date : Mar 23 2022

SUMMARY :

John Furrier and Rockset CEO Venkat Venkataramani discuss why fast analytics on fresh data beats slow analytics on stale data. Venkat defines real time as data freshness and query response within about one to two seconds, argues that a data-driven culture depends on being able to interrogate data interactively, and describes business observability through the example of a Buy Now Pay Later company that moved from six-hour batch jobs to per-minute anomaly detection. The conversation closes on SaaS applications embedding real-time analytics and the shift from business intelligence to operational intelligence.


AWS Heroes Panel | Open Cloud Innovations


 

(upbeat music) >> Hello, and welcome back to AWS Startup Showcase, I'm John Furrier, your host. This is the Hero panel, the AWS Heroes. These are folks that have a lot of experience in Open Source, having fun building great projects and commercializing the value and best practices of Open Source innovation. We've got some great guests here. Liz Rice, Chief Open Source Officer, Isovalent. CUBE alumni, great to see you. Brian LeRoux, who is the Co-founder and CTO of begin.com. Erica Windisch who's an Architect for Developer Experience. AWS Hero, also CUBE alumni. Casey Lee, CTO Gaggle. Doing some great stuff in ed tech. Great collection of experts and experienced folks doing some fun stuff, welcome to this conversation this CUBE panel. >> Hi. >> Thanks for having us. >> Hello. >> Let's go down the line. >> I don't normally do this, but since we're remote and we have such great guests, go down the line and talk about why Open Source is important to you guys. What projects are you currently working on? And what's the coolest thing going on there? Liz we'll start with you. >> Okay, so I am very involved in the world of Cloud Native. I'm the chair of the technical oversight committee for the Cloud Native Computing Foundation. So that means I get to see a lot of what's going on across a very broad range of Cloud Native projects. More specifically, Isovalent. I focus on Cilium, which is it's based on a technology called EBPF. That is to me, probably the most exciting technology right now. And then finally, I'm also involved in an organization called OpenUK, which is really pushing for more use of open technologies here in the United Kingdom. So spread around lots of different projects. And I'm in a really fortunate position, I think, to see what's happening with lots of projects and also the commercialization of lots of projects. >> Awesome, Brian what project are you working on? >> Working project these days called Architect. It's a Open Source project built on top of AWSM. It adds a lot of sugar and terseness to the SM experience and just makes it a lot easier to work with and get started. AWS can be a little bit intimidating to people at times. And the Open Source community is stepping up to make some of that bond ramp a little bit easier. And I'm also an Apache member. And so I keep a hairy eyeball on what's going on in that reality all the time. And I've been doing this open-source thing for quite a while, and yeah, I love it. It's a great thing. It's real science. We get to verify each other's work and we get to expand and build on human knowledge. So that's a huge honor to just even be able to do that and I feel stoked to be here so thanks for having me. >> Awesome, yeah, and totally great. Erica, what's your current situation going on here? What's happening? >> Sure, so I am currently working on developer experience of a number of Open Source STKS and CLI components from my current employer. And previously, recently I left New Relic where I was working on integrating with OpenTelemetry, as well as a number of other things. Before that I was a maintainer of Docker and of OpenStack. So I've been in this game for a while as well. And I tend to just put my fingers in a lot of little pies anywhere from DVD players 20 years ago to a lot of this open telemetry and monitoring and various STKs and developer tools is where like Docker and OpenStack and the STKs that I work on now, all very much focusing on developer as the user. >> Yeah, you're always on the wave, Erica great stuff. 
Casey, what's going on? Do you got some great ed techs happening? What's happening with you? >> Yeah, sure. The primary Open Source project that I'm contributing to right now is ACT. This is a tool I created a couple of years back when GitHub Actions first came out, and my motivation there was I'm just impatient. And that whole commit, push, wait time where you're testing out your pipelines is painful. And so I wanted to build a tool that allowed developers to test out their GitHub Actions workflows locally. And so this tool uses Docker containers to emulate, to get up action environment and gives you fast feedback on those workflows that you're building. Lot of innovation happening at GitHub. And so we're just trying to keep up and continue to replicate those new features functionalities in the local runner. And the biggest challenge I've had with this project is just keeping up with the community. We just passed 20,000 stars, and it'd be it's a normal week to get like 10 PRs. So super excited to announce just yesterday, actually I invited four of the most active contributors to help me with maintaining the project. And so this is like a big deal for me, letting the project go and bringing other people in to help lead it. So, yeah, huge shout out to those folks that have been helping with driving that project. So looking forward to what's next for it. >> Great, we'll make sure the SiliconANGLE riders catch that quote there. Great call out. Let's start, Brian, you made me realize when you mentioned Apache and then you've been watching all the stuff going on, it brings up the question of the evolution of Open Source, and the commercialization trends have been very interesting these days. You're seeing CloudScale really impact also with the growth of code. And Liz, if you remember, the Linux Foundation keeps making projections and they keep blowing past them every year on more and more code and more and more entrance coming in, not just individuals, corporations. So you starting to see Netflix donates something, you got Lyft donate some stuff, becomes a project company forms around it. There's a lot of entrepreneurial activity that's creating this new abstraction layers, new platforms, not just tools. So you start to see a new kickup trajectory with Open Source. You guys want to comment on this because this is going to impact how fast the enterprise will see value here. >> I think a really great example of that is a project called Backstage that's just come out of Spotify. And it's going through the incubation process at the CNCF. And that's why it's front of mind for me right now, 'cause I've been working on the due diligence for that. And the reason why I thought it was interesting in relation to your question is it's spun out of Spotify. It's fully Open Source. They have a ton of different enterprises using it as this developer portal, but they're starting to see some startups emerging offering like a hosted managed version of Backstage or offering services around Backstage or offering commercial plugins into Backstage. And I think it's really fascinating to see those ecosystems building up around a project and different ways that people can. I'm a big believer. You cannot sell the Open Source code, but you can sell other things that create value around Open Source projects. So that's really exciting to see. >> Great point. Anyone else want to weigh in and react to that? Because it's the new model. It's not the old way. I mean, I remember when I was in college, we had the Pirate software. 
Open Source wasn't around, so you had to deal under the table. Now it's free. But I mean, the old way was you had to convince the enterprise; you'd build the community, and the community managed the quality of the code. And then you had to build the company to make sure they could support it. Now the companies are actually involved in it, right? And then new startups are forming faster, and the proof points are shorter and highly accelerated for that. I mean, it's a whole new- >> It's a Cambrian explosion, and it's great. It's one of those things that's challenging for new developers, because they come in and they're like, "Whoa, what is all this stuff that I'm supposed to figure out?" And there's no right answer and there's no wrong answer. There's just tons of it. And I think there's a desire for us to have one sort of well-known, well-trodden happy path, but honestly we're a lot better off with a more diverse community, with lots of options, with lots of ways to approach these problems. And I think it's just great. A challenge that we have with all these options, this Cambrian explosion of projects and all these competing ideas, is sustainability right now; it's a bit of a tricky question to answer. We know that there's a commercialization aspect that helps us fund these projects, but how we compose the open versus the commercial source is still a bit of a tricky question and a tough one for a lot of folks. >> Erica, would you chime in on that for a second? I want to get your angle on that, this experience and all this code, and I'm a new person, I'm an existing person. Do I get like a blue check mark and verify? I mean, these are questions like, well, how do you navigate? >> Yeah, I think this has been something happening for a while. I mean, back in the early OpenStack days, 2010, for instance, Rackspace open-sourcing OpenStack with Anso Labs and so forth, and then all these companies forming and creating startups around this. I started at a company called Cloudscaling back in late 2010, and we had some competitors such as Piston and so forth, where a lot of the Anso Labs people went. But then, the real winners I think from OpenStack ended up being the enterprises that jumped in. We had Red Hat in particular, as well as HP and IBM, jumping in and investing in OpenStack, and really proving out a lot of... not that it was the first time, but this is when we started seeing billions of dollars pouring into Open Source projects and Open Source foundations, such as the OpenStack Foundation, which preceded a lot of the things that we now see with the Linux Foundation, which was then created a little bit later. And at the same time, I'm also reflecting a little bit on what Brian said, because there are projects that don't get funded, that don't get the same attention, but they're also getting used quite significantly. Things like Log4j really bring this to the spotlight, in terms of projects that are used everywhere by everything, with significant outsized impacts on the industry, that are not getting funded, that aren't flashy enough, that aren't exciting enough, because it's just logging. But a vulnerability in it brings everything and everybody down, and has possibly billions of dollars of impact to our industry, because nobody wanted to fund this project.
>> I think that brings up the commercialization point, about maybe bringing a venture capital model in and saying, "Hey, that boring little logging thing could be a key ingredient for, say, solving some observability problems, so let's put some cash in." Again, we'd never seen that before. Now you're starting to see that kind of really smart investment thesis going into Open Source projects. I mean, Prometheus, Crafter, these are projects that turned into companies. This is turning up companies. >> A decade ago, there was no money in dev tools. I think that's been fully debunked now. It used to be a concept that the venture community believed, but there's just too much evidence to the contrary, companies like Cash Court, Datadog, the list goes on and on. I think the challenge for the Open Source (indistinct) comes back to foundations and working (indistinct) these developers make this code safe and secure. >> Casey, what's your reaction to all of this? You've got, so a project has gained some traction, got some momentum. There's a lot of mission critical, I won't say white spaces, but opportunities in the big cloud game happening. And there's a lot of, I won't say too much entrepreneurial, but there's a lot of community action happening that's pre-commercialization that's getting traction. How does this all develop naturally and then vector in quickly when it hits? >> Yeah, I want to go back to the Log4j topic real quick. I think it's a great example of an area that we need to do better at. There was a cool article that Rob Pike wrote describing how to quantify the criticality; I think "Quantifying Criticality" was the article he wrote, on how to use metrics to determine how valuable, how important a piece of Open Source is to the community. And we really need to highlight that more. We need a way to make it more clear how important this software is, how many people depend on it, and how many people are contributing to it. Because right now we all do that ourselves. Like, if I'm going to evaluate an Open Source project, sure, I'll look at how many stars it has and how many contributors it has, but I've got to go through and do all that work myself and come up with my own answer. It would be really great if we had an agreed-upon method for ranking the criticality of software, but then also the risk: hey, this is used by a ton of people, but nobody's contributing to it anymore. That's a concern. And it would be great to signal that to potential users, whether or not it makes sense. The Open Source Security Foundation, just getting off the ground, is doing some work in this space, and I'm really excited to see where they go with that, looking at ways to score criticality. >> Well, this brings up a good point. While we've got everyone here, let's take a minute and plug a project you think is not getting the visibility it needs. Let's go through each of you, point out a project that you think people should be looking at and talking about, that might get some free visibility here. Anyone want to highlight projects they think should be focused on more, or that need a little bit of love?
>> I think, I mean, particularly if we're talking about these sort of vulnerability issues, there's a ton of work going on, like in the Secure Software Foundation, other foundations, I think there's work going on in Apache somewhere as well around the bill of material, the software bill of materials, the Secure Software supply chain security, even enumerating your dependencies is not trivial today. So I think there's going to be a ton of people doing really good work on that, as well as the criticality aspect. It's all like that. There's a really great xkcd cartoon with your software project and some really big monolithic lumps. And then, this tiny little piece in a very important point that's maintained by somebody in his bedroom in Montana or something and if you called it out. >> Yeah, you just opened where the next lightening and a bottle comes from. And this is I think the beauty of Open Source is that you get a little collaboration, you get three feet in a cloud of dust going and you get some momentum, and if it's relevant, it rises to the top. I think that's the collective intelligence of Open Source. The question I want to ask that the panel here is when you go into an enterprise, and now that the game is changing with a much more collaborative and involved, what's the story if they say, hey, what's in it for me, how do I manage the Open Source? What's the current best practice? Because there's no doubt I can't ignore it. It's in everything we do. How do I organize around it? How do I build around it to be more efficient and more productive and reduce the risk on vulnerabilities to managing staff, making sure the right teams in place, the right agility and all those things? >> You called it, they got to get skin in the game. They need to be active and involved and donating to a sustainable Open Source project is a great way to start. But if you really want to be active, then you should be committing. You should have a goal for your organization to be contributing back to that project. Maybe not committing code, it could be committing resources into the darks or in the tests, or even tweeting about an Open Source project is contributing to it. And I think a lot of these enterprises could benefit a lot from getting more active with the Open Source Foundations that are out there. >> Liz, you've been actively involved. I know we've talked personally when the CNCF started, which had a great commercial uptake from companies. What do you think the current state-of-the-art kind of equation is has it changed a little bit? Or is it the game still the same? >> Yeah, and in the early days of the CNCF, it was very much dominated by vendors behind the project. And now we're seeing more and more membership from end-user companies, the kind of enterprises that are building their businesses on Cloud Native, but their business is not in itself. That's not there. The infrastructure is not their business. And I think seeing those companies, putting money in, putting time in, as Brian says contributing resources quite often, there's enough money, but finding the talent to do the work and finding people who are prepared to actually chop the wood and carry the water, >> Exactly. >> that it's hard. >> And if enterprises can find peoples to spend time on Open Source projects, help with those chores, it's hugely valuable. And it's one of those the rising tide floats all the boats. We can raise security, we can reduce the amount of dependency on maintain projects collectively. 
>> I think the business models are there. I think one of the things I'll react to, and then get your guys' comments, is I remember one KubeCon, it was one of the early ones, and I remember seeing Apple having a booth, but nobody was manning it. It was just an Apple booth. They weren't doing anything, but they were recruiting. And I think you saw the transition of a business model where the worry about a big vendor taking over a project and having undue influence over it goes away, because I think this idea of participation is also talent, but also committing that talent back into the communities as a model, as a business model, like, okay, hire some great people, but listen, don't screw up the Open Source piece of it 'cause that's critical. >> It's also a hiring channel, right? They can use those contributions to source that talent and build the reputation in the communities that they depend on. And so there's really a lot of benefit to the larger organizations that can do this. They'll have a huge pipeline of really qualified engineers right out the gate without having to resort to cheesy whiteboard interviews, which is pretty great. >> Yeah, I agree with a lot of this. One of my concerns is that a lot of these corporations tend to focus very narrowly on certain projects on which they feel they depend greatly; they'll invest in OpenStack, they'll invest in Docker, they'll invest in some of the CNCF projects. And then these other projects get ignored. Something that I've been a proponent of for a while is observability of your dependencies. And I don't think there are quite enough projects and solutions to this. And it sounds like, maybe from Liz, there are some projects that I don't know about, but I also know that there are some startups like Snyk and so forth that help with a little bit of this problem, but I think we need more focus on some of these edges. And I think companies need to do better, both in providing some sort of solution for observability of the dependencies, as well as understanding those dependencies and managing them. I've seen companies, for instance, depending on software that they actively don't want to use based on certain criteria that they've already set. Like, they'll set a requirement that any project they use has a code of conduct, but they'll then use projects that don't have codes of conduct. And if they don't have a code of conduct, then employees are prohibited from working on those projects. So you've locked yourself into a place where you're depending on software that you have instructed your employees not to contribute to, for certain legal and other reasons. So you need to draw a line in the sand and then recognize that those projects are ones that you don't want to consume, and then not use them, and have observability around these things. >> That's a great point. I think we have 10 minutes left. I want to just shift to a topic that I think is relevant. And that is, as Open Source software develops, you see the under-the-hood kind of software, SREs, developing very quickly at cloud scale, but also you've got your classic software developers who are writing code. So you have supply chain, software supply chain challenges. You mentioned developer experience around how to code. You have now automation in place. So you've got the development of all these things that are happening. Like, I just want to write software. Some people want to get in and do infrastructure as code, so DevSecOps is here.
So what does that look like going forward? How is the future of Open Source going to let the developers who just want to code move quickly? And the folks who want to tweak the infrastructure, make it a bit more efficient, any views on that? >> At Gaggle, we're using AWS' CDK exclusively for our infrastructure as code. And it's a great transition for developers: instead of writing YAML or JSON, or even HCL, for their infrastructure code, now they're writing code in the languages that they're used to, Python or JavaScript, and what that's providing is an easier transition for developers into that infrastructure as code at Gaggle here, but it's also providing an opportunity to provide reusable constructs that some devs can build on. So if we've got a very opinionated way to deploy a serverless app with a database and auto-scaling behind it and all that stuff, we can present that to a developer as a library, and they can just consume it as it is. Maybe that's as deep as they want to go and they're happy with that. But if they want to go deeper into it, they can either use some of the lower level constructs or create PRs to the platform team to have those constructs changed to fit their needs. So it provides a nice on-ramp for developers to use the tools and languages they're used to, and then also go deeper as they need. >> That's awesome. Does that mean they're not full stack developers anymore, that they're half stack developers because that's taken care of for them? >> I don't know either. >> We'll in. >> No, only kidding. Anyway, any other reactions to this whole, I just want to code, make it easy for me, and some people want to get down and dirty under the hood? >> So I think that for me, Docker was always a key part of this. I don't know when DevSecOps was coined exactly, but I was talking with people about it back in 2012. And when I joined Docker, it was a part of that vision for me, was that Docker was applying these security principles by default for your application. It wasn't, I mean, yes, everybody adopted it because of the portability and the acceleration of development, but it was, for me, the fact that it was limiting what you could do from a security angle by default, and then giving you these tunables so that you can control it further. You asked about a project that may not get enough recognition; it's something called DockerSlim, which is designed to optimize your containers and will make them smaller, but it also constrains the security footprint, and will remove capabilities from the container. It will help you build security profiles for AppArmor and the Red Hat one, SELinux. >> SELinux. >> Yeah, and this is something that I think for a lot of developers is kind of outside of the realm of things that they're really thinking about. So the more that we can automate those processes and make it easier out of the box for users or for... when I say users, I mean developers, so that it's straightforward and automatic, and also giving them the capability of refining it and tuning it as needed, or simply choosing platforms like serverless offerings, which have these security constraints built in out of the box and are sometimes maybe less tunable, but very strong by default. And I think that's a good place for us to be, where we just enforce these things and make you do things in a secure way. >> Yeah, I'm a huge fan of Kubernetes, but it's not the right hammer for every nail.
And there are absolutely tons of applications that are better served by something like Lambda, where a lot more of that security surface is taken care of for the developer. And I think we will see better tooling around security profiling and making it easier to shrink-wrap your applications. There are plenty of products out there that can help you with this in a cloud native environment. But I think for the smaller developer, let's say, or an earlier stage company, yeah, it needs to be so much more straightforward. It really does. >> Really an interesting time. 10 years ago, when I was working at Adobe, we used to requisition all these analysts to tell us how many developers there were for the market. And we thought there were about 20 million developers. If GitHub's to be believed, we think there are now around 80 million developers. So both these groups are probably wrong in their numbers, but the takeaway here for me is that we've got a lot of new developers, and a lot of these new developers are really struck by a paradox of choice. And they're typically starting on the front end. And so there's a lot of movement in the stack towards the front end. We saw that at re:Invent when Amazon was really pushing Amplify, 'cause they're seeing this too. It's interesting because this is where folks start. And so a lot of the abstractions are moving in that direction, but maybe not always necessarily totally appropriate. And so finding the right balance for folks is still a work in progress. Like Lambda is a great example. It lets me focus totally on just business logic. I don't have to think about infrastructure pretty much at all. And if I'm newer to the industry, that makes a lot of sense to me. As use cases expand, all of a sudden, reality intervenes, and it might not be appropriate for everything. And so figuring out what those edges are is still the challenge, I think. >> All right, thank you very much for coming on the CUBE here, panel. AWS Heroes, thanks everyone for coming. I really appreciate it, thank you. >> Thank you. >> Thank you. >> Okay. >> Thanks for having me. >> Okay, that's a wrap here, back to the program and the awesome startups. Thanks for watching. (upbeat music)

Published Date : Jan 26 2022

How Open Source is Changing the Corporate and Startup Enterprises | Open Cloud Innovations


 

(gentle upbeat music) >> Hello, and welcome to theCUBE presentation of the AWS Startup Showcase Open Cloud Innovations. This is season two, episode one of an ongoing series covering exciting startups from the AWS ecosystem. Talking about innovation, here it's open source for this theme. We do this every episode, we pick a theme and have a lot of fun talking to the leaders in the industry and the hottest startups. I'm your host John Furrier here with Lisa Martin in our Palo Alto studios. Lisa, great series, great to see you again. >> Good to see you too. Great series, always such spirited conversations with very empowered and enlightened individuals. >> I love the episodic nature of these events, we get more stories out there than ever before. They're the hottest startups in the AWS ecosystem, which is dominating the cloud sector. And there's a lot of them really changing the game on cloud native and the enablement; the stories that are coming out here are pretty compelling, not just from startups, they're actually penetrating the enterprise, and the buyers are changing their architectures, and it's just really fun to catch the wave here. >> They are, and one of the things too about the open source community is these companies embracing that and how that's opening up their entry, to your point, into the enterprise. I was talking with several customers, companies who were talking about how 70% of their pipeline comes from the open source community. That's using the premium version of the technology. So, it's really been a very smart, strategic way into the enterprise. >> Yeah, and I love the format too. We get the keynote we're doing now, opening keynote, some great guests. We have Serge on from the AWS Startup program, he is the global startups lead. We got Swami coming on, and then closing keynote with Deepak Singh, who's really grown in the Amazon organization from containers to now compute services, which now span how modern applications are being built. And I think the big trend that we're seeing, that these startups are riding on, that big wave, is cloud native driving the modern architecture for software development. Not just startups, but existing large ISVs and software companies are rearchitecting, and the customers who buy their products and services in the cloud are rearchitecting too. So, it's a whole new growth wave coming in, the modern era of cloud some say, and it's exciting, a small startup could be the next big name tomorrow. >> One of the things that kind of was a theme throughout the conversations that I had with these different guests was, from a modern application security perspective, security is key, but it's not just about shifting left. It's about doing so while empowering the developers. They don't have to be security experts. They need to have a developer brain and a security heart, and how those two organizations within companies can work better together, more collaboratively, but ultimately empowering those developers, which goes a long way. >> Well, for the folks who are watching this, the format is very simple. We have a keynote, editorial keynote speakers come in, and then we're going to have a bunch of companies who are going to present their story and their showcase. We've interviewed them, myself, you, Dave Vallante and Dave Nicholson from theCUBE team. They're going to tell their stories, and between the companies and the AWS heroes, 14 companies are represented, and some of them with new business models, and Deepak Singh, who leads the AWS team, he's going to have the closing keynote.
He talks about the new changing business model in open source, not just the tech, which has a lot of tech, but how companies are being started around the new business models around open source. It's really, really amazing. >> I bet, and does he see any specific verticals that are taking off? >> Well, he's seeing the contribution from big companies like AWS and the Facebook's of the world and large companies, Netflix, Intuit, all contributing content to the open source and then startups forming around them. So Netflix does some great work. They donated to open source and next thing you know a small group of people get together entrepreneurs, they form a company and they create a platform around it with unification and scale. So, the cloud is enabling this new super application environment, superclouds as we call them, that's emerging and this new supercloud and super applications are scaling data-driven machine learning and AI that's the new formula for success. >> The new formula for success also has to have that velocity that developers expect, but also that the consumerization of tech has kind of driven all of us to expect things very quickly. >> Well, we're going to bring in Serge Shevchenko, AWS Global Startup program into the program. Serge is our partner. He is the leader at AWS who has been working on this program Serge, great to see you. Thanks for coming on. >> Yeah, likewise, John, thank you for having me very excited to be here. >> We've been working together on collaborating on this for over a year. Again, season two of this new innovative program, which is a combination of CUBE Media partnership, and AWS getting the stories out. And this has been a real success because there's a real hunger to discover content. And then in the marketplace, as these new solutions coming from startups are the next big thing coming. So, you're starting to see this going on. So I have to ask you, first and foremost, what's the AWS startup showcase about. Can you explain in your terms, your team's vision behind it, and why those startup focus? >> Yeah, absolutely. You know John, we curated the AWS Startup Showcase really to bring meaningful and oftentimes educational content to our customers and partners highlighting innovative solutions within these themes and ultimately to help customers find the best solutions for their use cases, which is a combination of AWS and our partners. And really from pre-seed to IPO, John, the world's most innovative startups build on AWS. From leadership downward, very intentional about cultivating vigorous AWS community and since 2019 at re:Invent at the launch of the AWS Global Startup program, we've helped hundreds of startups accelerate their growth through product development support, go to market and co-sell programs. >> So Serge question for you on the theme of today, John mentioned our showcases having themes. Today's theme is going to cover open source software. Talk to us about how Amazon thinks about opensource. >> Sure, absolutely. And I'll just touch on it briefly, but I'm very excited for the keynote at the end of today, that will be delivered by Deepak the VP of compute services at AWS. We here at Amazon believe in open source. In fact, Amazon contributes to open source in multiple ways, whether that's through directly contributing to third-party project, repos or significant code contributions to Kubernetes, Rust and other projects. And all the way down to leadership participation in organizations such as the CNCF. 
And supporting dozens of ISVs myself over the years, I've seen explosive growth when it comes to open source adoption. I mean, look at projects like Checkov: within 12 months of launching their open source project, they had about a million users. And another great example is Falco; within under a decade, actually, they've had about 37 million downloads, and that's about a 300% increase since it became an incubating project in the CNCF. So, very exciting things that we're seeing here at AWS. >> So explosive growth, lot of content. What do you hope that our viewers and our guests are going to be able to get out of today? >> Yeah, great question, Lisa. I really hope that today's event will help customers understand why AWS is the best place for them to run open source and commercial software, and which partner solutions will help them along their journey. I think that today the lineup, through the partner solutions and Deepak at the end with the closing keynote, is going to present a very valuable narrative for customers and startups in selecting where and which projects to run on AWS. >> That's great stuff, Serge, love to have you on, and again, I want to just say, really, congratulate your team and we enjoy working with them. We think this showcase does a great service for the community. It's kind of open source in its own way, everyone co-contributing and working on it out there, but you're really getting the voices out at scale. We've got companies like Armory, Kubecost, Sysdig, Tidelift, Codefresh. I mean, these are some of the companies that are changing the game. We even had Patreon, a customer, and one of the partners, Snyk, with security, all the big names in the startup scene. Plus, from AWS, Deepak and Swami are going to be on, and the AWS Heroes. I mean, really at scale, and this is really great. So, thank you so much for participating and enabling all of this. >> No, thank you to theCUBE. You've been a great partner in this whole process, very excited for today. >> Thanks Serge, really appreciate it. Lisa, what a great segment that was, kicking off the event. We've got a great lineup coming up. We've got the keynote, final keynote fireside chat with Deepak Singh, a big name at AWS, but Serge in the startup showcase, really innovative. >> Very innovative, and in a short time period; he talked about the launch of this at re:Invent 2019. They've helped hundreds of startups. We've had over 50, I think, on the showcase in the last year or so, John. So we've really gotten to cover a lot of great customers, a lot of great stories, a lot of great content coming out of theCUBE. >> I love the openness of it. I love the scale, the storytelling. I love the collaboration, a great model, Lisa, great to work with you. We also have Dave Vallante and Dave Nicholson interviewing. They're not here, but let's kick off the show. Let's get started with our next guest, Swami. A leader at AWS, Swami just got promoted to VP of database, but he also ran machine learning and AI at AWS. He is a leader. He's the author of the original DynamoDB paper, which is celebrating its 10th anniversary and really impacted distributed computing and open source. Swami's introduced many open source aspects of products within AWS and has been a leader on the engineering side for many, many years at AWS, from an intern to now an executive. Swami, great to see you. Thanks for coming on our AWS startup showcase. Thanks for spending the time with us. >> My pleasure, thanks again, John. Thanks for having me.
>> I wanted to just, if you don't mind, ask about the database market over the past 10 to 20 years. Cloud and application development, as you see, have changed a lot. You've been involved in so many product launches over the years. Cloud and machine learning are the biggest waves happening, to your point, to what you're doing now. Software is under the covers, it's powering it all, infrastructure as code. Open source has been a big part of it and it continues to grow and change. Deepak Singh from AWS talks about the business model transformation, how, like, Netflix donates to the open source, then a company starts around it and creates more growth. Machine learning and all the open source conversations around automation, as developers and builders, like software, as cloud and machine learning become the key pistons in the engine. This is a big wave, what's your view on this? How has cloud scale and data impacted the software market? >> I mean, that's a broad question. So I'm going to break it down to kind of give some of the background on how we are thinking about it. First, I'd say when it comes to open source, I'll start off by saying the longevity and viability of open source are very important to our customers, and that is why we have been a significant contributor and supporter of these communities. I mean, there are several efforts in open source, even internally, by actually open sourcing some of our key Amazon technologies like Firecracker or BottleRocket or our CDK to help advance the industry. For example, CDK itself provides some really powerful ways to build and configure cloud services as well. And we also contribute to a lot of different open source projects that are existing ones: OpenTelemetry and Linux, Java, Redis and Kubernetes, Grafana and Kafka and Robot Operating System and Hadoop, Lucene and so forth. So, I think I can go on and on, but even now, I'd say in the database and observability space, say machine learning, we have always started with embracing open source in a big, material way. If you see, even in deep learning frameworks, we championed MXNet and some of the core components, and we open sourced our AutoML technology AutoGluon, and we also open sourced and collaborated with partners like Facebook Meta on PyTorch, sharing some major components there, and then there's OpenSearch and our edge compiler work. So, I would say the number one thing is, I mean, we are actually very, very excited to partner with the broader community on problems that really matter to the customers and actually ensure that they are able to get amazing benefit of this. >> And I see machine learning is a huge thing. If you look at how cloud grew, and when you had the DynamoDB paper, when you wrote it, that was the beginning of what I call the cloud surge. It was the beginning of not just being a resource versus building a data center, certainly a great alternative. Every startup did it. That's history, phase one, an inning and a half, first half inning. Then it became large scale. Machine learning feels like the same way now. You feel like you're seeing a lot of people using it. A lot of people are playing around with it. It's evolving. It's been around as a science, but combined with cloud scale, this is a big thing. What should people who are in the enterprise think about, how should they think about machine learning? How have some of your top customers thought about machine learning as they refactor their applications?
What are some of the things that you can share from your experience and journey here? >> I mean, one of the key things I'd say just to set some context on scale and numbers. More than one and a half million customers use our database analytics or ML services end-to-end. Part of which machine learning services and capabilities are easily used by more than a hundred thousand customers at a really good scale. However, I still think in Amazon, we tend to use the phrase, "It's day one in the age of internet," even though it's an, or the phrase, "Now, but it's a golden one," but I would say in the world of machine learning, yes it's day one but I also think we just woke up and we haven't even had a cup of coffee yet. That's really that early, so. And, but when you it's interesting, you've compared it to where cloud was like 10, 12 years ago. That's early days when I used to talk to engineering leaders who are running their own data center and then we talked about cloud and various disruptive technologies. I still used to get a sense about like why cloud and basic and whatnot at that time, Whereas now with machine learning though almost every CIO, CEO, all of them never asked me why machine learning. Instead, the number one question, I get is, how do I get started with it? What are the best use cases? which is great, and this is where I always tell them one of the learnings that we actually learned in Amazon. So again, a few years ago, probably seven or eight years ago, and Amazon itself realized as a company, the impact of what machine learning could do in terms of changing how we actually run our business and what it means to provide better customer experience optimize our supply chain and so far we realized that the we need to help our builders learn machine learning and the help even our business leaders understand the power of machine learning. So we did two things. One, we actually, from a bottom-up level, we built what I call as machine learning university, which is run in my team. It's literally stocked with professors and teachers who offer curriculum to builders so that they get educated on machine learning. And now from a top-down level we also, in our yearly planning process, we call it the operational planning process where we write Amazon style narratives six pages and then answer FAQ's. We asked everyone to answer one question around, like how do you plan to leverage machine learning in your business? And typically when someone says, I really don't play into our, it does not apply. It's usually it doesn't go well. So we kind of politely encourage them to do better and come back with a better answer. This kind of dynamic on top-down and bottom-up, changed the conversation and we started seeing more and more measurable growth. And these are some of the things you're starting to see more and more among our customers too. They see the business benefit, but this is where to address the talent gap. We also made machine learning university curriculum actually now open source and freely available. And we launched SageMaker Studio Lab, which is a no cost, no set up SageMaker notebook service for educating learner profiles and all the students as well. And we are excited to also announce AIMLE scholarship for underrepresented students as well. So, so much more we can do well. >> Well, congratulations on the DynamoDB paper. That's the 10 year anniversary, which is a revolutionary product, changed the game that did change the world and that a huge impact. 
And now as machine learning goes to the next level, the next intern out there is at school with machine learning. They're going to be writing that next paper; your advice to them real quick. >> My biggest advice is, always, I encourage all the builders to always dream big, and don't be hesitant to speak your mind, as long as you have the right conviction that you're addressing a real customer problem. So when you feel like you have an amazing solution to address a customer problem, take the time to articulate your thoughts better, and then feel free to speak up and communicate to the folks you're working with. And I'm sure any company that nurtures good talent and knows how to hire and develop the best will be willing to listen, and then you will be able to have an amazing impact in the industry. >> Swami, great to have you, a CUBE alumni. Love our conversations, from intern, on the DynamoDB paper, to the technical leader at AWS in database, analytics, and machine learning. Congratulations on all your success, and continue innovating on behalf of the customers and the industry. Thanks for spending the time here on theCUBE and our program, appreciate it. >> Thanks again, John. Really appreciate it. >> Okay, now let's kick off our program. That ends the keynote track here on the AWS startup showcase, season two, episode one. Enjoy the program, and don't miss the closing keynote with Deepak Singh. He goes into great detail on the changing business models and all the exciting open source innovation. (gentle bright music)

Published Date : Jan 26 2022

Steve George, Weaveworks & Steve Waterworth, Weaveworks | AWS Startup Showcase S2 E1


 

(upbeat music) >> Welcome everyone to theCUBE's presentation of the AWS Startup Showcase Open Cloud Innovations. This is season two of the ongoing series. We're covering exciting startups in the AWS ecosystem to talk about open source community stuff. I'm your host, Dave Nicholson. And I'm delighted today to have two guests from Weaveworks: Steve George, COO of Weaveworks, and Steve Waterworth, technical marketing engineer from Weaveworks. Welcome, gentlemen, how are you? >> Very well, thanks. >> Very well, thanks very much. >> So, Steve G., what's the relationship with AWS? This is the AWS Startup Showcase. How do Weaveworks and AWS interact? >> Yeah, sure. So, AWS is an investor in Weaveworks. And we, actually, collaborate really closely around EKS and some specific EKS tooling. So, in the early days of Kubernetes, when AWS was working on EKS, the Elastic Kubernetes Service, we started working on the command line interface for EKS itself. And due to that partnership, we've been working closely with the EKS team for a long period of time, helping them to build the CLI and make sure that users in the community find EKS really easy to use. And so that brought us together with the AWS team, working on GitOps and thinking about how to deploy applications and clusters using this GitOps approach. And we've built that into the EKS CLI, which is an open source tool, a project on GitHub. So, everybody can get involved with that, use it, contribute to it. We love hearing user feedback about how to help teams take advantage of the elastic nature of Kubernetes as simply and easily as possible. >> Well, it's great to have you. Before we get into the specifics around what Weaveworks is doing in this area that we're about to discuss, let's talk about this concept of GitOps. Some of us may have gotten too deep into a Netflix series, and we didn't realize that we've moved on from the world of DevOps or DevSecOps and the like. Explain where GitOps fits into this evolution. >> Yeah, sure. So, really, GitOps is an instantiation, a version of DevOps. And it fits within the idea that, particularly in the Kubernetes world, we have a model in Kubernetes which tells us exactly what we want to deploy. And so what we're talking about is using Git as a way of recording what we want to be in the runtime environment, and then telling Kubernetes, from the configuration that is stored in Git, exactly what we want to deploy. So, in a sense, it's very much aligned with DevOps, because we know we want to bring teams together, help them to deploy their applications, their clusters, their environments. And really, with GitOps, we have a specific set of tools that we can use. And obviously what's nice about Git is it's very much a developer tool; lots and lots of developers use it, the vast majority. And so what we're trying to do is bring those operational processes into the way that developers work. So, really bringing DevOps to that generation through that specific tooling. >> So Steve G., let's continue down this thread a little bit. Why is it necessary then, this sort of added wrinkle? If right now in my organization we have developers who consider themselves to be DevOps folks, and we give them Amazon gift cards each month. And we say, "Hey, it's a world of serverless, "no code, low code, lights-out data centers. "Go out and deploy your code. "Everything should be fine." What's the problem with that model, and how does GitOps come in and address that? >> Right. I think there's a couple of things.
So, for individual developers, one of the big challenges is that, when you watch development teams deploying applications and running them, you watch them switching between all those different tabs, and services, and systems that they're using. So, GitOps has a real advantage for developers, because they're already sat in Git, they're already using their familiar tooling. And so by bringing operations within that developer tooling, you're giving them that familiarity. So, it's one advantage for developers. And then for operations staff, one of the things that it does is it centralizes where all of this configuration is kept. And then you can use things like templating and some other things that we're going to be talking about today to make sure that you automate and go quickly, but you also do that in a way which is reliable, and secure, and stable. So, it's really helping to bring that run fast, but don't break things kind of ethos to how we can deploy and run applications in the cloud. >> So, Steve W., let's start talking about where Weaveworks comes into the picture, and what's your perspective. >> So, yeah, Weaveworks has an engine, a set of software, that enables this to happen. So, think of it as a constant reconciliation engine. So, you've got your declared state, your desired state, is declared in Git. So, this is where all your YAML for all your Kubernetes hangs out. And then you have an agent that's running inside Kubernetes, that's the Weaveworks GitOps agent. And it's constantly comparing the desired state in Git with the actual state, which is what's running in Kubernetes. So, then as a developer, you want to make a change, or an operator, you want to make a change. You push a change into Git. The reconciliation loop runs and says, "All right, what we've got in Git does not match "what we've got in Kubernetes. "Therefore, I will create or destroy a resource, whatever." But it also works the other way. So, if someone does directly access Kubernetes and make a change, then the next time that reconciliation loop runs, it's automatically reverted back to that single source of truth in Git. So, your Kubernetes cluster, you don't get any configuration drift. It's always configured as you desire it to be configured. And as Steve George has already said, from a developer or engineer point of view, it's easy to use. They're just using Git, just as they always have done and continue to do. There's nothing new to learn. No change to working practices. I just push code into Git, magic happens. >> So, Steve W., a little deeper dive on that. When we hear Ops, a lot of us start thinking about it specifically in terms of infrastructure, and especially since with infrastructure, when it's deployed and left out there, even though it's really idle, you're paying for it. So, anytime there's an Ops component to the discussion, cost and resource management come into play. You mentioned this idea of not letting things drift from a template. What are those templates based on? Are they based on... Is this primarily an infrastructure discussion, or are we talking about the code itself, which is outside of the infrastructure discussion? >> It's predominantly around the infrastructure. So, what you're managing in Git, as far as Kubernetes is concerned, is always deployment files, and services, and horizontal pod autoscalers, all those Kubernetes entities. Typically, the source code for your application, be it in Java, Node.js, whatever it is you happen to be writing it in, that's, typically, in a separate repository.
You, typically, don't combine the two. So, you've got one set of repositories, basically, for building your containers, and your CI will run off that, and ultimately push a container into a registry somewhere. Then you have a separate repo, which is your config repo, which declares what version of the containers you're going to run, how many you're going to run, how the services are bound to those containers, et cetera. >> Yeah, that makes sense. Steve G., talk to us about this concept of trusted application delivery with GitOps, and frankly, it's what led to the sort of prior question. When you think about trusted application delivery, where is that intertwinement between what we think of as the application code versus the code that is creating the infrastructure? So, what is trusted application delivery? >> Sure, so, with GitOps, we have the ability to deploy the infrastructure components. And then we also define what the application containers are that would go to be deployed into that environment. And so, this is a really interesting question, because some teams will associate all of the services that an application needs within an application team. And sometimes teams will deploy sort of horizontal infrastructure, which then all application teams' services take advantage of. Either way, you can define that within your configuration, within your GitOps configuration. Now, when you start deploying at speed, particularly when you have multiple different teams doing these sorts of deployments, one of the questions that starts to come up will be from the security team, or someone who's thinking about, well, what happens if we make a deployment which is accidentally incorrect, or if there is a security issue in one of those dependencies, and we need to get a new version deployed as quickly as possible? And so, in the GitOps pipeline, one of the things that we can do is to put in various checkpoints to check that the policy is being followed correctly. So, are we deploying the right number of applications, the right configuration of an application? Does that application follow certain standards that the enterprise has set down? And that's what we talk about when we talk about trusted policy and trusted delivery. Because really, what we're thinking about here is enabling the development teams to go as quickly as possible with their new deployments, but protecting them with automated guard rails. So, making sure that they can go fast, but they are not going to do anything which destroys the reliability of the application platform.
One area of concern is that we're in an environment with DevOps where we started this conversation of trying to help teams to go as quickly as possible. But there are many instances where teams accidentally do things that, nonetheless, are a security issue. They deploy something manually into an environment, they forget about it, and that's something which is wrong. So, helping with this kind of policy-as-code pipeline, ensuring that everything goes through a set of standards, could really help teams. And that's why we call it developer guard rails, because this is about helping the development team by providing automation around the outside that helps them to go faster and relieves them from that mental concern of have they made any mistakes or errors. So, that's one form. And then the other form is the one where you were going, David, which is really around security dependencies within software, a whole supply chain of concern. And what we can do there is, again, have a set of standard scanners and policy checking, which ensures that everything is checked before it goes into the environment. That really helps to make sure that there are no security issues in the runtime deployment. Steve W., anything that I missed there? >> Yeah, well, I'll just say, I'll just go a little deeper on the technology bit. So, essentially, we have a library of policies which get you started. Of course, you can modify those policies, write your own. The library is there just to get you going. So, as a change is made, typically via, say, a GitHub action, the policy engine then kicks in and checks all those deployment files, all those YAML for Kubernetes, and looks for things that are outside of policy. And if that's the case, then the action will fail, and that'll show up on the pull request. So, things like, are your containers coming from trusted sources? You're not just pulling in some random container from a public registry; you're actually using a trusted registry. Things like, are containers running as root, or are they running in privileged mode, which, again, could be a security issue. But it's not just about security, it can also be about coding standards. Are the containers correctly annotated? Is the deployment correctly annotated? Does it have the annotation fields that we require for our coding standards? And it can also be about reliability. Does the deployment script have the health checks defined? Does it have a suitable replica count, so a rolling update will actually do a rolling update? You can't do a rolling update with only one replica. So, you can have all these sorts of checks and guards in there. And then finally, there's an admission controller that runs inside Kubernetes. So, if someone does try and squeeze through and do something a little naughty and go directly to the cluster, it's not going to happen, 'cause that admission controller is going to say, "Hey, no, that's a policy violation. "I'm not letting that in." So, it really just stops it. It stops developers making mistakes. I know, I know, I've done development, and I've deployed things into Kubernetes and haven't got the config quite right, and then it falls flat on its face. And you're sitting there scratching your head. And with the policy checks, then that wouldn't happen. 'Cause you would try and put something in that has a slightly iffy configuration, and it would spit it straight back out at you.
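A minimal sketch of the kind of pre-merge policy check Steve W. describes, assuming plain Python plus PyYAML running in a CI step rather than Weaveworks' actual policy engine or rule names, might look like this:

```python
import sys
import yaml  # PyYAML

# Hypothetical trusted registry prefix; replace with your own.
TRUSTED_REGISTRIES = ("123456789012.dkr.ecr.eu-west-1.amazonaws.com/",)

def check_deployment(doc: dict) -> list:
    """Return a list of policy violations for one Kubernetes Deployment."""
    problems = []
    spec = doc.get("spec", {})
    if spec.get("replicas", 1) < 2:
        problems.append("replicas < 2: a rolling update cannot keep the service up")
    pod = spec.get("template", {}).get("spec", {})
    for container in pod.get("containers", []):
        name, image = container.get("name", "?"), container.get("image", "")
        if not image.startswith(TRUSTED_REGISTRIES):
            problems.append(f"{name}: image '{image}' is not from a trusted registry")
        security = container.get("securityContext", {})
        if not security.get("runAsNonRoot", False):
            problems.append(f"{name}: container may run as root (runAsNonRoot not set)")
        if security.get("privileged", False):
            problems.append(f"{name}: privileged mode is not allowed")
        if "livenessProbe" not in container:
            problems.append(f"{name}: no health check (livenessProbe) defined")
    return problems

if __name__ == "__main__":
    # Run against the config repo's YAML files, e.g. from a GitHub Action step.
    failures = []
    for path in sys.argv[1:]:
        with open(path) as handle:
            for doc in yaml.safe_load_all(handle):
                if doc and doc.get("kind") == "Deployment":
                    failures += [f"{path}: {p}" for p in check_deployment(doc)]
    print("\n".join(failures) or "all checks passed")
    sys.exit(1 if failures else 0)
```

In the pipeline described above, a script like this would be one checkpoint on the pull request; the admission controller inside the cluster remains the separate, last line of defense for anything that bypasses the Git workflow.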
But what is the user experience like? I mean, is this a screen that is reminiscent of the matrix with non-readable characters streaming down that only another machine can understand? What does this look like to the operator? >> Yeah, sure, so, we have a console, a web console, where developers and operators can use a set of predefined policies. And so that's the starting point. And we have a set of recommendations there and policies that you can just attach to your deployments. So, set of recommendations about different AWS resources, deployment types, EKS deployment types, different sets of standards that your enterprise might be following along with. So, that's one way of doing it. And then you can take those policies and start customizing them to your needs. And by using GitOps, what we're aiming for here is to bring both the application configuration, the environment configuration. We talked about this earlier, all of this being within Git. We're adding these policies within Git as well. So, for advanced users, they'll have everything that they need together in a single unit of change, your application, your definitions of how you want to run this application service, and the policies that you want it to follow, all together in Git. And then when there is some sort of policy violation on the other end of the pipeline, people can see where this policy is being violated, how it was violated. And then for a set of those, we try and automate by showing a pull request for the user about how they can fix this policy violation. So, try and make it as simple as possible. Because in many of these sorts of violations, if you're a busy developer, there'll be minor configuration details going against the configuration, and you just want to fix those really quickly. >> So Steve W., is that what the Mega Leaks policy engine is? >> Yes, that's the Mega Leaks policy engine. So, yes, it's a SaaS-based service that holds the actual policy engine and your library of policies. So, when your GitHub action runs, it goes and essentially makes a call across with the configuration and does the check and spits out any violation errors, if there are any. >> So, folks in this community really like to try things before they deploy them. Is there an opportunity for people to get a demo of this, get their hands on it? what's the best way to do that? >> The best way to do it is have a play with it. As an engineer, I just love getting my hands dirty with these sorts of things. So, yeah, you can go to the Mega Leaks website and get a 30-day free trial. You can spin yourself up a little, test cluster, and have a play. >> So, what's coming next? We had DevOps, and then DevSecOps, and now GitOps. What's next? Are we going to go back to all infrastructure on premises all the time, back to waterfall? Back to waterfall, "Hot Tub Time Machine?" What's the prediction? >> Well, I think the thing that you set out right at the start, actually, is the prediction. The difference between infrastructure and applications is steadily going away, as we try and be more dynamic in the way that we deploy. And for us with GitOps, I think we're... When we talk about operations, there's a lots of depth to what we mean about operations. So, I think there's lots of areas to explore how to bring operations into developer tooling with GitOps. So, that's, I think, certainly where Weaveworks will be focusing. >> Well, as an old infrastructure guy myself, I see this as vindication. Because infrastructure still matters, kids. 
And we need sophisticated ways to make sure that the proper infrastructure is applied. People are shocked to learn that even serverless application environments involve servers. So, I tell my 14-year-old son this regularly, he doesn't believe it, but it is what it is. Steve W., any final thoughts on this whole move towards GitOps and, specifically, the Weaveworks secret sauce and superpower. >> Yeah. It's all about (indistinct)... It's all about going as quickly as possible, but without tripping up. Being able to run fast, but without tripping over your shoe laces, which you forgot to tie up. And that's what the automation brings. It allows you to go quickly, does lots of things for you, and yeah, we try and stop you shooting yourself in the foot as you're going. >> Well, it's been fantastic talking to both of you today. For the audience's sake, I'm in California, and we have a gentleman in France, and a gentlemen in the UK. It's just the wonders of modern technology never cease. Thanks, again, Steve Waterworth, Steve George from Weaveworks. Thanks for coming on theCUBE for the AWS Startup Showcase. And to the rest of us, keep it right here for more action on theCUBE, your leader in tech coverage. (upbeat music)

Published Date : Jan 26 2022

Loris Degioanni | AWS Startup Showcase S2 Ep 1 | Open Cloud Innovations


 

>>Welcome to theCUBE's presentation of the AWS Startup Showcase Open Cloud Innovations. This is season two, episode one of the ongoing series covering exciting hot startups from the AWS ecosystem. Today's episode, one of the season two themes, is open source community and open cloud innovations. I'm your host, John Furrier of theCUBE. And today we're excited to be joined by Loris Degioanni, who is the CTO, chief technology officer, and founder of Sysdig, founded in his backyard with some wine and beer. Great to see you. We're here to talk about Falco, finding cloud threats in real time. Thank you for joining us, Loris. Thanks. Good to see you >>Love that your company was founded in your backyard. Classic startup story. You have been growing very, very fast. And the key point of the showcase is to talk about the startups that are making a difference and that are winning and doing well. You guys have done extremely well with your business. Congratulations, but thank you. The big theme is security, and as organizations have moved their business critical applications to the cloud, the attackers have followed. This is really important in the industry. You guys are in the middle of this. What's your view on this? What's your take? What's your reaction? >>Yeah. As we, as an ecosystem, are moving to the cloud more and more, we are developing cloud native applications. We're relying on CI/CD. We are relying on orchestration and containers. Security is becoming more and more important, and I would say more and more complex. I mean, we're reading every day in the news about attacks, about data leaks and so on. There's rarely a day when there's nothing major happening that we can see in the press from this point of view. And definitely things are evolving, things are changing in the cloud. For example, Sysdig just released a cloud native security and usage report a few days ago. And among the many things that we found in our user base, for example, 66% of containers are running as root. So still many organizations are adopting a relatively relaxed way to deploy their applications, not because they like doing it, but because it tends to be, you know, easier and with a little bit less friction.
We've talked to many CEOs and CSOs, and they say that to us. Yeah, it's very challenging, but we're on it. I have to ask you, what should people worry about when securing the cloud? Because they know it's challenging, then they'll have the opportunity on the other side. What are they worried about? What do you see people scared of or addressing, or what should I be worried about when securing the cloud? >>Yeah, definitely. Sometimes when I'm talking about security, I like to compare, you know, the old data center and the old monolithic applications to a castle, you know, a Middle Ages castle. So what did you do to protect your castle? You used to build very thick walls around it, and then a small entrance, and be very careful about the entrance, you know, protect the entrance very well. So what we used to do in the data center was protect everything, you know, the whole perimeter, in a very aggressive way with firewalls and making sure that there was only a very narrow entrance to our data center. And, you know, as much as possible, like active security there, like firewalls or this kind of stuff. Now we're in the cloud. Now everything is much more diffused, right? Our users, our customers are coming from all over the planet, every country, every geography, every time, but also our internal team is coming from everywhere because they're all accessing a cloud environment. >>You know, they're often coming from home or from different offices, again, from every different geography, every different country. So in this configuration, the metaphor that I like to use is an amusement park, right? You have a big area with many important things inside, and the users and operators are coming through different entrances that you cannot really block. You know, you need to let everything come in and operate together. In these kinds of environments, the traditional protection is not really effective. It's overwhelming, and it doesn't really serve the purpose that we need. We cannot build a giant wall around our amusement park. We need people to come in. So what we're finding is that understanding, getting visibility and, you know, detection at runtime is much more important. So it's more like we need to replace the big walls with a granular network of security cameras that allow us to see what's happening in the different areas of our amusement park. And we need to be able to do that in a way that is real time and allows us to react in a smart way as things happen, because in the modern world of cloud, five minutes of delay in understanding that something is wrong means that you're already being, you know, attacked and your data's already being- >>Well, I also love the analogy of the amusement park. And of course, certain rides, you need to be a certain height to ride the rollercoaster. That, I guess, is credentials or security credentials, as we say. But in all seriousness, the perimeter is dead. We all know that. Also, moats were relied upon as well in the old days, you know, you secure the firewall, nothing comes in, goes out, and then once you're in, you don't know what's going on. Now that's flipped. There's no walls, there's no moats, everyone's in. And so you're saying this kind of security camera kind of model is key. So again, this topic here is securing in real time. Yeah. How do you do that? Because it's happening so fast. It's moving. There's a lot of movement. It's not at rest, there's data moving around fast.
What's the secret sauce to identifying real-time threats in an enterprise? >>Yeah. And in our opinion, there are some key ingredients. One is granularity, right? You cannot really understand the threats in your amusement park if you're just watching it from a satellite picture. So you need to be there. You need to be granular. You need to be located in the areas where stuff happens. This means, for example, in security for the cloud and in runtime security, it's important to have sensors that are distributed, that are able to observe every single endpoint. Not only that, but you also need to look at the infrastructure, right? From this point of view, cloud providers like Amazon, for example, offer nice facilities. Like, for example, there's CloudTrail in AWS that collects, in a nice, opinionated, consistent way, the data that is coming from multiple cloud services. So it's important, from one point of view, to go deep into the endpoint, into the processes, into what's executing, but also to collect this information, like the CloudTrail information, and to be able to correlate it. There's no full security without covering all of the basics. >>So security is a matter of both granularity and being able to go deep and understand what every single item does, but also being able to go broad and collect the right data, the right data sources, and correlate it. And then the real time is really critical. So decisions need to be taken as the data comes in. So the streaming nature of security engines is becoming more and more important. Step one of cloud security, especially cloud security posture management, was very much: let's poll. Once in a while, let's invoke the API and see what's happening. This is still important. Of course, you know, you need to have the basics covered, but more and more, the paradigm needs to change to, okay, the data is coming in second by second, instead of asking for the data manually once in a while. Second by second, the moment it arrives, you need to be able to detect, correlate, take decisions. And so, you know, machine learning is very important. Automation is very important. The rules that are coming from the community on a daily basis are, are very important. >>Let me ask you a question, 'cause I love this topic, because it's a data problem at the same time. There's some network action going on. I love this idea of no perimeter. You're going to be monitoring everything, but there's been trade-offs in the past, overhead involved, whether you're monitoring or putting probes in the network, there's all kinds of different approaches. How does the new technology with cloud and machine learning change the dynamics of the kinds of approaches? Because it's kind of not old tech, but similar concepts to network management and other things. What's going on now that's different, and what makes this possible today? >>Yeah, I think the friction point of view is one very important topic here. So this needs to be deployed efficiently and easily, as transparently as possible, everywhere, everywhere, to avoid blind spots and to make sure that everything is covered. From this point of view, it's very important to integrate with the orchestration, it's very important to make use of all of the facilities that Amazon provides, and it's very important to have a system that is deployed automatically and not manually.
That, in particular, is the only way to avoid blind spots, because if manual deployment is employed, somebody will forget, you know, to deploy somewhere where it's important. And then from the performance point of view, very much, for example, with Falco, you know, our open source runtime security engine, we really took key design decisions at the beginning to make sure that the engine would be able to support and parse millions of events per second with minimal overhead. >>You know, with barely measurable overhead. When you want to design something like that, you know that you need to accept some kind of trade-offs. You need to know that you maybe need to limit a little bit the expressiveness, you know, of what can be done, but ease of deployment and performance were more important goals here. And you know, it's not uncommon for us these days to have users of Falco or commercial customers that have tens of thousands, hundreds of thousands of machines, and sometimes millions of containers. And in these environments, lightweight is key. You want depth, but you want overhead to be really minimal and- >>Okay, so an amusement park, a lot of diverse applications. So integration, I get that. Orchestration brings back the Kubernetes angle a little bit, and Falco, low overhead and performance at cloud scale. So all these things are working in favor. If I get that right, is that, am I getting that right? You get the cloud scale, you get the integration, and open. >>Yeah, exactly. They're like the ingredients of a recipe, you know, and with these ingredients it's possible to bake a recipe, to have a plate that can be more usable, more effective and more efficient than maybe the plates that we were making in the previous generation. >>Oh, so I've got to ask you about Falco, because it's come up a lot. We talked about it on our CUBE conversations already on the internet. Check that out. And a great conversation there. You guys have close to 40 million plus downloads of, of this. You have also the AWS Fargate integration, so some significant traction. What does this mean? I mean, what is it telling us? Why is this successful? What are people doing with Falco? I see this as a leading indicator, and I know you guys were sponsoring the project, so congratulations, and it's propelled your business, but there's something going on here. What is this a leading indicator of? >>Yeah. And for, for the audience, Falco is the runtime security tool of the cloud native generation, as such. And so when we created Falco, we were inspired by previous generations, for example, network intrusion detection system tools and host protection tools and so on. But we created essentially a unique tool that would really be designed for the modern paradigm of containers, cloud, CI/CD and so on, and Falco essentially is able to collect a bunch of granular information from your applications that are running in the cloud, and has a rules engine that is based on policies that are driven by the community, essentially, that allow you to detect misconfigurations, attacks and anomalous conditions in your cloud, in your cloud applications. Recently, we announced the extension of Falco to support cloud infrastructure runtime security by parsing cloud logs, like CloudTrail and so on. So now Falco can be used at the same time to protect the workloads that are running in virtual machines or containers.
>>And also the cloud infrastructure. To give the audience a couple of examples, Falco is able to detect if somebody is running a shell in a Redis container, or if somebody is downloading a sensitive file from an S3 bucket, all of these in real time. With Falco, we decided to really go with open source. It was started by one of the team members, but we decided to go to the community right away, because this is one other ingredient. We were talking about the ingredients before, and there's not a successful modern security tool without being able to leverage the community and empower the community to contribute to it, to use it, to validate it and so on. And that's also why we contributed Falco to the Cloud Native Computing Foundation. So Falco is a CNCF tool and is blessed by many organizations. We are also partnering with many companies, including Amazon. Last year, we released the Fargate support for Falco, and that was a project that was done in cooperation with Amazon, so that we could have strong runtime security for the containers that are running in Fargate. >>Well, I've got to say, first of all, congratulations. And I think that's a bold move to donate, or not donate but contribute, to the open source community, because you're enabling a lot of people to do great things. And some people might be scared. They think they might be foreclosing a benefit in the future, but in reality, that is the new business model of open source. So I think that's worth calling out, and congratulations. This is the new commercial open source paradigm. And it kind of leads into my last question, which is, why is security well-positioned to benefit from open source? Besides the fact that the new model of getting people enabled and getting scale and getting standards, like you're doing, makes everybody win. And again, that's a community model. That's not a proprietary approach. So again, open source, big part of this. Why does security benefit from open source? >>I am a strong believer. I mean, we are in a battle, we could say we are in a war, right? The good guys versus the bad guys. The internet is full of bad guys. And these bad guys are coordinated, are motivated, are sometimes well-funded and well-equipped. We win only if we fight this war as a community. So the old paradigm of vendors building their own ivory towers, you know, their own self-contained ecosystems, and of us as users, as, as customers, having many different, you know, environments that don't communicate with each other, just doesn't take advantage of our capabilities. Our strength is as a community. So we are much stronger against the bad guys, and we have a much better chance of winning this war, if we adopt a paradigm that allows us to work together. Think only about, for example, I don't know, companies needing to train, you know, the workforce on the security best practices and on the security tools.
>>Like we always say an open, open winds, always turn the lights on, put the code out there. And I think, I think the community model is winning. Congratulations, Loris Dajani CTO and founder of SIS dig congratulatory success. And thank you for coming on the cube for the ADB startup showcase open cloud innovations. Thanks for coming on. Okay. Is the cube stay with us all day long every day with the cube, check us out the cube.net. I'm John furrier. Thanks for watching.

Published Date : Jan 26 2022


Pat Conte, Opsani | AWS Startup Showcase


 

(upbeat music) >> Hello and welcome to this CUBE conversation here presenting the "AWS Startup Showcase: "New Breakthroughs in DevOps, Data Analytics "and Cloud Management Tools" featuring Opsani for the cloud management and migration track here today, I'm your host John Furrier. Today, we're joined by Patrick Conte, Chief Commercial Officer, Opsani. Thanks for coming on. Appreciate you coming on. Future of AI operations. >> Thanks, John. Great to be here. Appreciate being with you. >> So congratulations on all your success being showcased here as part of the Startups Showcase, future of AI operations. You've got the cloud scale happening. A lot of new transitions in this quote digital transformation as cloud scales goes next generation. DevOps revolution as Emily Freeman pointed out in her keynote. What's the problem statement that you guys are focused on? Obviously, AI involves a lot of automation. I can imagine there's a data problem in there somewhere. What's the core problem that you guys are focused on? >> Yeah, it's interesting because there are a lot of companies that focus on trying to help other companies optimize what they're doing in the cloud, whether it's cost or whether it's performance or something else. We felt very strongly that AI was the way to do that. I've got a slide prepared, and maybe we can take a quick look at that, and that'll talk about the three elements or dimensions of the problem. So we think about cloud services and the challenge of delivering cloud services. You've really got three things that customers are trying to solve for. They're trying to solve for performance, they're trying to solve for the best performance, and, ultimately, scalability. I mean, applications are growing really quickly especially in this current timeframe with cloud services and whatnot. They're trying to keep costs under control because certainly, it can get way out of control in the cloud since you don't own the infrastructure, and more importantly than anything else which is why it's at the bottom sort of at the foundation of all this, is they want their applications to be a really a good experience for their customers. So our customer's customer is actually who we're trying to solve this problem for. So what we've done is we've built a platform that uses AI and machine learning to optimize, meaning tune, all of the key parameters of a cloud application. So those are things like the CPU usage, the memory usage, the number of replicas in a Kubernetes or container environment, those kinds of things. It seems like it would be simple just to grab some values and plug 'em in, but it's not. It's actually the combination of them has to be right. Otherwise, you get delays or faults or other problems with the application. >> Andrew, if you can bring that slide back up for a second. I want to just ask one quick question on the problem statement. You got expenditures, performance, customer experience kind of on the sides there. Do you see this tip a certain way depending upon use cases? I mean, is there one thing that jumps out at you, Patrick, from your customer's customer's standpoint? Obviously, customer experience is the outcome. That's the app, whatever. That's whatever we got going on there. >> Sure. >> But is there patterns 'cause you can have good performance, but then budget overruns. Or all of them could be failing. Talk about this dynamic with this triangle. >> Well, without AI, without machine learning, you can solve for one of these, only one, right? 
So if you want to solve for performance like you said, your costs may overrun, and you're probably not going to have control of the customer experience. If you want to solve for one of the others, you're going to have to sacrifice the other two. With machine learning though, we can actually balance that, and it isn't a perfect balance, and the question you asked is really a great one. Sometimes, you want to over-correct on something. Sometimes, scalability is more important than cost, but what we're going to do because of our machine learning capability, we're going to always make sure that you're never spending more than you should spend, so we're always going to make sure that you have the best cost for whatever the performance and reliability factors that you you want to have are. >> Yeah, I can imagine. Some people leave services on. Happened to us one time. An intern left one of the services on, and like where did that bill come from? So kind of looked back, we had to kind of fix that. There's a ton of action, but I got to ask you, what are customers looking for with you guys? I mean, as they look at Opsani, what you guys are offering, what's different than what other people might be proposing with optimization solutions? >> Sure. Well, why don't we bring up the second slide, and this'll illustrate some of the differences, and we can talk through some of this stuff as well. So really, the area that we play in is called AIOps, and that's sort of a new area, if you will, over the last few years, and really what it means is applying intelligence to your cloud operations, and those cloud operations could be development operations, or they could be production operations. And what this slide is really representing is in the upper slide, that's sort of the way customers experience their DevOps model today. Somebody says we need an application or we need a feature, the developers pull down something from get. They hack an early version of it. They run through some tests. They size it whatever way they know that it won't fail, and then they throw it over to the SREs to try to tune it before they shove it out into production, but nobody really sizes it properly. It's not optimized, and so it's not tuned either. When it goes into production, it's just the first combination of settings that work. So what happens is undoubtedly, there's some type of a problem, a fault or a delay, or you push new code, or there's a change in traffic. Something happens, and then, you've got to figure out what the heck. So what happens then is you use your tools. First thing you do is you over-provision everything. That's what everybody does, they over-provision and try to soak up the problem. But that doesn't solve it because now, your costs are going crazy. You've got to go back and find out and try as best you can to get root cause. You go back to the tests, and you're trying to find something in the test phase that might be an indicator. Eventually your developers have to hack a hot fix, and the conveyor belt sort of keeps on going. We've tested this model on every single customer that we've spoken to, and they've all said this is what they experience on a day-to-day basis. Now, if we can go back to the side, let's talk about the second part which is what we do and what makes us different. So on the bottom of this slide, you'll see it's really a shift-left model. What we do is we plug in in the production phase, and as I mentioned earlier, what we're doing is we're tuning all those cloud parameters. 
We're tuning the CPU, the memory, the Replicas, all those kinds of things. We're tuning them all in concert, and we're doing it at machine speed, so that's how the customer gets the best performance, the best reliability at the best cost. That's the way we're able to achieve that is because we're iterating this thing in machine speed, but there's one other place where we plug in and we help the whole concept of AIOps and DevOps, and that is we can plug in in the test phase as well. And so if you think about it, the DevOps guy can actually not have to over-provision before he throws it over to the SREs. He can actually optimize and find the right size of the application before he sends it through to the SREs, and what this does is collapses the timeframe because it means the SREs don't have to hunt for a working set of parameters. They get one from the DevOps guys when they send it over, and this is how the future of AIOps is being really affected by optimization and what we call autonomous optimization which means that it's happening without humans having to press a button on it. >> John: Andrew, bring that slide back up. I want to just ask another question. Tuning in concert thing is very interesting to me. So how does that work? Are you telegraphing information to the developer from the autonomous workload tuning engine piece? I mean, how does the developer know the right knobs or where does it get that provisioning information? I see the performance lag. I see where you're solving that problem. >> Sure. >> How does that work? >> Yeah, so actually, if we go to the next slide, I'll show you exactly how it works. Okay, so this slide represents the architecture of a typical application environment that we would find ourselves in, and inside the dotted line is the customer's application namespace. That's where the app is. And so, it's got a bunch of pods. It's got a horizontal pod. It's got something for replication, probably an HPA. And so, what we do is we install inside that namespace two small instances. One is a tuning pod which some people call a canary, and that tuning pod joins the rest of the pods, but it's not part of the application. It's actually separate, but it gets the same traffic. We also install somebody we call Servo which is basically an action engine. What Servo does is Servo takes the metrics from whatever the metric system is is collecting all those different settings and whatnot from the working application. It could be something like Prometheus. It could be an Envoy Sidecar, or more likely, it's something like AppDynamics, or we can even collect metrics off of Nginx which is at the front of the service. We can plug into anywhere where those metrics are. We can pull the metrics forward. Once we see the metrics, we send them to our backend. The Opsani SaaS service is our machine learning backend. That's where all the magic happens, and what happens then is that service sees the settings, sends a recommendation to Servo, Servo sends it to the tuning pod, and we tune until we find optimal. And so, that iteration typically takes about 20 steps. It depends on how big the application is and whatnot, how fast those steps take. 
It could be anywhere from seconds to minutes to 10 to 20 minutes per step, but typically within about 20 steps, we can find optimal, and then we'll come back and we'll say, "Here's optimal, and do you want to "promote this to production," and the customer says, "Yes, I want to promote it to production "because I'm saving a lot of money or because I've gotten "better performance or better reliability." Then, all he has to do is press a button, and all that stuff gets sent right to the production pods, and all of those settings get put into production, and now he's now he's actually saving the money. So that's basically how it works. >> It's kind of like when I want to go to the beach, I look at the weather.com, I check the forecast, and I decide whether I want to go or not. You're getting the data, so you're getting a good look at the information, and then putting that into a policy standpoint. I get that, makes total sense. Can I ask you, if you don't mind, expanding on the performance and reliability and the cost advantage? You mentioned cost. How is that impacting? Give us an example of some performance impact, reliability, and cost impacts. >> Well, let's talk about what those things mean because like a lot of people might have different ideas about what they think those mean. So from a cost standpoint, we're talking about cloud spend ultimately, but it's represented by the settings themselves, so I'm not talking about what deal you cut with AWS or Azure or Google. I'm talking about whatever deal you cut, we're going to save you 30, 50, 70% off of that. So it doesn't really matter what cost you negotiated. What we're talking about is right-sizing the settings for CPU and memory, Replica. Could be Java. It could be garbage collection, time ratios, or heap sizes or things like that. Those are all the kinds of things that we can tune. The thing is most of those settings have an unlimited number of values, and this is why machine learning is important because, if you think about it, even if they only had eight settings or eight values per setting, now you're talking about literally billions of combinations. So to find optimal, you've got to have machine speed to be able to do it, and you have to iterate very, very quickly to make it happen. So that's basically the thing, and that's really one of the things that makes us different from anybody else, and if you put that last slide back up, the architecture slide, for just a second, there's a couple of key words at the bottom of it that I want to want to focus on, continuous. So continuous really means that we're on all the time. We're not plug us in one time, make a change, and then walk away. We're actually always measuring and adjusting, and the reason why this is important is in the modern DevOps world, your traffic level is going to change. You're going to push new code. Things are going to happen that are going to change the basic nature of the software, and you have to be able to tune for those changes. So continuous is very important. Second thing is autonomous. This is designed to take pressure off of the SREs. It's not designed to replace them, but to take the pressure off of them having to check pager all the time and run in and make adjustments, or try to divine or find an adjustment that might be very, very difficult for them to do so. 
So we're doing it for them, and that scale means that we can solve this for, let's say, one big monolithic application, or we can solve it for literally hundreds of applications and thousands of microservices that make up those applications and tune them all at the same time. So the same platform can be used for all of those. You originally asked about the parameters and the settings. Did I answer the question there? >> You totally did. I mean, the tuning in concert. You mentioned it early as a key point. I mean, you're basically tuning the engine. It's not so much negotiating a purchase SaaS discount. It's essentially cost overruns by the engine, either over burning or heating or whatever you want to call it. I mean, basically inefficiency. You're tuning the core engine. >> Exactly so. So the cost thing, as I mentioned, is due to right-sizing the settings and the number of replicas. The performance is typically measured via latency, and the reliability is typically measured via error rates. And there's some other measures as well. We have a whole list of them that are in the application itself, but those are the kinds of things that we look for as results. When we do our tuning, we look for reducing error rates, or we look for holding error rates at zero, for example, even if we improve the performance or we improve the cost. So we're looking for the best result, the best combination result, and then a customer can decide if they want to actually over-correct on something. We have the whole concept of guardrails, so if performance is the most important thing, or maybe for some customers, cost is the most important thing, they can actually say, "Well, give us the best cost, and give us the best performance and the best reliability, but at this cost," and we can then use that as a service-level objective and tune around it. >> Yeah, it reminds me back in the old days when you had filtering, white lists or black lists of addresses that can go through, say, a firewall or a device. You have billions of combinations now with machine learning. It's essentially scaling the same concept to unbelievable levels. These guardrails are now in place, and that's super cool and I think a really relevant call-out point, Patrick, to kind of highlight that. At this kind of scale, you need machine learning, you need the AI to essentially identify quickly the patterns or combinations that are actually happening, so a human doesn't have to waste their time; that can be filled by basically a bot at that point. >> So John, there's just one other thing I want to mention around this, and that is one of the things that makes us different from other companies that do optimization. Basically, every other company in the optimization space creates a static recommendation, basically their recommendation engines, and what you get out of that is, let's say it's a manifest of changes, and you hand that to the SREs, and they put it into effect. Well, the fact of the matter is that the traffic could have changed by then. It could have spiked up, or it could have dropped below normal. You could have introduced a new feature or some other code change, and at that point in time, you've already instituted these changes. They may be completely out of date. That's why the continuous nature of what we do is important and different. >> It's funny, even the language that we're using here: network, garbage collection. I mean, you're talking about tuning an engine, an operating system.
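To make the dials concrete: the parameters Patrick describes map onto a handful of fields in a Kubernetes workload spec, and a tuner proposes new values for exactly these on each iteration. The fragment below is a generic illustration with placeholder names and numbers (selector and labels omitted for brevity); it is not Opsani's format or a recommendation.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api                 # placeholder service name
spec:
  replicas: 4                        # replica count: one of the tuned parameters
  template:
    spec:
      containers:
      - name: payments-api
        image: registry.example.com/payments-api:2.3   # placeholder image
        resources:
          requests:
            cpu: "500m"              # CPU request: tuned up or down per iteration
            memory: "768Mi"          # memory request: tuned alongside CPU
          limits:
            cpu: "1"
            memory: "1Gi"
        env:
        - name: JAVA_TOOL_OPTIONS    # JVM settings (heap, GC) are also tunable knobs
          value: "-Xmx640m -XX:MaxGCPauseMillis=200"
```

A guardrail in this picture is simply a constraint the search is not allowed to violate, for example keeping p99 latency under the SLO and error rates at zero while cost is minimized.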
You're talking about stuff that's moving up the stack to the application layer, hence this new kind of eliminating of these kinds of siloed waterfalls, as you pointed out in your second slide, into kind of one integrated operating environment. So when you have that, or think about the data coming in, you have to think about the automation, just like self-correcting, error-correcting, tuning, garbage collection. These are words that we've been kind of kicking around, but at the end of the day, it's an operating system. >> Well, in the old days of automobiles, which I remember 'cause I'm an old guy, if you wanted to tune your engine, you would probably rebuild your carburetor and turn some dials to get the air-oxygen-gas mix right. You'd re-gap your spark plugs. You'd probably make sure your points were right. There'd be four or five key things that you would do. You couldn't do them at the same time unless you had a magic wand. So we're the magic wand that, basically, or in the modern world, we're sort of that thing you plug in that tunes everything at once within that engine, which is all now electronically controlled. So that's the big difference as you think about what we used to do manually, and now it can be done with automation. It can be done much, much faster without humans having to get their fingernails greasy, let's say. >> And I think the dynamic versus static is an interesting point. I want to bring up the SRE, which has become a role that's becoming very prominent in the DevOps-plus world that's happening. You're seeing this new revolution. The role of the SRE is not just to be there to hold down and do the manual configuration. They have to scale. They're a developer, too. So I think this notion of offloading the SRE from doing manual tasks is another big, important point. Can you just react to that and share more about why the SRE role is so important and why automating that away, through what you guys have, is important? >> The SRE role is becoming more and more important, just as you said, and the reason is because somebody has to get that application ready for production. The DevOps guys don't do it. That's not their job. Their job is to get the code finished and send it through, and the SREs then have to make sure that that code will work, so they have to find a set of settings that will actually work in production. Once they find that set of settings, the first one they find that works, they'll push it through. It's not optimized at that point in time because they don't have time to try to find optimal, and if you think about it, the difference between a machine learning backend and an army of SREs that work 24-by-seven, we're talking about being able to do the work of many, many SREs that never get tired, that never need to go play video games, to unstress or whatever. We're working all the time. We're always measuring, adjusting. A lot of the companies we talked to do a once-a-month adjustment on their software. So they put an application out, and then they send in their SREs once a month to try to tune the application, and maybe they're using some of these other tools, or maybe they're using just their smarts, but they'll do that once a month. Well, gosh, they've pushed code probably four times during the month, and they probably had a bunch of different spikes and drops in traffic and other things that have happened. So we just want to help them spend their time on making sure that the application is ready for production.
We want them to make sure that all the other parts of the application are where they should be, and let us worry about tuning CPU, memory, replicas, job instances, and things like that, so that they can work on making sure that application gets out and that it can scale, which is really important for them; for their companies to make money, the apps have to scale. >> Well, that's a great insight, Patrick. You mentioned you have a lot of great customers, and certainly your customer base are early adopters, pioneers, and growing big companies, because they have DevOps. They know that they need a DevOps engineer and an SRE. Some of the other enterprises that are transforming think the DevOps engineer is the SRE person, 'cause they're having to get transformed. So you guys are at the high end and getting now the new enterprises as they come on board to cloud scale. You have a huge uptake in Kubernetes, starting to see the standardization of microservices. People are getting it, so I got to ask you, can you give us some examples of your customers, how they're organized, some case studies, who uses you guys, and why they love you? >> Sure. Well, let's bring up the next slide. We've got some customer examples here, and your viewers, our viewers, can probably figure out who these guys are. I can't tell them, but if they go on our website, they can sort of put two and two together, but the first one there is a major financial application SaaS provider, and in this particular case, they were having problems that they couldn't diagnose within the stack. Ultimately, they had to apply automation to it, and what we were able to do for them was give them a huge jump in reliability, which was actually the biggest problem that they were having. We gave them 5,000 hours back a month in terms of the application. They were having PagerDuty alerts going off all the time. We actually gave them better performance. We gave them a 10% performance boost, and we dropped their cloud spend for that application by 72%. So in fact, it was an 80-plus % price performance or cost performance improvement that we gave them, and essentially, we helped them tune the entire stack. This was a hybrid environment, so this included VMs as well as more modern architecture. Today, I would say the overwhelming majority of our customers have moved off of the VMs and are in a containerized environment, and even more to the point, Kubernetes, which we find just a very, very high percentage of our customers have moved to. So most of the work we're doing today with new customers is around that, and if we look at the second and third examples here, those are examples of that. In the second example, that's a company that develops websites. It's one of the big ones out in the marketplace that, let's say, if you were starting a new business and you wanted a website, they would develop that website for you. So their internal infrastructure is all brand new stuff. It's all Kubernetes, and what we were able to do for them is they were actually getting decent performance. We held their performance at their SLO. We achieved a 100% error-free scenario for them at runtime, and we dropped their cost by 80%. So for them, they needed us to hold serve, if you will, on performance and reliability and get their costs under control, because everything in that, that's a cloud native company. Everything there is cloud cost. So the interesting thing is it took us nine steps, basically nine of our iterations, to actually get to optimal.
So it was very, very quick, and there was no integration required. In the first case, we actually had to do a custom integration for an underlying platform that was used for CICD, but with the- >> John: Because of the hybrid, right? >> Patrick: Sorry? >> John: Because it was hybrid, right? >> Patrick: Yes, because it was hybrid, exactly. But within the second one, we just plugged right in, and we were able to tune the Kubernetes environment just as I showed in that architecture slide, and then the third one is one of the leading application performance monitoring companies on the market. They have a bunch of their own internal applications and those use a lot of cloud spend. They're actually running Kubernetes on top of VMs, but we don't have to worry about the VM layer. We just worry about the Kubernetes layer for them, and what we did for them was we gave them a 48% performance improvement in terms of latency and throughput. We dropped their error rates by 90% which is pretty substantial to say the least, and we gave them a 50% cost delta from where they had been. So this is the perfect example of actually being able to deliver on all three things which you can't always do. It has to be, sort of all applications are not created equal. This was one where we were able to actually deliver on all three of the key objectives. We were able to set them up in about 25 minutes from the time we got started, no extra integration, and needless to say, it was a big, happy moment for the developers to be able to go back to their bosses and say, "Hey, we have better performance, "better reliability. "Oh, by the way, we saved you half." >> So depending on the stack situation, you got VMs and Kubernetes on the one side, cloud-native, all Kubernetes, that's dream scenario obviously. Not many people like that. All the new stuff's going cloud-native, so that's ideal, and then the mixed ones, Kubernetes, but no VMs, right? >> Yeah, exactly. So Kubernetes with no VMs, no problem. Kubernetes on top of VMs, no problem, but we don't manage the VMs. We don't manage the underlay at all, in fact. And the other thing is we don't have to go back to the slide, but I think everybody will remember the slide that had the architecture, and on one side was our cloud instance. The only data that's going between the application and our cloud instance are the settings, so there's never any data. There's never any customer data, nothing for PCI, nothing for HIPPA, nothing for GDPR or any of those things. So no personal data, no health data. Nothing is passing back and forth. Just the settings of the containers. >> Patrick, while I got you here 'cause you're such a great, insightful guest, thank you for coming on and showcasing your company. Kubernetes real quick. How prevalent is this mainstream trend is because you're seeing such great examples of performance improvements. SLAs being met, SLOs being met. How real is Kubernetes for the mainstream enterprise as they're starting to use containers to tip their legacy and get into the cloud-native and certainly hybrid and soon to be multi-cloud environment? >> Yeah, I would not say it's dominant yet. Of container environments, I would say it's dominant now, but for all environments, it's not. 
I think the larger legacy companies are still going through that digital transformation, and so what we do is we catch them at that transformation point, and we can help them develop because as we remember from the AIOps slide, we can plug in at that test level and help them sort of pre-optimize as they're coming through. So we can actually help them be more efficient as they're transforming. The other side of it is the cloud-native companies. So you've got the legacy companies, brick and mortar, who are desperately trying to move to digitization. Then, you've got the ones that are born in the cloud. Most of them aren't on VMs at all. Most of them are on containers right from the get-go, but you do have some in the middle who have started to make a transition, and what they've done is they've taken their native VM environment and they've put Kubernetes on top of it so that way, they don't have to scuttle everything underneath it. >> Great. >> So I would say it's mixed at this point. >> Great business model, helping customers today, and being a bridge to the future. Real quick, what licensing models, how to buy, promotions you have for Amazon Web Services customers? How do people get involved? How do you guys charge? >> The product is licensed as a service, and the typical service is an annual. We license it by application, so let's just say you have an application, and it has 10 microservices. That would be a standard application. We'd have an annual cost for optimizing that application over the course of the year. We have a large application pack, if you will, for let's say applications of 20 services, something like that, and then we also have a platform, what we call Opsani platform, and that is for environments where the customer might have hundreds of applications and-or thousands of services, and we can plug into their deployment platform, something like a harness or Spinnaker or Jenkins or something like that, or we can plug into their their cloud Kubernetes orchestrator, and then we can actually discover the apps and optimize them. So we've got environments for both single apps and for many, many apps, and with the same platform. And yes, thanks for reminding me. We do have a promotion for for our AWS viewers. If you reference this presentation, and you look at the URL there which is opsani.com/awsstartupshowcase, can't forget that, you will, number one, get a free trial of our software. If you optimize one of your own applications, we're going to give you an Oculus set of goggles, the augmented reality goggles. And we have one other promotion for your viewers and for our joint customers here, and that is if you buy an annual license, you're going to get actually 15 months. So that's what we're putting on the table. It's actually a pretty good deal. The Oculus isn't contingent. That's a promotion. It's contingent on you actually optimizing one of your own services. So it's not a synthetic app. It's got to be one of your own apps, but that's what we've got on the table here, and I think it's a pretty good deal, and I hope your guys take us up on it. >> All right, great. Get Oculus Rift for optimizing one of your apps and 15 months for the price of 12. Patrick, thank you for coming on and sharing the future of AIOps with you guys. Great product, bridge to the future, solving a lot of problems. A lot of use cases there. Congratulations on your success. Thanks for coming on. >> Thank you so much. This has been excellent, and I really appreciate it. >> Hey, thanks for sharing. 
I'm John Furrier, your host with theCUBE. Thanks for watching. (upbeat music)

Published Date : Sep 22 2021


Susan StClair, WhiteSource | AWS Startup Showcase


 

(upbeat music) >> Welcome to the Q3 "AWS Startup Showcase", I'm Lisa Martin. We're going to be talking about new breakthroughs in DevOps, Data Analytics and Cloud Management Tools, with WhiteSource Software, at least for the DevOps track. I'm excited to welcome Susan StClair, Director of Product at WhiteSource Software, to the program. Susan, it's great to see you! >> Oh, very excited to be here, Lisa, thank you. >> We've got a lot of stuff to talk about today, but ultimately, the theme that Susan's going to talk to us about is, winning developers' trust is key to scaling up open source security for the enterprise. We're going to unpack that. You talk about how winning that trust is key, that shifting left won't work without developer buy-in. Susan, help us understand this. >> Yeah, sure, so, touching on some of the topics we have later, but you look at the rate of applications being deployed, the pace of how fast that is, and you look at development teams of hundreds, and you have the security teams of five or ten, and they just can't do it all. So, really, you need to leverage everybody who's part of the application to really be able to make sure that you're developing and deploying and releasing a secure application. So, that's the shifting left. Unfortunately, I think what's happened is, because application security is overwhelmed and because they're like, "Oh, we have all of these developer teams over here, and it's their code, and they should fix it," they just kind of dumped application security on them, and the poor development teams are like, "But that's not what I do, I don't have any expertise in there." So if you really, truly want shift left to work, you do need to build that buy-in, you do need to build the trust with your extended team, for lack of a better word. And really start to look at things that are important to them. So automated tools, making sure that they work with their toolsets and their processes. Looking at automation, not just in terms of scanning but also remediation. You just really need to start to work with them and think about application releases in a different mindset. >> And your recommendation here is also to build that trust gradually, and to let developers control the pace- >> Absolutely. >> And the level of automation. Talk to me about why it's important to give the developers that control? >> Yeah, sure. Again, I think nobody likes to be told what to do, I certainly don't, don't tell me how to do my job. So, I think, because historically application security and development have really been at odds, it has been somewhat of a confrontational relationship, so, I think as you're starting to build that trust, you do need to go slow. Where does it make sense to add in an auto-remediation solution like WhiteSource, right? Where does it make sense? We don't want to do it everywhere, we don't want to overwhelm development teams with this. So, really start to look at it, let them control the pace, build that trust. This is a good thing for everybody. And, again, I think with tools like the WhiteSource solution software, you can pick and choose, it's not an all or nothing. We're going full automation, full remediation, one-stop shopping, I mean you can kind of control the pace as you start to build that trust between the various teams. >> Is that differentiator for WhiteSource, the ability for this auto-remediation tool to let them control that? >> Yeah, it definitely is, and I know it just rolls off the tongue, doesn't it? Just rolls off the tongue.
>> It really does. (both laughing) >> Say it ten times fast >> I'm afraid to. >> Exactly, exactly. So, no, it actually, absolutely is a differentiator for us. And again when we look at, looking at our customer base and enterprise and we look at, even maybe smaller teams that trust is really made us successful and the key to that trust is really that controlling the pace with auto-remediation. And, some of the other automation pieces to the solution. >> And speaking of customers, you guys have 23% of the Fortune 100 as customers, give me an example of one of your favorite customers that you think really shows the value that WhiteSource is giving to those developers by giving them that control. >> Yeah, sure. So I feel like we're like the big company or bigger company that nobody has heard of outside of this space. But, not naming names, but large financial customers and really shifting application security, open-source application security, to the hands of the development teams. So they've actually, again, small application team, they've really pushed it out to the development teams as part of a repo-integration for scanning, for ticket creation, for auto-remediation, and that's really, let them scale beyond, just one or two teams to thousands of repos, for example. I mean, that is, in my opinion, a huge use case or huge validation of that this works. This isn't just somebody talking about how cool their software is and it's not based in reality. >> A stat that I read about WhiteSource offer that I wanted to get your feedback on, is that, "WhiteSource goes beyond traditional detection, providing dependency and trace analysis and that this helps organizations eliminate upto 85% of security alerts." That's a big number. Talk to me about how you guys do that and the advantages that delivers. >> Yeah, sure, so I think like the one of the challenges with, historically, with open-source solutions, is that they scan and they get this result, and you could have hundreds and thousands of insecure libraries and you're like, "Holy moly, where do I even start?" It's just completely overwhelming. And then you dig into little deeper and again starting to build that trust with development teams, and the development teams comes back to you and says, "Well, hey, guess what? Yeah I know that library is insecure, but I'm not using that part of that library." So, it's really kind of a false-positive. So, what this dependency tracing does and how it helps with prioritization, is it says, "Okay, we see this particular library, this vulnerable open-source library, and it is in your execution path, we can see that you're using it." So then, you're able to say, "Okay, I should definitely fix this, because we're using it, or maybe not." Maybe, again, it's part my backlog yes, we should always keep up-to-date, and be completely secure. But having that ability to prioritize where to start and having the alerts based on that really reduces the noise. And again, it builds the trust between the teams. >> So, we talked from the beginning about shifting left isn't going to work without developers buy-in, the idea of using auto-remediation tools to let developers control that pace, the OpSec folks, the Dev folks, we also have for, I believe, it's the fifth consecutive year now, a huge gap in cybersecurity skills. 
I think I've seen some reports estimating that there needs to be another three million professionals in the next five years to help fill that gap, and at the same time we're seeing the security landscape changing dramatically. Talk to me about how the cybersecurity skills gap is affecting developers, OpSec folks, and what you're seeing as a tool that can help remediate some of that. >> Sure, yeah, no, that's, I mean that is the challenge. And I would even say that there seems to be a skills gap on the development side too. But I think, in terms of some of the challenges with that, you have to look at ways we can be smarter about things. So, we don't have the people, large teams that know everything about application security and open-source security, that we can really rely on to drive remediation, but also to use these tools that all of us bought that do different things, that aren't correlated, and to kind of provide that glue. So, where WhiteSource, I think, is trying to address this is, again, if I don't have the people and I don't have the skill sets, first of all, automation, right? So, the more that we can automate, the better. But not just automating on the scanning side, I think that's certainly a part of it, but again, looking at how we can help development teams that are maybe not security experts, keeping them up-to-date and giving them, again, automatic remediation so that they can fix things without having the real depth that you would expect in a cybersecurity professional. >> I'm sure they appreciate that, not having to have that depth, because there really isn't, in terms of developers, there isn't the time. Speed is always of the essence there. One of the things too that I know is there's a lot of tools being used, you mentioned that. How can WhiteSource Software help the developers to better utilize some of the tools that they have, or not just be buying tools to check boxes? >> Yeah, sure. So, yeah, it's a sad fact, I think, within our industry, probably more than just ours, but really a lot of decisions, purchasing decisions, are based on the, "Well, I need to scan because somebody told me to and that I had to, and I'm going to check the box. I'm not really interested in fixing anything, I just need to check that box." And I think, historically, when it comes to tool selection, again, because application security is really focused on that check-the-box, because they need to do that for a compliance or governance reason, they really haven't taken to heart the teams that would actually be using them and having to make the magic happen. So, they would prioritize things that, again, maybe the dev teams wouldn't, so, again, does it work with my tools? As a developer, I live in my IDE, I live in my code repo, I live in my ticketing system; security doesn't typically care anything about that. So, I think with WhiteSource, again, we're providing the tools that the OpSec team needs. So again, the compliance reports and the policies and all this stuff we love. Also providing, again, the way to easily fit into developer workflows, that's how we're helping to move beyond, okay, we're checking the box, but we do want to actually fix something and we want to move the target along. So we're really, I think, helping address that need as well. >> I know you guys did a DevSecOps Insights Report recently. Unpack that a little bit, with some of the key findings that have come out of that. >> Yeah, no, that's great, so it's very interesting. 
First of all, I think we in the industry talk a lot about DevSecOps, and that security is part of the DevOps process and everything is good. But when you actually talk to people, I think, two things: one, it's very much a work in progress, absolutely, and a lot of that is part of the tooling. I think, too, what we've found as a part of this survey is that the developers often feel forced into, okay, I'm shifting left, you're telling me I own security, but you're also telling me that I need to get this application out the door. I need this to compete. So, they're really being forced into hard choices of which one to prioritize, and that really comes down to a culture thing. What is more important to you: being secure or being competitive? And how do you weigh that? So, I thought that was actually very interesting. I think that we tend to give dev teams a bad rap, but they're really doing the best they can, and they need clear guidance and there needs to be a security culture for them to operate in. >> Right, that's a really big one that you just hit on, that cultural impact. It's hard to change. In the last 18 months, we've all been through so much change, personally and professionally. We've seen this massive acceleration in digital transformation, so probably more pressure on developers who need to be able to be productive in work-from-anywhere environments, so that cultural change is really critical. I'm curious if you have some feedback from customers that have done it successfully, or are in the process of doing it successfully, that you can share? >> Yeah, change is hard, no matter where it's at. Absolutely. So, I think, where we've seen the most success with our customers around this specifically, it truly is both a top-down and bottom-up approach. From the top down, you can't just give lip service to the idea that application security is important. You can't just say, "Oh, again, from a compliance, check-the-box point of view, we scan, and we're looking, and, oh look, we have these statistics." You really have to live it. And what I mean by that is, when you're developing new applications, it's just as important as the feature list. Security bugs are just as important as any other type of bugs. So again, it goes into the workflow of the application development teams, and you don't make them make these hard trade-offs all the time between security and release. And then, from the bottom up, again, you need to be where your teams are at. You can't ask them to go into another tool, or another thing, or another this and that. They have things to do. You have to be where they are. And you have to give targeted, actionable guidance, not things they have to go research, and automate as much as you can. Again, both on the scanning as well as on the remediation side. >> Meet them where they are and facilitate that automation. Susan, thank you so much for joining me today, talking about- >> My pleasure. >> How WhiteSource Software is helping that, and also for the challenge of saying auto-remediation 10 times in a row, fast. (Susan laughing) I might practice that later. But it's been great talking to you. >> That will be my homework. Likewise. >> Exactly! Thank you so much for joining me. >> My pleasure. >> This has been our coverage of the "AWS Startup Showcase", New Breakthroughs in DevOps, Data Analytics and Cloud Management Tools. For Susan StClair, I'm Lisa Martin. Thanks for watching. (gentle music)
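To make the dependency and trace analysis point from this conversation a little more concrete, here is a minimal Python sketch of reachability-based prioritization. The library names, CVE identifiers, and data structures are invented for illustration and are not WhiteSource's actual data model or API; the idea is simply to surface only the vulnerable components that sit in the application's execution path.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    library: str
    cve: str
    severity: str        # e.g. "critical", "high"
    fix_available: bool

# Findings as a scanner might report them for one build (hypothetical values).
findings = [
    Finding("log-utils", "CVE-2021-0001", "critical", True),
    Finding("xml-parser", "CVE-2020-0002", "high", False),
    Finding("image-codec", "CVE-2019-0003", "critical", True),
]

# Libraries the dependency/trace analysis actually saw in execution paths.
reachable = {"log-utils"}

def prioritize(findings, reachable):
    """Split findings into 'fix now' (reachable) and 'backlog' (not reachable)."""
    fix_now = [f for f in findings if f.library in reachable]
    backlog = [f for f in findings if f.library not in reachable]
    return fix_now, backlog

fix_now, backlog = prioritize(findings, reachable)
print("Fix now:", [(f.library, f.cve) for f in fix_now])
print("Backlog:", [(f.library, f.cve) for f in backlog])
```

In practice the reachable set would come from the vendor's own trace analysis rather than being hand-written, which is what drives the reduction in alert noise described above.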

Published Date : Sep 22 2021


Sanjeev Mohan, SanjMo & Nong Li, Okera | AWS Startup Showcase


 

(cheerful music) >> Hello everyone, welcome to today's session of theCUBE's presentation of AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics, Cloud Management Tools, featuring Okera from the cloud management migration track. I'm John Furrier, your host. We've got two great special guests today, Nong Li, founder and CTO of Okera, and Sanjeev Mohan, principal @SanjMo, and former research vice president of big data and advanced analytics at Gartner. He's a legend, been around the industry for a long time, seen the big data trends from the past, present, and knows the future. Got a great lineup here. Gentlemen, thank you for this, so, life in the trenches, lessons learned across compliance, cloud migration, analytics, and use cases for Fortune 1000s. Thanks for joining us. >> Thanks for having us. >> So Sanjeev, great to see you, I know you've seen this movie, I was saying that in the open, you've at Gartner seen all the visionaries, the leaders, you know everything about this space. It's changing extremely fast, and one of the big topics right out of the gate is not just innovation, we'll get to that, that's the fun part, but it's the regulatory compliance and audit piece of it. It's keeping people up at night, and frankly if not done right, slows things down. This is a big part of the showcase here, is to solve these problems. Share us your thoughts, what's your take on this wide-ranging issue? >> So, thank you, John, for bringing this up, and I'm so happy you mentioned the fact that, there's this notion that it can slow things down. Well I have to say that the old way of doing governance slowed things down, because it was very much about control and command. But the new approach to data governance is actually in my opinion, it's liberating data. If you want to democratize or monetize, whatever you want to call it, you cannot do it 'til you know you can trust said data and it's governed in some ways, so data governance has actually become very interesting, and today if you want to talk about three different areas within compliance regulatory, for example, we all know about the EU GDPR, we know California has CCPA, and in fact California is now getting even a more stringent version called CPRA in a couple of years, which is more aligned to GDPR. That is a first area we know we need to comply to that, we don't have any way out. But then, there are other areas, there is insider trading, there is how you secure the data that comes from third parties, you know, vendors, partners, suppliers, so Nong, I'd love to hand it over to you, and see if you can maybe throw some light into how our customers are handling these use cases. >> Yeah, absolutely, and I love what you said about balancing agility and liberating, in the face of what may be seen as things that slow you down. So we work with customers across verticals with old and new regulations, so you know, you brought up GDPR. One of our clients is using this to great effect to power their ecosystem. They are a very large retail company that has operations and customers across the world, obviously the importance of GDPR, and the regulations that imposes on them are very top of mind, and at the same time, being able to do effective targeting analytics on customer information is equally critical, right? So they're exactly at that spot where they need this customer insight for powering their business, and then the regulatory concerns are extremely prevalent for them. 
So in the context of GDPR, you'll hear about things like consent management and right to be forgotten, right? I, as a customer of that retailer should say "I don't want my information used for this purpose," right? "Use it for this, but not this." And you can imagine at a very, very large scale, when you have a billion customers, managing that, all the data you've collected over time through all of your devices, all of your telemetry, really, really challenging. And they're leveraging Okera embedded into their analytics platform so they can do both, right? Their data scientists and analysts who need to do everything they're doing to power the business, not have to think about these kind of very granular customer filtering requirements that need to happen, and then they leverage us to do that. So that's kind of new, right, GDPR, relatively new stuff at this point, but we obviously also work with customers that have regulations from a long long time ago, right? So I think you also mentioned insider trading and that supply chain, so we'll talk to customers, and they want really data-driven decisions on their supply chain, everything about their production pipeline, right? They want to understand all of that, and of course that makes sense, whether you're the CFO, if you're going to make business decisions, you need that information readily available, and supply chains as we know get more and more and more complex, we have more and more integrated into manufacturing and other verticals. So that's your, you're a little bit stuck, right? You want to be data-driven on those supply chain analytics, but at the same time, knowing the details of all the supply chain across all of your dependencies exposes your internal team to very high blackout periods or insider trading concerns, right? For example, if you knew Apple was buying a bunch of something, that's maybe information that only a select few people can have, and the way that manifests into data policies, 'cause you need the ability to have very, very scalable, per employee kind of scalable data restriction policies, so they can do their job easier, right? If we talk about speeding things up, instead of a very complex process for them to get approved, and approved on SEC regulations, all that kind of stuff, you can now go give them access to the part of the supply chain that they need, and no more, and limit their exposure and the company's exposure and all of that kind of stuff. So one of our customers able to do this, getting two orders of magnitude, a 100x reduction in the policies to manage the system like that. >> When I hear you talking like that, I think the old days of "Oh yeah, regulatory, it kind of slows down innovation, got to go faster," pretty basic variables, not a lot of combination of things to check. Now with cloud, there seems to be combinations, Sanjeev, because how complicated has the regulatory compliance and audit environment gotten in the past few years, because I hear security in a supply chain, I hear insider threats, I mean these are security channels, not just compliance department G&A kind of functions. You're talking about large-scale, potentially combinations of access, distribution, I mean it seems complicated. How much more complicated is it now, just than it was a few years ago? >> So, you know the way I look at it is, I'm just mentioning these companies just as an example, when PayPal or Ebay, all these companies started, they started in California. 
Anybody who ever did business on Ebay or PayPal, guess where that data was? In the US in some data center. Today you cannot do it. Today, data residency laws are really tough, and so now these organizations have to really understand what data needs to remain where. On top of that, we now have so many regulations. You know, earlier on if you were healthcare, you needed to be HIPAA compliant, or banking PCI DSS, but today, in the cloud, you really need to know, what data I have, what sensitive data I have, how do I discover it? So that data discovery becomes really important. What roles I have, so for example, let's say I work for a bank in the US, and I decide to move to Germany. Now, the old school is that a new rule will be created for me, because of German... >> John: New email address, all these new things happen, right? >> Right, exactly. So you end up with this really, a mass of rules and... And these are all static. >> Rules and tools, oh my god. >> Yeah. So Okera actually makes a lot of this dynamic, which reduces your cloud migration overhead, and Nong used some great examples, in fact, sorry if I take just a second, without mentioning any names, there's one of the largest banks in the world is going global in the digital space for the first time, and they're taking Okera with them. So... >> But what's the point? This is my next topic in cloud migration, I want to bring this up because, complexity, when you're in that old school kind of data center, waterfall, these old rules and tools, you have to roll this out, and it's a pain in the butt for everybody, it's a hassle, huge hassle. Cloud gives the agility, we know that, and cloud's becoming more secure, and I think now people see the on-premise, certainly things that'd be on-premises for secure things, I get that, but when you start getting into agility, and you now have cloud regions, you can start being more programmatic, so I want to get you guys' thoughts on the cloud migration, how companies who are now lifting and shifting, replatforming, what's the refactoring beyond that, because you can replatform in the cloud, and still some are kind of holding back on that. Then when you're in the cloud, the ones that are winning, the companies that are winning are the ones that are refactoring in the cloud. Doing things different with new services. Sanjeev, you start. >> Yeah, so you know, in fact lot of people tell me, "You know, we are just going to lift and shift into the cloud." But you're literally using cloud as a data center. You still have all the, if I may say, junk you had on-prem, you just moved it into the cloud, and now you're paying for it. In cloud, nothing is free. Every storage, every processing, you're going to pay for it. The most successful companies are the ones that are replatforming, they are taking advantage of the platform as a service or software as a service, so that includes things like, you pay as you go, you pay for exactly the amount you use, so you scale up and scale down or scale out and scale in, pretty quickly, you know? So you're handling that demand, so without replatforming, you are not really utilizing your- >> John: It's just hosting. >> Yeah, you're just hosting. >> It's basically hosting if you're not doing anything right there. >> Right. The reason why people sometimes resist to replatform, is because there's a hidden cost that we don't really talk about, PaaS adds 3x to IaaS cost. 
So, some organizations that are very mature, and they have a few thousand people in the IT department, for them, they're like "No, we just want to run it in the cloud, we have the expertise, and it's cheaper for us." But in the long run, to get the most benefit, people should think of using cloud as a service. >> Nong what's your take, because you see examples of companies, I'll just call one out, Snowflake for instance, they're essentially a data warehouse in the cloud, they refactored and they replatformed, they have a competitive advantage with the scale, so they have things that others don't have, that just hosting. Or even on-premise. The new model developing where there's real advantages, and how should companies think about this when they have to manage these data lakes, and they have to manage all these new access methods, but they want to maintain that operational stability and control and growth? >> Yeah, so. No? Yeah. >> There's a few topics that are all (indistinct) this topic. (indistinct) enterprises moving to the cloud, they do this maybe for some cost savings, but a ton of it is agility, right? The motor that the business can run at is just so much faster. So we'll work with companies in the context of cloud migration for data, where they might have a data warehouse they've been using for 20 years, and building policies over that time, right? And it's taking a long time to go proof of access and those kind of things, made more sense, right? If it took you months to procure a physical infrastructure, get machines shipped to your data center, then this data access taking so long feels okay, right? That's kind of the same rate that everything is moving. In the cloud, you can spin up new infrastructure instantly, so you don't want approvals for getting policies, creating rules, all that stuff that Sanjeev was talking about, that being slow is a huge, huge problem. So this is a very common environment that we see where they're trying to do that kind of thing. And then, for replatforming, again, they've been building these roles and processes and policies for 20 years. What they don't want to do is take 20 years to go migrate all that stuff into the cloud, right? That's probably an experience nobody wants to repeat, and frankly for many of them, people who did it originally may or may not be involved in this kind of effort. So we work with a lot of companies like that, they have their, they want stability, they got to have the business running as normal, they got to get moving into the new infrastructure, doing it in a new way that, you know, with all the kind of lessons learned, so, as Sanjeev said, one of these big banks that we work with, that classical story of on-premise data warehousing, maybe a little bit of Hadoop, moved onto AWS, S3, Snowflake, that kind of setup, extremely intricate policies, but let's go reimagine how we can do this faster, right? What we like to talk about is, you're an organization, you need a design that, if you onboarded 1000 more data users, that's got to be way, way easier than the first 10 you onboarded, right? You got to get it to be easier over time, in a really, really significant way. >> Talk about the data authorization safety factor, because I can almost imagine all the intricacies of these different tools creates specialism amongst people who operate them. And each one might have their own little authorization nuance. Trend is not to have that siloed mentality. What's your take on clients that want to just "Hey, you know what? 
I want to have the maximum agility, but I don't want to get caught in the weeds on some of these tripwires around access and authorization." >> Yeah, absolutely, I think it's real important to get the balance of it, right? Because if you are an enterprise, or if you have diversive teams, you want them to have the ability to use tools as best of breed for their purpose, right? But you don't want to have it be so that every tool has its own access and provisioning and whatever, that's definitely going to be a security, or at least, a lot of friction for you to get things going. So we think about that really hard, I think we've seen great success with things like SSO and Okta, right? Unifying authentication. We think there's a very, very similar thing about to happen with authorization. You want that single control plane that can integrate with all the tools, and still get the best of what you need, but it's much, much easier (indistinct). >> Okta's a great example, if people don't want to build their own thing and just go with that, same with what you guys are doing. That seems to be the dots that are connecting you, Sanjeev. The ease of use, but yet the stability factor. >> Right. Yeah, because John, today I may want to bring up a SQL editor to go into Snowflake, just as an example. Tomorrow, I may want to use the Azure Bot, you know? I may not even want to go to Snowflake, I may want to go to an underlying piece of data, or I may use Power BI, you know, for some reason, and come from Azure side, so the point is that, unless we are able to control, in some sort of a centralized manner, we will not get that consistency. And security you know is all or nothing. You cannot say "Well, I secured my Snowflake, but if you come through HTFS, Hadoop, or some, you know, that is outside of my realm, or my scope," what's the point? So that is why it is really important to have a watertight way, in fact I'm using just a few examples, maybe tomorrow I decide to use a data catalog, or I use Denodo as my data virtualization and I run a query. I'm the same identity, but I'm using different tools. I may use it from home, over VPN, or I may use it from the office, so you want this kind of flexibility, all encompassed in a policy, rather than a separate rule if you do this and this, if you do that, because then you end up with literally thousands of rules. >> And it's never going to stop, either, it's like fashion, the next tool's going to come out, it's going to be cool, and people are going to want to use it, again, you don't want to have to then move the train from the compliance side this way or that way, it's a lot of hassle, right? So we have that one capability, you can bring on new things pretty quickly. Nong, am I getting it right, this is kind of like the trend, that you're going to see more and more tools and/or things that are relevant or, certain use cases that might justify it, but yet, AppSec review, compliance review, I mean, good luck with that, right? >> Yeah, absolutely, I mean we certainly expect tools to continue to get more and more diverse, and better, right? Most innovation in the data space, and I think we... This is a great time for that, a lot of things that need to happen, and so on and so forth. So I think one of the early goals of the company, when we were just brainstorming, is we don't want data teams to not be able to use the tools because it doesn't have the right security (indistinct), right? Often those tools may not be focused on that particular area. 
They're great at what they do, but we want to make sure they're enabled, they do some enterprise investments, they see broader adoption much easier. A lot of those things. >> And I can hear the sirens in the background, that's someone who's not using your platform, they need some help there. But that's the case, I mean if you don't get this right, there are some consequences, and I think one of the things I would like to bring up on next track is, to talk through with you guys is, the persona pigeonhole role, "Oh yeah, a data person, the developer, the DevOps, the SRE," you start to see now, developers and with cloud developers, and data folks, people, however they get pigeonholed, kind of blending in, okay? You got data services, you got analytics, you got data scientists, you got more democratization, all these things are being kicked around, but the notion of a developer now is a data developer, because cloud is about DevOps, data is now a big part of it, it's not just some department, it's actually blending in. Just a cultural shift, can you guys share your thoughts on this trend of data people versus developers now becoming kind of one, do you guys see this happening, and if so, how? >> So when, John, I started my career, I was a DBA, and then a data architect. Today, I think you cannot have a DBA who's not a developer. That's just my opinion. Because there is so much of CICD, DevOps, that happens today, and you know, you write your code in Python, you put it in version control, you deploy using Jenkins, you roll back if there's a problem. And then, you are interacting, you're building your data to be consumed as a service. People in the past, you would have a thick client that would connect to the database over TCP/IP. Today, people don't want to connect over TCP/IP necessarily, they want to go by HTTP. And they want an API gateway in the middle. So, if you're a data architect or DBA, now you have to worry about, "I have a REST API call that's coming in, how am I going to secure that, and make sure that people are allowed to see that?" And that was just yesterday. >> Exactly. Got to build an abstraction layer. You got to build an abstraction layer. The old days, you have to worry about schema, and do all that, it was hard work back then, but now, it's much different. You got serverless, functions are going to show way... It's happening. >> Correct, GraphQL, and semantic layer, that just blows me away because, it used to be, it was all in database, then we took it out of database and we put it in a BI tool. So we said, like BusinessObjects started this whole trend. So we're like "Let's put the semantic layer there," well okay, great, but that was when everything was surrounding BusinessObjects and Oracle Database, or some other database, but today what if somebody brings Power BI or Tableau or Qlik, you know? Now you don't have a semantic layer access. So you cannot have it in the BI layer, so you move it down to its own layer. So now you've got a semantic layer, then where do you store your metrics? Same story repeats, you have a metrics layer, then the data centers want to do feature engineering, where do you store your features? You have a feature store. 
And before you know, this stack has disaggregated over and over and over, and then you've got layers and layers of specialization that are happening, there's query accelerators like Dremio or Trino, so you've got your data here, which Nong is trying really hard to protect, and then you've got layers and layers and layers of abstraction, and networks are fast, so the end user gets great service, but it's a nightmare for architects to bring all these things together. >> How do you tame the complexity? What's the bottom line? >> Nong? >> Yeah, so, I think... So there's a few things you need to do, right? So, we need to re-think how we express security permanence, right? I think you guys have just maybe in passing (indistinct) talked about creating all these rules and all that kind of stuff, that's been the way we've done things forever. We get to think about policies and mechanisms that are much more dynamic, right? You need to really think about not having to do any additional work, for the new things you add to the system. That's really, really core to solving the complexity problem, right? 'Cause that gets you those orders of magnitude reduction, system's got to be more expressive and map to those policies. That's one. And then second, it's got to be implemented at the right layer, right, to Sanjeev's point, close to the data, and it can service all of those applications and use cases at the same time, and have that uniformity and breadth of support. So those two things have to happen. >> Love this universal data authorization vision that you guys have. Super impressive, we had a CUBE Conversation earlier with Nick Halsey, who's a veteran in the industry, and he likes it. That's a good sign, 'cause he's seen a lot of stuff, too, Sanjeev, like yourself. This is a new thing, you're seeing compliance being addressed, and with programmatic, I'm imagining there's going to be bots someday, very quickly with AI that's going to scale that up, so they kind of don't get in the innovation way, they can still get what they need, and enable innovation. You've got cloud migration, which is only going faster and faster. Nong, you mentioned speed, that's what CloudOps is all about, developers want speed, not things in days or hours, they want it in minutes and seconds. And then finally, ultimately, how's it scale up, how does it scale up for the people operating and/or programming? These are three major pieces. What happens next? Where do we go from here, what's, the customer's sitting there saying "I need help, I need trust, I need scale, I need security." >> So, I just wrote a blog, if I may diverge a bit, on data observability. And you know, so there are a lot of these little topics that are critical, DataOps is one of them, so to me data observability is really having a transparent view of, what is the state of your data in the pipeline, anywhere in the pipeline? So you know, when we talk to these large banks, these banks have like 1000, over 1000 data pipelines working every night, because they've got that hundred, 200 data sources from which they're bringing data in. Then they're doing all kinds of data integration, they have, you know, we talked about Python or Informatica, or whatever data integration, data transformation product you're using, so you're combining this data, writing it into an analytical data store, something's going to break. So, to me, data observability becomes a very critical thing, because it shows me something broke, walk me down the pipeline, so I know where it broke. 
Maybe the data drifted. And I know Okera does a lot of work in data drift, you know? So this is... Nong, jump in any time, because I know we have use cases for that. >> Nong, before you get in there, I just want to highlight a quick point. I think you're onto something there, Sanjeev, because we've been reporting, and we believe, that data workflows is intellectual property. And has to be protected. Nong, go ahead, your thoughts, go ahead. >> Yeah, I mean, the observability thing is critically important. I would say when you want to think about what's next, I think it's really effectively bridging tools and processes and systems and teams that are focused on data production, with the data analysts, data scientists, that are focused on data consumption, right? I think bridging those two, which cover a lot of the topics we talked about, that's kind of where security almost meets, that's kind of where you got to draw it. I think for observability and pipelines and data movement, understanding that is essential. And I think broadly, on all of these topics, where all of us can be better, is if we're able to close the loop, get the feedback loop of success. So data drift is an example of the loop rarely being closed. It drifts upstream, and downstream users can take forever to figure out what's going on. And we'll have similar examples related to buy-ins, or data quality, all those kind of things, so I think that's really a problem that a lot of us should think about. How do we make sure that loop is closed as quickly as possible? >> Great insight. Quick aside, as the founder CTO, how's life going for you, you feel good? I mean, you started a company, doing great, it's not drifting, it's right in the stream, mainstream, right in the wheelhouse of where the trends are, you guys have a really crosshairs on the real issues, how you feeling, tell us a little bit about how you see the vision. >> Yeah, I obviously feel really good, I mean we started the company a little over five years ago, there are kind of a few things that we bet would happen, and I think those things were out of our control, I don't think we would've predicted GDPR security and those kind of things being as prominent as they are. Those things have really matured, probably as best as we could've hoped, so that feels awesome. Yeah, (indistinct) really expanded in these years, and it feels good. Feels like we're in the right spot. >> Yeah, it's great, data's competitive advantage, and certainly has a lot of issues. It could be a blocker if not done properly, and you're doing great work. Congratulations on your company. Sanjeev, thanks for kind of being my cohost in this segment, great to have you on, been following your work, and you continue to unpack it at your new place that you started. SanjMo, good to see your Twitter handle taking on the name of your new firm, congratulations. Thanks for coming on. >> Thank you so much, such a pleasure. >> Appreciate it. Okay, I'm John Furrier with theCUBE, you're watching today's session presentation of AWS Startup Showcase, featuring Okera, a hot startup, check 'em out, great solution, with a really great concept. Thanks for watching. (calm music)
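As a rough illustration of the dynamic, attribute-based policy idea discussed above, here is a toy Python sketch. The user and column attributes, department names, and the allowed() check are all hypothetical and are not Okera's engine or API; the point is that a single policy written over attributes keeps working as new users, tools, and datasets are added, instead of multiplying into thousands of static rules.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    department: str
    region: str          # where the employee is based

@dataclass
class Column:
    name: str
    classification: str  # e.g. "public" or "pii"
    residency: str       # region the data must not leave

def allowed(user: User, column: Column) -> bool:
    """One attribute-based policy instead of a rule per user, tool, and dataset."""
    # Residency: data is only readable from its home region.
    if column.residency != user.region:
        return False
    # PII is limited to the departments that need it.
    if column.classification == "pii" and user.department not in {"risk", "compliance"}:
        return False
    return True

analyst = User("dana", "marketing", "us")
print(allowed(analyst, Column("email", "pii", "us")))     # False: PII outside permitted departments
print(allowed(analyst, Column("spend", "public", "us")))  # True
```

Onboarding the 1,001st analyst changes nothing in the policy itself; only the attribute values attached to that user and to the data matter.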

Published Date : Sep 22 2021


Knox Anderson, Sysdig | AWS Startup Showcase


 

(upbeat music) >> Welcome to the Q3 AWS Startup Showcase. I'm Lisa Martin. I'm pleased to welcome Knox Anderson, the VP of Product Management, from Sysdig, to the program. Knox, welcome. >> Thanks for having me, Lisa. >> Excited to uncover Sysdig. Talk to me about what you guys do. >> So Sysdig, we are a secure DevOps platform, and we're going to really allow customers to secure the entire lifecycle of an application from source to production. So give you the ability to scan IAC for security best practices, misconfiguration, help you facilitate things like image scanning as part of the build process, and then monitor runtime behavior for compliance or threats, and then finish up with incident response, so that you can respond to and recover from incidents quickly. >> What are some of the main challenges that you're solving and have those changed in the last 18 months? >> I'd say the main challenge people face today is a skills gap with Kubernetes. Everyone wants to use Kubernetes, but the amount of people that can operate those platforms is really difficult. And then getting visibility into the apps, that's running in those environments is also a huge challenge. So with Sysdig, we provide just an easy way to get your Kubernetes clusters instrumented, and then provide strong coverage for threat detection, compliance, and then observability for those environments. >> One of the things that we've seen in the last 18 months is a big change in the front landscape. So, I'm very curious to understand how you're helping customers navigate some of the major dynamics that are going on. >> Yeah, I'd say, the adoption of cloud and the adoption of Kubernetes have, have changed drastically. I'd say every single week, there's a different environment that has a cryptomining container. That's spun up in there. Obviously, if the price of a Bitcoin and things like that go up, there's more and more people that want to steal your resources for mining. So, we're seeing attacks of people pulling public images for Docker hub onto their clusters, and there's a couple of different ways that we'll help customers see that. We have default Falco rules, better vetted by the open source community to detect cryptomining. And then we also see a leading indicator of this as some of the metrics we, we collect for resource abuse and those types of things where you'll see the CPU spike, and then can easily identify some workload that could have been compromised and is now using your resources to mine Bitcoin or some other alt-coin. >> Give me a picture of a Sysdig customer. Help me understand the challenges they had, why they chose you and some of the results that they're achieving. >> Yeah, I used to say that we were very focused on financial services, but now everyone is doing Kubernetes. Really where we get introduced to an organization is they have their two or three clusters that are now in production and I'm going through a compliance audit, or it's now a big enough part of my estate that I need to get security for this Kubernetes and cloud environment. And, so we come in to really provide kind of the end-to-end tools that you would need for that compliance audit or to meet your internal security guidelines. So they'll usually have us integrated within their Dev pipelines so that developers are getting actionable data about what they need to do to make sure their workloads are as secure as possible before they get deployed to production. So that's part of that shift, left mindset. 
And then the second main point is around runtime detection. And that's where we started off by building our open source tool Falco, which is now a CNCF project. And that gives people visibility into the common things like, who's accessing my environment? Are there any suspicious connections? Are my workloads doing what's expected? And those types of things. >> Since the threat landscape has changed so much in the last year and a half, as I mentioned, are the conversations you're having with customers changing? Is this something at the C-suite or the board level from a security and a visibility standpoint? >> I think containers and Kubernetes and cloud adoption, under the big umbrella of digital transformation, is definitely a board-level objective. And then, that starts to trickle down to, okay, we're taking this app from my on-prem data center, it's now in the cloud, and it has to meet the twenty security mandates it's been meeting for the last fifteen years. What am I going to do? And so definitely there's practitioners that are coming in and picking tools for different environments. But I would definitely say that cloud adoption and Kubernetes adoption are something that everyone is trying to accelerate as quickly as possible. >> We've seen a lot of acceleration of cloud adoption in the last eighteen months here, right? Now, something that I want to get into with you is the recent executive order, the White House getting involved. How is this changing the cybersecurity discussion across industries? >> I really like how they kind of brought better awareness to some of the cybersecurity best practices. It's aligned with a lot of the NIST guidance that's come out before, but now cloud providers, private sector, public sector are all looking at this as kind of a new set of standards that we need to pay attention to. So, the fact that they call out things like unauthorized access, you can look at that with Kubernetes audit logs, CloudTrail, a bunch of different things. And then, the other term that I think you're going to hear a lot of, at least within the federal community and the tech community, over the next year, is this thing called an 'SBOM', which is a software bill of materials. And it's basically saying, "as I'm delivering software to some end user, how can I keep track of everything that's in it?" A lot of this probably came out of SolarWinds, where now you need to have a better view of what are all the different components, how are those being tracked over time, what's the life cycle of that? And so the fact that things like SBOMs are being explicitly called out is definitely going to raise a lot of the best practices as organizations move. And then the last point, money always talks. So, when you see AWS, Azure, Google all saying, we're putting 10 billion plus dollars behind this for training and tooling and building more secure software, that's going to raise the cybersecurity industry as a whole. And so it's definitely driving a lot of investment and growth in the market. >> It's validation. Absolutely. Talk to me about some of the, maybe some of the leading edges that you're seeing in private sector versus public sector, of folks and organizations who are going, alright, we've got to change. We've got to adopt some of these mandates because the landscape is changing dramatically. >> I think Kubernetes adoption goes hand in hand with that, where it's a declarative system. 
So, the way you define your infrastructure in source code repos is the same way that runs in production. So, things like auditing are much easier, being able to control what's in your environment. And then containers, it's much easier to package it once and then deploy it wherever you want. So container adoption really makes it easier to be more secure. It's a little tricky, where normally, like, you move to something that's bleeding edge, and a lot of things become much harder. And there's operational parts that are hard about Kubernetes. But, from a pure security perspective, the apps are meant to do one thing. It should be easy to profile them. And so definitely I think the adoption of more modern technology and things like cloud services and Kubernetes is a way to be more secure as you move into these environments. >> Right? Imagine a way to be more secure and faster as well. I want to dig in now to the Sysdig AWS partnership. Talk to me about that. What do you guys do together? >> AWS is a great partner. We, as a company, wouldn't be able to deliver our software without AWS. So we run our SaaS services on Amazon. We're in multiple regions around the globe. So we can deliver that to people in Europe and meet all the GDPR requirements and those kinds of things. So from a vendor partnership perspective, it's great there. And then on the co-development side, we've had a lot of success and a fun time working with the Fargate team. Fargate is a service on Amazon that makes it easier for you to run your containers without worrying about the underlying compute. And so they faced the challenge about a year and a half ago where customers didn't want to deploy on Fargate because they couldn't do deeper detection and incident response. So we worked together to figure out different hooks that Amazon could provide to open source tools like Falco or commercial products like Sysdig. So then customers could meet those incident response needs, and those detection needs, for Fargate. And really, we're seeing more and more Fargate adoption as more and more companies are moving to the cloud. And, if you don't want to worry about managing infrastructure, a service like Fargate is a great place to get started there. >> Talk to me a little bit about your joint go-to-market. Is there a joint go-to-market, I should say? >> Yeah, we sell through the AWS Marketplace. So customers can procure Sysdig software directly through AWS. It'll end up on your AWS bill. You can kind of take some of your committed spend and draw it down there. So that's a great way. And then we also work closely with different solutions architect teams, or people who are more boots on the ground with different AWS customers, trying to solve those problems like PCI compliance and Fargate, or just building a detection and response strategy for EKS, and those types of things. >> Let's kind of shift gears now and talk about the role of open source in security. What is Sysdig's perspective? >> Yeah, so open source as a platform is something that's driving more and more adoption these days. So, if you look at a fundamental platform like Kubernetes, it has a lot of security capabilities baked in; there's admission controllers, there's network policies. And so you used to buy a firewall or something like that, but with Kubernetes, you can enforce service-to-service communication, you put a service mesh on top of that, and you can almost pretend it's a WAF sometimes. 
So open source is building a lot of fundamental platform-level security, and by default. And then the second thing is, we're also seeing a rise of open source tools that traditionally had always come from commercial products. So, there's things like OPA, which handles authorization, which is becoming a standard. And then there's also projects like Falco, that provide an easy way for people to do IDS use cases and auditing use cases in these environments. >> Last question for you. Talk to me about some of the things that you're most excited about that are coming down here. We are at, this is, our Q3 AWS Startup Showcase, but what are some of the things that you're most excited about in terms of being able to help customers resolve some of those challenges even faster? >> I think there's more and more Kubernetes standardization that's going on. So a couple of weeks ago, Amazon released EKS Anywhere, which allows companies who still have an on-prem footprint to run Kubernetes locally the same way that they would run it in the cloud. That's only going to increase cloud adoption, because once you get used to just doing something that matches the cloud, the next question you're going to answer is, okay, how fast can I move that to the cloud? So that's something I'm definitely really excited about. And then, also, AWS is putting a lot of investment behind tools like Security Hub. And we're doing a lot of native integrations where we can publish different findings and events into Security Hub, so that different practitioners who are used to working in the AWS console can remediate those quickly without ever really leaving that native AWS ecosystem. And that's a trend I expect to see more and more of over time, as well. >> So a lot of co-innovation coming up with AWS. Where can folks go to learn more information? Is there a specific call to action that you'd like to point them to? >> The Sysdig blog is one of the best sources that I can recommend. We have a great mixture of technical practitioner content, some just one-oh-one level, it's, I'm starting with container security, what do I need to know? So I'd say we do a good job of touching the different areas, and then really the best way to learn about anything is to get hands-on. We have a SaaS trial. Most of the security vendors have something behind a paywall; you can come in, get started with us for free, and start uncovering what's actually running in your infrastructure. >> Knox, let's talk about the secure DevOps movement. As we see that DevOps is becoming more and more common, how is it changing the role of security? >> Yeah, so a lot of traditional security requirements are now getting baked into what a DevOps team does day-to-day. So the DevOps team is doing things like implementing IAC. So your infrastructure is code, and no changes are manually made to environments anymore. It's all done by a Terraform file, a CloudFormation template, some code that's representing what your infrastructure looks like. And so now security teams, or sorry, these DevOps teams have to bake security into that process. So they're scanning their IAC, making sure there's not elevated privileges, it's not doing something it shouldn't. DevOps teams, also, traditionally, now are managing your CI/CD pipeline. And so that's where they're integrating scanning tools in as well, to go in and give actionable feedback to the developers around things like, if there's a critical vulnerability with a fix, I'm not going to push that to my registry. 
So it can be deployed to production. That's something a developer needs to go in and change. So really a lot of these kind of actions and the day-to-day work is driven by corporate security requirements, but then DevOps has the freedom to go in and implement it however they want. And this is where Sysdig adds a lot of value because we provide both monitoring and security capabilities through a single platform. So that DevOps teams can go into one product, see what they need for capacity planning, chargebacks, health monitoring, and then in the same interface, go in and see, okay, is that Kubernetes cluster meeting my SOC 2 controls? How many images have my developers submitted to be scanned over the past day? And all those kinds of things without needing to learn to how to use four or five different tools? >> It sounds to me like a cultural shift almost in terms of the DevOps, the developers working with security. How does Sysdig help with that? If that's a cultural shift? >> Yeah, it's definitely a cultural shift. I see some people in the community getting angry when they see oh we're hiring for a Head of DevOps. They're like DevOps is a movement, not a person. So would totally agree with that there, I think the way we help is if you're troubleshooting an issue, if you're trying to uncover what's in your environment and you are comparing results across five different products, it always turns into kind of a point the finger, a blame game. There's a bunch of confusion. And so what we think, how we help that cultural shift, is by bringing different teams and different use cases together and doing that through a common lens of data, user workflows, integrations, and those types of things. >> Excellent. Knox, thank you for joining me on the program today, sharing with us, Sysdig, what you do, your partnership with AWS and how customers can get started. We appreciate your information. - Thank you. For Knox Anderson. I'm Lisa Martin. You're watching the cube.
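As a rough sketch of the "CPU spike as a leading indicator of cryptomining" signal mentioned in this conversation, here is a naive Python example. The metric fields, process names, and threshold are made up for illustration; actual detection in an environment like the one described would rely on vetted Falco rules and the platform's own telemetry, not code like this.

```python
# Hypothetical workload samples; field names and thresholds are invented.
KNOWN_MINER_PROCESSES = {"xmrig", "minerd"}

def looks_like_mining(sample: dict) -> bool:
    """Flag a workload whose CPU is far above its own baseline, or that
    runs a process name commonly associated with miners."""
    cpu_ratio = sample["cpu_now"] / max(sample["cpu_baseline"], 0.01)
    return cpu_ratio > 5.0 or sample["process"] in KNOWN_MINER_PROCESSES

samples = [
    {"pod": "web-7f9c",   "process": "nginx", "cpu_now": 0.4, "cpu_baseline": 0.3},
    {"pod": "batch-2a1d", "process": "xmrig", "cpu_now": 3.9, "cpu_baseline": 0.2},
]

for s in samples:
    if looks_like_mining(s):
        print(f"alert: investigate {s['pod']} ({s['process']})")
```

A resource-abuse heuristic like this only narrows the search; confirming a compromise still means looking at the runtime events and connections for the flagged workload.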

Published Date : Sep 22 2021


Victor Chang, ThoughtSpot | AWS Startup Showcase


 

(bright music) >> Hello everyone, welcome to today's session of the "AWS Startup Showcase" presented by theCUBE, featuring ThoughtSpot for this track on data and analytics. I'm John Furrier, your host. Today, we're joined by Victor Chang, VP of ThoughtSpot Everywhere and Corporate Development for ThoughtSpot. Victor, thanks for coming on and thanks for presenting, talking about building interactive data apps with ThoughtSpot Everywhere. Thanks for coming on. >> Thank you, it's my pleasure to be here. >> So digital transformation is a reality. We're seeing it at large scale. More and more reports are being turned around fast. People are moving to modern application development, and if you don't have AI, you don't have automation, you don't have the analytics, you're going to get slowed down by other forces, even inside companies. So data is driving everything, data is everywhere. What's the pitch to customers that you guys are making as everyone realizes, "I've got to go faster, I've got to be more secure," (laughs) "and I don't want to get slowed down"? What's the- >> Yeah, thank you, John. No, it's true. I think with digital transformation, what we're seeing basically is everything is done in the cloud, everything gets done in applications, and everything has a lot of data. So basically what we're seeing is, if you look at companies today, whether you are a SaaS emerging-growth startup or a traditional company, the way you engage with your customers, that first impression, is usually through some kind of an application, right? And the application collects a lot of data from the users, and the users have to engage with that. So for most of the companies out there, one of the key things they really have to do is find a way to make sense of their data, get value out of it for the users, and create a delightful and engaging experience. And usually, that's pretty difficult these days. You know, if you are an application company, it doesn't really matter what you do, whether you're in hotel management or you're a productivity application, analytics is not typically your strong suit. And where ThoughtSpot Everywhere comes in is, instead of you having to build your own analytics and interactive experience with your data, ThoughtSpot Everywhere helps deliver a truly self-service, interactive experience and transform your application into a data application. And with digital transformation these days, all applications have to engage, all applications have to delight, and all applications have to be self-service. And with analytics, ThoughtSpot Everywhere brings that to your customers and your users. >> So a lot of the mainstream enterprises, and even SMBs, small businesses that are in the cloud, are scaling up and seeing the benefits. What's the problem that you guys are targeting? What's the use case? When does a potential customer know that ThoughtSpot needs to be called in to work with them? Is it that they want low code, no code? Is it more democratization? What's the problem statement, and how do you guys turn that problem being solved into an opportunity and a benefit? >> I think the key problem we're trying to solve is that most applications today, when they try to deliver analytics, what they're really delivering is usually a static representation of some data, some answers, and some insights that are created by someone else.
So usually the company would present, you know, if you think about it, if you go to your banking application, they usually show some pretty charts for you, and then it sparks your curiosity about your credit card transactions or your banking transactions over the last month. Naturally, for me at least, I would then want to click in and ask the next question: which transactions fall into this category, at what time, you know, change the categories a bit, and usually you're stuck. So what happens with most applications? The challenge is that because someone else is asking the questions and the user is just consuming static insights, you whet their appetite and you don't satisfy it. So application users typically get stymied, they're not satisfied, and then they leave the application. Where ThoughtSpot comes in, ThoughtSpot's true differentiation, is our ability to create an interactive curiosity journey with the user. So with ThoughtSpot in general, if you buy it standalone, that's the experience that we really stand by, and now you can deliver it in your application, where the user, any user, a business user, untrained, without the help of an analyst, can ask their own questions. So going back to my example, if it's in your banking app and you see some kind of visualization around expense transactions, you can dig in. What about last month? What about last week? Which transactions? Which merchant? You know, all those things, you can continue your curiosity journey so that the business user and the app user ask their own questions instead of an analyst who's sitting in the company behind a desk asking your questions for you. >> And that's the outcome that everyone wants. I totally see that and everyone kind of acknowledges that, but I've got to then ask you, okay, how do you make that happen? Because you've got the developers who essentially have to make that happen, and so the cloud is essentially SaaS, right? So you've got a SaaS kind of marketplace here. The apps can be deployed very quickly, but in order to do that, you kind of need self-service and you've got to have good analytics, right? So self-service, you guys have that. Now on the analytics side, most people have to build their own or use an existing tool, and tools become specialist, you know what I'm saying? So you're in this kind of weird cycle of, "Okay, I've got to deploy and spend resources to build my own, which could be long and tiresome." >> Yeah. >> "And/or rely on other tools that could be good, but then I have too many tools, and that creates specialist kind of silos." These seem to be the trends. Do you agree with that? And if customers have this situation and you guys come in, can you help there? >> Absolutely, absolutely. So, you know, if you think about the two options that you just laid out, you could either roll your own, kind of build your own, and that's really hard. If you think about the analytics industry, it's a $20 to $30 billion industry with a lot of companies that specialize in building analytics, so it's a really tough thing to do. So it doesn't really matter how big of a company you are; even if you're a Microsoft or an Amazon, it's really hard to actually build analytics internally. So for a company trying to do it on their own, hiring the talent and also coming up with that interactive experience, most companies fail. So what ends up happening is you go over budget, the time to market ends up taking much longer, and then the experience isn't engaging for the users, and they still end up leaving your app with a bad impression.
Now you can also buy something. There are competitors of ours who offer embedded analytics options as well, but the mainstream paradigm today with analytics is delivering, as we talked about earlier, static visualizations of insights that are created by someone else. So that certainly is an option. You know, where ThoughtSpot Everywhere really stands out above everything else is that our technology is fundamentally built for search, interactivity, and a cloud-scale data experience that static visualizations today can't really deliver. So you could deliver a static dashboard purchased from one of our competitors, or, if you really want to engage your users, again, today is all about self-service, it's all about interactivity, and only ThoughtSpot's architecture can deliver that embedded in a data app for you. >> You know, one of the things I'm really impressed by with you guys at ThoughtSpot is that you see data, as I do, as a strategic advantage for companies. People say that as kind of a cliche, or a punchline, or some sort of business statement. But when you start getting into new kinds of workflows, that's the intellectual property. If you can enable people, with very little code, low-code, no-code, to just roll their own analysis and insights from a platform, you're then creating intellectual property for the company. So this is kind of a new paradigm. And so a lot of CIOs that I talk to, or even CSOs on the security side, they kind of want this but maybe can't get there overnight. So if I'm a CIO, Victor, who do I point to on my team to engage with you guys? Like, okay, you sold me on it, I love the vision. This is definitely where we want to go. Who do I bring into the meeting? >> I think that in any application, in any company actually, there are usually product leaders and developers that create applications. So, you know, if you are a SaaS company, obviously your core product team would be the right team we want to talk to. If you're a traditional enterprise, you'd be surprised, actually, how many traditional enterprises that have been around for 50, 100 years, you might think of them as selling a different product, but actually they have a lot of digital applications and product teams within their company as well. For example, you know, we have customers like a big tractor company. You can probably imagine who they might be. They actually have digital applications, in which they use ThoughtSpot, that they offer to the dealers so that the dealers can look at their businesses with the tractors. We also have a big telecom company, for example; you would think of telecom as the whole service, but they have a billing application that they offer to their merchants to track their billing. So what I'm saying is, really, whether you're a software company where that's your core product, or you're a traditional enterprise that has digital applications underneath to support your core product, there are usually product teams, product leaders, and developers. Those are the ones that we want to talk to, and we can help them realize a better vision for the product that they're responsible for. >> I mean, the reality is all applications need analytics, right, at some level. >> Yes. >> Full instrumentation, at a minimum log everything, and then the ability to roll that up; that's where people always tell me the challenge seems to be. Okay, I can log everything, but now how do I have a...
And then after the fact they say, "Give me a report, what's happening?" >> That's right. >> They get stuck. >> They get stuck 'cause you get that report and, you know, someone else asked that question for you, and you're probably a curious person. I'm a curious person. You always have that next question, and then usually if you're in a company, let's just say you're a CIO, you're probably used to having a team of analysts at your fingertips, so at least if you have a question and you don't like the report, you can find two people, five people, and they'll respond to your request. But if you're a business application user, you're sitting there, and I don't know about you, but I don't remember the last time I actually went through and filed a support ticket in my application, or really read detailed documentation describing the features in an application. Users like to be self-taught, self-service, and they like to explore on their own. And there's no analyst there, there's no IT guy that they can lean on, so if they get a static report of the data, they'll naturally always want to ask more questions, and then they're stuck. So it's that kind of unsatisfying, "I have some curiosity, you sparked my questions, I can't answer them." That's what I think a lot of companies struggle with. That's why a lot of applications are data intensive but don't deliver any insights. >> It's interesting, and I like this anywhere idea, because you think about what you guys do, applications always start small, right? I mean, applications have got to be built. So you guys, your solution really fits for small startups and businesses all the way up to large enterprises, and a large enterprise could have hundreds and thousands of applications which look like small startups. >> Absolutely, absolutely. You know, that's the great thing about ThoughtSpot Everywhere, which takes the engine around ThoughtSpot that we built over the last eight or nine years and can deliver it in any kind of context. 'Cause nowadays, as opposed to 10, 15, 20 years ago, everything does run in applications these days. We talked about digital transformation at the beginning of the call. That's really what it means: today, the workflows of business are conducted in applications, no matter who you're interacting with. And so we have all these applications. A lot of times, yes, if you have big analytical problems, you can take the data and put it into a different context, like ThoughtSpot's own UI, and do a lot of analytics, but we also understand that a lot of times customers and users like to analyze in the context of the workflow of the application they're actually working in. And in that situation, having the analytics embedded right next to their workflow is something that a lot of users, especially business users that are less trained, would like to do right in the context of their business productivity workflow. And so that's where ThoughtSpot Everywhere, I know the terminology is a little self-serving, but with ThoughtSpot Everywhere, we think ThoughtSpot could actually be everywhere in your business workflow. >> That's a great value proposition. I'm going to put my skeptic hat on, challenge you, and say, okay, I don't want to... Prove it to me, what's in it for me? And how much is it going to cost me, how do I engage? So, you know- >> Yeah. >> What's in it for me as the buyer?
If people want to buy this, if I want to use it, I'm going to get engaged with ThoughtSpot; how much does it cost, and what does the engagement look like? >> So, what's in it for you is easy. If you have data in the cloud and you have an application, you should use ThoughtSpot Everywhere to deliver a much more valuable, interactive experience for your users' data. So that's clear. How do you engage? We have very flexible pricing models. If your data's in the cloud, you can purchase with us, we'll land small, and then grow with your consumption. You know, that's always the kind of thing, "Hey, allow us to prove it to you, right?" We start, and then as users start to consume, you don't really have to pay a big bill until we see the consumption increase. So we have consumption-based and data capacity-based pricing models. And you know, one of the real advantages that we have for cloud applications is for developers. Often, even in the past for ThoughtSpot, we haven't always made that development experience very easy; you had to embed a relatively heavy product. But the beauty of ThoughtSpot is that from the beginning, we were designed with a modern, API-based kind of architecture. Now, a lot of our BI competitors were designed and developed in the desktop-server kind of era, where everything you embed is very monolithic. But because we have an API-driven architecture, we've invested a lot of time to wrap it in a seamless developer SDK, plus very easy-to-use REST APIs, plus an interactive portal to make that development experience really simple. So if you're a developer, now you really can get from zero to ThoughtSpot embedded in your data app, often in less than 60 minutes. >> John: Yeah. >> So that's also a great proposition for modern leaders: your data's in the cloud, you've got developers with an SDK, and it can get you into an app very quickly. >> All right, so bottom line, if you're in the cloud, you've got to get the data embedded in the apps, data everywhere with ThoughtSpot. >> Yes. >> All right, so let's unpack it a little bit, because I think you just highlighted what I think is the critical factor for companies as they evaluate the plethora of tools they have and figure out how to streamline and be cloud native at scale. You mentioned static, old BI competitors to the cloud. They also have teams of analysts that can make the executives feel like all of the reports are dynamic, but they're not, they're just static. But look, I know you guys have a relationship with Snowflake, and not to bring them into this but to highlight it: Snowflake disrupted the data warehouse. >> Yes. >> Because they're in the cloud, and they refactored, leveraging cloud scale, to provide a really easy, fast type of value for their product, and then the rest is history. They're public, they're worth a lot of money. That's kind of an example of what's coming for every category of companies. There's going to be that. In fact, Jerry Chen, who just gave the keynote here at the event, has a big talk called "Castles In The Cloud": you can build a moat in the cloud with your application if you have the right architecture. >> Absolutely. >> So this is a new thing, and it's almost like beachfront property: whoever gets there first wins the category. >> Exactly, exactly. And we think the timing is right now.
You know, Snowflake, and even earlier, obviously, Redshift, which really started the whole cloud data warehouse wave, and now you're seeing even Databricks with their Delta Lake trying to get into that swim lane as well. Right now, all of a sudden, all these things that have been brewing in the background in the data architecture are becoming mainstream. We're now seeing even large financial institutions starting to test and think about moving their data into a cloud data warehouse. And once you're in the cloud data warehouse, all the benefits of its elasticity and performance can really get realized at the analytics layer. And what ThoughtSpot really brings to the table is that we've always been a search-based paradigm. When you think about search, it doesn't really matter what kind of search you're doing, it's about digging really deep into a lot of data and delivering interactive performance. Those things, no matter what data architecture we sit on, have always been really fundamental to how we build our product. And that translates extremely well when you have your data in a Snowflake or a Redshift with billions of rows in the cloud. We're the only company, we think, that can deliver interactive performance on all the data you have in a cloud data warehouse. >> Well, I want to congratulate you guys. I'm really a big fan of the company. I think a lot of companies are misunderstood until they become big, and there was, "Why didn't everyone else do that search? Wait, I thought they were a search engine?" Being search-centric is an architectural philosophy, I know, a North Star for your company, but that creates value, right? So if you look at, say, Snowflake, Redshift, and Databricks, you mentioned a few of those, you have a couple of things going on. You have multiple personas kind of living well together, the developers and the data people. Normally, they hated each other, right? (giggles) Or maybe they didn't hate each other, but there's conflict, there's always cultural tension between the data people and the developers. Now, you have developers who are becoming data native, if you will, just by embedding that in. So what Snowflake and these guys are doing is interesting. You can be a developer and program and get great results and have great performance. The developers love Snowflake, they love Databricks, they love Redshift. >> Absolutely. >> And it's not that hard, and the results are powerful. This is a new dynamic. What's your reaction to that? >> Yeah, no, I absolutely believe that. I think part of the beauty of the cloud is, I like your analogy, bringing people together. So being in the cloud, first of all, the data is accessible by everyone, everywhere. You just need a browser and the right permissions and you can get your data, and also the different kinds of roles all come together. Best-of-breed tools get blended together through APIs. Everything just becomes a lot more accessible and collaborative, and I know that sounds a little kumbaya, but the great thing about the cloud is it does blur the lines between roles. Everyone can do a little bit of everything, and everyone can access a little bit more of their data and get more value out of it. >> Yeah. >> So all of that, I think that's the... If you talk about digital transformation, you know, that's really at the crux of it.
>> Yeah, and I think at the end of the day, speed and high-quality applications are the result, and I think the speed game, with automation being built in on data, plays a big role in that. It's super valuable, and otherwise people will get slowed down. People get kind of angry, like, I don't want to get slowed down, I want to go faster, because automation and AI are going to make things go faster on the dev side, certainly with DevOps; the cloud's proven that. But if you're an old-school IT department (giggles) or data department, you're talking weeks, not minutes, for results. >> Yes. >> I mean, that's the powerful scale we're talking about here. >> Absolutely. And you know, if you think about it, if it's days versus minutes, it sounds like a lot, but also think about each question, 'cause usually questions come in minutes. Every minute you have a new question, and if each one then adds days to your journey, that over time just gets amplified; it's just not sustainable. >> Okay- >> So now in the cloud world, you need to have things delivered on demand as you think of them. >> Yeah, and of course you need the data from a security standpoint as well, and to build that in. Chances are people shift left. I've got to ask you, if I'm a customer, I want to just run this by you. You mentioned you have an SDK, and obviously you're talking to developers. So I'm working with ThoughtSpot, I'm the leader of the organization, I'm like, "Okay, what's the headroom? What's going to happen as the bridge to the future gets built? So I'm going to ride with ThoughtSpot." You mentioned the SDK; how much more can I do to build and wrap around ThoughtSpot? Because obviously, this kind of value proposition is enabling value. >> Yes. >> So I want to build around it. How do I get started and where does it go? >> Yeah, well, you can get started as easily as starting with our free trial and just playing around with it. And you know, the beauty of the SDK, and when I talk about how ThoughtSpot is built with an API-driven architecture, is, hey, there's a lot of magic and features built into the ThoughtSpot core product. You could embed all of that into an application if you would like, or you could also use our SDK and our APIs to say, "I just want to embed a couple of visualizations," start with that, and allow the users to dig into that. You could also embed the whole search feature and allow users to ask their own questions, or you can have different role-based kinds of experiences. So all of that is very flexible and very dynamic, and with the SDK, it's low-code in the sense that it creates a JavaScript portal for you, and even for me, who hasn't coded in a long time, I can just copy and paste some JavaScript code and see my application reflecting it in real time. So it's really kind of a modern experience that developers in today's world appreciate, and because all the data's in the cloud, and in the cloud applications are built as services connected through APIs, we really think that this is the modern way that developers would get started. And analysts, even analysts who don't have strong developer training, can get started with our developer portal. So really, it's a very easy experience, and you can customize it in whichever way you want to suit your application's needs. >> Yeah, I think you don't have to be a developer to really understand the basic value of reuse and discovery of services. I think that's one of the things we hear from developers all the time: "I had no idea that Victor did that code. Why do I have to rewrite that?"
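To make that embedding discussion concrete, here is a minimal sketch of what the copy-and-paste developer experience can look like with ThoughtSpot's Visual Embed SDK. The package name and calls (init, AuthType, SearchEmbed, render) follow the SDK's documented pattern, but the host URL, auth mode, and worksheet GUID below are placeholder assumptions for illustration, not details taken from this conversation.

```typescript
// A minimal sketch (not production code): embedding ThoughtSpot's search
// experience in a web page with the Visual Embed SDK. The host URL, auth
// mode, and worksheet GUID are placeholders/assumptions for illustration.
import { init, AuthType, SearchEmbed } from "@thoughtspot/visual-embed-sdk";

// One-time initialization, pointing the SDK at your ThoughtSpot instance.
init({
  thoughtSpotHost: "https://your-company.thoughtspot.cloud", // placeholder host
  authType: AuthType.None, // assumes an existing session; SSO or trusted auth are alternatives
});

// Render a self-service search experience inside an existing <div id="ts-search">.
const searchEmbed = new SearchEmbed("#ts-search", {
  frameParams: { width: "100%", height: "600px" },
  dataSources: ["<worksheet-guid>"], // placeholder GUID of the worksheet to search against
});

searchEmbed.render();
```

The same SDK also documents companion embed classes for individual visualizations, pinboards/liveboards, and the full application, which lines up with Victor's point about starting with a couple of embedded visualizations and growing into the complete search experience.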
So you see reuse come up a lot around automation, where code is building with code, right? So you have this new vibe, and you need data to discover that search paradigm mindset. How prevalent is that on the minds of customers? Are they just trying to hold on and survive through the pandemic? (giggles) >> Well, customers are definitely thinking about it. You know, the challenge is change is always hard, you know? So it takes time for people to see the possibilities and then work through them, especially in larger organizations, but even in smaller organizations, people think about, "Well, how do I change my workflow?" and then, "How do I change my data pipeline?" You know, those are the kinds of things where, you know, it takes time, and that's why Redshift has been around since 2012, I believe, but it took years before enterprises really started saying, "The benefits are so profound that we really have to change the workflows, change the data pipelines to make it work, because we can't hold on to the old ways." So it takes time, but when the benefits are so clear, it's really kind of a snowball effect, you know? Once you change a data warehouse, you've got to think about, "Do I need to change my application architecture?" Then, "Do I need to change the analytics layer?" And then, "Do I need to change the workflow?" And then you start seeing new possibilities, because it's all more flexible and you can add more features to your application, and it's just kind of a virtuous cycle, but it starts with taking that first step, to your point, of considering migrating your data into the cloud, and we're seeing that across all kinds of industries now. I think nobody's holding back anymore. It just takes time; some are slower and some are faster. >> Well, all apps are data apps, and it's interesting, I wrote a blog post in 2017 called "Data Is The New Developer Kit," meaning it was a vision statement that data will be part of how apps get built, like software; it'll be data as code. And you guys are doing that. You're allowing data to be a key ingredient for interactivity with analytics. This is really important. Can you just give us a use case example of how someone builds an interactive data app with ThoughtSpot Everywhere? >> Yeah, absolutely. So I think there are certain applications that naturally relate to data, you know, I talked about banking and those kinds of things. Like when you use it, you just kind of inherently know, "Hey, there's tons of data, can I get some?" But a lot of times we're seeing, you know, for example, one of our customers is a very small company that provides software for personal trainers and small fitness studios. You know, you would think, "Oh well, these are small businesses. They don't have a ton of data. A lot of them would probably just run on QuickBooks or Excel and all of that." But they could see the value: once a personal trainer conducts his business on cloud software, then he'll realize, "Oh, I don't need to download any more data. I don't need to run Excel anymore, the data is already there in the software." And hey, on top of that, wouldn't it be great if you had an analytics layer that can analyze how your clients paid you, where your appointments are, and so forth? And that's even just for, again, like I said, no disrespect to personal trainers, but even for one or two personal trainers, hey, they can have analytics and they can be analysts on their own business data. >> Yeah, why not?
Everyone's got Fitbits and watches, and they could have that built into their studio APIs for the trainers. They can get collaboration. >> That's right. So there's no application you can think of that's too simple, or that you might think is too traditional or whatnot, for analytics. Every application now can become a very engaging data application. >> Well, Victor, it's great to have you on. Obviously a great conversation around ThoughtSpot Everywhere. And as someone who runs corp dev for ThoughtSpot, for the folks watching that aren't ThoughtSpot customers yet, what should they know about you guys as a company that they might not know about or should know about? What's not being talked about that they may not understand, and what are other people saying about ThoughtSpot? >> So, a couple of things. One is, there's a lot out there around search in general; search is a very broad term. But, you know, I go back to what I was saying earlier: really, what differentiates ThoughtSpot is not just that we have a search bar put on top of some kind of analytics UI. Really, it's that the fundamental technical architecture underlying it is built from the ground up for searching large data and for granular, detailed exploration of your data. That makes us truly unique, and nobody else can really do search if they're not built on that technical foundation. The second thing is, we're very much a cloud-first company now, and over the past few years, because of the growth of these high-performing data warehouses like Snowflake and Redshift, we're able to really focus on what we do best, which is the search and the query-processing performance on the front end, and we're fully engaged with cloud platforms now. So if you have data in the cloud, we are the best analytics front end for that. >> Awesome, well, thanks for coming on. Great to feature you guys here in the "Startup Showcase," great conversation. ThoughtSpot, leading company, hot startup. We did their event with them with theCUBE a couple of months ago. Congratulations on all your success. Victor Chang, VP of ThoughtSpot Everywhere and Corporate Development, here on theCUBE and the "AWS Startup Showcase." Go to awsstartups.com and be part of the community; we're doing these quarterly, featuring the hottest startups in the cloud. I'm John Furrier, thanks for watching. >> Victor: Thank you so much. (bright music)

Published Date : Sep 22 2021
