Christopher Penn, SHIFT Communications | IBM CDO Strategy Summit 2017
>> Live from Boston, Massachusetts, it's theCUBE, Covering IBM Chief Data Officer Summit. Brought to you by IBM. >> Welcome back to theCUBE's live coverage of IBM Chief Data Strategy Summit. My name is Rebecca Knight, and I'm here with my co-host Dave Vellante, we are joined by Christopher Penn, the VP of Marketing Technology at SHIFT Communications, here in Boston. >> Yes. >> Thanks so much for joining us. >> Thank you for having me. >> So we're going to talk about cognitive marketing. Tell our viewers: what is cognitive marketing, and what your approach to it is. >> Sure, so cognitive marketing essentially is applying machine learning and artificial intelligence strategies, tactics and technologies to the discipline of marketing. For a really long time marketing has been kind of known as the arts and crafts department, which was fine, and there's certainly, creativity is an essential part of the discipline, that's never going away. But we have been tasked with proving our value. What's the ROI of things, is a common question. Where's the data live? The chief data officer would be asking, like, who's responsible for this? And if we don't have good answers to those things, we kind of get shown the door. >> Well it sort of gets back to that old adage in advertising, I know half my marketing budget is wasted, I just don't know which half. >> Exactly. >> So now we're really able to know which half is working. >> Yeah, so I mean, one of the more interesting things that I've been working on recently is using what's called Markov chains, which is a type of very primitive machine learning, to do attribution analysis, to say what actually caused someone to become a new viewer of theCUBE, for example. And you would take all this data that you have from your analytics. Most of it that we have, we don't really do anything with. You might pull up your Google Analytics console, and go, "Okay, I got more visitors today than yesterday." but you don't really get a lot of insights from the stock software. But using a lot of tools, many of which are open source and free of financial cost, if you have technical skills you can get much deeper insights into your marketing. >> So I wonder, just if we can for our audience... When we talk about machine learning, and deep learning, and A.I., we're talking about math, right, largely? >> Well so let's actually go through this, because this is important. A.I. is a bucket category. It means teaching a machine to behave as though it had human intelligence. So if your viewers can see me, and disambiguate me from the background, they're using vision, right? If you're hearing sounds coming out of my mouth and interpreting them into words, that's natural language processing. Humans do this naturally. It is now trying to teach machines to do these things, and we've been trying to do this for centuries, in a lot of ways, right? You have the old Mechanical Turks and stuff like that. Machine learning is based on algorithms, and it is mostly math. And there's two broad categories, supervised and unsupervised. Supervised is you put a bunch of blocks on the table, kids blocks, and you hold the red one, and you show the machine over and over again this is red, this is red, and eventually you train it, that's red. Unsupervised is- >> Not a hot dog. (Laughter) >> This is an apple, not a banana. Sorry CNN. >> Silicon Valley fans. 
>> Unsupervised is there's a whole bunch of blocks on the table, "Machine, make as many different sequences as possible," some are big, some are small, some are red, some are blue, and so on, and so forth. You can sort, and then you figure out what's in there, and that's a lot of what we do. So if you were to take, for example, all of the comments on every episode of theCUBE, that's a lot, right? No humans going to be able to get through that, but you can take a machine and digest through, just say, what's in the bag? And then there's another category, beyond machine learning, called deep learning, and that's where you hear a lot of talk today. Deep learning, if you think of machine learning as a pancake, now deep learnings like a stack of pancakes, where the data gets passed from one layer to the next, until what you get at the bottom is a much better, more tuned out answer than any human can deliver, because it's like having a hundred humans all at once coming up with the answer. >> So when you hear about, like, rich neural networks, and deep neural networks, that's what we're talking about. >> Exactly, generative adversarial networks. All those things are ... Any kind of a lot of the neural network stuff is deep learning. It's tying all these piece together, so that in concert, they're greater than the sum of any one. >> And the math, I presume, is not new math, right? >> No. >> SVM and, it's stuff that's been around forever, it's just the application of that math. And why now? Cause there's so much data? Cause there's so much processing power? What are the factors that enable this? >> The main factor's cloud. There's a great shirt that says: "There's no cloud, it's just somebody else's computer." Well it's absolutely true, it's all somebody else's computer but because of the scale of this, all these tech companies have massive server farms that are kind of just waiting for something to do. And so they offer this as a service, so now you have computational power that is significantly greater than we've ever had in human history. You have the internet, which is a major contributor, the ability to connect machines and people. And you have all these devices. I mean, this little laptop right here, would have been a supercomputer twenty years ago, right? And the fact that you can go to a service like GitHub or Stack Exchange, and copy and paste some code that someone else has written that's open source, you can run machine learning stuff right on this machine, and get some incredible answers. So that's why now, because you've got this confluence of networks, and cloud, and technology, and processing power that we've never had before. >> Well with this emphasis on math and science in marketing, how does this change the composition of the marketing department at companies around the world? >> So, that's a really interesting question because it means very different skill sets for people. And a lot of people like to say, well there's the left brain and then there's a right brain. The right brains the creative, the left brains the quant, and you can't really do that anymore. You actually have to be both brained. You have to be just as creative as you've always been, but now you have to at least have an understanding of this technology and what to do with it. You may not necessarily have to write code, but you'd better know how to think like a coder, and say, how can I approach this problem systematically? This is kind of a popular culture joke: Is there an app for that, right? 
Well, think about that with every business problem you face. Is there an app for that? Is there an algorithm for that? Can I automate this? And once you go down that path of thinking, you're on the path towards being a true marketing technologist. >> Can you talk about earned, paid, and owned media? How those lines are blurring, or not, and the relationship between sort of those different forms of media, and results in PR or advertising. >> Yeah, there is no difference, media is media, because you can take a piece of content that this media, this interview that we're doing here on theCUBE is technically earned media. If I go and embed this on my website, is that owned media? Well it's still the same thing, and if I run some ads to it, is it technically now paid media? It's the thing, it's content that has value, and then what we do with it, how we distribute it, is up to us, and who our audience is. One of the things that a lot of veteran marketing and PR practitioners have to overcome is this idea that the PR folks sit over there, and they just smile and dial and get hits, go get another hit. And then the ad folks are over here... No, it's all the same thing. And if we don't, as an industry realize that those silos are artificially imposed, basically to keep people in certain jobs, we will eventually end up turning over all of it to the machines, because the machines will be able to cross those organizational barriers much faster. When you have the data, and whatever the data says that's what you do. So if the data says this channels going to be more effective, yes it's a CUBE interview, but actually it's better off as a paid YouTube video. So the machine will just go do that for us. >> I want to go back to something you were talking about at the very beginning of the conversation, which is really understanding, companies understanding, how their marketing campaigns and approaches are effectively working or not working. So without naming names of clients, can you talk about some specific examples of what you've seen, and how it's really changed the way companies are reaching customers? >> The number one thing that does not work, is for any business executive to have a pre-conceived idea of the way things should be, right? "Well we're the industry leader in this, we should have all the market share." Well no, the world doesn't work like that anymore. This lovely device that we all carry around in our pockets is literally a slot-machine for your attention. >> I like it, you've got to copyright that. A slot machine for your attention. >> And there's a million and a half different options, cause that's how many apps there are in the app store. There's a million and half different options that are more exciting than your white paper. (Laughter) Right, so for companies that are successful, they realize this, they realize they can't boil the ocean, that you are competing every single day with the Pope, the president, with Netflix, you know, all these things. So it's understanding: When is my audience interested in something? Then, what are they interested in? And then, how do I reach those people? There was a story on the news relatively recently, Facebook is saying, "Oh brand pages, we're not going to show "your stuff in the regular news feed anymore, "there will be a special feed over here "that no one will ever look at, unless you pay up." 
So understanding that if we don't understand our audiences, and recruit these influencers, these people who have the ability to reach these crowds, our ability to do so through the "free" social media continues to dwindle, and that's a major change. >> So the smart companies get this, where are we though, in terms of the journey? >> We're in still very early days. I was at major Fortune 50, not too long ago, who just installed Google Analytics on their website, and this is a company that if I named the name you would know it immediately. They make billions of dollars- >> It would embarrass them. >> They make billions of dollars, and it's like, "Yeah, we're just figuring out this whole internet thing." And I'm like, "Cool, we'd be happy to help you, but why, what took so long?" And it's a lot of organizational inertia. Like, "Well, this is the way we've always done it, and it's gotten us this far." But what they don't realize is the incredible amount of danger they're in, because their more agile competitors are going to eat them for lunch. >> Talking about organizational inertia, and this is a very big problem, we're here at a CDO summit to share best practices, and what to learn from each other, what's your advice for a viewer there who's part of an organization that isn't working fast enough on this topic? >> Update your LinkedIn profile. (Laughter) >> Move on, it's a lost cause. >> One of the things that you have to do an honest assessment of, is whether the organization you're in is capable of pivoting quickly enough to outrun its competition. And in some cases, you may be that laboratory inside, but if you don't have that executive buy in, you're going to be stymied, and your nearest competitor that does have that willingness to pivot, and bet big on a relatively proven change, like hey data is important, yeah, you make want to look for greener pastures. >> Great, well Chris thanks so much for joining us. >> Thank you for having me. >> I'm Rebecca Knight, for Dave Vellante, we will have more of theCUBE's coverage of the IBM Chief Data Strategy Officer Summit, after this.
SUMMARY :
Brought to you by IBM. the VP of Marketing Technology and what your approach to it is. of the discipline, Well it sort of gets back to that to know which half is working. of the more interesting and A.I., we're talking the red one, and you show Not a hot dog. This is an apple, not a banana. and that's where you So when you hear about, greater than the sum of any one. it's just the application of that math. And the fact that you can And a lot of people like to and the relationship between So if the data says this channels beginning of the conversation, is for any business executive to have a got to copyright that. that you are competing every that if I named the name is the incredible amount Update your LinkedIn profile. One of the things that you have to do so much for joining us. the IBM Chief Data Strategy
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Christopher Penn | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Chris | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
YouTube | ORGANIZATION | 0.99+ |
CNN | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Netflix | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
billions of dollars | QUANTITY | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
a million and half | QUANTITY | 0.99+ |
billions of dollars | QUANTITY | 0.99+ |
GitHub | ORGANIZATION | 0.99+ |
today | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
Pope | PERSON | 0.98+ |
a million and a half | QUANTITY | 0.98+ |
one layer | QUANTITY | 0.98+ |
ORGANIZATION | 0.98+ | |
Google Analytics | TITLE | 0.97+ |
twenty years ago | DATE | 0.97+ |
two broad categories | QUANTITY | 0.96+ |
Silicon Valley | LOCATION | 0.95+ |
SHIFT Communications | ORGANIZATION | 0.95+ |
one | QUANTITY | 0.94+ |
Google Analytics | TITLE | 0.94+ |
IBM Chief Data Strategy Summit | EVENT | 0.94+ |
One | QUANTITY | 0.93+ |
Stack Exchange | ORGANIZATION | 0.9+ |
IBM Chief Data Strategy Officer Summit | EVENT | 0.88+ |
IBM Chief Data Officer Summit | EVENT | 0.87+ |
Fortune 50 | ORGANIZATION | 0.86+ |
centuries | QUANTITY | 0.86+ |
IBM | EVENT | 0.82+ |
CDO Strategy Summit 2017 | EVENT | 0.79+ |
a hundred humans | QUANTITY | 0.79+ |
much | QUANTITY | 0.77+ |
single day | QUANTITY | 0.74+ |
theCUBE | ORGANIZATION | 0.72+ |
VP | PERSON | 0.72+ |
half | QUANTITY | 0.71+ |
CUBE | ORGANIZATION | 0.63+ |
Technology | PERSON | 0.6+ |
CDO | EVENT | 0.51+ |
Turks | ORGANIZATION | 0.39+ |
Thijs Ebbers & Arno Vonk, ING | KubeCon + CloudNativeCon NA 2022
>>Good morning, brilliant humans. Good afternoon or good evening, depending on your time zone. My name is Savannah Peterson and I'm here live with the Cube. We are at CubeCon in Detroit, Michigan. And joining me is my beautiful co-host, Lisa, how you feeling? Afternoon of day three. >>Afternoon day three. We've had such great conversations. We have's been fantastic. The momentum has just been going like this. I love it. >>Yes. You know, sometimes we feel a little low when we're at the end of a conference. Not today. Don't feel that that way at all, which is very exciting. Just like the guests that we have up for you next. Kind of an unexpected player when we think about technology. However, since every company, one of the themes is every company is trying to be a software company. I love that we're talking to I n G. Joining us today is Ty Evers and Arno vk. Welcome to the show gentlemen. Thank >>You very much. Glad to be you. Thank you. >>Yes, it's wonderful. All the way in from Amsterdam. Probably some of the farthest flying folks here for this adventure. Starting off. I forgot what's going on with the shirts guys. You match very well. Tell, tell everyone. >>Well these are our VR code shirts. VR code is basically the player of our company to get people interested as an IT person in banking. Right? Actually, people don't think banking is a good place to work as an IT professional, but actually this, and we are using the OC went with these nice logos to get it attention. >>I love that. So let's actually, let's just talk about that for a second. Why is it such an exciting role to be working in technology at a company like I N G or traditional bank? >>I N G is a challenging environment. That's how do you make an engineer happy, basically give them a problem to solve. So we have lots and lots of problems to solve. So that makes it challenging. But yeah, also rewarding. And you can say a lot of things about banks and with looking at the IT perspective, we are doing amazing things in I and that's what we talked about. Can >>You, can you tell us any of those amazing things or are they secrets? >>Think we talked about last Tuesday at S shift commons conference. Yeah, so we had two, two presentations I presented with my coho sand on my journey over the last three years. So what has IG done? Basically building a secure container hosting platform. Yeah. How do we live a banking cot with cloud native technology and together with our coho young villa presented actually showed it by demo making life and >>Awesome >>In person. So we were not just presenting, >>It's not all smoke and mirrors. It's >>Not smoke and mirror, which we're not presenting our fufu marketing block now. We actually doing it today. And that's what we wanted to share here. >>Well, and as consumers we expect we can access our banking on any device 24 by seven. I wanna be able to do all my transactions in a way that I know is secure. Obviously security's a huge thing there, but talk about I n G Bank aren't always been around for a very long time. Talk about this financial institution as a software company. Really obviously a lot of challenges to solve, a lot of opportunity. But talk about what it's like working for a history and bank that's really now a tech company. >>Yes. It's been really changing as a bank to a tech company. Yeah. We have a lot of developers and operators and we do deliver offer. We OnPrem, we run in the public. So we have a huge engineers and people around to make our software. Yes. 
And I am responsible for the i Container Ocean platform and we deliver that the name space as a surface and as a real, real secure environment. So our developers, all our developers in, I can request it, but they only get a name space. Yeah, that's very important there. They >>Have >>Resources and all sort of things. Yeah. And it is, they cannot access it. They can only access it by one wifi. So, >>So Lisa and I were chatting before we brought you up here. Name space as a service. This is a newer term for us. Educate us. What does that mean? >>Basically it means we don't give a full cluster to our consumers, right? We only give them basically cpu, memory networking. That's all they need to host application. Everything else we abstract away. And especially in a banking context where compliance is a big thing, you don't need to do compliance for an entire s clusterized developer. It's really saves development time for the colleagues in the bank. It >>Decreases the complexity of projects, which is a huge theme here, especially at scale. I can imagine. I mean, my gosh, you're serving so many different people, it probably saves you time. Let's talk about regulation. What, how challenging is that for you as technologists to balance in all the regulations around banking and FinTech? It's, it's, it's, it's not like some of these kind of wild, wild west industries where we can just go out and play and prototype and do whatever we want. There's a lot of >>Rules. There's a lot of rules. And the problem is you have legislation and you have the real world. Right. And you have to find something in, they're >>Not the same thing. >>You have to find something in between with both parties on the stands and cannot adhere to. Yeah. So the challenge we had, basically we had to wide our, in our own container security standards to prove that the things we were doing were the white things to be in control as a bank because there was no market standard for container security. So basically we took some input from this. So n did a lot of good work. We basically added some things on top to be valid for a bank in Europe. So yeah, that's what we did. And the nice thing is today we take all the boxes we defined back in 2019. >>Hey, so you what it's, I guess, I guess the rules are a little bit easier when you get to help define them. Yep. Yeah. That it feels like a very good strategic call >>And they makes sense. Yeah. Right. Because the hardest problem is try to be compliant for something which doesn't make sense. Right, >>Right. Arnold, talk about, let's double click on namespace as a service. You talked about what that is, but give us a little bit of information on why I N G really believes this is the right approach for this company. >>It's protects for the security that developers doing things they don't shoot. Yeah. They cannot access their store anymore when it is running in production. And that is the most, most important. That is, it is immutable running in our platform. >>Excellent. Talk about both of you. How long have you, have you both been at I n G for a long time? >>I've been with I N G since September, 2001. So that's more than 20 years >>Now. Long time. Ana, what about you? >>Before 2000 already before. >>So both of your comment on that's a long time. Yeah. Talk about the culture of innovation that's at I N G to be able to move at such speed and be groundbreaking in what you're, how you're using technology, what, what's the appetite like at the bank to embrace new and emerging technologies? 
>>So we are really looking, basically the, the mantra of the bank is to help our customers get a step ahead in life and in business. And we do that by one superior customer service and secondly, sustainability at the heart. So anything which contributes to those targets, you can go to your manager and if you can make goods case why it contributes most of the cases you get some time or some budgets or even some additional colleagues to help you out and give it a try require from a culture perspective required open to trying things out before we reach production. Once you go to production. Yeah. Then we are back to being a bank and you need to take all the boxes to make really sure that we are confident with our customers data and basically we're still a bank but a lot of is possible. >>A lot. It is possible. And there's the customer on the other end who's expecting, like I said earlier, that they can access their data any time that they want, be able to do any transaction they want, making sure the content that's delivered to them is relevant, that it's secure. Obviously with, that's the biggest challenge especially is we think about how many generations are alive today and and those that aren't tech savvy. Yeah. Have challenges with that. Talk about what the bank's dedication is to ensuring from a security perspective that its customers don't have anything to worry about. >>That's always a thin line between security and the user experience. So I n g, like every other bank needs to make choices. Yes. We want the really ease of customers and take the risk that somebody abuses it or do we make it really, really secure and alienate part of our customer base. And that's an ongoing, that's a, that's a a hard, >>It's a trade off. That's >>A line. >>So it's really hard. Interesting part is in Netherlands we had some debates about banks closing down locations, but the moment we introduced our mobile weapon iPads, basically the debates became a lot quieter because a lot of elderly people couldn't work with an iPhone. It turned out they were perfectly fine with a well-designed iPad app to do their banking. Really? >>Okay. >>But that's already learning from like 15 years ago. >>What was the, what was the product roadmap on that? So how, I mean I can imagine you released a mobile app, you're not really thinking that. >>That's basically, I think that was a heavy coincidence. We just, Yeah, okay. Went out to design a very good mobile app. Yeah. And then looking out afterwards at the statistics we say, hey, who was using this way? We've got somebody who's signing on and I dunno the exact age, but it was something like somebody of 90 plus who signed on to use that mobile app. >>Wow. Wow. I mean you really are the five different generations living and working right now. Designing technology. Everybody has to go to the bank whether we are fans of our bank or we're not. Although now I'm thinking about IG as a bank in general. Y'all have a a very good attitude about it. What has kept you at the company for over 20 years? That is we, we see people move around, especially in this technology industry. Yes. Yeah. You know, every two to three years. Sometimes obviously you're in positions of leadership, they're obviously taking good care of you. But I mean multiple decades. Why have you stuck? >>Well first I didn't have the same job in I N D for two decades. Nice. So I went around the infrastructure domain. 
I did storage initially I did security, I did solution design and in the end I ended up in enterprise architecture. So yeah, it's not like I stuck 20 years in the same role. So every so years >>Go up the ladder but also grow your own skill sets. >>Explore. Yeah. >>So basically I think that's what's every, everybody should be thinking in these days. If you're in a cloud head industry, if you're good at it, you can out quite a nice salary. But it also means that you have some kind of obligation to society to make a difference. And I think, yeah, >>I wouldn't say that everybody feels that way. I >>Need to make a difference with I N G A difference for being more available to our consumers, be more secure to, to our consumers. I, I think that's what's driving me to stick with the company. >>What about you R Now? >>Yes, for me it's very important. Every two, three years are doing new things. I can work with the latest technology so I become really, really innovative so that it is the place to be. >>Yeah. You sort of get that rotation every two to three years with the different tools that you're using. Speaking of or here we're at Cuan, we're talking cloud native, we're talking Kubernetes. Do you think it's possible to, I'm coming back to the regulations. Do you think it's possible to get to banking grade security with cloud native Tech? >>Initially I said we would be at least as secure traditional la but last Tuesday we've proven we can get more secure than situational it. So yeah, definitely. Yes. >>Awesome. I mean, sounds like you proved it to yourself too, which is really saying something. >>Well we actually have Penta results and of course I cannot divulge those, but I about pretty good. >>Can you define, I wanna kind of double book on thanking great security, define what that is, thanking great security and how could other industries aim to Yeah, >>Hit that, that >>Standard. I want security everywhere. Especially my bank. The >>Architecture is zero privilege. So you hear a lot about lease privilege in all the security talks. That's not what you should be aiming for. Zero privilege is what you should be aiming for. And once you're at zero privileged environments, okay, who can leak data because no natural person has access to it. Even if you have somebody invading your infrastructure, there are no privileges. They cannot do privilege escalations. Yeah. So the answer for me is really clear. If you are handling customer data, if you're and customer funds aim for zero privilege architecture, >>What, what are you most excited about next? What's next for you guys? What's next for I n G? What are we gonna be talking about when we're chatting to you Right here? Atan next year or in Amsterdam actually, since we're headed that way in the spring, which is fun. Yes. >>Happy to be your host in Amsterdam. The >>Other way around. We're holding you to that. You've talked about how fun the culture is. Now you're gonna ask, she and I we need, but we need the tee-shirts. We, we obviously need a matching outfit. >>Definitely. We'll arrange some teachers for you as well. Yeah, no, for me, two highlights from this com. The first one was kcp. That can potentially be a paradigm change on how we deal with workloads on Kubernetes. So that's very interesting. I don't know if you see any implementations by next year, but it's definitely something. Looks >>Like we had them on the show as well. Yeah. So it's, it's very fun. I'm sure, I'm sure they'll be very flattered that you just just said. 
What about you Arnoldo that got you most excited? >>The most important for me was talking to a lot of Asian is other people. What if they thinking how we go forward? So the, the, the community and talk to each other. And also we found those and people how we go forward. >>Yeah, that's been a big thing for us here on the cube and just the energy, the morale. I mean the open source community is so collaborative. It creates an entirely different ethos. Arna. Ty, thank you so much for being here. It's wonderful to have you and hear what I n g is doing in the technology space. Lisa, always a pleasure to co-host with you. Of course. And thank you Cube fans for hanging out with us here on day three of Cuban Live from Detroit, Michigan. My name is Savannah Peterson and we'll see you up next for a great chat coming soon.
SUMMARY :
And joining me is my beautiful co-host, Lisa, how you feeling? I love it. Just like the guests that we have up for you next. Glad to be you. I forgot what's going on with the shirts guys. VR code is basically the player of our company So let's actually, let's just talk about that for a second. So we have lots and lots of problems to solve. How do we live a banking cot with cloud native technology and together So we were not just presenting, It's not all smoke and mirrors. And that's what we wanted to share here. Well, and as consumers we expect we can access our banking on any device 24 So we have a huge engineers and people around to And it is, they cannot access it. So Lisa and I were chatting before we brought you up here. Basically it means we don't give a full cluster to our consumers, right? What, how challenging is that for you as technologists And the problem is you have legislation and So the challenge we had, basically we had to wide our, in our own container security standards to prove Hey, so you what it's, I guess, I guess the rules are a little bit easier when you get to help define them. Because the hardest problem is try to be compliant for something You talked about what that is, And that is the most, most important. Talk about both of you. So that's more than 20 years Ana, what about you? So both of your comment on that's a long time. of the cases you get some time or some budgets or even some additional colleagues to help you out and making sure the content that's delivered to them is relevant, that it's secure. abuses it or do we make it really, really secure and alienate part of our customer It's a trade off. but the moment we introduced our mobile weapon iPads, basically the debates became a So how, I mean I can imagine you released a mobile app, And then looking out afterwards at the statistics we say, What has kept you at the company for over 20 years? I did solution design and in the end I ended up in enterprise architecture. Yeah. that you have some kind of obligation to society to make a difference. I wouldn't say that everybody feels that way. Need to make a difference with I N G A difference for being more available to our consumers, technology so I become really, really innovative so that it is the place to be. Do you think it's possible to get to we can get more secure than situational it. I mean, sounds like you proved it to yourself too, which is really saying something. I want security everywhere. So you hear a lot about lease privilege in all the security talks. What are we gonna be talking about when we're chatting to you Right here? Happy to be your host in Amsterdam. We're holding you to that. I don't know if you see any implementations by What about you Arnoldo that got you most excited? And also we And thank you Cube fans for hanging out with us here on day three of Cuban Live from Detroit,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa | PERSON | 0.99+ |
Amsterdam | LOCATION | 0.99+ |
2019 | DATE | 0.99+ |
Ana | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Netherlands | LOCATION | 0.99+ |
Arnold | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
September, 2001 | DATE | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
both | QUANTITY | 0.99+ |
I N G | ORGANIZATION | 0.99+ |
iPads | COMMERCIAL_ITEM | 0.99+ |
two decades | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
iPad | COMMERCIAL_ITEM | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
today | DATE | 0.99+ |
next year | DATE | 0.99+ |
KubeCon | EVENT | 0.99+ |
Arno Vonk | PERSON | 0.99+ |
both parties | QUANTITY | 0.99+ |
IG | ORGANIZATION | 0.99+ |
more than 20 years | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
last Tuesday | DATE | 0.99+ |
over 20 years | QUANTITY | 0.98+ |
I n G | ORGANIZATION | 0.98+ |
Thijs Ebbers | PERSON | 0.98+ |
15 years ago | DATE | 0.97+ |
CloudNativeCon | EVENT | 0.97+ |
seven | QUANTITY | 0.97+ |
Cuan | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.97+ |
90 plus | QUANTITY | 0.96+ |
Cube | ORGANIZATION | 0.96+ |
Zero privilege | QUANTITY | 0.95+ |
Penta | ORGANIZATION | 0.94+ |
Arna | PERSON | 0.94+ |
first one | QUANTITY | 0.93+ |
zero privilege | QUANTITY | 0.93+ |
one wifi | QUANTITY | 0.92+ |
Kubernetes | ORGANIZATION | 0.92+ |
2000 | DATE | 0.92+ |
Arnoldo | PERSON | 0.92+ |
OnPrem | ORGANIZATION | 0.92+ |
two highlights | QUANTITY | 0.92+ |
day three | QUANTITY | 0.91+ |
five different generations | QUANTITY | 0.9+ |
ING | ORGANIZATION | 0.9+ |
24 | QUANTITY | 0.89+ |
CubeCon | ORGANIZATION | 0.88+ |
G Bank | ORGANIZATION | 0.87+ |
zero privilege architecture | QUANTITY | 0.86+ |
secondly | QUANTITY | 0.86+ |
Atan | LOCATION | 0.85+ |
two presentations | QUANTITY | 0.83+ |
S shift commons conference | EVENT | 0.82+ |
NA 2022 | EVENT | 0.82+ |
zero privileged | QUANTITY | 0.81+ |
every two | QUANTITY | 0.81+ |
last three years | DATE | 0.79+ |
double | QUANTITY | 0.77+ |
Ty Evers | ORGANIZATION | 0.76+ |
device | QUANTITY | 0.72+ |
Afternoon | DATE | 0.72+ |
Cuban Live | EVENT | 0.7+ |
a second | QUANTITY | 0.69+ |
Ty | PERSON | 0.68+ |
three | QUANTITY | 0.65+ |
every | QUANTITY | 0.57+ |
i Container Ocean | ORGANIZATION | 0.56+ |
Afternoon of day | DATE | 0.54+ |
Kubernetes | TITLE | 0.52+ |
Oracle Announces MySQL HeatWave on AWS
>>Oracle continues to enhance my sequel Heatwave at a very rapid pace. The company is now in its fourth major release since the original announcement in December 2020. 1 of the main criticisms of my sequel, Heatwave, is that it only runs on O. C I. Oracle Cloud Infrastructure and as a lock in to Oracle's Cloud. Oracle recently announced that heat wave is now going to be available in AWS Cloud and it announced its intent to bring my sequel Heatwave to Azure. So my secret heatwave on AWS is a significant TAM expansion move for Oracle because of the momentum AWS Cloud continues to show. And evidently the Heatwave Engineering team has taken the development effort from O. C I. And is bringing that to A W S with a number of enhancements that we're gonna dig into today is senior vice president. My sequel Heatwave at Oracle is back with me on a cube conversation to discuss the latest heatwave news, and we're eager to hear any benchmarks relative to a W S or any others. Nippon has been leading the Heatwave engineering team for over 10 years and there's over 100 and 85 patents and database technology. Welcome back to the show and good to see you. >>Thank you. Very happy to be back. >>Now for those who might not have kept up with the news, uh, to kick things off, give us an overview of my sequel, Heatwave and its evolution. So far, >>so my sequel, Heat Wave, is a fully managed my secret database service offering from Oracle. Traditionally, my secret has been designed and optimised for transaction processing. So customers of my sequel then they had to run analytics or when they had to run machine learning, they would extract the data out of my sequel into some other database for doing. Unlike processing or machine learning processing my sequel, Heat provides all these capabilities built in to a single database service, which is my sequel. He'd fake So customers of my sequel don't need to move the data out with the same database. They can run transaction processing and predicts mixed workloads, machine learning, all with a very, very good performance in very good price performance. Furthermore, one of the design points of heat wave is is a scale out architecture, so the system continues to scale and performed very well, even when customers have very large late assignments. >>So we've seen some interesting moves by Oracle lately. The collaboration with Azure we've we've covered that pretty extensively. What was the impetus here for bringing my sequel Heatwave onto the AWS cloud? What were the drivers that you considered? >>So one of the observations is that a very large percentage of users of my sequel Heatwave, our AWS users who are migrating of Aurora or so already we see that a good percentage of my secret history of customers are migrating from GWS. However, there are some AWS customers who are still not able to migrate the O. C. I to my secret heat wave. And the reason is because of, um, exorbitant cost, which was charges. So in order to migrate the workload from AWS to go see, I digress. Charges are very high fees which becomes prohibitive for the customer or the second example we have seen is that the latency of practising a database which is outside of AWS is very high. 
So there's a class of customers who would like to get the benefits of my secret heatwave but were unable to do so and with this support of my secret trip inside of AWS, these customers can now get all the grease of the benefits of my secret he trip without having to pay the high fees or without having to suffer with the poorly agency, which is because of the ws architecture. >>Okay, so you're basically meeting the customer's where they are. So was this a straightforward lifted shift from from Oracle Cloud Infrastructure to AWS? >>No, it is not because one of the design girls we have with my sequel, Heatwave is that we want to provide our customers with the best price performance regardless of the cloud. So when we decided to offer my sequel, he headed west. Um, we have optimised my sequel Heatwave on it as well. So one of the things to point out is that this is a service with the data plane control plane and the console are natively running on AWS. And the benefits of doing so is that now we can optimise my sequel Heatwave for the E. W s architecture. In addition to that, we have also announced a bunch of new capabilities as a part of the service which will also be available to the my secret history of customers and our CI, But we just announced them and we're offering them as a part of my secret history of offering on AWS. >>So I just want to make sure I understand that it's not like you just wrapped your stack in a container and stuck it into a W s to be hosted. You're saying you're actually taking advantage of the capabilities of the AWS cloud natively? And I think you've made some other enhancements as well that you're alluding to. Can you maybe, uh, elucidate on those? Sure. >>So for status, um, we have taken the mind sequel Heatwave code and we have optimised for the It was infrastructure with its computer network. And as a result, customers get very good performance and price performance. Uh, with my secret he trade in AWS. That's one performance. Second thing is, we have designed new interactive counsel for the service, which means that customers can now provision there instances with the council. But in addition, they can also manage their schemas. They can. Then court is directly from the council. Autopilot is integrated. The council we have introduced performance monitoring, so a lot of capabilities which we have introduced as a part of the new counsel. The third thing is that we have added a bunch of new security features, uh, expose some of the security features which were part of the My Secret Enterprise edition as a part of the service, which gives customers now a choice of using these features to build more secure applications. And finally, we have extended my secret autopilot for a number of old gpus cases. In the past, my secret autopilot had a lot of capabilities for Benedict, and now we have augmented my secret autopilot to offer capabilities for elderly people. Includes as well. >>But there was something in your press release called Auto thread. Pooling says it provides higher and sustained throughput. High concerns concerns concurrency by determining Apple number of transactions, which should be executed. Uh, what is that all about? The auto thread pool? It seems pretty interesting. How does it affect performance? Can you help us understand that? >>Yes, and this is one of the capabilities of alluding to which we have added in my secret autopilot for transaction processing. So here is the basic idea. 
If you have a system where there's a large number of old EP transactions coming into it at a high degrees of concurrency in many of the existing systems of my sequel based systems, it can lead to a state where there are few transactions executing, but a bunch of them can get blocked with or a pilot tried pulling. What we basically do is we do workload aware admission control and what this does is it figures out, what's the right scheduling or all of these algorithms, so that either the transactions are executing or as soon as something frees up, they can start executing, so there's no transaction which is blocked. The advantage to the customer of this capability is twofold. A get significantly better throughput compared to service like Aurora at high levels of concurrency. So at high concurrency, for instance, uh, my secret because of this capability Uh oh, thread pulling offers up to 10 times higher compared to Aurora, that's one first benefit better throughput. The second advantage is that the true part of the system never drops, even at high levels of concurrency, whereas in the case of Aurora, the trooper goes up, but then, at high concurrency is, let's say, starting, uh, level of 500 or something. It depends upon the underlying shit they're using the troopers just dropping where it's with my secret heatwave. The truth will never drops. Now, the ramification for the customer is that if the truth is not gonna drop, the user can start off with a small shape, get the performance and be a show that even the workload increases. They will never get a performance, which is worse than what they're getting with lower levels of concurrency. So this let's leads to customers provisioning a shape which is just right for them. And if they need, they can, uh, go with the largest shape. But they don't like, you know, over pay. So those are the two benefits. Better performance and sustain, uh, regardless of the level of concurrency. >>So how do we quantify that? I know you've got some benchmarks. How can you share comparisons with other cloud databases especially interested in in Amazon's own databases are obviously very popular, and and are you publishing those again and get hub, as you have done in the past? Take us through the benchmarks. >>Sure, So benchmarks are important because that gives customers a sense of what performance to expect and what price performance to expect. So we have run a number of benchmarks. And yes, all these benchmarks are available on guitar for customers to take a look at. So we have performance results on all the three castle workloads, ol DB Analytics and Machine Learning. So let's start with the Rdp for Rdp and primarily because of the auto thread pulling feature. We show that for the IPCC for attended dataset at high levels of concurrency, heatwave offers up to 10 times better throughput and this performance is sustained, whereas in the case of Aurora, the performance really drops. So that's the first thing that, uh, tend to alibi. Sorry, 10 gigabytes. B B C c. I can come and see the performance are the throughput is 10 times better than Aurora for analytics. We have done a comparison of my secret heatwave in AWS and compared with Red Ship Snowflake Googled inquiry, we find that the price performance of my secret heatwave compared to read ship is seven times better. So my sequel, Heat Wave in AWS, provides seven times better price performance than red ship. That's a very, uh, interesting results to us. 
Which means that customers of Red Shift are really going to take the service seriously because they're gonna get seven times better price performance. And this is all running in a W s so compared. >>Okay, carry on. >>And then I was gonna say, compared to like, Snowflake, uh, in AWS offers 10 times better price performance. And compared to Google, ubiquity offers 12 times better price performance. And this is based on a four terabyte p PCH workload. Results are available on guitar, and then the third category is machine learning and for machine learning, uh, for training, the performance of my secret heatwave is 25 times faster compared to that shit. So all the three workloads we have benchmark's results, and all of these scripts are available on YouTube. >>Okay, so you're comparing, uh, my sequel Heatwave on AWS to Red Shift and snowflake on AWS. And you're comparing my sequel Heatwave on a W s too big query. Obviously running on on Google. Um, you know, one of the things Oracle is done in the past when you get the price performance and I've always tried to call fouls you're, like, double your price for running the oracle database. Uh, not Heatwave, but Oracle Database on a W s. And then you'll show how it's it's so much cheaper on on Oracle will be like Okay, come on. But they're not doing that here. You're basically taking my sequel Heatwave on a W s. I presume you're using the same pricing for whatever you see to whatever else you're using. Storage, um, reserved instances. That's apples to apples on A W s. And you have to obviously do some kind of mapping for for Google, for big query. Can you just verify that for me, >>we are being more than fair on two dimensions. The first thing is, when I'm talking about the price performance for analytics, right for, uh, with my secret heat rape, the cost I'm talking about from my secret heat rape is the cost of running transaction processing, analytics and machine learning. So it's a fully loaded cost for the case of my secret heatwave. There has been I'm talking about red ship when I'm talking about Snowflake. I'm just talking about the cost of these databases for running, and it's only it's not, including the source database, which may be more or some other database, right? So that's the first aspect that far, uh, trip. It's the cost for running all three kinds of workloads, whereas for the competition, it's only for running analytics. The second thing is that for these are those services whether it's like shit or snowflakes, That's right. We're talking about one year, fully paid up front cost, right? So that's what most of the customers would pay for. Many of the customers would pay that they will sign a one year contract and pay all the costs ahead of time because they get a discount. So we're using that price and the case of Snowflake. The costs were using is their standard edition of price, not the Enterprise edition price. So yes, uh, more than in this competitive. >>Yeah, I think that's an important point. I saw an analysis by Marx Tamer on Wiki Bond, where he was doing the TCO comparisons. And I mean, if you have to use two separate databases in two separate licences and you have to do et yelling and all the labour associated with that, that that's that's a big deal and you're not even including that aspect in in your comparison. So that's pretty impressive. To what do you attribute that? You know, given that unlike, oh, ci within the AWS cloud, you don't have as much control over the underlying hardware. 
>>So look hard, but is one aspect. Okay, so there are three things which give us this advantage. The first thing is, uh, we have designed hateful foreign scale out architecture. So we came up with new algorithms we have come up with, like, uh, one of the design points for heat wave is a massively partitioned architecture, which leads to a very high degree of parallelism. So that's a lot of hype. Each were built, So that's the first part. The second thing is that although we don't have control over the hardware, but the second design point for heat wave is that it is optimised for commodity cloud and the commodity infrastructure so we can have another guys, what to say? The computer we get, how much network bandwidth do we get? How much of, like objects to a brand that we get in here? W s. And we have tuned heat for that. That's the second point And the third thing is my secret autopilot, which provides machine learning based automation. So what it does is that has the users workload is running. It learns from it, it improves, uh, various premieres in the system. So the system keeps getting better as you learn more and more questions. And this is the third thing, uh, as a result of which we get a significant edge over the competition. >>Interesting. I mean, look, any I SV can go on any cloud and take advantage of it. And that's, uh I love it. We live in a new world. How about machine learning workloads? What? What did you see there in terms of performance and benchmarks? >>Right. So machine learning. We offer three capabilities training, which is fully automated, running in France and explanations. So one of the things which many of our customers told us coming from the enterprise is that explanations are very important to them because, uh, customers want to know that. Why did the the system, uh, choose a certain prediction? So we offer explanations for all models which have been derailed by. That's the first thing. Now, one of the interesting things about training is that training is usually the most expensive phase of machine learning. So we have spent a lot of time improving the performance of training. So we have a bunch of techniques which we have developed inside of Oracle to improve the training process. For instance, we have, uh, metal and proxy models, which really give us an advantage. We use adaptive sampling. We have, uh, invented in techniques for paralysing the hyper parameter search. So as a result of a lot of this work, our training is about 25 times faster than that ship them health and all the data is, uh, inside the database. All this processing is being done inside the database, so it's much faster. It is inside the database. And I want to point out that there is no additional charge for the history of customers because we're using the same cluster. You're not working in your service. So all of these machine learning capabilities are being offered at no additional charge inside the database and as a performance, which is significantly faster than that, >>are you taking advantage of or is there any, uh, need not need, but any advantage that you can get if two by exploiting things like gravity. John, we've talked about that a little bit in the past. Or trainee. Um, you just mentioned training so custom silicon that AWS is doing, you're taking advantage of that. Do you need to? Can you give us some insight >>there? So there are two things, right? We're always evaluating What are the choices we have from hybrid perspective? 
Obviously, for us to leverage is right and like all the things you mention about like we have considered them. But there are two things to consider. One is he is a memory system. So he favours a big is the dominant cost. The processor is a person of the cost, but memory is the dominant cost. So what we have evaluated and found is that the current shape which we are using is going to provide our customers with the best price performance. That's the first thing. The second thing is that there are opportunities at times when we can use a specialised processor for vaccinating the world for a bit. But then it becomes a matter of the cost of the customer. Advantage of our current architecture is on the same hardware. Customers are getting very good performance. Very good, energetic performance in a very good machine learning performance. If you will go with the specialised processor, it may. Actually, it's a machine learning, but then it's an additional cost with the customers we need to pay. So we are very sensitive to the customer's request, which is usually to provide very good performance at a very low cost. And we feel is that the current design we have as providing customers very good performance and very good price performance. >>So part of that is architectural. The memory intensive nature of of heat wave. The other is A W s pricing. If AWS pricing were to flip, it might make more sense for you to take advantage of something like like cranium. Okay, great. Thank you. And welcome back to the benchmarks benchmarks. Sometimes they're artificial right there. A car can go from 0 to 60 in two seconds. But I might not be able to experience that level of performance. Do you? Do you have any real world numbers from customers that have used my sequel Heatwave on A W s. And how they look at performance? >>Yes, absolutely so the my Secret service on the AWS. This has been in Vera for, like, since November, right? So we have a lot of customers who have tried the service. And what actually we have found is that many of these customers, um, planning to migrate from Aurora to my secret heat rape. And what they find is that the performance difference is actually much more pronounced than what I was talking about. Because with Aurora, the performance is actually much poorer compared to uh, like what I've talked about. So in some of these cases, the customers found improvement from 60 times, 240 times, right? So he travels 100 for 240 times faster. It was much less expensive. And the third thing, which is you know, a noteworthy is that customers don't need to change their applications. So if you ask the top three reasons why customers are migrating, it's because of this. No change to the application much faster, and it is cheaper. So in some cases, like Johnny Bites, what they found is that the performance of their applications for the complex storeys was about 60 to 90 times faster. Then we had 60 technologies. What they found is that the performance of heat we have compared to Aurora was 100 and 39 times faster. So, yes, we do have many such examples from real workloads from customers who have tried it. And all across what we find is if it offers better performance, lower cost and a single database such that it is compatible with all existing by sequel based applications and workloads. >>Really impressive. The analysts I talked to, they're all gaga over heatwave, and I can see why. Okay, last question. Maybe maybe two and one. Uh, what's next? 
In terms of new capabilities that customers are going to be able to leverage and any other clouds that you're thinking about? We talked about that upfront, but >>so in terms of the capabilities you have seen, like they have been, you know, non stop attending to the feedback from the customers in reacting to it. And also, we have been in a wedding like organically. So that's something which is gonna continue. So, yes, you can fully expect that people not dressed and continue to in a way and with respect to the other clouds. Yes, we are planning to support my sequel. He tripped on a show, and this is something that will be announced in the near future. Great. >>All right, Thank you. Really appreciate the the overview. Congratulations on the work. Really exciting news that you're moving my sequel Heatwave into other clouds. It's something that we've been expecting for some time. So it's great to see you guys, uh, making that move, and as always, great to have you on the Cube. >>Thank you for the opportunity. >>All right. And thank you for watching this special cube conversation. I'm Dave Volonte, and we'll see you next time.
theCUBE Insights with Industry Analysts | Snowflake Summit 2022
>> Okay, we're back at Caesars Forum, Snowflake Summit 2022, theCUBE's continuous coverage, day two, wall-to-wall coverage. We're so excited to have the analyst panel here, some of my colleagues that we've done a number of power panels with; you've probably seen some of them. Dave Menninger is here, senior vice president and research director at Ventana Research. To his left is Tony Baer, principal at dbInsight, and in the co-host seat, Sanjeev Mohan of SanjMo. Guys, thanks so much for coming on. >> Glad to be here. >> Thank you. >> You're very welcome. I wasn't able to attend the analyst sessions because I've been doing this all day, every day. But let me start with you, Dave. What have you seen that's kind of interested you? Pluses, minuses, concerns. >> Well, how about if I focus on what I think is valuable to the customers of Snowflake. Our research shows that the majority of organizations, the majority of people, do not have access to analytics. And a couple of the things they've announced I think address, or help to address, those issues very directly. So Snowpark and support for Python and other languages is a way for organizations to embed analytics into different business processes, and I think that will be really beneficial in getting analytics into more people's hands. I also think that the native applications as part of the marketplace are another way to get applications into people's hands, rather than just analytical tools, because most people in the organization are not analysts. They're doing some line-of-business function. They're HR managers, marketing people, salespeople, finance people; they're not sitting there mucking around in the data. They're doing a job, and they need analytics in that job. >> Tony, thank you. I've heard a lot of data mesh talk this week. It's kind of funny. >> Can't seem to get away from it. >> It seems to be gathering momentum, but what have you seen that's been interesting? >> What I have noticed, unfortunately, is that because the rooms are too small, you just can't get into the data mesh sessions, so there's a lot of interest in it. I don't think there's very much understanding of it yet, but the idea that you can put all the data in one place sounds to me, in a way, almost like the enterprise data warehouse, cloud-native edition: bring it all into one place again. I think for these folks this might be kind of a linchpin for that. But there are several other things that actually made a bigger impression on me at this event. One is their move with Unistore. It's kind of interesting coming from MongoDB's event last week, and I see these two companies converging towards the same place at different speeds. I don't think Snowflake is going to get there faster than Mongo, for a number of different reasons, but I see a number of common threads here. One is that Mongo has always been a company oriented towards developers, and they need to start cultivating data people. >> And these guys are going the other way. >> Exactly, bingo.
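To make the Snowpark point above concrete, here is a minimal sketch of the kind of in-database Python Dave Menninger is describing: the DataFrame operations are translated to SQL and executed inside Snowflake, so analytics can be embedded in a business process without moving the data out. The account, credentials, warehouse, and table and column names are assumptions for illustration only.

```python
# Hedged Snowpark-for-Python sketch; connection parameters and schema are assumed.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "myorg-myaccount",   # assumed
    "user": "analyst",
    "password": "secret",
    "warehouse": "ANALYTICS_WH",
    "database": "SALES",
    "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()

# The aggregation below is pushed down and executed in Snowflake.
orders = session.table("ORDERS")
by_region = (
    orders.group_by(col("REGION"))
          .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
          .sort(col("TOTAL_AMOUNT").desc())
)
by_region.show()

session.close()
```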
And the thing is, I think where they're converging is the idea of operational analytics and trying to serve all constituencies. The other thing, also in terms of serving multiple constituencies, is how Snowflake has laid out Snowpark, and what I'm finding is that there's an interesting dichotomy. On one hand, you have this very ingrained integration of Anaconda, which I think is pretty ingenious. On the other hand, you speak to, say, the DataRobot folks, and they say, you know, our folks want to do data science in our environment and use Snowflake in the background. So I see some interesting cross-cutting trends there. >> So Sanjeev, I mean, Frank Slootman will talk about how there are definitely benefits to going into the walled garden, and I don't think we dispute that. But we see them making moves and adding more and more open source capabilities, like Apache Iceberg. Is that a move to sort of counteract the narrative that Databricks has put out there? Is it customer driven? What's your take on that? >> Primarily I think it is to counteract this whole notion that once you move data into Snowflake, it's a proprietary format. So I think that's how it started, but it's hugely beneficial to the customers, to the users, because now, if you have large amounts of data in Parquet files, you can leave it on S3, but then, using the Apache Iceberg table format in Snowflake, you get all the benefits of Snowflake's optimizer. So, for example, you get the micro-partitioning, you get the metadata, and in a single query you can join; you can do a select from a Snowflake table unioned with a select from an Iceberg table, and you can do stored procedures and user-defined functions. So what they've done is extremely interesting. Iceberg by itself still does not have multi-table transactional capabilities, so if I'm running a workload I might be touching 10 different tables, and if I use Apache Iceberg in a raw format it doesn't have that, but Snowflake does. >> Right, hence the delta, and maybe that closes over time. I want to ask you, as you look around, the ecosystem's pretty vibrant. It reminds me of re:Invent in 2013, you know? But then I'm struck by the complexity of the last big data era, Hadoop and all the different tools. Is this different, or is it the same wine in a new bottle? You guys have any thoughts on that? >> I think it's different, and I'll tell you why. I think it's different because it's based around SQL. So, back to Tony's point, these vendors are coming at this from different angles, right? You've got data warehouse vendors and you've got data lake vendors, and they're all going to meet in the middle. In your case you talked operational and analytical, but the same thing is true with data lake and data warehouse, and Snowflake no longer wants to be known as the data warehouse; they're a data cloud. And our research, again, I like to base everything off of that, shows that two-thirds of organizations have SQL skills and one-third have big data skills. So you know they're going to meet in the middle, but it sure is a lot easier to bring along those people who already know SQL to that midpoint than it is to bring the big data people along.
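A small, hedged sketch of Sanjeev's Iceberg point above: once an Apache Iceberg table has been registered in Snowflake, it can sit alongside a native table in a single statement. The connection details and both table names are assumptions, and the Iceberg table is assumed to have been created and registered separately.

```python
# Illustrative only: query a native table and an Iceberg table together.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="analyst", password="secret",
    warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
)
cur = conn.cursor()

# One statement spanning both tables; the panel's point is that Snowflake's
# optimizer and metadata handling serve both sides of the query.
cur.execute("""
    SELECT order_id, amount, 'native'  AS source FROM orders_native
    UNION ALL
    SELECT order_id, amount, 'iceberg' AS source FROM orders_iceberg
""")
for row in cur.fetchmany(10):
    print(row)

cur.close()
conn.close()
```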
>> Amr Awadallah, one of the founders of Cloudera, said to me one time on theCUBE, with John Furrier, that SQL is the killer app for Hadoop. >> Yeah, and the difference with Snowflake is that you don't have to worry about taming the zoo animals. They've really thought out the ease of use. I mean, from the get-go they thought of two things: one is ease of use, and the other is scale. And that's basically what very much differentiates it. Hadoop did have the scale, but it didn't have the ease of use. >> But don't I still need, like, if I have governance from this vendor or data prep from that one, don't I still have to have expertise that's sort of distributed across those worlds, right? I mean, go ahead. >> So the way I see it is that Snowflake is adding more and more capabilities right into the database. For example, they've gone ahead and added security and privacy, so you can now create policies and do even fine-grained masking, dynamic masking. But most organizations have more than Snowflake, so what we are starting to see all around here is a whole series of data catalog companies, a bunch of companies doing dynamic data masking, security and governance, data observability, which is not a space Snowflake has gone into. So there's a whole ecosystem of companies that is mushrooming. They're using the native capabilities of Snowflake, but they are at a level higher, so if you have a data lake and a cloud data warehouse and you have other relational databases, you can run these cross-platform capabilities in that layer. In that way, Snowflake's done a great job of enabling that ecosystem. >> What about the Streamlit acquisition? Did you see anything here that indicated they're making strong progress there? Are you excited about that? Are you skeptical? Go ahead. >> I think it's the last mile, essentially. In other words, you have folks that are very, very comfortable with Tableau, but you do have developers who don't want to have to shell out to a separate tool, and this is where Snowflake is essentially working to address that constituency. To Sanjeev's point, I think part of what makes this different from the Hadoop era is the fact that a lot of vendors are taking it very seriously to make these capabilities native. Obviously Snowflake acquired Streamlit, so we can expect Streamlit's capabilities are going to be native. >> And the other thing, too, about the Hadoop ecosystem is that Cloudera had to help fund all those different projects and got really, really spread thin. I want to ask you guys about this supercloud term we use. Supercloud is a sort of metaphor for the next wave of cloud. You've got infrastructure, AWS, Azure, Google; it's not multi-cloud, but you've got that infrastructure, and you're building a layer on top of it that hides the underlying complexities of the primitives and the APIs, and you're adding new value, in this case the data cloud, or super data cloud. And now what we're seeing is Snowflake putting forth the notion that they're adding a super PaaS layer. You can now build applications that you can monetize, which to me is kind of exciting. It makes this platform even less discretionary.
We had a lot of talk on Wall Street about discretionary spending, and that's not discretionary if you're monetizing it. What do you guys think about that? Is this something that's real, is it just a figment of my imagination, or do you see it a different way? Any thoughts on that? >> So, in effect, they're trying to become a data operating system, right? And I think that's wonderful. It's ambitious. I think they'll experience some success with that. As I said, applications are important; that's a great way to deliver information, and you can monetize them, so there's a good economic model around it. I think they will still struggle, however, with bringing everything together onto one platform. That's always the challenge: can you become the platform? That's hard to predict. I think this is pretty exciting, right? A lot of energy, a large ecosystem, there is a network effect already. Can they succeed in being the only place where data exists? I think that's going to be a challenge. >> I mean, the fact is, this is a classic best-of-breed versus the umbrella play, and this is nothing new. This is like the old days with enterprise applications, where basically Oracle and SAP vacuumed up all these applications into their ecosystems. Whereas with Snowflake, if you look at the cloud folks, the hyperscalers are still building out their own portfolios as well; some hyperscalers are more partner-friendly than others. What Snowflake is saying is, we're going to take all of you folks who are basically competing against the hyperscalers in various areas, like data catalogs and pipelines and all that sort of wonderful stuff, and make you all equal citizens. The burden is on you: we will lay out the APIs and we'll allow you to integrate natively to us so you can provide as good an experience, but the onus is on your back. >> Should the ecosystem be concerned, as they were back at re:Invent 2014, that Amazon was going to nibble away at them, or is it different? >> I find what they're doing is different. For example, data sharing: they were the first ones out the door with data sharing at a large scale, and then everybody jumped in and said, oh, we also do data sharing; all the hyperscalers came in. But now what Snowflake has done is take it to the next level. Now they're saying it's not just data sharing, it's app sharing, and not only app sharing: you can build, test, deploy and then monetize the thing, and make it discoverable through your marketplace. >> You can monetize it. >> Yes, so I think what they're doing is taking it a step further than what the hyperscalers are doing. And because, as they said, it's becoming like the data operating system: you log in and you have all of these different functionalities. You can do machine learning, you can do data quality, you can do data preparation, and you can do monetization. >> Who do you think is Snowflake's biggest competitor? What do you guys think? It's a hard question, isn't it? Because we all get the "we separate compute from storage, we have a data cloud," and you go, okay, that's nice, but there's, like, a catch. >> I think there's uniqueness. >> I mean, put it this way.
In the old days, it would have been, you know, the prime household names. I think today it's the hyperscalers, and again, this comes down to best of breed versus, you know, get it all from one source. So where is your comfort level? So I think they're in co-opetition with the hyperscalers. >> Okay, so it's not Databricks, because, why, they're smaller? >> Well, there is some. Now, within the best-of-breed area, yes, there is competition. The obvious one is Databricks, coming in from the data engineering angle, with Snowflake basically coming from the data analyst angle. Another potential competitor, and I think Snowflake has basically admitted as much, is potentially Mongo... >> DB. Yeah. >> Exactly. So yes, there are two different levels of sort of... >> On a longer-term collision course. >> Exactly, exactly. >> Sort of like ServiceNow and Salesforce. >> That was the thing; when I said that, a lot of people just laughed: no, you're kidding, there's no way. I said, excuse me. >> But then you see Mongo last week adding some analytics capabilities, and they've always been about developers, as you say. >> And they trashed SQL, but yet they finally have started to write their first real SQL. >> We had our own query language; well, now we have SQL. So what were those numbers, Dave? >> Two-thirds, one-third. >> So the hyperscalers: are you going to trust your hyperscalers to do your cross-cloud? Maybe Google, maybe Microsoft, perhaps; AWS, not there yet, right? I mean, how important is cross-cloud, multi-cloud, supercloud, whatever you want to call it? What does your data show? >> Cross-cloud is important. If I remember correctly, our research shows that three-quarters of organizations are operating in the cloud and 52% are operating across more than one cloud. So two-thirds of the organizations in the cloud are doing multi-cloud; that's pretty significant. Now, they may be operating across clouds for different reasons; maybe one application runs in one cloud provider, another application runs in another cloud provider. But I do think organizations want that leverage over the hyperscalers, right? They want to be able to tell the hyperscaler, I'm going to move my workloads over here if you don't give us a better rate. >> I mean, I think from a database standpoint, you're right. They are competing against some really well-funded players; you look at BigQuery, a very solid platform, and Redshift, for all its faults, has really done an amazing job of moving forward. But to David's point, those hyperscalers aren't going to solve that cross-cloud problem, right? >> Right, no, certainly... >> Not as quickly. >> Or with as much zeal, right? >> Yeah, right, cross-cloud. But we're going to operate better on ours. >> Exactly, yes. >> Yes. Even when we talk about multi-cloud, there are many, many definitions; it can mean anything. So the way Snowflake does multi-cloud and the way MongoDB does are very different. Snowflake says, we run on all the hyperscalers, but you have to replicate your data. What MongoDB is claiming is that one cluster can have nodes in multiple different clouds. That is, you know, quite something. >> Yeah, right. I mean, again, you hit that. We've got to go.
But, uh, last question: is Snowflake undervalued, overvalued, or just about right? >> In the stock market or with customers? Well, you know, I'm not sure that's the right question. >> That's the question I'm asking, you know. >> I'll say the question is undervalued or overvalued for customers, right? That's really what matters. There's a different audience that cares about the investor side, and some of those folks are watching, but I believe that from the customer's perspective it's probably valued about right. >> The reason I ask is because it was so hyped. It had a $100 billion valuation; it passed ServiceNow's valuation, which is crazy. Now it's obviously come back quite a bit, below its IPO price. But you guys were at the financial analyst meeting; Scarpelli laid out 2029 projections he signed up for: $10 billion in revenue, 25 percent free cash flow margin, 20% operating profit. I mean, they'd better be worth more than they are today if they do that. >> If I see the momentum here this week, I think they are undervalued. But before this week, I probably would have thought they're at the right valuation. >> I would say they're probably more at the right valuation now, because the IPO valuation was just such a false valuation, so hyped. >> Guys, I could go on for another 45 minutes. Thanks so much, David, Tony, Sanjeev, always great to have you on. We'll have you back for sure. >> Thanks for having us. >> All right, thank you. Keep it right there. We're wrapping up day two on theCUBE at Snowflake Summit 2022. Right back.
Prakash Darji, Pure Storage
(upbeat music) >> Hello, and welcome to this special Cube conversation that we're launching in conjunction with Pure Accelerate. Prakash Darji is here, the general manager of Digital Experience; they actually have a business unit dedicated to this at Pure Storage. Prakash, welcome back, good to see you. >> Yeah Dave, happy to be here. >> So a few weeks back, you and I were talking about the shift to an as-a-service economy, which is a good lead-up to Accelerate, being held today in LA, where we're releasing this video. This is the fifth in-person Accelerate. It's got a new tagline, techfest, so you're making it fun, but still hanging on to the tech, which we love. So this morning you guys made some announcements expanding the portfolio. I'm really interested in your reaffirmed commitment to Evergreen, that's something that got this whole trend started, and the introduction of Evergreen Flex. What is that all about? What's your vision for Evergreen Flex? >> Well, so look, this is one of the biggest moments that I think we have as a company now, because we introduced Evergreen, and that was, and probably still is, one of the largest disruptions to happen to the industry in a decade. Now, Evergreen Flex takes the power of modernizing performance and capacity to storage beyond the box, full stop. So we first started on a project many years ago to say, okay, how can we bring that modernization concept to our entire portfolio? That means if someone's got 10 boxes, how do you modernize performance and capacity across 10 boxes, or across maybe FlashBlade and FlashArray? So with Evergreen Flex, within the QLC family, with our FlashBlade and our FlashArray QLC products, you can actually move QLC capacity to where you need it. And with FlashArray X and XL, our TLC family, you can move capacity to where you need it within that family. Now, if you're enabling that, you have to change the business model, because the capacity needs to get billed where you use it. If you use it in a high-performance tier, you get billed at a high-performance rate. If you use it in a lower-performance tier, you get billed at a lower-performance rate. So we changed the business model to enable this technology flexibility, where customers can buy the hardware and they get a pay-per-use consumption model for the software and services, but this enables the technology flexibility to use your capacity wherever you need it. And we're just continuing that journey of hyper-disaggregation. >> Okay, so you solve the problem of having to allocate specific capacity or performance to a particular workload. You can now spread that across whatever products in the portfolio, like you said, you're disaggregating performance and capacity. So that's very cool. Maybe you could double click on that. You obviously talk to customers about doing this. They were in pain a little bit, right? 'Cause they had this sort of stovepipe thing. So talk a little bit about the customer feedback that led you here.
Well, look, let's just say today you're an application developer, or you haven't written your app yet but you know you're going to. Well, you need to at least say, I need something, right? So someone's going to ask you what kind of storage you need, how many IOPS, what kind of performance and capacity, before you've written your code. And you're going to buy something, and you're going to spend that money. Now at that point, you're going to go write your application, run it on that box and then say, okay, was I right or was I wrong? And you know what? You were guessing before you wrote the software. After you wrote the software, you can test it and decide what you need, how it's going to scale, et cetera. But if you were wrong, you already bought something. In a hyper-disaggregated world, that capacity is not a sunk cost; you can use it wherever you want. You can take capacity from somewhere else and bring it over there. So in the world of application development and in the world of storage, today people think about: I've got a workload, it's SAP, it's Oracle, I've built this custom app, I need to move it to a tier of storage, a performance class. You think about the application and you think about moving the application. And it takes time to move the application, it takes planning, it's a scheduled event. What if you said, you know what, you don't have to do any of that? You just move the capacity to where you need it, right? >> Yep. >> So the application's there, and you actually have the ability to instantaneously move the capacity to where you need it for the application. And eventually, where we're going is we're looking to do the same thing across the performance tiering. So right now, the biggest benefit is the agility and flexibility a customer has across their fleet. Evergreen was great for the customer with one array, but Evergreen Flex now brings that power to the entire fleet, and that's not tied to just FlashArray or FlashBlade. We've engineered a data plane in our DirectFlash fabric software to be able to take on the personality of the system it needs to go into. So when a data pack goes into a FlashBlade, that data pack is optimized for use in that scale-out architecture with the metadata for FlashBlade. When it goes into a FlashArray C, it's optimized for that metadata structure. So our Purity software is what has made it possible to do this, and we created a business model that allowed us to take advantage of this technology flexibility. >> Got it. Okay, so you got this mutually interchangeable performance and capacity across the portfolio, beautiful. And I want to come back to sort of the Purity piece, but help me understand how this is different from just normal Evergreen, the existing Evergreen options. You mentioned the one array, but help us understand that more fully. >> Well, look, so in addition to this, we had Evergreen Gold historically. We introduced Evergreen Flex, and we had Pure as a Service. So you had two ends of a spectrum previously. You had Evergreen Gold on one hand, which modernized the performance and capacity of a box. You had Pure as a Service, which said don't worry about the box, tell me how many IOPS you have and we'll run and operate and manage that service for you. I think we've spoken about that previously on theCUBE. >> Yep. >> Now, we have this model where it's not just about the box; we have this model where we say, you know what, it's your fleet.
You're going to run and operate and manage your fleet, and you can move the capacity to where you need it. So as we started thinking about this, we decided to unify our entire portfolio of software and subscription services under the Evergreen brand. Evergreen Gold we're renaming to Evergreen Forever; we've actually had seven customers just cross a decade of Evergreen updates within a box. So Evergreen Forever is about modernizing a box. Evergreen Flex is about modernizing your fleet, and Evergreen One, which is our rebrand of Pure as a Service, is about modernizing your labor: instead of you worrying about it, let us do it for you. Because if you're an application developer and you're trying to figure out, where should I put my capacity, where should I do it, you can just sign up for the IOPS you need and let us actually deliver and move the components to where you need them for performance, capacity, management, SLAs, et cetera. So as we think about this, for us this is a spectrum and a continuum of where you're at in the modernization journey to software, subscription and services. >> Okay, got it. So why did you feel like now was the right time for the rebranding and the renaming convention? What's behind that? Take us inside the internal conversations and the chalkboard discussion. >> Well, look, the chalkboard discussion's simple. Everything was built on the Evergreen stateless architecture within a box, right? We disaggregated the performance and capacity within the box already, 10 years ago, with Evergreen, and that's what enabled us to build Pure as a Service. That's why I say, when companies say they built a service, I'm like, it's not a service if you have to do a data migration. You need a stateless architecture that's disaggregated. You can almost think of this as the anti-hyperconverged, right? It's going the other way; it's hyper-disaggregated. >> Right. >> And that foundation is true for our whole portfolio. That was fundamental, the Evergreen architecture. And then if Gold is modernizing a box, and Flex is modernizing your fleet and your portfolio, and Pure as a Service is modernizing the labor, it is more of a continuation in the spectrum of how do you ensure you get better with age, right? It's like one of those things when you think about a car: miles driven on a car means your car's getting older, and it doesn't necessarily get better with age, right? What's interesting when you think about the human body: yeah, you get older, and some people deteriorate with age, and some people, it turns out, for a period of time, pick up some muscle mass; you get a little bit older, you get a little bit wiser, and you get a little bit better with age for a while, because you're putting in the work to modernize, right? But where in infrastructure and hardware and technology are you at the point where it always just gets better with age? We introduced that concept 10 years ago, and we've now had proven industry success over a decade. As I mentioned, our first seven customers who've had a decade of Evergreen updates started with an FA-300 way back when, and since then performance and capacity have been getting better over time with Evergreen Forever. So this is the next 10 years of it getting better and better, for the company and not just tied to the box, because now we've grown up, we've got customers with large fleets. I think one of our customers just hit 900 systems, right? >> Wow.
>> So when you have 900 systems, right, and you're running a fleet, you need to think about, okay, how am I using these resources? And in this day and age, in that world, power becomes a big thing, because if you're using resources inefficiently and the cost of power and energy is up, you're going to be in a world of hurt. So by using Flex, where you can move the capacity to where it's needed, you're creating the most efficient operating environment, which is actually the lowest power consumption environment as well. >> Right. >> So we're really excited about this journey of modernizing, but that rebranding just became kind of a no-brainer to us, because it's all part of the spectrum on your journey: whether you're a single-array customer, you're a fleet customer, or you don't even want to run, operate and manage, and you can actually just say, you know what, give me the guarantee and the SLA. So that's the spectrum that informed the rebranding. >> Got it. Yeah, so to your point about the human body, all you got to do is look at Tom Brady's NFL combine videos and you'll see what a transformation. Fine wine is another one. I like the term hyper-disaggregated, because that to me is consistent with what's happening with the cloud and edge: we're building this hyper-distributed, or disaggregated, system. So I want to just understand a little bit: you mentioned Purity, so this software obviously is the enabler here, but what's under the covers? Is it like a virtualizer or mega load balancer, metadata manager, what's the tech behind this? >> Yeah, so we'll do a little bit of a double-click, right? So we have this concept of drives, where in Purity we build our own DirectFlash software that takes the NAND, and we do the NAND management as we're building our drives in Purity software. Now, that advantage gives us the ability to say how should this drive behave. So in a FlashArray C system, it can behave as part of a FlashArray C, and it's usable capacity that you can write, because the metadata and some of the system information is in NVRAM as part of the controller, right? So you have some metadata capability there. In a FlashBlade architecture, for example, you have a distributed blade architecture, so you need parts of that capacity to operate almost like single-level cell, where you can actually have metadata operations independent of your storage operations that operate like QLC. So we actually manage the NAND in a very, very different way based on the persona of the system it's going into, right? It's this capability to make capacity usable anywhere. Name a competitor: Dell, that has PowerMax and Isilon; HPE, that has, you name it, 3PAR and Nimble and the rest. Can you really, from a technology standpoint, say your capacity can be used anywhere across all these independent systems? Everyone's thinking about the world like a system: here's this system, here's that system, here's that system, and your capacity is locked into a system. To be able to unlock that capacity from the system, you need to behave differently with the media type and the operating environment you're going into, and that's what Purity does, right? So we are doing that as part of our DirectFlash software, around how we manage these drives, to enable this. >> Well, it's the same thing in the cloud, Prakash, right? I mean, you've got different APIs and primitives for object, for block, for file.
Now it's all programmable infrastructure, so that makes it easier, but to the point, it's still somewhat stovepiped. So it's funny, it's good to see your commitment to Evergreen; I think you're right, you laid down the gauntlet a decade-plus ago. First everybody ignored you, then they kind of laughed at you, then they criticized you, and then they said, oh, you guys reached escape velocity. So you had a winning hand. So I'm interested in that sort of progression over the past decade, where you're going, why this is so important to your customers, and where you're trying to get them ultimately. >> Well, look, the thing that's most disappointing is if I bought 100 terabytes and still have to re-buy it every three or five years. That seems like a kind of ridiculous proposition, but welcome to storage, you know what I mean? That's what we put an end to with Evergreen. We want to end data migrations. We want to make sure that every software update and hardware update is non-disruptive. We want to make it easy to deploy and run at scale for your fleet. And eventually we want everyone to move to our Evergreen One, formerly Pure as a Service, where we can run and operate and manage, because this is all about trust. We're trying to create trust with the customer to say, trust us to run and operate and scale for you, and worry about your business, because we make tech easy. And think about this hyper-disaggregation if you go further. If you're going further with hyper-disaggregation, you can think of performance and capacity as your Lego building blocks. Now, I have a son; he wants to build a Lego Death Star. If he didn't have that manual, he'd be toast. So when you move to at-scale, and you have this hyper-disaggregated world, you have unlimited freedom, you have unlimited choice. It's the problem of the cloud today: too much choice, right? There are hundreds of instances of this; what do I even choose? >> Right. >> Well, the only way to solve that problem and create simplicity when you have so much choice is to put data to work. And that's where Pure1 comes in, because we've been collecting data, and we can scan your landscape and tell you, you should move these types of resources here and move those types of resources there, right? In the past, it was always about, you should move this application there, or you should move this application there. We're actually going to turn the entire industry on its head. Applications and data have gravity, so let's think about moving resources to where they're needed, versus saying resources are a fixed asset, let's move the applications there. So that's a concept that's new to the industry. We're creating that concept, we're introducing that concept, because now we have the technology to make that a reality: a new, efficient way of running storage for the world. This is that big for the company. >> Well, I mean, a lot of the failures in data analytics and data strategies are a function of trying to jam everything into a single monolithic system and hyper-centralize it. Data by its very nature is distributed, so hyper-disaggregated fits that model, and the pendulum's clearly swinging to that. Prakash, great to have you; purestorage.com, I presume, is where I can learn more? >> Oh, absolutely. We're super excited, and the pent-up demand I think in this space is huge, so we're looking forward to bringing this innovation to the world. >> All right, hey, thanks again.
Great to see you. I appreciate you coming on and explaining this new model, and good luck with it. >> All right, thank you. >> All right, and thanks for watching. This is David Vellante, and we appreciate you watching this Cube conversation; we'll see you next time. (upbeat music)
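Purely as an editor's toy illustration of the fleet-level idea Prakash describes, where utilization data drives suggestions about moving capacity to where it is needed: the greedy pass below is not Pure1's logic, and the array names, thresholds, and pack size are all invented.

```python
# Toy fleet-rebalancing sketch (illustrative only; not Pure1's algorithm).
FLEET = {
    "flasharray-x-01": {"capacity_tb": 200, "used_tb": 185},
    "flasharray-c-02": {"capacity_tb": 500, "used_tb": 150},
    "flashblade-03":   {"capacity_tb": 300, "used_tb": 120},
}
HOT, COLD, PACK_TB = 0.85, 0.50, 25   # assumed thresholds and move granularity

def suggestions(fleet):
    """Yield moves of spare capacity packs from cold arrays to hot arrays."""
    hot = [n for n, a in fleet.items() if a["used_tb"] / a["capacity_tb"] > HOT]
    cold = [n for n, a in fleet.items() if a["used_tb"] / a["capacity_tb"] < COLD]
    for needy in hot:
        for donor in cold:
            yield f"move ~{PACK_TB} TB from {donor} to {needy}"

for s in suggestions(FLEET):
    print(s)
```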
Kirsten Newcomer & Jim Mercer | Red Hat Summit 2022
(upbeat music) >> Welcome back. We're winding down theCUBE's coverage of Red Hat Summit 2022. We're here at the Seaport in Boston. It's been two days of a little different Red Hat Summit; we're used to eight, nine thousand people. It's a much smaller event this year, fewer developers, or actually, in terms of the mix, a lot more suits this year, which is kind of interesting to see that evolution, and a big virtual audience. And I love the way the keynotes, we've noticed, are a lot tighter; they're pithy, on time, they're not keeping us in the hall for three hours, so we appreciate that kind of catering to the virtual audience. Dave Vellante here with my co-host, Paul Gillin. As I say, things are winding down; there was an analyst event here today, that's ended, but luckily we have Jim Mercer here, a research director at IDC. He's going to share maybe some of the learnings from that event today and this event overall; we're going to talk about DevSecOps. And Kirsten Newcomer is director of security product management, hybrid platforms, at Red Hat. Folks, welcome. >> Thank you. >> Thank you. >> Great to see you. >> Great to be here. >> Security's everywhere, right? You and I have spoken about the supply chain hacks; we've done some interesting work and reporting around that. I feel like SolarWinds created a new awareness. You see these moments: it's Stuxnet, or WannaCry, and now SolarWinds, very insidious. But security, Red Hat, it's everywhere in your portfolio. Maybe talk about the strategy. >> Sure, absolutely. We feel strongly that it's really important that security be something that is managed in a holistic way, present throughout the application stack, starting with the operating system, and also throughout the life cycle, which is partly where DevSecOps comes in. So Red Hat has had a long history here, right? Think SELinux in Red Hat Enterprise Linux for mandatory access control; that's been a key component of securing containers in a Kubernetes environment, and SELinux has demonstrated the ability to prevent or mitigate container escapes to the file system. And we have just continued to work up the stack as we go. Our acquisition of StackRox a little over a year ago, now known as Red Hat Advanced Cluster Security, gives us the opportunity to really deliver on that DevSecOps component: a Kubernetes-native security solution with the ability to both help shift security left for the developers, by integrating in the supply chain, but also provide a SecOps perspective for the operations and the security team, and feed information between the two to really try and do that closed infinity loop, and then an additional investment more recently in sigstore and related signing technologies. >> Interesting. >> Yeah, it is interesting. >> Go ahead. >> But shift left: explain what you mean by shift left for people who might not be familiar with that term. >> Fair enough. For many, many years, IT security has largely been part of an operations environment and not something that developers tended to need to be engaged in, with the exception of, say, source code static analysis tools. We started to see vulnerability management tools get added, but even then they tend to come after the application has been built. I even ran into a customer a few years ago who said, my security team won't let me get this information early.
So shift left is all about making sure that there are security gates in the app dev process, and information provided to the developer as early as possible. In fact, even in the IDE, Red Hat CodeReady Dependency Analytics does that, so that the developers are part of the solution and don't have to wait and get their apps stalled just before they're ready to go into deployment. >> Thank you. You've also been advocating for supply chain security, software supply chain. First of all, explain what a software supply chain is, and then, what is unique about the security needs of that environment? >> Sure. And the SolarWinds example, as Dave said, has really raised awareness around this. Just like we use the term supply chain elsewhere, most people, given what's been happening with the pandemic, have started hearing that term a lot more than they used to, right? So there's a supply chain to get your groceries to the grocery store, food to the grocery store. There's a supply chain for manufacturing: where do the parts come from for the laptops that we're all using, and where do they get assembled? Software has a supply chain also. For years, and even more so now, developers have been including open source components in the applications they build. So some of the supplies for the applications, the components of those applications, can come from anywhere in the world; they can come from a wide range of open source projects. Developers are adding their custom code to that, and all of this needs to be built together and delivered together. So when we think about a supply chain and the SolarWinds hack, there are a couple of elements of supply chain security that are particularly key. The executive order from May of last year, I think, was partly in direct response to the SolarWinds hack, and it calls out that we need a software bill of materials. Now again, in manufacturing that's something folks are used to. I actually had the opportunity to contribute to the Software Package Data Exchange format, SPDX, when it was first started; I've lost track of when that was. But an SBOM is all about saying, what are all of those components that I'm delivering in my solution? It might be at the application layer, it might be at the host operating system layer, but at every layer. And if I know what's in what I'm delivering, I have the opportunity to learn more information about those components and to track them. When Log4Shell hit, or Spring4Shell, which followed shortly thereafter, how do I find out which solutions that I'm running have the vulnerable components in them, and where are they? The software bill of materials helps with that, but you also have to know where, right, and that's the Ops side. I feel like I missed a piece of your question. >> No, it's not a silver bullet, though, to your point, and Log4j was very widely used. But let's bring Jim into the conversation. So Jim, we've been talking about some of these trends. What's your focus area of research? What are you seeing as some of the megatrends in this space? >> I mean, I focus on DevOps and DevSecOps, and it's interesting just talking about trends. Kirsten was mentioning the open source piece, and if you look back five, six, seven years ago and went to any major financial institution and asked them if they used any open source: oh, no. >> True. >> We don't use that, right? We wrote it all here, it's all from our developers... >> Witchcraft. >> Yeah, right, exactly.
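Tying back to the software bill of materials discussion above, here is a toy sketch of the lookup an SBOM makes possible: given one SBOM file per deployed image, finding everything that contains a vulnerable component such as log4j-core becomes a scan rather than a scramble. It assumes CycloneDX-style JSON with a top-level "components" list; the directory layout, file naming, and the versions flagged are all assumptions.

```python
# Toy SBOM scan; file layout, field names and "vulnerable" versions are assumed.
import json
from pathlib import Path

VULNERABLE = {("log4j-core", "2.14.1"), ("spring-beans", "5.3.17")}  # examples

def affected_images(sbom_dir: str):
    """Yield (image, component, version) for every SBOM containing a hit."""
    for sbom_path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for comp in sbom.get("components", []):
            key = (comp.get("name"), comp.get("version"))
            if key in VULNERABLE:
                yield (sbom_path.stem, key[0], key[1])

if __name__ == "__main__":
    for image, name, version in affected_images("./sboms"):
        print(f"{image}: contains {name} {version}")
```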
But the reality is, they probably use a little open source back then but they didn't realize it. >> It's exactly true. >> However, today, not only are they not on versed to open source, they're seeking it out, right. So we have survey data that kind of indicates... A survey that was run kind of in late 2021 that shows that 70% of those who responded said that within the next two years 90% of their applications will be made up of open source. In other words, the content of an application, 10% will be written by themselves and 90% will come from other sources. So we're seeing these more kind of composite applications. Not, everybody's kind of, if you will, at that 90%, but applications are much more composite than they were before. So I'm pulling in pieces, but I'm taking the innovation of the community. So I not only have the innovation of my developers, but I can expand that. I can take the innovation to the community and bring that in and do things much quicker. I can also not have my developers worry about things that, maybe just kind of common stuff that's out there that might have already been written. In other words, just focus on the business logic, don't focus on, how to get orders or how to move widgets and those types of things that everybody does 'cause that's out there in open source. I'll just take that, right. I'll take it, somebody's perfected it, better than I'll ever do. I'll take that in and then I'll just focus and build my business logic on top of that. So open source has been a boom for growth. And I think we've heard a little bit of that (Kirsten laughs) in the last two days-- >> In the Keynotes. >> From Red Hat, right. But talking about the software bill of materials, and then you think about now I taking all that stuff in, I have my first level open source that I took in, it's called it component A. But behind component A is all these transitive dependencies. In other words, open source also uses open source, right? So there's this kind of this, if you will, web or nest, if you want to call it that, of transitive dependencies that need to be understood. And if I have five, six layers deep, I have a vulnerability in another component and I'm over here. Well, guess what? I picked up that vulnerability, right. Even though I didn't explicitly go for that component. So that's where understanding that software bill of materials is really important. I like to explain it as, during the pandemic, we've all experienced, there was all this contact tracing. It was a term where all came to mind. The software bill of materials is like the contact tracing for your open source, right. >> Good analogy. >> Anything that I've come in contact with, just because I came in contact with it, even though I didn't explicitly go looking for COVID, if you will, I got it, right. So in the same regard, that's how I do the contact tracing for my software. >> That 90% figure is really striking. 90% open source use is really striking, considering that it wasn't that long ago that one of the wraps on open source was it's insecure because anybody can see the code, therefore anybody can see the vulnerabilities. What changed? >> I'll say that, what changed is kind of first, the understanding that I can leapfrog and innovate with open source, right? There's more open source content out there. 
So as organizations had to digitally transform themselves -- and we've all heard the terminology around, hey, with the pandemic we've leapfrogged five years of digital transformation, or something along those lines -- open source is part of what helps those teams do that type of leapfrog and that type of innovation. If you had to develop all of that natively, it would just take too long, or you might not have the talent to do it, or be able to find that talent. So it gives you that benefit. The interesting thing about what you mentioned there is, now we're hearing about all these vulnerabilities in open source that we need to contend with, because the bad guys realize that I'm taking a lot of open source, and they're saying, geez, that's a great way to get myself into applications. If I get myself into this one open source component, I'll get into thousands or more applications. So it's a fast path into the supply chain. And that's why it's so important that you understand where your vulnerabilities are in the software-- >> I think the visibility cuts two ways, though. When people say it's insecure because it's visible -- in fact, the visibility actually helps with security. The reality is that I can go see the code, and that there is a community working on finding and fixing vulnerabilities in that code. Whereas code that is not open source is a little bit more security by obscurity, which isn't really security, and there could well be vulnerabilities that a good hacker is going to find but that are not disclosed. So one of the other things we feel strongly about at Red Hat, frankly, is if there is a CVE that affects our code, we disclose that publicly; we have a public CVE database. And it's actually really important to us that we share that. We think we share way more information about issues in our code than most other users or consumers of open source, and we work that through the broad community as well. And then also, for our enterprise customers, if an issue needs to be fixed, we don't just fix it in the most recent version of the open source -- we will backport that fix. One of the challenges, if you're only addressing the most recent version, is that it may not be well tested, it might have other bugs, it might have other issues. When we backport a security vulnerability fix, we're able to do that to a stable version and give customers the benefit of all the testing and use that's gone on, while also fixing the issue. >> Kirsten, can you talk about the announcements? Because everybody's wondering, okay, now what do I do about this? What technology is there to help me? Obviously this framework -- you've got to follow the right processes, skill sets, all that, not to dismiss that, that's the most important part -- but the announcements that you made at Red Hat Summit, and how does the StackRox acquisition fit into those? >> Sure. So in particular, if we stick with DevSecOps for a minute -- again, for me, DevSecOps is the full life cycle, and many people think of it as just that Shift Left piece, but for me it's the whole thing. So StackRox ACS has had the ability to integrate into the CI/CD pipeline since before we bought them. That continues. They don't just assess for vulnerabilities, but also for application misconfigurations and excess privilege requests, in Helm charts and deployment YAML.
So there are two major things in the DevSecOps angle of the announcement, or the supply chain angle of the announcement. One is the investment that we've been making in sigstore. Signing -- getting integrity of the components, the elements you're deploying -- is important. I have been asked for years about the ability to sign container images. The reality is that signing technology -- and Red Hat signs everything we ship and always has -- wasn't designed to be used in a CI/CD pipeline, and sigstore is explicitly designed for that use case, to make it easy for developers. You can back it with Fulcio and OIDC-based keyless signing -- throw away the key -- or, if you want that enterprise CA, you can have that backing there too. >> And you can establish that as a protocol where you must. >> You can, right. So our pattern-- >> So that would've helped with SolarWinds. >> Absolutely. >> Because they were putting in malware and then taking it out, seeing what happened. My question was, could sigstore help? I always evaluate everything now -- and I'm not a security expert -- but would this have helped with SolarWinds? A lot of times the answer is no. >> It's a combination. So, a combination of sigstore integrated with Tekton Chains. We ship Tekton, which is a Kubernetes supply chain pipeline, as OpenShift Pipelines, and we added Chains to that. Chains allows you to attest every step in your pipeline, and you're doing that attestation by signing those steps, so that you can validate that those steps have not changed. And in fact, the folks at SolarWinds are using Tekton Chains. They did a great talk in October at KubeCon North America on the changes they've made to their supply chain. So they're using both Tekton Chains and sigstore as part of their updated pipeline. Our pattern will allow our customers to deploy OpenShift, Advanced Cluster Management, Advanced Cluster Security and Quay with security gates in place. And that includes a pipeline built on Tekton, with Tekton Chains there to sign those steps in the pipeline, to enable signing of the code that's moving through that pipeline, to store that signature in Quay, and to validate the image signature upon deployment with Advanced Cluster Security.
So I need to take the friction out of the process. And one of the challenges a lot of organizations have -- I've heard this from the development side, but I've also heard it from the InfoSec side, because I take inquiries from people in InfoSec -- is, they're like, how do I get these developers to do what I want? And part of the challenge they have is: I've got these teams using these tools, I've got those teams using those tools. It's a similar challenge to what we saw in DevOps, where there are just too many, if you will, too many dang tools. So that is a challenge for organizations: they're trying to normalize the tools. Interestingly, we did a survey, I think around last August or so, and one of the questions was around where you want your security -- where do you want to get your DevSecOps security from? Do you want to get it from individual vendors, or do you want to get it from the platforms that you're using to deploy, like Kubernetes? >> Great question. What did they say? >> The majority of them are hoping they can get it built into the platform. That's really what they want. And you see a lot of the security vendors trying to build security platforms -- we're not just a SAST tool, we're DAST, we're this, whatever -- building platforms to be that end-to-end security platform, trying to solve that problem and make it easier to consume the product overall, without a bunch of individual tools along the way. But certainly tool sprawl is definitely a challenge out there. Just one other point around the sigstore stuff, which I love, because that goes back to the supply chain and talking about digital provenance -- understanding where things came from. How do I validate that what I gave you is what you thought it was? And what I like about it with Tekton Chains is a couple of things. First, I don't want to only sign things once, after I've built the binary. I mean, I do want to sign that, but I want signatures all through the process. I think of it as a manufacturing plant, right -- I'm making automobiles. If I check the quality of the automobile at one stage and I don't check it at another, things may have changed. How do I know that something wasn't compromised? So sigstore, tied in with Tekton Chains, gives me that view. And the other aspect I like about it is this kind of transparency log-- >> The Rekor component. >> Exactly. So I can see what was going on. There's this kind of public scrutiny: if something bad happened, you could go back and see what happened there and whether it was what you expected. >> As with most discussions on this topic, we could go for an hour, because it's really important. And thank you guys for coming on and sharing your perspectives, the data. >> Our pleasure. >> And keep up the good work. Kirsten, it's on you. >> Thanks so much. >> The IDC survey said it: they want it in platforms. You're up. >> (laughs) That's right. >> All right. Good luck to both of you. >> Thank you both so much. >> All right. And thank you for watching. We're back to wrap right after this short break. This is Dave Vellante for Paul Gillin. You're watching theCUBE. (upbeat music)
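As a purely conceptual sketch of the "sign each step, then verify before deploy" idea discussed above, the Python below signs an artifact digest with an Ed25519 key and verifies it later. This is not sigstore, Fulcio, or Tekton Chains -- those add keyless certificates, a transparency log, and attestation formats -- it only illustrates the underlying sign/verify gate using the `cryptography` library, with the artifact bytes and key handling invented for the example.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# 1. A pipeline step produces an artifact; sign its digest.
artifact = b"container-image-layer-or-build-output"
digest = hashlib.sha256(artifact).digest()

private_key = Ed25519PrivateKey.generate()   # in practice: short-lived or CA-backed key
signature = private_key.sign(digest)
public_key = private_key.public_key()

# 2. Later, at deployment time, verify the digest has not changed.
def passes_gate(artifact_bytes: bytes) -> bool:
    try:
        public_key.verify(signature, hashlib.sha256(artifact_bytes).digest())
        return True
    except InvalidSignature:
        return False

print(passes_gate(artifact))                 # True  -> artifact untouched
print(passes_gate(artifact + b"tampered"))   # False -> fails the deployment gate
```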
Ashesh Badani, Red Hat | Red Hat Summit 2022
>> Welcome back to the Seaport in Boston, Massachusetts. The city is crazy with Bruins and Celtics talk, but we're here talking Red Hat: Linux, OpenShift, Ansible. And Ashesh Badani is here. He's the Senior Vice President and Head of Products at Red Hat, fresh off the keynote stage. Great to see you face to face -- amazing that we're here now, after two years of the isolation economy. Welcome back. >> Thank you. Great to see you again as well, and you as well, Paul. >> So, no shortage of announcements from Red Hat this week. Paul wrote a piece on SiliconANGLE.com, and I've got my yellow highlights -- I've been through all the announcements. Which is your favorite baby? >> Hard for me to choose, hard for me to choose. I'll talk about RHEL 9. Well, 9 is exciting, and in a weird way it's exciting because it's boring -- because it's consistent. Three years ago we committed to releasing a major release every three years, so customers, partners, and users can plan for it. So we've released the latest version of RHEL. In between, we've been delivering minor releases every six months as well, with a lot of capabilities bundled in around security, automation, and edge management. And then RHEL is also the foundation of the work we announced with GM, with the in-vehicle operating system. So that's extremely exciting news for us as well, along with the collaboration that we're doing with them. And then a whole host of other announcements around cloud services, work around DevSecOps, and so on. So yeah, a lot of news, a lot of announcements. I would say RHEL 9 and the work with GM probably come right up to the top. >> I wanted to get to one aspect of the RHEL 9 announcement, and that is the role of CentOS Stream in that development. Now, in December, I think it was, Red Hat discontinued development or support for CentOS and moved to CentOS Stream. I'm still not clear what the difference is between the two. Can you clarify that? >> I think we got into a situation, especially with many customers and many partners, where they didn't quite get a sense of where CentOS was from a life cycle perspective. Was it upstream to RHEL? Was it downstream to RHEL? What's the life cycle for it itself? And then there became some implied notions around what that looked like. So what we decided was to make a really clean break and say CentOS Stream is the upstream for enterprise Linux. From day one, partners -- software partners, hardware partners -- can collaborate with us to develop RHEL and then take it all the way through the life cycle. So now it becomes a true upstream, a true place for development for us, and RHEL essentially comes out as a series of releases based on the work that we do in a fast-moving CentOS Stream environment. >> But wasn't CentOS essentially that upstream development environment to begin with? What's the difference with CentOS Stream? >> It wasn't quite upstream; it was actually a little bit downstream. >> It was kind of bi-directional. >> Yeah. And so that sort of became an implied life cycle, when there really wasn't one -- it just became one because of usage and adoption. So now this really clarifies the relationship between the two. We've heard feedback, for example, from software partners and users saying, hey, what do I do for development, because I used CentOS in the past. We're like, yup, we have RHEL for developers available, we
have RHEL for small teams available, we have RHEL available for non-profit organizations, and so we've made RHEL available in various form factors for the needs that folks had and were perhaps using CentOS for, because there was no such alternative. So now there's that clarity, and that's really the key point there. >> So language matters a lot in the technology business. We've seen it over the years: the industry coalesces around terminology, whether it was the PC era -- everything was PC this, PC that -- the internet era, and certainly the cloud. We learned a lot of language from the likes of AWS: two-pizza teams, working backwards, things like that became commonplace. Hybrid and multi-cloud are kind of the parlance of the day. You guys use hybrid. You and I have talked about this -- I feel like there's something new coming. I don't think my term of supercloud is necessarily the right terminology, but it signifies something different, and I feel like your announcements point to that. Within your hybrid umbrella, the point being, there's so much talk about the edge. We heard Paul Cormier talk about new hardware architectures, and you're seeing that at the edge -- what you're doing with the in-vehicle operating system. These are new. The cloud isn't just a bunch of remote services anymore. It's on-prem, it's a cloud, it's cross-clouds, and it's now going out to the edge. It's something new and different. I think hybrid is your sort of term for that, but it feels like it's transcending hybrid. What are your thoughts? >> Really great question, actually. Since you and I talked, Dave, I've been spending some time noodling over just that, and you're right. There's probably some terminology that will get developed, either by us or in collaboration with the industry, where we almost have the connection -- almost like a metacloud -- that we're working our way towards. Because there's, if you will, the cloud: on-premise, virtualized, bare metal -- which, by the way, is increasingly interesting and important; we do a lot of work with NVIDIA, and folks want to run specific workloads there; we announced support for Arm, another now-popular architecture, especially as we go out to the edge. So obviously there's private cloud, public cloud, and then the edge becomes a continuum. Now, in that process we actually have a major shipping company, a cruise line, that's talking about using OpenShift on cruise ships -- that's the edge. Last year we had Verizon talking about 5G and RAN and the next generation there -- that's the edge. When we talk to retail, the storefront's the edge. You talk to a bank, the bank environment's the edge. So everyone's got a different definition of edge, and we're working with them. And then when we announce this collaboration with GM, now the edge becomes the automobile. So if you think of this as a continuum -- bare metal, private cloud, public cloud, take it out to the edge -- we're almost living in a world of a little bit of abstraction, making sure that we are focused on where data is being generated, and then how we can help ensure that we're providing a consistent experience regardless of where it runs. >> Metacloud -- maybe I can work NFTs in there a little bit. >> We're going
to get through this whole thing without saying metaverse, I was hoping. >> I do want to ask you about the edge and the proliferation of hardware platforms. Paul Cormier mentioned this during the keynote today: hardware is becoming important. There are a lot of people building hardware, in development now, for areas like intelligent devices and AI. How does this influence your development priorities, when you have all these different platforms that you need to support? >> So, we think about that a lot, mostly because we have engagements with so many hardware partners. Obviously there are the more traditional partners, like the Dells and the HPEs that we work with. We've historically worked with them, and we're also working with them in newer areas with regard to appliances that are being developed. And then there's the work that we do with partners like NVIDIA, or on new architectures like Arm. Our perspective is that this will be use case driven more than anything else. There are certain environments where you have Arm-based devices; other environments where you've got specific workloads that can take advantage of being built on GPUs, which we'll see increasingly being used, especially to address that problem and provide a solution towards it. So our belief has always been: look, we're going to give you a consistent platform, a consistent abstraction, across all these pieces of hardware, and then you, Mr. or Ms. Customer, make the best choice for yourself. >> A couple other areas we have to hit on: I want to talk about cloud services, and we've got to talk about security if we leave time to get there. But why the push to cloud services? What's driving that? >> It's actually customers driving it. We have customers consistently asking us, saying, we love what you give us, and we want to make sure that's available to us when we consume in the cloud. So we've made RHEL available, for example, on demand -- you can consume it directly via public cloud consoles, and we're now making it available via marketplaces. We talked about Ansible available as a managed service on Azure, and OpenShift, of course, available as a managed service in multiple clouds. All of this is also because we've got customers who have committed spends with cloud providers, and they want to make sure that the environments they're using count towards that. At the same time, it gives them flexibility and choice: if in certain situations they want to run in the data center, great, we have that solution for them; in other cases they want to procure from the cloud and run it there, and we're happy to support them there as well. >> Let's talk about security, because you have a lot of announcements -- security everywhere -- and then some specific announcements as well. I always think about these things in the context of the SolarWinds supply chain hack: how would this have affected it? But tell us about what's going on in security, your philosophy there, and the announcements that you made. >> So, our security announcements actually span our entire portfolio, and that's not an accident -- that's by design, because we've really been thinking about and emphasizing how we ensure that the security profile is raised for users, both from a malicious perspective and also helping with accidental issues; both matter. So one: there are huge amounts of open source software out in the world, and estimates are one in
ten has some kind of security vulnerability in place. There's a massive amount of change in where software is being developed -- the rate of change, for example, in Kubernetes is dramatic, much more even than Linux; entire parts of Kubernetes get rewritten over a three-year period. So as you introduce all that, you have to think, for example, about what's known as shift-left security, or DevSecOps: how do we make sure we move security closer to where development is actually done? How do we give you a pattern? So we introduced a software supply chain pattern via OpenShift -- it delivers a complete stack of code that you can go off and run, that follows best practices, including, for example, GitOps for developers and support on the pipelines front. There's a whole bunch of security capabilities in RHEL: a new image integrity measurement architecture, which gives a better ability to see, in a post-install environment, what the integrity of the packages is. There's signing technology incorporated into OpenShift as well as into Ansible. So it's a long, long list of capabilities and features, and then also more and more defaults that we're putting in place that make it easier, for example, for someone not to hurt themselves accidentally on the security front. >> I noticed that today's batch of announcements included support within OpenShift Pipelines for sigstore, which is an open source project that was actually birthed at Red Hat. We haven't heard a whole lot about it. How important is sigstore to your future product direction? >> So look, I think of that as work that's being done out of our CTO's office, and obviously security is a big focus area for them. Sigstore is a great example of saying: how can we verify content that's in containers, make sure it's digitally signed and appropriate to be deployed across a bunch of environments? But that thinking isn't unique to us on the container side, mostly because we have two decades or more of thinking about that on the RHEL side. Fundamentally, containers are built on Linux, so a lot of the lessons we've learned, a lot of the expertise we've built over the years in Linux, we're now starting to apply to containers. And my guess is, increasingly, we're going to see more of the need for that out at the edge as well. >> I picked up on that too. Let me ask a follow-up question on sigstore. If I'm a developer and I use that capability, it ensures the provenance of that code -- is it immutable, the signature? And the reason I ask is because, again, I think of everything in the context of SolarWinds, where they were putting code into the supply chain and then removing it to see what happened and how people reacted. It's just a really scary environment. >> The hardest part in these environments is actually the behavior change. What's an example of that? Packages built and verified by Red Hat -- when they went from Red Hat to the actual user, have we been able to make sure we verified the integrity of all of those when they were put into use? Unless we have behavior that makes sure we do that, we find ourselves in trouble. In the earliest days of OpenShift, we used to get knocked a lot by developers who said, hey, this platform's really hard to use. We'd investigate:
hey, look, why is that happening? By default we didn't allow root access, and so someone using the OpenShift platform would say, oh my gosh, I can't use it, I'm so used to having root access. And we're like, no, that's actually sealed off by default, because it's not a good security best practice. Now, over a period of time, once we've explained that enough times, behavior changes: yeah, that makes sense now, right? So there are behaviors, and the more we can do, for example, in the shift left -- which is one of the reasons, by the way, we bought StackRox a year ago -- for declarative security, container-native security, so threat detection, network segmentation, watching intrusions and malicious behavior, is something that we can now essentially make native to development itself. >> All right, escape key -- let's talk futures a little bit. I went downstairs to the "Ask the Experts" area, and there was this awesome demo -- I don't know if you've seen it -- a design-thinking booth showing how you build an application. I think they were using the WHO, one of their apps during COVID, and it shows the granularity of the stack and the development pipeline and all the steps that have to take place. And it strikes me as something we've talked about. You've got this application development stack, if you will, and the database is there to support that, and then over here you've got this analytics stack, and it's separate. We always talk about injecting more AI into apps, more data into apps, but there are separate stacks. Do you see a day where those two stacks can come together? And if not, how do we inject more data and AI into apps? What are your thoughts on that? >> Great -- that's another area we've talked about, Dave, in the past. So we definitely agree with that, and as for what final shape it takes, I think we've got some ideas. What we've started doing is picking specific areas where we can go see what kind of usage we get from customers. For example, we have OpenShift Data Science, which is basically a way for us to talk about MLOps: how can we have a platform that allows for different models that you can use, where we can test and train data, with different frameworks that you can then deploy in an environment of your choice, and we run that for you and assist you in taking the next steps you want with your machine learning algorithms. There's work that we introduced at Summit around database services -- essentially a cloud service that gives customers an easy way to access either MongoDB or Cockroach in a cloud-native fashion. And all of these things that we're experimenting with are to be able to say: how do we bring those worlds closer together -- database, data, analytics -- with a core platform and a core stack? Because again, this will become part of one continuum that we're going to work with. >> I like your "continuum" -- that's, I think, really instructive. It's not a technical barrier, is what I'm hearing; it's maybe organizational mindset. I should be able to insert a column into my application development pipeline and insert the data -- I mean, Kafka, TensorFlow in there -- there's no technical reason I can't do that.
It's just that we've created these sort of separate stovepipe organizations. >> A hundred percent, right. They're different teams: you've got the platform team or the ops team, there's a separate dev team, there's a separate data team, there's a separate storage team, and each of them works slightly differently, independently. So the question then is -- I mean, that's sort of how DevOps came along, and then you're like, oh wait a minute, don't forget security, and now we're at DevSecOps. So the more of that we can bring together, I think the more convergence we'll see. >> When I think about the in-vehicle OS, I see that as a great use case for real-time AI inferencing, streaming data. I wanted to ask you about that real quickly, because just before the conference began we got the announcement about GM. Your partnership with GM seems like it came together very quickly. Why is it so important for Red Hat? This is a whole new category of application that you're going to be working on. >> So, we've been working with GM, not publicly, for a while now, and it was very clear that GM believes this is the future: electric vehicles, on into autonomous driving. And we were very keen to say we believe there are a lot of attributes in RHEL that we can bring to bear in a different form factor, to assist with the different needs that exist in this industry. So one, it's interesting for us because we believe that's a use case we can add value to, but it's also the future of automotive. The opportunity to say, look, we can take open source technology and collaborate out with the community to fundamentally help transform that industry towards where it wants to go -- that's just the passion that we have; that's what wakes us up every morning. >> You're clearly leaning into that. Thank you for coming on theCUBE -- really appreciate your time and your insights, and have a great rest of the event. >> Thank you for having me. >> Metacloud -- it's a thing. >> It's a thing, right? It's kind of there; we're going to see it emerge over the next decade. >> All right, you're watching theCUBE's coverage of Red Hat Summit 2022 from Boston. Keep it right there, we'll be right back. (upbeat music)
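To illustrate the earlier point that nothing technical prevents wiring streaming data and a model into the same application pipeline, here is a small, hypothetical Python sketch that consumes events from a Kafka topic and applies a trivial scoring function inline. The topic name, broker address, and threshold "model" are stand-ins for illustration only; they don't refer to any specific Red Hat or GM system, and a real deployment would plug in a trained model instead of the threshold rule.

```python
import json
from kafka import KafkaConsumer  # kafka-python client

def score(event: dict) -> float:
    """Stand-in for a real trained model: flag unusually large order values."""
    return 1.0 if event.get("order_value", 0) > 10_000 else 0.0

consumer = KafkaConsumer(
    "orders",                                     # hypothetical topic
    bootstrap_servers="broker.example.com:9092",  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    if score(event) > 0.5:
        # In a real pipeline this might publish to an alerts topic or a database.
        print(f"anomalous event: {event}")
```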
Rahul Pathak Opening Session | AWS Startup Showcase S2 E2
>> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase, season two, episode two. The theme is data as code, the future of analytics. I'm John Furrier, your host. We have a great lineup for you: fast-growing startups, a great lineup of companies, founders, and stories around data as code. And we're going to kick it off here with our opening keynote with Rahul Pathak, VP of Analytics at AWS and a CUBE alumni. Rahul, thank you for coming on and being the opening keynote for this awesome event. >> It's great to see you, and it's great to be part of this event. I'm excited to help showcase some of the great innovation that startups are doing on top of AWS. >> We last spoke at AWS re:Invent, and a lot's happened since -- serverless at the center of the action -- and all these startups, Rockset, Dremio, Cribl, Ahana, Imply and others, all doing great stuff. Data as code has a lot of traction, so there's still a lot of momentum going on in the marketplace. Pretty exciting. >> It's awesome. There's so much innovation happening, and the wonderful part of working with data is that the demand for services and products that help customers drive insight from data is just skyrocketing and has no sign of slowing down. So it's a great time to be in the data business. >> It's interesting to see the theme of the show getting traction, because you start to see data being treated almost like how developers write software: taking things out of branches, working on them, putting them back in; machine learning getting iterated on; more models being trained differently, with better insights and actions -- all kind of working like code. And this is a whole other way people are reinventing their businesses. This has been a big, huge wave. What's your reaction to that? >> I think it's spot on. The idea of data as code, and bringing some of the repeatability of processes from software development into how people build applications, is absolutely fundamental -- and especially so in machine learning, where you need to think about the explainability of a model and what version of the world it was trained on. When you build a better model, you need to be able to explain and reproduce it. So I think your insights are spot on, and these ideas are showing up in all stages of the data workflow, from ingestion to analytics to machine learning. >> This next wave is about modernization and going to the next level with cloud scale. Thank you so much for coming on and being the keynote presenter for this great event. I'll let you take it away: reinventing businesses with AWS analytics. Take it away. >> Okay, perfect. Well, folks, we're going to talk about reinventing your business with data. If you think about it, the first wave of reinvention was really driven by the cloud, as customers were able to transform how they thought about technology, and that's well on its way -- although, if you stop and think about it, we're only about five to 10% of the way done in terms of IT spend being on the cloud, so there's lots of work to do there. But we're seeing another wave of reinvention, which is companies reinventing their businesses with data: really using data to transform what they're doing, to look for new opportunities, and to look for ways to operate more efficiently. And the past couple of years of the pandemic have only accelerated that trend. What we're seeing is really the survival of the most informed: folks with the best data are able to react more quickly to what's happening. We've seen customers able to scale up if they're in, say, the delivery business, or scale down if they were in the travel business at the beginning of all this, and then use data to find new opportunities and new ways to serve customers. So it's really foundational, and we're seeing it across the board. It's great to see the innovation that's happening to help customers make sense of all of this. Our customers are really looking at ways to put data to work: it's about making better decisions, finding new efficiencies, and finding new opportunities to succeed and scale. When it comes to good examples of this, FINRA is a great one. You may not have heard of them, but they're the U.S. equities regulator: all trading that happens in equities, they keep track of -- they look at about 250 billion records per day. They examine it all on EMR, which is our Spark and Hadoop service, processing 20 terabytes of data running across tens of thousands of nodes, and they're looking for fraud and bad actors in the market. So it's been a huge transformation journey for FINRA over the years -- a customer I've gotten to work with personally since really 2013 onward, and it's been amazing to see their journey. Pinterest is another great customer I'm sure everyone's familiar with. They're about visual search, discovery, and commerce, and they were able to scale their daily searches by a factor of three X or more and drive down their costs, using the Amazon OpenSearch Service. Really what we're trying to do at AWS is give our customers the most comprehensive set of services for the end-to-end journey around data, from ingestion to analytics and machine learning: a comprehensive set of capabilities for ingestion, cataloging, analytics, and then machine learning. And all of these are things that our partners, and the startups that run on us, have available to them to build on as they build and deliver value for their customers. The way we think about this is we want customers to be able to modernize what they're doing and their infrastructure, and we provide services for that. It's about unifying data wherever it lives and connecting it, so customers can build a complete picture of their customers and business. And then it's about innovation, really using machine learning to bring all of this unified data to bear on driving new innovation and new opportunities for customers. What we're trying to do at AWS is provide a scalable and secure cloud platform that customers and partners can build on. Unifying is about connecting data, and it's also about providing well-governed access to data. One of the big trends we see is customers looking to make self-service data available to their end users, and the key to that is good foundational governance: once you can define good access controls, you're more comfortable setting data free. The other part of it is that data lakes play a huge role, because you need to be able to think about structured and unstructured data -- in fact, about 80% of the data being generated today is unstructured. And you want to be able to connect data that's in data lakes with data that's in purpose-built data stores, whether that's databases on AWS, databases outside, SaaS products, as well as data warehouses and machine learning systems -- really, connecting data is key. And then innovation: how can we reimagine processes with new technologies like AI and machine learning? AI is also key to unlocking a lot of the value that's in unstructured data. If you can figure out what's in an image, or the sentiment of audio, and do that in real time, that lets you personalize and dynamically tailor experiences, all of which is super important to getting an edge in the modern marketplace. So at AWS, when we think about connecting the dots across sources of data, allowing customers to use data lakes, databases, analytics, and machine learning, we want to provide a common catalog and governance and then use these to help drive new experiences for customers in their apps and their devices. In an ideal world, this creates a closed loop: you create a new experience, you observe how customers interact with it, and that generates more data, which is a data source that feeds back into the system.
You know, as I mentioned, we see roughly a 10 X increase in data volume every five years. So that's a exponential increase in data volumes, Uh, from a purpose-built perspective, it's the right tool for the right job, the red shift and data warehousing Athena for querying all your data. Uh, EMR is our managed sparking to do, uh, open search for log analytics and search, and then Kinesis and Amex care for CAFCA and streaming. And that's been another big trend is, uh, real time. Data has been exploding and customers wanting to make sense of that data in real time, uh, is another big deal. >>Uh, some examples of how we're able to achieve differentiated performance and purpose-built systems. So with Redshift, um, using managed storage and it's led us and since types, uh, the three X better price performance, and what's out there available to all our customers and partners in EMR, uh, with things like spark, we're able to deliver two X performance of open source with a hundred percent compatibility, uh, almost three X and Presto, uh, with on two, which is our, um, uh, new Silicon chips on AWS, better price performance, about 10 to 12% better price performance, and 20% lower costs. And then, uh, all compatible source. So drop your jobs, then have them run faster and cheaper. And that translates to customer benefits for better margins for partners, uh, from a serverless perspective, this is about simplifying operations, reducing total cost of ownership and freeing customers from the need to think about capacity management. If we invent, we, uh, announced serverless redshifts EMR, uh, serverless, uh, Kinesis and Kafka, um, and these are all game changes for customers in terms of freeing our customers and partners from having to think about infrastructure and allowing them to focus on data. >>And, um, you know, when it comes to several assumptions in analytics, we've really got a very full and complete set. So, uh, whether that's around data warehousing, big data processing streaming, or cataloging or governance or visualization, we want all of our customers to have an option to run something struggles as well as if they have specialized needs, uh, uh, instances are available as well. And so, uh, really providing a comprehensive deployment model, uh, based on the customer's use cases, uh, from a governance perspective, uh, you know, like information is about easy build and management of data lakes. Uh, and this is what enables data sharing and self service. And, um, you know, with you get very granular access controls. So rule level security, uh, simple data sharing, and you can tag data. So you can tag a group of analysts in the year when you can say those only have access to the new data that's been tagged with the new tags, and it allows you to very, scaleably provide different secure views onto the same data without having to make multiple copies, another big win for customers and partners, uh, support transactions on data lakes. >>So updates and deletes. And time-travel, uh, you know, John talked about data as code and with time travel, you can look at, um, querying on different versions of data. So that's, uh, a big enabler for those types of strategies. And with blue, you're able to connect data in multiple places. So, uh, whether that's accessing data on premises in other SAS providers or, uh, clouds, uh, as well as data that's on AWS and all of this is, uh, serverless and interconnected. 
And, um, and really it's about plugging all of your data into the AWS ecosystem and into our partner ecosystem. So this API is all available for integration as well, but then from an AML perspective, what we're really trying to do is bring machine learning closer to data. And so with our databases and warehouses and lakes and BI tools, um, you know, we've infused machine learning throughout our, by, um, the state of the art machine running that we offer through SageMaker. >>And so you've got a ML in Aurora and Neptune for broths. Uh, you can train machine learning models from SQL, directly from Redshift and a female. You can use free inference, and then QuickSight has built in forecasting built in natural language, querying all powered by machine learning, same with anomaly detection. And here are the ideas, you know, how can we up our systems get smarter at the surface, the right insights for our customers so that they don't have to always rely on smart people asking the right questions, um, and you know, uh, really it's about bringing data back together and making it available for innovation. And, uh, thank you very much. I appreciate your attention. >>Okay. Well done reinventing the business with AWS analytics rural. That was great. Thanks for walking through that. That was awesome. I have to ask you some questions on the end-to-end view of the data. That seems to be a theme serverless, uh, in there, uh, Mel integration. Um, but then you also mentioned picking the right tool for the job. So then you've got like all these things moving on, simplify it for me right now. So from a business standpoint, how do they modernize? What's the steps that the clients are taking with analytics, what's the best practice? How do they, what's the what's the high order bit here? >>Uh, so the basic hierarchy is, you know, historically legacy systems are rigid and inflexible, and they weren't really designed for the scale of modern data or the variety of it. And so what customers are finding is they're moving to the cloud. They're moving from legacy systems with punitive licensing into more flexible, more systems. And that allows them to really think about building a decoupled, scalable future proof architecture. And so you've got the ability to combine data lakes and databases and data warehouses and connect them using common KPIs and common data protection. And that sets you up to deal with arbitrary scale and arbitrary types. And it allows you to evolve as the future changes since it makes it easy to add in a new type of engine, as we invent a better one a few years from now. Uh, and then, uh, once you've kind of got your data in a cloud and interconnected in this way, you can now build complete pictures of what's going on. You can understand all your touch points with customers. You can understand your complete supply chain, and once you can build that complete picture of your business, you can start to use analytics and machine learning to find new opportunities. So, uh, think about modernizing, moving to the cloud, setting up for the future, connecting data end to end, and then figuring out how to use that to your advantage. >>I know as you mentioned, modern data strategy gives you the best of both worlds. And you've mentioned, um, briefly, I want to get a little bit more, uh, insight from you on this. You mentioned open, open formats. One of the themes that's come out of some of the interviews, these companies we're going to be hearing from today is open source. The role opens playing. 
Um, how do you see that integrating in? Because again, this is just like software, right? Open, uh, open source software, open source data. It seems to be a trend. What does open look like to you? How do you see that progressing? >>Uh, it's a great question. Uh, open operates on multiple dimensions, John, as you point out, there's open data formats. These are things like JSI and our care for analytics. This allows multiple engines tend to operate on data and it'll, it, it creates option value for customers. If you're going to data in an open format, you can use it with multiple technologies and that'll be future-proofed. You don't have to migrate your data. Now, if you're thinking about using a different technology. So that's one piece now that sort of software, um, also, um, really a big enabler for innovation and for customers. And you've got things like squat arc and Presto, which are popular. And I know some of the startups, um, you know, that we're talking about as part of the showcase and use these technologies, and this allows for really the world to contribute, to innovating and these engines and moving them forward together. And we're big believers in that we've got open source services. We contribute to open-source, we support open source projects, and that's another big part of what we do. And then there's open API is things like SQL or Python. Uh, again, uh, common ways of interacting with data that are broadly adopted. And this one, again, create standardization. It makes it easier for customers to inter-operate and be flexible. And so open is really present all the way through. And it's a big part, I think, of, uh, the present and the future. >>Yeah. It's going to be fun to watch and see how that grows. It seems to be a lot of traction there. I want to ask you about, um, the other comment I thought was cool. You had the architectural slides out there. One was data lakes built on S3, and you had a theme, the glue in lake formation kind of around S3. And then you had the constellation of, you know, Kinesis SageMaker and other things around it. And you said, you know, pick the tool for the right job. And then you had the other slide on the analytics at the center and you had Redshift and all the other, other, other services around it around serverless. So one was more about the data lake with Athena glue and lake formation. The other one's about serverless. Explain that a little bit more for me, because I'm trying to understand where that fits. I get the data lake piece. Okay. Athena glue and lake formation enables it, and then you can pick and choose what you need on the serverless side. What does analytics in the center mean? >>So the idea there is that really, we wanted to talk about the fact that if you zoom into the analytics use case within analytics, everything that we offer, uh, has a serverless option for our customers. So, um, you could look at the bucket of analytics across things like Redshift or EMR or Athena, or, um, glue and league permission. You have the option to use instances or containers, but also to just not worry about infrastructure and just think declaratively about the data that you want to. >>Oh, so basically you're saying the analytics is going serverless everywhere. Talking about volumes, you mentioned 10 X volumes. Um, what are other stats? Can you share in terms of volumes? What are people seeing velocity I've seen data warehouses can't move as fast as what we're seeing in the cloud with some of your customers and how they're using data. 
How does the volume and velocity community have any kind of other kind of insights into those numbers? >>Yeah, I mean, I think from a stats perspective, um, you know, take Redshift, for example, customers are processing. So reading and writing, um, multiple exabytes of data there across from each shift. And, uh, you know, one of the things that we've seen in, uh, as time has progressed as, as data volumes have gone up and did a tapes have exploded, uh, you've seen data warehouses get more flexible. So we've added things like the ability to put semi-structured data and arbitrary, nested data into Redshift. Uh, we've also seen the seamless integration of data warehouses and data lakes. So, um, actually Redshift was one of the first to enable a straightforward acquiring of data. That's sitting in locally and drives as well as feed and that's managed on a stream and, uh, you know, those trends will continue. I think you'll kind of continue to see this, um, need to query data wherever it lives and, um, and, uh, allow, uh, leaks and warehouses and purpose-built stores to interconnect. >>You know, one of the things I liked about your presentation was, you know, kind of had the theme of, you know, modernize, unify, innovate, um, and we've been covering a lot of companies that have been, I won't say stumbling, but like getting to the future, some go faster than others, but they all kind of get stuck in an area that seems to be the same spot. It's the silos, breaking down the silos and get in the data lakes and kind of blending that purpose built data store. And they get stuck there because they're so used to silos and their teams, and that's kind of holding back the machine learning side of it because the machine learning can't do its job if they don't have access to all the data. And that's where we're seeing machine learning kind of being this new iterative model where the models are coming in faster. And so the silo brake busting is an issue. So what's your take on this part of the equation? >>Uh, so there's a few things I plan it. So you're absolutely right. I think that transition from some old data to interconnected data is always straightforward and it operates on a number of levels. You want to have the right technology. So, um, you know, we enable things like queries that can span multiple stores. You want to have good governance, you can connect across multiple ones. Uh, then you need to be able to get data in and out of these things and blue plays that role. So there's that interconnection on the technical side, but the other piece is also, um, you know, you want to think through, um, organizationally, how do you organize, how do you define it once data when they share it? And one of the asylees for enabling that sharing and, um, think about, um, some of the processes that need to get put in place and create the right incentives in your company to enable that data sharing. And then the foundational piece is good guardrails. You know, it's, uh, it can be scary to open data up. And, uh, the key to that is to put good governance in place where you can ensure that data can be shared and distributed while remaining protected and adhering to the privacy and compliance and security regulations that you have for that. And once you can assert that level of protection, then you can set that data free. And that's when, uh, customers really start to see the benefits of connecting all of it together, >>Right? 
And then we have a batch of startups here on this episode that are doing a lot of different things. Some have, you know, new lakes forming, like observability lakes. You have SQL innovation on the front end, data tiering innovation on the data tier side, just a ton of innovation around this new data as code. How do you see it as an executive at AWS? You're enabling all this; where's the action going? Where are the white spaces? Where are the opportunities as this architecture continues to grow and get traction, because of the relevance of machine learning and AI, and the apps are embedding data in there now as code? Where are the opportunities for these startups, and how can they continue to grow? >> Yeah, I mean, the opportunity is amazing, John. You know, we talked a little bit about this at the beginning, but there is no slowdown in sight for the volume of data that we're generating. Pretty much everything that we have, whether it's a watch or a phone or the systems that we interact with, is generating data. And, you know, we talk a lot about the things that will stay the same over time. So, you know, data volumes will continue to go up. Customers are going to want to keep analyzing that data to make sense of it. They're going to want to be able to do it faster and more cheaply than they could yesterday. They're going to want to be able to make decisions and innovate in a shorter cycle and run more experiments than they were able to before. And they're always going to want this data to be secure and well protected. And so I think as long as we, and the startups that we work with, can continue to push on making these things better: can I deal with more data, can I deal with it more cheaply, can I make it easier to get insight, and can I maintain a super high bar in security, then investments in these areas will just pay off, because the demand side of this equation is in a great place, given what we're seeing in terms of data and the appetite for it. >> I also love your comment about ML integration being the last leg of the equation here, or the last leg of the journey, but you've got that enablement: the AI piece solves a lot of problems. People can see benefits from good machine learning, and AI is creating opportunities. And you also mentioned the end to end with the security piece. So data and security are kind of going hand in hand these days, not just the governance and the compliance stuff; we're talking about security. So machine learning integration kind of connects all of this. What does it all mean for the customers? >> For customers, that means that with machine learning, and really enabling themselves to use machine learning to make sense of data, they're able to find patterns that can represent new opportunities quicker than ever before. And they're able to do it dynamically. So, you know, in a prior version of the world, we'd have built systems that were relatively rigid, and then we'd have to improve them. With machine learning, this can be dynamic and near real time, and you can customize it. So that just represents an opportunity to deepen relationships with customers, to create more value, and to find more efficiency in how businesses are run. So that piece is there. And, you know, your ideas around data as code really come into play, because machine learning needs to be repeatable and explainable.
And that means versioning: keeping track of everything that you've done from a code and data and learning and training perspective. >> And data sets are updating the machine learning. You've got data sets growing, and they become code modules that can be reused and interrogated. Security, okay, is a big theme. Data is really important, and security is seen as one of the top use cases, certainly now in this day and age, when we're seeing a lot of breaches and hacks coming in and being defended. It brings up the open piece, it brings up the data as code piece; security is a good proxy for kind of where this is going. What's your take on that, and your reaction to that? >> So on security, we can never invest enough. And one of the things that guides us at AWS is security, availability, durability, sort of as jobs one, two, three. And it operates at multiple levels. You need to protect data at rest with encryption, good key management and good practices there. You need to protect data on the wire. You need to have a good sense of what data is allowed to be seen by whom. And then you need to keep track of who did what, and be able to verify and come back and prove that only the things that were allowed to happen actually happened. And you can then use machine learning on top of all of this apparatus to say, can I detect things that are happening that shouldn't be happening, in near real time, so you can put a stop to them? So I don't think any of us can ever invest enough in securing and protecting our data and our systems. It's really fundamental for earning customer trust, and it's just good business. So I think it is absolutely crucial, and we think about it all the time and are always looking for ways to raise the bar. >> Well, I really appreciate you taking the time to give the keynote. Final word here for the folks watching: a lot of these startups that are presenting are doing well business-wise. They're being used by large enterprises, and people are buying their products and using their services. Customers are implementing more and more of these hot startups' products; they're relevant. What's your advice to the customer out there as they go on this journey, this new data as code, this new future of analytics? What's your recommendation? >> So for customers who are out there, I'd recommend you take a look at what the startups on AWS are building. I think there's tremendous innovation and energy, and there's really great technology being built on top of a rock-solid platform. And so I encourage customers thinking about it to lean forward, to think about new technology and to embrace it: move to the cloud, modernize, build a single picture of your data, and figure out how to innovate and win. >> Well, thanks for coming on. Appreciate your keynote. Thanks for the insight, and thanks for the conversation. Let's hand it off to the show. Let the show begin. >> Thank you, John. Pleasure, as always.
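The point above about protecting data at rest with encryption and good key management maps to a small amount of code in practice. Here is an illustrative boto3 sketch that turns on default KMS encryption for an S3 bucket and reads the setting back; the bucket name and key alias are hypothetical placeholders, not anything referenced in the keynote.

```python
# Illustrative sketch: enforce default encryption at rest on an S3 bucket via boto3.
# The bucket name and KMS key alias are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "my-analytics-bucket"  # hypothetical

# Require server-side encryption with a customer-managed KMS key for new objects.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/analytics-data",  # hypothetical key alias
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)

# Read the configuration back to verify the guardrail is in place.
config = s3.get_bucket_encryption(Bucket=bucket)
for rule in config["ServerSideEncryptionConfiguration"]["Rules"]:
    print(rule["ApplyServerSideEncryptionByDefault"])
```

The "keep track of who did what" part of the answer is usually handled outside application code, with services like CloudTrail and access logging, so it is not shown here.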
SUMMARY :
In this opening keynote conversation for the AWS Startup Showcase, the host and AWS's Rahul Pathak discuss the state of data and analytics in the cloud: data volumes that keep growing with no slowdown in sight, serverless options across the analytics portfolio (Redshift, EMR, Athena, Glue, Lake Formation), and the role of open data formats, open source engines like Spark and Presto, and open APIs such as SQL and Python. They cover scalable data lakes on S3, the convergence of data lakes and data warehouses, real-time analytics, customer journeys like FINRA's, and breaking down data silos with good governance and guardrails so machine learning can reach all the data. The conversation closes on treating data as code, end-to-end security as the foundation of customer trust, and advice for customers to lean forward, modernize, and look at what startups are building on AWS.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rahul Pathak | PERSON | 0.99+ |
John | PERSON | 0.99+ |
20 terabytes | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
20% | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
S3 | TITLE | 0.99+ |
Python | TITLE | 0.99+ |
FINRA | ORGANIZATION | 0.99+ |
10 X | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
hundred percent | QUANTITY | 0.99+ |
SQL | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
80 minutes | QUANTITY | 0.98+ |
each shift | QUANTITY | 0.98+ |
one piece | QUANTITY | 0.98+ |
about 80% | QUANTITY | 0.98+ |
Neptune | LOCATION | 0.98+ |
one | QUANTITY | 0.98+ |
ORGANIZATION | 0.98+ | |
today | DATE | 0.97+ |
QuickSight | ORGANIZATION | 0.97+ |
three | QUANTITY | 0.97+ |
Redshift | TITLE | 0.97+ |
wave of reinvention | EVENT | 0.97+ |
first | EVENT | 0.96+ |
hundreds of petabytes | QUANTITY | 0.96+ |
HANA | TITLE | 0.96+ |
first | QUANTITY | 0.95+ |
both worlds | QUANTITY | 0.95+ |
Aurora | LOCATION | 0.94+ |
Amex | ORGANIZATION | 0.94+ |
SAS | ORGANIZATION | 0.94+ |
pandemic | EVENT | 0.94+ |
12% | QUANTITY | 0.93+ |
about 10 | QUANTITY | 0.93+ |
past couple of years | DATE | 0.92+ |
Kafka | TITLE | 0.92+ |
Kinesis | ORGANIZATION | 0.92+ |
Liccardo | TITLE | 0.91+ |
EMR | TITLE | 0.91+ |
about five | QUANTITY | 0.89+ |
tens of thousands of nodes | QUANTITY | 0.88+ |
Kinesis | TITLE | 0.88+ |
10% | QUANTITY | 0.87+ |
three X | QUANTITY | 0.86+ |
Athena | ORGANIZATION | 0.86+ |
about 250 billion records per | QUANTITY | 0.85+ |
U S | ORGANIZATION | 0.85+ |
CAFCA | ORGANIZATION | 0.84+ |
Silicon | ORGANIZATION | 0.83+ |
every five years | QUANTITY | 0.82+ |
Season two | QUANTITY | 0.82+ |
Athena | OTHER | 0.78+ |
single picture | QUANTITY | 0.74+ |
Liran Tal, Snyk | CUBE Conversation
(upbeat music) >> Hello, everyone. Welcome to theCUBE's coverage of the "AWS Startup Showcase", season two, episode one. I'm Lisa Martin, and I'm excited to be joined by Snyk next in this episode. Liran Tal joins me, the director of developer advocacy. Liran, welcome to the program. >> Lisa, thank you for having me. This is so cool. >> Isn't it cool? (Liran chuckles) All the things that we can do remotely. So I had the opportunity to speak with your CEO, Peter McKay, just about a month or so ago at AWS re:Invent. So much growth and momentum going on with Snyk, it's incredible. But I wanted to talk to you specifically, let's start with your role from a developer advocate perspective, 'cause Snyk is saying modern development is changing, so traditional AppSec gatekeeping doesn't apply anymore. Talk to me about your role as a developer advocate. >> It is definitely. The landscape is changing, both developer and security; it's just not what it was before. And what we're seeing is developers need to be empowered. They need some help just working through all of those security issues, security incidents happening, using open source, building cloud native applications. So my role is basically about making them successful, helping them any way we can. And so getting that security awareness out, making sure people are following those best practices, making sure we understand what the frustrations developers have are, what the things are that we can help them with to be successful day to day, and how they can be a really good part of the organization in terms of fixing security issues, not just knowing about them but actually being proactive about it. >> And one of the things also that I was reading is, Shift Left is not a new concept. We've been talking about it for a long time. But Snyk's saying it was missing some things, and proactivity is one of those things that it was missing. What else was it missing, and how does Snyk help to fix that gap? >> So I think Shift Left is a good idea. In general, the idea is we want to find and fix security issues as soon as we can, and I think that's a small nuance that's kind of missing in the industry. Usually what we've seen with traditional security before was that the security department is like a silo in organizations: once they find some findings, they push them over to the development team, the R&D leader, or things like that, but until it actually trickles down, it takes a lot of time. And what we needed to do is basically put those developer security tools, which is what Snyk is building with this whole security platform, into the hands of developers, at the scale and speed of modern development. So, for example, instead of just finding security issues in your open source dependencies, what we actually do at Snyk is not just tell you about them; we actually open a pull request to your source code version management system. And through that we are able to tell you, now you can actually merge it, you can actually review it, you can actually have it as part of your day-to-day workflows. And we're doing that in so many other ways that are really helpful in actually remediating the problem. So another example would be the IDE. We are actually embedding an extension within your IDEs.
So, once you actually type in your own code, that is when we actually find the vulnerabilities that could exist within your own code, if that's, like, insecure code, and we can tell you about it as you hit Command + S and save the file. Which is totally different from what SAST tools, static application security testing, used to be, because when things started, you usually had SAST tools running in the background, in CI jobs at the weekend and on deltas of code bases, because they were so slow to run. But developers really need to be at speed. They're developing really fast, they need to deploy, and a development team deploys to production several times a day. So we need to really enable developers to find and fix those security issues as fast as we can. >> Yeah, that speed that you mentioned is absolutely critical to their workflow and what they're expecting. And one of the unique things about Snyk, you mentioned, is the integration into the development workflow, the IDE, the CI/CD environment, enabling them to work at speed and not have to be security experts. I imagine those are two important elements of the culture of the developer environment, right? >> Correct, yes. A large part is we don't expect developers to be security experts. We want to help them; again, give them the tools, give them the knowledge. So we do it in several ways. For example, that IDE extension has a really cool thing, kind of unique to it, that I really like, and that is: when we find, for example, that you're writing code and maybe there's a path traversal vulnerability in the function that you just wrote, what we'll actually do when we tell you about it is also show you, hey, look, these are some commits made by other open source projects where we found the same vulnerability, and those commits actually fixed it. So we're actually giving you example cases of what potentially good code looks like. If you think about it, who knows what path traversal is, or prototype pollution, or many other types of vulnerabilities? We don't expect developers to actually know the deep aspects of security. So they're left with having some findings, and they want to fix them, but they don't really have the expertise to do it. So what we're doing is bridging that gap and being helpful. I think that is what really proactive security is for developers: helping them remediate it. And I can give more examples, like the Snyk vulnerability database. It's a wonderful place where we also provide examples and references of where a vulnerability comes from, like, what the bug in the open source package is. And we highlight that with a lot of references, things like the pull request that fixed it, or the issue where this was discussed. You have an entire context of what made this vulnerability happen. So you have a little bit more context than just merging some stuff and updating, and there's a ton more. I'm happy to dive more into this. >> Well, I can hear your enthusiasm for it; a developer advocate it seems like you are. But talking about the burdens and the gaps that you guys are filling, it also seems like, for the developers and the security folks, this is also a bridge for those teams to work better together. >> Correct. I think it's not siloed anymore.
I think the idea of having security champions or having threat modeling activities is really, really good and insightful for both developers and security. But more than just being insightful, these are useful practices that organizations should actually do: actually bringing the discussion together, actually creating a more cohesive environment for both of those kinds of expertise, development and security, to work together toward mitigating security issues. And one of the things that Snyk is doing, in bringing security into the developer mindset, is also providing the ability to prioritize and to understand what policies to put in place. A lot of the time, what the security org wants to do is put guardrails in place to make sure that developers have good leeway to work within, but aren't doing things they definitely shouldn't do, things that bring a big risk into the organization. And that's what I think we're doing really well: we're giving the security folks the ability to put the policies in place, and then developers, who actually work within those, understand how to prioritize vulnerabilities, which is an important part. And we quantify that: we put an urgency score on it that says, hey, you should fix this vulnerability first. Why? Because, first of all, you can upgrade really quickly; it has a fix right there. Secondly, there's an exploit in the wild, which means an attacker can potentially weaponize this vulnerability and attack your organization in an automated fashion. So you definitely want to put a lid on that broken window, so to say. And there are other kinds of metrics that we can quantify and roll up into an urgency score, which we call a priority score, that helps developers really know what to fix first, because they could get a scan with hundreds of vulnerabilities, but what do I start with? So I find that very useful for both the security side and the developers working together. >> Right, and especially now, as we've seen such changes in the last couple of years to the threat landscape, the vulnerabilities, the security issues that are impacting every industry. The ability to empower developers to not only work at the speed with which they are accustomed and need to work, but also to be able to find those vulnerabilities faster and prioritize which ones need to be fixed. I mean, I think of Log4Shell, for example, and the challenges going on with the supply chain. This is really a critical capability from a developer empowerment perspective, but also from an overall business health and growth perspective. >> Definitely. I think, first of all, if you want to take just a step back in terms of what has changed, what is the landscape? We're seeing several things happening. First of all, there's this big, tremendous... I would call it a trend, but now it's like the default: the growth of open source software. Developers are using more and more open source, and that's a growing trend; we have graphs of this. And it's always increasing across, by the way, every ecosystem: Go, Rust, .NET, Java, JavaScript. Whatever you're building, it's probably on a growing trend toward more open source.
And that is, we'll talk in a second about what the risks are there, but that is one trend that we're seeing. The other one is cloud native applications, which is also worth diving into, I think, in terms of how the way that we're building applications today has completely shifted. And I think what AWS is doing in that sense is also creating a tremendous shift in the mindset of things. For example, cloud infrastructure has basically democratized infrastructure. I do not need to own my servers and own my monitoring and configure everything; I can actually write code that, when I deploy it and something parses and runs it, actually creates servers, monitoring, logging, different kinds of things for me. So it has democratized the whole sense of building applications from what it was decades ago. And this whole thing is important and really, really fast; it makes things scalable. It also introduces some risks, for example around some of those configurations. So there's a lot that has changed in that landscape of what a modern developer is, and I think in that sense we need to lean in a little bit more, be helpful to developers, and help them avoid all those cases. And I'm happy to dive more into the open source and the cloud native pieces as follow-ups on this one. >> I want to get into a little bit more about your relationship with AWS. When I spoke with Peter McKay for re:Invent, he talked about the partnership being a couple of years old, but there are some really interesting things that AWS is doing in terms of leveraging Snyk. Talk to me about that. >> Indeed. So Snyk integrates with, I think, probably a lot of services, but almost all of those that are unique and related to developers building on top of the AWS platform. For example, if you're actually building your code, it connects with the source code editor. If you're pushing that code over, it integrates with CodeCommit. As your builds and CI are running, maybe CodeBuild is something you're using within CodePipeline; those are things you have native integrations with. At the end of the day, you have your container registry, or Lambda if you're using functions as a service for your applications, and what we're doing is integrating with all of that. So at the end of the day, it depends where you're integrating, but at all of those points of integration you have Snyk there to help you out and make sure that if we find any potential issues on any of them, anything from licenses to vulnerabilities in your containers, or just your code, or your open source code, we actually find it at that point and help you mitigate the issue. So if you're using Snyk on your development machine, it kind of accompanies you through this journey across the whole CI/CD landscape, the architectural landscape for development, all the way through. And I think what you might be more interested in, and worth putting an emphasis on, would be this recent integration with Amazon Inspector, which is a very pivotal part of the AWS platform that integrates a lot of services and provides you with insights on security. And I think the fact that it is now able to leverage vulnerability data from Snyk's security intelligence database, that's tremendous.
And we can talk about that with Log4Shell and recent issues. >> Yeah, let's dig into that. We have a few minutes left, but that was obviously a huge issue in November of 2021, when obviously we're in a very dynamic global situation, period. But it's now not a matter of if an organization is going to be hit by vulnerabilities and security threats; it's a matter of when. Talk to me about how impactful Snyk was with the Log4Shell vulnerability, and how you helped customers evade probably some serious threats that could have really impacted revenue growth, customer satisfaction, brand reputation. >> Definitely. Log4Shell is, well, I mean, was a vulnerability that was disclosed, but it's probably still a major issue, and going to be for the foreseeable future, for organizations as they need to deal with it. We'll dive into why in a second, but as a summary here: Log4Shell was a vulnerability found in a Java library called Log4j, a logging library that is so popular and widely used today. And the thing is, having the ability to react fast to new vulnerabilities being disclosed is a vital part for organizations, because when something is as impactful as we've seen Log4Shell being, that is when you find out whether the security tool you're using is actually helping you, or is just an added thing, like a checkbox to tick. And that is what I think makes Snyk so unique in this sense. We have a team of folks who are both manually curating the ecosystem of CVEs and finding issues by ourselves, but there's also an entire intelligence platform behind us. We get a lot of notifications on chatter that happens, so when someone opens an issue on an open source repository and says, hey, I found an issue here, maybe that's an XSS or code injection or something like that, we find it really fast. And at that point, before it even goes through CVE assignment and the like via MITRE and NVD, we find it really fast and can add it to the database. That has been something we did with Log4Shell, where we found it as it was disclosed, not yet through the formal channels, but once it was generally disclosed to everyone on the open source repository. But not only that, because Log4j, as a library, went through several iterations of fixes: they fixed one version, then that was the recommendation to upgrade to, then that was actually found to be vulnerable as well, so they needed to fix it another time, and then another time, and so on. So being able to react fast is what I think helped a ton of customers and users of Snyk. And what I really liked, in the way that this has been received very well, is that we were very fast in creating those command line tools that allow developers to actually find cases of the Log4j library embedded into (indistinct), but not through a package manifest. So sometimes you have those legacy applications deployed somewhere, probably not even legacy, just the Log4j library bundled into an application or a Java source code base, so you may not even know that you're using it, in a sense. And so what we've done is expose, with the Snyk CLI tool, a command line argument that allows you to search for all of those cases. We can find them and help you try and mitigate those issues. So that has been amazing.
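To illustrate the "bundled, not in a package manifest" problem described above, here is a small, self-contained Python sketch that walks a directory tree and inventories embedded log4j-core JARs by filename. It is only a rough illustration of the idea, not the Snyk CLI's detection logic, which inspects archives far more thoroughly; the filename-based matching here is deliberately naive.

```python
# Rough illustration: inventory log4j-core JARs that are bundled on disk
# rather than declared in a package manifest. Not a substitute for a real
# scanner: it only matches by filename and does not look inside fat JARs.
import re
from pathlib import Path

LOG4J_JAR = re.compile(r"^log4j-core-(\d+\.\d+(?:\.\d+)?)\.jar$", re.IGNORECASE)

def find_bundled_log4j(root: str):
    """Yield (path, version) for every log4j-core JAR found under root."""
    for path in Path(root).rglob("*.jar"):
        match = LOG4J_JAR.match(path.name)
        if match:
            yield path, match.group(1)

if __name__ == "__main__":
    import sys
    root_dir = sys.argv[1] if len(sys.argv) > 1 else "."
    hits = list(find_bundled_log4j(root_dir))
    if not hits:
        print(f"No bundled log4j-core JARs found under {root_dir}")
    for jar_path, version in hits:
        # Flag everything for review; which versions are safe depends on the
        # advisory you are tracking, so that judgment is left to the reader.
        print(f"REVIEW: {jar_path} (log4j-core {version})")
```

In practice you would lean on vendor tooling such as the Snyk CLI rather than filename matching, since vulnerable code can also hide inside fat JARs and shaded packages.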
>> So you've talked at great length, Liran, and in detail, about how Snyk is really enabling and empowering developers. One last question for you: when I spoke with Peter last month at re:Invent, he talked about the goal of reaching 28 million developers. Your passion as a director of developer advocacy is palpable; I can feel it through the screen here. Talk to me about where you guys are on that journey of reaching those 28 million developers, and what personally excites you about what you're doing here. >> Oh, yeah. So many things. (laughs) I don't know where to start. We are constantly talking to developers at community days and things like that, so here are a couple of examples. We have the DevSecCon community, which is a growing and kicking community of developers and security people coming together, trying to work with and understand and just learn from each other, and we have those events coming up. We actually have "The Big Fix", a big security event that we're launching on February 25th, and the idea is we want to help the ecosystem secure applications, whether open source or even closed source; we'll help you fix that. So yeah, it's about helping them. We've launched the Snyk Ambassadors program, which is developers and security people, CSOs are even in there, and the idea is, how can we help them also be helpful to the community? Because they are known, and they are as passionate as we are about application security and helping developers code securely, build securely. So we're launching all of those programs. We have social impact related programs in the way that we work with organizations, maybe a non-profit that just needs help getting the security part of things figured out, students, and things like that. There are a ton of those initiatives, all over the board, helping basically make the world a little bit more secure. >> Well, we could absolutely use Snyk's help in making the world more secure. Liran, it's been great talking to you. Like I said, your passion for what you do and what Snyk is able to facilitate and enable is palpable, and it was a great conversation. I appreciate that. And we look forward to hearing what transpires during 2022 for Snyk, so you've got to come back. >> I will. Thank you. Thank you, Lisa. This has been fun. >> All right. Excellent. Liran Tal, I'm Lisa Martin. You're watching theCUBE's second season, season two of the "AWS Startup Showcase". This has been episode one. Stay tuned for more great episodes, full of fantastic content. We'll see you soon. (upbeat music)
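As a footnote to the priority score Liran described earlier in the conversation, where signals such as fix availability and an exploit in the wild roll up into a single urgency number, here is a toy Python illustration of that kind of weighting. The factors, weights, and scale are invented for illustration and are not Snyk's actual scoring model.

```python
# Toy illustration of a vulnerability priority score: several signals are
# combined into one number so developers know what to fix first.
# The weights and scale here are made up; they are not Snyk's real model.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float              # 0.0 - 10.0 base severity
    exploit_in_wild: bool    # known exploitation observed
    fix_available: bool      # an upgrade or patch exists

def priority_score(f: Finding) -> int:
    score = f.cvss * 60                 # up to 600 points from severity
    if f.exploit_in_wild:
        score += 250                    # exploitability matters a lot
    if f.fix_available:
        score += 150                    # easy wins float to the top
    return min(int(score), 1000)

findings = [
    Finding("prototype pollution in util lib", cvss=5.6, exploit_in_wild=False, fix_available=True),
    Finding("path traversal in upload handler", cvss=7.5, exploit_in_wild=True, fix_available=True),
    Finding("regex DoS, no patch yet", cvss=7.5, exploit_in_wild=False, fix_available=False),
]

for f in sorted(findings, key=priority_score, reverse=True):
    print(f"{priority_score(f):4d}  {f.name}")
```

The design point it demonstrates is simply that ranking findings by a blended score gives developers a place to start when a scan returns hundreds of results.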
SUMMARY :
of the "AWS Startup Showcase", Lisa, thank you for having me. So I had the opportunity to speak of the organization in terms And one of the things and like CI jobs at the weekend and not have to be security experts. the expertise to do it. that you guys are filling So a lot of the times and need to work, So it democratize the whole he talked about the partnership So at the end of the day, you and that could have really the ability to react fast and what personally excites you and the way that we like in making the world more secure. I will. We'll see you soon.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Liran | PERSON | 0.99+ |
Peter McKay | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
February 25th | DATE | 0.99+ |
Peter | PERSON | 0.99+ |
November of 2021 | DATE | 0.99+ |
Liran Tal | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Snyk | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Log4Shell | TITLE | 0.99+ |
second season | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
JavaScript | TITLE | 0.99+ |
last month | DATE | 0.99+ |
decades ago | DATE | 0.98+ |
Lambda | TITLE | 0.98+ |
Log4J | TITLE | 0.98+ |
one version | QUANTITY | 0.98+ |
one trend | QUANTITY | 0.97+ |
One last question | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
first | QUANTITY | 0.96+ |
AppSec | TITLE | 0.96+ |
2022 | DATE | 0.95+ |
One development | QUANTITY | 0.95+ |
Secondly | QUANTITY | 0.95+ |
28 million developers | QUANTITY | 0.95+ |
today | DATE | 0.94+ |
theCUBE | ORGANIZATION | 0.93+ |
episode one | QUANTITY | 0.88+ |
hundreds of vulnerabilities | QUANTITY | 0.86+ |
Shift Left | ORGANIZATION | 0.84+ |
two important elem | QUANTITY | 0.83+ |
Snyk | PERSON | 0.82+ |
about a month or | DATE | 0.8+ |
Snyky | PERSON | 0.8+ |
last couple of years | DATE | 0.76+ |
couple of years | QUANTITY | 0.75+ |
several times a day | QUANTITY | 0.75+ |
re | EVENT | 0.74+ |
Startup Showcase | TITLE | 0.74+ |
Synk | ORGANIZATION | 0.74+ |
CIC | TITLE | 0.73+ |
Left | TITLE | 0.72+ |
season two | QUANTITY | 0.7+ |
re:Invent | EVENT | 0.7+ |
First | QUANTITY | 0.68+ |
customers | QUANTITY | 0.68+ |
Rajiv Mirani and Thomas Cornely, Nutanix | .NEXTConf 2021
(upbeat electronic music plays) >> Hey everyone, welcome back to theCube's coverage of .NEXT 2021 Virtual. I'm John Furrier, host of theCube. We have two great guests: Rajiv Mirani, who's the Chief Technology Officer, and Thomas Cornely, SVP of Product Management. Day two keynote: product, the platform, announcements, news. A lot of people, Rajiv, are super excited about the platform moving to a subscription model. Everything's kind of coming into place. How are the customers seeing this? How are they adopting hybrid cloud? It's hybrid, hybrid, hybrid; data, data, data. That's where the puck is right now, and you guys are there. How are customers seeing this? >> Mirani: Great question, John, and by the way, great to be back here on theCube again this year. So when we talk to our customers, pretty much all of them agree that the ideal state they want to be in is a hybrid world, right? They essentially want to be able to run workloads both in the private data center and in the public cloud, and to have a common platform, a common experience, a common skill set, with the same people managing workloads across both locations. And unfortunately, most of them don't have that tooling available today to do so. And that's where the Nutanix platform has come a long way. We've always been great at running in the data center, running every single workload, and we continue to make great strides on our core, with increased performance for the most demanding workloads out there. But what we have done in the last couple of years is also extend this platform to run in the public cloud, and essentially provide the same capabilities, the same operational behavior, across locations. And that's why you're seeing a lot of excitement from our customers, because they really want to be in that state of having common tooling across locations. As you can imagine, we're getting traction: customers who want to move workloads to the public cloud but don't want to spend the effort to refactor them, or customers who really want to operate in a hybrid mode with things like disaster recovery, cloud bursting, workloads like that. So, you know, I think we've made a great step in that direction, and we look forward to doing more with our customers. >> Furrier: What is the big challenge that you're seeing with this hybrid transition from your customers, and how are you solving that specifically? >> Mirani: Yeah, if you look at how public and private operate today, they're very different in the kinds of technologies used. And most customers today will have two separate teams: one for their on-prem workloads, using a certain set of tooling, and a second, completely different team managing a completely different set of workloads with different technologies. And that's not an ideal state, in some senses; that's just creating two new silos, if anything. And our vision is that you get to a point where both of these operate in the same manner: you've got the same people managing all of them, the same workloads anyway, with similar performance, similar SLAs. So you literally get to a point where applications and data can move back and forth. And that's where I think the real future is for hybrid. >> Furrier: I have to ask you a personal question.
As the CTO, you've got to be excited with the architecture that's evolving with hybrid and multi-cloud. I mean, it's pretty exciting from a tech standpoint. What is your reaction to that? >> Mirani: 100%, and it's been a long time coming, right? We have been building pieces of this over years. And if you look at all the product announcements Nutanix has made over the last few years, and the acquisitions that we've made and so on, there's been a purpose behind them: a purpose to get to this model where we can operate a customer's workloads in a hybrid environment. So really, really happy to see all of that come together. Years and years of work finally bearing fruit. >> Furrier: Well, we've had many conversations in the past, but congratulations; there's a lot more to do, with so much more action happening. Thomas, you've got the keys to the kingdom, okay, the product management. You've got to prioritize, you've got to put it together. What are the key components of this Nutanix cloud platform, the hybrid cloud, multi-cloud strategy that's in place? Because there's a lot of headroom there, but take us through the key components today and then how that translates into hybrid multi-cloud for the future. >> Cornely: Certainly, John, thank you again, and great to be here. And Rajiv said it really nicely. If you look at our portfolio at Nutanix, what we have is great technologies. They've been sold as a lot of different products in the past. And what we've done in the last few months is bring things together, simplify and streamline, and align everything around a cloud platform. And this is really the messaging that we're going after: it's not about the pieces of our solutions, but business outcomes for customers. So we're focusing on the cloud platform, which encompasses five key areas for us: cloud infrastructure, which is where you run your workloads; cloud management, which is how you're going to go and actually manage, operate, automate, and get governance; and then services on top that are all around data. So we have unified storage, files and objects, and data services; we have database services; and we have a set of desktop services, which is for EUC. The big change for us is that all of this is something that, you know, you can consume in terms of solutions, and consume on premises. And as Rajiv discussed, we can take the same platform and deploy it in public cloud regions now, right? So you can now get a seamless hybrid cloud, same operating model. But increasingly, what we're doing is taking our solutions and re-targeting them at issues and problems for workloads running natively in public clouds. So think of this as going after automation, governance, security, files and objects, database services, wherever your workload is running. So this is taking the portfolio and reapplying it, targeting on prem, at the edge, in hybrid, and increasingly public cloud natively. >> Furrier: That's awesome. I've been watching some of the footage, and I was noticing quite a lot of innovation around virtualized networking, disaster recovery, security, and data services. It's all good, and this is in your wheelhouse; I know you guys have been doing this for many, many years. I want to dive deeper into that, because the theme right now that we've been reporting on, and you guys are hitting it right here, is what the keynote is about: cloud scale is about faster development, right?
Cloud native is about speed. It's about not waiting for these old departments, IT or security, to get back to them in days or weeks when responding to policy or other changes; you've got to move faster. And data, data is critical in all of this. So we'll start with virtualized networking, because networking, again, is a key part of it. The developers want to go faster; they're shifting left. Take us through the virtualization piece and how important that is. >> Mirani: Yeah, that's actually a great question as well. So if you think about it, virtual networking is the first step towards building a real cloud-like infrastructure on premises, one that extends out to include networking as well. One of the key components of any cloud is automation; another key component is self-service. And with the APIs built into virtual networking, all of that becomes much simpler and much more possible than having to, you know, file a ticket and work with someone to reconfigure physical networks and switches. We can do that in a self-service, much more automated way. But beyond that, the notion of virtual networks is really powerful, because it lets us essentially extend and replicate networks anywhere: in the private data center, but in the public cloud as well. So when customers move their workloads, we had already made that very simple with our Clusters offering. But if you peek behind the layers a little bit, it's like, well, yeah, but the network's not the same on the other side. So now it means I've got to go re-IP my workloads, create new subnets, and all of that, so there was a little bit of complication left in that process. With virtual networking, that goes away also. Essentially, you can replicate the same network in both locations; you can literally move your workloads with no redesign of your network required, and still get the self-service and automation capabilities. So it's a great step forward, and it really helps us complete the infrastructure-as-a-service stack. We had great storage capabilities before, we had great compute capabilities before, and networking is sort of the third leg of all of that. >> Furrier: Talk about the complexity here, because I think a lot of people look at the DevOps movement and say, infrastructure as code: when you go to one cloud, it's okay, you can make things easier, programmable. But when you start getting into data centers, private data centers, or essentially edges now, because if it's a distributed cloud environment or cloud operations, it's essentially one big cloud operation, the networks are different. As you said, this is a big deal. Making infrastructure as code happen in multiple environments across multiple clouds is not trivial. Could you talk about the main trends, how you guys see this evolving, and how you solve that? >> Mirani: Yeah, well, the beauty here is that we are actually creating the same environment everywhere, from the point of view of networking, compute, and storage, but also things like security. So when you move workloads, the security posture also moves with them, which is also super important. It's a really hard problem, and something a lot of CIOs struggle with, but having the same security posture in public and private clouds is really important as well.
So with this Clusters offering and our on-prem offering completing the infrastructure-as-a-service stack, you now have this capability where your operations really are unified across multi-cloud and hybrid cloud, anywhere you run. >> Furrier: Okay, so if I have multiple cloud vendors, and they're different vendors, you guys are creating a connection unifying those three. Is that right? >> Mirani: Essentially, yes. We're running the same stack on all of them and abstracting away the differences between the clouds, so that you can unify operations. >> Furrier: And the benefits to the customers are what? What's the main benefit there? >> Mirani: Essentially, they don't have to worry about where their workloads are running. They can pick the best cloud for their workloads, they can seamlessly move them between clouds, they can move their data over easily, and they can essentially stop worrying about getting locked into a single cloud, either in a multi-cloud scenario or in a hybrid cloud scenario. There are many, many companies that started with a cloud-first mandate but over time realized that they want to move workloads back to on-prem, or the other way around: they have traditional workloads that started on prem and they now want to move them to the public cloud. And we make that really simple. >> Furrier: Yeah, it's kind of a trick question; I wanted to tee that up for Thomas, because I love that kind of horizontal scale, which is what the cloud's all about. But when you factor data into it, this is the sweet spot, because this is where, you know, I think it gets really exciting, and complicated too, because data can get unwieldy pretty quickly. You've got state, you've got multiple applications. Thomas, what can you share on the data aspect of this? >> Cornely: Absolutely. It's, you know, really our core source of differentiation when you think about it; that's what makes Nutanix special in the market. When we talk about cloud, and if you've been following Nutanix for years, you know we've been talking a lot about making infrastructure invisible. The new way for us to talk about our vision is to make clouds invisible, so that in the end you can focus on your own business. So how do you make a cloud invisible? There's lots of technology at the application layer to go and containerize applications, make them portable, modernize them, make them cloud native. That's all fine when you're talking about stateless containers; those are the simplest thing to move around. But as we all know, applications at the end of the day rely on data, and managing the data across all of these different locations is the hard part. Distribution is almost a given now: you can go straight from edge to on-prem to hybrid to different public cloud regions. So how do you keep control of that and get consistency across all of this? Part of it is being aware of where your data is. But the other part is consistency of the data services regardless of where you're running. And so this is how we look at the cloud platform: we provide you the cloud infrastructure to go and run the applications, but there's also what we build into the cloud platform itself.
You get all of your core data services, whether you need to consume file services, object services, or database services to really support your application. And those will move with your application; that is the key thing here. By bringing everything onto the same platform, you now get consistent operations regardless of where you're running the application. The last thing that we're adding, and this is a new offering that we're just launching, is a service called Data Lens, a solution that gives you visibility and allows you to go and get better governance around all your data, wherever it may live, across on-prem, edge, and public clouds. That's a big deal, again, because to manage data you first have to make sense of it and get control over it, and that's what Data Lens is going to be all about. >> Furrier: You know, one of the things we've been reporting on is that data is now a competitive advantage, especially when you have workflows involved; super important. How do you see customers going to the edge? Because if you have this environment, how does the data equation, Thomas, go to the edge? How do you see that evolving? >> Cornely: So, yeah, I mean, edge is not one thing, and that's actually the biggest part of the challenge: defining what the edge is depends on the customer that you're working with. But in many cases you get data being ingested or created at the edge that you then have to move to either your private cloud or your public cloud environment to basically aggregate it, analyze it, and get insights from it. So this is where a lot of our technologies, the objects offering for instance, will allow you to go and do that ingest over great distances over the network, and then have your common data repository to do analytics on, on top of our own object store, using things like S3 Select built into our protocols. And with the announcements we brought into our storage solutions here, you can then analyze that data directly on the object store solution. So again: make it easy for you to ingest anywhere, consolidate your data, and then get value out of it, using some of the latest announcements on the platform. >> Furrier: Rajiv, databases are still the heart of most applications in the enterprise these days, but it's not just one database; there's a lot of different data moving around, and a lot of new data engineering platforms coming in. A lot of customers are scratching their heads, and they want to be ready, and be ready today. Talk about your view of the database services space and what you guys are doing to help enterprises operate and manage their databases.
I have the spoke tooling for each one of them. So with our Arab product, what we're doing is essentially creating a data management layer, a database management layer that unifies operations across your databases and across locations, public cloud and private clouds. So all the operations that you need, you do, which are very complicated in, in, in, in with traditional tooling now, provisioning of databases backing up and restoring them providing a true time machine capabilities, so you can pull back transactions. We can copy data management for your data first. All of that has been tested in Era for a wide variety of database engines, your choice of database engine at the back end. And so the new capabilities are adding sort of extend that lead that we have in that space. Right? So, so one of the things we announced at .Next is, is, is, is one-click storage scaling. So one of the common problems with databases is as they grow over time, it's not running out of storage capacity. Now re-provisions to storage for a database, migrate all the data where it's weeks and months of look, right? Well, guess what? With Era, you can do that in one click, it uses the underlying AOS scale-out architecture to provision more storage and it does it have zero downtime. So on the fly, you can resize your databases that speed, you're adding some security capabilities. You're adding some capabilities around resilience. Era continues to be a very exciting product for us. And one of the things, one of the real things that we are really excited about is that it can really unify database operations between private and public. So in the future, we can also offer an aversion of Era, which operates on native public cloud instances and really excited about that. >> Furrier: Yeah. And you guys got that two X performance on scaling up databases and analytics. Now the big part point there, since you brought up security, I got to ask you, how are you guys talking about security? Obviously it's embedded in from the beginning. I know you guys continue to talk about that, but talk about, Rajiv, the security on, on that's on everyone's mind. Okay. It goes evolving. You seeing ransomware are continuing to happen more and more and more, and that's just the tip of the iceberg. What do you guys, how are you guys helping customers stay secure? >> Mirani: Security is something that you always have to think about as a defense in depth when it comes to security, right? There's no one product that, that's going to do everything for you. That said, what we are trying to do is to essentially go with the gamut of detection, prevention, and response with our security, and ransom ware is a great example of that, right. We've partnered with Qualys to essentially be able to do a risk assessment of your workloads, to basically be able to look into your workloads, see whether they have been bashed, whether they have any known vulnerabilities and so on. To try and prevent malware from infecting your workloads in the first place, right? So that's, that's the first line of defense. Now not systems will be perfect. Some, some, some, some malware will probably get in anyway But then you detect it, right. We have a database of all the 4,000 ransomware signatures that you can use to prevent ransomware from, uh, detecting ransom ware if it does infect the system. And if that happens, we can prevent it from doing any damage by putting your fire systems and storage into read-only mode, right. 
We can also prevent lateral spread of ransomware through micro-segmentation. And finally, if malware were to evade all those defenses and actually encrypt data on a filer, we have immutable snapshots that let you recover from those kinds of attacks. So it's really a defense-in-depth approach. And in keeping with that, you know, we also have a rich ecosystem of partners, Qualys being one of them, along with others in this sector that we work with closely to make sure that our customers have the best tooling around and the simplest way to manage the security of their infrastructure. >> Furrier: Well, I've got to say, I'm very impressed, guys, by the announcements from the team. We've been following Nutanix since the beginning, as you know, and now it's at the next phase of the inflection point. I mean, looking at my notebook here from the announcements: VPC virtual networking, DR observability, zero trust security, workload governance, performance, expanded availability, AWS elastic DR, clusters on Azure preview, cloud native ecosystem, cloud control plane. I mean, besides all the buzzword bingo that's going on there, this is cloud, this is a cloud native story. This is distributed computing. This is virtualization, containers, cloud native, kind of all coming together around data. >> Cornely: What you see here is, I mean, it is clear that it is about modern applications, right? And this is about shifting strategy in terms of focusing on the pieces where we're going to be great. And a lot of these are around data: giving you data services, data governance, and giving you an invisible platform that can be running in any cloud. And then partnering, right? And this is just recognizing what's going on in the world: customers want options. When it comes to cloud, they want options in terms of where they're running their workloads, and options in terms of what they'll be using to build their modern applications. So our big thing here is being the best platform to go and actually support developers coming in to build and run their new, modern applications. That means, for us, supporting a broad ecosystem of partners on top of our platform. You know, we announced our partnership with Red Hat a couple of months ago, and this is going to be a big deal for us, because again, we're bringing together two leaders in the industry that are eminently complementary when it comes to providing you a complete stack to go and build, run, and manage your cloud native applications. You can do that on premises, utilizing AHV as the preferred environment for running Red Hat OpenShift, or you can do this out in the public cloud, and again, we're making it seamless and easy to move the applications, and the data services that support them, around, whether they're running on prem, in hybrid, or in the public cloud. So cloud native is a big deal, but when it comes to cloud native, the way we look at this, it's all about giving customers choice: choice of platform services and choice of infrastructure services. >> Furrier: Yeah, let's talk about the Red Hat folks, Rajiv. You know, they're an operating system thinking company. You look at the internet now, with the cloud and edge and on-premise, and it's essentially an operating system. You need your backup and recovery, you need disaster recovery.
You need to have the HCI, you need to have all of these elements as part of the system. It's building on top of the existing Nutanix legacy — the roots and the ecosystem — with new stuff. >> Mirani: Right. I mean, in fact, the Red Hat partnership is a great example of, you know, the perfect marriage, if you will, right? It's the best in class platform for running cloud-native workloads together with the best in class platform-as-a-service offering on top. So two really great companies coming together — really happy that we could get that done. You know, the point here is that cloud native applications still need infrastructure to run on, right? And if anything, the demands on that infrastructure are growing, since it's no longer just, I have some block storage, I have some filers, and that's a fixed set. People are using things like object stores, they're using databases increasingly, they're using Kafka and MapReduce and all kinds of data stores out there, and the platform has to be great at supporting all of that. And that's where, as Thomas said earlier, data services and data storage are our strengths. So that's certainly building up from the platform, and then from there onwards, platform services, great to have right out of the box. >> Furrier: People still forget this, you know — it's still hardware and software working together behind the scenes. The old joke we have here on theCUBE is that serverless is running on a bunch of servers. So, you know, this is the way it's going. It's really the innovation. This is infrastructure as code, truly. What's happening is super exciting. Rajiv, Thomas, thank you guys for coming on. Always great to talk to you guys. Congratulations on an amazing platform you guys are developing — it looks really strong. People are giving it rave reviews, and congratulations on your keynotes. >> Cornely: Thank you for having us. >> Okay. This is theCUBE's coverage of .NEXT Global Virtual 2021, day two keynote review. I'm John Furrier with theCUBE. Thanks for watching.
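As an aside on the micro-segmentation point Mirani makes above: in a Kubernetes environment, limiting lateral movement usually comes down to network policies scoped to workload labels. The sketch below is a minimal illustration of that idea using the upstream Kubernetes Python client; it is not Nutanix Flow's API, and the namespace, labels and port are made-up placeholders.

```python
# Minimal micro-segmentation sketch: only pods labeled app=web may reach pods
# labeled app=db on the database port; selecting the db pods with an Ingress
# policy implicitly denies everything else.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="db-allow-web-only"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "db"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "web"})
                )],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=5432)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="production", body=policy
)
```

A platform's detection and response capabilities layer on top of this, but the segmentation primitive that blocks lateral spread is this small.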
Justin Cormack, Docker | DockerCon 2021
(upbeat music) >> Okay, welcome back to theCUBES's coverage of Dockercon 2021. I'm John Furrier, your host of theCUBE. We have Justin Cormack, CTO of Docker. Was also involved in the CNCF technical oversight and variety of other technical activities. Justin, great to see you. Thanks for coming on theCUBE Virtual this year, again, twice in a row and maybe next year will be in person but certainly hybrid, great to see you. >> Yeah, great to see you too. Yeah, in person would be nice one of these days, yes. >> Yeah, when we get real life back. It's almost there, I can feel it, but there's so much activity. One of the things that we've been talking about, certainly in theCUBE and even here at DockerCon, same story. The pandemic really hasn't truly impacted developer community, because most of the people have been working remotely and virtually for many, many decades. And if you think about just in the past 10 years, all the innovation in cloud has come from virtual teams, open-source softwares, always had good kind of governance and a democratization of kind of how it becomes built. So not a bit's been skipped during the pandemic. In fact, if anything supply chain of software development has increased. So- >> Yeah, I think that it's definitely true that open-source was really the place that pioneered remote working. And a lot of the work methods the people worked out to do open-source as in communication and things like that, were things that people have adopted. It's a slightly different community. I'd say open-source projects like meetings less than some other organizations, but there was definitely that pioneering thing. And a lot of the companies that started off remote first, were in open-source software, and they started off for those reasons as well because developers were already working like that, and they could just hire them and they could continue to work like that. >> Yeah, one of the upsides of all this is that people won't tolerate even zoom or in person meetings that just go on, 15, 30 minutes good call. Why do we have a meeting? What's the purpose? (faintly speaking) the way to go. Let's get into the developer community. One of the things I love about DockerCon this year 2021 is the envelopes being pushed again almost to another level, it's almost a new level, this next level of containers is bringing more innovation to the table and productivity and simplicity. Some of the same messages last year but now more than ever, stuff's going on. What are you hearing directly from the community? You talk to a lot of the developers out of the millions of developers in the Docker ecosystem. What are they saying now in 2021? What's going on in their mind? >> Yeah, I think it's an area... More and more people are using Docker, and they're using it every day and it's a change that's been going on, obviously for a while, but it begins to sort of, as it spreads, the kind of developers using Docker, so different from... When I started at Docker, coming up for six years ago, it was a very bleeding edge type thing for early adopters. Now it's everywhere, millions and millions of ordinary developers are using Docker every day. And the kinds of things that's telling us is, well, some of this stuff that we thought, well, five years ago was an amazing breakthrough and simplicity. Now that's on its own still too hard. 
One of the things I mentioned in my keynote was that, we're talking to developers who just primarily have been working windows all their life but more and more applications being shipped on Linux. And they using Linux containers, but they find Docker files really hard because they have really, Linux shell scrapes and not a windows developer doesn't know how to use a Linux shell script. And it's bringing it down to that next level of use where you can adopt these things more easily, the pitched to the kind of level of developer who is just thinking about their language, their APIs and they don't want to have to learn kind of lots of new things to do Docker. They'll learn some, but they really wanted to kind of integrate better into the environments they work in and help them more. We've been working on a lot of detailed instructions about like how to use Docker better with JavaScript and Python, because people have told us, be specific about these things, tell us exactly how I do make things work well with the way I'm doing things now. >> What is the big upside for containers for the folks watching? And last year, one of the most popular sessions was the one-on-one Peter McKay did, which was fascinating, packed with people. And the adoption of containers is going everywhere and enabling a lot of growth. What's the main message to these new developers that are coming on board to ecosystem. >> I think what's happening is that people are gradually, very slowly starting to think about containers in a different way. When we started, the question everyone kept asking was about containers and VMS, what's the difference? That question didn't really, kind of really address what the big fundamental changes that containers made to how people work was. I'd like to think about it in terms of the physical shipping containers, like people are concerned about like, can you escape from the box? Can I get out of a container? These kinds of questions. This is not really the important question about containers is kind of escape from the box. The question is, what does it enable you to build? The shipping container let us build the supply chains that let people build products and factories and things that would never have been possible without the ability to actually just ship things in a routine and predictable and reliable and secure way, getting that content and the things that come in the container and you actually work more effectively. And, so I think that now we're talking about like what's the effect of containers on the industry as a whole? What are the things that we can learn about repeatability and documentation and metadata and reliability, that we kind of talked about a little bit before, but these are becoming the important use cases for containers. Containers are really about, they're not about that kind of security and escape piece, there're about the content, the supply chain and your actual process of working. >> What do you, first of all, great call out on the security piece. I want to get that in a second. I think that's a killer one. You've mentioned supply chain, can you define software supply chain, and is that where the automation value comes in? Because a lot of people are talking about automation is improving the developer experience. So can you clarify quickly, what do you mean by the software supply chain? And is that where automation comes in? Am I getting that right? 
>> Yeah, so the software supply chain is really that process by which you get components of software to build your applications. Around 99% of companies are using open-source software to build applications. And the vast majority of the pieces of any modern application art consists mainly of open-source software and some tries source software, and some software that people are writing themselves. But you've got to get these components in, you've got to make sure that they're updated and scanned and they're reliable. And that's the software supply chain is that process for bringing in components that you're using to build your applications. And so, the way automation comes in, is just because there's so much of the software dealing with it manually is just difficult, and it's an ongoing process of build and test and CI and all those scanning and all those processes. And I think as software developers, we fundamentally know that the most valuable things are the things that we automate. They're the things that we do all the time and they're important. And that a lot of building a software is about building repeatable processes, rather than just doing things one by one, because we know that we have to keep updating software, we have to keep fixing bags, we have to keep improving software. And so you've got to be able to keep doing these things, and automation is what helps us do that. >> I was talking to Dana Lawson the VP of Engineering at GitHub, and she and I were chatting about this one topic. I want to get your thoughts on it, because she was definitely of the camp of automation helps with productivity. No doubt, check, double check there. The question I have for you is how do you see the impact on say the developer experience and innovation specifically? Because, okay, I can see the productivity, okay, something happens a bunch of times automated. Then you start thinking about supply chain, then you thought about developer experience and ultimately with Kubernetes around the corner, with the relationship with containers, you can see the cloud-native benefits from an innovation standpoint. Can you share your thoughts on the automation impact to experience for the developer and the innovation strategies they need? >> Well, I think that one of the ways we're trying to think about everything we do at Docker is that we should be helping build processes rather than helping you do something once, because, if you do something three times, you want to automate it, but what if the first time you did it, that could also build that automated process. And if it was, why isn't it as easy to make something automated as it is to do it once? There's no real reason why it shouldn't be. And I think that kind of... I was having a conversation with someone the other day about how they would... They had kind of reversed their thinking and they found that often it was easier to start with automation and harder to do things manually. And that's a kind of real reversal of that kind of role between automation and doing stuff run, so, and it's not how we think about it, but I think it's really interesting to think about that kind of thing, and how could we make automation really, really simple. >> Well, that's a great example when you have that kind of environment, and certainly the psychology is better to have automation but if everyone's saying it's hard to do manual, that means they're at some sort of scale, right? So scale matters, right? 
So as you start getting the SRE vibes going, and you start getting Cloud Scale in cloud-native apps, that's going to be cool. Now, the question I want to ask you, because while the other thing that's happening is more people are coming into open-source than ever before, not just young developers, but also end users. Not like the hardcore-end users, looking like classic enterprises are coming in. So as more developers come in and increase over the year, what does that mean for the experience for developers? Now you have, does that change it? How do you view that? Because as more developers come in, you have institutional knowledge, you have scale, you have learnings, what's your thoughts on on the impact as the population of developers increase? How does Docker view that? >> Yeah, now, I think it's really interesting trend. It's been very visible in CNCF for the last few years. We've been seeing a lot more active end-user, company's doing open-source. Spotify has been one of the examples with a backstage project they brought into CNCF and other areas where they work. And I think it's part of this growing trend that's really important to Docker, Docker is a bottom up technology adoption company. Developers are using Docker because it works for them and they love it. And developers are doing open-source in their companies because open-source works for them and they love it. And it works for their business as well. And whereas historically like the the model was, you would buy kind of large enterprise products, with big procurement deals that were often not what the developers wanted, but now you're getting developers saying, what we want to do is adopt these open-source projects, because we know how they work, we already understand that we know how to integrate them better into our processes. And I think it's that developer lad demand that's really important, and it's the kind of integration that developers want to do, the kind of products that they want to work with, because they understand them and love them, and they had targeted at developers and that's incredibly important. And I think that's very much where Docker's focused and we really want to... Open-source is of the core of everything we've always done. We've built with the open-source community, and we've kind of come from that kind of environment. And we built things that we love as developers and that other developers love. >> Talk about your thoughts on security. Obviously it's always built in from the beginning, Shift-Left is the ethos, day two operations, AI apps, whatever people want to call that. Post-deployment mode, security has to be at the center of this, containers can be a great solution and give some great flexibility for developers. Can you talk about your view and Docker view on the security posture and situation? >> Yeah, I think Shift-Left is incredibly important because just doing things late, everyone knows is the wrong thing from the point of view of productivity. But I think Shift-Left can just mean, ask the developers to do everything, which is really a bit too much. I think that sometimes things need to be shifted even further left than people have actually thought. So like, why are you expecting developers to scan components to see if they're allowed to use? If they should be using them or they should be updated, why hasn't that happened before the developer even gets there? I think there's a, I sorted my keynote about this whole piece, about trusted content. 
And it's really important that we really shift that even further left, so those things are happening long before it gets to the developer. Security, it's a huge area, of course, but very much we need to help developers, because security is non-obvious. I think the more you understand about security, the more you understand that it doesn't come naturally to people and they need to be helped with it, and they need to learn a lot about things. I've found myself that learning how to think like an attacker is a really important way of thinking about how to secure software — like, what would they do — rather than just thinking about the normal kind of, oh, this works in the (faintly speaking). What happens if things go wrong? You have to think about that as well. So there's a lot of work to do to educate and help and build tools that help developers there. And it's been really good working with Snyk, because they're a very developer focused security company — that's why we chose to work with them — whereas historically, security companies have been very oriented towards kind of the operator side of it, not the development side, not the developer experience. And the other piece is really around supply chain security. That's kind of a new security area, and it's very important from the container point of view, because one of the things containers let you do is really control the components that you're using to build applications and manage them better. And so we can really build tooling that helps you manage that — that helps you understand what's in a container, helps you understand where it came from, how it was built — and automate those processes and sign and authenticate them as well. And we've been working with CNCF on Notary v2, which is a revamp of the container signing process, because people really want to know: who originated this container? Where did it come from? What did they say is in it? There's a lot of work about bills of materials and composition analysis and all those things you need to know about what's in a container, and the... >> Everyone wants to know what's in a container. If you've got a Kubernetes cluster, for instance, that's all highly secure, and in comes a container — how do you know what the... There's no perimeter, right? So again, as you said, thinking about the attack vectors there, you've got to understand that. This is where the action is, right? This is where a lot of work's being done on this idea of always-on security. You don't know when the container's coming in. During the run stage, you're running a business now; it's not just build and share, you're running infrastructure. >> Absolutely. You really want full control over everything that goes into it, and you want to know where everything that you're running in production came from, and that's your end to end supply chain. It's everything from developer inputs through the build process and onward to production. And in production, understanding whether it needs to be updated, whether there are newly discovered vulnerabilities, whether it's being attacked, and how that relates back to what came into it in the first place. >> Lot more intelligence, lot more monitoring. You guys are enabling all that — I know, it's cool. Great stuff. Hey, I want to get your thoughts on what's on the calendar, looking at the DockerCon '21 event. We're having a fun time here — we're on theCUBE track, the keynote track.
But if you look at the sessions that's going on, you got, and I'll get your comment on this, cause it's really interesting how it's cleverly laid out this is. You've got the classic run share build and then you've got a track called accelerate, interesting metadata around these labels. Take us through, because this basically shows the maturation of containers. We already talked about the relationship, somewhat with Kubernetes, everyone kind of sees that direction clearly, but you got acceleration, which is a key new track, but run, share, build, what's your reaction to that? Share your observations of what the layout of those names and what it means to an enterprise and people building. >> Yeah, (faintly speaking) has been Docker's kind of motto for a long time. It kind of encapsulates that kind of process of like, the developer building application, the collaborative piece that's really important about sharing content in containers and then obviously putting into production because that's the aim. But, accelerate is incredibly important too. Developers are just being asked to do a lot. Everything is software, there's a lot of software, and a lot of software has to be created and we've got to make it easier to do this. And that kind of getting quickly from idea to business outcomes and results is what modern software teams are really driving at. And, I think we've really been focused this last year on what the team needs to succeed, and especially, small focused teams delivering business value. It's how we're structured internally as well and is how our customers, to a large extent are structured. And there's that kind of focus on accelerating those business outcomes and the feedback loops from your ideas to what the feedback that your customers give you at helping you understand that it's really important. >> Talk about final question for you in terms of the topic here, cloud, hybrid cloud, multicloud, this is, put multicloud asides more hype. Everyone has multiple clouds, but it speaks to the general distributed computing architecture when you talk about public cloud and on-premises cloud operations. So modern developers looking at that as, okay, distributed environment, edge, whatever you're going to call it. What's your view of Docker as it goes forward for the folks watching, who have experience with Docker, loved the vibe, loved the open-source, but now I've got to start thinking about putting the containers everywhere. What's the Docker pitch, so to speak, with a tech story that they should walk away with from you? What's the story, what's the pitch? >> Yeah, so containers everywhere has been a sort of emerging trend for a while, the last year or so. The whole Kubernetes at the edge thing has really exploded with people experimenting with lots and lots of different architectures for different kinds of environments at the edge. What's totally clear is that people want to be able to update software really easily at the edge the way you can in the cloud. We can't have the sort of, there's no point in shipping a modern piece of manufacturing equipment that you can't update the software on, because the software is how it works, more and more equipment is becoming very general purpose, people making general purpose robots, general purpose factories, general purpose everything which need to be specialized into the application they're going to run that week. And also people are getting more and more feedback and data and feedback from the data. 
So if you're building something that runs on a farm, you're getting permanent feedback about how well it's doing and how well the crops are growing was coming back. And so everywhere you've got this, we need to update. And everywhere you need to update, you want containers because containers are the simple reliable way to update software. >> I know you talked about CNCF and your role there. Also the CTO of Docker, I have to ask cause we were just covered Coop con and cloud-native con just last month and this month. And it's clear that Kubernetes is becoming boringly good in a way that's good to be boring, right? It means it's working. And it's becoming more cloud-native con than Coop-con. That has been kind of editorial observation, which speaks to what we feel is a trend towards more cloud-native discussions, less about Kubernetes. So, it's still Kubernetes stuff going on, don't get me wrong, just saying it's not as controversial in the sense that people kind of clearly understand why that's important, and all the discussions now seem to be on cloud-native modern developer workflows. What's your reaction to that? Do you agree, if not, what's your take? >> Yeah, I think that's definitely true. Kubernetes is definitely much more boring. Everyone is using it. They're using it in production now vastly more than they were a few years ago, when it was just experiment, experiment, experiment, now it's production scale out. The ecosystem in CNCF is kind of huge. There's so many little bits that have to be filled in storage and networking and all that. So there's actually a lot of pieces that are around Kubernetes, but, there's definitely more of a focus coming on the developer experience there. Compared to DockerCon, the audience at Coop Con is incarnated kind of still much more operator focused rather than developer focused. And it's very nice coming to DockerCon, just to feel like being amongst that developer community, Coop Con still has a way to gauge to have more of a real developer audience, but the project is starting to pair with a more developer focused kind of aim or things like backstage from Spotify is a really interesting one where it's about operations, but it's a developer portal focused things. So, I think it's happening, and there's a lot more talk about that. There's a whole bunch of infrastructure, there's a lot more security projects in CNCF than they were before. And we're doing a lot of work on supply chain security and CNCF just released a white paper on that few days ago. So there's a lot of work there that touches on developer needs. I still think that audience (faintly speaking) that much different from DockerCon which is I think 80% developers and maybe 10% infrastructure rather than the other way round. >> I think if you're going to get operators it can be SRE/platformleads. The platform leads are definitely inside DockerCon now than they've ever been before from my observation. So, but that speaks to the sign of the times. Most development teams have an SRE in the team, not an SRE team. They're just starting to see much more integration amongst the kind of a threaded or threaded teams or whatnot. So... >> Yeah. (faintly speaking) Operate your apps is the model. And I think that it's going to lead to more and more crossover between these communities. It's what DevOps was supposed to be about, somehow got diverted into building DevOps teams instead of working together, but we'll get there. 
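To make Cormack's earlier point about updating containers at the edge a bit more concrete: the reason containers are "the simple reliable way to update software" is that an update is just pull-new-image, stop, re-run. Below is a rough sketch with the Docker Python SDK; the device name, registry and tag are placeholders, and a real edge fleet would of course wrap this in something like Kubernetes or a fleet manager rather than a bare script.

```python
import docker

client = docker.from_env()

def update_service(name: str, repo: str, tag: str) -> None:
    """Pull the new image, then replace the running container with it."""
    client.images.pull(repo, tag=tag)          # fetch the updated image
    try:
        old = client.containers.get(name)      # stop and remove the old version
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass                                   # first deployment on this device
    client.containers.run(
        f"{repo}:{tag}",
        name=name,
        detach=True,
        restart_policy={"Name": "always"},     # survive device reboots
    )

update_service("sensor-agent", "registry.example.com/sensor-agent", "2.4.1")
```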
>> It's clear from my standpoint, at least from reporting here is that, from the DockerCon and community at large, cloud-native community, having end-to-end work-load visibility on developer test run, everything seems to be the consensus, without a doubt. And then having multiple teams, and then having some platform, have some flexing people moving between teams for the most part, but built insecurity, built in SRE, built in DevOps, DevSecOps, all the way from end-to-end. >> Absolutely, we know that that's what does work best, it's where most organizations are heading at different speeds, because it's very different from the traditional architecture. It takes time to get there, but that's the model that has come out of microservices that really containers enabled and allow that model to happen. And it's the team architecture of containers. >> Hey, monolithic applications have monolithic organizations, microservices have microservices teams. Justin, great to have you on theCUBE for this conversation. If folks watching this interview, check out Justin's keynote, came from the main stage, great stuff. Justin, thanks for coming on theCUBE, we really appreciate your time and insight. >> Thank you, good to see you again. >> Okay, this is theCUBES's coverage of DockerCon 2021 Virtual. I'm John Furrier, your host. Thanks for watching. (upbeat music)
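One way to picture the shift-left supply-chain loop discussed in this interview is a small CI step that builds the image and refuses to publish it if the scanner objects. The hedged sketch below uses the Docker Python SDK for the build and shells out to the Snyk-powered `docker scan` command Docker shipped around this time; treat the registry name, tag, local paths and exact scan flags as assumptions about your own setup rather than a prescribed pipeline.

```python
import subprocess
import sys
import docker

REPO, TAG = "registry.example.com/myapp", "candidate"   # placeholder image name
IMAGE_REF = f"{REPO}:{TAG}"

client = docker.from_env()

# Build the image from the local Dockerfile.
image, build_logs = client.images.build(path=".", tag=IMAGE_REF)
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Gate on the scanner: a non-zero exit code means vulnerabilities were reported.
scan = subprocess.run(["docker", "scan", "--severity", "high", IMAGE_REF])
if scan.returncode != 0:
    print("High-severity vulnerabilities found; not pushing.")
    sys.exit(1)

# Only a clean image gets published.
for line in client.images.push(REPO, tag=TAG, stream=True, decode=True):
    print(line)
```

The point of automating the gate, per the interview, is that the secure path becomes the default path: nobody has to remember to run the scan by hand.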
Robin Hernandez, IBM | IBM Think 2021
>> Narrator: From around the globe It's theCUBE with digital coverage of IBM Think 2021. Brought to you by IBM. >> Welcome back everyone to theCUBE's coverage of IBM Think 2021 virtual, I'm John Furrier, your host. I've got a great guest here Robin Hernandez, vice president Hybrid Cloud Management and Watson AIOps. Robin, great to see you. Thanks for coming on theCUBE. >> Thanks so much for having me, John. >> You know, Hybrid Cloud, the CEO of IBM Arvind loves Cloud. We know that we've talked to him all the time about it. And Cloud is now part of the entire DNA of the company. Hybrid Cloud is validated multi clouds around the corner. This is the underlying pinnings of the new operating system of business. And with that, that's massive change that we've seen IT move to large scale. You're seeing transformation, driving innovation, driving scale, and AI is the center of it. So AIOps is a huge topic. I want to jump right into it. Can you just tell me about your day to day IT operations teams what you guys are doing? How are you guys organized? How you guys bring in value to the customers? What are your teams responsible for? >> Yeah, so for a few years we've been working with our IT customers, our enterprise customers in this transformation that they're going through. As they move more workloads to cloud, and they still have some of their workloads on premise, or they have a strategy of using multiple public clouds, each of those cloud vendors have different tools. And so they're forced with, how do I keep up with the changing rate and pace of this technology? How do I build skills on a particular public cloud vendor when, you know, maybe six months from now we'll have another cloud vendor that will be introduced or another technology that will be introduced. And it's almost impossible for an it team to keep up with the rate and pace of the change. So we've really been working with IT operations in transforming their processes and their skills within their teams and that looking at what tools do they use to move to this cloud operations model. And then as part of that, how do they leverage the benefits of AI and make that practical and purposeful in this new mode of cloud operations >> And the trend that's been booming is this idea of a site reliability engineer. It's really an IT operations role. It's become kind of a new mix between engineering and IT and development. I mean, classic DevOps, we've seen, you know dev and ops, right? You got to operate the developers and the software modern apps are coming in that's infrastructure as course has been around for a while. But now as the materialization of things like Kubernetes and microservices, people are programming the infrastructure. And so the scale is there, and that's been around for a while. Now it's going to go to a whole enterprise level with containers and other things. How is the site reliability engineering persona if you will, or ITOps changed specifically because that's where the action is. And that's where you hear things like observability and I need more data, break down the silos. What's this all about? What's your view? >> Yeah, so site reliability engineering or SRE practices as we call it has really not changed the processes per se that IT has to do, but it's more accelerated at an enormous rate and pace. Those processes and the tools as you mentioned, the cloud native tools like Kubernetes have accelerated how those processes are executed. 
Everything from releasing new code and how they work with development, to actually coding the infrastructure and the policies in that development process, to maintaining and observing over the life cycle of an application — the performance, the availability, the response time, and the customer experience. All of those processes that used to happen in silos, with separate teams and sort of a waterfall approach — with SRE practices now, they're happening instantaneously. They're being scaled out. Failback is happening much more quickly, so that applications do not have outages. And the rate and pace of this has just accelerated so quickly. This is the transformation of what we call cloud operations. And we believe that as IT teams work more closely with developers and move towards this SRE model, they cannot do this just with their personnel, changing skills and changing tools. They have to do this with modernized tools like AI. And this is where we are recommending applying AI to those processes, so that you can then get automation out of the back end that you would not think about in a traditional IT operations, or even in an SRE practice. You have to leverage capabilities and new technologies like AI to even accelerate further. >> Let's unpack the AIOps piece, because I think that's what I'm hearing. I'd love you to clarify this, because it becomes, I think, the key important point, but also kind of confusing to some folks, because IT operations people see that changing. You just pointed out why — honestly, the tools and the culture are changing — but AI becomes a scale point because of the automation piece you mentioned. How does that thread together? How does AIOps specifically change the customer's approach in terms of how they work with their teams and how that automation is being applied? Because I think that's the key thread, right? Because everyone kind of gets the cultural shifts and the tools if they're living it and putting it in place, but now they want to scale it. That's where automation comes in. Is that right? Is that the right way to think about it? What's your view on this? This is important. >> It's absolutely right. And I always like to talk about AI in other industries before we apply it to IT, to help IT understand. Because a lot of times IT looks at AI as a buzzword and says, "Oh, you know, yes, sure, this is going to help me." But think about it — we've been doing AI for a long time at many different companies, not just at IBM — and think about the other industries where we've applied it. Healthcare in particular is so tangible for most people, right? It didn't replace a doctor, but it helps a doctor see the things that would take them weeks and months of studying and analyzing different patients to say, "Hey, John, I think this may be a symptom that we overlooked or didn't think about, or a diagnosis that we didn't think about," without manually looking at all this research. AI can accelerate that so rapidly for a doctor. The same notion applies to IT. If we apply AI properly to IT, we can accelerate things like remediating incidents, or finding a performance problem that may take your IT team months or weeks or even hours to find. AI applied properly finds those issues and, just like it can in healthcare, diagnoses them correctly much more rapidly.
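Robin's point about AI finding a performance problem faster than a human boils down, in its simplest form, to learning a baseline and flagging departures from it. The toy sketch below is only meant to illustrate that idea with a rolling mean and standard deviation over a single metric; a real AIOps platform such as Watson AIOps correlates many more signals (logs, events, tickets, topology) and the window size, threshold and sample metric here are invented for the example.

```python
from collections import deque
from statistics import mean, pstdev

class BaselineDetector:
    """Flag metric samples that drift far from a rolling baseline."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)   # recent history defines "normal"
        self.threshold = threshold            # std-devs away that counts as anomalous

    def observe(self, value: float) -> bool:
        anomalous = False
        if len(self.samples) >= 10:           # need some history before judging
            mu, sigma = mean(self.samples), pstdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

detector = BaselineDetector()
for latency_ms in [42, 40, 44, 41, 43, 39, 45, 40, 42, 41, 180]:
    if detector.observe(latency_ms):
        print(f"possible incident: response time {latency_ms} ms is far off baseline")
```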
>> Now again, I want to get your thoughts on something while you're here 'cause you've been in the business for many, many decades 20 years experience, you know, cloud cold, you know the new modern area you're managing it now. Clients are having a scenario where they, "Okay, I'm changing over the culture." I'm "Okay, I got some cloud, I got some public "and I got some hybrid and man, "we did some agile things. "We're provisioned, it's all done. "It's out there." And all of a sudden someone adds something new and it crashes (chuckles) And now I've got to get in, "Where's the risks? where's the security holes?" They're seeing this kind of day two operations as some people call, another buzz word but it's becoming more of, "Okay, we got it up and running "but we still now going to still push some code "and things are starting to break. "and that's net new thing." So it's kind of like they're out of their comfort zone. This is where I kind of see the AIOps evolving quickly because there's kind of a DevSecOps piece. There's also data involved, observability. How do you talk to that scenario? Where, okay, you sold me on cloud, I've been doing it. I did some projects. We're not been running. We got a production system and we added something new. Something maybe trivial and it breaks stuff? >> Yes. Yeah, so with the new cloud operations and SRE, the IT teams are much more responsible for business outcomes. And not just as you say, the application being deployed and the application being available, but the life cycle of that application and the results that it's bringing to the end users and the business. And what this means is that it needs to partner much more closely with development. And it is hard for them to keep up with the tools that are being used and the new code and the architectures of microservices that developers are using. So we like to apply AI on what we call the change risk management process. And so everyone's familiar with change management that means a new piece of code is being released. You have to maintain where that code is being released to was part of the application architecture and make sure that it's scaled out and rolled out properly within your enterprise policies. When we apply AI, we then apply what we call a risk factor to that change because we know so often, application outages occur not something new within the environment. So by applying AI, we can then give you a risk rating that says, "There's an 80% probability "that this change that you're about to roll out, "a code change is going to cause a problem "in this application." So it allows you to then go back and work with the development team and say, "Hey, how do we reduce this risk?" Or decide to take that calculated risk and put into the visibility of where those risks may occur. So this is a great example, change risk management of how applying AI can make you more intelligent in your decisions much more tied to the business and tied to the application release team. >> That's awesome. Well, I got you here on this point of change management. The term "Shift Left" has come up a lot in the industry. I'd love to get your quick definition of what that is in your mind. What does Shift Left mean for Ops teams with AIOps? >> Yeah, so in the early days of IT there was a hard line definitely between your development and IT team. It was kind of we always said throwing it over the fence, right? 
The developers would throw the code over the fence and say, good luck IT, you know, figure out how to deploy it where it needs to be deployed and cross your fingers that nothing bad happens. Well, Shift Left is really about a breaking down that fence. And if you think of your developers on your left-hand side you'd being the IT team, it's really shifting more towards that development team and getting involved in that code release process, getting involved in their CI/CD pipeline to make sure that all of your enterprise policies and what that code needs to run effectively in your enterprise application and architecture, those pieces are coded ahead of time with the developer. So it's really about partnering between it and development, shifting left to have a more collaboration versus throwing things over the fence and playing the blame game, which is what happens a lot in the early days IT. >> Yeah, and you get a smarter team out of it, great point. That's great insight. Thanks for sharing that. I think it's super relevant. That's the hot trend right now making dealers more productive, building security from the beginning. While they're doing it code it right in, make it a security proof if you will. I got to ask you one of the organizational questions as IBM leader. What are some of the roadblocks that you see in organizations that when they embrace AIOps, are trying to embrace AI ops are trying to scale it and how they can overcome those blockers. What are some of the things you're seeing that you could share with other folks that are maybe watching and trying to solve this problem? >> Yeah, so you know, AI in any industry or discipline is only as good as the data you feed it. AI is about learning from past trends and creating a normal baseline for what is normal in your environment. What is most optimal in your environment this being your enterprise application running in steady state. And so if you think back to the healthcare example, if we only have five or six pieces of patient data that we feed the AI, then the AI recommendation to the doctor is going to be pretty limited. We need a broad set of use cases across a wide demographic of people in the healthcare example, it's the same with IT, applying AI to IT. You need a broad set of data. So one of the roadblocks that we hear from many customers is, well I using an analytics tool already and I'm not really getting a lot of good recommendations or automation out of that analytics tool. And we often find it's because they're pulling data from one source, likely they're pulling data from performance metrics, performance of what's happening with the infrastructure, CPU utilization or memory utilization, storage utilization. And those are all good metrics, but without the context of everything else in your environment, without pulling in data from what's happening in your logs, pulling in data from unstructured data, from things like collaboration tools, what are your team saying? What are the customers saying about the experience with your application? You have to pull in many different data sets across IT and the business in order to make that AI recommendation the most useful. And so we recommend a more holistic true AI platform versus a very segregated data approach to applying and eating the analytics or AI engine. >> That's awesome, it's like a masterclass right there. Robin, great stuff. Great insight. We'll quickly wrap. 
I would love to you to take a quick minute to explain and share what are some of the use cases to get started and really get into AIOps system successes for people that want to explore more, dig in, and get into this fast, what are some use case, what's some low hanging fruit? What would you share? >> Yeah, we know that IT teams like to see results and they hate black boxes. They like to see into everything that's happening and understand deeply. And so this is one of our major focus areas as we do. We say, we're making AI purposeful for IT teams but some of the low hanging fruits, we have visions. And lots of our enterprise customers have visions of applying AI to everything from a customer experience of the application, costs management of the application and infrastructure in many different aspects. But some of the low hanging fruit is really expanding the availability and the service level agreements of your applications. So many people will say, you know I have a 93% uptime availability or an agreement with my business that this application will be up 93% of the time. Applying AI, we can increase those numbers to 99.9% of the time because it learns from past problems and it creates that baseline of what's normal in your environment. And then we'll tell you before an application outage occurs. So avoiding application outages, and then improving performance, recommendations and scalability. What's the number of users coming in versus your normal scale rate and automating that scalability. So, performance improvements and scalability is another low-hanging fruit area where many IT teams are starting. >> Yeah, I mean, why wouldn't you want to have the AIOps? They're totally cool, very relevant. You know, you're seeing hybrid cloud, standardized all across business. You've got to have that data and you got to have that incident management work there. Robin, great insight. Thank you for sharing. Robin Hernandez, vice president of Hybrid Cloud Management in Watson AIOps. Thanks for coming on theCUBE. >> Thank you so much for having me John. >> Okay, this theCUBE's coverage of IBM Think 2021. I'm John Furrier your host. Thanks for watching. (bright upbeat music)
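To make Robin's change risk management example from earlier in the conversation a bit more tangible, here is a deliberately simple sketch of how a "risk rating" for a proposed change might be composed from a few historical signals. The features, weights and threshold are invented for illustration; they are not how the IBM product computes its score.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    files_touched: int          # size of the change
    services_affected: int      # blast radius
    past_failure_rate: float    # 0..1, failure rate of similar changes
    off_hours_deploy: bool      # deploying outside the normal window

def risk_score(change: ChangeRequest) -> float:
    """Combine a few signals into a 0..1 risk rating (illustrative weights)."""
    score = 0.0
    score += min(change.files_touched / 50.0, 1.0) * 0.3
    score += min(change.services_affected / 10.0, 1.0) * 0.2
    score += change.past_failure_rate * 0.4
    score += 0.1 if change.off_hours_deploy else 0.0
    return round(min(score, 1.0), 2)

change = ChangeRequest(files_touched=42, services_affected=6,
                       past_failure_rate=0.7, off_hours_deploy=True)
score = risk_score(change)
print(f"risk rating: {score:.0%}")           # prints "risk rating: 75%" for this input
if score > 0.6:
    print("flag for review before rollout")  # the "go back and work with dev" step
```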
Kamal Shah, Red Hat & Kirsten Newcomer, Red Hat | Red Hat Summit 2021 Virtual Experience
>> Hey, welcome to theCUBE's coverage of Red Hat Summit 2021, the virtual experience. I'm Lisa Martin, and I have two guests joining me. One is a CUBE alum, Kamal Shah, who is back; he's now the VP of cloud platforms at Red Hat. Kamal, it's great to have you back on the program. You're in a new role — we're going to talk about that. Thank you. And Kirsten Newcomer is here as well. She's the Director of Cloud and DevSecOps Strategy at Red Hat. Kirsten, welcome, and thank you for bringing the Red Hat vibe to the segment. >> Absolutely, very happy to be here. >> So I'm looking forward to this conversation that we're going to be having in the next 20 minutes or so. Kamal, the last time you were on, you were the CEO of StackRox. In January of 2021 came the announcement that Red Hat plans to acquire StackRox, and we're going to be talking all about that. But I'd like to start with Kirsten: give us your perspective — from Red Hat's perspective, why is Red Hat a good fit for StackRox? >> You know, there are so many reasons. First of all, as you know, Red Hat has been productizing Kubernetes since Kubernetes 1.0 — OpenShift 3.0 shipped with Kubernetes 1.0 — so we've been working with Kubernetes for a long time. StackRox is kind of Kubernetes-native security; it embraces the declarative nature of Kubernetes and brings that to security. And Red Hat's enterprise customers — we have a great set across different verticals that are very security conscious, and during my five years at Red Hat, that's where I've spent the majority of my time: talking with our customers about container and Kubernetes security. And while there's a great deal of security built into OpenShift as it goes to market out of the box, customers need the additional capabilities that StackRox brings. Historically, we've met those needs with our security partners — we have a great ecosystem of security partners — and with the StackRox acquisition, we're now in a position to offer additional choice. Right? If a customer wants those capabilities from Red Hat, tightly integrated with OpenShift, we'll have those available, and we continue to support and work with our broad ecosystem of security partners. >> Excellent, customers always want choice. Kamal, give me your perspective. You were at the helm as the CEO of StackRox, as you were the last time you were on theCUBE. Talk to me about the Red Hat acquisition from your seat. >> Yeah. So, as Kirsten mentioned, we were partners of Red Hat, part of the Red Hat partner ecosystem. And what we found is that this was both a great strategic fit and a great cultural fit between our two companies. Right? And so the discussions that we had were: how do we go and quickly enable our customers to accelerate their digital transformation initiatives — to move workloads to the cloud, to containerize them, to manage them through Kubernetes — and make sure that we seamlessly address their security concerns? Right? Because that continues to be the number one concern for large enterprises and medium sized enterprises, and frankly any enterprise that, you know, is operating today. So that was kind of the impetus behind it. And I must say that so far the acquisition has been going very smoothly. We're roughly two months in, and everybody has been very welcoming, very collaborative, very supportive.
And we are already working hand in hand to integrate our companies and to make sure that we are working closely together to make our customers successful. >> Excellent. We're going to talk about that integration in a second. But I can imagine it was challenging going through an acquisition during a global pandemic. But that is one of the things that I think lends itself to the cultural alignment, Kamal, that you talked about. Kirsten, I want to get your perspective. We talk about corporate culture, and corporate culture has changed a lot in the last year with everybody, or so many of us, being remote. Talk to me about kind of the core values that Red Hat and StackRox share. >> Actually, you know, that's been one of the great joys during the acquisition process. In particular, Kamal and Ali shared kind of their key values and how they talk with their team, and some of the overlap just resonated so much for all of us. In particular the sense of transparency that the StackRox executive team brings in its approach — that's a clear value for Red Hat, strongly maintained. That was one of the key things, along with the interest in containers and Kubernetes. Right? So the technology alignment was very clear; we probably wouldn't have proceeded without that. But again, I think the investment in people, the independence and strong drive of the individuals, and supporting the individuals as they contribute to the offering, so that it really creates that sense of community and collaboration — that is key. And it's just a really strong overlap in cultural values, and we so appreciated that. >> Community and collaboration couldn't be more important these days. And ultimately the winner is the customers. So let's dig in. Let's talk about what StackRox brings to OpenShift. Kirsten, take it away. >> So as I said earlier, I think we really believe in continuous security at Red Hat, and in defense in depth. And so when we look at an enterprise Kubernetes distribution, that involves security at the RHEL CoreOS layer, security in Kubernetes, adding things into the distribution and making sure they're there by default — any distribution needs to be secured, to be hardened: auditing, logging, identity, access management, just a wealth of things. And Red Hat has historically focused on infrastructure and platform security, building those capabilities into what we bring to market. StackRox enhances what we already have and really adds workload protection, which is really, when it comes down to it — especially if you're looking at hybrid cloud, multicloud — how you secure not just the platform, but how you secure your workloads, and that changes. We're moving from a world where, you know, you're deploying antivirus or malware scanners on your VMs and your host operating system, to a world where those workloads may be very short lived. And if they aren't secured from the get-go, you miss your opportunity to secure them, right? You can't rely on — you know, you do need controls in the infrastructure, but they need to be Kubernetes-native controls, and you need to shift that security left. Right? You never patch a running container. You always have to rebuild and redeploy; if you patch the running container, the next time that container image is deployed, you've lost that patch. And so the whole ethos, the whole shift left, the DevSecOps capabilities that StackRox brings really add such value.
>>So can you take us through that? How does StackRox facilitate the shift left? >>Yeah, absolutely. So StackRox — which, as we announced at Summit, is now being rebranded as Red Hat Advanced Cluster Security — was really purpose-built to help our customers address use cases across the entire application lifecycle: from build, to deploy, to runtime. This is the infinity loop that Kirsten mentioned earlier, and one of our foundations was to be Kubernetes-native, to ensure that security is really built into the application as opposed to bolted on. So specifically, we help our customers shift left by securing the supply chain — making sure that we identify vulnerabilities early, during the build process, before they make it to a production environment. We help them secure the infrastructure by preventing misconfigurations, again early in the process, because as we all know, misconfigurations often lead to breaches at runtime. We help them address compliance requirements by ensuring that we can check for CIS benchmarks or regulatory requirements around PCI, HIPAA and others. That said, focusing on shift left doesn't mean you ignore the right side, or ignore the controls you need when your applications are running in production. So we help them secure that at runtime as well, by identifying and preventing breaches — threat detection, prevention and incident response.
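To illustrate the build-stage gate Kamal describes — catching vulnerabilities and risky image contents before they ever reach production — here is a minimal sketch. It is not the ACS implementation: the report structure, severity ordering and policy thresholds are assumptions made for illustration, and the findings would normally come from an image scanner rather than a hard-coded dictionary.

```python
# Fail a CI build when a scan report violates a simple security policy:
# no vulnerabilities above the allowed severity, no package manager in the image.
import sys

SEVERITY_ORDER = ["low", "medium", "high", "critical"]
POLICY = {
    "max_severity_allowed": "high",              # anything "critical" fails the build
    "forbid_components": {"apt", "yum", "dnf"},  # package managers left in the final image
}

def violations(scan_report, policy):
    """Return human-readable policy violations found in a scan report."""
    problems = []
    limit = SEVERITY_ORDER.index(policy["max_severity_allowed"])
    for vuln in scan_report["vulnerabilities"]:
        if SEVERITY_ORDER.index(vuln["severity"]) > limit:
            problems.append(f'{vuln["id"]} ({vuln["severity"]}) in {vuln["component"]}')
    for component in scan_report["components"]:
        if component in policy["forbid_components"]:
            problems.append(f"forbidden component present in image: {component}")
    return problems

if __name__ == "__main__":
    # Hypothetical scanner output produced during the build stage.
    report = {
        "image": "registry.example.com/payments:v2",
        "vulnerabilities": [
            {"id": "CVE-2021-0001", "severity": "critical", "component": "openssl"},
            {"id": "CVE-2021-0002", "severity": "medium", "component": "libxml2"},
        ],
        "components": ["openssl", "libxml2", "apt"],
    }
    found = violations(report, POLICY)
    for p in found:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if found else 0)  # non-zero exit fails the pipeline, keeping the image out of production
```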
>>That built-in security — you both mentioned built in versus bolted on. Kirsten, talk to me about that as really kind of a door opener. We've talked a lot about security issues, especially in the last year — I don't know how many times we've talked about misconfigurations leading to breaches — and we've seen so many security challenges present in the last year. Talk to me a little bit, Kirsten, about what customers' appetites are for saying, all right, now I've got cloud-native security, I'm going to feel more comfortable rolling out production deployments. >>It's a great place to go. So there are a number of elements to think about, and if I could, I'll start by building on the example that Kamal gave. When we think about, I need to build security into my pipeline so that when I deliver my containerized workloads they're secure — what if I miss a step, or what if a new vulnerability is discovered after the fact? So one of the things that StackRox, or Red Hat ACS, offers is built-in policy checks, to see whether a container or running image has something like a package manager in it. A package manager can be used to load software that is not delivered with the container. So the idea is ensuring that you are getting built-in workload protection, with policies that are written for you, so you can focus on building your applications. You don't necessarily have to learn everything there is to know about the new attack vectors — which are really just new packaging, new technology. It's not so much that there are new attack vectors; mostly it's a new way of delivering and running your applications. That requires some changes to how you implement your security policies, and so you want to ensure that the tools and the technology you're running on have those capabilities built in. That way, when we have conversations with our security-conscious customers, we can talk with them about the attack vectors they care about and illustrate how we are addressing those particular concerns. One of them being malware in a container: StackRox can look for a package manager that could be used to pull in code that could be exploited, and you can stop a running container. We can also do deeper data collection with StackRox. Again, one of the challenges when you're moving your security capabilities from a traditional application environment is that containers come and go all the time. In a Kubernetes cluster, nodes — your servers — can come and go; in a cloud-native Kubernetes cluster running on public cloud infrastructure, the nodes are ephemeral too, designed to be shut down and brought back up. So you've got a lot more data that you need to collect and analyze, and you need to correlate the information between all of these pieces. I no longer have one application stack running on one or more VMs — things are moving fast — so you want the right type of data collection and the right correlation to have good visibility into your environment. >>And if I can just build on that a little bit: the whole idea here is that these policies really serve as guardrails for the developers. It allows developers to move quickly, to accelerate the speed of development, without having to worry about hundreds of potential security issues, because there are guardrails that will notify them, with concrete recommendations, early in the process. And the analogy I often use is that the reason we have brakes in our cars is not to slow us down but to allow us to go faster, because we know we can slow down when we need to, right? So similarly, these policies are really designed to accelerate the speed of development and accelerate the digital transformation initiatives that our customers are embarking on.
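Kirsten's point about data collection in an ephemeral environment — pods and nodes come and go, so raw events only become useful once they are correlated back to the longer-lived workload that owns them — can be sketched as below. The event records are invented; in practice they would be collected from the cluster.

```python
# Roll per-pod events up to the owning workload (namespace + deployment) so the
# security view survives pod and node churn.
from collections import defaultdict

EVENTS = [
    {"namespace": "shop", "deployment": "checkout", "pod": "checkout-6f9c-abcde",
     "event": "process_exec", "detail": "apt-get install curl"},
    {"namespace": "shop", "deployment": "checkout", "pod": "checkout-6f9c-zxywv",
     "event": "net_conn", "detail": "203.0.113.7:4444"},
    {"namespace": "shop", "deployment": "payments", "pod": "payments-75d4-qrstu",
     "event": "net_conn", "detail": "internal-db:5432"},
]

def rollup(events):
    """Group per-pod events by the owning workload rather than by pod name."""
    by_workload = defaultdict(list)
    for e in events:
        by_workload[(e["namespace"], e["deployment"])].append(
            (e["event"], e["detail"], e["pod"]))
    return by_workload

if __name__ == "__main__":
    for (ns, deploy), items in rollup(EVENTS).items():
        print(f"{ns}/{deploy}: {len(items)} events")
        for event, detail, pod in items:
            print(f"  - {event}: {detail} (seen on {pod})")
```

Two different checkout pods report here as one workload, which is what keeps the data usable after those pods have been replaced.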
>>And Kamal, I want to stick with you on the digital transformation front. We've talked so much about how accelerated that has been in the last year, with everything going on in such a dynamic market. Talk to me, Kamal, about some of the feedback that you've gotten from StackRox customers about the acquisition, and how it is maybe a facilitator of the many pivots that businesses have had to make in the last year to go from survival mode to thriving business. >>Yes, absolutely. The feedback from all of our customers, bar none, has been very, very positive. It's allowed us to invest more in the business — we've publicly stated that we are going to invest more in adding more capabilities; we are more than doubling the size of our teams, as an example — and to really work hand in hand with the broader team at Red Hat to further accelerate the speed of development and digital transformation initiatives. So it's been extremely positive, because we're adding more resources, we're investing more, and we're accelerating the product roadmap compared to what we could do as a startup, as you can imagine. The feedback has been nothing but positive. So that's kind of where we are today. And what we're doing with the Summit is rolling out a new bundle called OpenShift Platform Plus, which includes not just Red Hat ACS, which used to be StackRox, but also the Red Hat OpenShift hybrid cloud platform, as well as Red Hat Advanced Cluster Management — ACM — capabilities, as well as Quay, the container registry. So we're making it easier for our customers to get all the capabilities they need to drive their digital transformation initiatives. It goes back to this customer centricity that Red Hat has, which was also a core value of StackRox, and the winner in all of this, we believe, is ultimately our customers, because that's who we exist to serve. >>Right. And I really like that — if I could chime in on top of that a little bit. I think one of the things we've seen with the pandemic is that more Red Hat customers are accelerating their move to public cloud and away from on-premises data centers, partly because of so many people working remotely; it has really pushed things. And so hybrid cloud is becoming even more key to our joint customer base — and by hybrid cloud, I mean that they have some environments that are on premises as they're making this transition; some of that footprint may stay on premises, but it might be smaller. They may not have settled on a single public cloud — in fact, they often are picking a public cloud based on where their development focus is. Google is very popular for AI and ML workloads; Amazon, of course, is used by pretty much everybody; and then Azure is popular with a subset of customers as well. So we see our customers investing in all of these environments, and StackRox — Red Hat ACS — like OpenShift, runs in all of these environments. So with OpenShift Platform Plus you get a complete solution that helps with multicluster management with ACM, and with security across all of these environments. You can take one approach to how you secure your clusters, how you secure your workloads, how you manage configurations — you get one approach no matter where you're running your containers and Kubernetes platform when you're doing this with OpenShift Platform Plus. So you also get portability: if today you want to be running in Amazon, and maybe tomorrow you need to spin up a cluster in Google, you can do that; if you're working with EKS or GKE or AKS, you can do that with Red Hat ACS as well. So we really give you everything you need to be successful in this move, and we give you — back to that choice word — the opportunity to choose and to migrate at the speed that works for you. >>So that's simplicity, that's streamlining. I've got to ask you the last question here in our last couple of minutes. Kamal, what's the integration process been like? As we said, the acquisition is just a couple of months in — talk to me about that integration process. What has that been like? >>Yeah, absolutely. So as I mentioned earlier, the process has been very smooth so far — two months in — and it's largely driven by the common set of culture and core values that exists between our two companies.
And so from a product standpoint, we've been working hand in hand — because, as I mentioned earlier, we were partners — on accelerating the joint roadmap that we have here. From a go-to-market perspective, the teams are well integrated: we are going to be rolling out the bundle, and we're going to be rolling out additional options for our customers. We've also publicly announced that we'll be open sourcing Red Hat ACS, formerly known as StackRox, so stay tuned for further news on that announcement. So again, two months in, everybody's been super collaborative, super helpful, super welcoming. The team is well settled, and we're looking forward to focusing on our primary objective, which is just to make sure that our customers are successful. >>Absolutely — that customer focus is absolutely critical, but so is the employee experience, and it sounds like, as we both talked about, the ethos and the core value alignment were probably pretty critical to doing an integration during a very challenging time globally. I appreciate both of you joining me on the program today, sharing what's going on with StackRox, now ACS, and the opportunities for customers to have that built-in Kubernetes-native security. Thanks so much for your time. >>Thank you. >>Thank you. For Kamal Shah and Kirsten Newcomer, I'm Lisa Martin. You're watching theCUBE's coverage of Red Hat Summit, the virtual experience.
Jason McGee & Briana Frank, IBM | IBM Think 2021
>>From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >>Hey, welcome to theCUBE's coverage of IBM Think 2021. I'm Lisa Martin, and I have two IBM alumni with me here today. Please welcome Briana Frank, the Director of Product Management at IBM, and Jason McGee is here as well, IBM Fellow, VP and CTO of the IBM Cloud Platform. Briana and Jason, welcome back to theCUBE. >>Thank you so much for having us. >>You guys were here a couple months ago, but I know there's been a whole bunch of things going on. So Briana, we'll start with you: what's new with IBM Cloud? >>It's been such a rush of announcements lately, but one of my favorites is the IBM Cloud Satellite product. We went GA back in March, and this has been one of the most fun projects to work on as a product manager, because it's all about our clients coming to us and saying, hey, look, these are the problems that we're really faced with as we move to cloud, in our journey to cloud — can you help us solve them? And I think this has been just an exciting place to be in terms of distributed cloud, this new category that's really emerging, where we've taken the IBM Cloud but we've distributed it into lots of different locations: on-prem, at the edge and on other public clouds. It's been a really fun journey, and it's such a great, fulfilling thing to see it come to life, to see clients using it, and to get feedback from analysts and the industry. So it's been a great few months. >>That's good — lots of excitement going on. Jason, talk to me a little bit about — kind of unpack — Cloud Satellite. I see what's flashing in Jason's background is an IBM Cloud Satellite; I love that. Talk to me a little bit about the genesis of it. What were some of the things that customers were asking for? >>Yeah, absolutely. I mean, look, as we've talked about a lot at IBM, as people have gone on their journey to cloud and been moving workloads to the cloud over the last few years, not all workloads have moved, right? Maybe 20% of workloads have moved to the cloud, and for that remaining 80%, sometimes the thing that's inhibiting the move is regulation, compliance, data latency, where my data lives. And so people have been kind of struggling with: how do I get the benefits and speed and agility of public cloud, but apply it to all these applications that maybe need to live in my data center, or need to live on the edge of the network close to my users, or need to live where the data is being generated, or in a certain country? And so the genesis of Satellite was really to take our hybrid strategy and combine it with the public cloud consumption model, and really allow you to have public cloud anywhere you need it — bring those public cloud services into your data center, or bring them to the edge of the network where your data is being generated — and let you get the best of both. And we think that really will unlock the next wave of applications to be able to get the advantages of as-a-service public cloud consumption, while retaining the flexibility to run wherever you need.
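The workflow Jason describes — standing up a Satellite location in your own data center or at the edge and attaching your own Linux hosts to it — can be sketched roughly as follows. This is not official IBM tooling or documentation: the `ibmcloud sat` subcommands and flags shown are assumptions based on the flow described in the conversation, so treat the exact names, regions and host counts as placeholders.

```python
# Rough sketch of the Satellite flow described above: create a location managed
# from a public IBM Cloud region, then attach your own Linux hosts to it.
# The CLI subcommands and flags are assumed, not authoritative.
import subprocess

EXECUTE = False  # leave False to just print the commands instead of invoking the CLI

def run(cmd):
    print("+", " ".join(cmd))
    if EXECUTE:
        subprocess.run(cmd, check=True)

def create_location(name, managed_from):
    # Assumed command shape: a Satellite location managed from a cloud region.
    run(["ibmcloud", "sat", "location", "create", "--name", name, "--managed-from", managed_from])

def attach_hosts(location, host_count):
    # Assumed: generate an attach script to run on each of your own Linux hosts,
    # whether they sit in your data center, at the edge, or in another cloud.
    run(["ibmcloud", "sat", "host", "attach", "--location", location])
    print(f"# run the generated attach script on each of the {host_count} hosts, then assign them")

if __name__ == "__main__":
    create_location("onprem-dc-east", managed_from="wdc")
    attach_hosts("onprem-dc-east", host_count=3)
```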
>>I'm curious, Jason — did you see any particular industries, in the last year of, I don't want to say mayhem, but you know, mayhem, taking really the lead in wanting to work with you guys to understand how to really facilitate digital business transformation? Because we saw a lot of acceleration going on last year. >>Yeah, absolutely. I mean, it's interesting — cloud is fundamentally a pretty horizontal technology; it applies to lots of industries. But I think the past year especially, with Covid and lockdowns and changes in how we all work, has massively accelerated clients' adoption of cloud, and they've been looking for ways to apply those benefits across more of what they do. And I think there are different drivers: there are security and compliance drivers, maybe in places like the financial services industry, but there are also industries like manufacturing and retail that have a geographic footprint — where things run matters to them. And so they're asking, well, how do I get that kind of cloud benefit in all those places too? So I've seen some acceleration in those areas. >>And one of the interesting things that I thought has emerged from an industry focus is this concept of FS Cloud controls. We have specific control and compliance built into the IBM Cloud, and one of the most prevalent questions I get from clients is, when can I get these FS Cloud controls in Satellite, in all of these different locations? And so we've built that in — that's coming later this year. But I was really surprised to hear it from every industry — and I guess I shouldn't be surprised; every industry is transacting money, so it's important to keep things secure. Those FS Cloud controls being extended into the Satellite location is something I hear constantly as a need, no matter the industry, whether it's retail or insurance, etcetera. So I think the security concerns, and being able to offload the burden and chores of security, are huge. >>One of the things we saw a lot last year, Briana, along the security lines, was ransomware booming — ransomware as a service, ransomware getting more personal. We talk to a lot of customers, and to your point, in different industries they're really focused on: it's not if we get hit by ransomware, it's when. So I'm wondering if some of the things that we saw last year are maybe why you're seeing this as such a pervasive need across industries — what do you think? >>Absolutely. I think it's something that you really have to concentrate on full time, and it has to be something you're just maniacally focused on. We have all kinds of frameworks, and actually groups where we're looking at shaping regulation and compliance, and it's really something that we study. So when we can pass on that expertise to our clients, and again offload them — you know, not everyone can be an expert in these areas. I find that relieving our clients of these operational security chores allows them to get back to what they want to do, which is actually just keep inventing and building better technology for their business.
>>I think that's such an important point that Briana is bringing up. Part of the value of something like Satellite is that we can run these technology platforms as a service. And what does "as a service" mean? It means you can tap into a team of people who are the industry's best at building and operating that technology platform. Maybe you've decided that Kubernetes and OpenShift is your go-forward platform as a business — but do you have the team and the skills that you need to operate that yourself? You want to use AI — you probably don't want to become an expert in how to run whatever the latest and greatest AI framework is; you want to actually figure out how to apply it to your business. And so we think that part of what's really attracting people to solutions like Satellite, especially now with the threat you described, is that they can tap into this expertise by consuming things as a service instead of figuring out how to run it all themselves. >>To that point, a lot of times we see really talented developers — I really like talking to incubation teams, where they're building something new and just trying to figure out how to create the next new thing — and it's not that they're not talented enough; they could do whatever they put their minds to. It's just that they don't have enough time, and then it comes down to: what do you really want to spend your time doing? Is it security and operational chores, or is it inventing the next big thing for your business? And I think that's where we're seeing the market really shift — it's not efficient, and really no one wants to do those chores — so if we can offload them, that becomes really powerful. >>It does — resource allocation is key to letting those businesses, to your point, focus on their core competencies: innovating new products, new services, meeting customers where they are, as customers like us become more and more demanding of things being readily available. I do want to understand a little bit, Jason — help me understand how this service is differentiated from some of the competitors in the market. >>Yeah, it's a totally fair question. I would answer that in a couple of ways. First off, anytime you're talking about extending a cloud into some other environment, you obviously need some infrastructure for that application to run on, whether that infrastructure is in your data center or at the edge or somewhere else. And one of the things we've been able to do, by leveraging our hybrid cloud platform — by leveraging things like OpenShift and Linux — is build Satellite in a way where you can bring almost any Linux infrastructure to the table and use it to run Satellite. So we don't require you to buy a certain rack of hardware or particular gear from us; you don't have to replace all your infrastructure. You can use what you have and extend the cloud. And that, to me, is all about — if the goal is to help people build things more quickly and consume cloud, you don't want step one to be "wheel in a whole new data center full of hardware" before you get started.
The second thing I would say is that we have built our whole cloud on this containerized technology, on Kubernetes and OpenShift, which means we're able to deliver more of our portfolio through Satellite. We can deliver application platforms and databases and dev tools and AI and security functions, all as a service, via Satellite. So the breadth of cloud capability that we think we can deliver in this model is much higher than what I think our competitors are going to be able to do. And then finally, I would say it's tied to the IBM view of enterprise and regulated industries — the work Briana mentioned around things like FS Cloud, the work we're doing in telco. We spend a lot of our energy on how we help enterprises and regulated industries take advantage of cloud, and we're extending all of that work outside of our cloud data centers, with Satellite, to all these other places. So you really can move those mission-critical applications into a cloud environment when you do it with us. >>Let's talk about some successes. Briana, tell me about some of the customers that are getting some pretty big business outcomes — and this is a new service — talk to me about how it's being used and consumed, and the benefits. >>Absolutely. A trend that I'm seeing is really the cloud being distributed to the edge, and there are so many interesting use cases I hear every single day about how to really use machine learning and AI at the edge. Maybe it's something as simple as a worker safety app: you're making sure that workers are safe, using video cameras in an office building and alerting someone if they're going into a construction area. You're using AI on the images coming in through the security cameras, doing some analysis and saying this person is or isn't wearing a hard hat, and warning them. But those use cases can be changed so quickly — and we've seen that; I think I've talked about it before — with Covid, you change that to masks, or you could hook the application up to thermal devices. We've seen situations where machine learning is used at the manufacturing edge, so you can determine if there's an issue with your production in a factory. We're seeing edge use cases in hospitals, in terms of keeping the waiting room sanitized based on over-usage. So there are all kinds of really interesting solutions, and I think this is the next area where we're really able to partner with folks that have extraordinary vertical expertise in a specific area, bring that to life at the edge, and be able to really process that data at the edge so there's very little latency. And then you're also able to change those use cases so quickly, because you're really consuming cloud-native best practices and cloud services at the edge. You're not having to install and manage and operate those services at the edge — it's done for you. >>Imagine changing use cases that quickly, in a year-plus when we've seen so much dynamics — pivoting is really key for businesses in any industry, Briana. >>I agree. And that's the thing.
You know, there hasn't been one particular industry — of course we do see a lot in the financial services industry, probably just because we're IBM — but we see it in every industry. Retail: it's interesting to see sporting goods companies that want to have pop-up shops at specific sporting events — how do you have a van that is a sporting goods shop but is only there temporarily, and how do you have a Satellite location in the van? So there are really interesting use cases that have emerged over time due to the need to have this capability at the edge. >>Yeah, necessity is the mother of invention, as they say. Well, thank you both so much for stopping by and sharing what's going on with IBM Cloud Satellite — the new service, the new offerings, and the opportunities in it for customers. I'm sure it's going to be another exciting year for IBM, because you clearly have been very busy. Thank you both for stopping by the program. >>Thanks. >>Thanks so much, Lisa. >>For Briana Frank and Jason McGee, I'm Lisa Martin. You're watching theCUBE's live coverage of IBM Think.
Robin Hernandez, IBM | IBM Think 2021
(bright upbeat music) >> Narrator: From around the globe. It's theCUBE with digital coverage of IBM Think 2021. Brought to you by IBM. >> Welcome back everyone to theCUBE's coverage of IBM Think 2021 virtual, I'm John Furrier, your host. I've got a great guest here Robin Hernandez, vice president Hybrid Cloud Management and Watson AIOps. Robin, great to see you. Thanks for coming on theCUBE. >> Thanks so much for having me, John. >> You know, Hybrid Cloud, the CEO of IBM Arvind loves Cloud. We know that we've talked to him all the time about it. And Cloud is now part of the entire DNA of the company. Hybrid Cloud is validated multi clouds around the corner. This is the underlying pinnings of the new operating system of business. And with that, that's massive change that we've seen IT move to large scale. You're seeing transformation, driving innovation, driving scale, and AI is the center of it. So AIOps is a huge topic. I want to jump right into it. Can you just tell me about your day to day IT operations teams what you guys are doing? How are you guys organized? How you guys bring in value to the customers? What are your teams responsible for? >> Yeah, so for a few years we've been working with our IT customers, our enterprise customers in this transformation that they're going through. As they move more workloads to cloud, and they still have some of their workloads on premise, or they have a strategy of using multiple public clouds, each of those cloud vendors have different tools. And so they're forced with, how do I keep up with the changing rate and pace of this technology? How do I build skills on a particular public cloud vendor when, you know, maybe six months from now we'll have another cloud vendor that will be introduced or another technology that will be introduced. And it's almost impossible for an it team to keep up with the rate and pace of the change. So we've really been working with IT operations in transforming their processes and their skills within their teams and that looking at what tools do they use to move to this cloud operations model. And then as part of that, how do they leverage the benefits of AI and make that practical and purposeful in this new mode of cloud operations >> And the trend that's been booming is this idea of a site reliability engineer. It's really an IT operations role. It's become kind of a new mix between engineering and IT and development. I mean, classic DevOps, we've seen, you know dev and ops, right? You got to operate the developers and the software modern apps are coming in that's infrastructure as course has been around for a while. But now as the materialization of things like Kubernetes and microservices, people are programming the infrastructure. And so the scale is there, and that's been around for a while. Now it's going to go to a whole enterprise level with containers and other things. How is the site reliability engineering persona if you will, or ITOps changed specifically because that's where the action is. And that's where you hear things like observability and I need more data, break down the silos. What's this all about? What's your view? >> Yeah, so site reliability engineering or SRE practices as we call it has really not changed the processes to say that it has to do, but it's more accelerated at an enormous rate and pace. Those processes and the tools as you mentioned, the cloud native tools like Kubernetes have accelerated how those processes are executed. 
Everything from releasing new code and how they work with development, to actually coding the infrastructure and the policies in that development process, to maintaining and observing over the life cycle of an application — the performance, the availability, the response time, and the customer experience. All of those processes that used to happen in silos, with separate teams and sort of a waterfall approach — with SRE practices now, they're happening instantaneously. They're being scaled out. Failback is happening much more quickly so that applications do not have outages. And the rate and pace of this has just accelerated so quickly. This is the transformation of what we call cloud operations. And we believe that as IT teams work more closely with developers and move towards this SRE model, they cannot do this just with their personnel and changing skills and changing tools. They have to do this with modernized tools like AI. And this is where we are recommending applying AI to those processes, so that you can then get automation out of the back end that you would not think about in traditional IT operations, or even in an SRE practice. You have to leverage capabilities and new technologies like AI to accelerate even further. >> Let's unpack the AIOps piece, because I think that's what I'm hearing. I'd love you to clarify this, because it becomes, I think, the key important point, but also kind of confusing to some folks, because IT operations people see that changing. You just pointed out why — honestly, the tools and the culture are changing — but AI becomes a scale point because of the automation piece you mentioned. How does that thread together? How does AIOps specifically change the customer's approach in terms of how they work with their teams and how that automation is being applied? 'Cause I think that's the key thread, right? 'Cause everyone kind of gets the cultural shifts and the tools if they're not living it and putting it in place, but now they want to scale it. That's where automation comes in. Is that right? Is that the right way to think about it? What's your view on this? This is important. >> It's absolutely right. And I always like to talk about AI in other industries before we apply it to IT, to help IT understand. Because a lot of times IT looks at AI as a buzzword and says, "Oh, you know, yes, sure, this is going to help me." But we've been doing AI for a long time, at many different companies, not just at IBM, and if you think about the other industries where we've applied it, healthcare in particular is so tangible for most people. It didn't replace a doctor, but it helps a doctor see the things that would take them weeks and months of studying and analyzing different patients to say, "Hey, John, I think this may be a symptom that we overlooked or didn't think about, or a diagnosis that we didn't think about," without manually looking at all this research. AI can accelerate that so rapidly for a doctor, and it's the same notion for IT. If we apply AI properly to IT, we can accelerate things like remediating incidents or finding a performance problem that may take your IT team months or weeks or even hours to find. AI applied properly finds those issues and diagnoses them — just like in healthcare, it diagnoses issues correctly much more rapidly.
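Robin's healthcare analogy — AI learning what "normal" looks like and surfacing deviations a human would take hours to find — can be illustrated with a deliberately simple sketch. This is not Watson AIOps; it is a generic rolling-baseline check on a single response-time metric, with invented sample data.

```python
# Flag metric samples that deviate sharply from the trailing baseline.
from statistics import mean, stdev

def anomalies(series, window=10, threshold=3.0):
    """Yield (index, value) pairs whose z-score against the trailing window exceeds threshold."""
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue
        z = (series[i] - mu) / sigma
        if abs(z) > threshold:
            yield i, series[i]

if __name__ == "__main__":
    response_ms = [102, 98, 101, 99, 103, 100, 97, 102, 99, 101,
                   100, 98, 102, 250, 99, 101]  # one obvious spike
    for idx, value in anomalies(response_ms):
        print(f"possible incident forming at sample {idx}: {value} ms")
```

A real platform would do this across thousands of correlated metrics, logs and events rather than one series, but the baseline-and-deviation idea is the same.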
>> Now again, I want to get your thoughts on something while you're here, 'cause you've been in the business a long time — 20 years of experience — you know cloud, you know the modern era you're managing in now. Clients are having a scenario where it's, "Okay, I'm changing over the culture. Okay, I got some cloud, I got some public and I got some hybrid, and man, we did some agile things. We're provisioned, it's all done, it's out there." And all of a sudden someone adds something new and it crashes (chuckles). And now I've got to get in: where's the risks, where's the security holes? They're seeing this kind of day two operations, as some people call it — another buzzword — but it's becoming more of, "Okay, we got it up and running, but we're still going to push some code, and things are starting to break — and that's a net new thing." So it's kind of like they're out of their comfort zone. This is where I see AIOps evolving quickly, because there's kind of a DevSecOps piece, there's also data involved, observability. How do you talk to that scenario? Where, okay, you sold me on cloud, I've been doing it, I did some projects, we're now up and running, we've got a production system, and we added something new — something maybe trivial — and it breaks stuff? >> Yes. Yeah, so with the new cloud operations and SRE, the IT teams are much more responsible for business outcomes — not just, as you say, the application being deployed and the application being available, but the life cycle of that application and the results it's bringing to the end users and the business. And what this means is that IT needs to partner much more closely with development. And it is hard for them to keep up with the tools that are being used, and the new code and the microservices architectures that developers are using. So we like to apply AI to what we call the change risk management process. Everyone's familiar with change management: it means a new piece of code is being released; you have to know where that code is being released to as part of the application architecture, and make sure it's scaled out and rolled out properly within your enterprise policies. When we apply AI, we then apply what we call a risk factor to that change, because we know that so often application outages occur when something new is introduced into the environment. So by applying AI, we can give you a risk rating that says, "There's an 80% probability that this change you're about to roll out — a code change — is going to cause a problem in this application." It allows you to go back and work with the development team and say, "Hey, how do we reduce this risk?" Or you decide to take that calculated risk, but with visibility into where those risks may occur. So change risk management is a great example of how applying AI can make you more intelligent in your decisions, much more tied to the business and tied to the application release team. >> That's awesome. Well, I've got you here on this point of change management. The term "Shift Left" has come up a lot in the industry. I'd love to get your quick definition of what that is in your mind. What does Shift Left mean for Ops teams with AIOps? >> Yeah, so in the early days of IT there was a hard line between your development and IT teams. It was, as we always said, throwing it over the fence, right?
The developers would throw the code over the fence and say, good luck IT — figure out how to deploy it where it needs to be deployed, and cross your fingers that nothing bad happens. Well, Shift Left is really about breaking down that fence. And if you think of your developers on your left-hand side and you being the IT team, it's really shifting more towards that development team: getting involved in the code release process, getting involved in their CI/CD pipeline, to make sure that all of your enterprise policies, and what that code needs to run effectively in your enterprise application and architecture, are coded ahead of time with the developer. So it's really about partnering between IT and development, shifting left to have more collaboration, versus throwing things over the fence and playing the blame game, which is what happened a lot in the early days of IT. >> Yeah, and you get a smarter team out of it — great point. That's great insight, thanks for sharing that. I think it's super relevant. That's the hot trend right now: making developers more productive, building security in from the beginning — while they're doing it, code it right in, make it security-proof, if you will. I've got to ask you one of the organizational questions, as an IBM leader: what are some of the roadblocks that you see in organizations when they embrace AIOps and try to scale it, and how can they overcome those blockers? What are some of the things you're seeing that you could share with other folks who are maybe watching and trying to solve this problem? >> Yeah, so AI in any industry or discipline is only as good as the data you feed it. AI is about learning from past trends and creating a baseline for what is normal in your environment, what is most optimal in your environment — this being your enterprise application running in steady state. And so if you think back to the healthcare example: if we only have five or six pieces of patient data that we feed the AI, then the AI recommendation to the doctor is going to be pretty limited. We need a broad set of use cases across a wide demographic of people in the healthcare example, and it's the same with IT — applying AI to IT, you need a broad set of data. So one of the roadblocks that we hear from many customers is, well, I'm using an analytics tool already and I'm not really getting a lot of good recommendations or automation out of it. And we often find it's because they're pulling data from one source — likely performance metrics, performance of what's happening with the infrastructure: CPU utilization, memory utilization, storage utilization. Those are all good metrics, but without the context of everything else in your environment — without pulling in data from what's happening in your logs, pulling in unstructured data from things like collaboration tools (what is your team saying? what are customers saying about the experience with your application?) — the recommendations stay limited. You have to pull in many different data sets across IT and the business in order to make that AI recommendation the most useful. And so we recommend a more holistic, true AI platform versus a very segregated data approach to applying and feeding the analytics or AI engine.
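The change risk management idea Robin described a couple of questions back — scoring a pending change by how similar changes have behaved in the past — can be sketched without any ML library at all. This is a toy illustration with invented change records, not the model behind Watson AIOps.

```python
# Estimate the probability that a pending change causes an incident from the
# historical failure rate of changes with the same (coarse) attributes.
from collections import defaultdict

HISTORY = [
    # (size_bucket, touches_config, off_hours, caused_incident) -- invented records
    ("small", False, False, False),
    ("small", False, False, False),
    ("small", True,  False, True),
    ("small", True,  True,  False),
    ("large", True,  True,  True),
    ("large", False, False, True),
    ("large", True,  False, True),
    ("large", True,  False, False),
    ("large", False, True,  False),
]

def risk(change, history):
    """Share of past changes with the same attributes that caused an incident (None if no history)."""
    counts = defaultdict(lambda: [0, 0])  # key -> [incidents, total]
    for size, cfg, off, incident in history:
        counts[(size, cfg, off)][1] += 1
        counts[(size, cfg, off)][0] += int(incident)
    incidents, total = counts.get((change["size"], change["touches_config"], change["off_hours"]), [0, 0])
    return incidents / total if total else None

if __name__ == "__main__":
    pending = {"size": "large", "touches_config": True, "off_hours": False}
    score = risk(pending, HISTORY)
    print("no comparable history" if score is None else f"estimated risk for this change: {score:.0%}")
```

A production system would learn over far more features and far more history, but the output — a risk rating attached to a specific change — is the same shape as what Robin describes.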
>> That's awesome, it's like a masterclass right there. Robin, great stuff, great insight. We'll quickly wrap — I would love you to take a quick minute to explain and share what some of the use cases are to get started and really get to AIOps successes, for people who want to explore more, dig in, and get into this fast. What are some use cases, what's some low-hanging fruit? What would you share? >> Yeah, we know that IT teams like to see results and they hate black boxes. They like to see into everything that's happening and understand it deeply. And so this is one of our major focus areas: as we say, we're making AI purposeful for IT teams. On the low-hanging fruit — we have visions, and lots of our enterprise customers have visions, of applying AI to everything from the customer experience of the application to cost management of the application and infrastructure, and many different aspects. But some of the low-hanging fruit is really expanding the availability and the service level agreements of your applications. Many people will say, I have 93% uptime availability, or an agreement with my business that this application will be up 93% of the time. Applying AI, we can increase those numbers to 99.9% of the time, because it learns from past problems, it creates that baseline of what's normal in your environment, and then it will tell you before an application outage occurs. So avoiding application outages — and then improving performance, recommendations and scalability: what's the number of users coming in versus your normal scale rate, and automating that scalability. So performance improvements and scalability are another low-hanging-fruit area where many IT teams are starting. >> Yeah, I mean, why wouldn't you want to have AIOps? It's totally cool, very relevant. You're seeing hybrid cloud standardized all across business; you've got to have that data and you've got to have that incident management work there. Robin, great insight. Thank you for sharing. Robin Hernandez, vice president of Hybrid Cloud Management and Watson AIOps. Thanks for coming on theCUBE. >> Thank you so much for having me, John. >> Okay, this is theCUBE's coverage of IBM Think 2021. I'm John Furrier, your host. Thanks for watching. (bright upbeat music)
Io-Tahoe Episode 5: Enterprise Digital Resilience on Hybrid and Multicloud
>>From around the globe, it's theCUBE, presenting Enterprise Digital Resilience on Hybrid and Multicloud, brought to you by Io-Tahoe. Hello, everyone, and welcome to our continuing series covering data automation, brought to you by Io-Tahoe. Today we're going to look at how to ensure enterprise resilience for hybrid and multicloud. Let's welcome in Ajay Vohora, who is the CEO of Io-Tahoe. AJ, always good to see you again. Thanks for coming on. >>Great to be back. Dave, pleasure. >>And he's joined by Fozzy Coons, who is a Global Principal Architect for the financial services vertical at Red Hat. He's got deep experience in that sector. Welcome, Fozzy. Good to see you. >>Thank you very much. Happy to be here. >>Fozzy, let's start with you. Look, there are a lot of views on cloud and what it is. I wonder if you could explain to us how you think about what a hybrid cloud is and how it works. >>Sure, yes. So hybrid cloud is an IT architecture that incorporates some degree of workload portability, orchestration and management across multiple clouds. Those clouds could be private clouds or public clouds, or even your own data centers. And how does it all work? It's all about secure interconnectivity and on-demand allocation of resources across clouds, and separate clouds can become hybrid when they're seamlessly interconnected. It is that interconnectivity that allows workloads to be moved and management to be unified, and how well you have these interconnections has a direct impact on how well your hybrid cloud will work. >>Okay, so Fozzy, staying with you for a minute. In the early days of cloud, the term private cloud was thrown around a lot, but it often just meant virtualization of an on-prem system and a network connection to the public cloud. Let's bring it forward: what, in your view, does a modern hybrid cloud architecture look like? >>Sure. So for modern hybrid clouds, we see that teams and organizations need to focus on the portability of applications across clouds. That's very important, right? When organizations build applications, they need to build and deploy these applications as small collections of independent, loosely coupled services, and then have those things run on the same operating system — which means, in other words, running on Linux everywhere — building cloud-native applications, and being able to manage and orchestrate those applications with platforms like Kubernetes or Red Hat OpenShift, for example. >>Okay, so that's definitely different from building a monolithic application that's fossilized and doesn't move. So what are the challenges for customers in getting to that modern cloud as you've just described it? Is it skill sets? Is it the ability to leverage things like containers? What's your view there? >>So, from what we've seen around the industry, especially around financial services, where I spend most of my time, the first thing we see is management. Because you have all these clouds and all these applications, you have a massive array of connections, of interconnections; you also have a massive array of integrations, portability and resource allocations as well; and then orchestrating all those different moving pieces — things like storage and networks — is really difficult to manage, right? That's one. So management is the first challenge.
The second one is workload placement. Where do you place these cloud-native applications? Do you keep them on site, on-prem, and what do you put in the cloud? That is the other challenge — the major one. The third one is security. Security now becomes the key challenge and concern for most customers, and we can talk about how we address that. >>Yeah, we're definitely going to dig into that. Let's bring AJ into the conversation. AJ, you know, you and I have talked about this in the past: one of the big problems that virtually every company faces is data fragmentation. Talk a little bit about how Io-Tahoe unifies data across both traditional, legacy systems and how it connects to these modern IT environments. >>Yeah, sure, Dave. I mean, Fozzy just nailed it. It used to be about the volume of data and the different types of data, but as applications become more connected and interconnected, the location of that data really matters — how we serve that data up to those apps. So working with Red Hat, in our partnership, being able to inject our data discovery machine learning into these multiple different locations — whether it be in AWS, on IBM Cloud, or GCP, or on-prem — and being able to automate that discovery and pull together that single view of where all my data is, then allows the CDO to manage it: to do things like, one, keep the data where it is, on premise or in my Oracle cloud or in my IBM cloud, and connect the application that needs to feed off that data. And the way in which you do that is machine learning that learns over time, as it recognizes different types of data, applies policies to classify that data, and brings it all together with automation. >>Right, and that's one of the big themes we've talked about on earlier episodes: really simplification, really abstracting a lot of that heavy lifting away so we can focus on things, as you just mentioned, AJ. Now Fozzy, one of the big challenges that, of course, we all talk about is governance across these disparate data sets. I'm curious as to your thoughts: how does Red Hat think about helping customers adhere to corporate edicts and compliance regulations, which, of course, are particularly acute within financial services? >>Oh, yeah, yes. So for banks and the payment providers, like you've just mentioned, the insurers and many other financial services firms, they have to adhere to standards such as PCI DSS, and in Europe you've got GDPR, which requires stringent tracking, reporting and documentation. And for them to remain in compliance, the way we recommend our customers address these challenges is by having an automation strategy. That type of strategy can help you to improve the security and compliance of the organization and reduce the risk to the business. We help organizations build security and compliance in from the start with our consulting services and residencies, and we also offer courses that help customers understand how to address some of these challenges.
and even by using a platform like OpenShift, because it allows you to run legacy applications and also containerized applications in a unified platform, and it also provides you with the automation and the tooling that you need to continuously monitor, manage and automate those systems for security and compliance purposes. >>AJ, anything, any color you could add to this conversation? >>Yeah, I'm pleased Fozzie brought up OpenShift. I mean, we're using OpenShift to be able to take that security application of controls down to the data level. It's all about context: understanding what data is there, being able to assess it to say who should have access to it, which application permissions should be applied to it. That's a great combination of Red Hat and Io-Tahoe. >>But what about multi-cloud? Doesn't that complicate the situation even further? Maybe you could talk about some of the best practices to apply automation across not only hybrid cloud but multi-cloud as well. >>Yeah, sure. So the right automation solution can be the difference between cultivating an automated enterprise or automation chaos. Some of the recommendations we give our clients are to look for an automation platform that can offer, first, complete support. That means an automation solution that promotes IT availability and reliability with your platform, so that you can provide enterprise-grade support, including security and testing, integration and clear roadmaps. The second thing is vendor interoperability, in that you are going to be integrating multiple clouds, so you're going to need a solution that can connect to multiple clouds seamlessly, right? And with that comes the challenge of maintainability, so you're going to need to look into an automation solution that is easy to learn or has an easy learning curve. And then the fourth idea we tell our customers is scalability. In the hybrid cloud space, scale is a big, big deal, and you need to deploy an automation solution that can span the whole enterprise in a consistent manner, and that also allows you, finally, to integrate the multiple data centers that you have. >>So AJ, this is a complicated situation, for if a customer has to make sure things work on AWS or Azure or Google, they're going to spend all their time doing that. What can you add to really simplify that multi-cloud and hybrid cloud equation? >>Yeah, I can give a few customer examples here. One is a manufacturer that we've worked with to drive that simplification, and the real bonus for them has been a reduction in cost. We worked with them late last year to bring their cost base down by $10 million in 2021 so they could hit that reduced budget. What we brought to that was the ability to deploy, using OpenShift templates, into their different environments, whether that is on-premise or, as you mentioned, AWS; they had GCP as well for their marketing team. Across those different platforms, being able to use a template, use pre-built scripts, to get up and running and catalog and discover that data within minutes takes away the legacy of having teams of people jump on workshop calls. And I know we're all on a lot of Teams and Zoom calls in these current times; there just aren't enough hours in the day to manually perform all of this. So yeah, working with Red Hat, applying machine learning into those templates, those little recipes, so that we can put that automation to work regardless of which location the data is in, allows us to pull that unified view together. Right?
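To make that template-driven, multi-environment idea concrete, here is a minimal, hypothetical Python sketch. It is not Io-Tahoe's or Red Hat's actual tooling; the environment names, kubeconfig contexts, image and namespace are assumptions. It simply renders one deployment "recipe" per environment and applies it with the standard Kubernetes Python client, which is the same pattern an OpenShift template automates at larger scale.

```python
"""Minimal sketch: apply one parameterized deployment across several clusters."""
from kubernetes import client, config

# Hypothetical environments, each mapped to a kubeconfig context and a size.
ENVIRONMENTS = {
    "on-prem": {"context": "onprem-cluster", "replicas": 2},
    "aws":     {"context": "aws-cluster",    "replicas": 3},
    "gcp":     {"context": "gcp-cluster",    "replicas": 2},
}

def render_template(env_name: str, replicas: int) -> dict:
    """Fill a single deployment recipe with per-environment parameters."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "data-catalog", "labels": {"env": env_name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": "data-catalog"}},
            "template": {
                "metadata": {"labels": {"app": "data-catalog", "env": env_name}},
                "spec": {"containers": [{
                    "name": "data-catalog",
                    "image": "registry.example.com/data-catalog:1.0",  # assumed image
                    "ports": [{"containerPort": 8080}],
                }]},
            },
        },
    }

def deploy_everywhere(namespace: str = "discovery") -> None:
    for env_name, cfg in ENVIRONMENTS.items():
        # Select the cluster for this environment from the local kubeconfig.
        config.load_kube_config(context=cfg["context"])
        apps = client.AppsV1Api()
        manifest = render_template(env_name, cfg["replicas"])
        apps.create_namespaced_deployment(namespace=namespace, body=manifest)
        print(f"applied data-catalog deployment to {env_name}")

if __name__ == "__main__":
    deploy_everywhere()
```

The design point is the one AJ makes: the application definition is written once, and only small parameters change per environment, so adding a new cloud is a configuration change rather than a new engineering project.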
>>Thank you. Fozzie, I wanna come back to you. So in the early days of cloud, and you were in the Big Apple, you know financial services really well, cloud was like an evil word within financial services. Obviously that's changed, it's evolved, and we've talked about how the pandemic has even accelerated that. And when you really dug into it, when you talked to customers about their experiences with security in the cloud, it wasn't that it wasn't good; it was great, whatever, but it was different. And there's always this issue of a lack of skills, and multiple tools suck up teams; they're really overburdened. But the cloud requires new thinking: you've got the shared responsibility model, you've obviously got specific corporate requirements and compliance. So this is even more complicated when you introduce multiple clouds. What are the differences that you can share from your experience running either on-prem or on a mono-cloud versus across clouds? What do you suggest there? >>Yeah, you know, because of these complexities that you have explained here, misconfigurations and inadequate change control are the top security threats. So human error is what we want to avoid, because as your clouds grow in complexity and you put humans in the mix, the rate of errors is going to increase, and that is going to increase your exposure to security threats. This is where automation comes in, because automation will streamline and increase the consistency of your infrastructure management, and also application development and even security operations, to improve your protection, compliance and change control. So you want to consistently configure resources according to pre-approved policies, and you want to proactively maintain them in a repeatable fashion over the whole lifecycle. You also want to rapidly identify systems that require patches and reconfiguration, and automate that process of patching and reconfiguring, so that you don't have humans doing this type of thing, right? You want to be able to easily apply patches and change system settings according to predefined, pre-approved policies, as I explained before, and you also want ease of auditing and troubleshooting. And from a Red Hat perspective, we provide tools that enable you to do this. We have, for example, a tool called Ansible that enables you to automate data center operations and security and also deployment of applications, and OpenShift itself automates most of these things and abstracts the human beings from putting their fingers on systems and potentially introducing errors. Now, looking into this new world of multiple clouds and so forth, the differences we're seeing between running on a single cloud or on-prem come down to three main areas, which are control, security and compliance. Control here means that if you're on-premise or you have one cloud, in most cases you have control over your data and your applications, especially if you're on-prem.
However, if you're in the public cloud, there is a difference there. The ownership is still yours, but your resources are running on somebody else's, the public cloud's, infrastructure, AWS's and so forth. So people that are going to do this, especially banks and governments, need to be aware of the regulatory constraints of running those applications in the public cloud, and we also help customers rationalize some of these choices. On security, you will see that if you're running on-premises or in a single cloud, you have more control, especially on-prem, where you can control the sensitive information that you have. In the cloud, however, that's a different situation, especially for personal information of employees and things like that; you need to be really careful with that, and again, we help you rationalize some of those choices. And then the last one is compliance. Similarly, if you're running on-prem or in a single cloud, regulations come into play again, right? If you're running on-prem, you have control over that: you can document everything, you have access to everything that you need. But if you're going to go to the public cloud, again, you need to think about that. We have automation, and we have standards, that can help you address some of these challenges for security and compliance. >>Those are really strong insights, Fozzie. I mean, first of all, Ansible has a lot of market momentum; Red Hat's done a really good job with that acquisition. Your point about repeatability is critical, because you can't scale otherwise. And that idea you're putting forth about control, security and compliance is so true. I called it the shared responsibility model, and there was a lot of misunderstanding in the early days of cloud. I mean, yeah, maybe AWS is going to physically secure, say, S3, but not what's in the bucket, and we saw so many misconfigurations early on. So it's key to have partners that really understand this stuff and can share the experiences of other clients. So this all sounds great. AJ, you're sharp, you know the financial background. What about the economics? You know, our survey data shows that security is at the top of the spending priority list, but budgets are stretched thin, especially when you think about the work-from-home pivot and all the holes they had to fill there, whether it was laptops, new security models, etcetera. So how do organizations pay for this? What does the business case look like, in terms of maybe reducing infrastructure costs so I can pay it forward, or is there a risk-reduction angle? What can you share there? >>Yeah, the perspective I'd like to give here is that multi-cloud is not multiple copies of an application or data. When I think back about 20 years, a lot of the work I was doing in financial services was managing copies of data that were feeding different pipelines, different applications. Now a lot of the work that we're doing is reducing the number of copies of that data, so that if I've got a product lifecycle management set of data, if I'm a manufacturer, I'm just going to keep that in one location. But across my different clouds, I'm going to have best-of-breed applications, developed in-house or by third parties, in collaboration with my supply chain, connecting securely to that single version of the truth.
What I'm not going to do is copy that data. So a lot of what we're seeing now is that interconnectivity, using applications built on Kubernetes and decoupled from the data source, allows us to reduce those copies of data, and with that you're gaining on the security capability and resilience, because you're not leaving yourself open to those multiple copies of data, and with those copies comes cost: the cost of storage and the cost of compute. So what we're seeing is using multi-cloud to leverage the best of what each cloud platform has to offer, and that goes all the way to Snowflake and Heroku and cloud-managed databases, too. >>Well, and the people cost as well, when you think about, yes, the copy creep, but then, when something goes wrong, a human has to come in and figure it out. You brought up Snowflake; they've got this vision of the data cloud, and I think we're going to be rethinking, AJ, data architectures in the coming decade, where data stays where it belongs, it's distributed, and you're providing access. Like you said, you're separating the data from the applications, and the applications, as we talked about with Fozzie, become much more portable. So really, the last ten years will be different than the next ten years, AJ. >>Definitely. I think the people cost element, as you said: gone are the days where you needed to have a dozen people governing and managing the policies applied to data. A lot of that repetitive work, those tasks, can now be automated. We've seen examples in insurance where we've reduced teams of 15 people working in the back office trying to apply security controls and compliance down to just a couple of people who are looking at the exceptions that don't fit. And that's really important, because maybe two years ago the emphasis was on regulatory compliance of data, with policies such as GDPR and CCPA; last year it was very much the economic effect of reduced headcounts and enterprises running lean, looking to reduce that cost. This year, we can see that already some of the more proactive companies are looking at initiatives such as net-zero emissions, how they can use data to understand how they can have a better social impact, and using data to drive that across all of their operations and supply chain. So for those regulatory compliance issues that may have been external, we see similar patterns emerging for internal initiatives that benefit the environment, social impact and, of course, governance. >>Great perspectives. Yeah, Jeff Hammerbacher once famously said the best minds of my generation are trying to get people to click on ads, and AJ, those examples that you just gave of, you know, social good and moving things forward are really critical, and I think that's where data is going to have the biggest societal impact. Okay, guys, great conversation. Thanks so much for coming on the program. Really appreciate your time. Keep it right there for more insight and conversation around creating a resilient digital business model. You're watching theCUBE. >>Digital resilience: automated compliance, privacy and security for your multi-cloud. Congratulations, you're on the journey. You have successfully transformed your organization by moving to a cloud-based platform to ensure business continuity in these challenging times.
But as you scale your digital activities, there is an inevitable influx of users that outpaces traditional methods of cybersecurity, exposing your data to underlying threats and making your company susceptible to ever greater risk. To become digitally resilient, have you applied controls to your data continuously throughout the data lifecycle? What are you doing to keep your customer and supplier data private and secure? Io-Tahoe's automated sensitive data discovery is pre-programmed with over 300 existing policies that meet government-mandated risk and compliance standards. These automate the process of applying policies and controls to your data. Our algorithm-driven recommendation engine alerts you to risk exposure at the data level and suggests the appropriate next steps to remain compliant and ensure sensitive data is secure. Unsure about where your organization stands in terms of digital resilience? Sign up for our minimal-cost, commitment-free data health check. Let us run our sensitive data discovery on key unmapped data silos and sources to give you a clear understanding of what's in your environment. Book time with an Io-Tahoe engineer now. >>Okay, let's now get into the next segment, where we'll explore data automation, but from the angle of digital resilience within an as-a-service consumption model. We're now joined by Yusuf Khan, who heads data services for Io-Tahoe, and Shirish, who is the vice president and head of U.S. sales at Happiest Minds. Gents, welcome to the program. Great to have you in theCUBE. >>Thank you, David. >>Shirish, you guys at Happiest Minds talk about this notion of born digital, born agile. I like that. But talk about your mission at the company. >>Sure. Formed in 2011, Happiest Minds is a born-digital, born-agile company. The reason is that we are focused on customers: our customer-centric approach and delivering digital and seamless solutions have helped us be in the race along with the Tier 1 providers. Our mission, happiest people, happiest customers, is focused on enabling customer happiness through people happiness. We have been ranked among the top 25 IT services companies in the Great Place to Work survey, and our Glassdoor rating of 4.1 out of five is among the top of the Indian IT services companies. That shows the mission and the culture. What we have built on our values, sharing, mindfulness, integrity, learning and social responsibility, are the core values of our company, and that's where the entire culture of the company has been built. >>That's great. That sounds like a happy place to be. Now, Yusuf, you head up data services for Io-Tahoe; we've talked in the past, and of course you're out of London. What's your day-to-day focus with customers and partners? What are you focused on? >>Well, David, my team works daily with customers and partners to help them better understand their data, improve their data quality and their data governance, and help them make that data more accessible in a self-service kind of way to the stakeholders within those businesses. And this is all a key part of digital resilience, which we'll come on to talk about later. >>Right. I mean, that self-service theme is something that we're going to really accelerate this decade, Yusuf. But before we get into that, maybe you could talk about the nature of the partnership with Happiest Minds. Why do you guys choose to work closely together? >>Very good question.
We see Io-Tahoe and Happiest Minds as a great mutual fit. As Shirish has said, Happiest Minds is a very agile organization, and I think that's one of the key things that attracts their customers. Io-Tahoe is all about automation: we're using machine learning algorithms to make data discovery, data cataloging and understanding data much easier, and we're enabling customers and partners to do it much more quickly. So when you combine our emphasis on automation with the emphasis on agility that Happiest Minds have, that's a really nice combination; it works very well together, very powerful. I think the other things that are key are that both businesses, as Shirish has said, are really innovative, digital-native type companies, very focused on newer technologies, the cloud, etcetera. And then finally, I think they're both challenger brands, and Happiest Minds have a really positive, fresh, ethical approach to people and customers that really resonates with us at Io-Tahoe too. >>Great, thank you for that. So Shirish, let's get into the whole notion of digital resilience. I want to set it up with what I see, and maybe you can comment. Prior to the pandemic, a lot of customers kind of equated disaster recovery with their business continuance or business resilience strategy, and that's changed almost overnight. How have you seen your clients respond to what I sometimes call the forced march to become a digital business, and maybe you could talk about some of the challenges that they faced along the way? >>Absolutely. So especially during these pandemic times, as you say, Dave, customers have been having tough times managing their business. Happiest Minds, being a digitally resilient company, was able to react much faster than other services companies in the industry. One of the key things is that organizations are trying to adopt digital technologies, right? There has been a lot of data which has to be managed by these customers, and there have been a lot of threats and risks which have to be managed by the CIOs and CISOs. So Happiest Minds' digital resilience approach, where we bring in data compliance as a service, meant we were able to manage resilience much ahead of other competitors in the market. We were able to bring in our business continuity processes from day one, where we were able to deliver our services without any interruption to the services we deliver to our customers. That is where digital resilience, with business continuity processes enabled, was very helpful for us in enabling our customers to continue their business without any interruptions during the pandemic. >>So some of the challenges that customers tell me about: they obviously had to figure out how to get laptops to remote workers and manage that whole remote-work-from-home pivot, figure out how to secure the endpoints, and, you know, looking back, those were kind of table stakes. But it sounds like you've got a point of view that a digital business means a data business, putting data at the core, as I like to say. So I wonder if you could talk a little bit more about the philosophy you have toward digital resilience and the specific approach you take with clients. >>Absolutely. Dave, as you've seen, in any organization data becomes the key, so for us the first step is to identify the critical data. This is a six-step process we follow at Happiest Minds.
First of all, we take stock of the current state. Though customers think that they have clear visibility of their data, we do a more thorough assessment from an external point of view and see how critical their data is. Then we help the customers strategize. The most important thing is to identify the most critical assets, and data being the most critical asset for any organization, identification of the data is key for the customers. Then we help in building a viable operating model to ensure these identified critical assets are secured and monitored regularly, so that they are consumed well as well as protected from external threats. Then, as a fourth step, we try to bring in awareness to the people: we train them at all levels in the organization, that is the P for people, to understand the importance of the digital assets. As a fifth step, we work out a backup plan, in terms of bringing in a comprehensive and holistic testing approach across people, process and technology, to see how the organization can withstand a crisis. And finally, we do continuous governance of this data, which is key, right? It is not just a one-step process: we set up the environment, we do the initial analysis and set up the strategy, and then we continuously govern this data to ensure that it is not only managed well and secured, but also meets the compliance requirements of the organization. That is where we help organizations secure the data and meet the regulations, as per the privacy laws. So this is a constant process; it's not a one-time effort. Every organization goes through their digital journey, and they have to face all of this as part of the evolving environment on that journey, and that's where they should be kept ready in terms of recovering, rebounding and moving forward if things go wrong. >>So let's stick on that for a minute, and then I want to bring Yusuf into the conversation. You mentioned compliance and governance. When you're a digital business, you're, as you say, a data business, and that brings up issues: data sovereignty, governance, compliance, things like the right to be forgotten, data privacy, so many things. These were often afterthoughts for businesses, bolted on, if you will, and I know a lot of executives are very much concerned that these are built in, and it's not a one-shot deal. So do you have solutions around compliance and governance? Can you deliver that as a service? Maybe you could talk about some of the specifics there. >>So we have offered multiple services to our customers around digital resilience, and one of the key services is data compliance as a service. Here we help organizations map their key data against the data compliance requirements. Some of the features include continuous discovery of data, because organizations keep adding data as they go more digital, and helping them understand the actual data in terms of where it resides. It could be heterogeneous data sources: it could be in databases, or in data lakes, or it could even be on-premise or in any cloud environment. So identifying the data across the various heterogeneous environments is a very key feature of our solution.
Once we identify and classify this sensitive data, the data privacy regulations and the prevailing laws have to be mapped based on the business rules. So we define those rules and help map that data, so that organizations know how critical their digital assets are. Then we work on continuous monitoring of data for anomalies, because that's one of the key features of the solution, which needs to be implemented on a day-to-day operational basis. So we help in monitoring those anomalies of data, for data quality management, on an ongoing basis. And finally, we also bring in automated data governance, where we can manage the sensitive data policies and their data relationships in terms of mapping, manage the business rules, and drive remediations, also suggesting appropriate actions for the customers to take on those specific data sets. >>Great. Thank you, Yusuf, thanks for being patient. I want to bring Io-Tahoe into the discussion and understand where your customers and Happiest Minds can leverage your data automation capability that you and I have talked about in the past. It would be great if you had an example as well, but maybe you could pick it up from there. >>Sure. At a high level, as Shirish has clearly articulated, Io-Tahoe delivers business agility. That's by accelerating the time to operationalize data, automating, putting in place controls, and actually helping put in place digital resilience. If we step back a little bit in time, traditional resilience in relation to data often meant manually making multiple copies of the same data. You would have a DBA, they would copy the data to various different places, and then business users would access it in those functional silos. And of course what happened was you ended up with lots of different copies of the same data around the enterprise: very inefficient, and ultimately it increases your risk profile, your risk of a data breach. It's very hard to know where everything is. And I liked that expression you used, David, the idea of the forced march to digital. With enterprises that are going on this forced march, what they're finding is they don't have a single version of the truth, and almost nobody has an accurate view of where their critical data is. Then you have containers, and with containers that enables a big leap forward: you can break applications down into microservices, updates are available via APIs, and so you don't have the same need to build and manage multiple copies of the data. You have an opportunity to just have a single version of the truth. Then your challenge is, how do you deal with these large legacy data estates that Shirish has been referring to, where you have to consolidate? And that's really where Io-Tahoe comes in. We massively accelerate that process of putting a single version of the truth into place: by automatically discovering the data, discovering what's duplicate and what's redundant, you can consolidate it down to a single trusted version much more quickly. We've seen many customers who have tried to do this manually, and it's literally taken years, using manual methods, to cover even a small percentage of their IT estates. With Io-Tahoe you can do it very quickly, and you can have tangible results within weeks and months. Then you can apply controls to the data based on context: who's the user, what's the content, what's the use case?
Things like data quality validations or access permissions. Then, once you've done that, your applications and your enterprise are much more secure and much more resilient as a result. You've got to do these things whilst retaining agility, though, so coming full circle, this is where the partnership with Happiest Minds really comes in as well. You've got to be agile, you've got to have controls, and you've got to drive toward the business outcomes, and it's doing those three things together that really delivers for the customer. >>Thank you, Yusuf. I mean, you and I in previous episodes have looked in detail at the business case. You were just talking about the manual labor involved, and we know that you can't scale that way, but there's also that compression of time to get to the next step in terms of ultimately getting to the outcome. We've talked to a number of customers in theCUBE, and the conclusion is really consistent: if you can accelerate the time to value, that's the key driver. Reducing complexity, automating and getting to insights faster, that's where you see telephone numbers in terms of business impact. So my question is, where should customers start? How can they take advantage of some of these opportunities that we've discussed today? >>Well, we've tried to make that easy for customers. With Io-Tahoe and Happiest Minds, you can very quickly do what we call a data health check. This is a 2-to-3-week process to really quickly start to understand and deliver value from your data. Io-Tahoe deploys into the customer environment, the data doesn't go anywhere, and we look at a few data sources and a sample of data. We can very rapidly demonstrate how the discovery, the cataloging, and the understanding of duplicate and redundant data can be done using machine learning, and how those problems can be solved. And so what we tend to find is that we can very quickly, as I say, in a matter of a few weeks, show a customer how they can get to a more resilient outcome, then how they can scale that up and take it into production, and then really understand their data estate better and build resilience into the enterprise. >>Excellent. There you have it. We'll leave it right there. Guys, great conversation. Thanks so much for coming on the program. Best of luck to you and the partnership. Be well. >>Thank you, David. >>Thank you. >>And thank you for watching, everybody. This is Dave Vellante for theCUBE and our ongoing series on data automation with Io-Tahoe. >>Digital resilience: automated compliance, privacy and security for your multi-cloud. Congratulations, you're on the journey. You have successfully transformed your organization by moving to a cloud-based platform to ensure business continuity in these challenging times. But as you scale your digital activities, there is an inevitable influx of users that outpaces traditional methods of cybersecurity, exposing your data to underlying threats and making your company susceptible to ever greater risk. To become digitally resilient, have you applied controls to your data continuously throughout the data lifecycle? What are you doing to keep your customer and supplier data private and secure? Io-Tahoe's automated sensitive data discovery is pre-programmed with over 300 existing policies that meet government-mandated risk and compliance standards. These automate the process of applying policies and controls to your data.
Our algorithm-driven recommendation engine alerts you to risk exposure at the data level and suggests the appropriate next steps to remain compliant and ensure sensitive data is secure. Unsure about where your organization stands in terms of digital resilience? Sign up for our minimal-cost, commitment-free data health check. Let us run our sensitive data discovery on key unmapped data silos and sources to give you a clear understanding of what's in your environment. Book time with an Io-Tahoe engineer now. >>Okay, now we're gonna go into the demo. We want to get a better understanding of how you can leverage OpenShift and Io-Tahoe to facilitate faster application deployment. Let me pass the mic to Sabina. Take it away. >>Thanks, Dave. Happy to be here again. Guys, as Dave mentioned, my name is Sabina, and I'm the enterprise account executive here at Io-Tahoe. Today we just wanted to give you a general overview of how we're using OpenShift. >>Hey, I'm Noah, Io-Tahoe's data operations engineer working with OpenShift. I've been learning the ins and outs of OpenShift for the past few months, and I'm here to share what I've learned. >>Okay, so before we begin, I'm sure everybody wants to know: Noah, what are the benefits of using OpenShift? >>Well, there are five that I can think of: faster time to operation, simplicity, automation, control and digital resilience. >>Okay, that's really interesting, because those are the exact same benefits that we at Io-Tahoe deliver to our customers. But let's start with faster time to operation. By running Io-Tahoe on OpenShift, is it faster than, let's say, using Kubernetes and other platforms? >>Our objective at Io-Tahoe is to be accessible across multiple cloud platforms, right? And by hosting our application in containers, we're able to achieve this. So to answer your question, it's faster to create and use your application images using container tools like Kubernetes with OpenShift, as compared to, say, Kubernetes with Docker, CRI-O, or containerd. >>Okay, so we got a bit technical there. Can you explain that in a bit more detail? >>Yeah, there's a bit of vocabulary involved. Basically, containers are used in developing things like databases, web servers, or applications such as Io-Tahoe. What's great about containers is that they split the workload, so developers can select the libraries without breaking anything, and sysadmins can update the host without interrupting the programmers. Now, OpenShift works hand in hand with Kubernetes to provide a way to build those containers for applications. >>Okay, got it. So basically containers make life easier for developers and sysadmins. How does OpenShift differ from other platforms? >>Well, this kind of leads into the second benefit I want to talk about, which is simplicity. There's a lot of steps involved when using Kubernetes with Docker, but OpenShift simplifies this with its source-to-image process, which takes the source code and turns it into a container image. But that's not all: OpenShift has a lot of automation and features that simplify working with containers, an important one being its web console. Here I've set up a light version of OpenShift called CodeReady Containers, and I was able to set up our application right from the web console. I was able to do this entire thing on Windows, Mac and Linux, so it's environment-agnostic in that sense. >>Okay, so I see at the top left that this is a developer's view.
What would a systems admin view look like? >>That's a good question. So here's the administrator view, and this kind of ties into the benefit of control. This view gives insights into each one of the applications and containers that are running, and you can make changes without affecting deployment. I can also, within this view, set up each layer of security, and there are multiple layers you can put up, but I haven't fully messed around with it, because with my luck I'd probably lock myself out. >>So that seems pretty secure. Is there a single point of security, such as a user login, or are there multiple layers of security? >>Yeah, there are multiple layers of security. There's your user login, security groups and general role-based access controls, but there's also a ton of security surrounding the containers themselves. For the sake of time, I won't get too far into it. >>Okay, so you mentioned simplicity and time to operation as being two of the benefits. You also briefly mentioned automation, and as you know, automation is the backbone of our platform here at Io-Tahoe, so that certainly grabbed my attention. Can you go a bit more in depth in terms of automation? >>OpenShift provides extensive automation that speeds up that time to operation. The latest versions of OpenShift come with a built-in CRI-O container engine, which basically means you get to skip the container-engine installation step, and you don't have to log into each individual container host and configure networking, registry servers, storage, etcetera. So I'd say it automates the more boring, tedious processes. >>Okay, so I see the Io-Tahoe template there. What does it allow me to do, in terms of automation in application development? >>So we've created an OpenShift template which contains our application. This allows developers to instantly set up our product within that template. >>So, Noah, last question: speaking of vocabulary, you mentioned earlier digital resilience, a term we're hearing especially in the banking and finance world. It seems, from what you've described, that industries like banking and finance would be more resilient using OpenShift, correct? >>Yeah. In terms of digital resilience, OpenShift gives you better control over the consumption of resources each container is using. In addition, the benefit of containers is that, like I mentioned earlier, sysadmins can troubleshoot servers without bringing down the application, and if the application does go down, it's easy to bring it back up using templates and the other automation features that OpenShift provides. >>Okay, thanks so much, Noah. Any final thoughts you want to share? >>Yeah, I just want to give a quick recap of the five benefits that you gain by using OpenShift. The five are: time to operation, automation, control, security and simplicity. You can deploy applications faster, you can simplify the workload, you can automate a lot of the otherwise tedious processes, you can maintain full control over your workflow, and you can assert digital resilience within your environment. >>Guys, thanks for that. Appreciate the demo. I wonder, you guys have been talking about the combination of Io-Tahoe and Red Hat; can you tie that in, Sabina, to digital resilience specifically?
Yeah, sure, Dave. So when we speak to the benefits of security controls in terms of digital resilience, at Io-Tahoe we automate detection and apply controls at the data level, so this provides for more enhanced security. >>Okay, but if you were trying to do all these things manually, what does that do? How much time can I compress? What's the time to value? >>So with our latest versions of Io-Tahoe, we're taking advantage of the faster deployment time associated with containerization and Kubernetes. This speeds up the time it takes for customers to start using our software, as they're able to quickly spin up Io-Tahoe in their own on-premise environment or in their own cloud environment, including AWS, Azure, GCP or IBM Cloud. Our quick-start templates allow the flexibility to deploy into multi-cloud environments, all with just a few clicks. >>Okay, and I'll just quickly add: what we've done at Io-Tahoe here is really move our customers away from the whole idea of needing a team of engineers to apply controls to data, as compared to other manually driven workflows. With templates, automation, pre-built policies and data controls, one person can be fully operational within a few hours and achieve results straight out of the box, on any cloud. >>Yeah, we've been talking about this theme of abstracting the complexity; that's really what we're seeing as a major trend in this coming decade. Okay, great. Thanks, Sabina. Noah, how can people get more information, or if they have any follow-up questions, where should they go? >>Yeah, sure, Dave. If you're interested in learning more, reach out to us at info@iotahoe.com to speak with one of our sales engineers. We'd love to hear from you, so book a meeting as soon as you can. >>All right, thanks, guys. Keep it right there for more Cube content.
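A recurring theme in this segment is automated discovery and classification of sensitive data across heterogeneous sources. The following toy Python sketch is an editor's illustration only: Io-Tahoe's actual engine uses machine learning models that improve over time, whereas this sketch uses simple regular-expression rules, and the patterns, column names and policy labels are all assumptions.

```python
"""Toy sketch of sensitive-data discovery: sample values, tag columns, map to policies."""
import re

# Assumed detection rules; a real engine would combine rules with learned classifiers.
PATTERNS = {
    "email":       re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "credit_card": re.compile(r"^\d{4}-?\d{4}-?\d{4}-?\d{4}$"),
    "phone":       re.compile(r"^\+?\d[\d\s-]{7,14}$"),
}

# Assumed mapping from detected data types to the policies that govern them.
POLICY_MAP = {"email": ["GDPR"], "phone": ["GDPR"], "credit_card": ["PCI DSS", "GDPR"]}

def classify_column(values):
    """Return the data type whose pattern matches the most sampled values, if any."""
    best, best_hits = None, 0
    for label, pattern in PATTERNS.items():
        hits = sum(1 for v in values if v and pattern.match(str(v).strip()))
        if hits > best_hits:
            best, best_hits = label, hits
    # Require at least half the sampled values to match before tagging the column.
    return best if best_hits >= max(1, len(values) // 2) else None

def discover(table: dict) -> dict:
    """table maps column names to values sampled from any source (database, lake, cloud)."""
    findings = {}
    for column, sample in table.items():
        label = classify_column(sample)
        if label:
            findings[column] = {"type": label, "policies": POLICY_MAP[label]}
    return findings

if __name__ == "__main__":
    sample_table = {
        "customer_email": ["a@example.com", "b@example.org"],
        "card_number": ["4111-1111-1111-1111", "4111-1111-1111-1112"],
        "notes": ["call back Tuesday", "prefers email"],
    }
    print(discover(sample_table))
```

The point of the sketch is the workflow the speakers describe: sample, classify, then attach the compliance policies that follow from the classification, so that downstream controls can be applied automatically rather than by a team reviewing columns by hand.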
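Fozzie's points about pre-approved policies, repeatable patching and ease of auditing can also be pictured with a small drift-check sketch. This is not Ansible or OpenShift code; those tools do this declaratively and at far larger scale. The hosts, settings and policy values below are invented for the example, assuming host reports have already been collected.

```python
"""Toy configuration-drift check: compare observed host settings to an approved policy."""
from dataclasses import dataclass

# Assumed pre-approved policy expected on every host in the estate.
APPROVED_POLICY = {
    "ssh_password_auth": "no",
    "tls_min_version": "1.2",
    "auto_patching": "enabled",
}

@dataclass
class HostReport:
    name: str
    observed: dict

def drift_for(host: HostReport) -> dict:
    """Return the settings where a host deviates from the approved policy."""
    return {
        key: {"expected": expected, "observed": host.observed.get(key, "<missing>")}
        for key, expected in APPROVED_POLICY.items()
        if host.observed.get(key) != expected
    }

def audit(hosts: list) -> None:
    for host in hosts:
        drift = drift_for(host)
        if drift:
            # In a real pipeline this would trigger automated remediation or open a ticket.
            print(f"{host.name}: DRIFT {drift}")
        else:
            print(f"{host.name}: compliant")

if __name__ == "__main__":
    audit([
        HostReport("onprem-db-01", {"ssh_password_auth": "no", "tls_min_version": "1.2",
                                    "auto_patching": "enabled"}),
        HostReport("aws-app-07", {"ssh_password_auth": "yes", "tls_min_version": "1.0"}),
    ])
```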
Bratin Saha, Amazon | AWS re:Invent 2020
>>From around the globe. It's the cube with digital coverage of AWS reinvent 2020 sponsored by Intel and AWS. >>Welcome back to the cubes, ongoing coverage, AWS, AWS reinvent virtual. The cube has gone virtual too, and continues to bring our digital coverage of events across the globe. It's been a big week, big couple of weeks at reinvent and a big week for machine intelligence in learning and AI and new services for customers. And with me to discuss the trends in this space is broadened Sahab, who is the vice president and general manager of machine learning services at AWS Rodan. Great to see you. Thanks for coming on the cube. >>Thank you, Dave. Thank you for having me. >>You're very welcome. Let's get right into it. I mean, I remember when SageMaker was announced it was 2017. Uh, it was really a seminal moment in the whole machine learning space, but take us through the journey over the last few years. Uh, what can you tell us? >>So, you know, what, when we came out with SageMaker customers were telling us that machine learning is hard and it was within, you know, it's only a few large organizations that could truly deploy machine learning at scale. And so we released SageMaker in 2017 and we have seen really broad adoption of SageMaker across the entire spectrum of industries. And today, most of the machine learning in the cloud, the vast majority of it happens on AWS. In fact, AWS has more than two weeks of the machine learning than any other provider. And, you know, we saw this morning that more than 90% of the TensorFlow in the cloud and more than 92% of the pipe out in the cloud happens on AWS. So what has happened in that is customers saw that it was much easier to do machine learning once they were using tools like SageMaker. >>And so many customers started applying a handful of models and they started to see that they were getting real business value. You know, machine learning was no longer a niche machine learning was no longer a fictional thing. It was something that they were getting real business value. And then they started to proliferate across that use cases. And so these customers went from deploying like tens of models to deploying hundreds and thousands of models inside. We have one customer that is deploying more than a million models. And so that is what we have seen is really making machine learning broadly accessible to our customers through the use of SageMaker. >>Yeah. So you probably very quickly went through the experimentation phase and people said, wow, you got the aha moments. And, and, and so adoption went through the roof. What kind of patterns have you seen in terms of the way in which people are using data and maybe some of the problems and challenges that has created for organizations that they've asked you to erect help them rectify? Yes. >>And in fact, in a SageMaker is today one of the fastest growing services in AWS history. And what we have seen happen is as customer scaled out the machine learning deployments, they asked us to help them solve the issues that used to come when you deploy machine learning at scale. So one of the things that happens is when you're doing machine learning, you spend a lot of time preparing the data, cleaning the data, making sure the data is done correctly, so it can train your models. And customers wanted to be able to do the data prep in the same service in which they were doing machine learning. 
And hence we launched Sage, make a data and learn where with a few clicks, you can connect a variety of data stores, AWS data stores, or third party data stores, and do all of your data preparation. >>Now, once you've done your data preparation, customers wanted to be able to store that data. And that's why we came out with SageMaker feature store and then customers want to be able to take this entire end to end pipeline and be able to automate the whole thing. And that is why we came up with SageMaker pipelines. And then one of the things that customers have asked us to help them address is this issue of statistical bias and explainability. And so we released SageMaker clarify that actually helps customers look at statistical bias to the entire machine learning workflow before you do, when you're doing a data processing before you train your model. And even after you have deployed your model and it gives us insights into why your model is behaving in a particular way. And then we had machine learning in the cloud and many customers have started deploying machine learning at the edge, and they want to be able to deploy these models at the edge and wanted a solution that says, Hey, can I take all of these machine learning capabilities that I have in the cloud, specifically, the model management and the MLR SKP abilities and deploy them to the edge devices. >>And that is why we launched SageMaker edge manager. And then customers said, you know, we still need our basic functionality of training and so on to be faster. And so we released a number of enhancements to SageMaker distributed training in terms of new data, parallel models and new model parallelism models that give the fastest training time on SageMaker across both the frameworks. And, you know, that is one of the key things that we have at AWS is we give customers choice. We don't force them onto a single framework. >>Okay, great. And we, I think we hit them all except, uh, I don't know if you talked about SageMaker debugger, but we will. So I want to come back to and ask you a couple of questions about these features. So it's funny. Sometimes people make fun of your names, but I like them because they said, it says what it does because, because people tell me that I spend all my time wrangling data. So you have data Wrangler, it's, you know, it's all about transformation cleaning. And, and because you don't want to spend 80% of your time wrangling data, you want to spend 80 of your time, you know, driving insights and, and monetization. So, so how, how does one engage with, with data Wrangler and how do you see the possibilities there? >>So data angler is part of SageMaker studio. SageMaker studio was the world's first, fully integrated development run for machine learning. So you come to SageMaker studio, you have a tab there, which you SageMaker data angler, and then you have a visual UI. So that visual UI with just a single click, you can connect to AWS data stores like, you know, red shift or a Tina or third party data stores like snowflake and Databricks and Mongo DB, which will be coming. And then you have a set of built-in data processes for machine learning. So you get that data and you do some interactive processing. Once you're happy with the results of your data, you can just send it off as an automated data pipeline job. And, you know, it's really today the easiest and fastest way to do machine learning and really take out that 80% that you were talking about. 
>>Has it been so hard to automate the Sage, the pipelines to bring CIC D uh, to, uh, data pipelines? Why has that been such a challenge? And how did you resolve that? >>You know, what has happened is when you look at machine learning, machine learning deals with both code and data, okay. Unlike software, which really has to deal with only code. And so we had the CIC D tools for software, but someone needed to extend it to operating on both data and code. And at the same time, you know, you want to provide reproducibility and lineage and trackability, and really getting that whole end to end system to work across code and data across multiple capabilities was what made it hard. And, you know, that is where we brought in SageMaker pipelines to make this easy for our customers. >>Got it. Thank you. And then let me ask you about, uh, clarify. And this is a huge issue in, in machine intelligence, uh, you know, humans by the very nature of bias that they build models, the models of bias in them. Uh, and so you bringing transplant the other problem with, with AI, and I'm not sure that you're solving this problem, but please clarify if you are no pun intended, but it's that black box AI is a black box. I don't know how the answer, how we got to the answer. It seems like you're attacking that, bringing more transparency and really trying to deal with the biases. I wonder if you could talk about how you do that and how people can expect this to affect their operations. >>I'm glad you asked this question because you know, customers have also asked us about the SageMaker clarify is really intended to address the questions that you brought up. One is it gives you the tools to provide a lot of statistical analysis on the data set that you started with. So let's say you were creating a model for loan approvals, and you want to make sure that, you know, you have equal number of male applicants and equal number of female applicants and so on. So SageMaker clarify, lets you run these kinds of analysis to make sure that your data set is balanced to start with. Now, once that happens, you have trained the model. Once you've trained the model, you want to make sure that the training process did not introduce any unintended statistical bias. So then you can use, SageMaker clarify to again, say, well, is the model behaving in the way I expected it to behave based on the training data I had. >>So let's say your training data set, you know, 50% of all the male applicants got the loans approved after training, you can use, clarify to say, does this model actually predict that 50% of the male applicants will get approved? And if it's more than less, you know, you have a problem. And then after that, we get to the problem you mentioned, which is how do we unravel the black box nature of this? And you know, we took the first steps of it last year with autopilot where we actually gave notebooks. But SageMaker clarify really makes it much better because it tells you why our model is predicting the way it's predicting. It gives you the reasons and it tells you, you know, here is why the model predicts that, you know, you had approved a loan and here's why the model said that you may or may not get a loan. So it really makes it easier, gives visibility and transparency and helps to convert insights that you get from model predictions into actionable insights because you now know why the model is predicting what it's predicting. >>That brings out the confidence level. Okay. Thank you for that. 
Let me ask you about distributed training on SageMaker. Help us understand what problem you're solving. You're injecting auto-parallelism; is that about scale? Help us understand that. >>Yeah. One of the things that's happening is that our customers are starting to train really large models. Three years back they would train models with around 20 million parameters; last year they would train models with a couple of hundred million parameters; now customers are actually training models with billions of parameters. When you have such large models, training can take days and sometimes weeks. So what we have done here involves two concepts. One is that we introduced a way of taking a model and training it in parallel across multiple GPUs; that's what we call the data parallel implementation. We have our own custom libraries for this, which give you the fastest performance on AWS. The other thing that happens is that customers take some of these models that are fairly large, with billions of parameters, and we showed one of them today, called T5, and these models are so big that they cannot fit in the memory of a single GPU. Today, to train such a model, customers spend weeks of effort trying to parallelize the model by hand. What we introduced in SageMaker today is a mechanism that automatically takes these large models and distributes them across multiple GPUs, the auto-parallelization you were talking about, making it much easier and much faster for customers to work with these big models. >>Well, the GPU is a very expensive resource, and prior to this you would have the GPU waiting, waiting, waiting, saying load me up; you don't want to do that with an expensive resource. Yeah. >>And, you know, one of the things I mentioned before is SageMaker Debugger. One of the things we also came out with today is the SageMaker profiler, which is now part of the Debugger, and it lets you look at your GPU utilization, your CPU utilization, your network utilization, and so on. So now you can see, once your training job has started, at which point the GPU utilization has gone down, and you can go in and fix it. This really lets you utilize your resources much better, ultimately reducing your cost of training and making it more efficient. Awesome.
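A minimal sketch of turning on the data parallel library described above, using the SageMaker Python SDK's PyTorch estimator; the script name, role, and bucket are placeholders, and the library expects specific multi-GPU instance types.

```python
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                 # hypothetical training script
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # hypothetical role
    framework_version="1.8.1",
    py_version="py36",
    instance_count=4,                        # scale out across several nodes
    instance_type="ml.p3.16xlarge",          # the library targets multi-GPU instances
    # Enable SageMaker's distributed data parallel library; for models too large for
    # a single GPU, a "modelparallel" entry can be used instead to shard the model.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"training": "s3://example-bucket/training-data/"})
```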
>>Let's talk about Edge Manager, because Andy Jassy's keynote was interesting. He's talking about hybrid, and his vision, Amazon's vision, is basically that we want to bring AWS to the edge; we see the data center as just another edge node. So to me this is another example of AWS's edge strategy. Talk about how that works in practice. Am I doing inference at the edge and then bringing data back into the cloud? Am I doing things locally? >>Yes. What SageMaker Edge Manager does is help you deploy and manage models at the edge; the inference is happening on the edge device. Consider this case: Lenovo has been working with us, and what Lenovo wants to do is take these models and do predictive maintenance on laptops. Say you run an IT shop and you have a couple of hundred thousand laptops; you would want to know when something may go down, so they deploy predictive maintenance models on the laptops. They're doing inference locally on the laptop, but you want to see whether the models are getting degraded, and you want to be able to see whether the quality is holding up. So what Edge Manager does is, number one, it takes your models and optimizes them so they can run on an edge device, and we get up to a 25x benefit. Then, once you've deployed a model, it helps you monitor its quality by letting you upload data samples to SageMaker, so that you can see if there is drift in your models or any other degradation. >>All right. And JumpStart is kind of the portal I go to, to access all these cool tools. Is that right? Yep. >>And, you know, we have a lot of getting-started material, lots of first-party models, lots of open source models and solutions. >>We're probably out of time, but I could go on forever. Thanks so much for bringing this knowledge to theCUBE audience. Really appreciate your time. >>Thank you. Thank you, Dave, for having me. >>And you're very welcome, and good luck with the announcements. And thank you for watching, everybody. This is Dave Vellante for theCUBE, and our coverage of AWS re:Invent 2020 continues right after this short break.
Rahul Pathak, AWS | AWS re:Invent 2020
>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. Welcome back to theCUBE's ongoing coverage of AWS re:Invent. The Cube has gone virtual, along with most events these days, and continues to bring you digital coverage of re:Invent. With me is Rahul Pathak, who is the vice president of analytics at AWS. Rahul, it's great to see you again. Welcome, and thanks for joining the program. >>Hey, Dave, great to see you too, and always a pleasure. Thanks for having me on. >>You're very welcome. Before we get into your leadership discussion, I want to talk about some of the things that AWS has announced in the early parts of re:Invent. I want to start with Glue Elastic Views, a very notable announcement allowing people to essentially share data across different data stores. Maybe tell us a little bit more about Glue Elastic Views, where the name came from, and what the implications are. >>Sure. We're really excited about Glue Elastic Views, and, as you mentioned, the idea is to make it easy for customers to combine and use data from a variety of different sources and pull it together into one or many targets. The reason for it is that we're really seeing customers adopt what we're calling a lake house architecture, which at its core is a data lake for making sense of data and integrating it across different silos, typically integrated with a data warehouse, and not just that, but also a range of other purpose-built stores like Aurora for relational workloads or DynamoDB for non-relational ones. While customers typically get a lot of benefit from using purpose-built stores, because you get the best possible functionality, performance, and scale for a given use case, you often want to combine data across them to get a holistic view of what's happening in your business or with your customers. Before Glue Elastic Views, customers would have to either use ETL or data integration software, or write custom code that can be complex to manage, error-prone, and tough to change. With Elastic Views, you can now use SQL to define a view across multiple data sources and pick one or many targets, and the system will actually monitor the sources for changes and propagate them into the targets in near real time. It manages the end-to-end pipeline and can notify operators if anything changes. The components of the name are pretty straightforward: Glue is our serverless ETL and data integration service, and Glue Elastic Views is about data integration. They're views because you define these virtual tables using SQL, and elastic because it's serverless and will scale up and down to deal with the propagation of changes. So we're really excited about it, and customers are as well. >>Okay, great. So my understanding is I'm going to be able to take what's called, in the parlance, a materialized view, which in my layperson's terms means I'm going to run a query on the database and take that subset, copy it and move it to another data store, and then you're going to automatically keep track of the changes and keep everything up to date. Is that right? >>Yes, that's exactly right. So you can imagine you had a product catalog, for example, that's being updated in DynamoDB, and you can create a view that will move that to Amazon Elasticsearch Service.
You could search through a current version of your catalog, and we will monitor your DynamoDB tables for any changes and make sure those are all propagated in near real time. All of that is taken care of for our customers as soon as they define the view, and the data is kept in sync as long as the view is in effect. >>I see this being really valuable for a person who's building, let's say, data services or data products that are going to help me monetize my business. Maybe it's as simple as a dashboard, but maybe it's actually a product; it might be some content that I want to develop. And I've got transaction systems, I've got unstructured data, maybe in a NoSQL database, and I want to combine those, build new products, and do that quickly. So take me through what I would have to do. You sort of alluded to it with a lot of ETL, but take me through in a little more detail how I would have done that before this innovation, and maybe give us a sense of what the possibilities are with Glue Elastic Views. >>Sure. Before we announced Elastic Views, a customer would typically have to think about using ETL software, so they'd have to write an ETL pipeline that would extract data periodically from a range of sources. They'd then have to write transformation code that would do things like match up types and make sure there were no invalid values, and then combine it and periodically write it into a target. Once you've got that pipeline set up, you've got to monitor it: if you see an unusual spike in data volume, you might have to add more resources to the pipeline to make it complete on time. And if anything changed in either the source or the destination that prevented the data from flowing the way you would expect, you'd have to manually figure that out, and you'd need data quality checks and all of that in place to make sure everything kept working. With Elastic Views this just gets much simpler. Instead of having to write custom transformation code, you write a view using SQL, and SQL is widely popular with data analysts and folks who work with data, as you well know. You define that view in SQL, the view looks across multiple sources, and then you pick your destination. Glue Elastic Views essentially monitors the sources for changes, as well as both the source and the destination for any issues: for example, did the schema change, did the shape of the data change, was something briefly unavailable. It can monitor all of that and handle any errors it can recover from automatically. Or, if it can't, say someone dropped an important table in the source that was part of your view, you can actually get alerted and notified to take some action, to prevent bad data from getting through your system or to prevent your pipeline from breaking without your knowledge. And then the final piece is the elasticity of it. It will automatically deal with adding more resources if, for example, you had a spiky day in the markets, maybe you're building a financial services application, and you needed more resources to process those changes into your targets more quickly. The system would handle that for you.
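To give a rough sense of what "defining a view in SQL" over a purpose-built source could look like, here is an illustrative sketch. Glue Elastic Views was only in preview at the time, and its exact SQL dialect and wiring are not spelled out in this conversation, so both the syntax and the table names below are hypothetical, intended only to make the idea concrete.

```python
# Hypothetical, illustrative SQL for the concept described above: a virtual table
# defined over a purpose-built source (here, a DynamoDB product catalog) that the
# service keeps continuously materialized into a target such as Amazon
# Elasticsearch Service. The actual Glue Elastic Views syntax may differ.
PRODUCT_SEARCH_VIEW_SQL = """
CREATE VIEW product_search_view AS
SELECT
    product_id,
    title,
    category,
    price,
    last_updated
FROM dynamodb.product_catalog      -- hypothetical source table
WHERE is_active = 1;
"""

# Conceptually, the view is then attached to one or more targets; Elastic Views
# watches the source for changes and propagates them in near real time, so no
# hand-written change-data-capture or ETL code is involved.
```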
And then, if you're monetizing data services on the back end, you've got a range of options for folks subscribing to those targets. We've got capabilities like Amazon Data Exchange, where people can exchange and monetize data sets, so it allows this end-to-end flow in a much more straightforward way than was possible before. >>Awesome. So, a lot of automation, especially if something goes wrong: if something goes wrong, you can automatically recover, and if for whatever reason you can't, what happens? Does the system let the operator know, hey, there's an issue, you've got to go fix it? How does that work? >>Yes, exactly right. If we can recover, say, for example, for a short period of time you can't read the target database, the system will keep trying until it can get through. But say someone dropped a column from your source that was a key part of your ultimate view and destination; you just can't proceed at that point, so the pipeline stops, and then we notify using an SNS or SMS alert so that programmatic action can be taken. This effectively provides a really great way to enforce the integrity of the data that's going between the sources and the targets. >>All right, make it kindergarten-proof. So let's talk about another innovation. You announced QuickSight Q, kind of speaking to the machine in natural language, but give us some more detail there. What is QuickSight Q, how do I interact with it, and what kind of questions can I ask it? >>QuickSight Q is essentially a deep-learning-based semantic model of your data that allows you to ask natural language questions in your dashboard, so you get a search bar in your QuickSight dashboard, and QuickSight is our serverless BI service that makes it really easy to provide rich dashboards to whoever needs them in the organization. What Q does is automatically develop relationships between the entities in your data, and it's able to actually reason about the questions you ask. Unlike earlier natural language systems, where you have to pre-define your models and all the calculations that you might ask the system to do on your behalf, Q can actually figure it out. So you can say, show me the top five categories for sales in California, and it'll look in your data and figure out what that is. It will present you with how it parsed the question and, inline, in seconds, pop up a dashboard of what you asked, and it automatically tries to pick a chart or visualization for that data that makes sense. You can then start to refine it further and say, how does this compare to what happened in New York, and it'll figure out that you're trying to overlay those two data sets and add them. Unlike other systems, it doesn't need to have all of those things pre-defined; it's able to reason about it because it's building a model of what your data means on the fly, and we pre-trained it across a variety of different domains, so you can ask questions about sales or HR or any of that. Another great part of Q is that when it presents what it has parsed, you're able to correct it if needed and provide feedback to the system. For example, if it got something slightly off, you can select from a drop-down, and it will remember your selection for the next time, so it gets better as you use it. >>I saw a demo in Swami's keynote on December 8.
Basically, you were able to ask QuickSight Q the same question but in different ways, you know, compare California and New York, and the data comes up, or give me the top five, and then California and New York show the same exact data. So is that how I can check whether the answer I'm getting back is correct, by asking different questions? I don't have to know the schema is what you're saying; as the user I can triangulate from different angles and then look and see if it's correct. Is that how you verify, or are there other ways? >>That's one way to verify. You could definitely ask the same question a couple of different ways and ensure you're seeing the same results. Another option would be to click and drill and filter down into that data through the dashboard, and then the other step would be at data ingestion time: typically, data pipelines will have some quality controls. But when you're interacting with Q, I think the ability to ask the question multiple ways and make sure you're getting the same result is a perfectly reasonable way to validate. >>You know what I like about the answer you just gave, and I wonder if I could get your opinion on this, because you've been in this business for a while and you work with a lot of customers: if you think about our operational systems, things like sales or ERP systems, we've contextualized them. In other words, the business lines have injected context into the system; they kind of own it, if you will. They own the data, and I put that in quotes, but they feel like they're responsible for it, and there's not this constant argument because it's their data. It seems to me that if you look back over the last 10 years, a lot of the data architecture has been sort of genericized; in other words, the experts, whether it's the data engineer or the quality engineer, don't really have the business context. But the example you just gave, the drill-down to verify that the answer is correct, and listening again to Swami's keynote the other day, it seems you're really trying to put data in the hands of business users who have the context and the domain knowledge. That seems to me to be a change in mindset that we're going to see evolve over the next decade. I wonder if you could give me your thoughts on that change in the data architecture mindset. >>David, I think you're absolutely right. We see this across all the customers we speak with: there's an increasing desire to get data broadly distributed into the hands of the organization in a well-governed and controlled way. Customers want to give data to the folks that know what it means and know how they can take action on it to do something for the business, whether that's finding a new opportunity or looking for efficiencies. We're seeing that increasingly, especially given the unpredictability that we've all gone through in 2020; customers are realizing that they need to get a lot more agile and they need a lot more data about their business and their customers, because you've got to find ways to adapt quickly. And that's not going to change anytime in the future. >>And as I've said many times on theCUBE, you know, our industry,
the technology industry, used to be all about products, and in the last decade it was really about platforms, whether SaaS platforms or AWS cloud platforms, and it seems like innovation in the coming years is, in many respects, going to come from the ecosystem and the ability to share data. We've had some examples today. But you hit on one of the key challenges, of course: security and governance. Can you automate that, if you will, and protect users from doing things that violate data access rules or corporate edicts for governance and compliance? How are you handling that challenge? >>That's a great question, and it's something I really emphasized in my leadership session. The notion of what customers are doing, and what we're seeing, is the lake house architectural concept: you've got a data lake and purpose-built stores, and customers are looking for easy data movement across those, so we have things like Glue Elastic Views and some of the other Glue features we announced. But they're also looking for unified governance, and that's why we built AWS Lake Formation. The idea is that it can quickly discover and catalog customers' data assets, and then it allows customers to define granular access policies centrally around that data. Once you have defined that, it sets customers free to give broader access to the data, because they've put the guardrails and protections in place. So you can tag columns as being private so nobody can see them, and we announced a couple of new capabilities where you can provide row-based control, so only a certain set of users can see certain rows in the data, whereas a different set of users might only be able to see a different set. By creating this fine-grained but unified governance model, you actually set customers free to give broader access to the data, because they know their policies and compliance requirements are being met, and it gets them out of the way of the analyst, the person who can actually use the data to drive some value for the business. >>Right, they can really focus on driving value. And I always talk about monetization, though monetization can be a generic term; it could be saving lives, the mission of the business or the organization. I meant to ask you about QuickSight customers embedding this into their own apps. >>Yes, absolutely. One of QuickSight's key strengths is its embeddability, and it's also serverless, so you can embed it at really massive scale. We see customers like Blackboard, for example, embedding QuickSight dashboards into information they're providing to thousands of educators, with data on the effectiveness of online learning, and you can embed Q into that capability. So it's a really cool way to give a broad set of people the ability to ask questions of data without requiring them to be fluent in things like SQL.
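To make the column-level guardrails Rahul describes a moment earlier a little more concrete, a minimal sketch with boto3 might look like the following. The role, database, table, and column names are placeholders; the newer row-level filtering he mentions was announced separately, so only the longer-standing column-level control path is shown here.

```python
import boto3

lf = boto3.client("lakeformation")

# Grant an analyst role SELECT on a curated table while hiding sensitive columns.
lf.grant_permissions(
    Principal={
        # Hypothetical analyst role that consumers of the data assume.
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AnalystRole"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales_lake",      # hypothetical Glue database
            "Name": "customer_orders",         # hypothetical table
            # Expose every column except the ones tagged as private.
            "ColumnWildcard": {"ExcludedColumnNames": ["ssn", "credit_card_number"]},
        }
    },
    Permissions=["SELECT"],
)
```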
>>If I can ask you a question: we've talked a little bit about data movement. I think at last year's re:Invent you announced RA3, and it went to general availability this year, and I remember Andy speaking about the importance of having big enough pipes when you're moving data around, which of course you are when you're doing tiering. You also announced AQUA, the Advanced Query Accelerator, which kind of reduces that movement by bringing the compute to the data, I guess is how I would think about it. But then with Glue Elastic Views we're talking about copying and moving data. How are you ensuring maximum performance for your customers? I know it's an architectural question, but as an analytics professional you have to be comfortable that the infrastructure is there. So what's AWS's general philosophy in that regard? >>There are a few ways we think about this, and you're absolutely right. Data volumes are going up; we're seeing customers going from terabytes to petabytes, and some are even heading into the exabyte range, so there's really a need to deliver performance at scale. The reality of customer architectures is that customers will use purpose-built systems for different best-in-class use cases, and if you're trying to do a one-size-fits-all thing, you're inevitably going to end up compromising somewhere. So the reality is that customers will have more data, they're going to want to get it to more people, and they're going to want their analytics to be fast and cost effective, and we look at strategies to enable all of this. For example, Glue Elastic Views is about moving data, but it's about moving data efficiently: we allow customers to define a view that represents the subset of their data they care about, and then we only move changes, as efficiently as possible, so you're reducing the amount of data that needs to get moved and making sure it's focused on the essential. Similarly with AQUA, what we've done, as you mentioned, is take the compute down to the storage layer. We're using our Nitro chips to help with things like compression and encryption, and then we have FPGAs in line to allow filtering and aggregation operations. So again, you're trying to get through as much data as you can, quickly and effectively, so that you're only sending back what's relevant to the query being processed, and that again leads to more performance; if you can avoid reading a byte, you're going to speed up your queries. That's what AQUA is trying to do: push those operations down so that you're reducing data as close to its origin as possible and focusing on what's essential, and that's what we're applying across our analytics portfolio. One other piece we're focused on with performance is innovating across the stack. You mentioned network performance: we've got 100 gigabits per second of throughput now with the latest instances, and with things like Graviton2 you're able to drive better price performance for general-purpose workloads. So it's really innovating at all layers. >>It's amazing to watch. It's an incredible engineering challenge as you build this hyper-distributed system that's now, of course, going to the edge. I want to come back to something you mentioned, and I want to hit on your leadership session as well. You mentioned the one-size-fits-all system, and I've asked Andy Jassy about this and had discussions with many folks: because you're full-functioned, you're going to have to make trade-offs if it's one size fits all. The flip side of that is, okay,
it's simple, you know, sort of a Swiss Army knife of databases, for example. But your philosophy at Amazon is that you want fine-grained access to the primitives in case the market changes, so you can move quickly. That puts more pressure on you to then simplify; you're not going to build this big hairball abstraction layer, that's not what you're going to do. I think of layers and layers of paint; I live in a very old house, and that's not your approach. So it puts greater pressure on you to constantly listen to your customers, who are always saying, hey, I want to simplify, simplify, simplify. We certainly heard that again in Swami's presentation the other day, all about minimizing complexity. So that really is your trade-off: it puts pressure on Amazon engineering to continue to raise the bar on simplification. Is that a fair statement? >>Yeah, I think so. Any time we can do work so our customers don't have to, that's a win for both of us, because we're delivering more value and making it easier for our customers to get value from their data. We absolutely believe in using the right tool for the right job. You talked about an old house: you're not going to build or renovate a house with a Swiss Army knife; it's just the wrong tool. It might work for small projects, but you're going to need something more specialized to handle the things that matter, and that's really what we see with this set of capabilities. We want to provide customers with the best of both worlds: give them purpose-built tools so they don't have to compromise on performance, scale, or functionality, and then make it easy to use these together, whether that's data movement or things like federated queries, where you can reach into each of them through a single query and through a unified governance model. So it's all about stitching those together. >>Yeah, so far you've been on the right side of history; I think it serves you and your customers well. I want to come back to your leadership session. What else can you tell us about what you covered there? >>We've actually had a bunch of innovations across the analytics stack. Some of the highlights are in EMR, our managed Spark and Hadoop service, where we've been able to achieve 1.7x better performance than open source with our Spark runtime, so we've invested heavily in performance. EMR is also now available for customers who are running in a containerized environment, so we announced EMR on EKS, and then an integrated development environment, EMR Studio. That makes it easier both for people at the infrastructure layer to run EMR on their EKS environments and make it available within their organizations, and for data analysts and folks working with data, who can operate in that studio and not have to mess with the details of the clusters underneath. And then there's been a bunch of innovation in Redshift. We talked about AQUA already, but we also announced data sharing for Redshift, which makes it easy for Redshift clusters to share data with other clusters without putting any load on the central producer cluster.
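For a rough sense of the producer/consumer flow behind Redshift data sharing, which he expands on next, a sketch using the Redshift Data API might look like this. The cluster identifiers, namespace GUIDs, and schema and table names are placeholders, and the feature was announced in preview at the time, so treat this as illustrative.

```python
import boto3

rsd = boto3.client("redshift-data")

def run(cluster_id, database, db_user, sql):
    """Submit a single SQL statement to Amazon Redshift via the Data API."""
    return rsd.execute_statement(
        ClusterIdentifier=cluster_id, Database=database, DbUser=db_user, Sql=sql
    )

# On the producer cluster: publish a schema through a datashare.
producer_statements = [
    "CREATE DATASHARE sales_share;",
    "ALTER DATASHARE sales_share ADD SCHEMA sales;",
    "ALTER DATASHARE sales_share ADD ALL TABLES IN SCHEMA sales;",
    # The consumer namespace GUID below is a placeholder.
    "GRANT USAGE ON DATASHARE sales_share TO NAMESPACE 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';",
]
for sql in producer_statements:
    run("producer-cluster", "dev", "awsuser", sql)

# On the consumer cluster: expose the shared data as a local database and query it
# without copying it or putting load on the producer.
run(
    "consumer-cluster", "dev", "awsuser",
    "CREATE DATABASE sales_db FROM DATASHARE sales_share "
    "OF NAMESPACE '11111111-2222-3333-4444-555555555555';",  # producer namespace GUID placeholder
)
run(
    "consumer-cluster", "dev", "awsuser",
    "SELECT region, SUM(amount) FROM sales_db.sales.orders GROUP BY region;",
)
```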
This also speaks to the theme of simplifying getting data from point A to point B: you can have central producer environments publishing data, which represents the source of truth, out to other departments within the organization, and they can query the data and use it. It's always up to date, but it doesn't put any load on the producers, and that enables these really powerful data sharing and downstream data monetization capabilities like you mentioned. In addition, as Swami mentioned in his keynote, there's Redshift ML, so you can now essentially train and run models built in SageMaker from within your Redshift clusters. We've also automated all of the performance tuning that's possible in Redshift. We've invested heavily in price performance, and now we've automated all of the things that make Redshift the best-in-class data warehouse service from a price-performance perspective, up to three times better than others; customers can just set Redshift to auto, and it'll handle workload management, data compression, and data distribution. And then the other big one was in Lake Formation, where we announced three new capabilities. One is transactions, enabling consistent ACID transactions on data lakes so you can do things like inserts, updates, and deletes. We announced row-based filtering for fine-grained access control within that unified governance model. And then there's automated storage optimization for data lakes: customers are dealing with unoptimized small files coming off streaming systems, for example, and Lake Formation can auto-compact those under the covers, and you can get a 78x performance boost. So it's been a busy year for analytics. >>I'll say. Great job. Thanks so much for coming back on theCUBE and sharing the innovations, and great to see you again. Good luck in the coming year. >>Thank you very much. Great to be here, great to see you, and I hope we get to see each other in person again soon. >>I hope so. All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE; we'll be right back right after this short break.
How T-Mobile is Building a Data-Driven Organization | Beyond.2020 Digital
>>Hello again, and welcome to our last session of the day before we head to the Meet the Experts roundtables: How T-Mobile is Building a Data-Driven Organization with ThoughtSpot and Wipro. Today we'll hear how T-Mobile is leaving Excel hell by enabling all employees with self-service analytics, so they can get instant answers on curated data. We're lucky to be closing off the day with these two speakers: Evo Benzema, manager of business intelligence services at T-Mobile Netherlands, and Sanjeev Chowdhury, lead architect at T-Mobile Netherlands, from Wipro. Thank you both very much for being with us today. Today's session will cover the specific dynamics of the mobile telco market and what T-Mobile was facing. We'll also go over the ThoughtSpot and Wipro solution and how it addresses T-Mobile's challenges. Last but not least, we'll cover T-Mobile's experience, learnings, and takeaways that you can use in your business. Without further ado, Evo, take us away. >>Thank you very much. Well, let's first talk a little bit about T-Mobile Netherlands. We are part of the larger Deutsche Telekom Group, which operates in Europe and the US. We are the second largest mobile phone company in the Netherlands, and we offer the full suite of services that you expect: mobile, landline, interactive TV, and of course broadband. So that is T-Mobile's position at the moment. A little bit about myself: I've been at T-Mobile for 11 years already, which means I'm part of the furniture by now. I started out as a front-line service desk employee, and that's essentially the first time I came into touch with data. What I found is that I had no way to track my own performance, so I built something myself, and I saw that this need was there, because very quickly roughly 20% of my colleagues were using it as well. That's a little bit where my vision came from: people need to have access to data across the organization. Currently, after 11 years, I'm running the BI Services department, and I'm driving this transformation to create a data-driven organization with a heavy customer focus. Our big goal, our vision, is that within two years 80% of all our employees use data on a day-to-day basis to make and improve their decisions. So, over to you, Sanjeev. >>Thank you. A little about Wipro: Wipro is a global IT and business process consulting and delivery company. We have a comprehensive portfolio of services, with a presence in 61 countries and 1000-plus customers. From a customer point of view, we primarily look to help our customers reinvent their business models with a digital-first approach; that's how we look at helping our customers move to digitalization as much as possible, as early as possible. Talking about myself, I have a little over two decades of experience in the business intelligence and telecom landscape. I have worked with most of the telcos in the US, in India, and in Europe as well, on both greenfield and brownfield implementations of data warehouse and big data platforms. At present, I'm actively working on the data transformation initiative mentioned by Evo, and we are participating in defining the logical and physical footprint of the future architecture for T-Mobile.
In addition, we are also taking care of end-to-end ownership of projects, deliveries, and operations. Back to you, Evo. >>So, a little bit about the general telco market dynamics. It's a very saturated market; everybody has a mobile phone already, so the growth is mostly gone, and what you see is that we have a lot of trouble around customer brand loyalty. People switch from provider to provider quite easily, and new customers are quite expensive. So our focus is always on making customers loyal and keeping them with the company, and this is where the opportunities are as well: if we increase the retention of customers, or reduce what we call churn, that is where the big potential is for the use of data. And we should not do this by only offering data to the C-suite or the directors or the marketing managers; this needs to happen for all employees, so that they can use it to really help these customers and serve them well, so that we can create this loyalty. This is where data comes in as a big opportunity going forward. So what are the challenges we're facing in using this data? They are massive, or at least big, let's put it like that. We have a lot of data; we create around four billion new records a day on our current platforms. The problem is that not everybody can use or access this data. You need quite some technical expertise to get at it, or the data is pre-calculated into more aggregated dashboards. So if you have a specific question, somebody on the IT side or the BI side has to have already prepared something so that you can get the answer. We have a huge backlog of questions and data requests that we currently cannot answer, and people are limited because they need technical expertise to use this data. These are the challenges we're trying to solve going forward. >>So, the challenge we see in the current landscape is that T-Mobile, as Evo mentioned, is a number two telco in Europe and in the Netherlands, and we have a lot of acquisitions coming into the landscape, so the overall complexity of the technical stack increases year by year and acquisition by acquisition, to put it this way. At this time we're talking about Cloudera, Informatica, AWS, and many other complex siloed systems that we have to integrate, and in some cases the data silos are also duplicated. So the challenge is: how do we look into this data, how do we present this data to the business, and still ensure that the quality of the data is reliable? In this project, what we did is curate around 10% of the data and make it ready for the business to look at through ThoughtSpot. This also helps us avoid taking on the larger part of the data all in one shot; instead we go step by step with a manageable set of data, which obviously manages the time as well and keeps control of the cost. >>So what did we actually do, how did we do it, and what are we going to do going forward? Why did we choose ThoughtSpot, and what are we measuring to see if we're successful? Very simply, some of it I already alluded to: user adoption. This needs to be a tool that is usable by everybody, so adoption and user experience are major things to focus on at the beginning. And lastly, and this is just cold, hard fact, it needs to save time. It needs to be faster and smarter than the way we used to do it.
So we focused first on setting up the environment with our most used and best-known data set within the company, a data set that is already used on a daily basis by a large group. We know how it works and how it behaves, and this is what we decided to make available via ThoughtSpot. This cut down the time spent on data modeling a lot, because we had already done it, so we could go right away into training users to start using this data, and this is already going very successfully. We now have 40 heavily engaged users; we went live less than a month ago, and we see very positive feedback on the user experience. Just yesterday we had a beautiful example: we loaded a new data set and gave access to a user who had not had any ThoughtSpot training and did not know what ThoughtSpot was, and within an hour he was actively using this data set, building his own pinboards and asking questions already. That shows a little bit of the speed of delivery we can have with this, without much investment in data modeling, because that part was already done. Our second stage is a little more ambitious: making sure that all our information is available for frontline employees, so customer service but also sales employees, so that they have data specifically for them that makes their lives easier. That means performance KPIs, but it could also be that beautiful phrase everybody always uses, the customer 360 view. This is about giving the power of asking questions and getting answers quickly to everybody in the company. That's the big stage two. After that, going a little further into the future, and we are not completely there yet, once we have set up the governance properly we also want to give people the power to add their own data to our curated data sets, as we've talked about. With that, our ambition and our plan is to bring this to more than 800 users on a daily basis across our company. So this is not only for marketing or only technology or only one segment; this is really an application that we want to establish in our ecosystem so it works for everybody, and that is the ambition we will work through in these three steps. So what did we learn so far? And Sanjeev, please chime in here as well. One thing I already said: know which data set to start with. Start with something you know, something that has a wide appeal and more than one use case, and make sure that you make this decision yourself; don't ask somebody else. You know what your company needs best, and you should be in the driver's seat for this decision. I would say that's really the big one, because it will enable you to kickstart this really quickly. Second, and this is why we are here together, don't do this alone. Do this together with IT, with security, with the business, to tackle all the little things that you don't think about yourself: security, governance, network connections, and so on. Make sure that you do this as a company and don't try to do it on your own, because that removes so many obstacles going forward. Lastly, make sure that you measure your success; people in the data domain sometimes forget to measure themselves.
Way can make sure everybody else, but we forget ourselves. But really try to figure out what makes its successful for you. And we use adoption percentages, usual experience, surveys and and really calculations about time saved. We have some rough calculations that we can calculate changes thio monetary value, and this will save us millions in years. by just automating time that is now used on, uh, now to taken by people on manual work. So, do you have any to adhere? A swell You, Susan, You? >>Yeah. So I'll just pick on what you want to mention about. Partner goes live with I t and other functions. But that is a very keating, because from my point of view, you see if you can see that the data very nice and data quality is also very clear. If we have data preparing at the right level, ready to be consumed, and data quality is taken, care off this feel 30 less challenges. Uh, when the user comes and questioned the gator, those are the things which has traded Quiz it we should be sure about before we expose the data to the Children. When you're confident about your data, you are confident that the user will also get the right numbers they're looking for and the number they have. Their mind matches with what they see on the screen. And that's where you see there. >>Yeah, and that that that again helps that adoption, and that makes it so powerful. So I fully agree. >>Thank you. Eva and Sanjeev. This is the picture perfect example of how a thought spot can get up and running, even in a large, complex organization like T Mobile and Sanjay. Thank you for sharing your experience on how whip rose system integration expertise paved the way for Evo and team to realize value quickly. Alright, everyone's favorite part. Let's get to some questions. Evil will start with you. How have your skill? Data experts reacted to thought spot Is it Onley non technical people that seem to be using the tool or is it broader than that? You may be on. >>Yes, of course, that happens in the digital environment. Now this. This is an interesting question because I was a little bit afraid off the direction off our data experts and are technically skilled people that know how to work in our fight and sequel on all these things. But here I saw a lot of enthusiasm for the tool itself and and from two sides, either to use it themselves because they see it's a very easy way Thio get to data themselves, but also especially that they see this as a benefit, that it frees them up from? Well, let's say mundane questions they get every day. And and this is especially I got pleasantly surprised with their reaction on that. And I think maybe you can also say something. How? That on the i t site that was experienced. >>Well, uh, yeah, from park department of you, As you mentioned, it is changing the way business is looking at. The data, if you ask me, have taken out talkto data rather than looking at it. Uh, it is making the interactivity that that's a keyword. But I see that the gap between the technical and function folks is also diminishing, if I may say so over a period of time, because the technical folks now would be able to work with functional teams on the depth and coverage of the data, rather than making it available and looking at the technical side off it. So now they can have a a fair discussion with the functional teams on. Okay, these are refute. Other things you can look at because I know this data is available can make it usable for you, especially the time it takes for the I t. G. 
When graduate dashboard, Uh, that time can we utilize toe improve the quality and reliability of the data? That's yeah. See the value coming. So if you ask me to me, I see the technical people moving towards more of a technical functional role. Tools such as >>That's great. I love that saying now we can talk to data instead of just looking at it. Um Alright, Evo, I think that will finish up with one last question for you that I think you probably could speak. Thio. Given your experience, we've seen that some organizations worry about providing access to data for everyone. How do you make sure that everyone gets the same answer? >>Yes. The big data Girlfriends question thesis What I like so much about that the platform is completely online. Everything it happens online and everything is terrible. Which means, uh, in the good old days, people will do something on their laptop. Beirut at a logic to it, they were aggregated and then they put it in a power point and they will share it. But nobody knew how this happened because it all happened offline. With this approach, everything is transparent. I'm a big I love the word transparency in this. Everything is available for everybody. So you will not have a discussion anymore. About how did you get to this number or how did you get to this? So the question off getting two different answers to the same question is removed because everything happens. Transparency, online, transparent, online. And this is what I think, actually, make that question moot. Asl Long as you don't start exporting this to an offline environment to do your own thing, you are completely controlling, complete transparent. And this is why I love to share options, for example and on this is something I would really keep focusing on. Keep it online, keep it visible, keep it traceable. And there, actually, this problem then stops existing. >>Thank you, Evelyn. Cindy, That was awesome. And thank you to >>all of our presenters. I appreciate your time so much. I hope all of you at home enjoyed that as much as I did. I know a lot of you did. I was watching the chat. You know who you are. I don't think that I'm just a little bit in awe and completely inspired by where we are from a technological perspective, even outside of thoughts about it feels like we're finally at a time where we can capitalize on the promise that cloud and big data made to us so long ago. I loved getting to see Anna and James describe how you can maximize the investment both in time and money that you've already made by moving your data into a performance cloud data warehouse. It was cool to see that doubled down on with the session, with AWS seeing a direct query on Red Shift. And even with something that's has so much scale like TV shows and genres combining all of that being able to search right there Evo in Sanjiv Wow. I mean being able to combine all of those different analytics tools being able to free up these analysts who could do much more important and impactful work than just making dashboards and giving self service analytics to so many different employees. That's incredible. And then, of course, from our experts on the panel, I just think it's so fascinating to see how experts that came from industries like finance or consulting, where they saw the imperative that you needed to move to thes third party data sets enriching and organizations data. So thank you to everyone. It was fascinating. I appreciate everybody at home joining us to We're not quite done yet. Though. 
I'm happy to say that we after this have the product roadmap session and that we are also then going to move into hearing and being able to ask directly our speakers today and meet the expert session. So please join us for that. We'll see you there. Thank you so much again. It was really a pleasure having you.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Europe | LOCATION | 0.99+ |
Claudia Irureta | PERSON | 0.99+ |
Eva | PERSON | 0.99+ |
Donald | PERSON | 0.99+ |
Evelyn | PERSON | 0.99+ |
T Mobile | ORGANIZATION | 0.99+ |
Cindy | PERSON | 0.99+ |
Netherlands | LOCATION | 0.99+ |
Evo Benzema | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Calico Industries | ORGANIZATION | 0.99+ |
Sanjeev | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
8% | QUANTITY | 0.99+ |
11 years | QUANTITY | 0.99+ |
Kelsey | PERSON | 0.99+ |
Today | DATE | 0.99+ |
Sanjeev Chowed Hurry | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
BR Services Department | ORGANIZATION | 0.99+ |
more than 800 users | QUANTITY | 0.99+ |
two sides | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Whip Chrome | ORGANIZATION | 0.99+ |
Anna | PERSON | 0.99+ |
James | PERSON | 0.99+ |
Team Mobil | ORGANIZATION | 0.99+ |
T mobile | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Excel | TITLE | 0.99+ |
today | DATE | 0.99+ |
second stage | QUANTITY | 0.99+ |
Susan | PERSON | 0.99+ |
millions | QUANTITY | 0.99+ |
three | QUANTITY | 0.98+ |
Sanjay | ORGANIZATION | 0.98+ |
61 countries | QUANTITY | 0.98+ |
one shot | QUANTITY | 0.98+ |
deutsche Telekom Group | ORGANIZATION | 0.98+ |
Thio | PERSON | 0.98+ |
Broadbent | ORGANIZATION | 0.98+ |
two years | QUANTITY | 0.98+ |
1000 plus customers | QUANTITY | 0.98+ |
T-Mobile | ORGANIZATION | 0.98+ |
one last question | QUANTITY | 0.97+ |
around 10% | QUANTITY | 0.97+ |
Fox | ORGANIZATION | 0.97+ |
first time | QUANTITY | 0.97+ |
AT T Mobile | ORGANIZATION | 0.97+ |
two speakers | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
telco | ORGANIZATION | 0.96+ |
Evo | PERSON | 0.95+ |
less than a month ago | DATE | 0.95+ |
first approach | QUANTITY | 0.94+ |
over two decades | QUANTITY | 0.94+ |
one segment | QUANTITY | 0.94+ |
Red Shift | TITLE | 0.93+ |
Matic Uh | ORGANIZATION | 0.92+ |
two different answers | QUANTITY | 0.92+ |
second | QUANTITY | 0.91+ |
second largest mobile phone | QUANTITY | 0.89+ |
60 fuse | QUANTITY | 0.89+ |
ISS | ORGANIZATION | 0.89+ |
T Mobile Netherlands | ORGANIZATION | 0.86+ |
Mobile | ORGANIZATION | 0.86+ |
more than one use case | QUANTITY | 0.84+ |
30 less challenges | QUANTITY | 0.83+ |
Sanjiv | TITLE | 0.82+ |
around four billion new record | QUANTITY | 0.81+ |
aws | ORGANIZATION | 0.8+ |
Terry | PERSON | 0.8+ |
40 heavily engaged users | QUANTITY | 0.79+ |
2020 | DATE | 0.75+ |
one | QUANTITY | 0.57+ |
Evo | ORGANIZATION | 0.48+ |
Beyond.2020 | ORGANIZATION | 0.43+ |
Unleash the Power of Your Cloud Data | Beyond.2020 Digital
>>Yeah, yeah. Welcome back to the third session in our Building a Vibrant Data Ecosystem track. This session is Unleash the Power of Your Cloud Data Warehouse. So, what comes after you've moved your data to the cloud? In this session we'll explore why enterprise analytics is finally ready for the cloud, and we'll discuss how you can consume enterprise analytics in the very same way you would cloud services. We'll also explore where analytics meets cloud, and see firsthand how ThoughtSpot is open for everyone. Let's get going. I'm happy to say we'll be hearing from two folks from ThoughtSpot today: Michael, VP of strategic partnerships, and Vika Valentina, senior product marketing manager. And I'm very excited to welcome, from our partner AWS, Gal Bar Mia, product engineering manager with Redshift. We'll also be sharing a live demo of ThoughtSpot for B2C marketing analytics directly on Redshift data. Gal, please kick us off. >>Thank you, Mallory, and thanks to the ThoughtSpot team and everyone attending today for joining us. When we talk about data-driven organizations, we hear that 85% of businesses want to be data driven; however, only 37% have been successful. We ask ourselves why that is, and believe it or not, a lot of customers tell us that they struggle with even defining what being data driven means, and in particular with aligning that definition between the business and the technology stakeholders. Let's look at our own definition. A data-driven organization is an organization that harnesses data as an asset to drive sustained innovation and create actionable insights that supercharge the experience of their customers, so they demand more. Let's focus on a few things here. One is data as an asset: data is very much like a product and needs to evolve. Sustained innovation: it's not just innovation, it's sustained, and we need to continuously innovate when it comes to data. Actionable insights: not just interesting insights, but insights the business can take and act upon. And obviously the actual experience: whether the customers are internal or external, we want them to request more insights and, as such, drive more innovation. We call this the data flywheel. We use the flywheel metaphor here: we create a data set, our first product, focused on a specific use case. We build an initial MVP around it, we provide it to our customers, internal or external, they give feedback, they request more features, they want more insights, and that enables us to learn, bring in more data, and enrich that data. And again we create more insights. As the flywheel spins faster, we improve operational efficiencies, supporting greater data richness, and we reduce the cost of experimentation. Legacy environments were never built for this kind of agility, and in many cases customers have struggled to keep momentum in their flywheel, in particular around operational efficiency and experimentation. This is where Redshift fits in and helps customers make the transition to a true data-driven organization. Redshift is the most widely used cloud data warehouse, with tens of thousands of customers. It allows you to analyze all your data, and it is the only cloud data warehouse that allows you to analyze data that sits in your data lake on Amazon S3 with no loading, duplication, or ETL required.
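As a rough illustration of that last point, querying data that stays in S3 alongside local warehouse tables, here is a minimal Python sketch using the Amazon Redshift Data API and a Spectrum external schema. The cluster identifier, secret ARN, schema, and table names below are placeholders, not details from this session.

```python
import time
import boto3

# Placeholder identifiers; substitute your own cluster, database, and secret.
CLUSTER_ID = "analytics-cluster"
DATABASE = "dev"
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:redshift-creds"

client = boto3.client("redshift-data", region_name="us-east-1")

# Join a local Redshift table with an external (Spectrum) table whose data
# stays in S3, so nothing is loaded or duplicated into the warehouse first.
sql = """
    SELECT c.campaign_name, COUNT(*) AS events
    FROM spectrum_schema.clickstream_s3 AS e   -- external table over S3
    JOIN public.campaigns AS c ON c.campaign_id = e.campaign_id
    GROUP BY c.campaign_name
    ORDER BY events DESC;
"""

stmt = client.execute_statement(
    ClusterIdentifier=CLUSTER_ID, Database=DATABASE, SecretArn=SECRET_ARN, Sql=sql
)

# The Data API is asynchronous: poll for completion, then fetch the result set.
while True:
    status = client.describe_statement(Id=stmt["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    for record in client.get_statement_result(Id=stmt["Id"])["Records"]:
        print(record)
```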
It also allows you to scale with the business through its hybrid architecture, and it accelerates performance: the shared storage provides the ability to scale to unlimited concurrency, while the local instance storage provides low-latency access to data. It also provides three key aspects that customers consistently tell us matter most when it comes to cost. One is usage-based pricing instead of license-based pricing. There is great value as you scale your data warehouse: using reserved instances, for example, customers can save up to 75% compared to on-demand prices. And as your data grows, infrequently accessed data can be stored cost-effectively in S3 and queried through Redshift Spectrum. The third aspect is predictable month-to-month spend, with no hidden charges or surprises. Unlike other cloud data warehouses, where you need premium versions for additional enterprise capabilities, Redshift pricing includes built-in security, compression, and data transfer. >>Great, thanks, Gal. So, as you can see, everybody wins with cloud data warehouses. There's this evolution, a movement of users, data, and organizations to get value with these cloud data warehouses, and the key is that the data has to be accessible by the users, along with the ability to make business decisions on that data; that ranges from users on the front line all the way up to the boardroom. So while we've seen this evolution to the cloud data warehouse, as you can see from the statistic from Forrester, we're still struggling with how much of that data actually gets used for analytics. So what is holding us back? One of the main reasons is old technology trying to work with today's modern cloud data warehouses: it wasn't built for them. So you run into issues of doing data replication, getting the data out of the cloud data warehouse so you can do analysis, and then maintaining those middle layers of data so that you can access it quickly and get the answers you need. Another issue that's holding us back is the idea that you have to have your data in perfect shape, with the perfect pipeline, based on the exact dashboard you need. This isn't true. With the cloud data warehouse and the speed at which important business data gets into it, you need a solution that allows you to access it right away, without everything having to be perfect from the start. And I think this is a great opportunity for Gal and me to have a little further discussion on what we're seeing in the marketplace. One of the primary questions is: what are the limiting factors you see with legacy technologies in the market when it comes to this cloud transformation we're talking about here? >>It's a great question, Michael, and there are a variety of aspects of legacy data warehouses that are slowing down innovation for companies and businesses. I'll focus on two. One is performance: we want faster insights, and companies want the ability to analyze more data faster. When it comes to on-prem or legacy data warehouses, that's hard to achieve, because the second aspect comes into play, which is the lack of flexibility. If you want to increase the capacity of your warehouse, you need to issue a request, and someone needs to go and bring an actual machine, install it, and expand your data warehouse.
When it comes to the cloud, it's literally a click of a button, which allows you to increase the capacity of your data warehouse and enable your internal and external users to perform analytics at scale and much faster. >>That falls right into the explanation you provided there. As the data warehouses get faster and faster as they scale, older solutions aren't built to leverage that. They have to make technical compromises: either looking at smaller amounts of data so that they can get to the data quicker, or taking longer to get to the data once the data warehouse is ready, when it could just be a live query to get the answers you need. That's definitely an issue that we're seeing in the marketplace. I think the other one to look at is things like governance, lineage, and regulatory requirements. How is the cloud making that easier? >>That's again an area where I think the cloud shines, because AWS's scale allows significantly more investment in security, security policies, and compliance. For example, Amazon Redshift comes by default with SOC 1, 2, and 3, PCI, ISO, FedRAMP, and HIPAA compliance, all of them out of the box. At our scale, we have the capacity to implement those by default for all of our customers and allow them to focus their very expensive, valuable IT resources on the actual applications that differentiate their business and transform the customer experience. >>That's a great point, Gal. So we've talked about the limiting factors technology-wise, and we've mentioned things like governance. But what about the cultural aspect? Where do you see teams struggling to meet their cloud data warehouse strategy today? >>That's true: one of the biggest challenges for large organizations when they move to the cloud is not the technology; it's people, process, and culture. And we see differences between organizations that talk about moving to the cloud and ones that actually do it. First of all, you want senior leadership to drive, and be aligned and committed to, making the move to the cloud. But it's not just that. We see organizations sometimes get paralyzed if they can't figure out how to move each and every last workload. There's no need to boil the ocean, so we often work with organizations to find that iterative motion, that iterative process of identifying use cases and workloads and migrating them one at a time, and through that allowing the organization to grow its knowledge from a cloud perspective, adopt its tooling, and learn about the new capabilities. >>And from an analytics perspective, we see the same, right? You don't need a pixel-perfect dashboard every single time to get value from your data. You don't need to wait until the data warehouse is perfect, or the pipeline to the data warehouse is perfect. With today's technology, you should be able to look at the data in your cloud data warehouse immediately and get value from it, and that's the change that we're pushing and starting to see today. Thanks, Gal, that was really interesting. You know, as we look at this transformation we're seeing in analytics, it isn't really that old.
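A minimal sketch of the "click of a button" capacity point Gal makes above, expressed against the Redshift API; the cluster identifier and node count are placeholders rather than values from the discussion.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# One API call (the console equivalent of a single click) to scale the cluster
# out, instead of ordering and installing hardware.
redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster name
    NumberOfNodes=8,
    Classic=False,  # request an elastic resize rather than a classic one
)

cluster = redshift.describe_clusters(ClusterIdentifier="analytics-cluster")
print(cluster["Clusters"][0]["ClusterStatus"])  # e.g. "resizing"
```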
20 years ago, data warehouses were primarily on-prem, and the applications, the BI tools used for analytics around them, were on-premise as well. Then you saw applications like Salesforce that live in the cloud, and you started having to pull data from the cloud on-prem in order to do analytics with it. Then we saw the shift about 10 years ago and the explosion of the cloud data warehouse, because of its scale, cost reduction, and speed. We're seeing cloud data warehouses like Amazon Redshift really take hold of the marketplace and become the predominant way of storing data moving forward. What we haven't seen is the BI tools catch up. When you have this new cloud data warehouse technology, you really need tools that were custom built for it to take advantage of it, to be able to query the cloud data warehouse directly and get results very quickly, without having to worry about creating a middle layer of data or pipelines in order to manage it. And one company captures that really well. Chick-fil-A, which I'm sure everybody has heard of, is one of the largest food chains in America, and they made a huge investment in Redshift. One of the purposes of that investment was that they wanted to get access to the data more quickly, and they really wanted to give their business users the ability to do some ad hoc analysis on the data they were capturing. They found that with their older tools, all the data for this kind of analysis was staying at the analyst level: somebody needed to create a dashboard in order to share the data with a user, and if the user's requirements changed, the analysts became burdened with requests for changes and the time it took to reflect those changes. So they moved to ThoughtSpot with Embrace to connect to Redshift, so they could give business users the ability to query the database right away. With this, they were able to find very common things in their supply chain analysis, like figuring out which store should get which product that was selling better. The other part was that they didn't have to wait for the data to get settled into some sort of repository or second-level database; they were able to query it quickly. And with that, they were able to make changes right in the Redshift database that were then reflected to customers and business users right away. What they found from adopting ThoughtSpot is that they were able to arm business users with the ability to make decisions very quickly, they cleared up the backlog and the delays with their analysts, and they are now putting their analysts to work on different projects where they can get better value. So when you look at the way we work with a cloud data warehouse, you have to think of ThoughtSpot Embrace as the tool that accesses that layer, the perfect analytics partner for the cloud data warehouse. We do the live query for the business user: you don't need to know how to script in SQL to access Redshift. You can type the question that you want the answer to, and ThoughtSpot will take care of the query. We do the indexing, so that the results come back faster for you, and we also do the analysis. This is one of the things I wanted to cover, which is our SpotIQ.
This is new in our ability to use this with Embrace and our partners at Redshift: we can now give you the ability to do auto-analysis, to look at things like leading indicators, trends, and anomalies. To put this in perspective, imagine somebody doing forecasting for Q3 in the western region. They look at how their stores are doing and see that one store is performing well. SpotIQ might look at that analysis and see whether there's a leading product that is underperforming, based on, perhaps, the last few quarters of data, and bring that up to the business user for analysis right away. They don't have to figure that out and slice and dice to find the issue on their own. And then finally, all the work you do in data management and governance in your cloud data warehouse gets reflected in the results in Embrace right away. So I've done a lot of talking about Embrace, and I could do more, but I think it would be far better to have Vika actually show you how the product works. Vika? >>Thanks, Michael. We learned a lot today about the power of leveraging your Redshift data in ThoughtSpot, but now let me show you how it works. The coronavirus pandemic has presented extraordinary challenges for many businesses, and some industries have fared better than others. One industry that seems to have weathered the storm pretty well is streaming media, so companies like Netflix and Hulu. In this demo, we're going to be looking at data from the B2C marketing efforts of a streaming media company in 2020. Lately, we've been running campaigns for comedy, drama, kids and family, and reality content. Each of our campaigns lasts four weeks, and they're staggered on a weekly basis; therefore, we always have four campaigns running, and we can focus on one campaign launch per week. Today we'll be digging into how our campaigns are performing. We'll be looking at things like impressions, conversions, and user demographic data. So let's go ahead and look at that data. We'll see what we can learn from what's happened this year so far, and how we can apply those learnings to future decision making. As you can already see on the ThoughtSpot homepage, I've created a few pinboards that I use for reporting purposes. The homepage also includes what others on my team and I have been looking at most recently. Now, before we dive into a search, we'll first take a look at how to make a direct connection to the customer database in Redshift. To save time, I've already pre-built the connection to Redshift, but I'll show you how easy it is to make that connection in just three steps. First, we give the connection a name and select our connection type, Amazon Redshift. Then we enter our Redshift credentials, and finally we select the tables that we want to use. Great, now we're ready to start searching. Let's dig into this data to get a better idea of how our marketing efforts have been affected, either positively or negatively, by this really challenging situation. When we think of ad-based online marketing campaigns, we think of impressions, clicks, and conversions, so let's look at those on a daily basis for our purposes. All this data is available to us in ThoughtSpot, and we can easily use search to create a nice line chart like this that shows us trends over the last few months. Based on experience, we understand that we're going to have more impressions than clicks, and more clicks than conversions.
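For readers who want a concrete picture of the kind of daily aggregation behind a chart like the one just described, here is a hedged sketch against a hypothetical Redshift schema. The table and column names are assumptions, and ThoughtSpot generates its own SQL rather than this exact statement.

```python
import boto3

# Hypothetical schema for a campaign data set like the one in the demo.
DAILY_FUNNEL_SQL = """
    SELECT event_date,
           SUM(impressions) AS impressions,
           SUM(clicks)      AS clicks,
           SUM(conversions) AS conversions
    FROM marketing.campaign_daily_stats
    WHERE event_date >= DATEADD(month, -6, CURRENT_DATE)
    GROUP BY event_date
    ORDER BY event_date;
"""

client = boto3.client("redshift-data", region_name="us-east-1")
stmt = client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder
    Database="dev",
    DbUser="awsuser",                       # temporary-credentials auth
    Sql=DAILY_FUNNEL_SQL,
)
print("Submitted statement", stmt["Id"])
```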
If we study the chart for a minute, we can see that while impressions appear to be pretty steady over the course of the year, clicks and especially conversions both get a nice boost in mid to late March, right around the time that pandemic-related policies were being implemented. So right off the bat we found something interesting, and we can come back to this. Now, there are a few metrics that we're going to focus on as we analyze our marketing data. Our overall goal is obviously to drive conversions, meaning that we bring new users into our streaming service, and in order to get a visitor to sign up in the first place, we need them to get to our sign-up page. A compelling campaign is going to generate clicks: if someone is interested in our ad, they're more likely to click on it. So we'll search for click-through rate, and we'll look this up by campaign name. Now we can compare all the campaigns that we've launched this year to see which have been most effective at bringing visitors to our site. I mentioned earlier that we have four different types of campaign content, each one aligned with one of our most popular genres, so let's add campaign content. And I just want to see the top 10, so I can limit my search to just these top 10 campaigns, automatically sorted by click-through rate and assigned a color for each category. We can see right away that comedy and drama each have three of the top 10 campaigns by click-through rate; reality has two, including the top spot; and kids and family makes one appearance as well. With ThoughtSpot, we know that any non-technical user can ask a question and get an answer; they can explore the answer and ask another question. When you get an answer that you want to share and keep an eye on moving forward, you pin it to a pinboard. So the B2C Marketing Campaign Statistics pinboard gives us a solid overview of our campaign-related activities and metrics throughout 2020. The visuals here keep us up to date on click-through rate and cost per click, but also on other really important metrics like conversions and cost per acquisition. Now, it's important to our business that we evaluate the effectiveness of our spending, so let's do another search. We're going to look at how many new customers we're getting, so conversions, and the cost per acquisition that we're spending to get each of them, by campaign content category. This is a really telling chart: we can basically see how much each new user is costing us, based on the content that they saw prior to signing up for the service. Drama and reality users are actually relatively expensive compared to those who joined based on comedy and kids and family content, and of all the genres, kids and family is actually giving us the best bang for our marketing buck. And that's good news, because the genres providing the best value are also providing the most customers. We mentioned earlier that we saw a sizable uptick in conversions as stay-at-home policies were implemented across much of the country, so let's remove cost per acquisition and take a daily look at how our campaign content has trended over the year so far. By doing this, we can see a comparison of the different genres daily. Some campaigns have been more successful than others; for example, kids and family content has always fared pretty well, as has comedy. But as we move into the stay-at-home area of the line chart, we really see these two genres begin to separate from the rest.
And even here in June, as some states started to reopen, we're seeing that they're still trending up, and we're also seeing reality start to catch up around that time. And while the first pinboard that we looked at included all sorts of campaign metrics, this is another pinboard that we've created solely to focus on conversions. So not only can we see which campaigns drove significant conversions, we can also dig into the demographics of new users, like which campaigns and what content brought users from different parts of the country or from different age groups. And all of this is just a quick search away with ThoughtSpot, searching directly on Redshift data. All right, thank you, and back to you, Michael. >>Great, thanks, Vika, that was excellent. So, as you can see, you can very quickly go from zero to search with ThoughtSpot connected to any cloud data warehouse. And I think it's important to understand, as we mentioned before, that not everything has to be perfect in your cloud data warehouse: you can use ThoughtSpot as your initial tool, for investigatory purposes, as you can see here with Star, Gento, IMAX, and Anthem. In a lot of these cases we were looking at billions of rows of data within minutes. And as your data warehouse maturity grows, you can start to add more and more ThoughtSpot users to leverage the data and get better analysis from it. So we hope that you've enjoyed what you've seen today and take the step to do one of two things. We have a free trial of ThoughtSpot Cloud: if you go to the website that you see below and register, we can get you access to ThoughtSpot so you can start searching today. Another option, by contacting our team, is to do a zero-to-search workshop, where in 90 minutes we'll work with you to connect your data source and start to build some insights into exactly what you're trying to find for your business. Thanks, everybody. I would especially like to thank Gal from AWS for joining us today. We appreciate your participation, and I hope everybody enjoyed what they saw. I think we have a few questions now. >>Thank you, Vika, Gal, and Michael. It's always exciting to see a live demo, and I know that I'm one of those comedy numbers. We have just a few minutes left, but I would love to ask a couple of last questions before we go. Michael, we'll give you the first question: do I need to have all of my data cleaned and ready in my cloud data warehouse before I begin with ThoughtSpot? >>That's a great question, Mallory. No, you don't. You can really start using ThoughtSpot for search right away and start getting analysis and understanding the data, through the automatic search analysis and the way that we query the data, and we've seen customers do that. The Chick-fil-A example that we talked about earlier is one where they were able to use ThoughtSpot to notice an anomaly in the cloud data warehouse, in the linking between product and store. They were able to fix that very quickly, and then it got reflected across all of the users, because our product queries the cloud data warehouse directly, so you can get started right away without it having to be perfect. >>That's awesome. And Gal, we'll leave a fun one for you: what can we look forward to from Amazon Redshift next year? >>That's a great question. The team has been innovating extremely fast: we released more than 200 features in the last year and a half, and we continue innovating.
One thing that stands out is AQUA, which is an innovative new technology. AQUA stands for Advanced Query Accelerator, and it allows customers to achieve performance up to 10 times faster than what they've seen before, which is really outstanding. The way we've achieved that is through a shift in paradigm in the actual technological implementation. AQUA is a new distributed and hardware-accelerated processing layer, which effectively allows us to push down analytics operations like compression, encryption, filtering, and aggregations to the storage layer, and lets the AQUA nodes, which are built with custom AWS-designed analytics processors, perform these operations faster than traditional CPUs. We no longer need to scan the data and bring it all the way to the computational nodes; we're able to apply these predicates, the filtering, encryption, compression, and aggregations, at the storage level. And AQUA is going to be available for every RA3 customer out of the box, with no code changes required. So I apologize for getting a little carried away, but this is really exciting. >>No, that's why we invited you, Gal. Thank you, and thank you also to Michael and Vika. That was excellent; we really appreciate it. For all of you tuning in at home, the final session of this track is coming up shortly. You aren't going to want to miss it. We're going to end strong: come back and hear directly from our customer T-Mobile on how T-Mobile is building a data-driven organization with ThoughtSpot and Wipro. It's up next; see you then.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Michael | PERSON | 0.99+ |
Cassie | PERSON | 0.99+ |
Vika | PERSON | 0.99+ |
Vika Valentina | PERSON | 0.99+ |
America | LOCATION | 0.99+ |
90 minutes | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
June | DATE | 0.99+ |
2020 | DATE | 0.99+ |
T Mobile | ORGANIZATION | 0.99+ |
two folks | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
first product | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
85% | QUANTITY | 0.99+ |
third session | QUANTITY | 0.99+ |
Gal | PERSON | 0.99+ |
second aspect | QUANTITY | 0.99+ |
third aspect | QUANTITY | 0.99+ |
more than 200 features | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
one campaign | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Each | QUANTITY | 0.99+ |
T mobile | ORGANIZATION | 0.99+ |
Carol | PERSON | 0.99+ |
each category | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
37% | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
two genres | QUANTITY | 0.98+ |
three steps | QUANTITY | 0.98+ |
Red Shift | ORGANIZATION | 0.98+ |
20 years ago | DATE | 0.98+ |
one store | QUANTITY | 0.98+ |
three | QUANTITY | 0.97+ |
tens of thousands of customers | QUANTITY | 0.97+ |
MIA | PERSON | 0.97+ |
21 | QUANTITY | 0.97+ |
US | LOCATION | 0.97+ |
One industry | QUANTITY | 0.97+ |
each one | QUANTITY | 0.97+ |
Mallory | PERSON | 0.97+ |
each | QUANTITY | 0.97+ |
Vika | ORGANIZATION | 0.97+ |
this year | DATE | 0.97+ |
up to 75% | QUANTITY | 0.97+ |
mid | DATE | 0.97+ |
Lee | PERSON | 0.96+ |
up to 10 times | QUANTITY | 0.95+ |
S three | TITLE | 0.95+ |
first pin board | QUANTITY | 0.93+ |
both | QUANTITY | 0.93+ |
two things | QUANTITY | 0.93+ |
four campaigns | QUANTITY | 0.93+ |
top 10 | QUANTITY | 0.92+ |
one thing | QUANTITY | 0.92+ |
late March | DATE | 0.91+ |
Cloud Data Warehouse | ORGANIZATION | 0.91+ |
From Zero to Search | Beyond.2020 Digital
>>Yeah. >>Yeah. Hello and welcome to Day Two at Beyond. I am so excited that you've chosen to join the Building a Vibrant Data Ecosystem track. I might be just a little bit biased, but I think it's going to be the best track of the day. My name is Mallory Lassen and I run partner marketing here at ThoughtSpot, and that might give you a little bit of a clue as to why I'm so excited about the four sessions we're about to hear from. We'll start off hearing from two ThoughtSpotters on how the power of Embrace can allow you to directly query the cloud data warehouse of your choice. Next up, and I shouldn't choose favorites, but I'm very excited to watch Cindy Howson moderate a panel of true industry experts: we'll hear from Deloitte, Snowflake, and Eagle Alpha as they describe how you can enrich your organization's data and better understand and benchmark by using third-party data. They may even close off with a prediction or two about the future that could prove to be pretty thought provoking, so I'd stick around for that. Next we'll hear from the cloud juggernaut themselves, AWS. We'll even get to see a live demo using TV show data, which I'm pretty sure is near and dear to our hearts at this point in time. And then last, I'm very excited to welcome our customer from T-Mobile. They're going to describe how they partnered with Wipro and developed a full solution, really modernizing their analytics and giving self-service to so many employees; we'll see what that's done for them. But first, let's go over to James and Ana Son for the Zero to Search session. James, take us away. >>Thanks, Mallory. I'm James, and I look after the solutions engineering and customer success teams at ThoughtSpot here in Asia Pacific and Japan. Today I'm joined by my colleague Ana to give you a look at just how simple and quick it is to connect ThoughtSpot to your cloud data warehouse and extract value from the data within. In the demonstration, Ana will show you just how we can connect to data, make it simple for the business to search, and then search the data itself, all within this short session. And I want to point out that everything you're going to see in the demo is run live against the cloud data warehouse; in this case we're using Snowflake, and there's no caching of data or summary tables in what you're going to see. But before we jump into the demo itself, I'd just like to provide a very brief overview of the value proposition for ThoughtSpot. If you're already familiar with ThoughtSpot, this will come as no surprise, but for those new to the platform, it's all about empowering the business to answer their own questions about data in the most simple way possible: through search. The personalized user experience provides a familiar, search-based way for anyone to get answers to their questions about data, not just the analysts. The search indexing and ranking makes it easy to find the data you're looking for using business terms that you understand, while the smart ranking constantly adjusts the index to ensure the most relevant information is provided to you. The query engine removes the complexity of SQL and complex join paths, while ensuring that users always get the correct answers to their questions. This is all backed up by an architecture that's designed to be consumed entirely through a browser, with flexibility on deployment methods: you can run ThoughtSpot through our ThoughtSpot Cloud offering, in your own cloud, or on premise.
The choice is yours. So I'm sure you're thinking that all sounds great, but how difficult is it to get working? Well, I'm happy to tell you it's super easy. There are just four steps to unlock the value of your data stored in Snowflake, Redshift, Google BigQuery, or any of the other cloud data warehouses that we support. It's as simple as connecting to the cloud data warehouse, choosing what data you want to make available in ThoughtSpot, and making it user friendly. That column called cust_name in the database is great for data management, but when users are searching for it, they'll probably want to use customer, or customer name, or account, or even client. Also, the business shouldn't need to know that they have to get data from multiple tables, or the join paths needed to get the correct results. In ThoughtSpot, the worksheet allows you to make all of this simple for the users, so they can simply concentrate on getting answers to their questions. And once the worksheet is ready, you can start asking those questions. By now I'm sure you're itching to see this in action, so without further ado, I'm going to hand over to Ana to show you exactly how this works. Over to you, Ana. >>In this demo, I'm going to cover three areas. First, we'll start with how simple it is to get answers to your questions in ThoughtSpot. Then we'll have a look at how to create a new connection to a cloud data warehouse. And lastly, how to create a user-friendly data layer. Let's get started. To get started, I'm going to show you the ease of search with ThoughtSpot. As you can see, ThoughtSpot is all web based; I'm simply working inside a browser. This means you don't need to install an application. Additionally, ThoughtSpot does not require you to move any data, so all your data stays in your cloud data warehouse and doesn't need to be moved around. ThoughtSpot's core differentiator is the user experience, and that is primarily search. As soon as we come into the search bar here, the suggestions guide users through to the answers. Let's say that I want to look at spending across the different product categories, and we want to look at that for the last 12 months, and we also want to focus on the trend monthly. And just like that, we get our answer straight away, live from Snowflake. Now let's say we want to focus on one product category: we want to look at the performance for finished goods. As I start partially typing my search term, ThoughtSpot is already suggesting the data values available for me to use as a filter. The indexing behind the scenes actually indexes everything about the data, which allows me to get to my data easily and quickly as an end user. Now I've got my answer here, and I can also go to the next level of detail. In ThoughtSpot, navigating to the next level of detail is simply one click away; there's no concept of a pre-defined drill path. With all the data that's available to me from Snowflake, I'm able to navigate to the level of detail that allows me to answer those questions. As you can see, as a business user I don't need to do any coding, and there's no drag and drop to get to the answer I need. And you can see the calculations are done on the fly: there are no summary tables and no cubes to build; I'm simply able to ask the questions.
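As a point of reference for the search just described, the equivalent direct query against Snowflake might look roughly like the sketch below, written with the Snowflake Python connector. The account, credentials, and table names are placeholders, and ThoughtSpot builds and issues its own query rather than this one.

```python
import snowflake.connector

# Placeholder connection details and table names.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="ANALYST",
    password="********",
    warehouse="ANALYTICS_WH",
    database="RETAIL",
    schema="PUBLIC",
)

MONTHLY_SPEND_SQL = """
    SELECT DATE_TRUNC('month', f.order_date) AS order_month,
           p.category,
           SUM(f.spend) AS total_spend
    FROM fact_sales AS f
    JOIN dim_product AS p ON p.product_id = f.product_id
    WHERE f.order_date >= DATEADD(month, -12, CURRENT_DATE)
    GROUP BY 1, 2
    ORDER BY 1, 2;
"""

# Monthly spend by product category over the last 12 months, live from Snowflake.
for month, category, total_spend in conn.cursor().execute(MONTHLY_SPEND_SQL):
    print(month, category, total_spend)
conn.close()
```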
I can follow my train of thought, and this provides a better user experience, as anybody can search in here. The more we interact with ThoughtSpot, the more it learns about my search patterns and makes suggestions based on that ranking, and the data returns on the fly from Snowflake. Now that you've seen an example of a search, let's go ahead and look at how we create a brand new connection to a cloud data warehouse. Here we are; let me add a new connection to the data warehouse by just clicking Add New Connection. Today we're going to connect to a retail apparel data set, so let's start with the name. As you can see, we can easily connect to all the popular data warehouses with a single click; today we're going to click on Snowflake. I'm asked for some details, so let me connect to my account. We quickly enter those details, and this determines what data is available to me. I can go ahead and specify a database to connect to as well, but I want to connect to all the tables and views, so let's go ahead and create the connection. Now the two systems are talking to each other, and I can see all the data that's available for me to connect to. Let's connect to the retail apparel data source here, and expanding it, I can see all the data tables available to me. I can click on any table here: there's a fact table containing all the sales information, and I also have the store and product information. I can choose any data column that I want to include in my search in ThoughtSpot, or I can select the entire table, including all the data columns. I would like to point out that this is important, because if a given table contains hundreds of columns, it may not be necessary to bring across all of those data columns, so ThoughtSpot allows you to select what's relevant for your analysis. Now that I've selected all the tables, let's go ahead and create the connection. ThoughtSpot confirms the data columns that we have selected and starts to read the metadata from Snowflake, automatically building that search index behind the scenes. Now, if your data contains information such as personally identifiable information, you can choose to turn that indexing off, so none of it ends up on the ThoughtSpot platform. Now that my tables are ready, I can actually go ahead and search straight away, but first let's have a look at the table here. I'm going to click on the fact table on the left-hand side; it shows all the data columns that we've brought across from Snowflake, as well as the metadata that was brought over, and a preview of the data shows me what's available on my Snowflake platform. Let's take a look at the joins tab. The joins tab shows the relationships that have already been defined, the foreign and primary keys defined in Snowflake, and we simply inherit them in ThoughtSpot. However, you don't have to define all of these relationships in Snowflake; adding a join here is also simple and easy. If I click on Add Join, I simply select the tables I want to create a connection for: I select the fact table on the left, then select the product table on the right, and then simply select the data columns we wish to join those two tables on. Let's select Product ID, and clicking Next is all that's required to create a join between those two tables.
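The join Ana inherits here relies on key relationships being declared in Snowflake. A minimal sketch of declaring such informational constraints follows; the table names are hypothetical, and this is standard Snowflake DDL rather than anything specific to ThoughtSpot.

```python
import snowflake.connector

# Hypothetical tables: a sales fact table and a product dimension.
DDL = [
    "ALTER TABLE dim_product ADD CONSTRAINT pk_product PRIMARY KEY (product_id)",
    "ALTER TABLE fact_sales ADD CONSTRAINT fk_sales_product "
    "FOREIGN KEY (product_id) REFERENCES dim_product (product_id)",
]

conn = snowflake.connector.connect(
    account="xy12345.us-east-1", user="ANALYST", password="********",
    warehouse="ANALYTICS_WH", database="RETAIL", schema="PUBLIC",
)
cur = conn.cursor()
for statement in DDL:
    # Snowflake stores primary/foreign keys as informational (unenforced)
    # constraints, which downstream tools can read as join metadata.
    cur.execute(statement)
conn.close()
```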
But since we already have those relationships brought over from Snowflake, I won't go ahead and do that now. Now that you have seen how the tables are brought over, let's go and look at how easy it is to search. Coming to search, let's start by selecting the data tables we brought over. Expanding the tables, you can see all the data columns that we previously saw in Snowflake. Let's say I want to look at sales in the last year. Even before I type anything in the search bar, ThoughtSpot is already showing suggestions, guiding me to the answers that are relevant to my needs. Let's start by looking at sales for 2019, and I want to see this monthly for my trend. Out of all of these product lines, I also want to focus on a product line called jackets. As I start partially typing the product line jacket, ThoughtSpot is already proactively recommending all the matches it has, so all the data values available for me to use as a filter. Let's select jacket, and just like that, I get my answer straight away from Snowflake. Now, that was relatively simple; let's try something a little more complex. Let's say I want to look at sales, comparing across different regions in the US: I want to compare West to Southwest, and then against Midwest as well, and I also want to see this trending monthly. You can see that I can use terms such as "monthly", keywords like that, to look at different time buckets. Now, all of this is out of the box: I didn't have to do any indexing, and I didn't have to write any formulas. As long as there is a date column in the data set, ThoughtSpot is able to dynamically calculate those time buckets, so just by doing that search, I was able to create dynamic groupings, segmenting sales across the United States on the sales data. Now that we're done with that search, you can see that searching across different tables might not be the most user-friendly layer. We don't want users having to individually select tables and then pick different columns with cryptic names. We want to make this easy for users, and that's where a worksheet comes in. A worksheet encapsulates all of the data you want to make available for search, as well as formulas and the business terminology that users are familiar with for a specific business area. Let's start by adding the data columns we need for this worksheet. I want to select all of the tables that we just brought across from Snowflake. Expanding each of those tables: from the fact table we want sales as well as the date; then, on the store table, we want store name as well as the state; then, expanding to the product, we want name and finally product type. Now that we've got our worksheet ready, let's go ahead and save it. Now, in order to provide the best search experience for users, we want to optimize the worksheet. Coming to the worksheet, you can see the data columns we have selected. Let's start by changing the names to be more user friendly: let's call this one sales record, the next simply date, store name we'll call store, and we also want state to be in lower case, and then product name.
We'll simply call it product, and finally, product type. We can also further optimize this worksheet by adding other things such as synonyms, to allow users to search with terms they're familiar with. So for sales, let's add the synonym revenue. We can also set the geo configuration, so we identify state here as a US state, and finally we want a friendlier display for the currency, so let's change the currency type to show it in US dollars. That's all we need, so let's save the changes and get started on our search. Coming back to search, let's go ahead and select the worksheet that we have just created. If I don't select any specific tables or worksheets, ThoughtSpot simply searches across everything that's available to you. Expanding the worksheet, we can see all of the data columns that we've made available, and clicking on the search bar, ThoughtSpot is already making recommendations. To start off, let's look at the revenue across the different states for year to date: let's use the synonym that we defined, across the different states, and we want to see this for year to date. I also want to focus on the product line jacket that we saw before, so let's select jacket. And just like that, I get the answer straight away in ThoughtSpot. Let's also show some data labels, so we can see the exact amounts and the states with the best performance across the US. Now that I've got information about the sales of jackets by state, I want to ask the next-level question: I want to drill down to the stores that have been selling these jackets. Right click, and I can drill down. As you can see, out of the box I didn't have to pre-define any drill paths or target reports; ThoughtSpot simply lets me navigate to the next level of detail to answer my own questions, one click away. Now I see the same sales for jackets by store for year to date, and this is directly from Snowflake data, live. That was a relatively simple question; let's ask one that's a little more complex. Imagine I want to look at sales this year, and I want to see that by month, month over month. I also want to focus on sales in the last week of each month, because that's where we see most sales come in. And on top of that, I only want to look at the top-performing stores from last year, so only stores in the top five in sales for last year. And with that, we also want to focus just on the most popular product types, so product type. Now, this could be a very reasonable question for a business user to ask, but behind the scenes it could be quite complex. ThoughtSpot takes care of the complexity of the data and allows the user to focus on the answer they want to get to. If we quickly look at the query here, it shows how ThoughtSpot translates the search we typed into queries that can be passed on to Snowflake. As you can see, the search uses all three tables, utilizing the joins and the metadata layer that we have created.
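The complex search just described (sales this year, last week of each month, limited to the top five stores from last year, broken out by product type) would translate into SQL along the general lines of the sketch below. The schema is hypothetical, and ThoughtSpot's actual generated statement will differ.

```python
import snowflake.connector

# Hypothetical schema; a rough approximation of the search described above.
COMPLEX_SALES_SQL = """
    WITH top_stores AS (
        SELECT s.store_name
        FROM fact_sales f
        JOIN dim_store s ON s.store_id = f.store_id
        WHERE YEAR(f.order_date) = YEAR(CURRENT_DATE) - 1
        GROUP BY s.store_name
        ORDER BY SUM(f.sales) DESC
        LIMIT 5
    )
    SELECT DATE_TRUNC('month', f.order_date) AS order_month,
           p.product_type,
           SUM(f.sales) AS sales
    FROM fact_sales f
    JOIN dim_store   s ON s.store_id   = f.store_id
    JOIN dim_product p ON p.product_id = f.product_id
    WHERE YEAR(f.order_date) = YEAR(CURRENT_DATE)
      -- keep only the final seven days of each month as "last week of the month"
      AND DAYOFMONTH(LAST_DAY(f.order_date)) - DAYOFMONTH(f.order_date) < 7
      AND s.store_name IN (SELECT store_name FROM top_stores)
    GROUP BY 1, 2
    ORDER BY 1, 2;
"""

conn = snowflake.connector.connect(
    account="xy12345.us-east-1", user="ANALYST", password="********",
    warehouse="ANALYTICS_WH", database="RETAIL", schema="PUBLIC",
)
for row in conn.cursor().execute(COMPLEX_SALES_SQL):
    print(row)
conn.close()
```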
Switching over to the SQL here: this SQL is actually generated on the fly and passed to Snowflake, so that Snowflake can bring back the result and present it in ThoughtSpot. I also want to mention that in the latest release of ThoughtSpot, version 6.3, SpotIQ is also coming to Embrace. That means one-click auto-analysis, which empowers users to monitor key metrics, find anomalies, identify leading indicators, and isolate trends, as you can see, in a matter of minutes. Using ThoughtSpot, we were able to connect to the most popular on-premise or cloud data warehouses, and we were able to get blazing-fast answers to our searches, allowing us to transform raw data into insight at the speed of thought. I'll pass it back to you, James. >>Thanks, Ana. Wow, that was awesome. It's incredible to see how much we achieved in such a short amount of time. I want to close this session by referring to a customer example: Hulu. For those of you in the US, I'm sure you're familiar with Hulu, but for our international audience, Hulu is a media streaming service similar to a Netflix or Disney Plus. As you can imagine, the amount of data created by a service like this is massive: with over 32 million subscribers, Hulu is asking questions of over 16 terabytes of data in Snowflake. Using regular BI tools on top of this size of data would usually mean using summary or aggregate-level data, but with ThoughtSpot, Hulu is able to get granular insights into the data, allowing them to understand what their subscribers are watching, how their campaigns are performing, and how their programming is being received, and to take advantage of that data to reduce churn and increase revenue. So thank you for your time today. Through the session, you've seen just how simple it is to get ThoughtSpot up and running on your cloud data warehouse to unlock the value of your data in minutes. If you're interested in trying this on your own data, you can sign up for a free 14-day trial of ThoughtSpot Cloud right now. Thanks again to Ana for such an awesome demo, and if you have any questions, please feel free to let us know. >>Awesome, thank you, James and Ana. That was incredible to see in action, and how it all came together. We do actually have a couple of questions in our last few minutes here. Ana, the first one will be for you, please. This will be a two-part question: one, what cloud data warehouses does Embrace support today, and two, can we use Embrace to connect to multiple data warehouses? >>Thank you, Mallory. Today Embrace supports Snowflake, Google BigQuery, Redshift, Azure Synapse, Teradata Vantage, and SAP HANA, with more sources to come in the future. And yes, you can connect and live-query multiple data warehouses; most of our enterprise customers have data spread across several data warehouses, for example transactional data in Redshift and Snowflake. >>Excellent. And James, we'll have the final question go to you, please. Are there any size restrictions on how much data ThoughtSpot can handle, and does one need to optimize the database for performance, for example with aggregations? >>Yeah, that's a great question. So, as we've just heard from our customer Hulu, there's really no limit in terms of the amount of data that you can bring into ThoughtSpot and connect to.
We have many customers that have in excess of 10 terabytes of data that they're connecting to in those cloud data warehouses, and there's no need to pre-aggregate or anything. ThoughtSpot works best with that transactional-level data, being able to get right down into the details behind it and surface those answers to the business users. >>Excellent. Well, thank you both so much, and for everyone at home watching, thank you for joining us for that session. You have a few minutes to get up, get some water, get a bite of food, but you won't want to miss this next panel. In it, we have our Chief Data Strategy Officer, Cindy Howson, speaking to experts in the field from Deloitte, Snowflake, and Eagle Alpha, all on best practices for leveraging external data sources. See you there.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
James | PERSON | 0.99+ |
Anna | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
two tables | QUANTITY | 0.99+ |
T Mobile | ORGANIZATION | 0.99+ |
Asia Pacific | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
14 day | QUANTITY | 0.99+ |
Mallory | PERSON | 0.99+ |
two systems | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
last year | DATE | 0.99+ |
today | DATE | 0.99+ |
Japan | LOCATION | 0.99+ |
Ana Son | PERSON | 0.99+ |
Deloitte Snowflake | ORGANIZATION | 0.99+ |
Eagle Alfa | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
Mallory Lassen | PERSON | 0.99+ |
Today | DATE | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
U. S. | LOCATION | 0.99+ |
Anderson | PERSON | 0.99+ |
four sessions | QUANTITY | 0.99+ |
first spot | QUANTITY | 0.99+ |
each month | QUANTITY | 0.99+ |
SQL | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
one click | QUANTITY | 0.99+ |
Eagle Alfa | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.98+ |
Day two | QUANTITY | 0.98+ |
First part | QUANTITY | 0.98+ |
10 terabytes | QUANTITY | 0.98+ |
11 product | QUANTITY | 0.98+ |
over 32 million subscribers | QUANTITY | 0.98+ |
over 16 terabytes | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
Cindy | PERSON | 0.98+ |
One | QUANTITY | 0.98+ |
third spot | QUANTITY | 0.97+ |
each | QUANTITY | 0.97+ |
Disney Plus | ORGANIZATION | 0.97+ |
both | QUANTITY | 0.96+ |
fourth spot | QUANTITY | 0.96+ |
first one | QUANTITY | 0.96+ |
Teradata | ORGANIZATION | 0.95+ |
One Click | QUANTITY | 0.94+ |
two analysis | QUANTITY | 0.92+ |
five stores | QUANTITY | 0.91+ |
Off tosspot | TITLE | 0.9+ |
Off Hot Spot | TITLE | 0.89+ |
Beyond | ORGANIZATION | 0.89+ |
Thio | ORGANIZATION | 0.89+ |
one single | QUANTITY | 0.89+ |
Lou | PERSON | 0.88+ |
two part question | QUANTITY | 0.87+ |
two thought spotters | QUANTITY | 0.87+ |
Silas | ORGANIZATION | 0.87+ |
6.3 | QUANTITY | 0.86+ |
three tables | QUANTITY | 0.85+ |
last 12 months | DATE | 0.85+ |
James Bell C | PERSON | 0.8+ |
Snowflake | TITLE | 0.79+ |
five | QUANTITY | 0.77+ |
Midwest | LOCATION | 0.75+ |
three | QUANTITY | 0.75+ |
hundreds of columns | QUANTITY | 0.75+ |
Chris Grusz & Matthew Polly | AWS re:Invent 2020
>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, special coverage sponsored by the AWS Global Partner Network. Welcome to theCUBE's live coverage of AWS re:Invent 2020. I'm Lisa Martin, and I've got two guests joining me next: Chris Grusz, director of business development, AWS Marketplace, Service Catalog, and Control Tower at AWS. Chris, welcome. >>Thank you. Good to see you. >>Likewise. And Matthew Polly is an alumni of theCUBE; he is back, VP of worldwide business development, alliances, and channels at CrowdStrike. Matthew, welcome back. >>Great to be here, Lisa. Thanks for having me. >>And I see you're in your garage, your F1 car in the background. Very jealous. So we're going to be talking a little bit about not F1 today, but about what's going on, some of the news that's coming from the partner keynote. So, Chris, let's start with you. What's going on with the AWS Marketplace news? And also give our audience a real good understanding of what the Marketplace is. >>Yeah, sure. So AWS Marketplace is actually an eight-year-old service within the AWS family, and our charter is really providing a find, buy, deploy, and manage experience for third-party software. And so what our organization does is work with ISVs like CrowdStrike, and we really try to get them to package up their software in that same consumption format that customers are buying AWS services in. Customers are already used to buying services like Redshift in a consumption format, and they want to be able to buy third-party software in that same manner. And so that's really been our charter since we launched eight years ago. We've had a lot of great momentum since our launch: we now have over 8,000 listings available in the catalog, and we have over 1.5 million subscriptions going through the catalog. One of the things that we announced earlier today is that we are up to 300,000 active customers; that's actually up from 260,000, which was our previous number. So we continue to see really good momentum in terms of adoption, both from our ISV community publishing listings and from our customers that are actually buying out of the catalog. We work with all types of software formats: we provide machine images in the Amazon Machine Image format, but we also publish and make available SaaS products, container products, and algorithms and models to run in things like our SageMaker environment. And then, as of this morning at the Global Partner Summit, we announced the ability to sell professional services through AWS Marketplace as well. >>So lots of expansion, lots of growth. I'd love to get, Chris, your take on this expansion into offering professional services. What does that mean, and how have your 300,000-plus customers been influential in that? >>Yeah. What we've seen, as Marketplace has evolved, is that the transaction sizes have actually gone up dramatically. A couple of years ago we launched a feature called Private Offers, which allows ISVs to do a negotiated subscription, submit it to an AWS customer, and when they accept, it goes right on their bill. We've seen very good adoption of that: we've got thousands of private offers now going through the system. And what we found, as the transaction sizes started to grow, applies to both the ISVs that are using the platform and the consulting partners that work with us through the AWS Partner Network.
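As one small, concrete illustration of how Marketplace subscriptions plug into AWS machinery on the seller side, a SaaS listing can check a buyer's active entitlements with the AWS Marketplace Entitlement Service. This call is not something discussed in the interview itself, and the product code below is a placeholder.

```python
import boto3

# Placeholder product code for a hypothetical SaaS listing.
PRODUCT_CODE = "1a2b3c4d5e6f7g8h9i0example"

client = boto3.client("marketplace-entitlement", region_name="us-east-1")

# A seller typically calls this after a new buyer registers, to confirm what
# the customer is entitled to under the subscription on their AWS bill.
response = client.get_entitlements(ProductCode=PRODUCT_CODE)
for entitlement in response.get("Entitlements", []):
    print(entitlement["CustomerIdentifier"], entitlement["Dimension"])
```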
typically attach services to those transactions. So for an ISV, you might want to package on something like an installation service or training services, or it could just be a bespoke statement of work that goes along with your technology. And on the consulting partner side, resellers want to attach those same types of services to the software they resell. Up until this morning we weren't able to do that, and it created a lot of friction for our customers, because they either had to strip those services out of the transaction or do the whole deal outside of Marketplace, and that wasn't a good experience for our ISV community, our reseller community, or our customers. So now, with this launch, we can actually let customers buy those services from ISV partners and resellers right through Marketplace. It basically works like our private offer experience: the partner submits a private offer to the customer, they can upload a statement of work, and if the customer accepts, it goes directly on their AWS bill, and AWS Marketplace takes care of all the collection and billing that goes along with the transaction. We're really excited about this — we had over 100 launch partners ready to go as of this morning, and we think this feature is going to get a lot of adoption. CrowdStrike, the company Matthew is with, is one of our launch partners for the feature. So we think this is going to be a game changer for us on a number of levels; it really opens up the types of transactions we can now do through Marketplace.
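The listing and private-offer flow Chris describes is also exposed programmatically through the AWS Marketplace Catalog API, which sellers use to manage their catalog entries. The sketch below is a minimal illustration, not AWS's or any seller's production tooling: it assumes boto3 credentials are already configured, and the 'SaaSProduct' entity type is just one example value — private-offer or professional-services change sets would carry seller-specific details.

```python
# Minimal sketch: enumerate a seller's AWS Marketplace listings with the
# Marketplace Catalog API via boto3. Not production tooling -- the
# 'SaaSProduct' EntityType is an illustrative choice.
import boto3

catalog = boto3.client("marketplace-catalog", region_name="us-east-1")

def list_listings(entity_type="SaaSProduct"):
    """Return (name, entity_id) pairs for one entity type in the catalog."""
    listings, token = [], None
    while True:
        kwargs = {"Catalog": "AWSMarketplace", "EntityType": entity_type}
        if token:
            kwargs["NextToken"] = token
        resp = catalog.list_entities(**kwargs)
        listings += [(e.get("Name"), e.get("EntityId"))
                     for e in resp.get("EntitySummaryList", [])]
        token = resp.get("NextToken")
        if not token:
            return listings

if __name__ == "__main__":
    for name, entity_id in list_listings():
        print(f"{entity_id}  {name}")
```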
>>Well, you mentioned a good F-word: frictionless. That's something every business really aims for, making the experience as seamless as possible. So Matthew, talk to us about CrowdStrike being part of this professional services launch and the opportunities it opens up for Marketplace customers and your customers. >>Sure. So just a quick background on CrowdStrike: we're an endpoint protection cybersecurity company that has historically been protecting laptops, desktops, and on-premise devices from breaches, basically identifying indications of attack or indications of compromise that may surface on those endpoints. We do that by having agents run on those devices that point back to our massive body of data running in the cloud — on AWS, in fact. By collecting tons and tons of data, petabytes upon petabytes, literally trillions of events per week, we're able to apply machine learning and artificial intelligence to that corpus of data and identify when there is adversary activity on those devices. Now, we've gone through a bit of a digital transformation ourselves. We've launched products recently that not only protect those on-premise devices — the desktops, laptops, and on-premise servers — but also protect workloads running in the cloud, EC2 instances or RDS instances and the like in AWS. We've also launched what CrowdStrike calls our Falcon Horizon product, a cloud security posture management product that gives people visibility into configurations that may create risk for their cloud environments. And we've been leveraging Marketplace for about two years now. It's been a fantastic opportunity for us to really leverage that frictionless sales motion Chris talked about, reducing sales cycles for us and for our channel partners. A number of our channel partners leverage the CPPO capability — consulting partner private offers — within AWS Marketplace to transact business with their customers, and it's been a fantastic mechanism for CrowdStrike, for our partners, and for our customers. We've also been part of the enterprise contract scenario, where we don't have to go through the process of negotiating an end-user license agreement: we've signed up for the enterprise contract, many of our customers have signed up for it as well, and that reduces the legal iterations needed to get a transaction done. So that's been fantastic. And what we're doing now with the professional services offering is standing up a few of our professional services offerings on AWS Marketplace, so that our customers and channel partners can transact business through the Marketplace to acquire them. The one I think is most interesting is a kind of cloud security assessment, where our professional services team goes in and evaluates their configurations — are there unmanaged accounts running in AWS, or anything else that could represent a security risk — and makes recommendations about how to improve the overall security posture of that cloud environment, leveraging something like CrowdStrike's Falcon Horizon, as I mentioned earlier, or our cloud workload protection offering. So it really is about streamlining the procurement, offering customers the ability to acquire through AWS Marketplace, whether that's the CrowdStrike products or the CrowdStrike service offerings.
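The cloud security assessment Matthew mentions comes down to scanning account configurations for risky settings. The snippet below is not CrowdStrike Falcon Horizon code; it's a hedged illustration of the kind of check a posture-management assessment automates, using boto3 to flag security groups that leave SSH open to the whole internet.

```python
# Illustrative only -- not CrowdStrike Falcon Horizon. A toy posture check
# that flags EC2 security groups allowing SSH (port 22) from 0.0.0.0/0.
import boto3

def open_ssh_groups(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            all_traffic = perm.get("IpProtocol") == "-1"
            from_port, to_port = perm.get("FromPort"), perm.get("ToPort")
            covers_ssh = from_port is not None and from_port <= 22 <= to_port
            world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in perm.get("IpRanges", []))
            if (covers_ssh or all_traffic) and world_open:
                findings.append(sg["GroupId"])
    return findings

if __name__ == "__main__":
    for group_id in open_ssh_groups():
        print(f"Security group {group_id} allows SSH from anywhere")
```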
>>So, Matthew, I imagine given this year, when we're all not sitting together face to face in Las Vegas, the events of 2020 have also brought a lot of challenges from a security perspective. We've seen ransomware going up dramatically, and there's been this massive pivot to working remotely. I can imagine that's a big opportunity for CrowdStrike to help customers whose endpoints are suddenly scattered. So in terms of that, as well as the impact of what you're doing with AWS Marketplace, it seems like a great opportunity to give your customers faster access to ensuring the security of all of their data, which is business critical. >>Yeah, 100%. The global pandemic and work-from-anywhere have driven demand for CrowdStrike's capabilities in two ways. Number one, with people leaving the office and going home, there's a proliferation of physical devices — laptops for people to work from home — which obviously need to be protected. And a lot of times these were people working from home for the first time, no longer within the protection of the corporate network; maybe they're using a VPN, but they need the added protection of an endpoint protection capability like CrowdStrike's. And the second is that a lot of digital transformation has been accelerated. We've had customers tell us they had a three-year plan for their digital transformation, and a lot of that involves moving on-premise servers to the cloud — and they've had to accelerate it to two months, or even weeks in some cases. That's driving huge demand for understanding how to maintain the proper security posture for those cloud environments. So speed is key right now: making sure you're protected, and transacting those sales cycles quickly by leveraging AWS Marketplace, is all accelerating. >>Yes, speaking of that acceleration — and we've talked about it a lot, Matthew, this acceleration of digital transformation, years crammed into months — Chris, let's wrap with you. In light of that acceleration, how has it positively affected the AWS Marketplace, bringing in professional services and allowing your customers to have much more available to them, to transact directly and in a frictionless way, when speed is so critical? >>Yeah, what it really leads to is more selection. If you take a step back and think about the famous Amazon flywheel, one of the key components of what makes a flywheel go is selection, and there were a lot of solutions we just couldn't sell through Marketplace without some kind of services attached. While there are plenty of products where you can just point, click, and go, there's a lot of technology that needs some hand-holding. So by virtue of launching services, this opens up the aperture in terms of the selection we can bring into the catalog. One of the things we've been focused on lately is bringing in business applications, for example, and a lot of times a business application needs services to wrap around that solution sell and be part of the implementation. So that's the other great thing about this: it gives us more selection, and that lets our customers buy more and more products out of the Marketplace, in this very easy format that literally lets them put these transactions directly on their AWS bill. So we think it's going to be great — not only for moving deals faster but also for providing more solutions and a better selection experience to the AWS customer. >>And being able to do all of that remotely, which these days is table stakes. Chris, Matthew, thank you so much for joining me today and talking about what's new with the AWS Marketplace and what you're doing with professional services and CrowdStrike. We appreciate your time. >>Yep, thank you. >>Thanks, Lisa. >>For my guests, I'm Lisa Martin. You're watching theCUBE's live coverage of AWS re:Invent 2020.
SUMMARY :
Chris Grusz of AWS and Matthew Polly of CrowdStrike discuss the AWS Marketplace news from the re:Invent 2020 partner keynote: the catalog has grown to over 8,000 listings, more than 1.5 million subscriptions, and 300,000 active customers, and as of this morning sellers can attach professional services to Marketplace transactions via private offers that land directly on the customer's AWS bill. CrowdStrike, a launch partner for the professional services feature, is listing offerings such as a cloud security assessment built around its Falcon Horizon posture-management product, and both guests point to pandemic-driven remote work and accelerated cloud migrations as reasons frictionless, fast procurement matters.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matthew | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Chris Gru | PERSON | 0.99+ |
Matthew Polly | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
two months | QUANTITY | 0.99+ |
Chris Grusz | PERSON | 0.99+ |
Amazon Partner Network | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
two guests | QUANTITY | 0.99+ |
three year | QUANTITY | 0.99+ |
eight | QUANTITY | 0.99+ |
eight years ago | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
300,000 plus customers | QUANTITY | 0.99+ |
two ways | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
over 100 launch partners | QUANTITY | 0.98+ |
Global Partner Summit | EVENT | 0.98+ |
US | LOCATION | 0.98+ |
second | QUANTITY | 0.98+ |
over 1.5 million subscriptions | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
about two years | QUANTITY | 0.98+ |
one car | QUANTITY | 0.98+ |
three | QUANTITY | 0.97+ |
Red Shift | TITLE | 0.97+ |
Both | QUANTITY | 0.97+ |
up to 300,000 active customers | QUANTITY | 0.97+ |
over 8000 listings | QUANTITY | 0.97+ |
this year | DATE | 0.96+ |
eight year old | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
Crowdstrike | ORGANIZATION | 0.96+ |
tons and tons of data petabytes | QUANTITY | 0.95+ |
Keynote | ORGANIZATION | 0.94+ |
earlier today | DATE | 0.93+ |
this morning | DATE | 0.93+ |
Matthews | PERSON | 0.93+ |
trillions of events per week | QUANTITY | 0.9+ |
Crowdstrike | TITLE | 0.89+ |
couple years ago | DATE | 0.87+ |
Two instances | QUANTITY | 0.86+ |
Ransomware | TITLE | 0.85+ |
pandemic | EVENT | 0.83+ |
crowdstrike | ORGANIZATION | 0.82+ |
private offers | QUANTITY | 0.81+ |
fire | COMMERCIAL_ITEM | 0.79+ |
AWS Marketplace | ORGANIZATION | 0.78+ |
Eyes | ORGANIZATION | 0.76+ |
AWS Global Partner Network | ORGANIZATION | 0.74+ |
Cube | ORGANIZATION | 0.67+ |
Falcon Horizon | TITLE | 0.65+ |
Keynote Analysis with Jerry Chen | AWS re:Invent 2020
>>From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners. Hello, and welcome back to theCUBE's live coverage — theCUBE Virtual, here in Palo Alto, California, because we can't be there in person this year. I'm your host, John Furrier. We're kicking off day two of the three weeks of re:Invent, with a lot of great leadership sessions to review, and we're obviously still buzzing from the Andy Jassy three-hour keynote, which had so many storylines it's really hard to unpack. We're going to dig into that today with Jerry Chen, who has been a Cube alumni since the beginning of our AWS coverage. Going back to 2013, Jerry was wandering the hallways, in between VMware and VC, and then we saw you there. You've been on theCUBE every year at re:Invent with us, so special commentary from you. Thanks for coming on. >>Hey, John, thanks for having me, and a belated happy birthday as well. If everyone out there didn't know, John's birthday was yesterday, and the hardest-working man in technology spent his entire birthday doing live coverage of Amazon re:Invent. Happy birthday, buddy. >>Well, I love my work, I love doing this, and re:Invent is the biggest event of the year because it really has become a bellwether, so I'm super excited to have you on. We've had great conversations, and looking back at our conversations over the Thanksgiving weekend, Jerry, the stuff we were talking about was very apropos: Jassy is leaning in with this whole messaging around change and horizontal scalability. He didn't really say it that way, but he was saying you can disrupt in these industries and still use machine learning. These were some of the early conversations we were having on theCUBE; now it's more mainstream than ever before. So that's a big part of the theme there. >>Yeah, it's Amazon re:Invent and Amazon evolution, to your point, because it's both reinventing what customers are doing with the cloud, and Amazon evolving year after year with their services. They started with simple infrastructure — S3 and EC2 — and now they're building, as Andy said, basically a deconstructed CRM: a lot of the stuff they're doing around call centers is almost going after Salesforce with a kind of deconstructed CRM service, which is super interesting. Not to mention the AI stuff and the semiconductor stuff. And the same day Amazon announces all those technologies, you have Slack being acquired by Salesforce for $27.7 billion. So there's a lot of stuff going on in the cloud world these days, and that's the fun part of it. >>You know, it really is interesting. You look at the Slack acquisition by Salesforce — it kind of takes Slack out of the play here. I mean, they were doing really well: a messaging service that turned into collaboration software, they hit the mainstream, they have great revenue. Is that going to really change the landscape of the industry? Salesforce had to acquire it, and it opens the door up for more innovation. And it's funny you mention the contact center, because I was pressing Jassy on my exclusive one-on-one with him. I said, Andy, my daughter and my sons don't use the phone — they're not going to call. Is this a call center deal? And he goes, no, it's about the contact. So think about that notion of the contact; it's not about the call center.
It's the point of contact. Okay — LinkedIn is with Microsoft, you've got Slack and Salesforce: contact-driven collaboration. Interesting kind of play for Microsoft to use voice and their data. What's your take on that? >>I have this framework, as you know, that I talk about: systems of engagement versus systems of intelligence and systems of record. Voice, email, Slack — these are all different systems of engagement, and they sit on top of systems of record like CRM, customer support ticketing, or HR. Now, what Salesforce did by buying Slack is they now own a system of engagement. Not only is Slack a system of engagement for CRM, it's also a system of engagement for ERP, for ServiceNow — it's how you interact with a bunch of applications. So if you think about Salesforce's strategy in the space, competing against Microsoft or ServiceNow or other large ERPs, now that they own Slack as the system of engagement, that's a super powerful way to compete against rival SaaS companies, because if you own the engagement layer, you can disintermediate what's in the background. Likewise, the contact center owns voice, email, chat, and messaging — you can disintermediate the stuff in the back there too. So they're trying to own the system of engagement. And likewise, Facebook just bought the company Kustomer a week ago for a billion dollars, which is also omnichannel support — chat, messaging, voice. It's again the system of engagement between the end user, which could be a customer or could be an employee. >>You know, enterprise tech has been so much fun over the past 10 years, and I've got to say the past five have been even more fun — it's become the new fun area. The impact on the enterprise has been interesting, because we're talking about just engaging the system of record, and this is now the new challenge for the enterprise. So I want to get your thoughts, Jerry, on how you see the CxOs and CISOs and the architects out there trying to reinvent the enterprise. Jassy is saying, look, find the truth, be on the right side of history here — certainly he's got self-serving interest there, but there is a true mandate, with COVID and with digital acceleration, for the enterprise to change, given all these new opportunities to revolutionize or disrupt or radically improve. What do the CxOs do? How do you see that? >>It's increasingly messy for the CxOs, and I don't envy them, right? Because back in the day they kind of controlled all the IT spend and had a standard for what technologies were used in the company. Then along came Amazon and cloud, and all of a sudden your developers can say, hey, let me swipe my credit card and get access to a bunch of APIs around compute and storage. Likewise, now they can swipe the credit card and use Stripe for billing. There's a whole bunch of services now, so it becomes incumbent on CIOs and CISOs to get a new set of management tools — not just security tools, but also observability tools, understanding what services are being used by their customers, when and how. And I would say the following, John: for CIOs it's a challenge, but if I were a CxO I'd be pretty excited, because now I have a bunch of other weapons and a whole set of services I can offer
my end users, my developers, my employees, my customers. And it's exciting for them: not only can they do different things, they can also change how their business is done. They can interact with their end users via chat like Slack, or via phone like a contact center, or Instagram for your kids. It's a new challenge, but if I were a CIO, it's time to build again — I think COVID has said it's time to build again. You can build. >>To kind of take that phrase from the movie Shawshank Redemption: get busy building or get busy dying, to rephrase it. And that's kind of the theme I'm seeing here, because COVID kind of forced people — look, things like work from home: who would have thought 100% of people would be working at home? Who would have thought the workloads would change so differently? So it's an opportunity to deconstruct or disintermediate these services. And in all the trends I've seen over my career, it's been those inflection points — breaking the monolith or breaking the proprietary piece of it — that have always been an opportunity for entrepreneurs and for companies, whether you're a CEO or a startup: by decomposing, you can come in and create value. To me, Snowflake going public on the back of Amazon, basically — that's interesting. You don't have to do everything: you could kill one feature, nail it, and go big. >>I think we've talked in the past about whether Amazon or Google or Microsoft is going to win everything — winner take all, winner take most — and you could argue that it's hard to find oxygen as a startup in a broad platform play. But I think Snowflake and other companies have shown — companies like MongoDB, for example, or Elastic — that if you can pick a service or a problem space and either develop IP that's super deep or own a developer audience, you can actually fight the big guys, the big three cloud vendors — Amazon, Google, Microsoft — in different markets. And if you're a startup founder, you should not be afraid of competing with the big cloud vendors, because there are success patterns for how you can win and create a lot of value. As a founder and investor, I'm super excited by that, because I don't think you're going to find a company that takes down Amazon completely — the scale and the network effects are just too large — but you can create a lot of value and build valuable companies like Snowflake in and around the Amazon, Google, and Microsoft ecosystem. >>Yeah, I want to get your thoughts on that. You have one portfolio company we've covered, Rockset, which does a lot of SQL — one of your investments. An interesting part of the keynote yesterday was Andy Jassy kind of going after Microsoft, saying Windows, SQL Server — they're targeting that with this new tool that basically sucks in the database; it's called Babelfish for Aurora PostgreSQL. What was your take on that? Obviously Microsoft is big, and their enterprise sales tactics are looking more like Oracle's, which he was kind of hinting at and commenting on. But SQL is the lingua franca for data. >>Correct. I think we went through kind of a NoSQL phase, which was a trendy thing for a while, and NoSQL is still around — not only SQL, like MongoDB's document interface, still holds true — but to your point, the world speaks SQL.
All your applications speak SQL, right? So if you want backwards compatibility, your applications speak SQL; if you want to reach your entire installed base of employees who know SQL, you've got to speak SQL. So Rockset — one of the first public conversations about what they're building was on theCUBE with you and me and Venkat, the founder — what Rockset is building is a real-time Snowflake, for lack of a better term. It's a real-time SQL database in the cloud that's super elastic, just like Snowflake is. But unlike Snowflake, which is a data warehouse mostly for dashboards and analytics, Rockset is millisecond queries for real-time applications. So think of them as the evolution of where cloud databases are going: not only elastic like Snowflake and in the cloud like Snowflake, but 10 to 15 millisecond queries versus one or two second queries. And I think what Andy Jassy and Amazon did with Babelfish is say, hey, SQL is the lingua franca of the cloud. There's a large installed base of SQL Server developers and applications out there, and we're going to use Babelfish to move those applications from on-premise to the cloud, or from the old workloads to the new workloads. And I think the name of the game for cloud vendors across the board, big and small, from startups to Google, Microsoft, and Amazon, is how you reduce friction — friction to try a new service, to get your data into the cloud, to move your data from one place to the next. So Amazon is trying to reduce friction with Babelfish, and I think it's a great move by them. >>Yeah, and by the way, not only is it for Aurora PostgreSQL, they're also open sourcing it, so that's going to be interesting to play out, because once they open source it, essentially that's an escape valve for lock-in. If you're a Microsoft customer, it ultimately could be that gateway drug: hey, if you don't like the licensing, come over here. Now, there are going to be some questions on the translations — there's some scuttlebutt about that — but we'll see; it's open source, we'll see how it goes. Great stuff on Rockset — great startup.
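Babelfish adds a SQL Server-compatible TDS endpoint and a T-SQL dialect layer on top of Aurora PostgreSQL, so an existing SQL Server client stack can point at Postgres with little or no change. A minimal sketch of what that looks like from the application side, assuming a Babelfish-enabled Aurora cluster — the endpoint, credentials, ODBC driver name, and the dbo.orders table here are placeholders, not real resources:

```python
# Hedged sketch: talk to a Babelfish-enabled Aurora PostgreSQL cluster over
# the TDS port (1433) with an existing SQL Server driver stack and plain
# T-SQL. Server, credentials, driver, and the dbo.orders table are assumed.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-babelfish-cluster.cluster-abc123.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=babelfish_db;"
    "UID=app_user;PWD=example-password;"
)
cursor = conn.cursor()

# T-SQL constructs (TOP, @@VERSION) are handled by the Babelfish layer.
cursor.execute("SELECT @@VERSION")
print(cursor.fetchone()[0])

cursor.execute("SELECT TOP 5 order_id, total FROM dbo.orders ORDER BY total DESC")
for order_id, total in cursor.fetchall():
    print(order_id, total)

conn.close()
```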
And so you could argue that this 15 year transitions were like, you know, Thio use a bad metaphor like amphibians. You're half in the water, half on land, you know, And like, you know, you're not You're not purely cloud. You're not purely on premise, but you can do both ways, and that's great. That's great, because that's a that's a dominant architecture today. But come just like rock set and snowflake, your cloud only right? They're born in the cloud, they're built on the cloud And now we're seeing a generation Startups and technology companies that are cloud only. And so, you know, unlike you have this transitionary evolution of like amphibians, land and sea. Now we have ah, no mammals, whatever that are Onley in the cloud Onley on land. And because of that, you can take advantage of a whole different set of constraints that are their cloud. Only that could build different services that you can't have going backwards. And so I think for 2021 forward, we're going to see a bunch of companies or cloud only, and they're gonna look very, very different than the previous set of companies the past 15 years. And as an investor, as you covering as analysts, is gonna be super interesting to see the difference. And if anything, the cloud only companies will accelerate the move of I t spending the move of mawr developers to the cloud because the cloud only technologies are gonna be so much more compelling than than the amphibians, if you will. >>Yeah, insisting to see your point. And you saw the news announcement had a ton of news, a ton of stage making right calls, kind of the democratization layer. We'll look at some of the insights that Amazon's getting just as the monster that they are in terms of size. The scope of what? Their observation spaces. They're seeing all these workloads. They have the Dev Ops guru. They launched that Dev Ops Guru thing I found interesting. They got data acquisition, right? So when you think about these new the new data paradigm with cloud on Lee, it opens up new things. Um, new patterns. Um, S o. I think I think to me. I think that's to me. I see where this notion of agility moves to a whole nother level, where it's it's not just moving fast, it's new capabilities. So how do you How do you see that happening? Because this is where I think the new generation is gonna come in and be like servers. Lambs. I like you guys actually provisioned E c. Two instances before I was servers on data centers. Now you got ec2. What? Lambda. So you're starting to see smaller compute? Um, new learnings, All these historical data insights feeding into the development process and to the application. >>I think it's interesting. So I think if you really want to take the next evolution, how do you make the cloud programmable for everybody? Right. And I think you mentioned stage maker machine learning data scientists, the sage maker user. The data scientists, for example, does not on provisioned containers and, you know, kodama files and understand communities, right? Like just like the developed today. Don't wanna rack servers like Oh, my God, Jerry, you had Iraq servers and data center and install VM ware. The generation beyond us doesn't want to think about the underlying infrastructure. You wanna think about it? How do you just program my app and program? The cloud writ large. And so I think where you can see going forward is two things. One people who call themselves developers. That definition has expanded the past 10, 15 years. 
It's only growing, so everyone is going to be a developer, from your white-collar knowledge worker to your hardcore infrastructure developer. The population of developers is expanding, especially around machine learning and that SageMaker audience, for sure. And then what's going to happen is that a lot of this audience doesn't want to care about the stuff you just mentioned, John, in terms of the underlying plumbing. So Amazon, Google, and Azure will make that stuff easy — or a startup will make it easy — and the move toward Lambda and toward services where you specifically don't think about the underlying plumbing means either a startup abstracts away all the underlying infrastructure bits, or the big three cloud vendors say, we'll do all this stuff in a serverless fashion. So I think serverless as a paradigm is, quite frankly, a battlefront for the big three clouds and for startups, and probably one of the front lines of the next generation. Whoever owns this programmable-cloud model — programming the internet, programming the cloud — may own the next platform for the next 10 or 15 years. And it's still up for grabs.
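Serverless is the clearest expression of that "program the cloud, not the plumbing" idea: the function is the whole deployment unit. A minimal AWS Lambda handler in Python looks like the sketch below; the API Gateway-style event shape is an assumption for illustration.

```python
# Minimal sketch of the serverless model: no servers, containers, or cluster
# config -- just a handler the platform invokes on demand. Assumes an API
# Gateway-style event; the field names are illustrative.
import json

def lambda_handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    body = {"message": f"hello, {name}", "request_id": context.aws_request_id}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```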
>>Yeah, I think that is so insightful — worth calling out, and it's going to be a multi-year effort. Look at where containers are now, with EKS Anywhere and the container service control plane built in; you've got real-time analytics coming in from Rockset and Amazon; you have the Panorama appliance doing machine learning and computer vision with sensors. This is a whole new level of purpose-built stuff — software-powered, software-operated — so you have this notion of DevOps going hand in glove, software and operations. How do you operate all this? The whole next question is: okay, this is all great, but Amazon has always had this problem — it's just so hard, there's so much good stuff. Who do you hire to operate it? It's not yet programmable, and that's been a big problem for them. Your thoughts on that? >>I think the data and automation around DevOps, etcetera, is the solution. You're going to have information from Amazon and from startups, and they're going to automate a bunch of the operations. You know, I'm involved with a company, Chronosphere, that we've talked about in the past — the team from Uber that built something called M3 — and it's basically a next-generation visibility platform. They collect all the data from your applications, and once they have your data, they know how to operate and automate scaling up, scaling down, and basic remediation for you. So you're going to see a bunch of tools take the information from running your application and infrastructure and automate exactly how to scale and manage your app. AI and machine learning writ large, John, are going to make a lot of the plumbing go away — maybe not completely, but they let you scale better. So a single sysadmin, a single SRE, a site reliability engineer, can scale and manage a much bigger application, and it's all going to be around automation. And to your earlier point, if you have the data, that's a powerful situation: once you have the data, you can build models on it and start building solutions on top of it. So I think what happens is, when you build this programmable cloud for your broad development population, automating all of this becomes important. That's why I say serverless, or this automation of infrastructure, is the next battleground for the cloud, because whoever does that for you becomes your virtualized back end, your virtualized data center, your virtualized SRE — and whoever owns that is going to be in a very, very strategic position.
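Stripped of any particular vendor, the loop Jerry describes — telemetry in, scaling or remediation out — reduces to a small decision function like the toy below. Real systems such as Kubernetes' horizontal pod autoscaler or an observability platform's remediation hooks use far more signal and far more safeguards; this shows only the core idea.

```python
# Toy illustration of metric-driven scaling: from recent CPU utilization
# samples, pick a new replica count. Real autoscalers add smoothing,
# cooldowns, and many more signals; this shows only the core calculation.
import math
from statistics import mean

def target_replicas(current_replicas, cpu_samples, target_util=0.60,
                    min_replicas=2, max_replicas=50):
    observed = mean(cpu_samples)                      # e.g. 0.85 == 85% CPU
    desired = math.ceil(current_replicas * observed / target_util)
    return max(min_replicas, min(max_replicas, desired))

# Example: 6 replicas running hot (~85% CPU) against a 60% target -> scale to 9.
print(target_replicas(6, [0.82, 0.88, 0.85]))
```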
>>Yeah, it's great stuff. This is back to the theme that virtualization has now gone beyond server virtualization — it's media virtualization with theCUBE, my big joke here with theCUBE Virtual — but it's to your point: everything can now be replicated in software and scaled to cloud scale, so it's a super big opportunity for entrepreneurs and companies to pivot and differentiate. The question I have for you next is on that thread: there's a huge edge discussion going on, right? I think I said it two or three years ago: the data center is just a big fat edge. Jassy kind of said that in his keynote — he looks at it as just another edge point from his standpoint. But you have the data center, you have real edges, you've got 5G with Wavelength, this Local Zones concept — which is Amazon in these metro areas, and reminds me of the old wireless point-of-presence vibe — and then you've got purpose-built devices like cameras and factory equipment. So it's huge industrial innovation, robotics meets software, a whole edge development area exploding. What's your view of this, and how do you look at it as an investor and industry watcher? >>I think edge is both an opportunity for startups and companies, and a threat to Amazon — it's the reason they have Outposts and all that stuff. The edge is about decentralizing your application and moving it out, from my wearable to my home to my car to my city block; edge access is super interesting. And so, a couple of things. One, companies like Cloudflare, Fastly, and a company I'm involved with called Cato Networks, which does SASE — secure access service edge, that's the category definition — and SASE is about how you get compute to the edge securely for your developers, your customers, your workers, your end users. What companies like Cloudflare and Cato have done is build out a network of PoPs across the world, their own infrastructure, so they're not dependent on the big cloud providers or the telco providers — they partner with the big clouds, they partner with the telcos, but they have their own kind of system, their own platform, to get to the edge. And so companies like Cato Networks and Cloudflare that have a presence on the edge and, more or less, their own infrastructure are going to be in a strategic position. Cato has seen benefits in the past year of COVID and lockdown because of more remote access and more developers, and I think edge is going to be a great area of development going forward. If you're Amazon, you're pushing to the edge aggressively with Outposts; if you're a developer or a startup, creating your own infrastructure and riding this edge wave could be a great way to build a moat against the big cloud guys. So I'm super excited — edge, and this whole idea of owning your own infrastructure like Cato has done, is going to be super useful going forward, and you're going to see more and more companies spend the money to try to copy that kind of Cloudflare or Cato presence around the world. Because once you own your own infrastructure and set of PoPs, and you're less dependent on a cloud provider, you're in a good position — there was the Amazon outage last week, and I think Twilio and a bunch of services went down for a few hours; if you own your own set of PoPs, you're independent of that, and it's actually really, really secure. >>If they go down, though, it's on you — but that was the Kinesis outage they had right before Thanksgiving, and yeah, that's a problem. So I guess the question for you on that is: is it better to partner with Amazon or try to get a position on the edge, and have them either buy you or compete with you — create value, or coexist? How do you see that strategic move: do you coexist, do you play with them? >>I think you have to coexist — partner and coexist, right? Like all things, you compete with Amazon: Amazon is so broad that you'll be partnered with one part of Amazon and competing with another, and that's fair game. Snowflake competes against Redshift, but they're also one of Amazon's big customers — they run on Amazon. So if you're a startup trying to find your edge, you have to coexist with Amazon, because they're so big. The big three clouds — Amazon, Google, Azure — aren't going anywhere, so as a startup founder you definitely coexist: leverage the good things of the cloud, but then invest in your own edge, both figuratively — what's your edge — and literally the edge. And you complement your edge presence — be it the home, the car, the city block, the zip code — by using Amazon strategically, because Amazon is going to help you get to different countries and different regions. You can't build a company without touching Amazon in some form or fashion these days, but if you're a startup founder, being strategic about how you use Amazon and picking how you differentiate is key. The differentiation might be small, John, but it can be super valuable — maybe it's only 10 or 15%, but that can be a whole ton of value you're building on top. >>Yeah, and there's a little bit of a growth hack with Amazon, too, if you know how it works. If you compete directly against the core building blocks like EC2 or S3, you're going to get killed, right? They'll kill you. But the white space is interesting: in the old days with Microsoft, if you had a white space, they'd either give it to you or they'd roll you over and level you out. With Amazon, if you're a customer and you're in a white space and you do better than them, they're cool with that. They're basically saying, hey, if you can innovate on behalf of the customer, we'll let you do that — as long as you have a big bill. Snowflake is paying a lot of money to Amazon, sure, but they're also doing a good job. So Amazon has been very clear on that: if you do a better job than us for the customer, go do it; but if customers want Amazon Redshift, if they want Amazon only, they can choose that. So that's kind of the playbook. >>I think that's absolutely right, John. It stems from Andy Jassy and the Amazon culture of the customer comes first: whatever is best for the customer is their mission statement, so whatever they do, they do for the customer.
And if you build value for the customer on top of Amazon, they'll be happy. You might compete with some Amazon services — the GM of that particular business may not be happy — but overall, net-net, Amazon is getting a share of the dollars you're charging the customer, a share of the value you're creating, and they're happy, because a rising tide floats all boats. More cloud usage only benefits the big three cloud providers, Amazon in particular because they're the biggest of the three. The more dollars that go to the cloud — the more you help move apps to the cloud and build solutions in the cloud — the happier Amazon is, because regardless of what you're doing, they get a fraction of those dollars. Now, the key for a startup founder, and what I'm looking for, is how to get more than a sliver of those dollars — how to get a bigger slice of the pie. So edge and serverless are two areas I'm thinking about, because those are two areas where you can actually invest, own some IP, own some surface area, and capture more of the value as a startup founder, and build something that lasts alongside Amazon. >>Yeah, great thesis. Jerry, it's always been great — you've been with theCUBE since the beginning, on our first re:Invent in 2013, so we're now in our eighth year. Great to see your success, and the great investments you've made — you're a world-class investor at a great firm, Greylock. Great to have you on and get your perspective. Final take on this year: what's your view of Jassy's keynote, just in general? What's the vibe, what's the quick soundbite from you? >>First, I'm so impressed that Andy can do a three-hour keynote more or less by himself — that is a one-man show, and I don't think I could pull that off. Number two, Amazon's ability to execute at so many different levels of the stack, from semiconductors — their AI chips — to high-level services around healthcare and other industry solutions, is amazing. So I'd say I'm impressed by Amazon's ability to go so broad, up and down the stack. But also, the theme from Andy Jassy is acceleration. Now that there are things unique to the cloud — whether that's AI chips unique to the cloud or services that are cloud-only — you're going to see a tipping point. We saw acceleration in the past 15 years, John, in what I called the cloud transition, but from 2021 onward you'll see a tipping point where you can only get certain things in the cloud. That could be the underlying inference instances or training instances Amazon is offering, so all of a sudden you, as a founder or developer, say, look, I can get so much more in the cloud — there's no reason for me to do this hybrid thing. Hybrid is not going to go away, on-prem is not going away, but for sure we're going to see increasing acceleration of cloud-only services, or edge-only services, or things that only run on functions, serverless. That'll define the next 10 years of compute, and for you and me that's going to be a space to watch. >>Jerry Chen, always a pleasure. Great insight, great to have you on theCUBE again. Great to see you. Thanks for coming on.
>>Congrats to you guys at theCUBE — seven years and growing, it's amazing to see all the content you've put out. Just one last point: you see the growth curves of the cloud; I'd be curious, John, about the growth curve of theCUBE's content, because I'd say you guys are also going exponential. Super impressed with what you've built. Congratulations. >>Thank you so much. theCUBE Virtual — we've been virtualized. Virtualization has come to theCUBE; we're not in person this year because of the pandemic, but we'll be hybrid as soon as events come back. I'm John Furrier, your host for AWS re:Invent coverage with theCUBE. Thanks for watching, and stay tuned for more coverage all day over the next three weeks. Stay with us.
SUMMARY :
John Furrier and Greylock partner Jerry Chen break down Andy Jassy's three-hour re:Invent 2020 keynote: Amazon's move up the stack toward deconstructed CRM and contact-center services, Salesforce's $27.7 billion Slack acquisition as a play for the system of engagement, and Babelfish for Aurora PostgreSQL as a friction-reducing bridge for SQL Server workloads. Chen argues the industry is leaving the 15-year "cloud transition" for cloud-only companies, with serverless, observability-driven automation, and the edge (Outposts, Wavelength, and SASE players like Cloudflare and Cato Networks) as the next battlegrounds, and he advises startups to coexist with the big three clouds while owning differentiated IP or infrastructure.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jerry Chan | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Jerry Chen | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Microsoft | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Rock | PERSON | 0.99+ |
Jassy | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Jerry | PERSON | 0.99+ |
$27.7 billion | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Vince | PERSON | 0.99+ |
Snowflake | TITLE | 0.99+ |
Shawshank Redemption | TITLE | 0.99+ |
John Fairy | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
two areas | QUANTITY | 0.99+ |
snowflake | TITLE | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Kato Networks | ORGANIZATION | 0.99+ |
2021 | DATE | 0.99+ |
Johnson | PERSON | 0.99+ |
Jackie | PERSON | 0.99+ |
eighth year | QUANTITY | 0.99+ |
Howard | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
Amazons | ORGANIZATION | 0.99+ |
Stefanie Chiras & Joe Fernandes, Red Hat | KubeCon + CloudNativeCon NA 2020
>>From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon North America 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners. Hello, everyone, and welcome back to theCUBE's ongoing coverage of KubeCon North America. Joe Fernandes is here; he's with Stefanie Chiras. Joe is the VP and GM for core cloud platforms at Red Hat, and Stefanie is the SVP and GM of the Red Hat Enterprise Linux BU. Two great friends of theCUBE — awesome seeing you guys. How are you doing? >>It's great to be here, Dave. >>Yeah, thanks for the opportunity. >>Hey, so we all talked recently at AnsibleFest — it seems like a while ago — and we talked about what's new, with Red Hat really coming at it from an automation perspective. But I wonder if we could take a view from OpenShift: what's new from the standpoint of your focus on helping customers change their operations and operationalize? Stefanie, maybe you could start, and then Joe, you can bring in some added color. >>No, that's great. One of the things we try to do at Red Hat, clearly building off of open source, is stay focused on this open hybrid cloud strategy, which we've had for years now. The beauty of it is that hybrid cloud, and open hybrid cloud, continues to evolve — bringing in things like speed, stability, and scale, and now adding in other footprints like managed services as well as edge — and pulling that all together across the whole Red Hat portfolio: from the platforms, certainly with Linux in RHEL, into OpenShift as the platform, and then adding automation, which you certainly need for scale. It continues to evolve as the definition of open hybrid cloud evolves. >>Great. So thank you, Stefanie. Joe, you guys have hard news here — maybe you could talk about 4.6?
Azaz Acela's Microsoft Azure government on dso again This is really important to like our public sector customers who are looking to move to the public cloud leveraging open shift as an abstraction but wanted thio support it on the specialized clouds that they need to use with azure gonna meet us Cup. >>So, joke, we stay there for a minute. So so bare metal talking performance there because, you know, you know what? You really want to run fast, right? So that's the attractiveness there. And then the point about SDO in the open, open shift service measure that makes things simpler. Maybe talk a little bit about sort of business impact and what customers should expect to get out of >>these two things. So So let me take them one at a time, right? So so running on bare metal certainly performances a consideration. You know, I think a lot of fixed today are still running containers, and Cuban is on top of some form of virtualization. Either a platform like this fear or open stack, or maybe VMS in the in one of the public clouds. But, you know containers don't depend on a virtualization layer. Containers only depend on Lennox and Lennox runs great on bare metal. So as we see customers moving more towards performance and Leighton see sensitive workloads, they want to get that Barry mental performance on running open shift on bare metal and their containerized applications on that, uh, platform certainly gives them that advantage. Others just want to reduce the cost right. They want to reduce their VM sprawl, the infrastructure and operational cost of managing avert layer beneath their careers clusters. And that's another benefit. So we see a lot of uptake in open shift on bare metal on the service match side. This is really about You know how we see applications evolving, right? Uh, customers are moving more towards these distributed architectures, taking, you know, formally monolithic or enter applications and splitting them out into ah, lots of different services. The challenge there becomes. Then how do you manage all those connections? Right, Because something that was a single stack is now comprised of tens or hundreds of services on DSO. You wanna be able to manage traffic to those services, so if the service goes down, you can redirect that those requests thio to an alternative or fail over service. Also tracing. If you're looking at performance issues, you need to know where in your architecture, er you're having those degradations and so forth. And, you know, those are some of the challenges that people can sort of overcome or get help with by using service mash, which is powered by SDO. >>And then I'm sorry, Stephanie ever get to in a minute. But which is 11 follow up on that Joe is so the rial differentiation between what you bring in what I can just if I'm in a mono cloud, for instance is you're gonna you're gonna bring this across clouds. I'm gonna You're gonna bring it on, Prem And we're gonna talk about the edge in in a minute. Is that right? From a differentiation standpoint, >>Yeah, that That's one of the key >>differentiations. You know, Read has been talking about the hybrid cloud for a long time. We've we've been articulating are open hybrid cloud strategy, Andi, >>even if that's >>not a strategy that you may be thinking about, it is ultimately where folks end up right, because all of our enterprise customers still have applications running in the data center. But they're also all starting to move applications out to the public cloud. 
As they expand their usage of public cloud, you start seeing them adopted multi cloud strategies because they don't want to put all their eggs in one basket. And then for certain classes of applications, they need to move those applications closer to the data. And and so you start to see EJ becoming part of that hybrid cloud picture on DSO. What we do is basically provide a consistency across all those environments, right? We want run great on Amazon, but also great on Azure on Google on bare metal in the data center during medal out at the edge on top of your favorite virtualization platform. And yeah, that that consistency to take a set of applications and run them the same way across all those environments. That is just one of the key benefits of going with red hat as your provider for open hybrid cloud solutions. >>All right, thank you. Stephanie would come back to you here, so I mean, we talk about rail a lot because your business unit that you manage, but we're starting to see red hats edge strategy unfolded. Kind of real is really the linchpin I wanna You could talk about how you're thinking about the edge and and particularly interested in how you're handling scale and why you feel like you're in a good position toe handle that massive scale on the requirements of the edge and versus hey, we need a new OS for the edge. >>Yeah, I think. And Joe did a great job of said and up it does come back to our view around this open hybrid cloud story has always been about consistency. It's about that language that you speak, no matter where you want to run your applications in between rela on on my side and Joe with open shift and and of course, you know we run the same Lennox underneath. So real core os is part of open shift that consistently see leads to a lot of flexibility, whether it's through a broad ecosystem or it's across footprints. And so now is we have been talking with customers about how they want to move their applications closer to data, you know, further out and away from their data center. So some of it is about distributing your data center, getting that compute closer to the data or closer to your customers. It drives, drives some different requirements right around. How you do updates, how you do over the air updates. And so we have been working in typical red hat fashion, right? We've been looking at what's being done in the upstream. So in the fedora upstream community, there is a lot of working that has been done in what's called the I. O. T Special Interest group. They have been really investigating what the requirements are for this use case and edge. So now we're really pleased in, um, in our most recent release of really aid relate 00.3. We have put in some key capabilities that we're seeing being driven by these edge use cases. So things like How do you do quick image generation? And that's important because, as you distribute, want that consistency created tailored image, be able to deploy that in a consistent way, allow that to address scale, meet security requirements that you may have also right updates become very important when you start to spread this out. So we put in things in order to allow remote device mirroring so that you can put code into production and then you can schedule it on those remote devices toe happen with the minimal disruption. Things like things like we all know now, right with all this virtual stuff, we often run into things like not ideal bandwidth and sometimes intermittent connectivity with all of those devices out there. 
So we put in capabilities around being able to use something called rpm-ostree in order to be able to deliver efficient over-the-air updates. And then, of course, you've got to do intelligent rollbacks, for the chance that something goes wrong: how do you come back to a previous state? So it's all about being able to deploy at scale in a distributed way, be ready for that use case, and have some predictability and consistency. And again, that's what we build our platforms for. It's all about predictability and consistency, and that gives you flexibility to add your innovation on top. >> I'm glad you mentioned intelligent rollbacks. I learned a long time ago, you always ask the question, what happens when something goes wrong? You learn a lot from the answer to that. But, you know, we talk a lot about cloud native. Sounds like you're adapting well to become edge native. >> Yeah, I mean, we're finding, whether it's in the verticals, right, in the very specific use cases, or whether it's in sort of an enterprise edge use case, having consistency brings a ton of flexibility. It was funny, we were talking with a customer not too long ago, and they said, you know, agility is the new version of efficiency. So it's having that sort of language be spoken everywhere, from your core data center all the way out to the edge, that allows you a lot of flexibility going forward. >> So I wonder if you could talk, I just mentioned cloud native, I mean, I think people sometimes just underestimate the effort it takes to make all this stuff run in all the different clouds, the engineering effort required. And I'm wondering what kind of engineering you do, if any, with the cloud providers, and, of course, the balance of the ecosystem. But maybe you could describe that a little bit. >> Yeah, so Red Hat works closely with all the major cloud providers, you know, whether that's Amazon, Azure, Google or IBM Cloud, obviously. And we're, you know, very keen on making sure that we're providing the best environment to run enterprise applications across all those environments, whether you're running it directly just with Linux on RHEL, or whether you're running it in a containerized environment with OpenShift, which includes RHEL. So our partnership includes work we do upstream. For example, you know, Red Hat helped Google launch the Kubernetes community, and we've been, with Google, the top two contributors driving that project since inception. But then it also extends into sort of our hosted services. So we run a jointly developed and jointly managed service called the Azure Red Hat OpenShift service, together with Microsoft, where our joint customers can get access to OpenShift in an Azure environment as a native Azure service, meaning it's, you know, fully integrated just like any other Azure service. You can tie it into Azure billing and so forth. It's sold by Microsoft's Azure sales reps. But, you know, we get the benefit of working together with our Microsoft counterparts in developing that service, in managing that service, and then in supporting our joint customers. Over the summer we announced sort of a similar partnership with Amazon, and we'll be launching, we are already doing pilots on, the Amazon Red Hat OpenShift service, which is, you know, the same concept now applied to the AWS cloud. So that will be coming out GA later this year, right?
But again, whether it's working upstream or whether it's, you know, partnering on managed services, I know Stephanie's team also does a lot of work with Microsoft, for example, on SQL Server on Linux, .NET on Linux. Who ever thought we'd be running those applications on Linux? But that's, you know, a couple of years old now, a few years old. So again, it's been a great partnership, not just with Microsoft, but with all the cloud providers. >> So I think you just showed a little leg there, Joe. What's coming GA later this year? I want to circle back to that. >> Yeah, so we announced a preview earlier this year of the Amazon Red Hat OpenShift service. It's not generally available yet. We're, you know, taking customers who want to sort of be early access, get access to pilots, and then that'll be generally available later this year. Although Red Hat does manage our own service, OpenShift Dedicated, that's available on AWS today, that's a service that's, you know, solely operated by Red Hat. This new service will be jointly operated by Red Hat and Amazon together. The idea is that it would be sort of a service that we are delivering together as partners. >> As a managed service. And okay, so that's in beta now, I presume, if it's going to be GA later. It's probably running on bare metal, I would imagine? >> That one is running on EC2. That's running on AWS EC2, exactly. And again, you know, all of our, I mean, OpenShift does offer bare metal cloud, and we do have customers who can take the OpenShift software and deploy it there. Right now our managed offering is running on top of EC2 and on top of Azure VMs. But again, this is appealing to customers who, you know, like what we bring in terms of an enterprise Kubernetes platform, but don't want to, you know, operate it themselves, right? So it's a fully managed service. You just come and build and deploy your apps, and then we manage all of the infrastructure and all the underlying platform for you. >> That's going to explode, is my prediction. Let's take a hard example: security. And I'm interested in how you guys ensure a consistent, you know, security experience across all these locations: on prem, cloud, multiple clouds, the edge. Maybe you could talk about that. And Stephanie, I'm sure you have a perspective on this as well from the standpoint of RHEL. So who wants to start? >> Yeah, maybe I could start from the bottom, and then I'll pass it over to Joe to talk a bit. I think, one of these aspects about security, it's clearly top of mind for all customers. It does start at the very bottom, the base selection of your OS. We continue to drive SELinux capabilities into RHEL to provide that foundational layer, and then as we run RHEL CoreOS in OpenShift, we bring over that SELinux capability as well. But, you know, there's a whole lot of ways to tackle this. We've done a lot around our policies around CVE updates, et cetera, around RHEL, to make sure that we are continuing to provide, and committing to, mitigation of all criticals and importants, providing better transparency into how we assess those CVEs. So security is certainly top of mind for us. And then as we move forward, right, there are also, and Joe can talk about the security work we do here, capabilities to do that in containerization. But, you know,
we work all the way from the base to doing things like these easy-to-build images, which are tailored so you can make them smaller, with less surface area for security. Security is one of those things that's a lifestyle, right? You've got to look at it all the way from the base, in the operating system, with things like SELinux, to how you build your images, where we've now added new capabilities. And then, of course, in containers, there's a whole focus in the OpenShift area around container security. >> Joe, anything you want to add to that? >> Yeah, sure. I mean, I think, you know, obviously Linux is the foundation for all the public clouds. It's driving enterprise applications in the data center. Part of keeping those applications secure is keeping them up to date, and, you know, through RHEL we provide a secure, up-to-date foundation, as Stephanie mentioned. As you move into OpenShift, you're also able to take advantage of, essentially, immutability, right? So now the application that you're deploying is an immutable unit that you build once as a container image, and then you deploy that out to all your various environments. When you have to do an update, you don't go and update all those environments. You build a new image that includes those updates, and then you deploy those images out in a rolling fashion, and, as you mentioned, you can go back if there are issues. So the notion of immutable application deployments has a lot to do with security, and it's enabled by containers. And then, obviously, you have Kubernetes and, you know, all the rest of our capabilities as part of OpenShift managing that for you. We've extended that concept to the entire platform. So Stephanie mentioned RHEL CoreOS. OpenShift has always run on RHEL. What we have done in OpenShift 4 is we've taken an immutable version of RHEL. So it's the same Red Hat Enterprise Linux that we've had for years, but now, in this latest version of RHEL, we have a new way to package and deploy it, as a RHEL CoreOS image, and that becomes part of the platform. So customers, in addition to keeping their applications up to date, need to keep their platform up to date, need to keep up with the latest Kubernetes patches, up with the latest Linux packages. What we're doing is delivering that as one platform, so when you get updates for OpenShift, they can include updates for Kubernetes, they can include updates for Linux itself, as well as all the integrated services. And again, all of this is just, you know, how you keep your applications secure: making sure you're taking care of that hygiene of managing your vulnerabilities, keeping everything patched and up to date, and ultimately ensuring security for your applications and users. >> I know I'm going a little bit over, but I have one question that I want to ask you guys, and it's a broad question about maybe a trend you see in the business. I mean, we talk a lot about cloud native, and you look at Kubernetes, and the interest in Kubernetes is off the charts. It's an area that has a lot of spending momentum. People are putting resources behind it. But, you know, really, to build these sort of modern applications, it's considered state of the art, and you see a lot of people trying to really bring that modern approach to any cloud. We've been talking about edge.
You want to bring it also on prem. And people generally associate this notion of cloud native with these kind of elite developers, right? But you're bringing it to the masses, and there are 20 million-plus software developers out there, and most, you know, with all due respect, may not be the elites of the elite. So how are you seeing this evolve in terms of re-skilling people to be able to handle and take advantage of all this, you know, cool new stuff that's coming out? >> Yeah, I can start. You know, with OpenShift, our focus from the beginning has been bringing Kubernetes to the enterprise, so we think of OpenShift as the dominant enterprise Kubernetes platform. Enterprises come in all shapes and sizes and skill sets, as you mentioned. They have unique requirements in terms of how they need to run stuff in their data center and then also bring that to production, whether it's in the data center or across the public clouds. So part of it is, you know, making sure that the technology meets the requirements, and then part of it is working the people, process, and culture to help them understand what it means to take advantage of containerization and cloud native platforms and Kubernetes. Of course, this is nothing new to Red Hat, right? This is what we did 20 years ago when we first brought Linux to the enterprise with RHEL. In essence, Kubernetes is basically distributed Linux, right? Kubernetes builds on Linux and brings it out to your cluster, to your distributed systems, across the hybrid cloud. So nothing new for Red Hat, but a lot of the same challenges apply to this new cloud native world. >> Awesome. Stephanie, we'll give you the last word. >> All right. And I think, just to touch on what Joe talked about, and Joe and I work really closely on this, right, the ability to run containers is where someone launches down this journey, because it is magical what can be done with deploying applications using container technology. We built the capabilities and the tools directly into RHEL in order to be able to build and deploy, leveraging things like Podman, directly in RHEL. And so, folks, everyone who has a RHEL subscription today can start on their container journey, start to build and deploy there, and then we work to help those skills be transferable as you move into OpenShift and Kubernetes and orchestration. So, you know, we work very closely to make sure that the skills building can be done directly on RHEL and then transfer into OpenShift, because, as Joe said, at the end of the day, it's just a different way to deploy Linux. >> You guys are doing some good work. Keep it up. And thanks so much for coming back on theCUBE. It's great to talk to you today. >> Good to see you, Dave. >> Yes, thank you. >> All right. Thank you for watching, everybody. theCUBE's coverage of KubeCon NA continues right after this.
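Two of the ideas in this exchange lend themselves to short, concrete sketches. First, the traffic management Joe attributes to OpenShift Service Mesh boils down to declarative routing policy applied by Istio. Below is a minimal, hypothetical sketch, not Red Hat's code: it assumes the official kubernetes Python client and an Istio-enabled cluster, and the service names, namespace, weights, and retry settings are invented for illustration.

```python
# Hypothetical example: declare an Istio VirtualService that splits traffic between
# a primary and a fallback backend and retries failed calls. Service names,
# namespace, weights, and retry settings are assumptions, not values from the talk.
from kubernetes import client, config

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "checkout", "namespace": "prod"},
    "spec": {
        "hosts": ["checkout"],
        "http": [{
            "retries": {"attempts": 3, "perTryTimeout": "2s"},  # retry flaky calls
            "route": [
                {"destination": {"host": "checkout-primary"}, "weight": 90},
                {"destination": {"host": "checkout-fallback"}, "weight": 10},
            ],
        }],
    },
}

config.load_kube_config()  # reads the local kubeconfig, as kubectl would
client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="prod",
    plural="virtualservices",
    body=virtual_service,
)
```

In a real mesh, the "redirect when a service goes down" behavior Joe mentions is usually handled by outlier detection in a companion DestinationRule; the weighted routes and retries here just show the general shape of the policy.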
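Second, the immutable application deployment Joe describes, where every change becomes a new container image and environments are rolled to it rather than patched in place, reduces to updating an image reference and letting Kubernetes do the rollout. Again a minimal sketch under stated assumptions: the official kubernetes Python client, a Deployment whose single container shares its name, and made-up namespace and image tags.

```python
# Hypothetical example of the immutable-image rollout pattern: deploying a new
# version means pointing the Deployment at a new image tag; "rolling back" means
# pointing it at the previous tag. Names, namespace, and tags are assumptions.
from kubernetes import client, config


def roll_to_image(name: str, namespace: str, image: str) -> None:
    """Patch the Deployment's container image; Kubernetes performs the rolling update."""
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": name, "image": image}  # strategic merge patch, matched by container name
    ]}}}}
    apps.patch_namespaced_deployment(name=name, namespace=namespace, body=patch)


if __name__ == "__main__":
    config.load_kube_config()  # or load_incluster_config() when run inside the cluster
    # Roll forward to the freshly built image...
    roll_to_image("storefront", "prod", "quay.io/example/storefront:2.4.1")
    # ...and roll back by re-applying the previous immutable tag.
    roll_to_image("storefront", "prod", "quay.io/example/storefront:2.4.0")
```

Nothing is mutated in place in either direction; every version is a distinct, addressable image, which is what makes the rollbacks Joe and Stephanie describe predictable.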
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joe | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Stephanie | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Joe Fernandez | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Lenox | ORGANIZATION | 0.99+ |
Joe Fernandes | PERSON | 0.99+ |
tens | QUANTITY | 0.99+ |
Dave | PERSON | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
Stefanie Chiras | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Cheras | PERSON | 0.99+ |
Ralph | PERSON | 0.99+ |
C two | TITLE | 0.99+ |
Lennox | PERSON | 0.99+ |
one question | QUANTITY | 0.99+ |
Ecosystem Partners | ORGANIZATION | 0.99+ |
Leighton | ORGANIZATION | 0.98+ |
two things | QUANTITY | 0.98+ |
Ford | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
one platform | QUANTITY | 0.98+ |
Read | PERSON | 0.98+ |
Red Hat Enterprise | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.97+ |
Azure | ORGANIZATION | 0.97+ |
20 years ago | DATE | 0.97+ |
first | QUANTITY | 0.97+ |
later this year | DATE | 0.97+ |
Andi | PERSON | 0.96+ |
CloudNativeCon | EVENT | 0.96+ |
DCA | ORGANIZATION | 0.96+ |
one basket | QUANTITY | 0.95+ |
Linux | TITLE | 0.95+ |
earlier this year | DATE | 0.95+ |
single stack | QUANTITY | 0.94+ |
Later this year | DATE | 0.92+ |
Michal Klaus, Ataccama
>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Welcome back to CUBE 365. I'm your host, Rebecca Knight. Today we are with Michal Klaus. He is the CEO of Ataccama. Today Ataccama has just launched generation two of Ataccama ONE, a self-driving platform for data management and data governance. We're going to do a deep dive into the generation two of Ataccama ONE. We're going to learn what it means to make data management and governance self-driving and the impact it will have on organizations. Thanks so much for joining us on theCUBE, Michal. >> Thank you, Rebecca. Thanks for having me. >> So you are a technology veteran. You've been CEO of this company for 13 years. Tell our viewers a little bit about Ataccama. >> So Ataccama was started as basically a spinoff of a professional services company. And I was part of the professional services company. We were doing data integrations, data warehousing, things like that. And on every project, we would struggle with data quality and actually what we didn't know what it was called, but it was mastering, you know, scattered data across the whole enterprises. So after several projects, we developed a little kind of utility that we would use on the projects and it seemed to be very popular with our customers. So we decided to give it a try and spin it off as a product company. And that's how Ataccama was born. That's how it all started. And... >> That's how it all started, and now today you're launching generation two of Ataccama ONE. And this is about self-driving data management and governance. I can't hear the word self-driving without thinking about Elon Musk. Can you talk a little bit about what self-driving means in this context? >> So self-driving in the car industry, it will break a major shift into individual transportation, right? People will be able to reclaim one to two hours per day, which they now spend driving, which is pretty kind of mundane, low added value activity. But that's what the self-driving cars will bring. Basically people will be free to do more creative, more fun stuff, right? And we've taken this concept on a high level and we are bringing it to data management and data governance in a similar fashion, meaning organizations and people, data people, business people, will be free from the mundane activity of finding data, trying to put it together. They will be able to use readily made let's say data product, which will be, you know, available. It will be high quality. It will be governed. So that's how we are kind of using the analogy between the car industry and the data management industry. >> So what was the problem that you were seeing in the space? Was it just the way that your data scientists were spending their time? Was it the cumbersome ways that they were trying to mine the data? What was the problem? What was the challenge that you were trying to solve here? >> So there are actually a few challenges. One challenge is basically time to value. Today, when a business decides to come up with a new product or you need a new campaign for Christmas or something like this, there is an underlying need for data product, right? And it takes weeks or months to prepare that. And that's only if you have some infrastructure, in some cases it can take even longer. And that's one big issue. You need to be able to give non-technical users a way to instantly get the data they need. 
And you don't have that in organizations, basically nowhere at the moment. So that's the time to value. The other thing is basically resources, right? You have very valuable resources, data scientists, even analysts who spend, you know, there is this kind of (indistinct), right? They spend 80% on really preparing the data, and only 20% on the value added part of their jobs. And we are getting rid of the 80% again. And last but not least what we've been seeing, and it's really painful for organizations. You have a very kind of driven business people who just want to deliver business results. They don't want to bother with, you know, "Where do I get the data? How do I do it?" And then you have rightly so people who are focused on doing things in the right way, people focus on governance in general sense, meaning, you know, we have to follow policies. We have to, when integrating data, we want to do it in the right way so that it's reusable, et cetera, et cetera. And there is a growing tension between those two views, worldviews, I would say, and it's kind of really painful, creating a lot of conflict, preventing the business people to do what they want to do fast, and preventing the people who focus on governance, keeping things in order. And again, that's what our platform is solving or actually is actually making the gap disappear completely. >> It's removing that tension that you're talking about. So how is this different from the AI and machine learning that so many other companies are investing in? >> It is and isn't different. It isn't different in one way. Many companies, you know, in data management, outside of data management, are using AI to make life easier for people and organizations. Basically the machine learning is taking part of what people needed to be doing before that. And you have that in consumer applications, you have that in data management, B2B applications. Now the huge difference is that we've taken the several disciplines, kind of sub domains of data management, namely data profiling, data cataloging, data quality management, by that, we also mean data cleansing, and data mastering, and data integration as well. So we've taken all this. We redeveloped, we had that in our platform. We redeveloped it from scratch. And that allows us basically one critical thing, which is different. If you only apply AI on the level of the individual, let's say modules or products, you will end up with broken processes. You will have, you know, augmented data profiling, augmented data cataloging, but you will still have the walls between the products, from a customer's view, it's kind of a wall between the processes or sub-processes, the domains. So the fact that we have redeveloped it, or the reason why we have redeveloped it, was to get rid of those walls, those silos, and this way we can actually automate the whole process, not just the parts of the process. That's the biggest difference. >> I definitely want to ask you about removing those silos, but I want to get back to something you were saying before, and that is this idea that you built it from scratch. That really is what sets Ataccama apart, is that you architect these things in-house, which is different from a lot of competitors. Talk a little bit about why you see that as such an advantage. >> So this has been in our DNA, kind of from day one. When we started to build the core of our product, which is let's say data processing engine, we realize from day one, that it needs to be, you know, high performance, powerful. 
It needs to support real time scenarios. And it paid off greatly because if you have a product, for example, that doesn't have the real-time capability of slapping on the real time, it's almost impossible, right? You end up with a not so good core with some added functionality. And this is how we build the product gradually, you know, around the data processing, we build the data quality, we build the data mastering, then we build a metadata core next to it. And the whole platform now basically is built on basically on top of three major underlying components. One is the data processing. One is the metadata management core. And one is actually the AI core. And this allows us to do everything that I was talking about. This allows us to automate the whole process. >> I want to ask a little bit about the silos that you were talking about, and also the tension that you were just talking about earlier in our conversation that exists between business people and the data scientists, the ones who want to make sure we're getting everything right and fidelity, and that we're paying attention to governance. And then the people who are more focused on business outcomes, particularly at this time where we're all enduring a global pandemic, which has changed everything about the way we live and the way we work. Do you think that the silos have gotten worse during this pandemic when people are working from home, working asynchronously, working remotely, and how do you think this generation two of Ataccama ONE can help ease those challenges and those struggles that so many teams are having? >> Yeah. Thank you for the question. It's kind of, it's been on my mind for almost a year now, and actually in two ways, one way is how governments, our governments, how they're dealing with the pandemic, because there, the data is also the key to everything, right? It's the critical factor there. And I have to say the governments are not doing exactly a great job, also in the way they are managing the data and governing the data, because at the end of the day, what will be needed to fight the pandemic for good is a way to predict on a very highly granular basis, what is, and what is not happening in each city, in each county, and, you know, tighten or release the measures based on that. And of course you need very good data science for that, but you also need very good data management below that to have real time granular data. So that's one kind of thing that's been a little bit frustrating for me for a long time. Now, if we look at our customers, organizations and users, what's happening there is that, of course, we all see the shift to work from home. And we also see the needs to better support cooperation between the people who are not in one place anymore, right? So on the level of, let's say the user interface, what we brought to Ataccama ONE generation two is a new way users will be interacting with the platform, basically because of the self-driving nature, the users will more or less be confirming what the platform is suggesting. That's one major shift. And the other thing is there is a kind of implicitly built-in collaboration and governance process within the platform. So we believe that this will help the whole data democratization process, emphasized now by the pandemic and work from home and all these drivers. >> So what is the impact? We hear a lot about data democratization. 
What impact do you think that will have going forward in terms of what will be driving companies, and how will that change the way employees and colleagues interact with and collaborate with each other? >> We've been hearing about digital transformation for quite a few years, all of us. And I guess, you know the joke, right? "Who is driving the digital transformation for you today? Is it CEO, COO, or CFO? No, it's COVID," right? It really accelerated transformation in ways we couldn't imagine. Now what that means is that if organizations are to succeed, they have to bring all the processes to the digital realm, and all processes means everything from the market-facing, customer-facing customer service, to all the internal processes you have to bring to the digital. What that really means is you also have to be able to give data to the people throughout the company, and you have to be able to do it in a way that's, on one hand, safe. So you need to be able to define who can do what, who can see what in the data. On the other hand, you need to have kind of the courage simply to give the data to people and let them do what they understand best, which is their local kind of part of the organization, right? Local part of the process. And that's the biggest value we think our platform is bringing to the market, meaning it will allow exactly what I was talking about: not to be afraid to give the data to the people, give high quality, instantly available data to the people, and at the same time be assured that it is safe from the governance perspective. >> So it's helping companies think about problems differently, think about potential solutions differently, but most importantly, it's empowering the employees to be able to have the data themselves, and getting back to the self-driving car example, where we don't need to worry about driving places, we can use our own time for much more value-added things in our lives. And those employees can do the much more value-added things in their jobs. >> Yes, absolutely. You're absolutely right. The digital transformation is kind of followed, or maybe led, by a change in the way organizations are managed, right? If you look at the successful, you know, digital-first organizations like the big tech, right, Google, et cetera, you can see that their organization is very flat, which is something quite different from what you have in the traditional brick and mortar companies. So I think the shift from, you know, hierarchical organization to the more flat, more decentralized way of managing things and companies needs to also be accompanied by data availability for people. And you have to empower, as you say, everyone throughout the organization. >> How do you foresee the next 12 to 24 months playing out as we all adjust to this new normal? >> Wow, that's a pretty interesting question. I won't talk too much about what I think will be happening with the pandemic, well, I will talk about it a little bit. I think we will see the waves, hopefully with the amplitudes kind of narrowing. So that's on that side. What I think we will see, let's say in the economy and in the industry, I can comment on from the data management perspective. I think organizations will have to adopt the new way of working with data, giving the data to the people, empowering the people. If you don't do it, there is, of course, some, let's say, momentum, right? When you're a large enterprise with, let's say, you know, a big customer base, a lot of contracts accumulated, it won't go away that fast.
But those who will not adapt, they will see a small, like longer gradual decline in their revenues, and their competitiveness in reality. Whereas those small and big ones who will adopt this new way of working with data, we will see them growing faster than the other ones. >> So for our viewers who want to know more about Ataccama's launch, it is www.Ataccama/selfdriving. What is next for this platform? I want you to close this out here and tell us what is next for generation two of Ataccama ONE? >> So we have just launched the platform. It is available to a limited number of customers in the beta version. The GA version is going to be available in spring, in February next year. And we will be kind of speeding up with additional releases of the platform, that will gradually make the whole suite of functionality available in the self-driving fashion. So that let's say a year from now, you will really be able to go to your browser and actually speak to the platform, speak your wish, which we call intent. We call the principle from intent to result. So for example, you'll be able to say, "I need all my customer and product ownership data as an API which is updated every two hours." And without having to do anything else, you will be able to get that API, which means really complex thing, right? You need to be able to map the sources, translate the data, transform it, populate the API, basically build the integration and governance pipeline. So we think we will get to this point, about the same time Elon Musk will actually deliver the full self-driving capability to the cars. >> It's an exciting future that you're painting right now. >> We think so too. >> Excellent, Michal Klaus, thank you so much for joining us today. >> Thank you, Rebecca. >> Stay tuned for more of CUBE 365. >> Thank you. (calm music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Michal Klaus | PERSON | 0.99+ |
Rebecca | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Ataccama | ORGANIZATION | 0.99+ |
13 years | QUANTITY | 0.99+ |
Elon Musk | PERSON | 0.99+ |
Elon Musk | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
Michal | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
two ways | QUANTITY | 0.99+ |
Christmas | EVENT | 0.99+ |
Today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
pandemic | EVENT | 0.99+ |
One challenge | QUANTITY | 0.99+ |
February next year | DATE | 0.99+ |
One | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one big issue | QUANTITY | 0.98+ |
www.Ataccama | OTHER | 0.98+ |
one way | QUANTITY | 0.98+ |
Ataccama | PERSON | 0.97+ |
20% | QUANTITY | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
spring | DATE | 0.95+ |
two | QUANTITY | 0.94+ |
one critical thing | QUANTITY | 0.94+ |
each city | QUANTITY | 0.93+ |
24 months | QUANTITY | 0.93+ |
two hours per day | QUANTITY | 0.93+ |
each county | QUANTITY | 0.91+ |
two views | QUANTITY | 0.88+ |
one place | QUANTITY | 0.87+ |
12 | QUANTITY | 0.84+ |
generation two | QUANTITY | 0.83+ |
three major underlying components | QUANTITY | 0.82+ |
waves | EVENT | 0.79+ |
first | QUANTITY | 0.78+ |
one major shift | QUANTITY | 0.78+ |
one kind | QUANTITY | 0.77+ |
every two hours | QUANTITY | 0.74+ |
almost a year | QUANTITY | 0.73+ |
day one | QUANTITY | 0.73+ |
ONE generation two | COMMERCIAL_ITEM | 0.71+ |
Ataccama ONE | TITLE | 0.71+ |
selfdriving | OTHER | 0.71+ |
CUBE 365 | ORGANIZATION | 0.68+ |
ONE | TITLE | 0.54+ |
Ataccama | LOCATION | 0.54+ |
a year from | DATE | 0.49+ |
CUBE 365 | TITLE | 0.47+ |
ONE | COMMERCIAL_ITEM | 0.46+ |
CUBE | ORGANIZATION | 0.39+ |
ONE | QUANTITY | 0.35+ |