LaDavia Drane, AWS | International Women's Day


 

(bright music) >> Hello, everyone. Welcome to theCUBE special presentation of International Women's Day. I'm John Furrier, host of theCUBE. This is a global special open program we're doing every year. We're going to continue it every quarter. We're going to do more and more content, getting the voices out there and celebrating the diversity. And I'm excited to have an amazing guest here, LaDavia Drane, who's the head of Global Inclusion Diversity & Equity at AWS. LaDavia, we tried to get you in on AWS re:Invent, and you were super busy. So much going on. The industry has seen the light. They're seeing everything going on, and the numbers are up, but still not there, and getting better. This is your passion, our passion, a shared passion. Tell us about your situation, your career, how you got into it. What's your story? >> Yeah. Well, John, first of all, thank you so much for having me. I'm glad that we finally got this opportunity to speak. How did I get into this work? Wow, you know, I'm doing the work that I love to do, number one. It's always been my passion to be a voice for the voiceless, to create a seat at the table for folks that may not be welcome to certain tables. And so, it's been something that's been kind of the theme of my entire professional career. I started off as a lawyer, went to Capitol Hill, was able to do some work with members of Congress, both women members of Congress, but also, minority members of Congress in the US Congress. And then, that just morphed into what I think has become a career for me in inclusion, diversity, and equity. I decided to join Amazon because I could tell that it's a company that was ready to take it to the next level in this space. And sure enough, that's been my experience here. So now, I'm in it, I'm in it with two feet, doing great work. And yeah, yeah, it's almost a full circle moment for me. >> It's really an interesting background. You have a background in public policy. You mentioned Capitol Hill. That's awesome. DC kind of moves slow, but it's a complicated machinery there. Obviously, as you know, navigating that, Amazon grew significantly. We've been at every re:Invent with theCUBE since 2013, like just one year. I watched Amazon grow, and they've become very fast and also complicated, like, I won't say like Capitol, 'cause that's very slow, but Amazon's complicated. AWS is in the realm of powering a generation of public policy. We had the JEDI contract controversy, all kinds of new emerging challenges. This pivot to tech was great timing because one, (laughs) Amazon needed it because they were growing so fast in a male dominated world, but also, their business is having real impact on the public. >> That's right, that's right. And when you say the public, I'll just call it out. I think that there's a full spectrum of diversity and we work backwards from our customers, and our customers are diverse. And so, I really do believe, I agree that I came to the right place at the right time. And yeah, we move fast and we're also moving fast in this space of making sure that both internally and externally, we're doing the things that we need to do in order to reach a diverse population. >> You know, I've noticed how Amazon's changed from the culture, male dominated culture. Let's face it, it was. And now, I've seen over the past five years, specifically go back five, is kind of in my mental model, just the growth of female leaders, it's been impressive. And there was some controversy. They were criticized publicly for this. 
And we said a few things as well in those, like around 2014. How is Amazon ensuring and continuing to get the female employees feel represented and empowered? What's going on there? What programs do you have? Because it's not just doing it, it's continuing it, right? And 'cause there is a lot more to do. I mean, the half (laughs) the products are digital now for everybody. It's not just one population. (laughs) Everyone uses digital products. What is Amazon doing now to keep it going? >> Well, I'll tell you, John, it's important for me to note that while we've made great progress, there's still more that can be done. I am very happy to be able to report that we have big women leaders. We have leaders running huge parts of our business, which includes storage, customer experience, industries and business development. And yes, we have all types of programs. And I should say that, instead of calling it programs, I'm going to call it strategic initiatives, right? We are very thoughtful about how we engage our women. And not only how we hire, attract women, but how we retain our women. We do that through engagement, groups like our affinity groups. So Women at Amazon is an affinity group. Women in finance, women in engineering. Just recently, I helped our Black employee network women's group launch, BEN Women. And so you have these communities of women who come together, support and mentor one another. We have what we call Amazon Circles. And so these are safe spaces where women can come together and can have conversations, where we are able to connect mentors and sponsors. And we're seeing that it's making all the difference in the world for our women. And we see that through what we call Connections. We have an inclusion sentiment tracker. So we're able to ask questions every single day and we get a response from our employees and we can see how are our women feeling, how are they feeling included at work? Are they feeling as though they can be who they are authentically at Amazon? And so, again, there's more work that needs to be done. But I will say that as I look at the data, as I'm talking to engaging women, I really do believe that we're on the right path. >> LaDavia, talk about the urgent needs of the women that you're hearing from the Circles. That's a great program. The affinity circles, the groups are great. Now, you have the groups, what are you hearing? What are the needs of the women? >> So, John, I'll just go a little bit into what's becoming a conversation around equity. So, initially I think we talked a lot about equality, right? We wanted everyone to have fair access to the same things. But now, women are looking for equity. We're talking about not just leveling the playing field, which is equality, but don't give me the same as you give everyone else. Instead, recognize that I may have different circumstances, I may have different needs. And give me what I need, right? Give me what I need, not just the same as everyone else. And so, I love seeing women evolve in this way, and being very specific about what they need more than, or what's different than what a man may have in the same situation because their circumstances are not always the same and we should treat them as such. >> Yeah, I think that's a great equity point. I interviewed a woman here, ex-Amazonian, she's now a GSI, Global System Integrator. She's a single mom. And she said remote work brought her equity because people on her team realized that she was a single mom. 
And it wasn't the, how do you balance life, it was her reality. And what happened was, she had more empathy with the team because of the new work environment. So, I think this is an important point to call out, that equity, because that really makes things smoother in terms of the interactions, not the assumptions, you have to be, you know, always the same as a man. So, how does that go? What's the current... How would you characterize the progress in that area right now? >> I believe that employers are just getting better at this. It's just like you said, with the hybrid being the norm now, you have an employer who is looking at people differently based on what they need. And it's not a problem, it's not an issue that a single mother says, "Well, I need to be able to leave by 5:00 PM." I think that employers now, and Amazon is right there along with other employers, are starting just to evolve that muscle of meeting the needs. People don't have to feel different. You don't have to feel as though there's some kind of of special circumstance for me. Instead, it's something that we, as employers, we're asking for. And we want to meet those needs that are different in some situations. >> I know you guys do a lot of support of women outside of AWS, and I had a story I recorded for the program. This woman, she talked about how she was a nerd from day one. She's a tomboy. They called her a tomboy, but she always loved robotics. And she ended up getting dual engineering degrees. And she talked about how she didn't run away and there was many signals to her not to go. And she powered through, at that time, and during her generation, that was tough. And she was successful. How are you guys taking the education to STEM, to women, at young ages? Because we don't want to turn people away from tech if they have the natural affinity towards it. And not everyone is going to be, as, you know, (laughs) strong, if you will. And she was a bulldog, she was great. She's just like, "I'm going for it. I love it so much." But not everyone's like that. So, this is an educational thing. How do you expose technology, STEM for instance, and making it more accessible, no stigma, all that stuff? I mean, I think we've come a long way, but still. >> What I love about women is we don't just focus on ourselves. We do a very good job of thinking about the generation that's coming after us. And so, I think you will see that very clearly with our women Amazonians. I'll talk about three different examples of ways that Amazonian women in particular, and there are men that are helping out, but I'll talk about the women in particular that are leading in this area. On my team, in the Inclusion, Diversity & Equity team, we have a program that we run in Ghana where we meet basic STEM needs for a afterschool program. So we've taken this small program, and we've turned their summer camp into this immersion, where girls and boys, we do focus on the girls, can come and be completely immersed in STEM. And when we provide the technology that they need, so that they'll be able to have access to this whole new world of STEM. Another program which is run out of our AWS In Communities team, called AWS Girls' Tech Day. All across the world where we have data centers, we're running these Girls' Tech Day. They're basically designed to educate, empower and inspire girls to pursue a career in tech. Really, really exciting. I was at the Girls' Tech Day here recently in Columbus, Ohio, and I got to tell you, it was the highlight of my year. 
And then I'll talk a little bit about one more, it's called AWS GetIT, and it's been around for a while. So this is a program, again, it's a global program, it's actually across 13 countries. And it allows girls to explore cloud technology, in particular, and to use it to solve real world problems. Those are just three examples. There are many more. There are actually women Amazonians that create these opportunities off the side of their desk in they're local communities. We, in Inclusion, Diversity & Equity, we fund programs so that women can do this work, this STEM work in their own local communities. But those are just three examples of some of the things that our Amazonians are doing to bring girls along, to make sure that the next generation is set up and that the next generation knows that STEM is accessible for girls. >> I'm a huge believer. I think that's amazing. That's great inspiration. We need more of that. It's awesome. And why wouldn't we spread it around? I want to get to the equity piece, that's the theme for this year's IWD. But before that, getting that segment, I want to ask you about your title, and the choice of words and the sequence. Okay, Global Inclusion, Diversity, Equity. Not diversity only. Inclusion is first. We've had this debate on theCUBE many years now, a few years back, it started with, "Inclusion is before diversity," "No, diversity before inclusion, equity." And so there's always been a debate (laughs) around the choice of words and their order. What's your opinion? What's your reaction to that? Is it by design? And does inclusion come before diversity, or am I just reading it to it? >> Inclusion doesn't necessarily come before diversity. (John laughs) It doesn't necessarily come before equity. Equity isn't last, but we do lead with inclusion in AWS. And that is very important to us, right? And thank you for giving me the opportunity to talk a little bit about it. We lead with inclusion because we want to make sure that every single one of our builders know that they have a place in this work. And so it's important that we don't only focus on hiring, right? Diversity, even though there are many, many different levels and spectrums to diversity. Inclusion, if you start there, I believe that it's what it takes to make sure that you have a workplace where everyone knows you're included here, you belong here, we want you to stay here. And so, it helps as we go after diversity. And we want all types of people to be a part of our workforce, but we want you to stay. And inclusion is the thing. It's the thing that I believe makes sure that people stay because they feel included. So we lead with inclusion. Doesn't mean that we put diversity or equity second or third, but we are proud to lead with inclusion. >> Great description. That was fabulous. Totally agree. Double click, thumbs up. Now let's get into the theme. Embracing equity, 'cause this is a term, it's in quotes. What does that mean to you? You mentioned it earlier, I love it. What does embrace equity mean? >> Yeah. You know, I do believe that when people think about equity, especially non-women think about equity, it's kind of scary. It's, "Am I going to give away what I have right now to make space for someone else?" But that's not what equity means. And so I think that it's first important that we just educate ourselves about what equity really is. It doesn't mean that someone's going to take your spot, right? It doesn't mean that the pie, let's use that analogy, gets smaller. 
The pie gets bigger, right? >> John: Mm-hmm. >> And everyone is able to have their piece of the pie. And so, I do believe that I love that IWD, International Women's Day is leading with embracing equity because we're going to the heart of the matter when we go to equity, we're going to the place where most people feel most challenged, and challenging people to think about equity and what it means and how they can contribute to equity and thus, embrace equity. >> Yeah, I love it. And the advice that you have for tech professionals out there on this, how do you advise other groups? 'Cause you guys are doing a lot of great work. Other organizations are catching up. What would be your advice to folks who are working on this equity challenge to reach gender equity and other equitable strategic initiatives? And everyone's working on this. Sustainability and equity are two big projects we're seeing in every single company right now. >> Yeah, yeah. I will say that I believe that AWS has proven that equity and going after equity does work. Embracing equity does work. One example I would point to is our AWS Impact Accelerator program. I mean, we provide 30 million for early stage startups led by women, Black founders, Latino founders, LGBTQ+ founders, to help them scale their business. That's equity. That's giving them what they need. >> John: Yeah. >> What they need is they need access to capital. And so, what I'd say to companies who are looking at going into the space of equity, I would say embrace it. Embrace it. Look at examples of what companies like AWS is doing around it and embrace it because I do believe that the tech industry will be better when we're comfortable with embracing equity and creating strategic initiatives so that we could expand equity and make it something that's just, it's just normal. It's the normal course of business. It's what we do. It's what we expect of ourselves and our employees. >> LaDavia, you're amazing. Thank you for spending the time. My final couple questions really more around you. Capitol Hill, DC, Amazon Global Head of Inclusion, Diversity & Equity, as you look at making change, being a change agent, being a leader, is really kind of similar, right? You've got DC, it's hard to make change there, but if you do it, it works, right? (laughs) If you don't, you're on the side of the road. So, as you're in your job now, what are you most excited about? What's on your agenda? What's your focus? >> Yeah, so I'm most excited about the potential of what we can get done, not just for builders that are currently in our seats, but for builders in the future. I tend to focus on that little girl. I don't know her, I don't know where she lives. I don't know how old she is now, but she's somewhere in the world, and I want her to grow up and for there to be no question that she has access to AWS, that she can be an employee at AWS. And so, that's where I tend to center, I center on the future. I try to build now, for what's to come, to make sure that this place is accessible for that little girl. >> You know, I've always been saying for a long time, the software is eating the world, now you got digital transformation, business transformation. And that's not a male only, or certain category, it's everybody. And so, software that's being built, and the systems that are being built, have to have first principles. Andy Jassy is very strong on this. He's been publicly saying, when trying to get pinned down about certain books in the bookstore that might offend another group. 
And he's like, "Look, we have first principles. First principles is a big part of leading." What's your reaction to that? How would you talk to another professional and say, "Hey," you know this, "How do I make the right call? Am I doing the wrong thing here? And I might say the wrong thing here." And is it first principles based? What's the guardrails? How do you keep that in check? How would you advise someone as they go forward and lean in to drive some of the change that we're talking about today? >> Yeah, I think as leaders, we have to trust ourselves. And Andy actually, is a great example. When I came in as head of ID&E for AWS, he was our CEO here at AWS. And I saw how he authentically spoke from his heart about these issues. And it just aligned with who he is personally, his own personal principles. And I do believe that leaders should be free to do just that. Not to be scripted, but to lead with their principles. And so, I think Andy's actually a great example. I believe that I am the professional in this space at this company that I am today because of the example that Andy set. >> Yeah, you guys do a great job, LaDavia. What's next for you? >> What's next. >> World tour, you traveling around? What's on your plate these days? Share a little bit about what you're currently working on. >> Yeah, so you know, at Amazon, we're always diving deep. We're always diving deep, we're looking for root cause, working very hard to look around corners, and trying to build now for what's to come in the future. And so I'll continue to do that. Of course, we're always planning and working towards re:Invent, so hopefully, John, I'll see you at re:Invent this December. But we have some great things happening throughout the year, and we'll continue to... I think it's really important, as opposed to looking to do new things, to just continue to flex the same muscles and to show that we can be very, very focused and intentional about doing the same things over and over each year to just become better and better at this work in this space, and to show our employees that we're committed for the long haul. So of course, there'll be new things on the horizon, but what I can say, especially to Amazonians, is we're going to continue to stay focused, and continue to get at this issue, and doing this issue of inclusion, diversity and equity, and continue to do the things that work and make sure that our culture evolves at the same time. >> LaDavia, thank you so much. I'll give you the final word. Just share some of the big projects you guys are working on so people can know about them, your strategic initiatives. Take a minute to plug some of the major projects and things that are going on that people either know about or should know about, or need to know about. Take a minute to share some of the big things you guys got going on, or most of the things. >> So, one big thing that I would like to focus on, focus my time on, is what we call our Innovation Fund. This is actually how we scale our work and we meet the community's needs by providing micro grants to our employees so our employees can go out into the world and sponsor all types of different activities, create activities in their local communities, or throughout the regions. And so, that's probably one thing that I would like to focus on just because number one, it's our employees, it's how we scale this work, and it's how we meet our community's needs in a very global way. 
And so, thank you John, for the opportunity to talk a bit about what we're up to here at Amazon Web Services. But it's just important to me, that I end with our employees because for me, that's what's most important. And they're doing some awesome work through our Innovation Fund. >> Inclusion makes the workplace great. Empowerment, with that kind of program, is amazing. LaDavia Drane, thank you so much. Head of Global Inclusion and Diversity & Equity at AWS. This is International Women's Day. I'm John Furrier with theCUBE. Thanks for watching and stay with us for more great interviews and people and what they're working on. Thanks for watching. (bright music)

Published Date : Mar 2 2023



HPE Compute Engineered for your Hybrid World - Transform Your Compute Management Experience


 

>> Welcome everyone to "theCUBE's" coverage of "Compute engineered for your hybrid world," sponsored by HP and Intel. Today we're going to going to discuss how to transform your compute management experience with the new 4th Gen Intel Xeon scalable processors. Hello, I'm John Furrier, host of "theCUBE," and my guests today are Chinmay Ashok, director cloud engineering at Intel, and Koichiro Nakajima, principal product manager, compute at cloud services with HPE. Gentlemen, thanks for coming on this segment, "Transform your compute management experience." >> Thanks for having us. >> Great topic. A lot of people want to see that system management one pane of glass and want to manage everything. This is a really important topic and they started getting into distributed computing and cloud and hybrid. This is a major discussion point. What are some of the major trends you guys see in the system management space? >> Yeah, so system management is trying to help user manage their IT infrastructure effectively and efficiently. So, the system management is evolving along with the IT infrastructures which is trying to accommodate market trends. We have been observing the continuous trends like digital transformation, edge computing, and exponential data growth never stops. AI, machine learning, deep learning, cloud native applications, hybrid cloud, multi-cloud strategies. There's a lot of things going on. Also, COVID-19 pandemic has changed the way we live and work. These are all the things that, given a profound implication to the system design architectures that system management has to consider. Also, security has always been the very important topic, but it has become more important than ever before. Some of the research is saying that the cyber criminals becoming like a $10.5 trillion per year. We all do our efforts on the solution provider size and on the user side, but still cyber criminals are growing 15% year by year. So, with all this kind of thing in the mind, system management really have to evolve in a way to help user efficiently and effectively manage their more and more distributed IT infrastructure. >> Chinmay, what's your thoughts on the major trends in system management space? >> Thanks, John, Yeah, to add to what Koichiro said, I think especially with the view of the system or the service provider, as he was saying, is changing, is evolving over the last few years, especially with the advent of the cloud and the different types of cloud usage models like platform as a service, on-premises, of course, infrastructure is a service, but the traditional software as a service implies that the service provider needs a different view of the system and the context in which we need the CPU vendor, or the platform vendor needs to provide that, is changing. That includes both in-band telemetry being able to monitor what is going on on the system through traditional in-band methods, but also the advent of the out-of-band methods to do this without end user disruption is a key element to the enhancements that our customers are expecting from us as we deploy CPUs and platforms. >> That's great. You know what I love about this discussion is we had multiple generation enhancements, 4th Gen Xeon, 11th Gen ProLiant, iLOs going to come up with got another generation increase on that one. We'll get into that on the next segment, but while we're here, what is iLO? Can you guys define what that is and why it's important? >> Yeah, great question. 
Real quick, so HPE Integrated Lights-Out is the formal name of the product and we tend to call it as a iLO for short. iLO is HPE's BMC. If you're familiar with this topic it's a Baseboard Management Controller. If not, this is a small computer on the server motherboard and it runs independently from host CPU and the operating system. So, that's why it's named as Lights-Out. Now what can you do with the iLO? iLO really helps a user manage and use and monitor the server remotely, securely, throughout its life from the deployment to the retirement. So, you can really do things like, you know, turning a server power on, off, install operating system, access to IT, firmware update, and when you decide to retire server, you can completely wipe the data off that server so then it's ready to trash. iLO is really a best solution to manage a single server, but when you try to manage hundreds or thousand of servers in a larger scale environment, then managing server one by one by one through the iLO is not practical. So, HPE has two options. One of them is a HPE OneView. OneView is a best solution to manage a very complex, on-prem IT infrastructure that involves a thousand of servers as well as the other IT elements like Fibre Channel storage through the storage area network and so on. Another option that we have is HPE GreenLake for Compute Ops Management. This is our latest, greatest product that we recently launched and this is a best solution to manage a distributed IT environment with multiple edge points or multiple clouds. And I recently involved in the customer conversation about the Compute Ops Management and with the hotel chain, global hotel chain with 9,000 locations worldwide and each of the location only have like a couple of servers to manage, but combined it's, you know, 27,000 servers and over the 9,000 locations, we didn't really have a great answer for that kind of environment before, but now HPE has GreenLake for Compute Ops Management for also deal with, you know, such kind of environment. >> Awesome. We're going to do a big dive on iLO in the next segment, but Chinmay, before we end this segment, what is PMT? >> Sure, so yeah, with the introduction of the 4th Gen Intel Xeon scalable processor, we of course introduce many new technologies like PCI Gen 5, DDR5, et cetera. And these are very key to general system provision, if you will. But with all of these new technologies come new sources of telemetry that the service provider now has to manage, right? So, the PMT is a technology called Platform Monitoring Technology. That is a capability that we introduced with the Intel 4th Gen Xeon scalable processor that allows the service provider to monitor all of these sources of telemetry within the system, within the system on chip, the CPU SoC, in all of these contexts that we talked about, like the hybrid cloud and cloud infrastructure as a service or platform as a service, but both in their in-band traditional telemetry collection models, but also out-of-band collection models such as the ones that Koichiro was talking about through the BMC et cetera. So, this is a key enhancement that we believe that takes the Intel product line closer to what the service providers require for managing their end user experience. >> Awesome, well thanks so much for spending the time in this segment. We're going to take a quick break, we're going to come back and we're going to discuss more what's new with Gen 11 and iLO 6. 
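
For readers who want a concrete picture of the Redfish-style automation and out-of-band monitoring described above, here is a minimal sketch of querying a server through its BMC (such as iLO) with Python. The BMC address, credentials, and resource layout are placeholder assumptions based on the generic DMTF Redfish schema, not an HPE-specific API, and a given iLO may expose slightly different paths.

```python
# Minimal sketch: read power state, health, and temperatures out-of-band
# through a BMC (e.g. an iLO) via the generic DMTF Redfish REST API.
# The host, credentials, and exact resource layout are assumptions.
import requests

BMC = "https://bmc.example.com"   # placeholder BMC / iLO address
AUTH = ("admin", "password")      # use a real credential store in practice

def get(path):
    # Basic-auth GET against the Redfish service; verify=False is lab-only
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

# Walk the standard Systems collection instead of hard-coding a system ID
for member in get("/redfish/v1/Systems").get("Members", []):
    system = get(member["@odata.id"])
    print(system.get("Model"), system.get("PowerState"),
          system.get("Status", {}).get("Health"))

# Thermal telemetry is exposed under the Chassis resources on most BMCs
for member in get("/redfish/v1/Chassis").get("Members", []):
    thermal = get(member["@odata.id"] + "/Thermal")
    for sensor in thermal.get("Temperatures", []):
        print(sensor.get("Name"), sensor.get("ReadingCelsius"))
```

Fleet tools such as HPE OneView or GreenLake Compute Ops Management do this kind of collection across thousands of servers, so a one-off script like this is mainly useful for a handful of machines or for experimenting with the API.
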
You're watching "theCUBE," the leader in high tech enterprise coverage. We'll be right back. (light music) Welcome back. We're continuing the coverage of "theCUBE's" coverage of compute engineered for your hybrid world. I'm John Furrier, I'm joined by Chinmay Ashok who's from Intel and Koichiro Nakajima with HPE. We're going to dive deeper into transforming your compute management experience with 4th Gen Intel Xeon scalable processors and HP ProLiant Gen11. Okay, let's get into it. We want to talk about Gen11. What's new with Gen11? What's new with iLO 6? So, NexGen increases in performance capabilities. What's new, what's new at Gen11 and iLO 6 let's go. >> Yeah, iLO 6 accommodates a lot of new features and the latest, greatest technology advancements like a new generation CPUs, DDR5 memories, PCI Gen 5, GPGPUs, SmartNICs. There's a lot of great feature functions. So, it's an iLO, make sure that supports all the use cases that associate with those latest, greatest advancements. For instance, like you know, some of the higher thermal design point CPU SKUs that requires a liquid cooling. We all support those kind of things. And also iLO6 accommodates latest, greatest industry standard system management, standard specifications, for instance, like an DMTF, TLDN, DMTF, RDE, SPDM. And what are these means for the iLO6 and Gen11? iLO6 really offers the greatest manageability and monitoring user experiences as well as the greatest automation through the Redfish APIs. >> Chinmay, what's your thoughts on the Gen11 and iLO6? You're at Intel, you're enabling all this innovation. >> Yeah. >> What's the new features? >> Yeah, thanks John. Yeah, so yeah, to add to what Koichiro said, I think with the introduction of Gen11, 4th Gen Intel Xeon scalable processor, we have all of these rich new feature sets, right? With the DDR5, PCI Gen5, liquid cooling, et cetera. And then all of these new accelerators for various specific workloads that customers can use using this processor. So, as we were discussing previously, what this brings is all of these different sources of telemetry, right? So, our sources of data that the system provider or the service provider then needs to utilize to manage the compute experience for their end user. And so, what's new from that perspective is Intel realized that these new different sources of telemetry and the new mechanisms by which the service provider has to extract this telemetry required us to fundamentally think about how we provide the telemetry experience to the service provider. And that meant extending our existing best-in-class, in-band telemetry capabilities that we have today already built into in market Intel processors. But now, extending that with the introduction of the PMT, the Platform Monitoring Technology, that allows us to expand on that in-band telemetry, but also include all of these new sources of telemetry data through all of these new accelerators through the new features like PCI Gen5, DDR5, et cetera, but also bring in that out-of-band telemetry management experience. And so, I think that's a key innovation here, helping prepare for the world that the cloud is enabling. >> It's interesting, you know, Koichiro you had mentioned on the previous segment, COVID-19, we all know the impact of how that changed, how IT is managed, you know, all of a sudden remote work, right? So, as you have cloud go to hybrid, now we got the edge coming, we're talking about a distributed computing environment, we got telemetry, you got management. 
This is a huge shift and it's happening super fast. What's the Gen11 iLO6 mean for architects as they start to look at going beyond hybrid and going to the edge, you're going to need all this telemetry. What's the impact? Can you guys just riff and share your thoughts on what this means for that kind of NexGen cloud that we see coming on, which is essentially distributed computing. >> Yeah, that's a great topic to discuss. So, there's a couple of the things. Really, to make sure those remote environment and also the management distributed IT environments, the system management has to reach across the remote location, across the internet connections, and the connectivities. So, the system management protocol, for instance, like traditionally IPMI or SNMP, or those things, got to be modernized into more restful API and those modern integration friendly to the modern tool chains. So, we're investing on those like Redfish APIs and also again, the security becomes paramount importance because those are exposed to the bad people to snoop and trying to do some bad thing like man-in-the-middle attacks, things like that. So we really, you know, focus on the security side on the two aspects on the iLO6 and Gen11. One other thing is we continue our industry unique silicon root of trust technology. So, that one is fortunate platform making sure the platform firmware, only the authentic and legitimate image of the firmware can run on HP server. And when you check in, validating the firmware images, the root of the trust reside in the silicon. So, no one can change it. Even the bad people trying to change the root of trust, it's bound in the chips so you cannot really change. And that's why, even bad people trying to compromise, you know, install compromise the firmware image on the HPE servers, you cannot do that. Another thing is we're making a lot of enhancements to make sure security on board our HP server into your network or onto a services like a GreenLake. Give you a couple of example, for instance, like a IDevID, Initial Device ID. That one is conforming to IEEE 802.1AR and it's immutable so no one can change it. And by using the IDevID, you can really identify you are not onboarding a rogue server or unknown server, but the server that you want to onboard, right? It's absolutely important. Another thing is like platform certificate. Platform certificate really is the measurement of the configuration. So again, this is a great feature that makes sure you receive a server from the factory and no one during the transportation touch the server and alter the configuration. >> Chinmay, what's your reaction to this new distributed NextGen cloud? You got data, security, edge, move the compute to the data, don't move the data around. These are big conversations. >> Yeah, great question, John. I think this is an important thing to consider for the end user, the service provider in all of these contexts, right? I think Koichiro mentioned some of these key elements that go into as we develop and design these new products. But for example, from a security perspective, we introduce the trust domain extensions, TDX feature, for confidential computing in Intel 4th Generation Xeon scalable processors. And that enables the isolation of user workloads in these cloud environments, et cetera. 
But again, going back to the point Koichiro was making where if you go to the edge, you go to the cloud and then have the edge connect to the cloud you have independent networks for system management, independent networks for user data, et cetera. So, you need the ability to create that isolation. All of this telemetry data that needs to be isolated from the user, but used by the service provider to provide the best experience. All of these are built on the foundations of technologies such as TDX, PMT, iLO6, et cetera. >> Great stuff, gentlemen. Well, we have a lot more to discuss on our next segment. We're going to take a break here before wrapping up. We'll be right back with more. You're watching "theCUBE," the leader in high tech coverage. (light music) Okay, welcome back here, on "theCUBE's" coverage of "Compute engineered for your hybrid world." I'm John Furrier, host of the Cube. We're wrapping up our discussion here on transforming compute management experience with 4th Gen Intel Xeon scalable processors and obviously HPE ProLiant Gen11. Gentlemen, welcome back. Let's get into the takeaways for this discussion. Obviously, systems management has been around for a while, but transforming that experience on the management side is super important as the environment just radically changing for the better. What are some of the key takeaways for the audience watching here that they should put into their kind of tickler file and/or put on their to-do list to keep an eye on? >> Yeah, so Gen11 and iLO6 offers the latest, greatest technologies with new generation CPUs, DDR5, PCI Gen5, and so on and on. There's a lot of things in there and also iLO6 is the most mature version of iLO and it offers the best manageability and security. On top of iLO, HP offers the best-of-breed management options like HP OneView and Compute Ops Management. It's really a lot of the things that help user achieve a lot of the things regardless of the use case like edge computing, or distributed IT, or hybrid strategy and so on and on. And you could also have a great system management that you can unleash all the full potential of latest, greatest technology. >> Chinmay, what's your thoughts on the key takeaways? Obviously as the world's changing, more gen chips are coming out, specialized workloads, performance. I mean, I've never met anyone that says they want to run on slower infrastructure. I mean, come on, performance matters. >> Yes, no, it definitely, I think one of the key things I would say is yes, with Gen11, Intel 4th Gen scalable we're introducing all of these technologies, but I think one of the key things that has grown over the last few years is the view of the system provider, the abstraction that's needed, right? Like the end user today is migrating a lot of what they're traditionally used to from a physical compute perspective to the cloud. Everything goes to the cloud and when that happens there's a lot of just the experience that the end user sees, but everything underneath is abstracted away and then managed by the system provider, right? So we at Intel, and of course, our partners at HP, we have spent a lot of time figuring out what are the best sets of features that provide that best system management experience that allow for that abstraction to work seamlessly without the end user noticing? And I think from that perspective, the 4th Gen Intel Xeon scalable processors is so far the best Intel product that we have introduced that is prepared for that type of abstraction. 
>> So, I'm going to put my customer hat on for a second. I'll ask you both. What's in it for me? I'm the customer. What's in it for me? What's the benefit to me? What does this all mean to me? What's my win? >> Yeah, I can start there. I think the key thing here is that when we create capabilities that allow you to build the best cloud, at the end of the day that efficiency, that performance, all of that translates to a better experience for the consumer, right? So, as the service provider is able to have all of these myriad capabilities to use and choose from and then manage the system experience, what that implies is that the end user sees a seamless experience as they go from one application to another as they go about their daily lives. >> Koichiro, what's your thoughts on what's in it for me? You guys got a lot of engineering going on in Gen11, every gen increase always is a step function and increase of value. What's in it for me? What do I care? What's in it for me? I'm the customer. >> Alright. Yeah, so I fully agree with Chinmay's point. You know, he lays out the all the good points, right? Again, you know what the Gen11 and iLO6 offer all the latest, greatest features and all the technology and advancements are packed in the Gen11 platform and iLO6 unleash all full potentials for those benefits. And things are really dynamic in today's world and IT system also going to be agile and the system management get really far, to the point like we never imagine what the system management can do in the past. For instance, the managing on-prem devices across multiple locations from a single point, like a single pane of glass on the cloud management system, management on the cloud, that's what really the Compute Ops Management that HP offers. It's all new and it's really help customers unleash full potential of the gear and their investment and provide the best TCO and ROIs, right? I'm very excited that all the things that all the teams have worked for the multiple years have finally come to their life and to the public. And I can't really wait to see our customers start putting their hands on and enjoy the benefit of the latest, greatest offerings. >> Yeah, 4th Gen Xeon, Gen11 ProLiant, I mean, all the things coming together, accelerators, more cores. You got data, you got compute, and you got now this idea of security, I mean, you got hitting all the points, data and security big features here, right? Data being computed in a way with Gen4 and Gen11. This is like the big theme, data security, kind of the big part of the core here in this announcement, in this relationship. >> Absolutely. I believe, I think the key thing these new generations of processors enable is new types of compute, which implies more types of data, and hence, with more types of data, more types of compute. You have more types of system management, more differentiation that the service provider has to then deal with, the disaggregation that they have to deal with. So yes, absolutely this is, I think exciting times for end users, but also for new frontiers for service providers to go tackle. And we believe that the features that we're introducing with this CPU and this platform will enable them to do so. >> Well Chinmay thank you so much for sharing your Intel perspective, Koichiro with HPE. Congratulations on all that hard work and engineering coming together. Bearing fruit, as you said, Koichiro, this is an exciting time. And again, keep moving the needle. 
This is an important inflection point in the industry and now more than ever this compute is needed and this kind of specialization's all awesome. So, congratulations and participating in the "Transforming your compute management experience" segment. >> Thank you very much. >> Okay. I'm John Furrier with "theCUBE." You're watching the "Compute Engineered for your Hybrid World Series" sponsored by HP and Intel. Thanks for watching. (light music)

Published Date : Dec 27 2022



Omri Gazitt, Aserto | KubeCon + CloudNative Con NA 2022


 

>>Hey guys and girls, welcome back to Motor City, Lisa Martin here with John Furrier on the Cube's third day of coverage of KubeCon + CloudNativeCon North America. John, we've had some great conversations over the last two and a half days. We've been talking about identity and security management as a critical need for enterprises within the cloud native space. We're gonna have another quick conversation >>On that. Yeah, we got a great segment coming up from someone who's been in the industry, a long time expert, running a great company. Now it's gonna be one of those pieces that fits into what we call super cloud. Others are calling cloud operating system. Some are calling just Cloud 2.0, 3.0. But there's definitely a major trend happening around how cloud is going Next generation. We've been covering it. So this segment should be >>Great. Let's unpack those trends. One of our alumni is back with us, Omri Gazitt, co-founder and CEO of Aserto. Omri. Great to have you back on the >>Cube. Thank you. Great to be here. >>So identity moved to the cloud, access authorization did not. Talk to us about why you founded Aserto, what you guys are doing and how you're flipping that script. >>Yeah, so back 15 years ago, I helped start Azure at Microsoft. You know, one of the first few folks that you know, really focused on enterprise services within the Azure family. And at the time I was working for the guy who ran all of Windows server and you know, active directory. He called it the linchpin workload for the Windows Server franchise, like big words. But what he meant was we had 95% market share and all of these new SaaS applications like ServiceNow and you know, Workday and salesforce.com, they had to invent login and they had to invent access control. And so we were like, well, we're gonna lose it unless we figure out how to replace active directory. And that's how Azure Active Directory was born. And the first thing that we had to do as an industry was fix identity, right? Yeah. So, you know, we worked on things like OAuth2 and OpenID Connect and SAML and JWT as an industry and now 15 years later, no one has to go build login if you don't want to, right? You have companies like Auth0 and Okta and OneLogin and Ping ID that solve that problem, solve single sign-on on the web. But access control hasn't really moved forward at all in the last 15 years. And so my co-founder and I who were both involved in the early beginnings of Azure Active directory, wanted to go back to that problem. And that problem is even bigger than identity and it's far from >>Solved. Yeah, this is huge. I think, you know, self-service has been a developer thing that's, everyone knows developer productivity, we've all experienced click sign in with your LinkedIn or Twitter or Google or Apple handle. So that's single sign on check. Now the security conversation kicks in. If you look at with this no perimeter and cloud, now you've got multi-cloud or super cloud on the horizon. You've got all kinds of opportunities to innovate on the security paradigm. I think this is kind of where I'm hearing the most conversation around access control as well as operationally eliminating a lot of potential problems. So there's one clean up the siloed or fragmented access and two streamlined for security. What's your reaction to that? Do you agree? And if not, where, where am I missing that? >>Yeah, absolutely. 
If you look at the life of an IT pro, you know, back in the two thousands they had, you know, LDAP or Active Directory, they had one place to configure groups and they'd map users to groups. And groups typically corresponded to roles and business applications. And it was clunky, but life was pretty simple. And now they live in dozens or hundreds of different admin consoles. So misconfigurations are rampant and over provisioning is a real problem. If you look at zero trust and the principle of least privilege, you know, all these applications have these coarse-grained permissions. And so when you have a breach, and it's not a matter of if, it's a matter of when you wanna limit the blast radius of you know what happened, and you can't do that unless you have fine-grained access control. So all those, you know, all those reasons together are forcing us as an industry to come to terms with the fact that we really need to revisit access control and bring it to the age of cloud. >>You guys recently, just this week I saw the blog on Topaz. Congratulations. Thank you. Talk to us about what that is and some of the gaps that's gonna help Aserto to fill for what's out there in the marketplace. >>Yeah, so right now there really isn't a way to go build fine-grained, policy-based, real-time access control based on open source, right? We have the Open Policy Agent, which is a great decision engine, but really optimized for infrastructure scenarios like Kubernetes admission control. And then on the other hand, you have this new, you know, generation of access control ideas. This model called relationship based access control that was popularized by Google's Zanzibar system. So Zanzibar is how they do access control for Google Docs and Google Drive. If you've ever kind of looked at a Google Doc and you know you're a viewer or an owner or a commenter, Zanzibar is the system behind it. And so what we've done is we've married these two things together. We have a policy based system, an OPA-based system, and at the same time we've brought together a directory, an embedded directory in Topaz that allows you to answer questions like, does this user have this permission on this object? And bringing it all together, making it open source is a real game changer from our perspective, real >>Game changer. That's good to hear. What are some of the key use cases that it's gonna help your customers address? >>So a lot of our customers really like the idea of policy based access management, but they don't know how to bring data to that decision engine. And so we basically have a, you know, a, a very opinionated way of how to model that data. So you import data out of your identity providers. So you connect us to Okta or Auth0 or Azure Active Directory. And so now you have the user data, you can define groups and then you can define, you know, your object hierarchy, your domain model. So let's say you have an applicant tracking system, you have nouns like job, you know, job descriptions or candidates. And so you wanna model these things and you want to be able to say who has access to, you know, the candidates for this job, for example. Those are the kinds of rules that people can express really easily in Topaz and in Aserto. >>What are some of the challenges that are happening right now that Aserto solves? What, what are you looking at to solve? Is it complexity, sprawl, logic problems? What's the main problem set you guys >>See?
Yeah, so as organizations grow and they have more and more microservices, each one of these microservices does authorization differently. And so it's impossible to reason about the full surface area of, you know, permissions in your application. And more and more of these organizations are saying, You know what, we need a standard layer for this. So it's not just Google with Zanzibar, it's Intuit with AuthZ, it's Carta with their own AuthZ system, it's Netflix, you know, it's Airbnb with Himeji. All of them are now talking about how they solve access control extracted into its own service to basically manage complexity and regain agility. The other thing is all about, you know, time to market and, and TCO. >>So, so how do you work with those services? Do you replace them, you unify them? What is the approach that you're taking? >>So basically these organizations are saying, you know what? We want one access control service. We want all of our microservices to call that thing instead of having to roll out our own. And so we, you know, give you the guts for that service, right? Topaz is basically the way that you're gonna go implement an access control service without having to go build it the same way that you know, large companies like Airbnb or Google or, or Carta >>Have. What's the competition look like for you guys? I'm not really seeing a lot of competition out there. Are there competitors? Are there different approaches? What makes you different? >>Yeah, so I would say that, you know, the biggest competitor is roll your own. So a lot of these companies that find us, they say, We're sick and tired of investing 2, 3, 4 engineers, five engineers on this thing. You know, it's the gift that keeps on giving. We have to maintain this thing and so we can, we can use your solution at a fraction of the cost, a fifth, a 10th of what it would cost us to maintain it locally. There are others like Styra for example, you know, they are in the space, but more on the infrastructure side. So they solve the problem of Kubernetes admission control or things like that. So >>Rolling your own, there's a couple problems there. One is, do they get all the corner cases? Who built it? They still, it's a company. Exactly. It's heavy lifting, it's undifferentiated, you just gotta check the box. So it probably will not be optimized. >>That's right. As Bezos says, only focus on the things that make your beer taste better. And access control is one of those things. It's part of your security, you know, posture, it's a critical thing to get right, but you know, I wanna work on access control, said no developer ever, right? So it's kind of like this boring, you know, like back office thing that you need to do. And so we give you the mechanisms to be able to build it securely and robustly. >>Do you have a, a customer story example that is one of your go-tos that really highlights how you're improving developer productivity? >>Yeah, so we have a couple of them actually. So there's the largest third party B2B marketplace in the US for retail. Instead of building their own, they actually brought in Aserto. And what they wanted to do with Aserto was be the authorization layer for both their externally facing applications as well as their internal apps. So basically every one of their applications now hooks up to Aserto to do authorization. They define users and groups and roles and permissions in one place and then every application can actually plug into that instead of having to roll out their own. 
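
To make the "does this user have this permission on this object" check described above more concrete, here is a small, self-contained sketch of the Zanzibar-style relationship model, using the applicant-tracking example from the conversation. It is not Topaz or Aserto code (their actual APIs and schema language differ); it only illustrates the underlying idea of relation tuples plus a permission check.

```python
# Toy relationship-based access control (ReBAC) check in the Zanzibar style.
# Illustration only, not the Topaz/Aserto API: permissions map to relations,
# and relation tuples (object, relation, subject) are the data.

# Assumed model: which relations grant which permissions on an object type
PERMISSIONS = {
    ("job", "view_candidates"): {"owner", "recruiter"},
    ("job", "edit"): {"owner"},
}

# Relation tuples imported from the application and identity provider
TUPLES = {
    ("job:backend-eng", "owner", "user:alice"),
    ("job:backend-eng", "recruiter", "user:bob"),
}

def check(subject: str, permission: str, obj: str) -> bool:
    """True if any relation the subject has on obj grants the permission."""
    obj_type = obj.split(":", 1)[0]
    granting = PERMISSIONS.get((obj_type, permission), set())
    return any((obj, relation, subject) in TUPLES for relation in granting)

print(check("user:bob", "view_candidates", "job:backend-eng"))  # True
print(check("user:bob", "edit", "job:backend-eng"))             # False
```

A production system layers relation rewrites (for example, owners inheriting whatever recruiters can do), group membership, and a consistency model on top of this basic shape, and in Topaz, as described above, an OPA-based policy layer works alongside such a directory.
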
>> I'd like to switch gears if you don't mind. First of all, great update on the company and the progress. I'd like to get your thoughts on the cloud computing market. Obviously you had a legendary run at Azure; look at the progress over the past few years, it's been spectacular from Microsoft, and you helped set the table there. Amazon Web Services is still thundering away, even though earnings came out and the market's still kind of soft. You see the cloud hyperscalers just continuing to differentiate, from software all the way down to chips. Yep. Across the board. So the hyperscalers are kicking ass and taking names, and Microsoft is right up there. What's the future? Because you now have the conversation where, okay, we're calling it supercloud, somebody's calling it multi-cloud, somebody's calling it distributed computing, whatever you want to call it. The old is now new again, it just looks different as cloud becomes the next computer industry. You've got an operating system, you've got applications, you've got hardware, it's all kind of playing out on a massive global scale, but you've got regions, you've got all kinds of connected systems, edge. What's your vision on how this plays out? Because things are starting to fall into place. WebAssembly, to me, points to app servers coming back, middleware, Kubernetes, containers; VMs are still going to be there. So you've got the progression. What's your take on this? How would you share your thoughts with a friend, or with the industry, the audience? What's going on right now? >> Yeah, it's funny, because I remember doing this with you quite a few years ago, probably in 2015, and back then we called it hybrid cloud, right? And it was a vision, but it is actually what's going on. It just took longer to get here. Back then the big debate was public cloud or private cloud, and when we were talking about these ideas, we said, well, some applications will always stay on-prem and some applications will move to the cloud. I was just talking to a big bank, and they basically said, look, our stated objective now is to move everything we can to the public cloud, and we still have a large private cloud investment that will never go away. And so now we have essentially this big operating system that can abstract all of this stuff. We have developer platforms that can sit on top of all these different pieces of infrastructure and, based on policy, decide where these applications are going to be scheduled. >> The scheduler sounds like an operating system function. >> Exactly. We used to have schedulers for one CPU or one box, then we had schedulers for a whole cluster, and now we have schedulers across the world. >> Yeah. My final question before we run out of time: what are your thoughts on WebAssembly? Because that's getting a lot of hype here again, as this next evolution that's lighter weight and kind of feels like an app server direction. It's hyped up now; what's your take on that?
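A quick aside on the "schedulers across the world" point above: the simplest way to picture policy-driven placement is a control plane applying a few ordered rules per workload. The rules, field names, and regions below are invented, purely to illustrate the idea of deciding by policy whether a workload lands on the private cloud or a public region.

```python
# Illustrative policy-based placement: decide where each workload runs.

WORKLOADS = [
    {"name": "payments-api", "data_residency": "EU", "needs_gpu": False},
    {"name": "ml-training", "data_residency": None, "needs_gpu": True},
    {"name": "core-banking", "data_residency": "on-prem", "needs_gpu": False},
]


def place(workload: dict) -> str:
    """Apply placement policy in priority order (illustrative rules only)."""
    if workload["data_residency"] == "on-prem":
        return "private-cloud"             # regulated data never leaves
    if workload["needs_gpu"]:
        return "public-cloud/us-east-gpu"  # burst to specialized capacity
    if workload["data_residency"] == "EU":
        return "public-cloud/eu-west"      # keep data in-region
    return "public-cloud/us-east"          # default region


for w in WORKLOADS:
    print(f'{w["name"]:>14} -> {place(w)}')
```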
So, you know, I remember back in the late nineties we got really excited about JVMs and this notion of write once, run anywhere. And I would say that WebAssembly provides a pretty exciting window into that, where you can take the sandboxing technology from the JavaScript world, from the browser essentially, compile an application down to WebAssembly, and have it really, truly portable. So we see, for example, policies in our world with OPA: one of the hottest things is to take these policies and compile them to WebAssembly so you can actually execute them at the edge, wherever it is that you have a WebAssembly runtime. And I was just talking to Scott over at Docker, and they're excited about bringing Docker packaging, OCI packaging, to WebAssembly. So we're going to see a convergence of all these technologies. Right now each of them is in its own silo, but we'll see a lot of the same patterns; for example, OCI is going to become the packaging format for WebAssembly modules just as it is becoming the packaging format for policies. We did the same thing: we said, you know what, we want these policies to be packaged as OCI artifacts so that you can sign them with cosign and bring the entire ecosystem of tools to bear on OCI packages. So convergence, I think, is what we're seeing. >> And I love your attitude too, because it's the open source community and the developers who are actually voting on the, quote, de facto standard. Yes. If it doesn't work, people know about it. Exactly. It's actually a great new production system. >> So, great momentum. Going back to the press release earlier this week, you're clearly filling the gaps that you and your co-founder saw a long time ago. What's next for the Aserto business? Are you hiring? What's going on there? >> Yeah, we're really excited about launching commercially at the end of this year. One of the things we wanted to do, that we had a promise around and delivered on, was open sourcing our edge authorizer. That was a huge thing for us. We've now completed pretty much all the big pieces for Aserto, and now it's time to commercially launch. We already have customers in production, design partners, and next year is going to be the year to really drive commercialization. >> All right. We will be watching this space, Omri. Thank you so much for joining John and me on theCUBE. Great to have you back on the program. >> Thank you so much. It was a pleasure. >> Our pleasure as well. For our guest and John Furrier, I'm Lisa Martin, and you're watching theCUBE live from the show floor of KubeCon + CloudNativeCon NA 2022. This is day three of our coverage. We will be back with more coverage after a short break.
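On the point above about compiling OPA policies to WebAssembly and packaging them as signed OCI artifacts: OPA's build command does support a wasm target, and cosign signs OCI references, so a build step along these lines is plausible. The file names, policy entrypoint, and registry reference below are placeholders, and pushing the bundle to the registry is assumed to happen out of band.

```python
import subprocess

# Illustrative pipeline: compile a Rego policy to a WebAssembly-enabled
# bundle, then sign the corresponding OCI artifact.

POLICY_FILE = "authz.rego"     # placeholder policy file
ENTRYPOINT = "authz/allow"     # placeholder policy entrypoint
BUNDLE = "bundle.tar.gz"       # opa build output
IMAGE_REF = "registry.example.com/policies/authz:v1"  # placeholder reference


def run(cmd: list[str]) -> None:
    """Print and execute a command, failing loudly on error."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)


# 1. Compile the policy to a wasm-enabled bundle.
run(["opa", "build", "-t", "wasm", "-e", ENTRYPOINT, POLICY_FILE, "-o", BUNDLE])

# 2. Sign the published OCI artifact so the usual supply-chain tooling
#    (verification, admission checks) applies to policies as well.
run(["cosign", "sign", IMAGE_REF])
```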

Published Date : Oct 28 2022


Chase Doelling, JumpCloud | AWS Startup Showcase S2 E4


 

(upbeat music) >> Hey, everyone. Welcome to this CUBE Conversation that's part of the AWS startup showcase Season Two, Episode Four. I'm your host Lisa Martin. Chase Doelling joins me, the principles strategist at JumpCloud. Chase, welcome to theCUBE. It's great to have you. >> Chase: Perfect. Well, thank you so much, Lisa. I really appreciate the opportunity to come and hang out. >> Let's talk about JumpCloud. First of all, love the name. This is an open directory platform. Talk to the audience about what the platform is, obviously, the evolution of the domain controller. But give us that backstory? >> Yeah, absolutely. And so, company was started, and I think, from serial entrepreneurs, and after kind of last exit, taking a look around and saying, "Why is this piece of hardware still the dominant force when you're thinking about identities, especially when the world is moving to cloud, and all the different pieces that have been around it?" And so, over the years, we've evolved JumpCloud into an open directory platform. And what that is, is we're managing your identities, the devices that are associated to that, all the access points that employees need just to get their job done. And the best part is, is we're able to do that no matter where they are within the world. >> It seems like kind of a reinvention of how modern IT teams are getting worked done, especially in these days of remote work. Talk to me a little bit about the last couple of years particularly as remote work exploded, and here we are still probably, permanently, in that situation? >> Yeah, absolutely. And I think it's probably going to be one of those situations where we stick with it for quite a while. We had a very abrupt force in making sure that essentially every IT and security team could grapple with the fact of their users are no longer coming into the office. You know, how do we VPN into all of our different resources? Those are very common and unfortunate pain points that we've had over the last couple years. And so, now, people have starting to kind of get into the motion of it, working from home, having background and setups and other pieces. But one of the main areas of concern, especially as you're thinking about that, is how does it relate to my security infrastructure, or kind of my approach to my organization. And making sure that too, on the tail end, that a user's access and making sure that they can get into everything that they need to do in order to get work done, is still happening? And so, what we've done, is we've really taken, evolving and really kind of ripping apart this notion of what a directory was. 'Cause originally, it was just like, great, almost like a phone directory. It's where people lived they're going into all those different pieces. But it wasn't set up for the modern world, and kind of how we're approaching it, and how organizations now are started with a credit card and have all of their infrastructure. And essentially, all of their IP, is now hosted somewhere else. And so, we wanted to take a different approach where we're thinking about, not only managing that identity, but taking an open approach. So, matter where the identity's coming from, we can integrate that into the platform but then we're also managing and securing those devices, which is often the most important piece that we have sitting right in front of us in order to get into that. But then, also that final question, of when you're accessing networks applications, can you create the conditions for trust, right? 
And so, if you're looking at zero trust, or kind of going after different levels of compliance, ISO, SOC2, whatever that might be, making sure that you have all that put in place no matter where your employees are. So, in that way, as we kind of moved into this remote, now hybrid world, it wasn't the office as the gating point anymore, right? So, key cards, as much as we love 'em, final part, whereas the new perimeter, the kind of the new barrier for organizations especially how they're thinking about security, is the people's identities behind that. And so, that's the approach that we really wanted to take as we continue to evolve and really open up what a directory platform can do. >> Yeah. Zero trust security, remote work. Two things that have exploded in the last couple of years. But as employees, we expected to be able to still have the access that we needed to apps, to the network, to WiFi, et cetera. And, of course, on the security side, we saw massive changes in the threat landscape that really, obviously, security elevates to a board level conversation. So, I imagine zero trust security, remote work, probably compliance, you mentioned SOC2, are some of the the key use cases that you're helping organizations with? >> Those are a lot of the drivers. And what we do, is we're able to combine a lot of different aspects that you need for each one of those. And so, now you're thinking about essentially, the use case of someone joins an organization, they need access to all these different things. But behind the scenes, it's a combination of identity access management, device management, applications, networks, everything else, and creating those conditions for them to do their roles. But the other piece of that, is you also don't want to be overly cumbersome. I think a lot of us think about security as like great biometrics, so I'm going to add in these keys, I'm going to do everything else to kind of get into these secured resources. But the reality of it now, is those secure resources might be AWS infrastructure. It might be other Salesforce reporting tools. It might be other pieces, or kind of IP within the organization. And those are now your crown jewel. And so, if you're not thinking about the identities behind them and the security that you have in order to facilitate that transaction, it becomes a board level conversation very quickly. But you want to do it in a way that people can move forward with their lives, and they're not spending a ton of time battling the systems and procedures you put in place to protect it, but that it's working together seamlessly. And so, that's where, kind of this notion for us of bringing all these different technologies into one platform. You're able to consolidate a lot of those and remove a lot of the friction while maintaining the visibility, and answering the question, of who has access to what? And when did they do that? Those are the most critical pieces that IT and security teams are asking themselves when something happens. And hopefully, on the preventative side and not so much on the redacted side. >> Have you seen the escalation up the C-Suite change of the board in terms of really focusing on how do we do identity management? How do we do single sign on? How do we do device management and network access? Is that all the way up to the C-Suite board level as well? >> It certainly can be. And we've seen it in a lot of different conversations, because now you are thinking about all different portions of the organization. 
And then, two, as we're thinking about times we're currently in, there's also a cost associated to that. And so, when you start to consolidate all of those technologies into one area, now it becomes much more of total cost optimization types of story while you're still maintaining a lot of the security and basic blocking and tackling that you need for most organizations. So, everything you just mentioned, those are now table stakes for a lot of small, medium, startups to be at the table. So, how do you have access to enterprise level, essentially technology, without the cost that's associated to it. And that's a lot of the trade offs that organizations are facing and having those types of conversations as it relates to business preparedness and how we're making sure that we are putting our best foot forward, and we're able to be resilient in no matter what type, of either economic or security threat that the organization might be looking at. >> So, let's talk about the go-to market, the strategy from a sales and marketing perspective. Where are the customer conversations happening? Are they at the IT level? Are they higher up the stack? >> It's really at, I'd say the IT level. And so, by that, I mean the builders, the implementers, everyone that's responsible for putting devices in people's hands, and making sure that they can do their job effectively. And so, those are their, I'd say the IT admins the world as well as the managed service providers who support those organizations, making sure that we can enable them to making sure that their organizations or their client organizations have all the tools that their disposable to make sure that they have the security or the policies, and the technology behind them to enable all those different practices. >> Let's unpack the benefits from an IT perspective? Obviously, they're getting one console that they can manage at all. One user identity for email, and devices, and apps, and things. You mentioned regardless of location, but this is also regardless of operating system, correct? >> That's correct. And so, part of taking an open approach, is also the devices that you're running on. And so, we take a cross OS approach. So, Mac, Windows, Linux, iPhone, whatever it might be, we can make sure that, that device is secure. And so, it does a couple different things. So, one, is the employees have device choice, right? So, I'm a Mac person coming in. If forced into a Windows, it'd be an interesting experience. But then, also too, from the back end, now you have essentially one platform to manage your entire fleet. And also give visibility and data behind what's happening behind those. And then, from the end user perspective as well, everything's tied together. And so, instead of having, what we'll call user ID schizophrenia, it might be one employee, but hundreds of different identities and logins just to get their work done. We can now centralize that into one person, making sure you have one password to get into your advice, get into the network, to get into your single sign on. We also have push MFA associated with that. So, you can actually create the conditions for your most secured access, or you understand, say, "Hey, I'm actually in the office. I'm going to be a hybrid employee. Maybe I can actually relax some of those security concerns I might have for people outside of the network." 
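The "relax some of those security concerns for people inside the network" idea Chase describes above is conditional access. A simplified sketch of that decision follows; the network range and trust signals are invented, and this is not JumpCloud's actual policy engine, just the general shape of the logic.

```python
from ipaddress import ip_address, ip_network

OFFICE_NETWORKS = [ip_network("10.20.0.0/16")]  # illustrative corporate range


def requires_mfa(source_ip: str, device_trusted: bool) -> bool:
    """Decide whether to step up to push MFA for a login attempt."""
    on_office_network = any(
        ip_address(source_ip) in net for net in OFFICE_NETWORKS
    )
    # Relax the challenge only when both signals are strong;
    # anything else gets the full MFA prompt.
    return not (on_office_network and device_trusted)


print(requires_mfa("10.20.4.7", device_trusted=True))    # False: in office, managed device
print(requires_mfa("203.0.113.9", device_trusted=True))  # True: remote login
```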
And all we do, is making sure that we give all that optionality to our IT admins, manage service providers of the world to enable that type of work for their employees to happen. >> So, they have the ability to toggle that, is critically important in this day and age of the hybrid work model, that's probably here to stay? >> It is, yeah. And it's something that organizations change, right? Our own organizations, they grow, they change different. New threats might emerge, or same old existing threats continue to come back. And we need to just have better processes and automations put within that. And it's when you start to consolidate all of those technologies, not only are you thinking about the visibility behind that, but then you're automating a lot of those different pieces that are already tightly coupled together. And that actually is truly powerful for a lot of the IT admins of the world, because that's where they spend a lot of time, and they're able to spend more time helping users tackling big projects instead of run rate security, and blocking, and tackling. That should be enabled from the organization from the get go. >> You mentioned automation. And I think that there's got to be a TCO reduction aspect here with respect to security and IT practices. Can you talk about that a little bit? >> Yeah, absolutely. Let's think about the opposite of that. Let's say we have a laundry list of technology that we need to go out and source. One is, great, where the identity is, so we have an identity provider. Now, we need to make sure that we have application access that might look like single sign on. Now, we need to make sure, you are who you are no matter where you are in the world. Well, now we need multifactor authentication and that might involve either a push button, or biometrics. And then, well, great the device's in front of us, that's a huge component, making sure that I can understand, not only who's on the device, but that the device is secure, that there's certificates there, that there's policies that ensure the proper use of that wherever it might be. Especially, if I'm an employee, either, it used to be on the the jet center going between flying anywhere you need. Now, it's kind of cross country, cross domain, all those different areas. And when you start to have that, it really unlocks, essentially IT sprawl. You have a lot of different pieces, a lot of different contracts, trying to figure out one technology works, but the other might not. And you're now you're creating workarounds for all these different pieces. So, the opposite of that, is essentially, let's take all those technologies and consolidate that into one platform. So, not only is it cheaper essentially, looking after that and understanding all the different technologies, but now it's all the other soft costs around it that many people don't think about. It's all the other automations. It's all the workarounds that you didn't have to do in the first place. It's all the other pieces that you'd spend a lot of time trying to wire it together. Into the hopes of that, it creates some security model. But then again, you lose a lot of the visibility. So, you might have an incident happen over here, or a trigger, or alert, but it's not tied to the rest of the stack. And so, now you're spending a lot of time, especially, either trying to understand. And worse timing, is if you have an incident and you're trying to understand what's happening? 
Unraveling all of that as it happens, becomes impossible, especially if it's not consolidated with one platform. So, there's not only the hard cost aspect of bringing all that together, but also the soft costs of thinking about how your business can perform, or at least optimize for a lot of those different standard processes, including onboarding, offboarding, and everything else in between. >> Yeah. On the soft cost side, I can imagine. I can see huge benefits for HR onboarding, offboarding. I can see benefits for the employee experience period, which directly relates to the customer experience. So, in terms of the business impact that JumpCloud can make, it seems to be pretty horizontal across any type of organization? >> It is, and especially as you mentioned HR. Because when you think about, where does the origin of someone's identity start? Well, typically, it starts with a resume and that might be in applicant tracking software. Now, we're going to get hired, so we're going to move into HR, because, well, everyone likes payroll, and we need that in our lives, right? But now you get into the second phase, of great, now I've joined the organization. Now, I need access to all of these different pieces. But when you look at it, essentially horizontally, from HR, all the way into the employee experience, and their whole life cycle within the organization, now you're touching multiple different teams And that's one of the other, I'd say benefits of that, is now you're actually bringing in HR, and IT, and security, and everyone else that might be related within these kind of larger use cases of making work happen all coming under. And when they're tightly integrated, it's also a lot more secure, right? So, you're not passing notes along. You're not having a checklist of other stuff, especially when it relates to something as important as someone's identity, which is more often than not, the most common attack vector for people to go after. Because they know it's the keys to the kingdom. There's going to be a lot of different attempts, maybe malware and other pieces, but a lot of it comes back into, can I impersonate, or become the person that I want within the organization, because it's the identity allows you to access all those different pieces. And so, if it's coming from a disjointed process or something that's not as tightly as it could be, that's where it really opens up a lot of different vectors that organizations don't think about. >> Right, and those vectors are only growing and multiplying as we know, and here to stay. When you're in customer conversations what do you describe as maybe the top three differentiators of JumpCloud compared to the competition? >> Well, I think a lot of it is we take an open approach. And so, by that, I mean, it's one we're not locking into, I'd say different vendors or other areas. We're really looking into making sure that we can work within your environment as it stands today, or where you want to migrate in the future. And so, this could be a combination of on-prem resources, cloud resources, or nothing if you're starting a company from today. And the second, is again, coming back into how we're looking at devices. So, we take a cross OS approach that way, no matter what you're operating on, it all comes back from the same dashboard. But then, finally, we leverage a ton of different protocols to make sure it works with everything within your current technology stack, as well as it continues to elevate and evolve over time. 
So, it could be LD app and Radius, and Sam, and skim, and open ID Connect, and open APIs. And whatever that might be, we are able to tie in all those different pieces. So, now, all of a sudden, it's not just one platform, but you have your whole business tied into as that gives you some flexibility too, to evolve. Because even during the pandemic and the shift for remote, there's a lot of technology choices that shifted. A lot of people are like, "Okay, now's the time to go to the cloud." There might be other events that organizations change. There's other things that might happen. So, creating that flexibility for organizations to move and make those calls, is essentially how we're differentiating ourselves. And we're not locking you into this, walled garden of technology that's just our own. We really want to make sure that we can operate, and be that glue, so that way, no matter what you're trying to do and making sure that your work is being done, we can help facilitate that. >> Nice. No matter what happens. Because boy, at this day, anything's possible. One more question for you about your AWS partnership. Talk to me a little bit about that? >> Yeah, absolutely. So, we are preferred ADP identity provider and SSO provider for AWS. And so, now rebranded under their identity center. But it's crucial for a lot of our organizations and joint customers because again, when we think about a lot of organization IP and how they operate as a business, is tied into AWS. And so, really understanding, who has the right level of access? Who should be in there or not? And when too, you should challenge in making sure that actually there's something fishy there. Like let's make sure that they're not just traveling to Europe on a sabbatical, and it's really who they are instead of a threat actor. Those are some of the pieces when we're thinking about creating that authentication, but then also, the right authorization into those AWS resources. And so, that's actually something that we've been very close to, especially, I'd say that the origins of a company. Because a lot of startups, that's where they go. That's where they begin their journey. And so, we meet them where they are, and making sure that we're protecting not only everything else within their organization, but also what they're trying to get into, which is typically AWS >> Meeting customers where they are. It's all about that. Chase, thank you so much for joining me on the program talking about JumpCloud, it's open directory platform. The benefits, the capabilities, what's in it for IT, HR, security, et cetera. We appreciate all of your insights and time. Where do you want to point folks to go to learn more? >> Well, absolutely. Well, thank you so much for having us. And I'd say, if you're curious about any and all these different technologies, the best part is everything I talked about is free up to 10 users, 10 devices. So, just go to jumpcloud.com. You can create an organization, and it's great for startups, people at home. Any size company that you're at, we can help support all of those different facets in bringing in those different types of technologies all into one roof. >> Awesome. Chase, thank you so much. This is awesome, go to jumpcloud.com. For Chase Doelling, I'm Lisa Martin. We want to thank you so much for giving us some of your time and watching this CUBE Conversation. (upbeat music)
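Of the protocols Chase lists at the top of that answer, LDAP, RADIUS, SAML, SCIM, and OpenID Connect, SCIM is the one that handles user provisioning. A minimal sketch of creating a user through a SCIM 2.0 endpoint is below; the base URL and token are placeholders, and the attributes any particular directory accepts will vary.

```python
import requests

SCIM_BASE = "https://directory.example.com/scim/v2"  # placeholder endpoint
TOKEN = "replace-with-api-token"                     # placeholder credential

new_user = {
    # Core SCIM 2.0 user schema (RFC 7643).
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "ada.lovelace@example.com",
    "name": {"givenName": "Ada", "familyName": "Lovelace"},
    "emails": [{"value": "ada.lovelace@example.com", "primary": True}],
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=5,
)
resp.raise_for_status()
print("Provisioned user id:", resp.json()["id"])
```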

Published Date : Aug 16 2022


Shiv Gupta, U of Digital | Quantcast Industry Summit: The Cookie Conundrum


 

(upbeat electronic music) >> Welcome back to the Quantcast Industry Summit on the demise of third-party cookies. The Cookie Conundrum, A Recipe for Success. I'm John Furrier, host of theCUBE. The changing landscape of advertising is here, and Shiv Gupta, founder of U of Digital is joining us. Shiv, thanks for coming on this segment. I really appreciate it. I know you're busy. You've got two young kids, as well as providing education to the digital industry. You got some kids to take care of and train them too. So, welcome to the cube conversation here as part of the program. >> Yeah, thanks for having me. Excited to be here. >> So, the house of the changing landscape of advertising really centers around the open to walled garden mindset of the web and the big power players. We know the big three, four tech players dominate the marketplace. So, clearly in a major inflection point. And you know, we've seen this movie before. Web, now mobile revolution. Which was basically a re-platforming of capabilities, but now we're in an era of refactoring the industry, not replatforming. A complete changing over of the value proposition. So, a lot at stake here as this open web, open internet-- global internet, evolves. What are your, what's your take on this? There's industry proposals out there that are talking to this specific cookie issue? What does it mean and what proposals are out there? >> Yeah, so, you know, I really view the identity proposals in kind of two kinds of groups. Two separate groups. So, on one side you have what the walled gardens are doing. And really that's being led by Google, right? So, Google introduced something called the Privacy Sandbox when they announced that they would be deprecating third-party cookies. And as part of the Privacy Sandbox, they've had a number of proposals. Unfortunately, or you know, however you want to say, they're all bird-themed, for some reason I don't know why. But the one, the bird-themed proposal that they've chosen to move forward with is called FLOC, which stands for Federated Learning of Cohorts. And, essentially what it all boils down to is Google is moving forward with cohort level learning and understanding of users in the future after third-party cookies. Unlike what we've been accustomed to in this space, which is a user level understanding of people and what they're doing online for targeting and tracking purposes. And so, that's on one side of the equation. It's what Google is doing with FLOC and Privacy Sandbox. Now, on the other side is, you know, things like unified ID 2.0 or the work that ID5 is doing around building new identity frameworks for the entire space that actually can still get down to the user level. Right? And so again, Unified ID 2.0 comes to mind because it's the one that's probably gotten the most adoption in the space. It's an open source framework. So the idea is that it's free and pretty much publicly available to anybody that wants to use it. And Unified ID 2.0 again is user level. So, it's basically taking data that's authenticated data from users across various websites that are logging in and taking those authenticated users to create some kind of identity map. And so, if you think about those two work streams, right? You've got the walled gardens and or, you know, Google with FLOC on one side. And then you've got Unified ID 2.0 and other ID frameworks for the open internet on the other side. You've got these two very different type of approaches to identity in the future. 
Again, on the Google side it's cohort level, it's going to be built into Chrome. The idea is that you can pretty much do a lot of the things that we do with advertising today but now you're just doing them at a group level so that you're protecting privacy. Whereas, on the other side with the open internet you're still getting down to the user level and that's pretty powerful but the the issue there is scale, right? We know that a lot of people are not logged in on lots of websites. I think the stat that I saw was under 5% of all website traffic is authenticated. So, really if you simplify things and you boil it all down you have kind of these two very differing approaches. >> So we have a publishing business. We'd love to have people authenticate and get that closed loop journalism thing going on. But, if businesses wannna get this level too, they can have concerns. So, I guess my question is, what's the trade-off? Because you have power in Google and the huge data set that they command. They command a lot of leverage with that. And again, centralized. And you've got open. But it seems to me that the world is moving more towards decentralization, not centralization. Do you agree with that? And does that have any impact to this? Because, you want to harness the data, so it rewards people with the most data. In this case, the powerful. But the world's going decentralized, where there needs to be a new way for data to be accessed and leveraged by anyone. >> Yeah. John, it's a great point. And I think we're at kind of a crossroads, right? To answer that question. You know, I think what we're hearing a lot right now in the space from publishers, like yourself, is that there's an interesting opportunity right now for them, right? To actually have some more control and say about the future of their own business. If you think about the last, let's say 10, 15, 20 years in advertising in digital, right? Programmatic has really become kind of the primary mechanism for revenue for a lot of these publishers. Right? And so programmatic is a super important part of their business. But, with everything that's happening here with identity now, a lot of these publishers are kind of taking a look in the mirror and thinking about, "Okay, we have an interesting opportunity here to make a decision." And, the decision, the trade off to your question is, Do we continue? Right? Do we put up the login wall? The registration wall, right? Collect that data. And then what do we do with that data? Right? So it's kind of a two-fold process here. Two-step process that they have to make a decision on. First of all, do we hamper the user experience by putting up a registration wall? Will we lose consumers if we do that? Do we create some friction in the process that's not necessary. And if we do, right? We're taking a hit already potentially, to what end? Right? And, I think that's the really interesting question, is to what end? But, what we're starting to see is publishers are saying you know what? Programmatic revenue is super important to us. And so, you know, path one might be: Hey, let's give them this data. Right? Let's give them the authenticated information, the data that we collect. Because if we do, we can continue on with the path that our business has been on. Right? Which is generating this awesome kind of programmatic revenue. Now, alternatively we're starting to see some publishers say hold up. If we say no, if we say: "Hey, we're going to authenticate but we're not going to share the data." Right? 
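For a sense of what an email-derived identifier like Unified ID 2.0 builds on: the general idea is to normalize an authenticated email address and hash it, so parties can match on a token rather than the raw address. The sketch below is a heavily simplified illustration; the actual UID 2.0 spec layers on salting, encryption, and token rotation.

```python
import hashlib


def normalize_email(email: str) -> str:
    """Lowercase and trim; real implementations apply fuller normalization rules."""
    return email.strip().lower()


def email_to_token(email: str) -> str:
    """Simplified stand-in for a hashed identifier: SHA-256 of the normalized email."""
    normalized = normalize_email(email)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


print(email_to_token("  Reader@Example.com "))
```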
Some of the publishers actually view programmatic as almost like the programmatic industrial complex, right? That's almost taken a piece of their business in the last 10, 15, 20 years. Whereas, back in the day, they were selling directly and making all the revenue for themselves, right? And so, some of these publishers are starting to say: You know what? We're not going to play nice with FLOC and Unified ID. And we're going to kind of take some of this back. And what that means in the short term for them, is maybe sacrificing programmatic revenue. But their bet is long-term, maybe some of that money will come back to them direct. Now, that'll probably only be the premium pubs, right? The ones that really feel like they have that leverage and that runway to do something like that. And even so, you know, I'm of the opinion that if certain publishers kind of peel away and do that, that's probably not great for the bigger picture. Even though it might be good for their business. But, you know, let's see what happens. To each business their own >> Yeah. I think the trade-off of monetization and user experience has always been there. Now, more than ever, people want truth. They want trust. And I think the trust factor is huge. And if you're a publisher, you wannna have your audience be instrumental. And I think the big players have sucked out of the audience from the publishers for years. And that's well-documented. People talk about that all the time. I guess the question, it really comes down to is, what alternatives are out there for cookies and which ones do you think will be more successful? Because, I think the consensus is, at least from my reporting and my view, is that the world agrees. Let's make it open. Which one's going to be better? >> Yeah. That's a great question, John. So as I mentioned, right? We have two kinds of work streams here. We've got the walled garden work stream being led by Google and their work around FLOC. And then we've got the open internet, right? Let's say Unified ID 2.0 kind of represents that. I personally don't believe that there is a right answer or an end game here. I don't think that one of them wins over the other, frankly. I think that, you know, first of all, you have those two frameworks. Neither of them are perfect. They're both flawed in their own ways. There are pros and cons to both of them. And so what we're starting to see now, is you have other companies kind of coming in and building on top of both of them as kind of a hybrid solution, right? So they're saying, hey we use, you know, an open ID framework in this way to get down to the user level and use that authenticated data. And that's important, but we don't have all the scale. So now we go to a Google and we go to FLOC to kind of fill the scale. Oh and hey, by the way, we have some of our own special sauce. Right? We have some of our own data. We have some of our own partnerships. We're going to bring that in and layer it on top, right? And so, really where I think things are headed is the right answer, frankly, is not one or the other. It's a little mishmash of both with a little extra, you know, something on top. I think that's what we're starting to see out of a lot of companies in the space. And I think that's frankly, where we're headed. >> What do you think the industry will evolve to, in your opinion? Because, I think this is going to be- You can't ignore the big guys on this Obviously the programmatic you mentioned, also the data's there. 
But, what do you think the market will evolve to with this conundrum? >> So, I think John, where we're headed, you know, I think right now we're having this existential crisis, right? About identity in this industry. Because our world is being turned upside down. All the mechanisms that we've used for years and years are being thrown out the window and we're being told, "Hey, we're going to have new mechanisms." Right? So cookies are going away. Device IDs are going away. And now we've got to come up with new things. And so, the world is being turned upside down and everything that you read about in the trades and you know, we're here talking about it, right? Everyone's always talking about identity, right? Now, where do I think this is going? If I was to look into my crystal ball, you know, this is how I would kind of play this out. If you think about identity today, right? Forget about all the changes. Just think about it now and maybe a few years before today. Identity, for marketers, in my opinion, has been a little bit of a checkbox activity, right? It's been, Hey, Okay. You know, ad tech company or media company. Do you have an identity solution? Okay. Tell me a little bit more about it. Okay. Sounds good. That sounds good. Now, can we move on and talk about my business and how are you going to drive meaningful outcomes or whatever for my business. And I believe the reason that is, is because identity is a little abstract, right? It's not something that you can actually get meaningful validation against. It's just something that, you know? Yes, you have it. Okay, great. Let's move on, type of thing, right? And so, that's kind of where we've been. Now, all of a sudden, the cookies are going away. The device IDs are going away. And so the world is turning upside down. We're in this crisis of: how are we going to keep doing what we were doing for the last 10 years in the future? So, everyone's talking about it and we're tryna re-engineer the mechanisms. Now, if I was to look into the crystal ball, right? Two, three years from now, where I think we're headed is, not much is going to change. And what I mean by that, John is, I think that marketers will still go to companies and say, "Do you have an ID solution? Okay, tell me more about it. Okay. Let me understand a little bit better. Okay. You do it this way. Sounds good." Now, the ways in which companies are going to do it will be different. Right now it's FLOC and Unified ID and this and that, right? The ways, the mechanisms will be a little bit different. But, the end state. Right? The actual way in which we operate as an industry and the view of the landscape in my opinion, will be very simple or very similar, right? Because marketers will still view it as a, tell me you have an ID solution, make me feel good about it, help me check the box and let's move on and talk about my business and how you're going to solve for my needs. So, I think that's where we're going. That is not by any means to discount this existential moment that we're in. This is a really important moment, where we do have to talk about and figure out what we're going to do in the future. My viewpoint is that the future will actually not look all that different than the present. >> And then I'll say the user base is the audience, their data behind it helps create new experiences, machine learning and AI are going to create those. And if you have the data, you're either sharing it or using it. That's what we're finding. Shiv Gupta, great insights. 
Dropping some nice gems here. Founder of U of Digital and also the adjunct professor of programmatic advertising at Leavey School of business in Santa Clara University. Professor, thank you for coming and dropping the gems here and insight. Thank you. >> Thanks a lot for having me, John. Really appreciate it. >> Thanks for watching The Cookie Conundrum This is theCUBE host, John Furrier, me. Thanks for watching. (uplifting electronic music)

Published Date : May 10 2021


Yusef Khan, Io-Tahoe & Suresh Kanniappan, Happiest Minds | Enterprise Digital Resilience on Hybrid and Multicloud


 

>> Announcer: From around the globe, it's theCUBE, presenting Enterprise Digital Resilience on Hybrid and Multicloud. Brought to you by Io-Tahoe.

>> Okay, let's now get into the next segment, where we'll explore data automation, but from the angle of digital resilience within an as-a-service consumption model. We're now joined by Yusef Khan, who heads data services for Io-Tahoe, and Suresh Kanniappan, who's the vice president and head of US sales at Happiest Minds. Gents, welcome to the program, great to have you in theCUBE.

>> Thank you, David.

>> Suresh, you guys talk at Happiest Minds about this notion of born digital, born agile. I like that, but talk about your mission at the company.

>> Sure. Founded in 2011, Happiest Minds is a born-digital, born-agile company. We are focused on customers: our customer-centric approach and our delivery of seamless digital solutions have helped us be in the race alongside the Tier 1 providers. Our mission, Happiest People, Happiest Customers, is focused on enabling customer happiness through people happiness. We have been ranked among the top 25 IT services companies in the Great Place to Work survey, and our Glassdoor rating of 4.1 out of 5 is among the best for Indian IT services companies. That reflects the mission and the culture we have built on our values: sharing, mindfulness, integrity, learning and social responsibility are the core values of our company, and the entire culture has been built on them.

>> That's great, sounds like a happy place to be. Now Yusef, you head up data services for Io-Tahoe. We've talked over the past year; of course you're in London. What's your day-to-day focus with customers and partners? What are you focused on?

>> Well David, my team works daily with customers and partners to help them better understand their data, improve their data quality and their data governance, and make that data more accessible in a self-service way to the stakeholders within those businesses. That's a key part of the digital resilience we enable, which we'll come on to talk about a bit later.

>> Right, and that self-service theme is something we're really going to see accelerate this decade, Yusef. But before we get into that, maybe you could talk about the nature of the partnership with Happiest Minds. Why do you choose to work closely together?

>> Very good question. We see Io-Tahoe and Happiest Minds as a great mutual fit. As Suresh said, Happiest Minds is a very agile organization, and I think that's one of the key things that attracts customers. Io-Tahoe is all about automation: we use machine learning algorithms to make data discovery, data cataloging and understanding data redundancy much easier, and we enable customers and partners to do it much more quickly. When you combine our emphasis on automation with the emphasis on agility that Happiest Minds has, that's a really nice, powerful combination. Both businesses, as Suresh said, are really innovative, digital-native companies, very focused on newer technologies and the cloud. And finally, they're both challenger brands, and Happiest Minds has a really positive, fresh, ethical approach to people and customers that resonates with us at Io-Tahoe too.

>> That's great, thank you for that. Suresh, let's get into the whole notion of digital resilience. I want to set it up with what I see, and maybe you can comment. Prior to the pandemic, a lot of customers kind of equated disaster recovery with their business continuity or business resilience strategy, and that changed almost overnight. How have you seen your clients respond to what I sometimes call the forced march to become a digital business, and what are some of the challenges they've faced along the way?

>> Absolutely. Especially during these pandemic times, Dave, customers have been having a tough time managing their business. Happiest Minds, being a digitally resilient company, was able to react much faster than other services companies in the industry. One of the key things is that organizations are trying to adopt digital technologies: there is a lot of data that has to be managed by these customers, and a lot of threats and risks that have to be managed by the CIOs. So with Happiest Minds' digital resilience technology we bring data compliance as a service, and we were able to manage resilience well ahead of other competitors in the market. We were able to bring in our business continuity processes from day one and deliver our services to customers without any interruption. That digital resilience, with business continuity processes enabled, helped our customers continue their business without interruption during the pandemic.

>> Some of the challenges customers tell me about: they obviously had to figure out how to get laptops to remote workers, make that whole work-from-home pivot, and figure out how to secure the endpoints — and looking back, those were kind of table stakes. A digital business means a data business, putting data at the core, as I like to say. So I wonder if you could talk a little bit more about the philosophy you have toward digital resilience and the specific approach you take with clients.

>> Absolutely, Dave. In any organization, data is key, and so the first step is to identify the critical data. This is a six-step process we follow at Happiest Minds. First, we take stock of the current state. Customers often think they have clear visibility of their data; however, we do an assessment from an external point of view and see how critical their data really is. Second, we help the customers strategize: the most important thing is to identify the most critical assets, and data being the most critical asset for any organization, identification of that data is key. Third, we help build a viable operating model to ensure these identified critical assets are duly secured and monitored, so that they are consumed well as well as protected from external threats. Fourth, we bring awareness to the people: we train them at all levels of the organization, which is key for people to understand the importance of these digital assets. Fifth, we work on a backup plan, bringing a comprehensive and holistic approach across people, process and technology to see how the organization can withstand a crisis. And finally, the sixth step is continuous governance of this data, which is key. It is not just a one-step process: we set up the environment, do the initial analysis, set up the strategy, and then continuously govern the data to ensure that it is not only managed well and secure, but also meets the organization's compliance requirements. That is where we help organizations secure data and meet regulations under the privacy laws. It's a constant process, not a one-time effort, because every organization grows along its digital journey and has to face all of this as part of an evolving environment — and that's where they should be kept ready to recover, rebound and move forward if things go wrong.

>> Let's stick on that for a minute, and then I want to bring Yusef into the conversation. You mentioned compliance and governance. When you're a digital business — as you say, a data business — that brings up issues: data sovereignty, governance, compliance, things like the right to be forgotten, data privacy, so many things. These were often afterthoughts that businesses bolted on, if you will, and a lot of executives are very much concerned that they be built in — it's not a one-shot deal. So do you have solutions around compliance and governance? Can you deliver that as a service? Maybe you could talk about some of the specifics there.

>> Sure. We offer multiple services to our customers around digital resilience, and one of the key services is data compliance as a service. Here we help organizations map their key data against data compliance requirements. One feature is continuous discovery of data, because organizations keep adding data as they become more digital. We help them understand the actual data across heterogeneous sources — it could be in databases, in data lakes, on-premises or in the cloud — so identifying data across these heterogeneous environments is a key feature of the solution. Once we identify and classify the sensitive data, the data privacy regulations and prevalent laws have to be mapped based on business rules, so we define those rules and help map the data so organizations know how critical their digital assets are. Then we continuously monitor the data for anomalies — another key feature, which has to run on a day-to-day operational basis — supporting data quality management on an ongoing basis. Finally, we bring in automated data governance, where we manage sensitive data policies, their data relationships and the associated business rules, drive remediation, and suggest appropriate actions for customers to take on those specific data assets.

>> Great, thank you. Yusef, thanks for being patient. I want to bring Io-Tahoe into the discussion and understand where your customers and Happiest Minds can leverage the data automation capability that you and I have talked about in the past. It'd be great if you had an example as well, but maybe you could pick it up from there.

>> Sure. At a high level, as Suresh articulated, Io-Tahoe delivers business agility: accelerating the time to operationalize data, automating, putting controls in place, and also helping put digital resilience in place. If we step back a little in time, traditional resilience in relation to data often meant manually making multiple copies of the same data. You'd have a DBA who would copy the data to various different places, and business users would access it in those functional silos. What happened, of course, was that you ended up with lots of different copies of the same data around the enterprise — very inefficient, and it ultimately increases your risk profile and your risk of a data breach, because it's very hard to know where everything is. I liked that expression you used, David, the idea of the forced march to digital. Enterprises that are going on this forced march are finding they don't have a single version of the truth, and almost nobody has an accurate view of where their critical data is. Then you have containers, which enable a big leap forward: you can break applications down into microservices, updates are available via APIs, and you don't have the same need to build and manage multiple copies of the data. So you have an opportunity to have just a single version of the truth. Your challenge then is how to deal with the large legacy data estates that Suresh has been referring to, where you have to consolidate — and that's really where Io-Tahoe comes in. We massively accelerate the process of putting a single version of truth in place. By automatically discovering the data — discovering what's duplicate and what's redundant — you can consolidate down to a single trusted version much more quickly. We've seen many customers who've tried to do this manually, and it has literally taken years using manual methods to cover even a small percentage of their IT estates. With Io-Tahoe you can do it very quickly and have tangible results within weeks and months. Then you can apply controls to the data based on context — who's the user, what's the content, what's the use case — things like data quality validations or access permissions. Once you've done that, your applications and your enterprise are much more secure and much more resilient as a result. You've got to do all of this whilst retaining agility, though, and coming full circle, that's where the partnership with Happiest Minds really comes in: you've got to be agile, you've got to have controls, and you've got to drive toward the business outcomes. It's doing those three things together that really delivers for the customer.

>> Thank you, Yusef. You and I have looked in detail at the business case in previous episodes — you were just talking about the manual labor involved, which we know can't scale — but there's also that compression of time to get to the next step and ultimately to the outcome. We've talked to a number of customers in theCUBE, and the conclusion is really consistent: if you can accelerate time to value, that's the key driver — reducing complexity, automating, and getting to insights faster. That's where you see telephone numbers in terms of business impact. So my question is, where should customers start? How can they take advantage of some of the opportunities we've discussed today?

>> Well, we've tried to make that easy for customers. With Io-Tahoe and Happiest Minds you can very quickly do what we call a data health check. This is a two- to three-week process to very quickly start to understand and deliver value from your data. Io-Tahoe deploys into the customer environment — the data doesn't go anywhere — and we look at a few data sources and a sample of data. We can very rapidly demonstrate how data discovery, data cataloging and understanding duplicate or redundant data can be done using machine learning, and how those problems can be solved. What we tend to find is that, in a matter of a few weeks, we can show a customer how they can get to a more resilient outcome, and then how they can scale that up, take it into production, really understand their data estate better, and build resilience into the enterprise.

>> Excellent, there you have it. We'll leave it right there, guys. Great conversation. Thanks so much for coming on the program, best of luck to you in the partnership, and be well.

>> Thank you, David, Suresh.

>> Thank you, Yusef.

>> And thank you for watching, everybody. This is Dave Vellante for theCUBE and our ongoing series on data automation with Io-Tahoe. (soft upbeat music)
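To make the data discovery discussion above concrete — automatically finding duplicate and redundant copies of data so they can be consolidated into a single trusted version — here is a minimal sketch in Python. It is illustrative only and is not Io-Tahoe's actual algorithm: the table and column names, the value-hashing approach and the 0.9 overlap threshold are all assumptions, and a real platform would use machine-learning-driven matching at far larger scale.

```python
# Illustrative sketch only: flag likely duplicate/redundant columns across silos
# by comparing fingerprints of sampled values. Not Io-Tahoe's actual algorithm;
# the table/column names below are hypothetical.
import hashlib
from itertools import combinations

def column_fingerprint(values, sample_size=1000):
    """Hash a sample of normalized values into a set fingerprint."""
    normalized = {str(v).strip().lower() for v in values[:sample_size] if v is not None}
    return {hashlib.md5(v.encode("utf-8")).hexdigest()[:12] for v in normalized}

def jaccard(a, b):
    """Set overlap between two fingerprints, used as a rough duplication score."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def find_duplicates(catalog, threshold=0.9):
    """catalog maps 'table.column' -> sampled values; return likely duplicate pairs."""
    prints = {name: column_fingerprint(vals) for name, vals in catalog.items()}
    pairs = []
    for left, right in combinations(prints, 2):
        score = jaccard(prints[left], prints[right])
        if score >= threshold:
            pairs.append((left, right, round(score, 3)))
    return pairs

# Hypothetical usage: two copies of the same customer email column in different silos.
catalog = {
    "crm.customers.email": ["a@x.com", "b@y.com", "c@z.com"],
    "marketing.leads.email_addr": ["A@x.com", "b@y.com ", "c@z.com"],
    "finance.invoices.amount": [100, 250, 975],
}
for left, right, score in find_duplicates(catalog):
    print(f"likely redundant copy: {left} ~ {right} (overlap {score})")
```

The design choice worth noting is what gets fingerprinted: hashing a sample of normalized values, as here, is cheap and catches exact or near-exact copies, while fuzzier matching (MinHash, embeddings) would be needed for partial overlaps.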
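In the same spirit, the data compliance as a service idea described earlier — continuously discovering sensitive fields and mapping them to the privacy rules that apply — can be sketched as a toy rule-based classifier. This is a hedged illustration rather than the Happiest Minds or Io-Tahoe rule set: the regex patterns, policy tags and column names are invented for the example.

```python
# Toy illustration of mapping discovered fields to compliance obligations.
# The patterns and policy tags below are assumptions, not a production rule set.
import re

RULES = [
    ("email",        re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"), ["GDPR Art. 6", "CCPA"]),
    ("credit_card",  re.compile(r"\b(?:\d[ -]?){13,16}\b"),    ["PCI DSS"]),
    ("phone_number", re.compile(r"\+\d[\d -]{8,}\d"),           ["GDPR Art. 6"]),
]

def classify_column(name, sample_values, min_hits=0.5):
    """Return (label, policies) for each rule whose pattern matches most sampled values."""
    findings = []
    for label, pattern, policies in RULES:
        hits = sum(bool(pattern.search(str(v))) for v in sample_values)
        if sample_values and hits / len(sample_values) >= min_hits:
            findings.append((label, policies))
    return findings

# Hypothetical scan of one table's columns.
columns = {
    "contact_email": ["jane@example.com", "raj@example.org"],
    "card_on_file":  ["4111 1111 1111 1111", "5500-0000-0000-0004"],
    "notes":         ["renewal due", "call back Friday"],
}
for col, values in columns.items():
    for label, policies in classify_column(col, values):
        print(f"{col}: looks like {label}; applicable policies: {', '.join(policies)}")
```

In a real deployment the rules would come from a governed catalog rather than being hard-coded, and the classification results would feed the continuous anomaly monitoring described above.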

Published Date : Jan 13 2021


Day 3 Keynote Analysis | AWS re:Invent 2020 Partner Network Day


 

>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners.

>> Hello, and welcome back to theCUBE's live coverage of re:Invent 2020 virtual. We're not there this year — it's theCUBE virtual. I'm your host, John Furrier, with Dave Vellante, analyzing our take on partner day. The keynotes and leadership sessions today were all about AWS APN, the Amazon Partner Network's global partner day, where the content being featured is all about the partners and what Amazon is doing to create, build and nurture the ecosystem and reinvent what it means to be a partner. Dave, thanks for joining me today on the analysis of Amazon's ecosystem and partner network. Great stuff today, thanks for coming on.

>> Yeah, you're welcome. I watched the keynote this morning. Partners are critical to AWS. The fact is that when AWS was launched, the developers ate it up — if you're a developer, you dive right in: infrastructure as code, beautiful. If you're mainstream IT, this thing just got more complex with the cloud, so there's a big gap between where I am today and where I want to be, and partners are critical to helping people get there. We'll talk about the details of what Amazon announced, but especially when you look at things like the smaller Outposts and going hybrid — Andy Jassy redefining hybrid — you need partners to really help you plan, design, implement and manage at scale.

>> You know I'm always saying nice things about Amazon, but one of the things they're vulnerable on, in my opinion, is how they balance their own SaaS offerings with what the ecosystem develops. This has been a constant challenge, and they've balanced it very well. Other vendors are very clear: they make their own software, they have a channel — it's the old playbook. Amazon has to reinvent the playbook here, and I think that's what was key today. On stage you had Doug Yeum, who leads the partner organization, also Dave McCann, who heads up Marketplace, and Sandy Carter, who heads up worldwide public sector partners. So, Dave, an interesting combination of three different teams: the classic ISV partners in the ecosystem — the Cohesitys of the world, the EMCs and so on — the marketplace with Dave McCann, which is where the future of procurement is, where people are buying product, and public sector, where there's a huge tsunami of innovation because of the pandemic, with Sandy highlighting their partners. So it's partner day, it's the partner ecosystem, but with multiple elements: the marketplace where you buy, programs and competencies with public sector, and the ISVs — all three areas are changing. I want to get your take, because you've been following ecosystems for years and you've been close to the enterprise and how they buy.

>> A couple of things, John. One is that Dave McCann was talking a lot about how CIOs want to modernize applications and have to rationalize — we'll save some of that discussion for later, when we have Tim Crawford on. But there's no question that Amazon is out to reinvent, as you said, the whole experience from procurement all the way through. Normally you had to acquire services outside of the marketplace; now they're bundling the services and software together. These are straightforward implementation services — well understood, known processes, you can pretty much size and price them — so that's a huge opportunity for partners and customers to reduce friction. The other thing I'd say is that ecosystems are critical. One of the themes we've talked about in theCUBE is that we've gone from a product-centric world in the old days of IT to a platform-centric world — the last decade has really been about SaaS platforms and cloud platforms — and I think ecosystems are going to power the new innovation in the coming decade. What I mean is, if you're just building a service and Amazon is going to do that same service, you've got to keep innovating, and one of the ways you can innovate is to build on ecosystems. There's all this data within and across industries, and through the partner network and customer networks you can start building new innovation around ecosystems — partners are that glue; Amazon's not going to go in there. As Andy Jassy even said in his fireside chat, customers will ask us for our advice and we're happy to give it, but frankly partners are better at that nitty-gritty, hardcore stuff; they have closer relationships with the customers. That's a really important gap Amazon has been closing for, frankly, the last 10 years, and to your point they've still got a long way to go, but it's a huge opportunity.

>> Good callout on Andy Jassy. One of the highlights of today was an unscheduled Andy Jassy fireside chat — normally Andy does his keynote and then talks to customers and does his thing at a normal re:Invent, but this time he came out on stage. What I found interesting was that he was talking about builders — he always uses the word builder — and customer solutions. One of the things that's interesting about this partner network is that there's a huge opportunity for companies to be customer-centric and build on top of Amazon. What I mean is that Amazon is pretty cool with you doing things on top of their platform that do two things: serve the customer's needs better than they do, and let them make more money on other services. Look at Snowflake as an example — a company built on AWS. I know they've got other clouds going on, but mainly Amazon. Zoom's the same way; they're doing a great solution. Amazon's got Redshift, Dave, but Snowflake is also a customer and a partner. That's the dynamic: if you can be successful on Amazon serving customers better than Amazon does, that's the growth hack on Amazon's partner network.

>> I think Snowflake's a really good example. You could use New Relic as an example, and I've heard Andy Jassy in the past use Cloudera as an example, but I like Snowflake better because they're really thriving. And I will say this: they're a great example of the ecosystem we just talked about, because not only are they building on AWS, they're connecting to other clouds, and that's an ecosystem they're building out. Amazon has a lot of love for Snowflake, I guess — unless you're the Redshift team — but generally speaking Snowflake is driving a lot of business for Amazon, and Andy Jassy addressed that in the fireside chat; he gets asked that question a lot. He said, look, we have our primary services, and at the same time we want to enable our partners to be successful, and Snowflake is a really good example of that.

>> I also want to call out that after Jassy's keynote on Tuesday, December 1st, I did an interview with Jerry Chen of Greylock. He's investing in startups, and one of the things he pointed out, Dave, is that with Amazon, if you're all-in on the cloud, you're going to take advantage of things that are just not available on-premises — data patterns, other integrations. And one of the things Doug pointed out was interoperability and integration: with things like the SaaS Factory program they put out there, there are advantages to being in the cloud, specifically with Amazon, that you get on integrations. Dave McCann teases that out with the marketplace when they talk about integrations. Being in the cloud with all these other partners makes integration and interoperability different, unique and better — potentially a differentiator. This is going to become a huge deal.

>> I did pick up on that yesterday — I don't think it was in the keynote, I think it was in the analyst one-on-one with Jassy. He talked about this notion — I think he was addressing multi-cloud, though he didn't use that term — of an abstraction layer and whether it simplifies things. He basically said, look, our philosophy is that we want the ability to go deep with the primitives and have that fine-grained access, because that gives us control; a lot of times when you put in an abstraction layer, which people are trying to do across clouds, it limits your ability to move fast. That, of course, is his big theme this year. At the same time, look at a company that was called out today, like Okta: when you do identity management and single sign-on, you're touching a lot of pieces — there's a lot of integration, to your point.

>> So you need partners to come in and be that glue that does a lot of the heavy lifting that needs to be done. What Jassy was essentially saying to the partner network, I think, is: look, we're not going to put in that abstraction layer — you've got to do that. We'll do some things between our own services, like they did with the glue between databases, but generally speaking that's a giant white space for partner organizations. He mentioned Okta; Infor and Apptio were talked about — that was Dave McCann, actually — Cohesity came up, Confluent doing fully managed Kafka. That to me was a signal to the partners: here's where you guys should be playing, this is what customers need, and this is where we're not going to eat your lunch.

>> Yeah. And the other thing Dave McCann, who leads the marketplace, pointed out was 200 new ISVs out there — huge news. He also talked about managed entitlements, which got my attention. It's one of those advantages that isn't sexy or mainstream to talk about, but it's really one of those details that is the heavy lifting: licensing, tracking all the compliance that goes on under the covers, and the distribution of software. That's where the cloud can be really advantaged. Also the app service catalog registry he talked about, and the professional services. These are areas Amazon is going to create automation around.

>> And as Jassy always talks about undifferentiated heavy lifting, they're going to take care of some of those plumbing issues. I think you're right about this differentiation, because if I'm a partner I can build on top of Amazon and have my own cloud — let's face it, Snowflake is a born-in-the-cloud, cloud-only solution on Amazon, so they're essentially on Amazon's cloud. The thing that's not being talked about this year, and will probably come up at future re:Invents, is that whoever can build their own cloud on top of Amazon's cloud will be a winner. I talked about this years ago — I call it tier-two clouds. This new layer of cloud service provider is going to be, on the power law, the second wave of cloud.

>> In other words, you're on top of Amazon, differentiating with a modern application at scale inside the cloud with all the other people in there, and a whole new ecosystem is going to emerge. To me this is something that's not yet baked out, but if I were a partner I'd be out there planning like hell right now, saying: I'm going to build a cloud business on Amazon, take advantage of the relationships and the heavy lifting, and compete and win that way. That's a redefining moment, and I think whoever does that will win.

>> And a big theme is reinventing everything — reinvent the industry. One of the areas being reinvented is the VAR channel and the consultancies. For years these companies made a ton of dough selling boxes — all the Dell and IBM and EMC resellers got big boats and big houses — but that business changed dramatically, and they had to shift toward value-add. So what did they do? They became VMware specialists, SAP specialists, to give a couple of examples, maybe added security. The cloud was freaking them out, but the cloud is really an opportunity for them. I'll give you an example: we've talked a lot about Snowflake; the other is AWS Glue Elastic Views, which AWS announced to connect all their databases together. Think about a consultancy that can come in and totally rearchitect your big-data lifecycle and pipeline — the people, the processes, the skill sets. Amazon's not going to do that work, but the upside value for the organizations is tremendous. So you're seeing consultancies becoming managed service providers and adding all kinds of value throughout the stack. That's a real reinvention of the partnership.

>> Yeah, I think it's a completely different channel strategy. It looks like other channels, but it's not — it's driven by value, and this idea of competing on value versus being a commodity play is shifting. The ISVs and the VARs, those traditional markets, as you pointed out, David, are definitely going to go value-oriented, and you can own a specialty area as the data comes in. And this is interesting: when Andy Jassy was asked directly in his fireside chat how partners benefit and how his keynote translates to partners, he rambled a bit, but he hit the chips — he said, we've got our own chips, which means compute. Then he went into purpose-built data stores, the data lake, Elastic Views, SageMaker and QuickSight Q. He went down the road of: we have the horsepower, we have the data lake — data, data, data. So he was hinting that if you innovate on the data, you'll do okay.

>> And again — I'm kind of a Snowflake fanboy, the way you like AWS — but if you look at AWS Glue Elastic Views, to me Snowflake's Data Cloud is different; with Glue Elastic Views there's a lot of pushing, moving and copying of data. Still, this is a great example of — remember last year at re:Invent they said, hey, we're separating compute from storage? Of course, Snowflake popularized that. So it's a great example of two companies thriving that are both competitors and partners.

>> I've got to ask you — you and I are kind of historians; we've been around the block in the enterprise for years. Where do you mark the evolution of their partner network? Amazon's growth has been so explosive, the numbers have been off the charts, and they've done it with infrastructure and platform services. Now you have the pandemic putting digital transformation on full display, Jassy telegraphing that global IT spend is their next conquering ground, and the edge exploding with 5G. So you have this huge range — they're doing all kinds of stuff with IoT, they're doing stuff on earth and in space — and they still don't have their own fully built-out SaaS business model; they rely on people to build on top of Amazon. How do you see that evolving, given they're also adding Amazon-only services, like Redshift, that compete with others? How does it play out?

>> I think it's going to be specialized. Something I've talked about is that AWS in the old days — the old days being last decade — really wasn't that solution-focused; it was about serving the builders with tooling. Look at what they're doing in the call center and at the edge with IoT. Their move up the stack is going to be very solution-oriented, but not necessarily horizontal — going after CRM, supply chain management or ERP. I don't think that's their play. Their play is going to be to focus on hard problems they can automate through their tooling and bring special advantage to, and that's what they'll turn into SaaS. At the same time, they'll obviously enable the SaaS players.

>> It reminds me of the early days when you and I first met — VMware. Everybody had to work with VMware because they had such a big ecosystem. Well, the SaaS players will run on top, like Workday does, like Salesforce does, Infor, et cetera. And — you and I and Jerry Chen talked about this years ago — I think they're going to give tools to builders to disrupt the ServiceNows and the Salesforces, who are out buying companies like crazy to try to get to half-a-trillion-dollar market caps. That's a really interesting dynamic, and right now they're not even having to walk a fine line; the lines are reasonably clear: we're going up to database, we're going to do specialized solutions, we're going to enable SaaS, and we're going to compete where we compete — come on, partner ecosystem.

>> Yeah, I think Slack being bought by Salesforce is going to be one of those Webvan moments, where it's like, okay, Slack is going to go die inside Salesforce. I get it, but it's just old-school thinking. If you're an entrepreneur, a developer or a partner, you can really reinvent the business model, because if you're disaggregating all these other services you can compete with Salesforce. Slack has now been taken out of the game by Salesforce, but look at what Amazon is doing with, say, Connect, which they're promoting heavily at this conference — you heard it in Andy Jassy's keynote and from Sandy Carter. They've had huge success with AWS Connect. It's a call-center mindset, but it's not just calls on phones — it's contact, and that is disintermediating the Salesforce model. When you start getting into specialists and specialism in channels, you have the opportunity to be valuable to customers. Call center, and these kinds of solutions you can stand up quickly and integrate into a business model, are going to be game-changing, and that's going to put a lot of threat on big incumbents like Salesforce and Slack — because let's face it, a chatbot is just a call-center front end. You can innovate on the audio, the transcriptions; there's so much Amazon goodness there that Connect isn't just a call center — it could level the playing field in every vertical.

>> And SaaS is getting disrupted, to your point. Think about what happened with Oracle and SAP: these new emerging players came up — Salesforce, Workday, ServiceNow — but their pricing model was all the same: we lock you in for a one-, two- or three-year term, and a lot of times you pay up front. Now you look at guys like Datadog, Snowflake and Elastic — they're disrupting the Splunks of the world. That SaaS model is ripe for disruption with consumption pricing, a true cloud pricing model, combined with new innovation that developers are going to attack with. People right now complain about ServiceNow pricing, they complain about Splunk pricing, they say, oh, Elastic — we can get that for half the price — or Datadog. I'm not predicting the demise of those companies — ServiceNow and Workday are great companies — but they are going to have to respond, much in the same way Oracle and SAP had to respond to the disruption they saw.

>> It's interesting — during the keynote they talked about going after the mainframes today, too. So you have Amazon going after Oracle and Microsoft, and now the mainframes: Oracle Database, SQL Server and Windows Server all cast as old-school technologies, and now the mainframe. Very interesting. And this whole idea of the SaaS Factory got my attention too. Cohesity, which we've been covering on the storage front, Dave — Mohit, the founder, was on stage — is doing data management as a service as part of this new SaaS Factory program Amazon has. What they're talking about is turning ISVs and VARs into full-on SaaS providers. If they get the SaaS Factory right, that's potentially game-changing, and I'm going to watch what the successes are there, because if Amazon can create more SaaS applications, their TAM in the global IT market can be mopped up pretty quickly — but they've got to enable it, and enable it quickly.

>> Enabling, to me — and I think when Jassy answered your question, I saw it in the article you wrote, you asked him about multi-cloud — is not about running on AWS and being compatible with Azure and with Google. It's about that abstraction layer he talked about, and that's what Cohesity is trying to do. You see others trying to do it as well — Snowflake for sure. It's about abstracting that complexity away and adding value on top of the cloud: using the cloud for scale, being really expert at taking advantage of the native cloud services — which requires, as Jassy was saying, dealing with different APIs, a different control plane, a different data plane — then taking that complexity away and adding new value on top. That's white space for a lot of players, and I'll tell you, it's not trivial; it takes a lot of R&D and really smart people. What's going to be interesting to watch shake out is whether Dell and HPE can go fast enough to compete with the Cohesitys, and you've got brand-new players like Clumio coming in. Obviously we've talked about Snowflake a lot, and there are many others.

>> I think there's going to be a huge change in expectations and experience, and a huge opportunity for people to come in with unique solutions. We're going to have specialty programming on theCUBE all day today, so if you're watching us here on the Amazon channel, it's all available on demand — there's a link on our page on the Amazon re:Invent virtual event platform; click it at the bottom and it takes you to a landing page, so check out all the interviews as we roll them out all day. We've got a great lineup, Dave: Nutanix, Pure Storage, BigID, BMC and Amazon leaders all coming on today — Ed Walsh of ChaosSearch, Rachel Rose, Madhukar Kumar, Mike Gill and tons of great partners coming in to share their stories, what's working for them and their new strategies. Throughout the day you're going to hear specific examples of how people are changing and reinventing their business development, their partnership strategies, their products and their go-to-market with Amazon — really interesting learnings and great conversations all day, so check it out. Again, everything's on demand, and when in doubt, go to theCUBE.net and SiliconANGLE.com for all the great coverage.

>> And don't forget, John, we're going to have a conversation later with Tim Crawford — Dave McCann touched on this — about the need for modernization and rationalization. This is the call-out Andy Jassy made in his keynote: he told the story of a CIO, a good friend of his, who said, hey, I love what you're doing, but it's not going to happen on my watch. Jassy is poking at that complacency, saying: guys, you have to reinvent, you have to go fast, you have to keep moving. So we're going to talk about what it means to modernize applications, why CIOs want to rationalize, what the role of AWS and its ecosystem is in providing that level of innovation, and really try to understand what the next five to seven years are going to look like in that regard.

>> Funny you mention Andy Jassy and that story. When I had my one-on-one conversation with him, he was talking about that anonymous CIO — and if people don't know Andy, he's a big movie buff; he goes to Sundance every year. So I said to him, this era of digital transformation is kind of like that scene in The Godfather, Dave, where Michael Corleone tells Tom Hagen, "Tom, you're not a wartime consigliere." What he meant was that they were going to war with the other five families. And I think this is what Jassy was pointing out: this is such an interesting, important time in history, and if you don't have the leadership chops to lean into it, you're going to get swept away. That story about the complacent CIO — he didn't want to shift, a new guy or gal came in, and they lost three years, three years of innovation, and that lost time you can't get back. During this time you have to have the stomach for digital transformation, the fortitude to go forward and face the truth, and the truth is you've got to learn new stuff and let go of the old way of doing things — he pointed that out very aggressively. For the partners the same thing is true: you've got to look in the mirror and ask, where are we, what's the opportunity, and then go there. Otherwise you can wait and be swept away — be driftwood, as Pat Gelsinger would say — or lean in, pick up a shovel, and start digging the new solution.

>> The other interesting thing: every year when you listen to Jassy's keynotes and experience re:Invent, the culture comes through. John, you live in Silicon Valley and you talk to its leaders — what's the secret of success? Nine times out of 10 they'll talk about culture, maybe 10 times out of 10, and that comes through in Jassy's keynotes. One of the things that was interesting this year — and it's been thematic; repetition is important to Andy because he wants to educate people and make sure it sticks — is his focus on the idea that you actually can change your culture. There's a lot of inertia; people say, "not on my watch," or "it doesn't work that way around here." And then he'll share stories about how AWS encourages people to write papers — anybody in the organization can say "we should do it differently," and they follow their protocol, working backwards and all of that. I believe him when he says they're open to it, and he gave a great example today: if somebody says it's 10 feet and somebody else says it's five feet, you don't compromise and call it seven and a half feet — we know it's not seven and a half feet. You either want to be at 10 or at five, whichever is the right answer, and they push on that. He gives examples like that for the AWS culture — the working-backwards process, the frequently-asked-questions documents — and he's always pushing. That, to me, is very important and fundamental to understanding AWS.

>> There's no doubt that Andy Jassy is the best CEO in the business these days. Compared to everyone else he's hands-down more humble, and who else does three-hour keynotes the way he does, with no notes — he memorizes it all. He's competitive and he's open, he's a good leader, and I think he's a great CEO. I think it will be written, looking back at this time in history, that post-COVID is an era: we're going to look back and say digital transformation was accelerated — yes, all that good stuff, people, process, technology — but I think we're going to look at this year and say this was the year there was a before COVID and an after COVID, and the people who changed and modernized became the winners, while the losers sat still. So I think that was a great message from him. All right, we've got to leave it there. Dave, great analysis. We're going to be back with the power panel two sessions from now — stay with us. We've got another great guest coming up next, and then a pair of guests to talk about marketplace pricing and how enterprise CIOs are going to consume the cloud and its ecosystem. This is theCUBE. Thanks for watching.

Published Date : Dec 4 2020


Justin Hotard, HPE Japan | HPE Discover 2020


 

>>from around the globe. It's the Cube covering HP. Discover Virtual experience Brought to you by HP. >>Hello, everyone. Welcome to the Cube's coverage we're covering HP Discover Virtual experience. 2020. I'm John Furrier, host of the Cube. Great online experience. Check it out. A lot of content go poke around a lot of Cube interviews. A lot of content from HP. It's their virtual conference. HP Discover virtual experience. We have Cube alumni Justin Hotard, who's now s VP and general manager of HP Japan. Justin, great to see you virtually here for the virtual experience. How you doing >>Doing well, John. Great to see you again. A swell and really glad to be here. >>You know, just reminiscing about our previous interview a couple times. You know Jeff Frick is interviewed. I've interviewed HP Discover a couple years ago. Um, service provider Edge now is booming. Everyone's working at home. Everyone is seeing the global pandemic play out on a global stage and impacting our lives. But anyone in the in the I T. Business or technology business is seeing the massive gaps and the areas that need to be worked on. This is something that we're gonna dig into it, I think is really interesting conversation as someone who's in Japan. Honestly, Big telco presence, but also part of the global stage. So I want to get into that. But before we do, tell us about your new role at HP. What are you working on and what are you doing? >>Yes. So, John, currently, I'm the president of HP Japan. I'm responsible is the managing director of Japan and also the managing managing director. Our business in China as well. So keeping myself busy these days. >>A pack your own a lot of zoom calls, conference calls, could imagine the work. You're doing pretty big disruptions. I want to get your thoughts as an industry participant and who's seen these ways before. What is some of the disruptions that you're seeing right now? I see there will document in terms of VM or video, um, VPNs under proficient. Where are you seeing the big disruption? Because those are the obvious low hanging fruit. But it's certainly being an impact. The disruptions or creating opportunities, but major challenges right now. What's your thoughts? >>You >>know, I think I think specific and, uh John and we're seeing in Japan, and a big pillar is, you know, this is really a big inflection point in terms of how people work, and as you as you know, you think about Japan. The culture and the economy has been very reliant on face to face in relation, relationship driven. It's also there's been some traditional paper based activity in that space, as well as things like the Hong Kong stamp away. You sign documents to get you're not just for government approval, but even in private transactions. So all of that is actually under a great way to change. And so the obvious part is, we talk about virtualization and VD I It's really forcing people to rethink, um, you know, work flows and it's not, you know, it's not just one thing. Generally, it's across many, many parts. Education, manufacturing, obviously, obviously traditional enterprise. You touched on Zoom and other virtualization and beady eye, but it's it's I think it's coming across all industries right now. Based on this change, >>what's going on in Japan? Specifically, I know that some GDP numbers were coming in pre covert. I'll see when Covic it's given some of the things you were just talking about how they do business. The culture there must be impacted by the covert 19. 
What do you what you're seeing there, and how do they move forward? What is some of the changes that need to happen? What do you see? >>Yeah, I mean, I think you touched on. I think the economy that was already under pressure. Um, then you have Cove. It hit. Um, you know, Japan has a huge has had a huge tourism business booming based on the growth in Asia and obviously particularly in China, all of that gets hit. And, uh huh. And then, obviously, you know, the traditional way of doing business has been challenged over the past few months, but it's actually creating quite a bit of opportunity. And some of it is some of it is similar to what you see in other parts of the world. But, you know, we've seen many of the Japanese companies and medical devices and pharmaceuticals jump into innovation and everything from masks toe, um, you know, investment in, you know, in virology and other and, you know, in other areas and testing and all the things that you see, but beyond that we're also seeing is a lot, a lot more discussion around innovation. One place that we're seeing it immediately is education. There's a huge initiative around connecting uh, schools, primary schools, great schools and bringing technology into those schools is a way to accelerate the learning experience. I think obviously in this in this new world in the short term help manage on and ensure continuity of learning through through social distancing and some of the challenges that and everybody has, you know, in in primary education. >>It's interesting, you know, those traditional things like you mentioned just signatures converting at the digitally signatures of the stamping thing you mentioned. Also, the face to face with education, every vertical up is going to be disrupted and an opportunity. So that's what you guys see. That transformation is part of that. What are some of the patterns you see emerging so that your customers and prospects can capture it? What is some of the highlights? What's the big picture? >>Yeah, I think I think at a high level we talk a lot about digital transformation and remote work. These, by the way, were discussed before Covic hit, so I think it's It's just an acceleration. The other one is really around edge, and I ot, um Japan. Obviously great tradition of manufacturing this actually is gonna probably create new investment around manufacturing. Is Japan looks to build its manufacturing base is part of what we expect from the government stimulus programs out there. Um, but they're investing in. And I don't think the factory that will be built tomorrow is gonna is going to start off with a traditional labour view. In fact, it's going to start very, very organized against robotics AI using using i O. T. Using sensors to drive greater levels of automation. A lot of that exists today, but I think this this event just creates more opportunities for acceleration, particularly Greenfield. So we're having conversations with customers around all those areas right now. >>You know, one of the biggest observations I would say in the past 10 years, looking at the wave we've been on and looking at the massive wave coming in now is culture is always a part of the blocker of adoption, and you're kind of getting at some of this with the world you're in now, >>where >>the culture has to shift pretty radically fast. Whether it's the remote workforce, the remote workplace, workloads with robotics and AI everything work related workplace workloads, workflow was with the work. We're forced. 
I mean, always changing, right? So this is a critical cultural thing. Your thoughts on this because this has to move faster. What are you seeing as catalysts? Any kind of technology? Enablement. What's the What's the What's the data tell you? >>Yeah, yeah, I think I think a couple of things were, you know, we're seeing I think, one that we're seeing that given that we've obviously seen in the rest of the world for a number of years now is a is a shift, that consumption. And we've seen that grow from customers, right? So they're looking at How do we accelerate this experience, how they stand it up? How did they get it? Running and consumption as a service, you know, as a service, models are becoming even more attractive, and so we're seeing new interest in that as a way to build things, to scale things, to create flexibility for future growth. And it's not, you know, it's not just public cloud, it's it's public cloud and on premise applications. It's integration into the virtualization stack, obviously, with, um, you know, with players like VM Ware and Nutanix and Red Hat, it's ah, you know, with open shift containers. It's bringing all of that, you know, bringing all of that scale and flexibility and the other good place. Honestly, we're still seeing it is even in some of our traditional businesses, and we had a very large consumption model in a traditional transaction processing business and for that customer was about creating the flexibility for growth. Um, and so I think we're you know, I think we really are on the brink of a very different I t model in, you know, certainly in Japan to enable a lot of this innovation and to provide more more flexibility and more automation for, you know, for companies there in the businesses. >>And I just want to just validate that by seeing the day that we're looking at in the interviews we've had and even our internal conversation with our editorial Cuban research teams is, is it's happening now in the change you can't ignore it. You could ignore in the past were not ready for it. People process technology. Three pillars of transformation with Cove ID and we've seven, which is having this debate with our team this past month where it's not so much an acceleration in the future. The future got pulled to today, and people are now seeing it and saying, Wow, I need to move because the consequences of not changing are obvious. It's not like a hypothetical. You're starting to see specific use cases where the folks that under invested or didn't make the right bets might be on the wrong side of history coming out of covitz. So to your point about growth is a really key point. This >>is what >>everyone is thinking about right now. So I got to ask you, what solutions do you guys have ready to help customers? Because right now, solutions Walk are really all that matters. It walks that fine line between making it and not making it's having the right solutions is key. >>Yeah, and actually, you know, I think one of things you mentioned a great example of what you're talking about in transformation right in the airline industry. You know, we're seeing that we're going to see this in in Japan, right? This is a place where based if a service was considered a premium experience where you go to kiosks and automation. But now I think we're going to see now we're seeing already interested complete and an automation right bag check bag drop. And that stuff's been talked about for many years. 
But now it's an acceleration of the experience, and the difference is going to be no longer is it going to be a premium to talk to someone? It's actually about speed. So that's a place where, you know, obviously that's a heavily impacted industry. But as we see it come back in Japan and probably throughout Asia, I think we're gonna see a very different model. And to your question on, uh, you know, to your question on technologies, when I see us doing is really kind of three pieces I think you've got You've got solutions like VD. I were literally out of the box and we built a partners so that customers that are small, medium or large that wants something standard that they could just take into it quickly. We have a platform for also things like SD wan to our business, and we're seeing significant growth there, obviously, you know, mobile access, wireless access, Another place where we're seeing demand, just building on our core business and really seeing healthy growth. I mentioned education is one vertical, but we're seeing it in, obviously in places like manufacturing and on. I'm expecting this even more broken enterprise there as this customer, Aziz, many of our customers come back to the office and bring employees back in. And you can't. You can't have a traditional, you know, just density of desks, right? You've really got to think about how people have mobility and have flexibility to make being distancing and and even even kind of the in and out of office, right? How do I mean by that? That work experience in the productivity, whether I'm in the office for a couple days and how so? I think those are places where we see the technology. Then we talk about consumption service. So the flexibility consume it as a service which in all of those solutions we have offers around and then ultimately even a pop it out or hp fs our financial services, giving customers flexibility and payment options, which for many people that are cash strapped solves a real challenge, right? We talk a lot about the technology but fundamental business challenge of saying yes, I want to invest today. I need to get my work, my workforce up in productive with beady eye. But so they can start generating revenue and cash flow, but one of the cash flow to invest in that productivity. And so this becomes a place where, you know, we're just seeing a lot of traction with our customers. We can help them actually get that up and running, not not created huge cash flow outlay upfront and making get productive and get back on their feet. And definitely in the mid market and the smaller businesses, we're seeing a lot of a lot of activity there. >>That's a huge point, because right now, more than ever, that need is there because of the financial hardships that we're seeing that's evident and well reported. Having that financial flexibilities primary, that's a key thing. So that's great. So good to hear that. The second thing I want to ask you on the business side that's important is not just a financing because you want to have that consumption buy as you go from a cloud technology like standpoint as a service. But now you've got the financial support check. Next step is ecosystem. What are you guys doing on the ecosystem side? If I'm trying to rebuild my business or have a growth strategy check technology check. I'm gonna get some business help on the finance side. Third is partners. What's the status there? >>Yeah, yeah, I think there's I think there's a couple things. 
One is there are obviously the global relationships we have, the close relationship with VMware, the Nutanix relationship, Red Hat, and others, where we're standing up solutions like some of the things I mentioned, like VDI, literally a packaged, out-of-the-box experience with a complete turnkey solution. So our partners don't even have to optimize it; they can just deploy it and enable their customers. I think the other piece in Japan, and we didn't touch on it earlier, but it's one of the really important things, is that most of our customers depend on their vendors and their partners to actually do a lot of their IT work. It's a little bit unique in Japan versus the rest of the world. And so this is a place where we're spending a lot of time with our entire partner ecosystem to make sure they're ready. I was actually in a conversation yesterday with a partner talking about the investments they're putting in to really bring that core innovation around VDI and around SD-WAN, as an example, and working with them to make sure they've got all the tools they need from us, so that what they deliver into their ecosystem is very turnkey and easy. And I think that's really, really important. So it's not just the global technology relationships we talked about; certainly in Japan, it's also about stitching together that entire ecosystem, so that the end customer has a turnkey experience and everybody involved in that delivery has a seamless experience getting these customers up and running. >> And it's great that you guys have those foundational services, but also now, with some great acquisitions, you've got the cloud-native experience across environments, and then the reality of the edge. Workforces and workplaces are changing, VDI, et cetera, but you've also got edge exploding, and you guys have made years of investments in edge. So with telco and WiFi all kind of coming together, it sets up a nice front-end piece with the app development piece going on. You're seeing that in Japan as well? >> Yeah, I think all of our major telcos there have announced 5G projects and launches, and we've got a new entrant in the telco space, Rakuten, which launched its 4G solution just a couple of months ago. But I think all of that is very favorable to driving greater levels of connectivity. A lot of times when we talk about 5G, we think about the next mobile device or handset, but it's also a lot of private LTE and connectivity, and I think we'll see the intersection of 5G and WiFi. In some cases, we're having conversations about whether there are opportunities to use 5G as the backhaul and WiFi in a small or medium-sized office or home. So there are a number of things like that that I think will be compelling and great opportunities for growth, because Japan, as you know, John, is an incredibly well-connected society with a lot of connectivity. But I think this is also creating new demand. I mean, people weren't working at home all the time.
Obviously, you see that in other countries, where maybe media streaming and video conferencing weren't part of the plan when people got their original Internet service. I think in Japan that's even more so, because of the tradition that when I go to the office, I work, and when I'm home, I'm relaxing. This is fundamentally undergoing a huge shift right now, and so I think there's going to be a really significant wave of growth in 5G and WiFi as this new remote work experience, this new mobile work experience, takes hold. >> A lot of architecture to rework a little bit. Not radical, but certainly transforming, and there are benefits. Exciting time, tough environment right now; people are working hard and have to come out of it, but it's super exciting from a tech perspective, what it can enable. Really appreciate it. Of course, we're here in the HPE Discover virtual experience bringing you the best content, so I have to ask you, what sessions do you think people should tune in to for the virtual experience? >> Well, of course, the one that I think everyone has to watch, and I never like to miss, is the keynote, because Antonio always gives us not only some of the great technologies and launches, but also a real vision of where we see the industry going. I think that one's foundational. But we've also got some great sessions on consumption and as-a-service that are actually set up for some of our customers and partners in Japan and across Asia, and I think those will be really good discussions with folks like our CTO and our global general manager for GreenLake. So I'd encourage folks to tune in to really learn about as-a-service, because a lot of times we talk about the cloud and we think about public cloud only, and certainly for many of my customers and partners in Japan, with everything we just talked about, the cloud is going to be an inevitable reality. But the cloud is an architecture, and that's where some of these new technologies and services that we're bringing out will be really valuable, whether it's in storage, or in compute virtualization, or enabling collaboration, or some of the things we're doing right now, John, via video conference, but also even just automating the data center and bringing new levels of productivity back into the traditional data center, as we need to do in order to enable the new edge and some of these new applications around AI and machine learning that are necessary to support the growth of the economy. But net-net, I think these are all things that are going to support growth and recovery. So I think Discover is a great opportunity for our customers and partners to learn what they can do to help accelerate that and accelerate the recovery. >> Certainly, cloud has shown the way with its operating model. It's not just public, it's on-premises, it's at the edge, so it's not just multi-cloud either, it's multi-environment. This is where the market's going, so you guys are on the right track. Justin, really appreciate the time, but I want to ask the final question. I want you to complete this sentence for me as we end this out on our virtual experience: HPE's competitive advantage for our clients is that we are... blank.
>> Our competitive advantage is that we are the best partner, deeply understanding our clients' needs and bringing them the right innovation and value that they need to deliver their business outcomes, and in this case, obviously, recover and get back to growth. >> There it is. Justin Hotard, Managing Director and President of HPE Japan, great to see you. Congratulations on your new role over there in Asia Pacific, and thanks for checking in on the virtual experience. Thanks for coming on, and good to see you again. >> Great to see you, John. Thanks again for making time for me, and best of luck for a successful Discover virtual experience. >> Awesome. Okay, I'm John Furrier here in theCUBE studios, getting the remote interviews for this virtual experience for HPE Discover. Thanks for watching.

Published Date : Jun 24 2020


Kiran Narsu, Alation & William Murphy, BigID | CUBE Conversation, May 2020


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hello, and welcome to theCUBE studios. I'm John Furrier, here in Palo Alto with our remote coverage of the tech industry. We've got our quarantine crew here, getting all the stories in the technology industry from the thought leaders and the newsmakers. We've got a great story here about data, data compliance, and really about the platforms for how enterprises are using data. I've got two great guests and some news to announce: Kiran Narsu, Vice President of Business Development at Alation, and William Murphy, Vice President of Technology Alliances at BigID. We've got some interesting news, an integration partnership between the two companies, which is really compelling, especially now as people have to look at cloud scale and what's happening in our world, certainly in the new realities of COVID-19 and going forward: the role of data, new kinds of applications, and the speed and agility that are going to require more and more automation, and more rigor around making sure things are in place. So guys, thanks for coming on, I appreciate it. Kiran, William, thanks for joining me. >> Thank you. >> Thank you. >> So let's take a step back. Alation, you guys have been on theCUBE many times; we've been following you. You've been a leader in the enterprise catalog, a new approach. It's a real new technology approach and methodology and team approach to building out data catalogs. So talk about the alliance here. What's the news? Why are you two creating this integration partnership? >> Well, let me start, and thank you for having us today. As you know, Alation launched the data catalog category seven years ago, and even today we're acknowledged as the leader in that space. But we really began with the core belief that ultimately data management will be driven more and more by business demand and less by information suppliers. Another way to think about that is that how people behave with data will drive how companies manage data. So our philosophy, put very simply, is to start with people, not data, and our customers really seem to agree with this approach. We've got close to 200 brands using our tool every single day to drive vibrant data communities and foster a real data culture in their environments. So one of the things that was really exciting to us was the interest in data privacy from large corporate customers trying to get their arms around this, and we really strive to improve our ability to use the tool inside these enterprises across more use cases. So the partnership that we're announcing with BigID today: BigID is the leading modern data intelligence platform for privacy, and what we're trying to do is bring a level of integration between our two technologies so that enterprises can better manage and scale their data privacy compliance capability. >> William, talk about BigID, what you guys are doing. You also have a data intelligence platform. We've been covering GDPR for a very long time; I once called it something I won't repeat because it wasn't very complimentary, but the reality has set in, and users now understand more than ever that privacy is super important and companies have to deal with this. You guys have a solution; take a minute to explain BigID and what you guys are doing. >> Yeah, absolutely. So our founders, Dimitri Sirota and Nimrod Vax, founded BigID in 2016, the same year that
GDPR was authored. And the big reason there is that data changed: how companies and enterprises handled data was changing pretty much forever. That profound change meant that the status quo could no longer exist, and so privacy was going to have to become a day-to-day reality for these enterprises. But what BigID realized is that to do anything with privacy, you actually have to understand where your data is, what it is, and whose it is. And so that's really the genesis of what Dimitri and Nimrod created, which is a privacy-centric data discovery and intelligence platform that allows our enterprise customers, and we have over 70 customers in the enterprise space, many within the Fortune 100, to find, classify, and correlate sensitive data, as they define it, across data sources, whether it's on-prem or in the cloud. This gives our users a kind of unprecedented ability to look into their data and get better visibility, which both allows for collaboration and allows for real-time decision-making, with better accuracy and confidence that regulations are not being broken and that customers' data is being treated appropriately. >> Great. I'm just reading here from the release, and I want to get your thoughts and unpack some of the concepts. The headline is "Alation strengthens privacy capabilities with BigID partnership, empowering organizations to mitigate risks, delivering privacy-aware data use and improved adherence to data privacy regulations." It's a mouthful, but the bottom line is that there's a lot of complexity around these rules and these platforms. And what's interesting, you mentioned discovery: the enterprise discovery side of the business has always been a complex nightmare. I think what's interesting about this partnership, from my standpoint, is that you guys are bringing an interface into a complex platform and creating an easy abstraction to make it usable. At the end of the day, we're seeing the trends with Amazon: they have Kendra, which they announced and are going to ship soon. Fast speed of insights has to be there, so unifying data interfaces with the back end really seems to be the pattern. Is that the magic going on here? Can you guys explain what's going on with this, and what's the outcome going to be for customers? >> Yeah, I guess I'll kick off, and Will, please chime in. I think there are really three overarching challenges that enterprises are facing as they grapple with these regulations, as Will talked about. Number one, it's really hard to both identify and classify private data, right? It's not as easy as it might sound, and we can talk a little bit more about that. It's also very difficult to flag, at the point of analysis, when somebody wants to find information, the relevant policies that might apply to the given data they're looking to run an analysis on. And lastly, enterprises are constantly in motion: as enterprises change, buy new businesses, enter new markets, and launch new products, these policies have to keep up with that change. These are real challenges to address, and with BigID and Alation, we're trying to really accelerate that compliance with the combination of our tools, reduce the cost and complexity of compliance, and fundamentally keep up through a single interface, so that users can know what to do with data at the point of consumption. I think that's the way to
think about it. Will, I don't know if you want to add something to that. >> Absolutely. Kiran and I have been working on this for many months at this point, but most companies don't have a business plan of just saying, let's store as much data as possible without getting anything out of it. And in order to get something out of it, the ability to find that data rapidly and then analyze it, so that decision makers can make up-to-date decisions, is pretty vital. A lot of these things, when they have to be done manually, take a long time; there are huge business issues there. And so the ability to both automate data discovery and then cataloging, across Alation and BigID, gives those decision makers, whether it's the data steward, the data analyst, or the chief data officer, an ability to really dive deeper than they have previously, with better speed. >> You know, one of the things that we've been talking about for a long time with big data is these data lakes, and they're fairly easy to pull together. I mean, you can put a bunch of data into a corpus and act on it, but as you start to get across these silos, there's a need for getting a process down around managing not only the data wrangling but the policies behind it, and platforms are becoming more complex. Can you guys talk about the product-market fit here? Because there's SaaS involved, so there's also customer activity. What's the product-market fit that you guys see with this integration? What are some of the things that you're envisioning will emerge out of this value proposition? >> I think I can start. I think you're exactly right. Enterprises have historically made huge investments in data warehouses, data marts, data lakes, and all kinds of other technology infrastructure aimed at making the data easier to get to, but they've effectively just layered onto the problem. Alation's catalog has made it much more effective at helping organizations find, understand, trust, and reuse that data, so that stewards, the people who know about the data, can inform users who need to run a particular report or conduct a specific analysis, accelerate that process, and compress the time to insight much more than is possible with today's technologies. And if you overlay that onto the data privacy challenge, it's compounded. Will, it would be great for you to comment on what the data discovery capability of BigID does to improve that even further. >> Yeah, absolutely. So, as two companies, we're trying to bridge this gap between data governance and privacy, and John, as you mentioned, there's been a proliferation of a lot of tools, whether they're data lakes, data analysis tools, et cetera. What BigID is able to do is look across over 70 different types of data platforms, whether they're legacy systems like SharePoint and SQL, whether they're on-prem or in the cloud, whether it's data at rest or in motion, and we're able to auto-populate our metadata findings into Alation's data catalog, the main purpose being that those data stewards have access to the most authentic, real-time data possible. >> So in terms of the customer value they're going to see with more built-in privacy-aware features, is it speed? I mean, the problem is compounded with the data: getting that catalog and getting insights out of it. For this partnership, is it speed to outcome? What is the outcome that you guys are envisioning here for the customer? >> I think it's a combination of speed, as you said. You
know, they can much more rapidly get up to speed. An analyst who needs to make a decision about a specific data set, whether they can use it or not, can know at the point of analysis if this data is governed by policies, informed by BigID, so the Alation catalog user can make a much more rapid decision about how to use it. The second piece is the complexity and cost of compliance: they can really reduce and start to winnow down their technology footprint, because with the combination of the ongoing discovery that BigID provides and the enterprise data catalog provided by Alation, we give them the framework for keeping up with these changes in policies as rules and companies change, so they don't have to keep reinventing the wheel every time. So we think there's a significant speed-to-market advantage as well as an ability to really consolidate the technology footprint. >> I'll add to that, just one moment. When Alation helped create this marketplace seven years ago, one of the goals, and I think it's one BigID is assisting with as well, was the trust and confidence that both the users of this software, the data stewards and the analysts, have in the data that they're using. And the trust and confidence they're building with their end consumers is much better knowing that this is both bi-directional and ongoing, continuously. >> You know, I've always been impressed with Alation's vision, its big vision around the role of the human and data, and I think the world is spinning in that direction; you're starting to see that now. William, I want to get your thoughts with BigID, because one of the things that's challenging out there, from what we're hearing, is that people want to protect their sensitive data, obviously, with the hacks and everything else, and personal information. There's all kinds of regulation, and believe me, state by state, nation by nation, it's crazy complex. At the same time, they've got to ensure compliance, and there are tripwires everywhere, right? So you have this kind of nested, complex web of stuff and some real security concerns. At the same time, you want to make data available for machine learning and things like that. This is how the problem has gotten twisted around. So if I'm an enterprise, I'm like, oh man, this is a pain in the butt. How are you guys seeing this evolve? Because this solution is one step in that direction. What are some of the pain points, what are some of the examples, and can you share any insights around how people are overcoming this? Because they want to get the data out there; they want to create applications that are going to be modern, robust, and augmented, whether it's with AI of some sort or some other application, while at the same time protecting the information and staying compliant. It's a huge challenge. Your thoughts? >> Absolutely. So to your point, regulations and compliance measures, both state-by-state and internationally, are growing. I mean, when we saw GDPR four years ago, and the proliferation of other things, whether it be in Latin America, in Asia Pacific, or across the United States, potentially even at the federal level in the future, it's not getting easier. To add complexity to that, every industry, and many companies individually, have their own policies in the way that they describe data, whether what's sensitive to them is patent numbers, loyalty card numbers, or any number of different things where
that enterprise says this type of data is particularly sensitive. The way we're trying to do this is to be a force multiplier for the individuals within the organization who are in charge of the stewardship of their data, whether it be on the privacy side, the security side, or the data and analytics side. That's what we want to do, and automation is a huge piece of this. BigID has a number of patents in the machine-learning area around data discovery and classification, cluster analysis, and being able to find duplicates of data out there, and when we put that in conjunction with what Alation is doing, it actually gives the users of the data an unprecedented ability to curate, deduplicate, and secure sensitive data, all via a policy-driven, automated platform. That's actually, I think, where the magic is: we want to make sure that when humans get involved, their actions require, how do I say this, minimal human interaction, and when they are involved, it's for the purpose of remediation. So they're the second step, not the first step. >> I'll get your thoughts here. You know, I always riff on the idea of DevOps. It's a cloud term, and when you apply it to data, you talk about programmability, scale, automation, but the humans are making the calls. Whether you're a programmer in the DevOps world or a data customer of the catalog in Alation, I'm making decisions with my business, I'm a human, I'm taking action at the point of design or wherever. This is where I think the magic can happen. Your thoughts on how this evolves for that use case, because what you're doing is augmenting the value for the user by taking advantage of these things. Is that right, or am I in the right area? >> Yeah, I think so. I think one way to think about Alation in that analogy is that we target the consumers of data; we're not a provider to the information suppliers, if you will, but to the people who need to make decisions every single day on the right set of data. We're here to empower them to do that with data that they know has been given the thumbs up by people who know about the data: connecting stewards, who know about the subject matter at hand, with the data that the analyst wants to use at the time of consumption. That powerful connection has been so effective for our customers, enabling them to do analytical work that they just couldn't dream of before. So the key piece here is that, with the combination with BigID, we can now layer in a privacy-aware consumption angle, which means that if you have a question about running some customer propensity model and you don't know if you can use this data or that data, the BigID data discovery platform informs the Alation catalog of the usage capabilities of that given data set at the moment the analyst wants to conduct his or her analysis, with the appropriate data set as identified and endorsed by the stewards. That point in time is really critical, because that's where we can fundamentally shrink the decision cycle. >> Yeah, it's interesting, and so you have the point of attack on the user, in this case the person in the business who's doing the real work. That's where the action is. It's a whole other meaning of actionable data, right? So this seems to be where the value sits. It's agility, really, is kind of what we're talking about here, isn't it? >> It is very agile. And the differentiation between Alation and
BigID, in what we're bringing to the market now, is that we're also bringing flexibility. And you mentioned agility: the point of agility there is that we allow our customers to say what their policies are, what their sensitive data is, to define that themselves within our platforms, and then go out, find that data, classify it, catalog it, et cetera. That's giving them the extra flexibility that enterprises today need so they can make business decisions faster and actually operationalize data. >> Guys, great job, good news. I think this is kind of an interesting canary in the coal mine for the trends that are going on around how data is evolving. What's next? How are you guys going to go to market? The partnership obviously makes a lot of sense: technical integration, business model integration, good fit. What's next for you guys? >> I mean, I think the great thing is that, from the CEO down, our organizations are very much aligned in terms of how we want to integrate our two solutions and how we want to go to market. So Will and I have been really focused on making sure that the various constituents within both of our companies have the level of education and knowledge to bring these results to bear, coupled with the integration of our two technologies. Will, your thoughts? >> Yeah, absolutely. I mean, between our CEOs, who have a good cadence, to Kiran and myself, who probably spend too much time on the phone at this point, we might have to get him a guest bedroom or something, alignment is a huge key here: ensuring that we've enabled our field teams to evangelize this out to the marketplace, and then, whether it's this or our webinars or however we're getting the news out, making sure the market knows that these capabilities are out there. Because the biggest obstacle to adoption, honestly, is not other solutions or build-it-yourself; it's just lack of knowledge that it could be easier, that it could be done better, that you could know your data better and catalog it better. >> Great. Final question to end the segment: a message to the potential customer out there. What about their environment might make them a great prospect for this solution? Is it a known problem? Is it a blind spot? When would someone know to call you guys up and leverage this partnership? Is it too much data, too many applications across geographies? I'm just trying to understand, for the folks watching, when it's an opportunity to call you guys. >> Well, from an Alation perspective, there can never be too much data. A signal that may indicate an interest or a potential fit for us would be the need to be compliant with one or more data privacy regulations, and as Will said, these are coming up left and right. Individual states, in addition to countries, are rolling out data privacy regulations that require a whole set of capabilities to be in place and a very rigorous framework of compliance. Those requirements, and the ability to make decisions every single day, all day long, about what data to use, when, and under what conditions, are a perfect set of conditions for the use of a data catalog like Alation coupled with a data discovery and data privacy solution like BigID. >> Well, absolutely. If you're an organization out there and you have a lot of customers, a lot of employees, and a lot of different data sources in disparate locations, whether they're on-prem or in the cloud, these are solid indications that you
should look at purchasing best-of-breed solutions like Alation and BigID as opposed to trying to build something internally. >> Guys, congratulations. Alation, strengthening your privacy capabilities with the BigID partnership. Congratulations on the news, and we'll be tracking it. Thanks for coming on, I appreciate it. >> Thank you. >> Okay, theCUBE coverage here in Palo Alto, doing remote interviews as we get through this COVID crisis with our quarantine crew here in Palo Alto. I'm John Furrier, thanks for watching. [Music]

Published Date : May 13 2020


Rachel Tobac, SocialProof Security | CUBE Conversation, April 2020


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We are here in our Palo Alto studios today. We got through March; these are some really crazy times. So we're taking advantage of the opportunity to reach out to some of the community leaders that we have in our community to get some good tips and tricks on how to deal with this current situation: all the working from home, school from home. And we're really excited to have one of the experts, one of my favorite CUBE guests. We haven't had her around since October 2017, which I find crazy. And we'd love to welcome into theCUBE, via the remote dial-in, Rachel Tobac. She is the CEO of SocialProof Security. Rachel, great to see you, and I cannot believe that we have not sat down since 2017. >> I know, I can't believe it, it's been so much time. Thanks for having me back. >> Absolutely, but we are good Twitter friends. >> Oh yeah. >> Exchanging stuff all the time. So, first off, great to see you. Just as a kind of introduction, tell us a little bit about SocialProof Security and your very unique specialty. >> Yes. SocialProof Security is all about social engineering and protecting you from those types of attackers. So, basically, we help you understand how folks manipulate you and try and gain access to your information. I am an attacker myself, so I basically go out, try it, learn what we can learn about how we do our attacks, and then go on and train you to protect your organization. So, training and testing. >> Alright. Well, I am going to toot your horn a little bit louder than that, because I think it's amazing. I think that you are basically 100% undefeated in hacking people during contests at conventions, live. And it's fascinating to me, and why I think it's so important: it's not a technical hack at all. It's a human hack, and your success is amazing. And I've seen you do it; there are tons of videos out there with you doing it. So, what are kind of just the quick and dirty takeaways that people need to think about, knowing that there are social hackers, not necessarily machine hackers, out there trying to take advantage of them? What are some of these inherent weaknesses that we just have built into the system? >> Yeah, thanks for your kind words too, I appreciate that. The challenge with social engineering is that it leverages your principles of persuasion, the parts of you that you cannot switch off. And so, I might pretend to be similar to you so that I can build rapport with you. And it's really hard for you to switch that off, because you want to be a kind person, you want to be nice and trusting. But it's hard, it's a tough world out there, and unfortunately criminals will leverage elements of your personality and your preferences against you. So, for instance, if I know you have a dog, then I might play a YouTube video of a dog barking and try and gain access to information about your systems and your data, while pretending to be IT support, for example. And that's really tough because, you know, three minutes into the conversation we are already talking about our dog breeds, and now you want to trust me more. But unfortunately, just because we have something in common, it doesn't mean that I am who I say I am. And so, I always recommend people are politely paranoid.
It just basically means that you use two methods of communication to confirm that people are who they say they are. And if they are trying to get you to divulge sensitive information or go through with a wire transfer, for instance, you want to make sure that you check that first. We just saw an example of this with Barbara Corcoran, famously on Shark Tank, where she has many investments in real estate. And unfortunately a cyber criminal was able to take advantage and get almost $400,000 wired over to them, and they did lose that money, because they were able to take advantage of the bookkeeper, the accountant, and the assistant, and folks just were not checking back and forth, with multiple methods of communication, that people were who they said they were. >> It's crazy. A friend of mine actually is in the real estate business. And we were talking earlier this year, and he got a note from his banker. Looked like his banker's email. It was the guy's name that he works with all the time. Was talking about a transfer. It didn't have a bunch of weird misspellings and bad grammar, and all kind of the old-school things that would expose it as a hack. And he picked up the phone and called the guy, and said, "We don't have a transaction happening right now. Why did you send this to me?" So it's getting really, really good. But let's dive into just a little vocabulary 101. When people talk about "phishing" and "spearphishing," what does that exactly mean for people that aren't really familiar with those terms? >> Sure. Most likely you are going to see it happen over email. In fact, with COVID-19 right now, we've seen through Google's Transparency Report on phishing that there's been a 350% increase in phishing attacks. And I believe Brisk had this huge research that said that there were 300,000-plus suspicious COVID-19 phishing websites that were just spun up in the past couple of weeks. It's pretty scary, but basically what they are trying to do is get you to input your credentials. They are trying to get access to your machine or your credentials so that they can use them on other high-value sites, gain access to your information, your data points, your sensitive data, basically, and use that against you. It's really tough. Unfortunately, criminals don't take a break even in crisis. >> Yeah, they are not self-isolating, unfortunately; I guess they are sitting there with their computers. So that's interesting. So, I was going to ask you kind of what is the change in the landscape now. So you answered a little bit there, but the other huge thing that's happening now is everybody is working from home. They are all on Zoom, they are all on Skype, WebEx. And you've actually had some really timely posts just recently about little things that people should think about in terms of just settings on Zoom, to avoid some of the really unfortunate things that are popping in kind of randomly on Zoom meetings. So, I wonder if you could share some of those tips and tricks with the audience. >> Yeah, absolutely. Some of the big issues that we are seeing recently is what people have coined as Zoombombing. It's all over the news, so you've probably heard about it before, but in case you are wondering exactly what that is: it's whenever an attacker either guesses your Zoom ID code, and you don't have a password on the Zoom call that you are in the middle of, or they might gain access to your Zoom ID code because maybe you took a screenshot of your Zoom and posted that to social media.
And now, if you don't have password protection on or your waiting room enabled, they can just join your call, and sometimes you might not notice that they are on the call, which could lead to privacy issues, a data breach, for instance, or just a sensitive data leak. If they join via the phone you might not even notice that they are on the call. And so it's really important to make sure that you have password protection on for your Zoom and you have waiting rooms enabled. And you don't want to take pictures of your workstation. I know that's really tough for folks, because they want to showcase how connected they are during these difficult times, and I do understand that. But realize that when you take those screenshots of your workstation... this is something that we just saw in the news with Boris Johnson a few days ago. He posted an image of his Zoom call and it included some of the software they used. And so, you just mentioned spearphishing, right? I can look at some of that software, get an idea for maybe the version of his operating system and the version of some of the software he may be using on his machine, and craft a very specific spearphish just for him that I know will likely work on his machine, with his software installed, because I understand the version and the known vulnerabilities in that software. So there are a lot of problems with posting those types of pictures. As a blanket rule, you are not going to want to take pictures of your workstation. Especially not now. >> Okay, so, I remember that lesson that you taught me when we were in Houston at Grace Hopper. Do not take selfies in front of your PC, in front of your work laptop. 'Cause as you said, you can identify all types of OS information, information that gives you an incredible advantage when you are trying to hack into my machine. >> Yeah, that's true. And I think a lot of people don't realize, they are like, "everybody uses the browser, everybody uses PowerPoint," for example. But sometimes the icons and logos that you have on your machine really give me good information about the exact version, and potentially the versions that might be out of date on your machine. When I can look up those known vulnerabilities pretty easily, that's a pretty big risk. The other thing that we see is people take screenshots and I can see their desktop, and when I can see your desktop, I might know the naming convention that you use for your files, which I can name-drop with you or talk about on the phone or over email to convince you that I really do have access to your machine, like I am IT support or something. >> Yeah, it's great stuff. So for people who want more of this great stuff, go to Rachel's Twitter handle. I'm sure we have it here on the lower third. You've got the great piece with Last Week Tonight with John Oliver, hacking the voting machines like a week before the elections last year, which was phenomenal. Now I just saw you in this new HBO piece where you actually sit down at the desk with the guy running the show and hack his systems. Really good stuff. Really simple stuff. Let's shift gears one more time, really in terms of what you are doing now. You said you are doing some help in the community to directly help those in need as we go through this crisis. People are trying to find a way to help. Tell us a little bit more about what you are doing. >> Yeah, as soon as I started noticing how intense COVID-19 was wreaking havoc on the hospital and healthcare systems in the world, I decided to just make my services available for free.
And so I put out a call on my social media and let folks know, "Hey, if you need training, if you need support, if you just want to walk through some of your protocols and how I might gain access to your systems or your sensitive data through those protocols, let me know and I'll chat with you." And I've had an amazing response. Being able to work with hospitals all over the world, for free, to make sure that they have the support that they need during COVID-19 really does mean a lot to me, because it's tough. I feel kind of powerless in this situation; there's not a lot that I can personally do. There are many brave folks who are out there risking it all every single day to do the work to keep folks safe. So I'm just trying to do something to help support the healthcare industry as they save lives. >> Well, that's great. I mean, it is great, 'cause if you are helping the people that are helping, you know, you are helping. Maybe not directly with patients, but that's really important work. And there's a lot of stuff now coming out in terms of this kind of tunnel vision on COVID-19 and letting everything else fall by the wayside, including other medical procedures, and there is going to be a lot of collateral damage that we don't necessarily see, because the COVID situation has kind of displaced everything and blown it out. Anything that you can do to help people get more out of their resources and protect their vulnerabilities is nothing but goodness. So, thank you for doing that. So, I will give you the last word. What's your favorite kind of closing line when you are at Black Hat or RSA, to give these people the last little bit of "Come on, don't do stupid things. There are some simple steps you can take to be a little bit less vulnerable"? >> Yeah, I think something that we hear a lot is that people kind of give a blanket piece of advice. Like, don't click links. And that's not really actionable advice, because a lot of times you are required to click links or download that PDF attachment from HR. And many times it is legitimate for work. And so that type of advice isn't really the type of advice I like to give. Instead, I like to say just be politely paranoid and use two methods of communication to confirm it is legitimate before you go ahead and do that. And it will take a little bit of time, I'm not going to lie. It'll take you an extra 30 seconds to 60 seconds to just chat somebody and say, "Hey, quick question about that thing you sent over." But it can start to change the security consciousness of your culture. And maybe they'll put out a chat while they send out an email from HR to let you know that it is legitimate, and then you are kind of starting this cycle at the beginning. Not every single person has to ask individually; you can start getting that security consciousness going where people are politely paranoid, and they know that you are going to be too, so they are going to preempt it and make sure that you understand something is legitimate with a second form of communication. >> Great tip. I am a little taken aback; everybody now wants to get their customer satisfaction score so high that after like every transaction you get these silly surveys, "How was your time at Safeway? Or Bank of America?" All these things, Survey Monkey. I don't really know how those businesses stay in business anymore. I am not clicking on any Bank of America customer satisfaction or Safeway customer satisfaction link.
But I will be politely paranoid and look for the right ones to click on. (giggle) >> That's good, and use two methods of communication to confirm they are real. >> That's right, two-factor authentication. Alright, well, Rachel, thank you for taking a few minutes of your time. Thank you for your good work with hospitals in the community, and I really enjoyed catching up. As always, love your work, and I'm sure we'll be talking more on Twitter. >> Thanks for having me on again, and I'll see you on the Internet. >> All right, be safe. >> Rachel: Thank you. >> All right, that was Rachel. I am Jeff. You are watching theCUBE. We are coming to you from our Palo Alto studios. Thanks for watching. Stay safe and we'll see you next time. (instrumental music)
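Rachel's two concrete Zoom recommendations in the conversation above, require a meeting passcode and keep the waiting room enabled, can also be applied programmatically when meetings are created, rather than relying on each host to remember the settings. The sketch below is a hedged illustration only: it assumes Zoom's v2 REST API endpoint for creating a meeting and its "password" and "settings.waiting_room" fields as documented around this period, and ACCESS_TOKEN is a placeholder credential. Verify the endpoint, field names, and authentication flow against the current Zoom API reference before relying on any of it.

# Hedged sketch: create a Zoom meeting with a passcode and waiting room enabled.
# The endpoint and JSON field names are assumptions based on Zoom's v2 API docs
# from this era; ACCESS_TOKEN is a placeholder OAuth/JWT credential.
import requests

ACCESS_TOKEN = "YOUR_ZOOM_API_TOKEN"  # placeholder, supply a real token

response = requests.post(
    "https://api.zoom.us/v2/users/me/meetings",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "topic": "Team sync",
        "type": 2,                        # scheduled meeting
        "password": "a-strong-passcode",  # per-meeting passcode
        "settings": {
            "waiting_room": True,         # attendees wait until the host admits them
            "join_before_host": False,
        },
    },
    timeout=30,
)
response.raise_for_status()
print(response.json().get("join_url"))

For a whole organization, locking these defaults in the Zoom admin console at the account or group level is the more reliable control; a call like the one above is simply one way to bake the "politely paranoid" defaults into an automated meeting-creation workflow.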

Published Date : Apr 2 2020


Tapping Vertica's Integration with TensorFlow for Advanced Machine Learning


 

>> Paige: Hello, everybody, and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "Tapping Vertica's Integration with TensorFlow for Advanced Machine Learning." I'm Paige Roberts, Opensource Relations Manager at Vertica, and I'll be your host for this session. Joining me is Vertica Software Engineer, George Larionov. >> George: Hi. >> Paige: (chuckles) That's George. So, before we begin, I encourage you guys to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click submit. So, as soon as a question occurs to you, go ahead and type it in, and there will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to get to during that time. Any questions we don't get to, we'll do our best to answer offline. Now, alternatively, you can visit the Vertica Forum to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going, so you can ask an engineer afterwards, just as if it were a regular conference in person. Also, a reminder: you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And, before you ask, yes, this virtual session is being recorded, and it will be available to view by the end of this week. We'll send you a notification as soon as it's ready. Now, let's get started. Over to you, George. >> George: Thank you, Paige. So, I've been introduced. I'm a Software Engineer at Vertica, and today I'm going to be talking about a new feature, Vertica's integration with TensorFlow. So, first, I'm going to go over what is TensorFlow and what are neural networks. Then, I'm going to talk about why integrating with TensorFlow is a useful feature, and, finally, I am going to talk about the integration itself and give an example. So, as we get started here, what is TensorFlow? TensorFlow is an open-source machine learning library, developed by Google, and it's actually one of many such libraries. And the whole point of libraries like TensorFlow is to simplify the whole process of working with neural networks, such as creating, training, and using them, so that it's available to everyone, as opposed to just a small subset of researchers. So, neural networks are computing systems that allow us to solve various tasks. Traditionally, computing algorithms were designed completely from the ground up by engineers like me, and we had to manually sift through the data and decide which parts are important for the task and which are not. Neural networks aim to solve this problem, a little bit, by sifting through the data themselves, automatically, and finding traits and features which correlate to the right results. So, you can think of it as neural networks learning to solve a specific task by looking through the data, without having human beings have to sit and sift through the data themselves. So, there are a couple of necessary parts to getting a trained neural model, which is the final goal. By the way, a neural model is the same as a neural network; those are synonymous. So, first, you need this light blue circle, an untrained neural model, which is pretty easy to get in TensorFlow, and, in addition to that, you need your training data. Now, this involves both training inputs and training labels, and I'll talk about exactly what those two things are on the next slide.
But, basically, you need to train your model with the training data, and, once it is trained, you can use your trained model to predict on just the purple circle, so new training inputs. And, it will predict the training labels for you. You don't have to label it anymore. So, a neural network can be thought of as... Training a neural network can be thought of as teaching a person how to do something. For example, if I want to learn to speak a new language, let's say French, I would probably hire some sort of tutor to help me with that task, and I would need a lot of practice constructing and saying sentences in French. And a lot of feedback from my tutor on whether my pronunciation or grammar, et cetera, is correct. And, so, that would take me some time, but, finally, hopefully, I would be able to learn the language and speak it without any sort of feedback, getting it right. So, in a very similar manner, a neural network needs to practice on, example, training data, first, and, along with that data, it needs labeled data. In this case, the labeled data is kind of analogous to the tutor. It is the correct answers, so that the network can learn what those look like. But, ultimately, the goal is to predict on unlabeled data which is analogous to me knowing how to speak French. So, I went over most of the bullets. A neural network needs a lot of practice. To do that, it needs a lot of good labeled data, and, finally, since a neural network needs to iterate over the training data many, many times, it needs a powerful machine which can do that in a reasonable amount of time. So, here's a quick checklist on what you need if you have a specific task that you want to solve with a neural network. So, the first thing you need is a powerful machine for training. We discussed why this is important. Then, you need TensorFlow installed on the machine, of course, and you need a dataset and labels for your dataset. Now, this dataset can be hundreds of examples, thousands, sometimes even millions. I won't go into that because the dataset size really depends on the task at hand, but if you have these four things, you can train a good neural network that will predict whatever result you want it to predict at the end. So, we've talked about neural networks and TensorFlow, but the question is if we already have a lot of built-in machine-learning algorithms in Vertica, then why do we need to use TensorFlow? And, to answer that question, let's look at this dataset. So, this is a pretty simple toy dataset with 20,000 points, but it shows, it simulates a more complex dataset with some sort of two different classes which are not related in a simple way. So, the existing machine-learning algorithms that Vertica already has, mostly fail on this pretty simple dataset. Linear models can't really draw a good line separating the two types of points. Naïve Bayes, also, performs pretty badly, and even the Random Forest algorithm, which is a pretty powerful algorithm, with 300 trees gets only 80% accuracy. However, a neural network with only two hidden layers gets 99% accuracy in about ten minutes of training. So, I hope that's a pretty compelling reason to use neural networks, at least sometimes. So, as an aside, there are plenty of tasks that do fit the existing machine-learning algorithms in Vertica. 
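Before turning to those built-in algorithms, here is roughly what a two-hidden-layer network like the one just described looks like in TensorFlow's Keras API. This is a minimal sketch, not the actual model or dataset behind the slide: the layer widths, the training settings, and the synthetic two-class dataset below are assumptions chosen only to mirror the shape of the example.

# Minimal sketch of a two-hidden-layer binary classifier in TensorFlow/Keras.
# The 20,000-point dataset is synthetic and stands in for the toy dataset on
# the slide; layer sizes and training settings are illustrative assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20_000, 2)).astype("float32")
# Two classes that are not separable by a simple line.
y = ((np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])) > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(2,)),  # hidden layer 1
    tf.keras.layers.Dense(32, activation="relu"),                    # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid"),                  # probability of class 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=20, batch_size=128, validation_split=0.1, verbose=2)

Whether a small network like this reaches the 99% figure quoted above depends on the data and the training budget; the point is only how little code the untrained model and the training loop require.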
That's why they're there, and if one of your tasks that you want to solve fits one of the existing algorithms, well, then I would recommend using that algorithm, not TensorFlow, because, while neural networks have their place and are very powerful, it's often easier to use an existing algorithm, if possible. Okay, so, now that we've talked about why neural networks are needed, let's talk about integrating them with Vertica. So, neural networks are best trained using GPUs, which are Graphics Processing Units, and it's, basically, just a different processing unit than a CPU. GPUs are good for training neural networks because they excel at doing many, many simple operations at the same time, which is needed for a neural network to be able to iterate through the training data many times. However, Vertica runs on CPUs and cannot run on GPUs at all because that's not how it was designed. So, to train our neural networks, we have to go outside of Vertica, and exporting a small batch of training data is pretty simple. So, that's not really a problem, but, given this information, why do we even need Vertica? If we train outside, then why not do everything outside of Vertica? So, to answer that question, here is a slide that Philips was nice enough to let us use. This is an example of production system at Philips. So, it consists of two branches. On the left, we have a branch with historical device log data, and this can kind of be thought of as a bunch of training data. And, all that data goes through some data integration, data analysis. Basically, this is where you train your models, whether or not they are neural networks, but, for the purpose of this talk, this is where you would train your neural network. And, on the right, we have a branch which has live device log data coming in from various MRI machines, CAT scan machines, et cetera, and this is a ton of data. So, these machines are constantly running. They're constantly on, and there's a bunch of them. So, data just keeps streaming in, and, so, we don't want this data to have to take any unnecessary detours because that would greatly slow down the whole system. So, this data in the right branch goes through an already trained predictive model, which need to be pretty fast, and, finally, it allows Philips to do some maintenance on these machines before they actually break, which helps Philips, obviously, and definitely the medical industry as well. So, I hope this slide helped explain the complexity of a live production system and why it might not be reasonable to train your neural networks directly in the system with the live device log data. So, a quick summary on just the neural networks section. So, neural networks are powerful, but they need a lot of processing power to train which can't really be done well in a production pipeline. However, they are cheap and fast to predict with. Prediction with a neural network does not require GPU anymore. And, they can be very useful in production, so we do want them there. We just don't want to train them there. So, the question is, now, how do we get neural networks into production? So, we have, basically, two options. The first option is to take the data and export it to our machine with TensorFlow, our powerful GPU machine, or we can take our TensorFlow model and put it where the data is. In this case, let's say that that is Vertica. So, I'm going to go through some pros and cons of these two approaches. The first one is bringing the data to the analytics. 
The pros of this approach are that TensorFlow is already installed, running on this GPU machine, and we don't have to move the model at all. The cons, however, are that we have to transfer all the data to this machine and if that data is big, if it's, I don't know, gigabytes, terabytes, et cetera, then that becomes a huge bottleneck because you can only transfer in small quantities. Because GPU machines tend to not be that big. Furthermore, TensorFlow prediction doesn't actually need a GPU. So, you would end up paying for an expensive GPU for no reason. It's not parallelized because you just have one GPU machine. You can't put your production system on this GPU, as we discussed. And, so, you're left with good results, but not fast and not where you need them. So, now, let's look at the second option. So, the second option is bringing the analytics to the data. So, the pros of this approach are that we can integrate with our production system. It's low impact because prediction is not processor intensive. It's cheap, or, at least, it's pretty much as cheap as your system was before. It's parallelized because Vertica was always parallelized, which we'll talk about in the next slide. There's no extra data movement. You get the benefit from model management in Vertica, meaning, if you import multiple TensorFlow models, you can keep track of their various attributes, when they were imported, et cetera. And, the results are right where you need them, inside your production pipeline. So, two cons are that TensorFlow is limited to just prediction inside Vertica, and, if you want to retrain your model, you need to do that outside of Vertica and, then, reimport. So, just as a recap of parallelization. Everything in Vertica is parallelized and distributed, and TensorFlow is no exception. So, when you import your TensorFlow model to your Vertica cluster, it gets copied to all the nodes, automatically, and TensorFlow will run in fenced mode which means that it the TensorFlow process fails for whatever reason, even though it shouldn't, but if it does, Vertica itself will not crash, which is obviously important. And, finally, prediction happens on each node. There are multiple threads of TensorFlow processes running, processing different little bits of data, which is faster, much faster, than processing the data line by line because it happens all in a parallelized fashion. And, so, the result is fast prediction. So, here's an example which I hope is a little closer to what everyone is used to than the usual machine learning TensorFlow example. This is the Boston housing dataset, or, rather, a small subset of it. Now, on the left, we have the input data to go back to, I think, the first slide, and, on the right, is the training label. So, the input data consists of, each line is a plot of land in Boston, along with various attributes, such as the level of crime in that area, how much industry is in that area, whether it's on the Charles River, et cetera, and, on the right, we have as the labels the median house value in that plot of land. And, so, the goal is to put all this data into the neural network and, finally, get a model which can train... I don't know, which can predict on new incoming data and predict a good housing value for that data. Now, I'm going to go through, step by step, how to actually use TensorFlow models in Vertica. 
So, the first step I won't go into much detail on, because there are countless tutorials and resources online on how to use TensorFlow to train a neural network, so that's the first step. The second step is to save the model in TensorFlow's 'frozen graph' format. Again, this information is available online. The third step is to create a small, simple JSON file describing the inputs and outputs of the model, and what data type they are, et cetera. And, this is needed for Vertica to be able to translate from TensorFlow land into Vertica SQL land, so that it can use a SQL table instead of the input format TensorFlow usually takes. So, once you have your model file and your JSON file, you want to put both of those files in a directory on a node, any node, in a Vertica cluster, and name that directory whatever you want your model to ultimately be called inside of Vertica. So, once you do that, you can go ahead and import that directory into Vertica. So, this IMPORT_MODELS function already exists in Vertica. All we added was a new category to be able to import. So, what you need to do is specify the path to your neural network directory and specify that the category of the model is TensorFlow. Once you successfully import, in order to predict, you run this brand new PREDICT_TENSORFLOW function. So, in this case, we're predicting on everything from the input table, which is what the star means. The model name is boston_housing_net, which is the name of your directory, and, then, there's a little bit of boilerplate. And, the two names, ID and value, after the AS are just the names of the columns of your outputs, and, finally, the Boston housing data is whatever SQL table you want to predict on that fits the input type of your network. And, this will output a bunch of predictions. In this case, values of houses that the network thinks are appropriate for all the input data. So, just a quick summary. So, we talked about what TensorFlow is and what neural networks are, and, then, we discussed that TensorFlow works best on GPUs because it needs very specific characteristics. That is, TensorFlow works best for training on GPUs, while Vertica is designed to use CPUs, and it's really good at storing and accessing a lot of data quickly. But, it's not very well designed for having neural networks trained inside of it. Then, we talked about how neural models are powerful, and we want to use them in our production flow. And, since prediction is fast, we can go ahead and do that, but we just don't want to train there. And, finally, I presented Vertica's TensorFlow integration, which allows importing a trained neural model, a trained TensorFlow model, into Vertica and predicting on all the data that is inside Vertica with a few simple lines of SQL. So, thank you for listening. I'm going to take some questions, now.
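To make the import and predict steps above a bit more concrete, here is a minimal sketch of what the two calls can look like. The directory path, the model name (boston_housing_net) and the table name (boston_housing_data) are taken from the walkthrough or assumed for illustration; check the IMPORT_MODELS and PREDICT_TENSORFLOW documentation for the exact parameters in your Vertica version.

```sql
-- Import the frozen-graph model directory (its name becomes the model name in Vertica).
SELECT IMPORT_MODELS('/home/dbadmin/models/boston_housing_net'
                     USING PARAMETERS category='TENSORFLOW');

-- Predict on a SQL table whose columns match the inputs declared in the JSON file.
SELECT PREDICT_TENSORFLOW(*
                          USING PARAMETERS model_name='boston_housing_net')
       OVER(PARTITION BEST) AS (id, value)
FROM boston_housing_data;
```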

Published Date : Mar 30 2020


Optimizing Query Performance and Resource Pool Tuning


 

>> Jeff: Hello, everybody, and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is titled "Optimizing Query Performance and Resource Pool Tuning." I'm Jeff Healey, I lead Vertica marketing. I'll be your host for this breakout session. Joining me today are Rakesh Bankula and Abhi Thakur, Vertica product technology engineers and key members of the Vertica customer success team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer them offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of your slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Rakesh. >> Rakesh: Thank you, Jeff. Hello, everyone. My name is Rakesh Bankula. Along with me, we have Bir Abhimanu Thakur. We both are going to cover the present session on "Optimizing Query Performance and Resource Pool Tuning." In this session, we are going to discuss query optimization, how to review query plans and how to get the best query plans with proper projection design. Then we will discuss resource allocation and how to find resource contention. And we will continue the discussion with some important use cases. In general, to successfully complete any activity or any project, the main thing it requires is a plan: a plan for that activity on what to do first, what to do next, and what things you can do in parallel. The next thing you need is the best people to work on that project as per the plan. So, the first thing is a plan and the next is the people or resources. If you overload the same set of people or resources by involving them in multiple projects or activities, or if any person or resource is sick, in a given project that is going to impact the overall completion of that project. The same analogy we can apply to query performance too. For a query to perform well, it needs two main things. One is the best query plan and the other is the best resources to execute the plan. Of course, in some cases, resource contention, whether it comes from the system side or from within the database, may slow down the query even when we have the best query plan and the best resource allocations. We are going to discuss each of these three items a little more in depth. Let us start with the query plan. The user submits the query to the database and the Vertica optimizer generates the query plan. In generating query plans, the optimizer uses the statistics information available on the tables. So, statistics play a very important role in generating good query plans. As a best practice, always maintain up-to-date statistics. If you want to see how a query plan looks, add the EXPLAIN keyword in front of your query and run that query. It displays the query plan on the screen. The other option is the DC_EXPLAIN_PLANS table. It saves all the explain plans of the queries run on the database.
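As a quick sketch of those two steps, using a hypothetical store_sales / product_dimension pair of tables: refresh the statistics, then prefix the query with EXPLAIN to see its plan without running it.

```sql
-- Keep optimizer statistics up to date on the tables involved in the query.
SELECT ANALYZE_STATISTICS('public.store_sales');

-- Print the query plan without executing the query.
EXPLAIN
SELECT s.store_state, COUNT(*)
FROM store_sales s
JOIN product_dimension p ON s.product_key = p.product_key
GROUP BY s.store_state;
```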
So, once you have a query plan, you check it to make sure the plan is good. The first thing I would look for is any 'no statistics' or 'predicate out of range' warnings. If you see any of these, it means a table involved in the query has no up-to-date statistics, and it is now time to update the statistics. The next thing to look for in explain plans is broadcasts and resegments around the Join operators, and global resegments around Group By operators. These indicate that, during the runtime of the query, data flows between the nodes over the network, and this will slow down the query execution. As far as possible, prevent such operations. How to prevent this, we will discuss in the projection design topic. Regarding the Join order, check which tables are used on the inner side and the outer side, and how many rows each side is processing. In general, picking the table having the smaller number of rows for the inner side is good because, as the Join hash table is built in memory, the smaller the number of rows, the faster it is to build the hash table, and it also helps in consuming less memory. Then check if the plan is picking a query-specific projection or the default projections. If the optimizer is ignoring a query-specific projection and picking the default super projection, we will show you how to use query-specific hints to force the plan to pick query-specific projections, which helps in improving the performance. Okay, here is one example query plan of a query trying to find the number of products sold from a store in a given state. This query has Joins between the store table and the product table, and a group by operation to find the count. So, first look for 'no statistics', particularly around the storage access path. This plan is not reporting any 'no statistics'. This means statistics are up to date and the plan is good so far. Then check what projections are used. This is also around the storage access part. For the Join order check, we have a Hash Join in Path ID 4, having its inner in Path ID 6 processing 60,000 rows and its outer in Path ID 7 processing 20 million rows. The inner side processing fewer records is good. This helps in building the hash table quicker by using less memory. Then check for any broadcasts or resegments. The Joins in Path ID 4 and also Path ID 3 both have an inner broadcast: the inners, having 60,000 records, are broadcast to all nodes in the cluster. This could impact the query performance negatively. These are some of the main things which we normally check in the explain plans. Till now, we have seen how to get good query plans: to get good query plans, we need to maintain up-to-date statistics, and we also discussed how to review query plans. Projection design is the next important thing in getting good query plans, particularly in preventing broadcasts and resegments. Broadcasts and resegments happen during a Join operation when the existing segmentation clause of the projections involved in the Join does not match the Join columns in the query. These operations cause data flow over the network and negatively impact the query performance, particularly when they transfer millions or billions of rows. These operations also cause the query to acquire more memory, particularly in the network send and receive operations. One can avoid these broadcasts and resegments with proper projection segmentation. Say a Join is involved between two fact tables, T1 and T2, on column i; then segment the projections on these T1 and T2 tables on column i. This is also called identically segmenting projections.
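A minimal sketch of identically segmented projections for that T1/T2 join on column i follows; the projection names are illustrative, and the new projections need a refresh before the optimizer can use them.

```sql
-- Segment (and order) both projections on the join column i,
-- so the join can run node-locally without a broadcast or resegment.
CREATE PROJECTION t1_join_p AS
    SELECT * FROM t1 ORDER BY i SEGMENTED BY HASH(i) ALL NODES;

CREATE PROJECTION t2_join_p AS
    SELECT * FROM t2 ORDER BY i SEGMENTED BY HASH(i) ALL NODES;

-- Populate the new projections.
SELECT REFRESH('t1');
SELECT REFRESH('t2');
```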
In other cases, where the Join is involved between a fact table and a dimension table, replicating, that is, creating an unsegmented projection on the dimension table, will help avoid broadcasts and resegments during the Join operation. During a group by operation, global resegment groups cause data flow over the network. This can also slow down the query performance. To avoid these global resegment groups, create the segmentation clause of the projection to match the group by columns in the query. In the previous slides, we have seen the importance of the projection segmentation clause in preventing broadcasts and resegments during the Join operation. The ORDER BY clause of the projection design plays an important role in picking the Join method. We have two important Join methods, Merge Join and Hash Join. Merge Join is faster and consumes less memory than Hash Join. The query plan uses Merge Join when both projections involved in the Join operation are segmented and ordered on the Join keys. In all other cases, the Hash Join method will be used. In the case of a group by operation too, we have two methods: Group By Pipeline and Group By Hash. Group By Pipeline is faster and consumes less memory compared to Group By Hash. The requirement for Group By Pipeline is that the projection must be segmented and ordered on the grouping columns. In all other cases, the Group By Hash method will be used. So far, we have seen the importance of stats and projection design in getting good query plans. As statistics are based on estimates over a sample of data, it is possible, in very rare cases, that the default query plan may not be as good as you expected, even after maintaining up-to-date stats and a good projection design. To work around this, Vertica provides you some query hints to force the optimizer to generate even better query plans. Here are some example Join hints which help in picking the Join method and in how to distribute the data, that is, broadcast or resegment on the inner or outer side, and also which group by method to pick. The table-level hints help to force picking a query-specific projection, or skipping any particular projection, in a given query. All these hints are available in the Vertica documentation. Here are a few general hints useful in controlling things like how to load data, WITH clause materialization, et cetera. We are going to discuss some examples on how to use these query hints. Here is an example on how to force a query plan to pick a Hash Join. The hint used here is JTYPE, which takes the arguments H for Hash Join and M for Merge Join. How to place this hint: just after the Join keyword in the query, as shown in the example here. Another important Join hint is JFMT, the Join column format hint. This hint is useful in the case when the Join columns are long varchars. By default, Vertica allocates memory based on the column data type definition, not by looking at the actual data length in those columns. Say, for example, a Join column is defined as varchar(1000), varchar(5000) or more, but the actual length of the data in this column is, say, less than 50 characters. Vertica is going to use more memory to process such columns in the Join, and this also slows down the Join processing. The JFMT hint is useful in this particular case. The JFMT parameter of 'V' uses the actual length of the Join column. As shown in the example, using the JFMT(V) hint helps in reducing the memory requirement for this query, and it executes faster too. The DISTRIB hint helps in forcing the inner or outer side of the Join operator to be distributed using broadcast or resegment. DISTRIB takes two parameters. The first is the outer side and the second is the inner side.
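Here is a rough sketch of how these hints are placed, directly after the JOIN keyword, on an illustrative pair of tables. The hint spellings follow the talk (JTYPE, JFMT, DISTRIB), but it is worth confirming them against the hints section of the Vertica documentation for your release.

```sql
-- Force a hash join and size varchar join keys by their actual data length.
SELECT *
FROM t1 JOIN /*+ JTYPE(H), JFMT(V) */ t2 ON t1.cust_key = t2.cust_key;

-- Leave the outer distribution to the optimizer (A) but resegment the inner (R).
SELECT *
FROM t1 JOIN /*+ DISTRIB(A,R) */ t2 ON t1.cust_key = t2.cust_key;
```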
As shown in the example, DISTRIB(A,R) after Join keyword in the query helps to force re segment the inner side of the Join, outer side, leaving it to optimizer to choose that distribution method. GroupBy Hint helps in forcing query plan to pick Group by Hash or Group by Pipeline. As shown in the example, GB type or hash, used just after group by class in the query helps to force this query to pick Group by Hashtag. See now, we discussed the first part of query performance, which is query plans. Now, we are moving on to discuss next part of query performance, which is resource allocation. Resource Manager allocates resources to queries based on the settings on resource pools. The main resources which resource pools controls are memory, CPU, query concurrency. The important resource pool parameters, which we have to tune according to the workload are memory size, plan concurrency, mass concurrency and execution parallelism. Query budget plays an important role in query performance. Based on the query budget, query planner allocate worker threads to process the query request. If budget is very low, query gets less number of threads, and if that query requires to process huge data, then query takes longer time to execute because of less threads or less parallelism. In other case, if the budget is very high and query executed on the pool is a simple one which results in a waste of resources, that is, query which acquires the resources holds it till it complete the execution, and that resource is not available to other queries. Every resource pool has its own query budget. This query budget is calculated based on the memory size and client and currency settings on that pool. Resource pool status table has a column called Query Budget KB, which shows the budget value of a given resource pool. The general recommendation for query budget is to be in the range of one GB to 10 GB. We can do a few checks to validate if the existing resource pool settings are good or not. First thing we can check to see if query is getting resource allocations quickly, or waiting in the resource queues longer. You can check this in resource queues table on a live system multiple times, particularly during your peak workload hours. If large number of queries are waiting in resource queues, indicates the existing resource pool settings not matching with your workload requirements. Might be, memory allocated is not enough, or max concurrency settings are not proper. If query's not spending much time in resource queues indicates resources are allocated to meet your peak workload, but not sure if you have over or under allocated the resources. For this, check the budget in resource pool status table to find any pool having way larger than eight GB or much smaller than one GB. Both over allocation and under allocation of budget is not good for query performance. Also check in DC resource acquisitions table to find any transaction acquire additional memory during the query execution. This indicates the original given budget is not sufficient for the transaction. Having too many resource pools is also not good. How to create resource pools or even existing resource pools. Resource pool settings should match to the present workload. You can categorize the workload into well known workload and ad-hoc workload. In case of well-known workload, where you will be running same queries regularly like daily reports having same set of queries processing similar size of data or daily ETL jobs et cetera. In this case, queries are fixed. 
Depending on the complexity of the queries, you can further divide it into low, medium, high resource required pools. Then try setting the budget to 1 GB, 4 GB, 8 GB on these pools by allocating the memory and setting the plan concurrency as per your requirement. Then run the query and measure the execution time. Try couple UP iterations by increasing and then decreasing the budget to find the best settings for your resource pools. For category of ad-hoc workload where there is no control over the number of users going to run the queries concurrently, or complexity of queries user going to submit. For this category, we cannot estimate, in advance, the optimum query budget. So for this category of workload, we have to use cascading resource pool settings where query starts on the pool based on the runtime they have set, then query resources moves to a secondary pool. This helps in preventing smaller queries waiting for resources, longer time when a big query consuming all resources and rendering for a longer time. Some important resource pool monitoring tables, analyze system, you can query resource cues table to find any transaction waiting for resources. You will also find on which resource pool transaction is waiting, how long it is waiting, how many queries are waiting on the pool. Resource pool status gives info on how many queries are in execution on each resource pool, how much memory in use and additional info. For resource consumption of a transaction which was already completed, you can play DC resource acquisitions to find how much memory a given transaction used per node. DC resource pool move table shows info on what our transactions moved from primary to secondary pool in case of cascading resource pools. DC resource rejections gives info on which node, which resource a given transaction failed or rejected. Query consumptions table gives info on how much CPU disk network resources a given transaction utilized. Till now, we discussed query plans and how to allocate resources for better query performance. It is possible for queries to perform slower when there is any resource contention. This contention can be within database or from system side. Here are some important system tables and queries which helps in finding resource contention. Table DC query execution gives the information on transaction level, how much time it took for each execution step. Like how much time it took for planning, resource allocation, actual execution etc. If the time taken is more in planning, which is mostly due to catalog contentions, you can play DC lock releases table as shown here to see how long transactions are waiting to acquire global catalog lock, how long transaction holding GCL x. Normally, GCL x acquire and release should be done within a couple of milliseconds. If the transactions are waiting for a few seconds to acquire GCL x or holding GCL x longer indicates some catalog contention, which may be due to too many concurrent queries or due to long running queries, or system services holding catalog mutexes and causing other transactions to queue up. A query is given here, particularly the system tables will help you further narrow down the contention. You can vary sessions table to find any long-running user queries. You can query system services table to find any service like analyze row counts, move out, merge operation and running for a long time. DC all evens table gives info on what are slower events happening. 
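To make the last few paragraphs concrete, here is a sketch of a cascading pool for ad-hoc work plus a couple of the monitoring queries just mentioned. The pool names and sizes are illustrative, and the exact column sets of these system tables vary a little by Vertica version.

```sql
-- Ad-hoc queries start on a small pool and cascade to a bigger one if they run long.
CREATE RESOURCE POOL adhoc_big   MEMORYSIZE '40G' PLANNEDCONCURRENCY 10;
CREATE RESOURCE POOL adhoc_small MEMORYSIZE '8G'  PLANNEDCONCURRENCY 8
       RUNTIMECAP '30 seconds' CASCADE TO adhoc_big;

-- Are queries queueing right now, and on which pool?
SELECT pool_name, node_name, memory_requested_kb, queue_entry_timestamp
FROM resource_queues;

-- Per-pool budget and usage (budget ideally between about 1 GB and 10 GB).
SELECT pool_name, query_budget_kb, memory_inuse_kb, running_query_count
FROM resource_pool_status;

-- Memory actually acquired per node by a finished transaction.
SELECT * FROM dc_resource_acquisitions
WHERE transaction_id = 45035996273705982;  -- example transaction id
```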
You can also query the SYSTEM_RESOURCE_USAGE table to find any particular system resource, like CPU, memory, disk I/O or network throughput, saturating on any node. It is possible that one slow node in the cluster could impact the overall performance of queries negatively. To identify any slow node in the cluster, we use a simple 'SELECT 1' query, which executes only on the initiator node. On a good node, the 'SELECT 1' query returns within 50 milliseconds. As shown here, you can use a script to run this 'SELECT 1' query on all nodes in the cluster. You can repeat this test multiple times, say five to 10 times, then review the time taken by this query on all nodes in all iterations. If there is any one node taking more than a few seconds, compared to other nodes taking just milliseconds, then something is wrong with that node. To find what is going on with the node which took more time for the 'SELECT 1' query, run perf top. Perf top gives info on the top functions in which the system is spending most of its time. These functions can be kernel functions or Vertica functions, as shown here. Based on where the system is spending most of its time, we will get some clue on what is going on with that node. Abhi will continue with the remaining part of the session. Over to you, Abhi. >> Bir: Hey, thanks, Rakesh. My name is Abhimanu Thakur, and today I will cover some performance cases which we have addressed recently in our customer clusters, where we applied the best practices just shown by Rakesh. Now, to find and fix a performance problem, it is always easier if we know where the problem is. And to understand that, like Rakesh just explained, the life of a query has different phases. The phases are pre-execution, which is the planning; execution; and post-execution, which is releasing all the acquired resources. This is something very similar to a plane taking a flight path, where it prepares itself, gets onto the runway, takes off and lands back onto the runway. So, let's prepare our flight to take off. So, this is a use case from a dashboard application, where the dashboard fails to refresh once in a while, and there is a batch of queries which are sent by the dashboard to the Vertica database. Let's see how we can find where the failure is, or where the slowness is. To review the dashboard application, since these are very short queries, we need to see what the historical executions were, and from the historical executions we basically try to find where exactly the time is spent, whether it is in the planning phase, the execution phase or in post-execution, and whether they are pretty consistent all the time, which means the plan has not changed across executions. This will also help us determine what memory is used and whether the memory budget is ideal. As just shown by Rakesh, the budget plays a very important role. So, DC_QUERY_EXECUTIONS is the one-stop place to go and find your timings, whether the time is in planning, in the execute plan step or in an abandoned plan. So, looking at the queries which we received and the timings from scrutinize, we find that the average execution time is pretty consistent, and there is some extra time spent in the planning phase, which is usually a sign of resource contention. This is a very simple matrix which you can follow to find if you have issues: system resource contention, catalog contention and resource pool contention, and all of these are contributed to mostly by concurrency.
And let's see if we can drill down further to find the issue in these dashboard application queries. So, to get the concurrency, we pull out the number of queries issued, what is the max concurrency achieved, what are the number of threads, what is the overall percentage of query duration and all this data is available in the V advisor report. So, as soon as you provide scrutinize, we generate the V advisor report which helps us get complete insight of this data. So, based on this we definitely see there is very high concurrency and most of the queries finish in less than a second which is good. There are queries which go beyond 10 seconds and over a minute, but so definitely, the cluster had concurrency. What is more interesting is to find from this graph is... I'm sorry if this is not very readable, but the topmost line what you see is the Select and the bottom two or three lines are the create, drop and alters. So definitely this cluster is having a lot of DDL and DMLs being issued and what do they contribute is if there is a large DDL and DMLs, they cause catalog contention. So, we need to make sure that the batch, what we're sending is not causing too many catalog contention into the cluster which delays the complete plan face as the system resources are busy. And the same time, what we also analyze is the analyze tactics running every hour which is very aggressive, I would say. It should be scheduled to be need only so if a table has not changed drastically that's not scheduled analyzed tactics for the table. A couple more settings has shared by Rakesh is, it definitely plays a important role in the modeled and mode operations. So now, let's look at the budget of the query. The budget of the resource pool is currently at about two GB and it is the 75 percentile memory. Queries are definitely executing at that same budget, which is good and bad because these are dashboard queries, they don't need such a large amount of memory. The max memory as shown here from the capture data is about 20 GB which is pretty high. So what we did is, we found that there are some queries run by different user who are running in the same dashboard pool which should not be happening as dashboard pool is something like a premium pool or kind of a private run way to run your own private jet. And why I made that statement is as you see, resource pools are lik runways. You have different resource pools, different runways to cater different types of plane, different types of flights which... So, as you can manage your resource pools differently, your flights can take off and land easily. So, from this we did remind that the budget is something which could be well done. Now let's look... As we saw in the previous numbers that there were some resource weights and like I said, because resource pools are like your runways. So if you have everything ready, your plane is waiting just to get onto the runway to take off, you would definitely not want to be in that situation. So in this case, what we found is the coolest... There're quite a bit number of queries which have been waited in the pool and they waited almost a second and which can be avoided by modifying the the amount of resources allocated to the resource pool. So in this case, we increase the resource pool to provide more memory which is 80 GB and reduce the budget from two GB to one GB. Also making sure that the plan concurrency is increased to match the memory budget and also we moved the user who was running into the dashboard query pool. 
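A sketch of the kind of change described here, with an illustrative pool name: grow the pool but shrink the per-query budget by raising planned concurrency (budget is roughly memory size divided by planned concurrency), and move the other user out of the dashboard pool.

```sql
-- 80 GB / 80 planned concurrent queries gives roughly a 1 GB budget per query.
ALTER RESOURCE POOL dashboard_pool MEMORYSIZE '80G' PLANNEDCONCURRENCY 80;

-- Keep the "private runway" private: send other workloads back to their own pool.
ALTER USER reporting_user RESOURCE POOL general;
```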
So, this is something which we have gone, which we found also in the resource pool is the execution parallelism and how this affects and what what number changes. So, execution parallelism is something which allocates the plan, allocates the number of threads, network buffers and all the data around it before even the query executes. And in this case, this pool had auto, which defaults to the core count. And so, dashboard queries not being too high on resources, they need to just get what they want. So we reduced the execution parallelism to eight and this drastically brought down the amount of threads which were needed without changing the time of execution. So, this is all what we saw how we could tune before the query takes off. Now, let's see what path we followed. This is the exact path what we followed. Hope of this diagram helps and these are the things which we took care of. So, tune your resource pool, adjust your execution parallelism based on the type of the queries the resource pool is catering to and match your memory sizes and don't be too aggressive on your resource budget. And see if you could replace your staging tables with temporary tables as they help a lot in reducing the DDLs and DMLs, reducing the catalog contention and the places where you cannot replace them with the truncate tables, reduce your analyzed statics duration and if possible, follow the best practices for a couple more operations. So moving on, let's let our query take a flight and see what best practices can be applied here. So this is another, I would say, very classic example of query where the query has been running and suddenly stops to fail. And if there is... I think most of the other seniors in a Join did not fit in memory. What does this mean? It basically means the inner table is trying to build a large Hash table, and it needs a lot of memory to fit. There are only two reasons why it could fail. One, your statics are outdated and your resource pool is not letting you grab all the memory needed. So in this particular case, the resource pool is not allowing all the memory it needs. As you see, the query acquire 180 GB of memory, and it failed. When looking at the... In most cases, you should be able to figure out the issue looking at the explained plan of the query as shared by Rakesh earlier. But in this case if you see, the explained plan looks awesome. There's no other operator like in a broadcast or outer V segment or something like that, it's just Join hash. So looking further we find into the projection. So inner is on segmented projection, the outer is segmented. Excellent. This is what is needed. So in this case, what we would recommend is go find further what is the cost. The cost to scan this row seems to be pretty high. There's the table DC query execution in their profiles in Vertica, which helps you drill down to every smallest amount of time, memory and what were the number of rows used by individual operators per pack. So, while looking into the execution engine profile details for this query, we found the amount of time spent is on the Join operator and it's the Join inner Hash table build time, which has taking huge amount of time. It's just waiting basically for the lower operators can and storage union to pass the data. So, how can we avoid this? Clearly, we can avoid it by creating a segmented projection instead of unsegmented projection on such a large table with one billion rows. Following the practice to create the projection... 
So this is a projection which was created and it was segmented on the column which is part of the select clause over here. Now, that plan looks nice and clean still, and the execution of this query now executes in 22 minutes 15 seconds and the most important you see is the memory. It executes in just 15 GB of memory. So, basically to what was done is the unsegmented projection which acquires a lot of memory per node is now not taking that much of memory and executing faster as it has been divided by the number of nodes per node to execute only a small share of data. But, the customer was still not happy as 22 minutes is still high. And let's see if we can tune it further to make the cost go down and execution time go down. So, looking at the explained plan again, like I said, most of the time, you could see the plan and say, "What's going on?" In this case, there is an inner re segment. So, how could we avoid the inner re segments? We can avoid the inner re segment... Most of the times, all the re segments just by creating the projection which are identically segmented which means your inner and outer both have the same amount, same segmentation clause. The same was done over here, as you see, there's now segment on sales ID and also ordered by sales ID which helps us execute the query drop from 22 minutes to eight minutes, and now the memory acquired is just equals to the pool budget which is 8 GB. And if you see, the most What is needed is the hash Join is converted into a merge Join being the ordered by the segmented clause and also the Join clause. So, what this gives us is, it has the new global data distribution and by changing the production design, we have improved the query performance. But there are times when you could not have changed the production design and there's nothing much which can be done. In all those cases, as even in the first case of Vertica after fail of the inner Join, the second Vertica replan (mumbles) spill to this operator. You could let the system degrade by acquiring 180 GB for whatever duration of minutes the query had. You could simply use this hand to replace and run the query in the very first go. Let the system have all the resources it needs. So, use hints wherever possible and filter disk is definitely your option where there're no other options for you to change your projection design. Now, there are times when you find that you have gone through your query plan, you have gone through every other thing and there's not much you see anywhere, but you definitely look at the query and you feel that, "Now, I think I can rewrite this query." And how what makes you decide that is you look at the query and you see that the same table has been accessed several times in my query plan, how can I rewrite this query to access my table just once? And in this particular use case, a very simple use case where a table is scanned three times for several different filters and then a union in Vertica union is kind of costly operator I would say, because union does not know what's the amount of data which should be coming from the underlying query. So we allocate a lot of resources to keep the union running. Now, we could simply replace all these unions by simple "Or" clause. So, simple "Or" clause changes the complete plan of the query and the cost drops down drastically. And now the optimizer almost know the exact amount of rows it has to process. 
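A small sketch of that union-to-OR rewrite, on a hypothetical orders table: the three scans of the same table collapse into one scan with a combined predicate, which also gives the optimizer a much better row estimate.

```sql
-- Before: the same table is scanned three times and the results are unioned.
SELECT * FROM orders WHERE status = 'NEW'
UNION ALL
SELECT * FROM orders WHERE status = 'SHIPPED'
UNION ALL
SELECT * FROM orders WHERE status = 'RETURNED';

-- After: a single scan with the filters folded into one predicate.
SELECT * FROM orders
WHERE status = 'NEW' OR status = 'SHIPPED' OR status = 'RETURNED';
```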
So change, look at your query plans and see if you could make the execution in the profile or the optimizer do better job just by doing some small rewrites. Like if there are some tables frequently accessed you could even use a "With" clause which will do an early materialization and make use the better performance or for the union which I just shared and replace your left Joins with right Joins, use your (mumbles) like shade earlier for you changing your hash table types. This is the exact part what we have followed in this presentation. Hope this presentation was helpful in addressing, at least finding some performance issues in your queries or in your class test. So, thank you for listening to our presentation. Now we are ready for Q&A.

Published Date : Mar 30 2020


Putting Complex Data Types to Work


 

>> Jeff: Hello, everybody, and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "Putting Complex Data Types to Work." I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me is Deepak Majeti, technical lead from Vertica engineering. But before we begin, I encourage you to submit questions and comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer them offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. And also, as a reminder, you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. Yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Deepak. >> Deepak: Thanks, Jeff. Yes, I'm excited to talk about the complex data types work we have been doing in Vertica R&D. Without further delay, let's see why and how we should put complex data types to work in your data analytics. So, this is going to be the outline, or overview, of my talk today. First, I'm going to talk about what complex data types are and some use cases. I will then quickly cover some file formats that support these complex data types. I will then deep dive into the current support for complex data types in Vertica. Finally, I'll conclude with some usage considerations, what is coming in our 10.0 release, and our future roadmap and directions for this project. So, what are complex data types? Complex data types are nested data structures composed of primitive types. Primitive types are nothing but your int, float, string, varbinary, et cetera, the basic types. Some examples of complex data types include struct (also called row), array, list, set, map and union. Composite types can also be built by composing other complex types. Complex types are very useful for handling sparse data; we will also see examples in this presentation on that use case. And they also help simplify analysis. So, let's look at some examples of complex data types. In the first example on the left, you can see a simple customer which is of type struct, with two fields, namely a field "name" of type string and a field "id" of type integer. Structs are nothing but a group of fields, and each field is a type of its own. The type can be primitive or another complex type. And on the right, we have some example data for this simple customer complex type. It's basically two fields of type string and integer, so in this case you have two rows, where the first row has name Alex with ID 10, and the second row has name Mary with ID 2002. The second complex type on the left is phone numbers, of type array, which has the element type string. An array is nothing but a collection of elements. The elements could again be a primitive type or another complex type. So, in this example, the collection is of type string, which is a primitive type, and on the right you have some example data of this array type called phone numbers. Basically, each row has a set, or list, or collection of phone numbers: on the first we have two phone numbers, and on the second you have a single phone number in that array.
and the third type on the slide is the map data type map is nothing but a collection of key value pairs so each element is actually a key value and you have a collection of such elements the key is usually a primitive type however the value is can be a primitive or complex type so in this example the both the key and value are of type string and then if you look on the right side of the slide you have some sample data here we have HTTP requests where the key is the header type and the value is the header value so the for instance on the first row we have a key type pragma with value no cash key type host with value some hostname and similarly on the second row you have some key value called accept with some text HTML because yeah they actually have a collection of elements allison maps are commonly called as collections as a to talking to in mini documents so we saw examples of a one-level complex steps on this slide we have nested complex there types on the right we have the root complex site called web events of type struct script has a for field a session ID of type integer session duration of type timestamp and then the third and the fourth fields customer and history requests are further complex types themselves so customer is again a complex type of type struct with three fields where the first two fields name ID are primitive types however the third field is another complex type phone numbers which we just saw in the previous slide similarly history request is also the same map type that we just saw so in this example each complex types is independent and you can reuse a complex type inside other complex types for example you can build another type called orders and simply reuse the customer type however in a practical implementation you have to deal with complexities involving security ownership and like sets lifecycle dependencies so keeping complex types as independent has that advantage of reusing them however the complication with that is you have to deal with security and ownership and lifecycle dependencies so this is on this slide we have another style of declaring a nested complex type do is call inlined complex data type so we have the same web driven struct type however if you look at the complex sites that embedded into the parent type definition so customer and HTTP request definition is embedded in lined into this parent structure so the advantage of this is you won't have to deal with the security and other lifecycle dependency issues but with the downside being you can't reuse them so it's sort of a trade-off between the these two so so let's see now some use cases of these complex types so the first use case or the benefit of using complex stereotypes is that you'll be able to express analysis mode naturally compute I've simplified the expression of analysis logic thereby simplifying the data pipelines in sequel it feels as if you have tables inside table so let's look at an example on and say you want to list all the customers with more than one thousand website events so if you have complex types you can simply create a table called web events and with one column of type web even which is a complex step so we just saw that difference it has four fields station customer and HTTP request so you can basically have the entire schema or in one type if you don't have complex types you'll have to create four tables one essentially for each complex type and then you have to establish primary key foreign key dependencies across these tables now if you want to achieve your 
goal of of listing all the customers in more than thousand web requests if you have complex types you can simply use the dot notation to extract the name the contact and also use some special functions for maps that will give you the count of all the HTTP requests grid in thousand however if you don't have complex types you'll have to now join each table individually extract the results from sub query and again joined on the outer query and finally you can apply a predicate of total requests which are greater than thousand to basically get your final result so it's a complex steps basically simplify the query writing part also the execution itself is also simplified so you don't have to have joins if you have complex you can simply have a load step to load the map type and then you can apply the function on top of it directly however if you have separate tables you have to join all these data and apply the filter step and then finally another joint to get your results alright so the other advantage of complex types is that you can cross this semi structured data very efficiently for example if you have data from clique streams or page views the data is often sparse and maps are very well suited for such data so maps or semi-structured by nature and with this support you can now actually have semi structured data represented along with structured columns in in any database so maps have this nice of nice feature to cap encapsulated sparse data as an example the common fields of a kick stream click stream or page view data are pragma host and except if you don't have map types you will have to end up creating a column for each of this header or field types however if you have map you can basically embed as key value pairs for all the data so on the left here on the slide you can see an example where you have a separate column for each field you end up with a lot of nodes basically the sparse however if you can embed them into in a map you can put them into a single column and sort of yeah have better efficiency and better representation of spots they imagine if you have thousands of fields in a click stream or page view you will have thousands of columns you will need thousands of columns represent data if you don't have a map type correct so given these are the most commonly used complexity types let's see what are the file formats that actually support these complex data types so most of file formats popular ones support complex data types however they have different serve variations so for instance if you have JSON it supports arrays and objects which are complex data types however JSON data is schema-less it is row oriented and this text fits because it is Kimmel s it has to store it in encase on every job the second type of file format is Avro and Avro has records enums arrays Maps unions and a fixed type however Avro has a schema it is oriented and it is binary compressed the third category is basically the park' and our style of file formats where the columnar so parquet and arc have support for arrays maps and structs the hewa schema they are column-oriented unlike Avro which is oriented and they're also binary compressed and they support a very nice compression and encoding types additionally so the main difference between parquet and arc is only in terms of how they represent complex types parquet includes the complex type hierarchy as reputation deflation levels however orc uses a separate column at every parent of the complex type to basically the prisons are now less so that 
apart from that difference in how they represent complex types parking hogs have similar capabilities in terms of optimizations and other compression techniques so to summarize JSON has no schema has no binary format in this columnar so it is not columnar Avro has a schema because binary format however it is not columnar and parquet and art are have a schema have a binary format and are columnar so let's see how we can query these different kinds of complex types and also the different file formats that they can be present in in how we can basically query these different variations in Vertica so in Vertica we basically have this feature called flex tables to where you can load complex data types and analyze them so flex tables use a binary format called vemma to store data as key value pairs clicks tables are schema-less they are weak typed and they trade flexibility for performance so when I mean what I mean by schema-less is basically the keys provide the field name and each row can potentially have different keys and it is weak type because there's no type information at the column level we have some we will see some examples of of this week type in the following slides but basically there's no type information so so the data is stored in text format and because of the week type and schema-less nature of flex tables you can implement some optimum use cases like if you can trivially implement needs like schema evolution or keep the complex types types fluid if that is your use case then the weak tightness and schema-less nature of flex tables will help you a lot to get give you that flexibility however because you have this weak type you you have a downside of not getting the best possible performance so if you if your use case is to get the best possible performance you can use a new feature of the strongly-typed complex types that we started to introduce in Vertica so complex types here are basically a strongly typed complex types they have a schema and then they give you the best possible performance because the optimizer now has enough information from the schema and the type to implement optimization system column selection or all the nice techniques that Vertica employs to give you the best possible color performance can now be supported even for complex types so and we'll see some of the examples of these two types in these slides now so let's use a simple data called restaurants a restaurant data - as running throughout this poll excites to basically see all the different variations of flex and complex steps so on this slide you have some sample data with four fields and essentially two rows if you sort of loaded in if you just operate them out so the four fields are named cuisine locations in menu name in cuisine or of type watch are locations is essentially an array and menu array of a row of two fields item and price so if you the data is in JSON there is no schema and there is no type information so how do we process that in Vertica so in Vertica you can simply create a flex table called restaurants you can copy the restaurant dot J's the restaurants of JSON file into Vertica and basically you can now start analyzing the data so if you do a select star from restaurants you will see that all the data is actually in one column called draw and it also you have the other column called identity which is to give you some unique row row ID but the row column base again encapsulates all the data that gives in the restaurant so JSON file this tall column is nothing but the V map 
The VMap format is a binary format that encodes the data as key-value pairs, and the __raw__ column is backed by the LONG VARBINARY column type in Vertica. Each key gives you the field name and the value gives you the field value, but the values are stored in a text representation. Now say you want better performance from this JSON data. Flex tables have some nice functions to analyze your data and try to extract schema and type information from it. If you execute COMPUTE_FLEXTABLE_KEYS on the restaurants table, you get a new table called public.restaurants_keys with information about your JSON data. It is able to automatically infer that the data has four fields, namely name, cuisine, locations, and menu, and that name and cuisine are VARCHAR. Since locations and menu are complex types themselves, one an array and one an array of rows, it keeps using the same VMap format to process them. So you end up with four columns: two primitive VARCHAR columns and two VMaps. You can now materialize these columns by altering the table definition and adding columns of the inferred types, and you get better performance from the materialized columns: the data is no longer in a single column, you have four columns for your restaurant data, and you benefit from column selection and the other optimizations that Vertica provides.

So flex tables are helpful if you don't have a schema and you don't have any type information. But we saw earlier that some file formats, like Parquet and Avro, do have a schema and type information. In those cases you don't need the first step of inferring types: you can directly create an external table definition with those types, point it at the Parquet file, and query it as an external table in Vertica. If you convert the same restaurants.json to Parquet format, the scalar fields come through with their proper types, but the locations and menu columns are still in the VMap format.

The VMap format also lets you explode the data, and it has nice functions to extract fields from it, such as MAPITEMS. With the same restaurant data, if you want to explode it and apply predicates on the fields inside the arrays and rows, you can use MAPITEMS to explode the data and then apply predicates on a particular field of the complex-type data. The slide shows how you can explode the entire data, the menu items as well as the locations, and get the elements of each of these complex types. As I mentioned, if you go back to the previous slide, the locations and menu items are still in the LONG VARBINARY, VMap format. So the question is: what if you want better performance on the VMap data? For primitive types you can materialize them into primitive-typed columns, but for an array or an array of rows you need first-class complex-type constructs.
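A hedged sketch of that workflow on the flex table above; the column sizes are assumptions based on what the key inference reports, and the exact MAPITEMS invocation can vary by version.

```sql
-- Infer key names and candidate types from the loaded JSON.
SELECT COMPUTE_FLEXTABLE_KEYS('restaurants');
SELECT * FROM restaurants_keys;  -- name, cuisine as VARCHAR; locations, menu as VMaps

-- Materialize the scalar keys as real columns to enable column selection.
ALTER TABLE restaurants ADD COLUMN name VARCHAR(100) DEFAULT name::VARCHAR(100);
ALTER TABLE restaurants ADD COLUMN cuisine VARCHAR(100) DEFAULT cuisine::VARCHAR(100);

-- MAPITEMS() turns a VMap into key/value rows so predicates can target fields.
SELECT MAPITEMS(__raw__) OVER () FROM restaurants;
```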
That is what Vertica has started to introduce: strongly typed complex types. On this slide you have an example of a ROW complex type: we create an external table called customers with a ROW type of two fields, name and id, so the complex type is inlined into the column definition. In the second example you can see CREATE EXTERNAL TABLE items, which uses a nested ROW type: it has an item column of type ROW whose fields are a name and a properties field, and properties is again another nested ROW with two fields, quantity and label. These are strongly typed complex types, so the optimizer can now give you better performance than the VMap by using the type information in your queries. We have support for pure rows and nested rows in external tables for Parquet, and we have support for arrays and nested arrays in Parquet external tables as well: you can declare an external table called contacts with a phone_numbers field that is an array of integers, and similarly you can declare a nested array of integer items as a strongly typed column.

The other complex-type support we are adding in the 10.0 release is optimized one-dimensional arrays and sets, for both ROS (internal) tables as well as Parquet external tables. You can create an internal table called phone_numbers with a one-dimensional array, here phone_numbers of ARRAY[INT]. You can also have sets, which are likewise one-dimensional collections but are optimized for fast lookups: they have unique elements and they are ordered, so if fast element lookup is your use case, sets will give you very quick lookups. We also implemented functions to support arrays and sets, such as APPLY_MIN and APPLY_MAX, scalar functions you can apply on top of an array to get the minimum or maximum element, and so on; additional functions are supported as well.

The other feature coming in 10.0 is the explode-arrays functionality. We have implemented a UDx that, similar to the MAPITEMS example you saw earlier, lets you extract elements from these arrays and apply predicates or other analysis to the elements. For example, take a restaurant table with a column name of type VARCHAR, locations as an array of VARCHAR, and menu, again an array of VARCHAR. You can insert values into these columns using the array constructor: here we insert three rows, for instance one restaurant with locations Cambridge and Pittsburgh and menu items cheese and pepperoni, and another named Bob's Tacos with location Houston and menu items such as tortilla and salsa. Now you can explode both arrays and extract the elements: you can explode the locations array and get the individual location elements, which are Houston, Cambridge, Pittsburgh, and New Jersey, and you can explode the menu items and extract the individual elements, and then apply further predicates on the exploded data.
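A minimal sketch of those declarations and helpers, assuming 10.0-style syntax; the Parquet paths are placeholders, and the exact EXPLODE invocation (its OVER clause and any pass-through parameters) may differ by version.

```sql
-- Strongly typed ROW and ARRAY columns on Parquet external tables (paths assumed).
CREATE EXTERNAL TABLE customers (customer ROW(name VARCHAR, id INT))
    AS COPY FROM '/data/customers/*.parquet' PARQUET;

CREATE EXTERNAL TABLE contacts (name VARCHAR, phone_numbers ARRAY[INT])
    AS COPY FROM '/data/contacts/*.parquet' PARQUET;

-- One-dimensional arrays in an internal (ROS) table.
CREATE TABLE restaurants_typed (
    name      VARCHAR(64),
    locations ARRAY[VARCHAR(64)],
    menu      ARRAY[VARCHAR(64)]
);

-- Sets: one-dimensional collections with unique, ordered elements, optimized for lookups.
CREATE TABLE phone_sets (user_id INT, numbers SET[INT]);

INSERT INTO restaurants_typed
    VALUES ('Pizza Place', ARRAY['cambridge', 'pittsburgh'], ARRAY['cheese', 'pepperoni']);
INSERT INTO restaurants_typed
    VALUES ('Bobs Tacos', ARRAY['houston'], ARRAY['tortilla', 'salsa']);

-- Collection helpers and the EXPLODE transform.
SELECT name, APPLY_MIN(locations), APPLY_COUNT_ELEMENTS(menu) FROM restaurants_typed;
SELECT EXPLODE(locations) OVER (PARTITION BY name) FROM restaurants_typed;
```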
So let's look at some usage considerations for these complex data types. Complex types, as we saw earlier, are nice if you have sparse data: if your data comes from clickstreams or page views, maps are a very good fit and give you a space-efficient representation, so for sparse data use map types. As we saw with the web-request count query, they also simplify the analysis: you don't have to write joins, which simplifies your queries. If your use case is fast lookups, use the set type: arrays are nice and preserve ordering, but if your primary use case is just to look up certain elements, sets are the better choice. Use the VMap or flex functionality if you want flexibility in your complex-type schema: as I mentioned earlier, you can trivially implement needs like schema evolution or keep the complex types fluid. If you run multiple iterations of analysis and change the fields in each iteration because you are still exploring the data, VMap and flex make it easy to change the fields within the complex type or across files, and you can load fluid complex types, meaning different fields in different rows, into VMap and flex tables easily. Once you have iterated over your data and figured out which fields and complex types you really need, you can switch to the strongly typed complex types we have started to introduce in Vertica: the array type, the struct type, and the map type. So at a high level it depends a lot on which phase of data analysis you are in: early on, your data is usually still fluid and you may want to use VMaps and flex to explore it; once you finalize your schema, use the strongly typed complex types to get the best possible performance.

So what's coming in the upcoming releases of Vertica? In 10.0, which is the next release, we are adding support for loading Parquet complex data types into the VMap format. Parquet is a strongly typed file format: it has the schema and the type information for each complex type. But if you are still exploring your data, you might have different Parquet files with different schemas, so you can load them into the VMap format first, analyze the data, and then switch to the strongly typed complex types. We are also adding one-dimensional optimized arrays and sets in ROS and for Parquet, so complex types are not limited to Parquet external tables; you can also store them in ROS (internal) tables, though right now only one-dimensional arrays and sets are supported there. We are also adding the EXPLODE UDx for one-dimensional arrays in this release, so as in the previous example you can explode array data and apply predicates on individual elements; this also applies to sets, since you can cast sets to arrays and explode those as well.

What are the plans past that release? We are going to continue building out the strongly typed complex types. In the 10.0 release we won't have support for every combination of complex types; we only have support for nested pure arrays or nested pure rows, and some of that is limited to the Parquet file format, so we will continue to add support for more combinations and nested complex types in the following releases. We are also planning to add a true VMap data type.
Today, as you saw in the examples, the VMap format is backed by the LONG VARBINARY column type, and because of that the optimizer cannot distinguish whether a given column holds plain LONG VARBINARY data or VMap data. The idea is to add a real type called VMap; the optimizer can then support optimizations, and even syntax such as dot notation, for it. And if your data is columnar, such as Parquet, you can implement optimizations like key push-down, where only the keys you actually reference in your analysis are loaded from Parquet and built into the VMap format; that way you get the column-selection optimization for complex types as well. That is something you can achieve once the VMap format has its own type, so it is on the roadmap too. Unnest join is another nice-to-have feature: right now, if you want to explode and join the array elements, you have to explode in a subquery and then join the data in the outer query; with unnest join you will be able to explode and join the data in the same query, on the fly.

Finally, we are also adding support for a new feature called UDx vector; that is on the plan too. Our work on complex types essentially changes the fundamental way Vertica executes functions and expressions. Right now, all expressions in Vertica can return only a single column, except in some cases like UD transforms; a UDx scalar function, for instance, can return only one column. But there are use cases where you want multiple computations on the same input data: say your input is two integers and you want to compute both the addition and the multiplication of those two columns. That is a toy example, but many machine-learning use cases have similar patterns. If you want both computations on the data at the same time, in the current approach you need one function for the addition and another for the multiplication, and both of them have to load the data, so you load the data twice to get the two results. With the UDx vector support you can perform both computations in the same function and return two columns, essentially saving you from loading those columns twice: you load them once and get both results out. That is what we are trying to enable with the changes we are making to support complex data types in Vertica. And you don't have to use an OVER clause the way you do with a UD transform; just like scalar functions today, your vector UDx can return multiple columns from its computations. So that concludes my talk. Thank you for listening to my presentation; now we are ready for Q&A.

Published Date : Mar 30 2020


Exclusive Google & Cisco Cloud Announcement | CUBEConversations April 2019


 

(upbeat jazz music) >> Woman: From our studio's, in the heart of Silicon Valley Palo Alto California this is a CUBE conversation. >> John: Hello and welcome to this CUBE conversation here, exclusive coverage of Google Next 2019. I'm John Furrier, host of theCUBE. Big Google Cisco news, we're here with KD who's the vice president of the data center for compute for Cisco and Kip Compton, senior vice president of Cloud Platform and Solutions Group. Guys, welcome to this exclusive CUBE conversation. Thanks for spending the time. >> KD: Great to be here. >> So Google Next, obviously, showing the way that enterprises are now quickly moving to the cloud. Not just moving to the cloud, the cloud is part of the plan for the enterprise. Google Cloud clearly coming out with a whole new set of systems, set of software, set of relationships. Google Anthos is the big story, the platform. You guys have had a relationship previously announced with Google, your role in joint an engineering integrations. Talk about the relationship with Cisco and Google. What's the news? What's the big deal here? >> Kip: Yeah, no we're really excited. I mean as you mentioned, we've been working with Google Cloud since 2017 on hybrid and Multicloud Kubernetes technologies. We're really excited about what we're able to announce today, with Google Cloud, around Google Cloud's new Anthos system. And we're gonna be doing a lot of different integrations that really bring a lot of what we've learned through our joint work with them over the last few years, and we think that the degree of integration across our Data Center Portfolio and also our Networking and Security Portfolios, ultimately give customers one of the most secure and flexible Multicloud and hybrid architectures. >> One of the things we're seeing in the market place, I want to get your reactions to this Kip because I think this speaks to what's going on here at Google Next and the industry, is that the company's that actually get on the Cloud wave truly, not just say they're doing Cloud, but ride the wave of the enterprise Cloud, which is here. Multicloud is big conversation. Hybrids and implementation of that. Cloud is big part of it, the data center certainly isn't going away. Seeing a whole new huge wave. You guys have been big behind this at Cisco. You saw what the results are with Microsoft. Their stock has gone from where it was really low to really high because they were committed to the Cloud. How committed is Cisco to this Cloud Wave, what specifically are you guys bringing to the table for Enterprises? >> Oh we're very committed. We see it as the seminal IT transformation of our time, and clearly on of the most important topics in our discussions with CIO's across our customer base. And what we're seeing is, really not as much enterprises moving to the Cloud as much as enterprises extending or expanding into the Cloud. And their on-prem infrastructures, including our data centers as you mentioned, certainly aren't going away, and their really looking to incorporate Cloud into a complete system that enables them to run their business and their looking for agility and speed to deliver new experiences to their employees and to their customers. So we're really excited about that and we think sorta this Multicloud approaches is absolutely critical and its one of the things that Google Cloud and Cisco are aligned on. >> I'd like to get this couple talk tracks. 
One is the application area of Multicloud and Hybrid but first lets unpack the news of what's going on with Cisco and Google. Obviously Anthos is the new system, essentially its just the Cloud platform but that's what they're calling it, Googles anthem. How is Cisco integrating into this? Cause you guys had great integration points before Containers was a big bet that you guys had made. >> Kip: That's right. >> You certainly have, under the covers we learned at Cisco Live in Barcelona around what's going on with HyperFlex and ACI program ability, DevNet developer program going on. So good stuff going on at Cisco. What does this connect in with Google because ya got containers, you guys have been very full throttle on Kubernetes. Containers, Kubernetes, where does this all fit? How should your customers understand the relationship of how Cisco fits with Google Cloud? What's the integration? >> So let me start with, and backing it with the higher level, right? Philosophically we've been talking about Multicloud for a long time. And Google has a very different and unique view of how Cloud should be architected. They've gone 'round the open source Kubernetes Path. They've embraced Multicloud much more so then we would've expected. That's the underpinning of the relationship. Now you bring to that our deep expertise with serving Enterprise IT and our knowledge of what Enterprise IT really needs to productize some of these innovations that are born elsewhere. You get those two ingredients together and you have a powerful solution that democratizes some of the innovations that's born in the Cloud or born elsewhere. So what we've done here with Anthos, with Google HyperFlex, oh with Cisco's HyperFlex, with our Security Portfolio, our Networking Portfolio is created a mechanism for Enterprise ID to serve their constituent developers who are wanting to embrace Containers, readily packaged and easily consumable solution that they can deploy really easily. >> One of the things we're hearing is that this, the difference between moving to the Cloud versus expanding to and with the Cloud, and two kind of areas pop up. Operational's, operations, and developers. >> Kip: Yep. >> People that operate IT mention IT Democratizing IT, certainly with automation scale Cloud's a great win there. But you gotta operate it at that level at the same time serve developers, so it seems that we're hearing from customers its complicated, you got open source, you got developers who are pushing code everyday, and then you gotta run it over and over networks which have security challenges that you need to be managing everyday. Its a hardcore op's problem meets frictionalist development. >> Yeah so lets talk about both of these pieces. What do developers want? They want the latest framework. They want to embrace some of the new, the latest and greatest libraries out there. They want to get on the cutting edge of the stuff. Its great to experiment with open source, its really really hard to productize it. That's what we're bringing to the table here. With Anthos delivering a manage service with Cisco's deep expertise and taking complex technologies, packaging it, creating validated architectures that can work in an enterprise, it takes that complexity out of it. Secondly when you have a enterprise ID operator, lets talk about the complexities there, right? You've gotta tame this wild wild west of open source. You can't have drops every day. You can't have things changing every, you need a certain level of predictability. 
You need the infrastructure to slot in to a management framework that exists in the dollar center. It needs to slot into a sparing mechanism, to a workflow that exists. On top of that, you've got security and networking on multiple levels right? You've got physical networking, you've got container networking, you've got software define networking, you've got application level networking. Each layer has complexity around policy and intent that needs to marry across those layers. Well, you could try to stitch it together with products from different vendors but its gonna be a hot stinking mess pretty soon. Driving consistency dry across those layers from a vendor who can work in the data center, who can work across the layers of networking, who can work with security, we've got that product set. Between ACI Stealthwatch Cloud providing the security and networking pieces, our container networking expertise, HyperFlex as a hyper converge infrastructure appliance that can be delivered to IT, stood up, its scale out, its easy to deploy. Provides the underpinning for running Anthos and then, now you've got a smooth simple solution that IT can take to its developer and say Hey you know what? You wanna do containers? I've got a solution for you. >> And I think one of the things that's great about that is, you know just as enterprise's are extending into the Cloud so is Cisco. So a lot of the capabilities that KD was just talking about are things that we can deliver for our customers in our data centers but then also in the Cloud. With things like ACI Anywhere. Bringing that ACI Policy framework that they have on-prem into the Cloud, and across multiple Clouds that they get that consistency. The same with Stealthwatch Cloud. We can give them a common security model across their on-prem workloads and multiple public Cloud workload areas. So, we think its a great compliment to what Google's doing with Anthos and that's one of the reasons that we're partners. >> Kip I want to get your thoughts on this, because one of the things we've seen over the past years is that Public Cloud was a great green field, people, you know born in the Cloud no problem. (Kip laughs) And Enterprise would want to put workloads in the Cloud and kind of eliminate some of the compute pieces and some benefits that they could put in the cloud have been great. But the data center never went away, and they're a large enterprise. It's never going away. >> Kip: Yep. >> As we're seeing. But its changing. How should your customers be thinking about the evolution of the data center? Because certainly computes become commodity, okay need some Cloud from compute. Google's got some stuff there, but the network still needs to move packets around. You still got to store stuff, you still need security. They may not be a perimeter, but you still have the nuts and bolts of networking, software, these roles need to be taking place, how should these customers be thinking about Cloud, compute, integration on data primus? >> That is a great point and what we've seen is actually Cloud makes the network even more important, right? So when you have workloads and staff services in the Cloud that you rely on for your business suddenly the reliability and the performance and latency of your networks more important in many ways than it was before, and so that's something any of our customers have seen, its driving a lot of interest and offerings like SD-WAN from Cisco. 
But to your point on the data center side, we're seeing people modernize their data centers, and their looking to take a lot of the simplicity and agility that they see in a Public Cloud and bring it home, if you will, into the data center. Cause there are lots of reasons why data centers aren't going away. And I think that's one of the reasons we're seeing HyperFlex take off so much is it really simplifies multiple different layers and actually multiple different types of technology, storage, compute, and networking together into a sort of a very simple solution that gives them that agility, and that's why its the center piece of many of our partnerships with the Public Cloud players including Anthos. Because it really provides a Cloud like workload hosting capability on-prem. >> So the news here is that you guys are expanding your relationship with Google. What does it mean? Can you guys summarize the impact to your customers and the industry? >> Well I think that, I mean the impact for our customers is that you've two leaders working together, and in fact they're two leaders who believe in open technology and in a Multicloud approach. And we believe that both of those are fundamentally more aligned with our customers and the market than other approaches and so we're really excited about that and what it means for our customers in the future. You know and we are expanding the relationship, I mean there's not only what we're doing with Google Cloud's Anthos but also associated advances we've made about expanding our collaboration actually in the collaboration area with our Webex capabilities as well as Google Swed. So we're really excited about all of this and what we can enable together for our customers. >> You guys have a great opportunity, I always say latency is important and with low latency, moving stuff around and that's your wheelhouse. KD, talk about the relationship expanding with Google, what specifically is going on? Lets get down and dirty, is it tighter integration? Is it policy? Is it extending HyperFlex into Google? Google coming in? What's actually happening in the relationship that's expanding? >> So let me describe it in three ways. And we've talked a little bit about this already. The first is, how do we drive Cloud like simplicity on-prem? So what we've taken is HyperFlex, which is a scale out appliance, dead simple, easy to manage. We've integrated that with Anthos. Which means that now you've got not only a hyper conversion appliance that you can run workloads on, you can deliver to your developers Kubernetes eco system and tool set that is best in class, comes from Google, its managed from the Cloud and its not only the Kubernetes piece of it you can deliver the silver smash pieces of it, lot of the other pieces that come as part of that Anthos relationship. Then we've taken that and said well to be Enterprise grade, you've gotta makes sure the networking is Enterprise grade at every single layer, whether that is at the physical layer, container layers, fortune machine layer, at the software define networking layer, or in the service layer. We've been working with the teams on both sides, we've been working together to develop that solution and bring back the market for our customers. The third piece of this is to integrate security, right? 
So Stealthwatch Cloud was mentioned, we're working with the other pieces of our portfolio to integrate security across these offerings to make sure those flows are as secure as can be possible and if we detect anomalies, we flag them. The second big theme is driving this from the Cloud, right? So between Anthos, which is driving the Kubernetes and RAM from the Cloud our SD-WAN technology, Cisco's SD-WAN technology driven from the Cloud being able to terminate those VPN's at the end location. Whether that be a data center, whether that be an edge location and being able to do that seamlessly driven from the Cloud. Innerside, which takes the management of that infrastructure, drives it from the Cloud. Again a Cisco innovation, first in the industry. All of these marry together with driving this infrastructure from the Cloud, and what did it do for our eventual customers? Well it gave them, now a data center environment that has no boundaries. You've got an on-prem data center that's expanding into the Cloud. You can build an application in one place, deploy it in another, have it communicate with another application in the Cloud and suddenly you've kinda demolished those boundaries between data center and the Cloud, between the data center and the edge, and it all becomes a continuum and no other company other than Cisco can do something like that. >> So if I hear you saying, what you're saying is you're bringing the software and security capabilities of Cisco in the data center and around campus et cetera, and SD-WAN to Google Cloud. So the customer experience would be Cisco customer can deploy Google Cloud and Google Cloud runs best on Cisco. That's kinda, is that kind of the guiding principles here to this deal? Is that you're integrating in a deep meaningful way where its plug and play? Google Cloud meets Cisco infrastructure? >> Well we certainly think that with the work that we've done and the integrations that we're doing, that Cisco infrastructure including software capabilities like Stealthwatch Cloud will absolutely be the best way for any customer who wants to adopt Google Cloud's Anthos, to consume it, and to have really the best experience in terms of some of the integration simplicity that KD talked about but also frankly security's very important and being able to bring that consistent security model across Google Cloud, the workloads running there, as well as on-prem through things like Stealthwatch Cloud we think will be very compelling for our customers, and somewhat unique in the marketplace. >> You know one of the things that interesting, TK the new CEO of Google, and I had this question to Diane Green she had enterprise try ops of VM wear, Google's been hiring a lot of strong enterprise people lately and you can see the transformation and we've interviewed a lot of them, I have personally. They're good people, they're smart, and they know what they're doing. But Google still gets dinged for not having those enterprise chops because you just can't have a trajectory of those economy of scales over night, you can't just buy your way into the enterprise. You got to earn it, there's a certain track record, it seems like Google's getting a lot with you guys here. They're bringing Cloud to the table for sure for your customer base but you're bringing, Cisco complete customer footprint to Google Cloud. That seems to be a great opportunity for Google. >> Well I mean I think its a great opportunity for both of us. 
I mean because we're also bringing a fantastic open Multicloud hybrid solution to our customer base. So I think there's a great opportunity for our customers and we really focus on at the end of the day our customers and what do we do to make them more successful and we think that what we're doing with Google will contribute to that. >> KD talk about, real quickly summarize what's the benefits to the customers? Customers watching the announcements, seeing all the hype and all the buzz on this Google Next, this relationship with Cisco and Google, what's the bottom line for the customer? They're dealing with complexity. What are you guys solving, what the big take away for your customers? >> So its three things. First of all, we've taken the complexity out of the equation, right? We've taken all the complexity around networking, around security, around bridging to multiple Clouds, packaged it in a scale out appliance delivered in an enterprise consistent way. And for them, that's what they want. They want that simplicity of deployment of these next gen technologies, and the second thing is as IT serves their customers, the developers in house, they're able to serve those customers much better with these latest generation technologies and frameworks, whether its Containers, Kubernetes, HDL, some of these pieces that are part of the Anthos solution. They're able to develop that, deliver it back to their internal stakeholders and do it in a way that they control, they feel comfortable with, they feel their secure, and the networking works and they can stand behind it without having to choose or have doubts on whether they should embrace this or not. At the end of the day, customers want to do the right things to develop fast. To be nimble, to act, and to do the latest and greatest and we're taking all those hurtles out of the equations. >> Its about developers. >> It is. >> Running software on secure environments for the enterprise. Guys that's awesome news. Google Next obviously gonna be great conversations. While I have you here I wanna get to a couple talk tracks that are I important around the theme's recovering around Google Next and certainly challenges and opportunities for enterprises that is the application area, Multicloud, and Hybrid Cloud. So lets start with application. You guys are enabling this application revolution, that's the sound bites we hear at your events and certainly that's been something that you guys been publicly talking about. What does that mean for the marketplace? Because certain everyone's developing applications now, (Kip laughs) you got mobile apps, you got block chain apps, we got all kinds of new apps coming out all the time. Software's not going away its a renaissance, its happening. (Kip laughs) How is the application revolution taking shape? How is and what's Cisco's roll in it? >> Sure, I mean our role is to enable that. And that really comes from the fact that we understand that the only reason anyone builds any kind of infrastructure is ultimately to deliver applications and the experiences that applications enable. And so that's why, you know, we pioneered ACI is Application Centric Infrastructure. We pioneered that and start focusing on the implications of applications in the infrastructure any years ago. You know, we think about that and the experience that we can deliver at each layer in the infrastructure and KD talked a little bit about how important it is to integrate those layers but then we also bring tools like AppDynamics. 
Which really gives our customers the ability to measure the performance of their applications, understand the experience that they're delivering with customers and then actually understand how each piece of the infrastructure is contributing to and affecting that performance and that's a great example of something that customers really wanna be able to do across on-prem and multiple Clouds. They really need to understand that entire thing and so I think something like App D exemplifies our focus on the application. >> Its interesting storage and compute used to be the bottle necks in developers having to stand that up. Cloud solved that problem. >> Kip: That's right. >> Stu Miniman and I always talk about on theCUBE networking's the bottle neck. Now with ACI, you guys are solving that problem, you're making it much more robust and programmable. >> It is. >> This is a key part for application developers because all that policy work can be now automated away. Is that kinda part of that enablement? >> It sure is. I mean if you look at what's happening to applications, they're becoming more consumerized, they're becoming more connected. Whether its micro services, its not just one monolithic application anymore, its all of these applications talking to each other. And they need to become more secure. You need to know what happens, who can talk to whom. Which part of the application can be accessed from where. To deliver that, when my customer tell me listen you deliver the data center, you deliver security, you deliver networking, you deliver multicloud, you've got AppDynamics. Who else can bring this together? And that's what we do. Whether its ACI that specifies policy and does that programmable, delivers that programmable framework for networking, whether its our technologies like titration, like AppDynamics as Kip mentioned. All of these integrate together to deliver the end experience that customers want which is if my application's slow, tell me where, what's happening and help me deliver this application that is not a monolith anymore its all of these bits and pieces that talk to each other. Some of these bits and pieces will reside in the Cloud, a lot of them will be on-prem, some of them will be on the edge. But it all needs to work together-- >> And developers don't care about that they just care about do I get the resources do I need, And you guys kinda take care of all the heavy lifting underneath the covers. >> Yeah and we do that in a modern programmable way. Which is the big change. We do it in intent based way. Which means we let the developers describe the intent and we control that via policy. At multiple levels. >> And that's good for the enterprises, they want to invest more in developing, building applications. Okay track number two, talk track number two Multicloud. its interesting, during the hype cycle of Hybrid Cloud which was a while, I think now people realize Hybrid Cloud is an implementation thing and so its beyond hype now getting into reality. Multicloud never had a hype cycle because people generally woke up one day and said yeah I got multiple Clouds. I'm using this over here, so it wasn't like a, there was no real socialization around the concept of Multicloud they got it right away. They can see it, >> Yep. >> They know what they're paying for. So Multicloud has been a big part of your strategy at Cisco and certainly plays well into what's happening at Google Next. What's going on with Multicloud? Why's the relation with Google important? 
And where do you guys see Multicloud going from a Cisco perspective? >> Sure enough, I think you're right. The latest data we saw, or have, is 94 percent of enterprises are using or expect to use multiple Clouds and I think those surveys have probably more than six points of potential error so I think for all intensive purposes its 100 percent. (John and KD laughing) I've not met a customer who's unique Cloud, if that's a thing. And so you're right, its an incredibly authentic trend compared with some of these things that seem to be hype. I think what's happening though is the definition of what a Multicloud solution is is shifting. So I think we start out as you said, with a realization, oh wait a second we're all Multicloud this really is a thing and there's a set of problems to solve. I think you're seeing players get more and more sophisticated in how they solve those problems. And what we're seeing is its solving those problems is not about homogenizing all the Clouds and making them all the same because one of the reasons people are using multiple Clouds is to get to the unique capabilities that's in each Cloud. So I think early on there were some approaches where they said okay well we're gonna put down like a layer across all these Clouds and try to make them all look the same. That doesn't really achieve the point. The point is Google has unique capabilities in Google Cloud, certainly the tenser flow capabilities are one that people point to. AWS has unique capabilities as well and so does Dajour. And so customers wanna access all of that innovation. So that kind of answers your question of why is this relationship important to us, its for us to meet our customers needs, we need to have great relationships, partnerships, and integrations with the Clouds that are important to our customers. >> Which is all the Clouds. >> And we know that Google Cloud is important. >> Well not just Google Cloud, which I think in this relationship's got my attention because you're creating a deep relationship with them on a development side. Providing your expertise on the network and other area's you're experts at but you also have to work with other Clouds because, >> That's right we do. >> You're connecting Clouds, that's the-- >> And in fact we do. I mean we have, solutions for Hybrid with AWS and Dejour already launched in the marketplace. So we work with all of them, and what our roll, we see really is to make this simpler for our customers. So there are things like networking and security, application performance management with things like AppDynamics as well as some aspects of management that our customers consistently tell us can you just make this the same? Like these are not the area's of differentiation or unique capabilities. These are area's of friction and complexity and if you can give me a networking framework, whether its SD-WAN or ACI Anywhere that helps me connect those Clouds and manage policy in a consistent way or you can give me application performance the same over these things or security the same over these things, that's gonna make my life easier its gonna be lower friction and I'm expecting it, since your Cisco, you'll be able to integrate with my own Prime environment. >> Yeah, so then we went from hard to simple and easy, is a good business model. >> Kip: Absolutely. >> You guys have done that in the past and you certainly have the, from routing, everything up to switches and storage. 
KD, but talk about the complexity, because this is where it sounds complex on paper but when you actually unpack the technologies involved, you know in different Cloud suppliers, different technologies and tools. Throw in open sources into the mix is even more complex. So Multicloud, although sounds like a simple reality, the complexities pretty significant. Can you just share your thoughts on that? >> It is, and that's what we excel. We excel, I think complexity and distilling it down and making it simple. One other thing that we've done is, because each Cloud is unique and brings some unique capabilities, we've worked with those vendors along those dimension's that they're really really passionate about and strong end. So for example, with Google we've worked on the container front. They are, maybe one of the pioneers in that space, they've certainly delivered a lot of technologies into that domain. We've worked with them on the Kubeflow front on the AI front, in fact we are one of the biggest contributors to the open source projects on Kubeflow. And we've taken those technologies and then created a simple way for enterprise IT to consume them. So what we've done with Anthos, with Google, takes those technologies, takes our networking constructs, whether its ACI Anywhere, whether its other networking pieces on different parts of it, whether its SD-WAN and so forth. And it creates that environment which makes an enterprise IT feel comfortable with embracing these technologies. >> You said you're contributing to Kubeflow. A lot of people don't look at Cisco and would instantly come to the reaction that you guys are heavily contributing into open source. Can you just share, you know, the level of commitment you guys are making to open source? Just get that out there, and why? Why are you doing it? >> Yeah. For us, some of these technologies are really in need for incubation and nurturing, right? So Kubeflow is early, its really promising technology. People, in fact there's a lot of buzz about AI-- >> In your contributing to Kubeflow, significantly? >> Yes, yeah. >> Cisco? >> We're number three contributor actually. Behind Google. >> Okay so you're up there? You're up at the top of the list? >> Yeah one of the top three. >> Top of the list. >> And why? Is this getting more collaborative? More Multicloud fabric-- >> Well I mean, again it comes back to our customers. We think Kubeflow is a really interesting framework for AI and ML and we've seen our customers that workload type is becoming more and more important to them. So we're supporting that because its something we think will help our customers. In fact, Kubeflow figures into how we think about Hybrid and Multicloud with Google and the Anthos system in terms of giving customers the ability to run those workloads in Google Cloud with TPU's or on-prem with some of the incredible appliances that we've delivered in the data centers using GPU's to accelerate these workings. >> And it also certainly is compatible with the whole Multicloud mission as well-- >> Exactly, yeah. >> That's right. >> So you'll see us, we're committed to open source but that commitment comes through the lens of what we think our customers need and want. So it really again it comes back to the customer for us, and so you'll see us very active in open source areas. Sometimes, I think to your point, we should be louder about that. Talk more about that but we're really there to help our customers. 
DevNet, DevNet Create that Susie Wee's been working on has been a great success. I mean we've witnessed it first hand, seeing it at the Cisco Live packed house. >> In Barcelona. >> You've got developers developing on the network its a really big shift. >> Yeah absolutely. >> That's a positive shift. >> Well its a huge shift, I think its natural as you see Cisco shifting more and more towards software you see much much more developer engagement and we're thrilled with the way DevNet has grown. >> Yeah, and networking guys in your target audience gravitates easily to software it seems to be a nice fit. So good stuff there. Third talk track, Hybrid. You guys have deep bench of tech and people on network security, networking security, data center, and all the things involved in the years and years of enterprise evolution. Whether its infrastructure and all the way through the facilities, lot of expertise. Now Hybrid comes onto the scene. Went through the little hype cycle, people now get it, you gotta operate across Clouds on-prem to the Cloud and now multiple Clouds so what's the current state of Cisco-Google relationship with Hybrid? How is that fitting in, Google Next and beyond? >> So let me tease that in the context of some history, right? So if we go back, say 10 years, virtualization was the bad word of the day. Things were getting virtualized. We created the best data center infrastructure for virtualization in our UCS platforms. Completely programmable infrastructure's code, a very programmable environment that can back a lot of density of virtual machines, right? Roll forward three or four years, storage and compute were getting unwieldily. There was complexity there to be solved. We created the category of converge infrastructure, became the leader of that category whether we work with DMC and other players. Roll forward another four or five years we got into the hyper conversion infrastructure space with the most performant ACI appliance on the market anywhere. And most performant, most consistent, deeply engineered across all the stacks. Can took that complexity, took our learnings and DNA networking and married it together to create something unique for the industry. Now you think, do other domains come together? Now its the Cloud and on-prem. And if that comes together we see similar kinds of complexity. Complexity in security, complexity in networking, complexity in policy and enforcement across layers. Complexity, frankly in management, and how do you make that management much more simple and consumerized? We're taking that complexity and distilling it down into developing a very simple appliance. So what we're trying to deliver to the customer is a simple appliance that they can stand and procure and set up much in the way that they're used to but now the appliance is scale out. Its much more Cloud like. Its managed from the Cloud. So its got that consumer modern feel to it. Now you can deliver on this a container environment, a container development environment, for your developer stakeholders. You can deliver security that's plumed through and across multiple layers, networking that's plumed through and across multiple layers, at the end of the day we've taken those boundaries between Cloud and data center and blown them away. >> And you've merged operational constructs of the old data center operations to Cloud like operations, >> Yeah. 
>> Everything's just a service, you got Microservices coming, so you didn't really lose anything, you'd mentioned democratizing IT earlier, you guys are bringing the HyperFlex to ACI to the table so you now can let customers run, is that right? Am I getting it right? >> That's right. Its all about how do you take new interesting technologies that are developed somewhere, that may have complexity because its open source and exchanging all the time or it may have complexity because it was not been for a different environment, not for the on-prem environment. How do you take that innovation and democratize it so that everybody, all of the 100's of thousands and millions of enterprise customers can use it and feel comfortable using it and feel comfortable actually embracing it in a way that gives them the security, gives them the networking that's needed and gives them a way that they can serve their internal stakeholders very easily. >> Guys thanks for taking the time for this awesome conversation. One final question, gettin you both to weigh in on, here at Google Next 2019, we're in 2019. Cloud's going a whole other level here. What's the most important story that customers should pay attention to with respect to expanding into the Cloud, taking advantage of the growing developer ecosystem as open source continues to go to the next level. What's the most important thing happening around Google Next and the industry with respect to Cloud and for the enterprise? >> Well I think certainly here at Google Next the Google Cloud's Anthos announcement is going to be of tremendous interest to enterprises cause as you said they are extending into the Cloud and this is another great option for enterprises who are looking to do that. >> Yeah and as I look at it suddenly IT has a set of new options. They used to be able to pick networking and compute and storage, now they can pick Kubeflow for AI or they can pick Kubernetes for container development, Anthos for an on-prem version. They're shopping list has suddenly gone up. We're trying to keep that simple and organized for them so that they can pick the best ingredients they can and build the best infrastructure they can, they can do it. >> Guys thanks so much. Kip Compton senior vice president Cloud Platform and Solutions Group and KD vice president of the Data Center compute group for Cisco. Its been exclusive CUBE conversation around the Google-Cisco big news at Google Next 2019 and I'm John Furrier thanks for watching. (upbeat jazz music)

Published Date : Apr 9 2019


Nataraj Nagaratnam, IBM | IBM Think 2019


 

>> Live from San Francisco, it's theCUBE! Covering IBM Think 2019! Brought to you by IBM. >> We're back at IBM Think 2019. I'm Dave Vellante with Stu Miniman, and Lisa Martin is also here. John Furrier will be up tomorrow. We're here at Moscone North. Stop by and see us. Raj Nagaratnam is here. He's a distinguished engineer, CTO and director of cloud security for IBM hybrid cloud. Raj, good to see you again, and thanks for coming on. >> Good to see you. Yeah, absolutely. >> So you're in all the hot places. Security, cloud, hybrid cloud. A lot going on in your world. >> Absolutely! Lots going on. I think we see a lot of enterprises moving to cloud, and like IBM says, there's a lot more to move, right. Just 20% is out there. But security is top of mind, so you're right, it's a sweet spot. >> What is cloud to people? Because you guys define it as sort of, what I would say, a cloud experience, not a place, but how you operate. What do customers think of when they think of cloud and hybrid cloud? >> Definitely. For our customers, anything that they can consume as a service is a cloud, so that's the SaaS perspective, and IBM and others do have SaaS properties. But in the context of this discussion, my area of focus is how enterprises build applications, whether they're enterprise applications or their consumer-facing applications. As they look at that landscape, how do they take advantage of cloud and a cloud platform where they can build on premise with a private cloud, or take advantage of a set of services and seamlessly integrate into a public cloud or a multicloud? So ultimately, for their applications, how do they leverage the benefit? >> Security was always a big concern, especially in the early days of cloud. You mentioned that we're in the next phase of the journey. We've hit the 20%, the low-hanging fruit so to speak. But even early on, security was a major, major concern. Won't those concerns heighten as you start moving more mission-critical workloads into the cloud?
So give us a little bit: how do we make sure that enterprises, you know, can live in today's security climate and not be totally paranoid all the time? >> That's right. Security is not a binary thing, right? It's not like you're secure or insecure. Is it secure enough from a risk perspective, right? So when you look at data, say, are you dealing with sensitive data, private data or mission-critical data, and how do you protect it? And are you taking the right steps? Like you rightfully said, cloud is an opportunity to do security right. Many times in the past, app teams would build apps and throw them over the wall for the security team to secure. That has changed. We need to put security up front as part of the entire process. As we think about it as Dev and Ops, now it is more important to make security part of it, where you have DevSecOps, so that right from how you design, build, and operate applications, application teams have equal responsibility and accountability as you operate in cloud. Not just, hey, I'm going to throw it over the wall and get a security team to do it. So that collaborative model between a line of business and an application team on one end, and the security team and operations team on the other end — kind of classic IT — come together, and cloud makes it possible. >> What's the role of the line of business in that equation, Raj? Is it sort of to set the risk profile, the value of the data? Talk about that a little bit. >> Yeah. Line of business thinks about, obviously from their perspective, what data they deal with, what business they are in. It may be retail banking on one end, or it could be payment processing on the other end. So they are looking at how fast they need to reach the data or bring the applications to the cloud for the consumers, to reach the digital transformation that they are going through, right? So on one end they are going through digital transformation. On the other end, the security team, from a typical security officer perspective, sets policies. If there are certain regulations that you need to follow, what kind of data can be put in cloud, or if you put it there, what kind of controls and protection you need. So the policy from a security and risk perspective comes from the security team. Line of business looks at it and says, this is what we need to do faster to go to market, to expand the business. And now they need to look at it and say, how do we bring these things together? What risk am I willing to take? Or what controls and security capabilities do I need to protect my app and data with to mitigate the risk? So that's the model that they are in discussions about. >> Raj, one of the areas we've talked to IBM a lot about is what's happening in the container space, what's happening in Kubernetes. What role is IBM playing to help the industry as a whole, and IBM's products specifically, to be more secure in that space? >> So it is about helping customers build secure applications and deploy them. It is a responsibility model. From that perspective, you brought up a very important point. When you look at Cloud-Native and Kubernetes as an example, it provides a (mumbles) opportunity. So the way we build our (mumbles) service, we have built security in. More importantly, we are also providing security services. So let me simplify this, right? From an elevated perspective, when you deploy an application, you need to think about how you manage access to your application. That may be (mumbles) to an attack, so how do you protect against network threats?
We have a capability called Cloud Internet Services to protect against it. Okay, you are letting the good guys in — now how do you know who it is? So you need to authenticate the person, right? So we have a service called App ID that integrates seamlessly, because developers don't need to care about the security, the (mumbles) technology details. We make it simple so developers focus on business logic. So that's about managing access. The next thing is, the application now needs to protect the data. So how do you protect the data of an application? You may put it in a Cloud-Native database or an object store, right? In the new models these things evolve. And the first thing companies try to do is protect it — encrypt it. As some people would say, encryption is for amateurs, key management is for professionals. So ultimately it comes down to: how do you manage your keys? And ultimately customers want more control of their keys, so what we tell them in the industry is bring your own key, right? So the customer controls the key even though the encryption happens in the cloud. We provide that capability with our Key Protect service, so all our databases are already integrated, our object store is integrated, our virtual servers are integrated, right? So with these capabilities, whenever you encrypt the data, it's provided. But given IBM's history, we understand that risk and financial teams go together. We are introducing a new paradigm, which we are announcing this week: it's not just bring your own key, it's keep your own key. This way it's not only about how you control the key; in cryptography land, the keys get managed and protected by an HSM, a Hardware Security Module. We give them the entire module that they can control. The HSM can be controlled by the customer along with the key. This is a shift, because now customers can gain more confidence with that. So this service is called the Hyper Protect Crypto service that we are bringing to market, built on IBM's top-level security capabilities. If you can imagine banks running on our mainframe — whenever you see it in the movies, security people say, oh, it's a mainframe, they couldn't hack their way into that system. That's the level of security, the top level of security we have. We are bringing that to cloud to make the data secure. And another thing that we are working on and announcing this week: it's not just about whether the data is in the database or in encrypted form at rest. It's also when it's processed by an application, in memory. Imagine you have a payment service, a credit card payment, and someone logs into the system and dumps the memory — voila, you get the credit card, right? Now we can protect it. Working with Intel, who we are partnered with, we are launching a capability where, when the data goes into memory, we can protect it. So end-to-end we are looking at manage access, protect data — and, since you can't protect what you don't see, we provide visibility. Who has accessed my services, through access logs? Are there threats? We are infusing machine learning and AI to detect malicious behavior on the network, bringing it all to a single dashboard called Security Advisor. So: manage access, protect data, gain visibility. More importantly, all of this in the context of developers — developer focus, developer experience — so that in a single click, in an automated way, they can protect their apps. That's our goal. That's where our customers want to go. And we are addressing that with these capabilities. It's a journey.
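To make the "bring your own key" idea Raj describes a bit more concrete, here is a minimal envelope-encryption sketch in Python. It is illustrative only — not the IBM Key Protect or Hyper Protect Crypto API — and it assumes the third-party `cryptography` package; every name in it is made up for the example.

```python
# Minimal envelope-encryption sketch of "bring your own key": a customer-held
# root key wraps a per-record data-encryption key (DEK), so the provider only
# ever stores ciphertext plus a wrapped DEK, never the root key itself.
# Illustrative only -- not the IBM Key Protect / Hyper Protect Crypto API.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

customer_root_key = AESGCM.generate_key(bit_length=256)  # stays with the customer (or their HSM)

def encrypt_record(plaintext: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)             # fresh data-encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)
    # Wrap the DEK with the customer's root key; only the wrapped form is stored.
    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(customer_root_key).encrypt(wrap_nonce, dek, None)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_dek": wrapped_dek}

def decrypt_record(record: dict) -> bytes:
    dek = AESGCM(customer_root_key).decrypt(record["wrap_nonce"], record["wrapped_dek"], None)
    return AESGCM(dek).decrypt(record["nonce"], record["ciphertext"], None)

record = encrypt_record(b"4111-1111-1111-1111")
assert decrypt_record(record) == b"4111-1111-1111-1111"
```

The "keep your own key" variant discussed above goes one step further: the wrapping operation itself runs inside a customer-controlled HSM, so the root key never exists in the provider's software at all.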
>> Yeah, so I wanted to ask you how customers — what's best practice for scaling and automating all this? And I think you've touched upon several things. It's design security in, don't bolt it on. DevSecOps, for example. It's scaling the key management and automating that key management. Those are at least a couple of the components that I've heard. Maybe you could, you know, follow through and add some color to that. >> Definitely. So when you look at DevSecOps, right, from a developer perspective, as they build, automation tools take the application through a pipeline in order to deploy it. Let's take (mumbles) as an example. In the past, or in a traditional IT world, there may be (mumbles) in the system, so you need to patch them. Then there's tension between the IT operations team saying, oh, I need to patch these things, and the security team saying, no, no, I've got to patch it. In the new world, why patch it? Why not spin up a new container that's now the most protected one? As you find vulnerabilities, spin up a new image and swap it in, right? In that context, as a developer integrating these things: how do I deploy an application to manage access? You can integrate with our internet services so that any attack can be protected against. You can deploy it so your services integrate with identity and can be authenticated. So those kinds of things are built into the application. And then, as you put this through the pipeline, vulnerabilities are being scanned. You can set your policy to say, if you have a lot of vulnerabilities, don't deploy to production. That's part of your DevOps policy that you can set. And then, as you work with your security team, you can say, hey guys, you can manage the keys, but tell me which database and which key to use. So the management may be the security team's responsibility; the application team looks at which database, which key, and configures it. So that moves more towards a policy management and configuration problem. So it's about DevTools integrating security into the design and into the development and automation end to end. That brings a collaborative culture, because it's not a technology problem. It's a cultural problem, an organizational challenge, that these kinds of capabilities help customers with. >> Why IBM? Give us the commercial. >> (laughs) Well, IBM is a trusted provider from a customer's perspective. We know enterprises. For all these years, for many, many decades, we have run enterprise systems, banking, the most critical data and workloads. With our expertise, that's technology on one end. So when you look at IBM cloud with security built in — IBM Security, a world-leading enterprise security set of capabilities — you have one plus one equals three. Not to mention our expertise. We know our services capabilities, (mumbles), helping customers understand compliance, how to work with security or even managed security services. So that brings technology, expertise and capabilities, with years' worth of experience, that we bring to the table. >> Stu, I would say IBM does hard well, and security's hard, so Raj, thanks so much for coming on theCUBE and sharing with us some of the progress that IBM's making. Congratulations. >> Absolutely. Thank you very much. >> Alright, you're welcome. Keep it right there, buddy. Stu and I will be back with Lisa Martin. We're here at IBM Think, day one, on theCUBE. Right back after this short break. (electronic theme music)
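The pipeline gate Raj describes — scan the image, and block the deploy when the vulnerability count exceeds what the security team allows — might look roughly like the sketch below. This is not IBM's tooling; the policy thresholds, field names, and function are all hypothetical, shown only to illustrate policy-as-configuration in a DevSecOps pipeline.

```python
# Sketch of a DevSecOps pipeline gate: fail the build when an image scan
# exceeds the severity limits the security team set as policy.
# All names and thresholds are illustrative, not a real product's schema.
from collections import Counter

POLICY = {"critical": 0, "high": 2, "medium": 10}   # max findings allowed per severity

def gate_deploy(scan_findings):
    """scan_findings: list of dicts like {'id': 'CVE-2019-0001', 'severity': 'high'}."""
    counts = Counter(f["severity"] for f in scan_findings)
    violations = {sev: counts[sev] for sev, limit in POLICY.items() if counts[sev] > limit}
    if violations:
        # Exiting non-zero is what actually stops a CI/CD pipeline stage.
        raise SystemExit(f"Deploy blocked by security policy: {violations}")
    print("Scan within policy -- promoting image to production.")

gate_deploy([{"id": "CVE-2019-0001", "severity": "medium"}])
```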

Published Date : Feb 11 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Raj Nagaratnam | PERSON | 0.99+
San Francisco | LOCATION | 0.99+
Nataraj Nagaratnam | PERSON | 0.99+
Stu | PERSON | 0.99+
20% | QUANTITY | 0.99+
Dave Vellante | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
John Furey | PERSON | 0.99+
first | QUANTITY | 0.99+
Raj | PERSON | 0.99+
tomorrow | DATE | 0.99+
three | QUANTITY | 0.99+
DevSecOps | TITLE | 0.99+
one | QUANTITY | 0.99+
today | DATE | 0.99+
this week | DATE | 0.98+
Five years ago | DATE | 0.98+
Intel | ORGANIZATION | 0.97+
Moscone North | LOCATION | 0.96+
chapter two | OTHER | 0.96+
one end | QUANTITY | 0.96+
Kubernetes | TITLE | 0.94+
single dashboard | QUANTITY | 0.93+
two decades back | DATE | 0.92+
DevTools | TITLE | 0.91+
single click | QUANTITY | 0.91+
IBM Think day | EVENT | 0.8+
2019 | DATE | 0.79+
last couple of years | DATE | 0.79+
Lap ID | TITLE | 0.76+
Cloud-Native | TITLE | 0.7+
Think 2019 | EVENT | 0.68+
Protect | ORGANIZATION | 0.6+
Think 2019 | COMMERCIAL_ITEM | 0.55+
DevOps | TITLE | 0.54+
Hyper | COMMERCIAL_ITEM | 0.4+
IBM Think | EVENT | 0.36+
security | TITLE | 0.35+

Massimo Morin, Peter Yen, Lawrence Fong | AWS Executive Summit 2018


 

>> Live from Las Vegas, it's theCUBE covering the AWS Accenture Executive Summit. Brought to you by Accenture. >> Welcome back, everyone, to theCUBE's live coverage of the AWS Executive Summit, here at The Venetian. I'm your host, Rebecca Knight. We have three guests for this segment. We have Lawrence Fong, general manager, information technology at Cathay Pacific; Peter Yen, managing director, Hong Kong, Accenture; and Massimo Morin, head of worldwide business development, travel, at AWS. Thank you so much, gentlemen, for coming on theCUBE. >> Thank you. >> Thank you. >> So we're going to be talking about applying blockchain to a travel rewards program at Cathay Pacific, but I want to start with you, Lawrence. Let's describe the business problem that you were trying to solve. The Asia Miles program is already sort of a world-class program, very competitive. But it still had its kinks. So, what were you trying to do to make it better? >> Okay, first of all, Asia Miles is a lifestyle, you know, frequent flyer loyalty program, and they're running over 460 marketing campaigns a year. So, you can imagine how much work they have to do. So, from the customer point of view, they have a pain point: for whatever activities of redemption or for awards, all these kinds of things, it takes a long time for them to get their miles. So, from the customer point of view, this is not really ideal. And on the other hand, at the back office, because we're running so many marketing campaigns, there's a lot of back office operation and a lot of paperwork and all this kind of thing. So, it's also not, I think, very good operational efficiency. So, from the customer point of view and from the back office point of view, those are the key pain points we wanted to solve. >> Right. So, it was tedious to operate for both the customer and for the business itself. So, why was blockchain the technology that could solve it? >> Well, we studied one of the key features, or components, of blockchain called the 'smart contract'. And we could see the smart contract would be able to help bring our customers, Asia Miles, and also our merchants together. So, by using blockchain, the miles, the redemption, all this will happen almost in a second. >> So, how did this work, Lawrence? I mean, in terms of working together with Cathay Pacific, how did you work together to create this new program? >> Okay. Effectively, it's a very co-create process. It started with a conversation with Lawrence. We had the idea, so Lawrence was courageous enough to let us try. We did a very short, quick pilot. We proved the concept. Then we went into a very rapid development cycle as well. And then, within weeks, we got the product done, and then we launched and went to the market. >> So, Peter, is that generally the way it goes, in terms of this co-creative process? I mean, we're hearing so much that Accenture and AWS have these solutions that they can bring to clients, and then, is it sort of happening in the background, or are you on the ground together, sort of dreaming up ways to make this better and make the technology work? >> Well, we used to call this the new way of doing things, but I think now this is the way of doing things, right? Because it is the perfect combination. The client has perfect knowledge about the business, we understand the technology, and we have enablement partners like Amazon. So, we just work together and make it happen.
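The "smart contract" mechanism Lawrence describes — miles credited the moment a qualifying transaction hits the ledger, rather than after batch reconciliation — can be sketched in miniature as below. This is purely illustrative Python, not the actual Asia Miles contract or anything Accenture shipped; the campaign names, rules, and member IDs are invented for the example.

```python
# Toy sketch of the "instant crediting" idea: when a qualifying partner
# transaction is recorded on the ledger, the campaign rule fires and the
# member's balance updates immediately -- no end-of-month reconciliation.
# Illustrative only; not the real Asia Miles implementation.

CAMPAIGN_RULES = {
    "dining-2x": lambda spend: spend * 2,       # 2 miles per dollar spent
    "hotel-bonus": lambda spend: spend + 500,   # spend plus a flat bonus
}

balances = {}   # member_id -> miles

def record_transaction(member_id, campaign, spend, ledger):
    miles = CAMPAIGN_RULES[campaign](spend)
    balances[member_id] = balances.get(member_id, 0) + miles
    ledger.append({"member": member_id, "campaign": campaign,
                   "spend": spend, "miles_awarded": miles})
    return miles

ledger = []
record_transaction("member-42", "dining-2x", 120, ledger)
print(balances["member-42"], ledger[-1])   # 240 miles credited at transaction time
```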
>> So, from Amazon — we hear blockchain, you automatically think Bitcoin. You just do. But this is actually a very different kind of use case for blockchain, and it's one that really is so pertinent. Can you talk a little, Massimo, about other use cases that you're seeing? >> So, indeed, you are right. Blockchain has been very nebulous, and always associated with Bitcoin, but there are actually some use cases that are much more relevant, especially in the travel industry, where you have complex, multi-party transactions and where you actually need transparency and data integrity. For example, we had a proof of concept with IATA about a One ID project that allows a travel agency to register themselves with this authority and get a key, and then seamlessly do transactions with travel providers by identifying themselves through blockchain. That allows them to actually be recognized, and you have a seamless process with the new NDC, the new distribution capability, coming along. That is going to be extremely important. This is one type. Another type is when you want immutability of the data. For example, when you have planes and you want to see them going on and off lease, and you want to see all the maintenance that occurred, and you want that record not to change. You want a trusted system that is transparent and that is not changeable. And that provides a lot of value. And the third use case that I personally like is automatic contracts. So, when, for example, you have corporate buyers that buy travel products from a travel provider like Cathay Pacific, and you buy the ticket — when is the airline going to get the money? That reconciliation, like with the frequent flyer miles, you want done as soon as possible. Another case is: does the passenger actually fly? If they don't fly, well, what happens to the taxes? Taxes should actually be returned to the customer. So, with automatic contracts, you would be able to reconcile that behind the scenes. These are use cases that are very valuable in the travel industry. >> So, this immediate reconciliation and this trust — I mean, trust is such an important, big concept right now. What are you hearing, from both the clients' side and the provider's side? I mean, where are we? >> Yeah, that's true. I think trust is one of the key elements of, you know, doing reconciliation. So, what we are doing now is still within our legal system. So, we trust each other. But, looking forward, I think one of the key areas where blockchain will help a lot is the entire supply chain. But, when we talk about the supply chain, there are so many stakeholders. So, building trust, of course, across those stakeholders will be a challenge. I think that's something, you know, the industry of course has to put more thought onto. >> What are we seeing so far? So, this was implemented in April of this year. What has been the return on investment so far? >> It's phenomenal. For those marketing campaigns where we're using blockchain, these new capabilities, we had triple-digit growth in terms of our sales. And also, because we use kind of a game to gamify the whole thing, we create a lot of traction in there, you know? A lot of excitement. So, the number of people and the number of customers engaged in those marketing campaigns also had more than double, you know, growth. >> Peter, what's most exciting to you about this process?
>> The most exciting thing is that, as you heard from Lawrence, it is indeed generating performance and results. And the process of co-creating a successful solution is a very rewarding experience. >> So, and then AWS — in terms of the co-creative process, where does AWS fit into this? >> So, we are the enabler, and I'm glad that Cathay Pacific and Accenture are using AWS for this. We have standard templates, blockchain templates, that actually take away all the heavy lifting of putting the platform in place to run the blockchain. So the customer and the partner can focus on the business need that they have to attend to. And this is all open source, so you can see how it works. And it's so transparent that we are very glad to enable our customers to do transformative things like this. >> So, the word is out that blockchain is not just for Bitcoin anymore. So, where do we go from here? We're talking about the travel industry, but the learnings that Cathay Pacific has had, and Accenture — how applicable are they to other industries? And how are you sharing what you've learned in a collaborative, co-creative process? >> Well, with all of that, in Asia Miles, we are now taking what we learned from the blockchain and we are going to apply it to the cargo industry, and also apply it to airport operations. In particular, the baggage — the reconciliation of baggage between different parties, of course, all on the blockchain. >> Great. >> Actually, many clients are now talking about this Cathay Pacific case, and they have very creative ideas on how to borrow the concept and apply it to their own business. So, we should see more and more application of this solution. >> And we are seeing acceleration of adoption of cloud technology throughout the travel industry, with airlines and technology providers out there. And I'm very glad that there is thought leadership, for example from Cathay Pacific, to take these hypothetical use cases, take the lead on showing how it is done, and share it with the industry. We are looking for those travel leaders that will help the industry move forward. >> That's true. >> Because it's a very challenging industry with very low margins, and any improvement in customer service is going to go a long way. And we are glad to be part of that. >> And is that what it is? I mean, as you said, even the incremental improvement and how that can be just so transformational for a company's bottom line. >> Yep. >> Yes. >> Yep. Absolutely. >> Well, Massimo, Peter, Lawrence, thank you so much for joining us on theCUBE. It's been a really fun conversation. >> Thank you. >> Thank you very much. >> I'm Rebecca Knight. We will have more of theCUBE's live coverage of the AWS Executive Summit coming up in just a little bit. (thrilling music)
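The tamper-evidence property behind the lease, maintenance, and baggage use cases discussed above can be shown with a toy hash-chained ledger. This is a conceptual sketch only — real deployments would use a blockchain framework rather than this hand-rolled chain — and the bag IDs and flight numbers are invented.

```python
# Toy hash-chained ledger illustrating why an append-only, linked record
# is tamper-evident: changing any past entry breaks verification.
# Conceptual sketch only, not AWS's blockchain templates or any real system.
import hashlib, json

def add_entry(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != expected_prev or entry["hash"] != recomputed:
            return False
    return True

chain = []
add_entry(chain, {"bag": "HKG123456", "event": "loaded", "flight": "CX880"})
add_entry(chain, {"bag": "HKG123456", "event": "transferred", "flight": "CX250"})
print(verify(chain))                     # True
chain[0]["record"]["event"] = "lost"     # tamper with history...
print(verify(chain))                     # False -- the chain no longer verifies
```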

Published Date : Nov 28 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Rebecca Knight | PERSON | 0.99+
Massimo Morin | PERSON | 0.99+
Lawrence Fong | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Cathay Pacific | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Peter Yen | PERSON | 0.99+
Peter | PERSON | 0.99+
Lawrence | PERSON | 0.99+
Accenture | ORGANIZATION | 0.99+
Massimo | PERSON | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
three guests | QUANTITY | 0.99+
theCUBE | ORGANIZATION | 0.98+
one type | QUANTITY | 0.98+
Hong Kong Accenture | ORGANIZATION | 0.97+
AWS Executive Summit | EVENT | 0.95+
over 460 marketing campaign | QUANTITY | 0.95+
third use case | QUANTITY | 0.95+
April of this year | DATE | 0.94+
AWS Executive Summit 2018 | EVENT | 0.9+
more than double | QUANTITY | 0.85+
one ID | QUANTITY | 0.83+
Accenture Executive Summit | EVENT | 0.82+
Asia Miles | ORGANIZATION | 0.81+
a year | QUANTITY | 0.7+
Asia | LOCATION | 0.7+
Miles | ORGANIZATION | 0.7+
AWS | EVENT | 0.67+
Asia Miles program | OTHER | 0.6+
Asia | TITLE | 0.57+
each | QUANTITY | 0.56+
IATA | TITLE | 0.56+
Venetian | LOCATION | 0.55+
Bitcoin | OTHER | 0.53+
Bitcoins | OTHER | 0.51+
elements | QUANTITY | 0.48+
Miles | OTHER | 0.46+
second | QUANTITY | 0.44+

Jon Rooney, Splunk | Splunk .conf18


 

>> Announcer: Live from Orlando, Florida. It's theCube. Covering .conf18, brought to you by Splunk. >> We're back in Orlando, Dave Vellante with Stu Miniman. John Rooney is here. He's the vice president of product marketing at Splunk. Lot's to talk about John, welcome back. >> Thank you, thanks so much for having me back. Yeah we've had a busy couple of days. We've announced a few things, quite a few things, and we're excited about what we're bringing to market. >> Okay well let's start with yesterday's announcements. Splunk 7.2 >> Yup. _ What are the critical aspects of 7.2, What do we need to know? >> Yeah I think first, Splunk Enterprise 7.2, a lot of what we wanted to work on was manageability and scale. And so if you think about the core key features, the smart storage, which is the ability to separate the compute and storage, and move some of that cool and cold storage off to blob. Sort of API level blob storage. A lot of our large customers were asking for it. We think it's going to enable a ton of growth and enable a ton of use cases for customers and that's just sort of smart design on our side. So we've been real excited about that. >> So that's simplicity and it's less costly, right? Free storage. >> Yeah and you free up the resources to just focus on what are you asking out of Splunk. You know running the searches and the safe searches. Move the storage off to somewhere else and when you need it you pull it back when you need it. >> And when I add an index or I don't have to both compute and storage, I can add whatever I need in granular increments, right? >> Absolutely. It just enables more graceful and elastic expansiveness. >> Okay that's huge, what else should we know about? >> So workload management, which again is another manageability and scale feature. It's just the ability to say the great thing about Splunk is you put your data in there and multiple people can ask questions of that data. It's just like an apartment building that has ... You know if you only have one hot water heater and a bunch of people are taking a shower at the same time, maybe you want to give some privileges to say you know, the penthouse they're going to get the hot water first. Other people not so much. And that's really the underlying principle behind workload management. So there are certain groups and certain people that are running business critical, or mission critical, searches. We want to make sure they get the resources first and then maybe people that are experimenting or kind of kicking the tires. We have a little bit of a gradation of resources. >> So that's essentially programmatic SLAs. I can set those policies, I can change them. >> Absolutely, it's the same level of granular control that say you were on access control. It's the same underlying principle. >> Other things? Go ahead. >> Yeah John just you guys always have some cool, pithy statements. One of the things that jumped out to me in the keynotes, because it made me laugh, was the end of metrics. >> John: Yes. >> You've been talking about data. Data's at the ... the line I heard today was Splunk users are at the crossroads of data so it gives a little insight about what you're doing that's different ways of managing data 'cause every company can interact with the same data. Why is the Splunk user, what is it different, what do they do different, and how is your product different? >> Yeah I mean absolutely. 
I think the core of what we've always done and Doug talked about it in the keynote yesterday is this idea of this expansive, investigative search. The idea that you're not exactly sure what the right question is so you want to go in, ask a question of the data, which is going to lead you to another question, which is going to lead you to another question, and that's that finding a needle in a pile of needles that Splunk's always great at. And we think of that as more the investigative expansive search. >> Yeah so when I think back I remember talking with companies five years ago when they'd say okay I've got my data scientists and finding which is the right question to ask once I'm swimming in the data can be really tough. Sounds like you're getting answers much faster. It's not necessarily a data scientist, maybe it is. We say BMW on stage. >> Yeah. >> But help us understand why this is just so much simpler and faster. >> Yeah I mean again it's the idea for the IT and security professionals to not necessarily have to know what the right question is or even anticipate the answer, but to find that in an evolving, iterative process. And the idea that there's flexibility, you're in no way penalized, you don't have to go back and re-ingest the data or do anything to say when you're changing exactly what your query is. You're just asking the question which leads to another question, And that's how we think about on the investigative side. From a metric standpoint, we do have additional ... The third big feature that we have in Splunk Enterprise 7.2 is an improved metrics visualization experience. Is the idea of our investigative search which we think we are the best in the industry at. When you're not exactly sure what you're looking for and you're doing a deep dive, but if you know what you're looking for from a monitoring standpoint you're asking the same question again and again and again, over and again. You want be able to have an efficient and easy way to track that if you're just saying I'm looking for CPU utilization or some other metric. >> Just one last follow up on that. I look ... the name of the show is .conf >> Yes. >> Because it talks about the config file. You look at everywhere, people are in the code versus gooey and graphical and visualization. What are you hearing from your user base? How do you balance between the people that want to get in there versus being able to point and click? Or ask a question? >> Yeah this company was built off of the strength of our practitioners and our community, so we always want to make sure that we create a great and powerful experience for those technical users and the people that are in the code and in the configuration files. But you know that's one of the underlying principles behind Splunk Next which was a big announcement part of day one is to bring that power of Splunk to more people. So create the right interface for the right persona and the right people. So the traditional Linux sys admin person who's working in IT or security, they have a certain skill set. So the SPL and those things are native to them. But if you are a business user and you're used to maybe working in Excel or doing pivot tables, you need a visual experience that is more native to the way you work. And the information that's sitting in Splunk is valuable to you we just want to get it to you in the right way. And similar to what we talked about today in the keynote with application developers. 
The idea of saying, well, everything that you need is going to be delivered in a payload and JSON objects makes a lot of sense if you're a modern application developer. If you're a business analyst somewhere, that may not make a lot of sense, so we want to be able to service all of those personas equally. >> So you've made metrics a first class citizen. >> John: Absolutely. >> Opening it up to more people. I also wanted to ask you about the performance gains. I was talking to somebody and I want to make sure I got these numbers right. It was literally like three orders of magnitude faster. I think the number was 2000 times faster. I don't know if I got that number right, it just sounds ... implausible. >> That's specifically what we're doing around the data fabric search, which we announced in beta on day one, simply because of the approach to the architecture and the approach to the data. I mean, Splunk is already amazingly fast, amazingly best in class in terms of scale and speed. But you realize that what's fast today, because of the pace and growth of data, isn't quite so fast two, three, four years down the road. So we're really focused on looking well into the future and enabling those types of orders-of-magnitude growth by completely reimagining and rethinking what the architecture looks like. >> So talk about that a little bit more. Is that ... I was going to say, is that the source of the performance gain? Is it sort of the architecture, is it tighter code, was it a platform do-over? >> No, I mean, it wasn't a platform do-over. It's the idea that, in some cases, instead of thinking like I'm federating a search between one index here and one index there, you have a virtualization layer that also taps into compute — let's say living in Apache Kafka — taking advantage of those sorts of open source projects and open source technologies to further enable and power the experiences that our customers ultimately want. So we're always looking at what problems our customers are trying to solve, how we deliver to them through the product, and that constant iteration, that constant self-evaluation, is what drives what we're doing. >> Okay, now today was all about the line of business. We've been talking about — I've used the term land and expand about a hundred times today. It's not your term, but others have used it in the industry and it's really the template that you're following. You're in deep in sec ops, you're in deep in IT operations management, and now we're seeing just big data permeate throughout the organization. Splunk is a tool for business users and you're making it easier for them. Talk about Splunk Business Flow. >> Absolutely, so Business Flow is the idea that we had ... Again, we learned from our customers. We had a couple of customers that were essentially tip of the spear, doing some really interesting things where, as you described, let's say the IT department said, well, we need to pull in this data to check out application performance and those types of things. The same data that's flowing through is going to give you insight into customer behavior. It's going to give you insight into coupons and promotions and all the things that the business cares about. If you're a product manager, if you're sitting in marketing, if you're sitting in promotions, that's what you want to access, and you want to be able to access it in real time. So the challenge that we're now stepping into with things like Business Flow is: how do you create an interface?
How do you create an experience that again matches those folks and how they think about the world? The magic, the value that's sitting in the data — we just have to surface it in the right way for the right people. >> Now the demo — Stu knows I hate demos, but the demo today was awesome. And I really do, I hate demos because most of them are just so boring, but this demo was amazing. You took a bunch of log data and a business user ingested it and looked at it and it was just a bunch of data. >> Yeah. >> Like you'd expect, and go, eh, what am I supposed to do with this? And then he pushed a button, and all of a sudden there was a flow chart, and it showed the flow of the customer through the buying pattern. Now maybe that's a simpler use case, but it was still very powerful. And then he isolated on where the customer actually made a phone call to the call center, because you want to avoid that if possible, and then he looked at the percentage of drop-outs, which was like 90% in that case, versus the percentage of drop-outs in a normal flow, which was 10%. Oop, something's wrong — drilled in, fixed the problem. He showed how he fixed it, oh, graphically beautiful. Is it really that easy? >> Yeah, I mean, think about what we've done in computing over the last 40 years. If you think about even the most basic word processor, the most basic spreadsheet work, that was done by trained technicians 30-40 years ago. But the democratization of data created this notion of the information worker, and we're a decade or so plus into big data and the idea that, oh, that's only highly trained professionals and scientists and people that have PhDs. There's always going to be an aspect of the market, or an aspect of the use cases, that is of course going to be that level of sophistication, but ultimately this is all work for an information worker. If you're an information worker, if you're responsible for driving business results and looking at things, it should be the same level of ease as your traditional sort of office suite. >> So I want to push on that a little if I can, and just test this, because it looked so amazingly simple. Doug Merritt made the point yesterday that business processes used to be codified. Codifying business processes is a waste of time because business processes are changing so fast. The business process that you used in the example was a very linear process, admittedly. I'm going to search for a product, maybe read a review, I'm going to put it in my cart, I'm going to buy it. You know, very straightforward. But business processes as we know are unpredictable now. Can that level of simplicity work when the data feeds some kind of unpredictable business process? >> Yeah, and again that's our fundamental difference, how we've done it differently than everyone in the market. It's the same thing we did with IT Service Intelligence when we launched that back in 2015, because it's not a top-down approach. We're not dictating, taking sort of a central planning approach, to say this is what it needs to look like, the data needs to adhere to this structure. The structure comes out of the data, and that's what we think. It's a bit of a simplification, but I'm a marketing guy and I can get away with it. But that's where we think we do it differently, in a way that allows us to reach all these different users and all these different personas. So it doesn't matter. Again, that business process emerges from the data.
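At its core, the Business Flow demo described above is deriving step-to-step transitions and drop-off rates from raw events grouped per customer. Here is a rough, non-Splunk sketch of that computation; the event names and data are made up, and this says nothing about how Splunk actually implements it.

```python
# Rough idea of the journey-flow computation: group events per customer,
# count step-to-step transitions, and surface where journeys end without
# a purchase. Illustrative only; not Splunk's implementation.
from collections import defaultdict, Counter

events = [  # (customer_id, timestamp, step) -- as pulled from logs
    ("c1", 1, "search"), ("c1", 2, "add_to_cart"), ("c1", 3, "purchase"),
    ("c2", 1, "search"), ("c2", 2, "add_to_cart"), ("c2", 3, "call_center"),
    ("c3", 1, "search"), ("c3", 2, "add_to_cart"), ("c3", 3, "purchase"),
]

journeys = defaultdict(list)
for cust, ts, step in sorted(events, key=lambda e: (e[0], e[1])):
    journeys[cust].append(step)

transitions, drop_offs = Counter(), Counter()
for steps in journeys.values():
    transitions.update(zip(steps, steps[1:]))   # edges of the flow chart
    if steps[-1] != "purchase":
        drop_offs[steps[-1]] += 1               # journey ended before buying

print(dict(transitions))
print("drop-off rate after call_center:", drop_offs["call_center"] / len(journeys))
```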
>> And Stu, that's going to be important when we talk about IOT but jump in here. >> Yeah so I wanted to have you give us a bit of insight on the natural language processing. >> John: Yeah natural language processing. >> You've been playing with things like the Alexa. I've got a Google Home at home, I've got Alexa at home, my family plays with it. Certain things it's okay for but I think about the business environment. The requirements in what you might ask Alexa to ask Splunk seems like that would be challenging. You're got a global audience. You know, languages are tough, accents are tough, syntax is really really challenging. So give us the why and where are we. Is this nascent things? Do you expect customers to really be strongly using this in the near future? >> Absolutely. The notion of natural language search or natural language computing has made huge strides over the last five or six years and again we're leveraging work that's done elsewhere. To Dave's point about demos ... Alexa it looks good on stage. Would we think, and if you're to ask me, we'll see. We'll always learn from the customers and the good thing is I like to be wrong all the time. These are my hypotheses, but my hypothesis is the most actual relevant use of that technology is not going to be speech it's going to be text. It's going to be in Slack or Hipchat where you have a team collaborating on an issue or project and they say I'm looking for this information and they're going to pass that search via text into Splunk and back via Slack in a way that's very transparent. That's where I think the business cases are going to come through and if you were to ask me again, we're starting the betas we're going to learn from our customers. But my assumption is that's going to be much more prevalent within our customer base. >> That's interesting because the quality of that text presumably is going to be much much better, at least today, than what you get with speech. We know well with the transcriptions we do of theCUBE interviews. Okay so that's it. ML and MLP I thought I heard 4.0, right? >> Yeah so we've been pushing really hard on the machine learning tool kit for multiple versions. That team is heavily invested in working with customers to figure out what exactly do they want to do. And as we think about the highly skilled users, our customers that do have data scientists, that do have people that understand the math to go in and say no we need to customize or tweak the algorithm to better fit our business, how do we allow them essentially the bare metal access to the technology. >> We're going to leave dev cloud for Skip if that's okay. I want to talk about industrial IOT. You said something just now that was really important and I want to just take a moment to explain to the audience. What we've seen from IOT, particularly from IT suppliers, is a top down approach. We're going to take our IT framework and put it at the edge. >> Yes. >> And that's not going to work. IOT, industrial IOT, these process engineers, it's going to be a bottoms up approach and it's going to be standard set by OT not IT. >> John: Yes. >> Splunk's advantage is you've got the data. You're sort of agnostic to everything else. Wherever the data is, we're going to have that data so to me your advantage with industrial IOT is you're coming at it from a bottoms up approach as you just described and you should be able to plug into the IOT standards. Now having said that, a lot of data is still analog but that's okay you're pulling machine data. 
You don't really have tight relationships with the IOT guys, but that's okay, you've got a growing ecosystem. >> We're working on it. >> But talk about industrial IOT and we'll get into some of the challenges. >> Yeah, so interestingly, we first announced the Industrial Asset Intelligence product at the Hannover Messe show in Germany, which is this massive — like 300,000 people, it's a city, it's amazing. >> I've been. Hannover. One hotel, huge show, 400,000 people. >> Lots of schnitzel. (laughs) I was just there. And the interesting thing is, it's the first time I'd been at a show, really, in years where people ... You know, if you go to an IT or security show, they're like, oh, we know Splunk, we love Splunk, what's in the next version? It was the first time we were having a lot of people come up to us saying, yeah, I'm a process engineer in an industrial plant, what's Splunk? Which is a great opportunity. And as you explain the technology to them, their mindset is very different, in the sense that they think of very custom connectors for each piece. They have an almost bespoke, matched-up notion of a sensor to a piece of equipment. So for example they'll say, oh, do you have a connector for — and again, I don't have the machine numbers, but like the Siemens 123 machine. And I'll be like, well, as long as it's textual, structured to semi-structured data, ideally with a time stamp, we can ingest and correlate that. Okay, but then what about the Siemens ABC machine? Well, the idea, the notion, is that we don't care where the source is as long as there's a sensor sending the data in a format that we can consume. And if you think back to the beginning of the data stream processor demo that Devani and Eric gave yesterday, that showed the history over time, the purple boxes that were built — we can now ingest data via multiple inputs and multiple ways into Splunk. And that hopefully enables the IOT ecosystems and the machine manufacturers, but more importantly the sensor manufacturers, because it feels like, in my understanding of the market, we're still at a point of a lot of folks getting those sensors instrumented. But once it's there and essentially the faucet's turned on, we can pull it all in and we can treat it and ingest it just as easily as we can data from AWS Kinesis or Apache access logs or MySQL logs. >> Yeah, and so instrumenting the windmill, to use the metaphor, is not your job. Connectivity to the windmill is not your job, but once those steps have been taken — and the business takes those steps because there's a business case — once that's done, the data starts flowing and that's where you come in. >> And there's a tremendous amount of incentive in the industry right now to do that level of instrumentation and connectivity. So with that notion of instrument, connect, then do the analytics, we're sitting there well positioned, once all those things are in place, to be one of the top providers for those analytics. >> John, I want to ask you something. Stu and I were talking about this at our kickoff and I just want to clarify it. >> Doug Merritt said that he didn't like the term unstructured data. I think that's what he said yesterday — it's just data. My question is, how do you guys deal with structured data? Because there is structured data. Bringing transaction processing data and analytics data together for whatever reason, whether it's fraud detection, to give the buyer an offer before you lose them, better customer service. How do you handle that kind of structured data that lives in IBM mainframes or whatever — USS mainframes in the case of Carnival. >> Again, we want to be able to access data that lives everywhere. And so we've been working with partners for years to pull data off mainframes. Again, the traditional in-outs aren't necessarily there, but there are incentives in the market. We work with our ecosystem to pull that data, to give it to us in a format that makes sense. We've long been able to connect to traditional relational databases, so I think when people think of structured data, they think about, oh, it's sitting in a relational database somewhere, in Oracle or MySQL or SQL Server. Again, we can connect to that data, and that data is important to enhance things, particularly for the business user. Because the log might say, okay, product ID 12345, but the business user needs to know what product ID 12345 is and has a lookup table. Pull it in, and now all of a sudden you're creating information that's meaningful to you. But structure, again, there's fluidity there. Coming from my background, a JSON object is structured. The same way Theresa Vu, in the demo today, unfurled in the dev cloud what a JSON object looks like — there's structure there. You have key-value pairs. There's structure to key-value pairs. So all of those things — that's why I think, to Doug's point, there's fluidity there. It is definitely a continuum, and we want to be able to add value and play at all ends of that continuum. >> And the key is, your philosophy is to curate that data in the moment when you need it and then put whatever schema you want on it at that time. >> Absolutely. Going back to this bottoms-up approach and how we approach it differently from basically everyone else in the industry: you pull it in, we take the data as is, we're not transforming or changing or breaking the data or trying to put it into a structure anywhere. But when you ask it a question, we will apply a structure to give you the answer. If that data changes when you ask that question again, it's okay, it doesn't break the question. That's the magic. >> Sounds like magic. 16,000 customers will tell you that it actually works. So John, thanks so much for coming to theCUBE, it was great to see you again. >> Thanks so much for having me. >> You're welcome. Alright, keep it right there, everybody. Stu and I will be back. You're watching theCUBE from Splunk .conf18, #splunkconf18. We'll be right back. (electronic drums)
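The "structure comes out of the data" idea, plus the product-ID lookup Jon mentions, can be shown in miniature: fields are extracted from the raw event at query time and joined against a lookup table. This is plain illustrative Python — not SPL and not how Splunk is implemented — and the events, lookup values, and function are invented.

```python
# Miniature schema-on-read plus lookup enrichment: fields are pulled out of
# the raw event when the question is asked, then joined against a lookup
# table so "product_id=12345" becomes something a business user recognizes.
# Illustrative only; not SPL and not Splunk internals.
import re

raw_events = [
    'ts=2018-10-02T12:00:01 action=view product_id=12345 user=alice',
    'ts=2018-10-02T12:00:09 action=buy product_id=12345 user=alice',
]
product_lookup = {"12345": "Noise-Cancelling Headphones"}

def search(events, **filters):
    for raw in events:
        fields = dict(re.findall(r'(\w+)=(\S+)', raw))   # schema applied at read time
        fields["product_name"] = product_lookup.get(fields.get("product_id"), "unknown")
        if all(fields.get(k) == v for k, v in filters.items()):
            yield fields

for hit in search(raw_events, action="buy"):
    print(hit["user"], "bought", hit["product_name"])
```

Because the schema is applied at read time, changing what the events look like later does not break the saved question — which is the point Jon closes on above.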

Published Date : Oct 3 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Doug Merritt | PERSON | 0.99+
Dave | PERSON | 0.99+
John | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Orlando | LOCATION | 0.99+
John Rooney | PERSON | 0.99+
90% | QUANTITY | 0.99+
Jon Rooney | PERSON | 0.99+
Germany | LOCATION | 0.99+
2015 | DATE | 0.99+
IBM | ORGANIZATION | 0.99+
Doug | PERSON | 0.99+
Excel | TITLE | 0.99+
Splunk | ORGANIZATION | 0.99+
10% | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Stu Miniman | PERSON | 0.99+
Orlando, Florida | LOCATION | 0.99+
yesterday | DATE | 0.99+
Stu | PERSON | 0.99+
Theresa Vu | PERSON | 0.99+
2000 times | QUANTITY | 0.99+
BMW | ORGANIZATION | 0.99+
400,000 people | QUANTITY | 0.99+
each piece | QUANTITY | 0.99+
today | DATE | 0.99+
Hannover | LOCATION | 0.99+
Eric | PERSON | 0.99+
three | QUANTITY | 0.99+
Devani | PERSON | 0.99+
one index | QUANTITY | 0.99+
four years | QUANTITY | 0.99+
16,000 customers | QUANTITY | 0.99+
two | QUANTITY | 0.99+
300,000 | QUANTITY | 0.98+
first time | QUANTITY | 0.98+
one | QUANTITY | 0.98+
One hotel | QUANTITY | 0.97+
Siemens | ORGANIZATION | 0.97+
SQL Server | TITLE | 0.97+
30-40 years ago | DATE | 0.96+
five years ago | DATE | 0.96+
both | QUANTITY | 0.96+
One | QUANTITY | 0.95+
Linux | TITLE | 0.95+
Hannover Messe | EVENT | 0.95+
one hot water heater | QUANTITY | 0.94+
first | QUANTITY | 0.94+
Splunk | TITLE | 0.94+
Kafka | TITLE | 0.94+
Alexa | TITLE | 0.92+
three orders | QUANTITY | 0.92+
Oracle | ORGANIZATION | 0.92+
day one | QUANTITY | 0.91+
.conf | OTHER | 0.87+
#splunkconf18 | EVENT | 0.86+
MySequel | TITLE | 0.86+
third big feature | QUANTITY | 0.85+

Dan Bates, Impact PPA | Coin Agenda Caribbean 2018


 

>> Announcer: From San Juan, Puerto Rico, it's theCUBE, covering Coin Agenda. Brought to you by SiliconANGLE. >> Hello everyone, welcome to this special CUBE coverage. We're here for exclusive conversations at Coin Agenda. We just had Blockchain Unbound. Puerto Rico is where we're at, and we're covering all the trends and latest news and analysis in cryptocurrency, blockchain, decentralized internet. My next guest is Dan Bates, founder and president of Impact PPA. Dan, great to have you on. >> Thanks, John, glad to be here. >> So one of the trends I'm noticing is a couple things: flight to quality on the ICO side, first of all. A lot of the deadwood's being pushed aside by the community. Still some stuff out there that, you know, might not have a business model, but good entrepreneurs doing it. Then you start to see real use cases emerging. I interviewed GrainChain — they're disrupting how produce and grains move between suppliers and buyers — and other impact, mission-driven stuff, like how do you solve the energy crisis, right? We're in Puerto Rico, right? The grid's half alive, everyone knows that. You're doing something really compelling. Take a minute to explain what you guys are doing. You have a token up and running. What are you guys doing, what's your value proposition? >> So what we do is we've created a system by which you can now have renewable energy delivered to developing nations, and we've taken the intermediary out of the equation, whereby the World Bank historically would take years to fund a project, if they would do that at all. Their big trepidation was, how do you get paid at the end of the day? So what we've done is we've come up with a solution that allows for generation of energy using renewables and tracks it all the way through to a payment rail, if you will, so that a user can now prepay for energy on their mobile device. It's M-Pesa — if you know what M-Pesa is, in Africa 70% of the transactions are done on a mobile device. We are decentralized M-Pesa for energy. >> Okay, mobile app, I get that. Everyone can think about Uber and all the benefits Airbnb brings — you just have stuff happen. Talk about what's under the hood. What's actually disruptive about what you guys are doing? Give some specifics, because you're tying into the grid and generating energy using the blockchain, you have a token. How does it all work? >> Okay, so what we do is, in the ability to fund the project — getting the World Bank, getting USAID out of the equation — what we now allow for is the community. Typically we are very liberal, or tend to skew liberal, right? We actually believe that climate change is real, and we want to help support these economies and these new types of, you know, betterment of the planet, right? But we don't expect that to be a philanthropic effort. So people will buy the Impact token, which will fund projects. And what we're doing right now, it's an ERC-20 of what we call a Gen Credit. Not a secondary token — it's a credit that allows people to access the ledger. So a guy will go down to the store, just like he does right now when he charges his phone with more minutes or with a data plan. It's fiat into a plan, a digital currency. We do the same thing. It's now fiat into a Gen Credit, we call it, that allows them to transact with the blockchain. So we get identity, we get reputation, we get trust and honesty about those transactions using the blockchain. >> Okay, so where does the energy come from? Because the energy sources now are interesting. You're seeing people do great, amazing things: solar panels, wind farming.
You see in Asia, on top of the apartment buildings, there's a lot of wind. >> Yeah, generally. >> But how do they move that power into the market? >> All right, so right now we're using wind and solar, rooftop or micro grid. For instance, we just finished a project in Haiti doing 150 kilowatts for a town called Les Awa. They haven't had power in two years, after Hurricane Matthew — prior to Hurricane Maria coming through Puerto Rico. Puerto Rico's in a similar situation, right? So we created this micro grid using their existing infrastructure of transmission and distribution, then we put smart meters on the homes. That smart meter connects to the blockchain, and now people can have power at their homes, pay-as-you-go. >> Awesome. So what have been some of the hurdles you guys have had? Obviously, to me it's a no-brainer — energy being tokenized makes such sense, why wouldn't you want to do that? Obviously there are regulatory issues that are incumbent, legacy dogma, or specific legislation, paperwork, whatnot. Where are the efficiencies being automated away with the blockchain, and what are some of the hurdles that you guys have gone through to get to this point? >> All right, so we work in the emerging economies of the world. Oftentimes there is not the kind of regulation that we have in the US or in the developed world, like the EU, something like that. So when we go out to remote places — like, you know, in Kenya and Ethiopia, Latin America, wherever it might be — we don't have some of the institutions that you would have if you were trying to set this up in Palo Alto. I don't have to worry about PG&E and an interconnect and an offtake and all that. So what we do is we'll go out and set up a micro grid. We're giving power to people who may have never had it before, so all those regulatory layers are stripped away. They're grateful for it. >> But can they pay for it? >> Yes. They can't afford to go buy a solar panel and a wind turbine and batteries and inverters, nor would they know how to hook it all up. >> Yeah. >> But they know that if they can buy power on a cellphone, like they're already doing for other goods and services, now we've got a game-changer. >> Dan, talk about the token economics. I get the payment rail piece — mobile app, no-brainer, I get that, check, easy to use. Now, as a buyer of energy there's a token involved there. Where's the other side of the marketplace? How does that token economics work? Can you just take us through a use case and walk us through that example? >> Sure. So as I said, the Impact token is our base token. That will be the value token that purchasers buy in order to fund projects. Once we go beyond that, we now have what we call a Gen Credit. It may not be a token in the traditional sense, or a coin — it's a credit that allows us to transact with the ledger. That way we can know about these people. One of the greatest opportunities that we feel we have in the marketplace is identity and reputation. You have a billion two people who don't have a connection. What if we could learn about those billion two and understand how they use power and where they use power? >> So Gen Credits are kind of the off-chain management that you're doing — you write to the ledger for, in turn, utility access, right, for that transaction. I've got to ask you about things like spoofing. Why can't I just take your energy? This is where the tokens become interesting, because, I mean, it should solve the spoofing problem. >> Well, right, and you know, energy needs to be passed down copper — it's got to go on a wire.
>> You saw our platform that we're building, we're tokenizing our media business. >> Amazing. >> You liked it, it was good. Thanks for the plug. We were talking last night about, you know, the generational gap between us and our kids. You have your son here, your son's working with you. My son Alex is working with us, we have a young team as well. I want you to talk about this as someone who's so experienced in the business. You've done a lot of ventures, from, you know, film to entertainment to technology. Us older veterans, that's the polite way to say it, have seen the movie before, they've seen the waves. This is a huge wave. We can surf a few of them, hang ten on our boards, but this wave is really going to be powered and led by the younger generation. What are your thoughts? Share your vision of the role that the younger generation has to take here, and what makes them capable, in your mind. >> Okay, so I'm going to answer that question two ways. First of all, I'm so enamored with what the younger generation is trying to do with this disruption: let's change the existing paradigm and make something better. That's what blockchain allows for, for all sorts of industries, goods, and services, right? It's going to be amazing what these guys come up with. That's one of the things I love about doing this, right? I'm an old guy and I get to hang around these young people, it makes me feel young again. >> Yeah. >> But the other thing that we have, and I think you share it as well, is that we have experience to offer these young guys, right? It's not like we're going to go out to a market that we don't know about and try and explore it for success. You know, I've been in the renewables business for ten years, delivering projects to 35 countries. I've got my boots on the ground, I've got my hands dirty doing this for 10 years now. And I think the other part of building this project and making it successful is the team that we've put together behind it.
We have an advisor who advises presidents, Dr. Michael Dorsey. That's really important, that's valuable, that he understands the global marketplace the way he does. The other one is Vinay Gupta, who has been in blockchain since the '90s and has always wanted to work in the developing world with blockchain, distributed ledger technology. That's really important. >> I want to just double down and amplify that point. This is not exclusively a young man's game. This market is attracting alpha entrepreneurs, older veterans, because, as you said earlier, it disrupts every vertical. >> That's right. >> So experience and mentorship, bringing people together that can help. >> That's right. >> Celebrate the disruption. >> That's right. >> Driven by the young guys. Cool, I love that. It's not like Zuckerberg, who made this one comment, oh, if you're not under the age of 30, then you don't know tech. He got his ass handed to him on that. But in this case, this market is open and willing to learn, and the disruption's the mission. >> That's right. >> And this team matters, your experience, the makeup of your board, the makeup of your advisors. There's a role for everybody. Look, experience is capital, right? It is its own virtual currency. Having been in all these countries, having worked with presidents, having worked since the '90s, that is valuable. It's intangible, but it is valuable. Dan, I think we just invented a new category in the ICO category: advisor tokens. (laughter) A cottage industry of tokens, anyway. But imagine if you could actually measure an advisor. >> Yeah. >> The quality of an advisor and the roles that they play. >> Absolutely. >> As a token. That's coming up next in our next ICO. Hey, I really appreciate what you're doing. I love how you work with your son, a father-son team. You recognize how the generations can shift together. Love it, love your mission. Thank you. Thanks for sharing the news. Coverage here in Puerto Rico, we've been on the island all week, getting the best stories, the best people, and sharing them with you. We're open content, that's theCUBE, doing our part here at Coin Agenda for one day. We're not going to be here tomorrow, we go to Vegas. Just came back from watching Blockchain Unbound, great stuff. >> John, let me give you the URL if you don't mind. >> No problem, please. >> If you want to learn more about us, it's impactppa.com. >> Great job. Impactppa.com. This is theCUBE, live coverage here in Puerto Rico. More after this short break.

Published Date : Mar 17 2018
