Vipul Nagrath, ADP, Grace Hopper Celebration of Women in Computing 2017


 

>> Announcer: Live from Orlando, Florida it's theCUBE, covering Grace Hopper's Celebration of Women in Computing, brought to you by SiliconANGLE Media. >> Welcome back to theCUBE's coverage of the Grace Hopper Conference, here in Orlando, Florida. I'm your host, Rebecca Knight. We're joined by Vipul Nagrath. He is the Global CIO at ADP, a provider of human resources management software in New York. Welcome, Vipul. >> Thank you. >> It's great to have you on the show. So, before the cameras were rolling, you were talking about how this is your first ever Grace Hopper. How do you find things? >> I think this is exciting. Just the sheer numbers: 18,000 attendees, all the various different companies that are represented over here, the talent. I'm here with a sizeable team, there's about 30 of us. Many of my colleagues have been walking the floor and they've been just thoroughly impressed with the talent that they're meeting and the people that they're talking to. We're here actively recruiting. We've actually been doing on-site interviews. So, we're looking for top talent and if we can find it right here at the show, we'll do it. >> So, there are a lot of tech conferences that you attend, but what is it about Grace Hopper in particular? >> Well, this one specifically, one of our initiatives is around diversity and inclusion. So, what better place to come than Grace Hopper if you want to talk about diversity and inclusion? In addition to that, is we were talking earlier, right? The marketplace that engineering and tech and computer science is going to go into, the need is actually only increasing. Everything is run by software today or very shortly will be. In the end, every company's becoming a software company and offering some other services with it. We're all headed that way. Yet, the talent pool's actually getting tighter and smaller, yet more jobs are going to be created in that industry. So, I think it's a phenomenal and wonderful opportunity, and specifically from a Grace Hopper perspective and the Anita Borg perspective, is get more women involved in this. The pie is going to get bigger, and I think women have an opportunity to gain more of that share of that pie. >> So, is ADP doing anything to actively engage more women earlier in their career trajectories to get them interested in this area? >> There are a number of multiple- Sorry, there's a multiple set of initiatives that we have. In fact, I was joined here at this conference with our Chief Diversity Officer. She's also responsible for corporate social responsibility. So, diversity and inclusion is really huge for her, not just for us at ADP, but she actually has a larger message for the entire industry. So, she's pushing that agenda. So, there are actually many different things that we're working on. >> And as a human resources company that message can get through. >> Exactly. >> So, talk to me. We always hear about the business case for diversity and inclusion. How do you view it? >> How do I view it, is I start with, again, top talent, and then it's thought diversity. When you bring multiple disciplines in together, bring people with multiple backgrounds in together, even a different point of view, you realize, or I think you open up and realize that you might have had some blinders on some things. Now you start really getting rid of those blinders. Instead of them being blinders, they turn into opportunities. 
I think if you have too many people thinking exactly the same way, doing exactly the same thing, you fall into a not-so-good method, right? You fall into a not-so-good idea of just really channeling the same idea over and over and over again. >> The groupthink that is a big problem in so many companies. So, how do diverse teams work together in your experience? You talked about seeing wider perspectives and different kinds of ideas and insights that you wouldn't necessarily get if it's just a bunch of similar people from similar backgrounds, similar races, all one gender, sitting in a room together. How do these teams work together in your experience? >> Well, what I believe in is you got to put these teams together and you got to empower them. Absolutely, there's a stated goal. There is an outcome. There is a result we have to achieve. Give 'em the outcome, give 'em the goal, give 'em a loose framework, and then give 'em guiding principles. Then, after that: team, go ahead. You're empowered to do the right thing. But, these goals will be aggressive, right? We may want to make something two orders of magnitude faster. That's no small task. We may want to expand our capabilities so that we can handle six times the load that we handled today. That's no small task. So, they're very large goals to achieve, but they just have to go out and do them. If you leave that creativity to the team, and you let everyone bring in their different viewpoints, some that have expertise today, and some that don't necessarily have expertise in it but they're really good programmers or they're really good software developers. So, they can learn from those folks that have the expertise, then develop a new solution that's more powerful than the one that exists today. >> What are some of the most exciting things you're working on at ADP right now? >> Well, me personally, we're going through a huge transformation in my group within ADP. That transformation is really just implementing more of what I just talked about, is these small, nimble teams that are multidisciplinary, and they're given, again, guiding principles and goals, and they go out and be creative and be innovative, and figure out how to do this. >> So, what can your customers expect in the pipeline though, in terms of products coming out of ADP, and helping them manage their human capital? >> Sure, well actually, we have a lot of exciting, new, and innovative products coming out of our company, which in the coming months, in the coming years, will be released and put into production. But, basically, they should expect a better way to work. 'Cause that is our job. We're really out there to make work better. >> Rebecca: And more inclusive, too, and more, okay. >> All those things actually just go into being and making work better. Inclusion is in there, diversity is in there, creativity is in there, innovation is in there, stability is in there. But, all of that makes work better. >> Is there more pressure on a company like ADP to walk the walk? Because, you are a human capital management company. That is your bread and butter. >> I believe there is, sure. Just naturally, yes, there is. >> So, what is your advice to companies out there? I know you said your Chief Diversity Officer had a wider message to companies about the importance of diversity and inclusive teams. What would you say from your perspective as CIO? 
>> From my perspective, again, I do believe that diversity, that inclusion, makes for a more powerful team, makes for a wider understanding of what we're actually trying to do. So, I would just encourage others to do that, too, and not be very narrow-minded. >> Great. Well, Vipul, it has been so much fun talking to you. Thanks for coming on theCUBE. >> Thank you. >> We will have more from the Orange County Convention Center, Grace Hopper, just after this. (light, electronic music)

Published Date : Oct 6 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Rebecca | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Vipul | PERSON | 0.99+
New York | LOCATION | 0.99+
Anita Borg | PERSON | 0.99+
Orlando, Florida | LOCATION | 0.99+
ADP | ORGANIZATION | 0.99+
18,000 attendees | QUANTITY | 0.99+
six times | QUANTITY | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
first | QUANTITY | 0.98+
Grace Hopper | PERSON | 0.98+
Vipul Nagrath | PERSON | 0.97+
one | QUANTITY | 0.97+
today | DATE | 0.97+
two orders | QUANTITY | 0.96+
Grace Hopper Conference | EVENT | 0.94+
theCUBE | ORGANIZATION | 0.94+
about 30 | QUANTITY | 0.94+
Orange County Convention Center | LOCATION | 0.93+
one gender | QUANTITY | 0.84+
Celebration | EVENT | 0.81+
Celebration of Women in Computing | EVENT | 0.79+
Women of Computing 2017 | EVENT | 0.79+
Grace | PERSON | 0.69+
Hopper | ORGANIZATION | 0.48+
of | EVENT | 0.45+

Rachel Faber Tobac, Course Hero, Grace Hopper Celebration of Women in Computing 2017


 

>> Announcer: Live from Orlando, Florida. It's the CUBE. Covering Grace Hopper Celebration of Women in Computing. Brought to you by Silicon Angle Media. >> Welcome back everybody. Jeff Frick here with the Cube. We are winding down day three of the Grace Hopper Celebration of Women in Computing in Orlando. It's 18,000, mainly women, a couple of us men hangin' out. It's been a phenomenal event again. It always amazes me to run into first timers that have never been to the Grace Hopper event. It's a must do if you're in this business and I strongly encourage you to sign up quickly 'cause I think it sells out in about 15 minutes, like a good rock concert. But we're excited to have our next guest. She's Rachel Faber Tobac, UX Research at Course Hero. Rachel, great to see you. >> Thank you so much for having me on. >> Absolutely. So, Course Hero. Give people kind of an overview of what Course Hero is all about. >> Yup. So we are an online learning platform and we help about 200 million students and educators master their classes every year. So we have all the notes, >> 200 million. >> Yes, 200 million! We have all the notes, study guides, resources, anything a student would need to succeed in their classes. And then anything an educator would need to prepare for their classes or connect with their students. >> And what ages of students? What kind of grades? >> They're usually in college, but sometimes we help high schoolers, like AP students. >> Okay. >> Yeah. >> But that's not why you're here. You want to talk about hacking. So you are, what you call a "white hat hacker". >> White hat. >> So for people that aren't familiar with the white hat, >> Yeah. >> We all know about the black hat conference. What is a white hat hacker. >> So a "white hat hacker" is somebody >> Sounds hard to say three times fast. >> I know, it's a tongue twister. A white hat hacker is somebody who is a hacker, but they're doing it to help people. They're trying to make sure that information is kept safer rather than kind of letting it all out on the internet. >> Right, right. Like the old secret shoppers that we used to have back in the pre-internet days. >> Exactly. Exactly. >> So how did you get into that? >> It's a very non-linear story. Are you ready for it? >> Yeah. >> So I started my career as a special education teacher. And I was working with students with special needs. And I wanted to help more people. So, I ended up joining Course Hero. And I was able to help more people at scale, which was awesome. But I was interested in kind of more of the technical side, but I wasn't technical. So my husband went to Defcon. 'cause he's a cyber security researcher. And he calls me at Defcon about three years ago, and he's like, Rach, you have to get over here. I'm like, I'm not really technical. It's all going to go over my head. Why would I come? He's like, you know how you always call companies to try and get our bills lowered? Like calling Comcast. Well they have this competition where they put people in a glass booth and they try and have them do that, but it's hacking companies. You have to get over here and try it. So I bought a ticket to Vegas that night and I ended up doing the white hat hacker competition called The Social Engineering Capture the Flag and I ended up winning second, twice in a row as a newb. So, insane. >> So you're hacking, if I get this right, not via kind of hardcore command line assault. You're using other tools. 
So like, what are some of the tools or vulnerabilities that people would never think about? >> So the biggest tool that I use is actually Instagram, which is really scary. 60% of the information that I need to hack a company, I find on Instagram via geolocation. So people are taking pictures of their computers, their work stations. I can get their browser, their version information and then I can help infiltrate that company by calling them over the phone. It's called vishing. So I'll call them and try and get them to go to a malicious link over the phone and if I can do that, I can own their company, by kind of presenting as an insider and getting in that way. (chuckling) It's terrifying. >> So we know phishing right? I keep wanting to get the million dollars from the guy in Africa that keeps offering it to me. >> (snickers) Right. >> I don't know whether to bite on that or. >> Don't click the link. >> Don't click the link. >> No. >> But that's interesting. So people taking selfies in the office and you can just get a piece of the browser data and the background of that information. >> Yep. >> And that gives you what you need to do. >> Yeah, so I'll find a phone number from somebody. Maybe they take a picture of their business card, right? I'll call that number. Test it to see if it works. And then if it does, I'll call them in that glass booth in front of 400 people and attempt to get them to go to malicious links over the phone to own their company or I can try and get more information about their work station, so we could, quote unquote, tailor an exploit for their software. >> Right. Right. >> We're not actually doing this, right? We're white hat hackers. >> Right. >> If we were the bad guys. >> You'd try to expose the vulnerability. >> Right. The risk. >> And what is your best ruse to get 'em to. Who are you representing yourself as? >> Yeah, so. The representation thing is called pre-texting. It's who you're pretending to be. If you've ever watched like, Catch Me If You Can. >> Right. Right. >> With Frank Abagnale Jr. So for me, the thing that works the best is a low status pretext. So as a woman, I would kind of use what we understand about society to kind of exploit that. So you know, right now if I'm a woman and I call you and I'm like, I don't know how to troubleshoot your website. I'm so confused. I have to give a talk, it's in five minutes. Can you just try my link and see if it works on your end? (chuckling) >> You know? Right? You know, you believe that. >> That's brutal. >> Because there's things about our society that help you understand and believe what I'm trying to say. >> Right, right. >> Right? >> That's crazy and so. >> Yeah. >> Do you get, do you make money white hat hacking for companies? >> So. >> Do they pay you to do this or? Or is it like, part of the service or? >> It didn't start that way. >> Right. >> I started off just doing the Social Engineering Capture the Flag, the SECTF at Defcon. And I've done that two years in a row, but recently, my husband, Evan and I, co-founded a company, Social Proof Security. So we work with companies to train them about how social media can impact them from a social engineering risk perspective. >> Right. >> And so we can come in and help them and train them and understand, you know, via a webinar, 10 minute talk or we can do a deep dive and have them actually step into the shoes of a hacker and try it out themselves. 
Well I just thought the only danger was they know I'm here so they're going to go steal my bike out of my house, 'cause that's on the West Coast. I'm just curious and you may not have a perspective. >> Yeah. >> 'Cause you have a niche that you execute, but between say, you know kind of what you're doing, social engineering. >> Yeah. >> You know, front door. >> God, on the telephone. Versus kind of more traditional phishing, you know, please click here. Million dollars if you'll click here versus, you know, what I would think was more hardcore command line. People are really goin' in. I mean do you have any sense for what kind of the distribution of that is, in terms of what people are going after? >> Right, we don't know exactly because usually that information's pretty confidential, >> Sure. when a hack happens. But we guess that about 90% of infiltrations start with either a phishing email or a vishing call. So they're trying to gain information so they can tailor their exploits for your specific machine. And then they'll go in and they'll do that like actual, you know, >> Right. >> technical hacking. >> Right. >> But, I mean, if I'm vishing you right and I'm talking to you over the phone and I get you to go to a malicious link, I can just kind of bypass every security protocol you've set up. I don't even need to be a technical hacker, right? I just got into your computer because. >> 'Cause you're in 'Cause I'm in now, yup. >> The other kind of low profile way I used to hear about is, you know, you go after the person that's doin' the company picnic. You know, Wordpress site. >> Yes. >> You're not thinking that that's an entry point in. You know, kind of these less obvious access points. >> Right. That's something that I talk about a lot actually is sometimes we go after mundane information. Something like, what pest service provider you use? Or what janitorial service you use? We're not even going to look for like, software on your machine. We might start with a softer target. So if I know what pest extermination provider you use, I can look them up on LinkedIn. See if they've tagged themselves in pictures in your office and now I can understand how do they work with you, what do their visitor badges look like. And then emulate all of that for an onsite attack. Something like, you know, really soft, right? 
You don't need to take that picture in front of your computer. Because if you do, I'm going to see that little bottom line at the bottom and I'm going to see exactly the browser version, OS and everything like that. Now I'm able to exploit you with that information. So step away when you take your pictures. And if you do happen to take a picture on your computer. I know you're looking at computer nervously. >> I know, I'm like, don't turn my computer on to the cameras. >> Don't look at it! >> You're scarin' me Rachel. >> If you do take a picture of that. Then you don't want let someone authenticate with that information. So let's say I'm calling you and I'm like, hey, I'm with Google Chrome. I know that you use Google Chrome for your service provider. Has your network been slow recently? Everyone's network's been slow recently, right? >> Right. Right. >> So of course you're going to say yes. Don't let someone authenticate with that info. Think to yourself. Oh wait, I posted a picture of my work station recently. I'm not going to let them authenticate and I'm going to hang up. >> Interesting. All right Rachel. Well, I think the opportunity in learning is one thing. The opportunity in this other field is infinite. >> Yeah. >> So thanks for sharing a couple of tips. >> Yes. >> And um. >> Thank you for having me. >> Hopefully we'll keep you on the good side. We won't let you go to the dark side. >> I won't. I promise. >> All right. >> Rachel Faber Tobac and I'm Jeff Frick. You're watchin the Cube from Grace Hopper Celebration Women in Computing. Thanks for watching. (techno music)

Published Date : Oct 6 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Comcast | ORGANIZATION | 0.99+
Rachel | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Defcon | ORGANIZATION | 0.99+
Africa | LOCATION | 0.99+
Rachel Faber Tobac | PERSON | 0.99+
60% | QUANTITY | 0.99+
Evan | PERSON | 0.99+
10 minute | QUANTITY | 0.99+
Course Hero | ORGANIZATION | 0.99+
400 people | QUANTITY | 0.99+
two years | QUANTITY | 0.99+
Vegas | LOCATION | 0.99+
Orlando, Florida | LOCATION | 0.99+
Silicon Angle Media | ORGANIZATION | 0.99+
Frank Abagnale Jr. | PERSON | 0.99+
million dollars | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
second | QUANTITY | 0.99+
Fei-Fei Li | PERSON | 0.99+
Million dollars | QUANTITY | 0.99+
Social Proof Security | ORGANIZATION | 0.99+
200 million | QUANTITY | 0.99+
Both | QUANTITY | 0.98+
five minutes | QUANTITY | 0.98+
18,000 | QUANTITY | 0.98+
Grace Hopper | EVENT | 0.97+
SECTF | ORGANIZATION | 0.97+
Rach | PERSON | 0.97+
about 15 minutes | QUANTITY | 0.97+
about 90% | QUANTITY | 0.96+
Grace Hopper Celebration of Women in Computing | EVENT | 0.96+
day three | QUANTITY | 0.96+
first thing | QUANTITY | 0.96+
about 200 million students | QUANTITY | 0.95+
Instagram | ORGANIZATION | 0.95+
three times | QUANTITY | 0.94+
third year anniversary | QUANTITY | 0.92+
Orlando | LOCATION | 0.91+
kagillions of pictures | QUANTITY | 0.9+
one thing | QUANTITY | 0.84+
first | QUANTITY | 0.83+
Hopper | EVENT | 0.8+
three years ago | DATE | 0.78+
Linked | ORGANIZATION | 0.77+
Women in Computing | EVENT | 0.77+
Cube | ORGANIZATION | 0.76+
black hat | EVENT | 0.75+
about | DATE | 0.75+
West Coast | LOCATION | 0.75+
Google Chrome | TITLE | 0.69+
Chrome | TITLE | 0.68+
Catch Me If You | TITLE | 0.67+
Celebration of | EVENT | 0.67+
Engineering Capture the Flag | EVENT | 0.66+
twice in a row | QUANTITY | 0.64+
Wordpress | TITLE | 0.62+
that night | DATE | 0.61+
every | QUANTITY | 0.6+
in | EVENT | 0.55+
2017 | DATE | 0.54+
Social Engineering | ORGANIZATION | 0.5+
couple | QUANTITY | 0.49+
The Social | TITLE | 0.48+
#TBT | ORGANIZATION | 0.48+
Flag | TITLE | 0.47+
Cube | TITLE | 0.47+
Capture | TITLE | 0.45+
Grace | PERSON | 0.44+
Google | COMMERCIAL_ITEM | 0.34+

Joanna Parke, ThoughtWorks, Grace Hopper Celebration of Women in Computing 2017


 

>> Announcer: Live from Orlando, Florida, it's theCUBE, covering Grace Hopper Celebration of Women in Computing, brought to you by SiliconANGLE Media. (light, electronic music) >> Welcome back to theCUBE's coverage of the Grace Hopper Conference here in Orlando, Florida. I'm your host, Rebecca Knight. We're joined by Joanna Parke. She is the Group Managing Director, North America, at ThoughtWorks based in Chicago. Thanks so much for joining us, Joanna. >> Thank you, it's a pleasure to be here. >> Your company is being honored for the second year in a row as a top company for women technologists by the Anita Borg Institute. Tell our viewers what that means. >> Yeah, we're incredibly proud and super humble to be recognized again for the second year in a row. Our journey towards diversity and inclusivity really began about eight or nine years ago. It started with the top leadership of the company saying that this is a crisis in our industry, and we need to take a stand and we need to do something about it. So, it's been a long journey. It's not something that we started a couple of years ago, so there's been a lot of work by many people over the years to get us to where we are today, and we still feel that we have a long way to go. There's still a lot to do. >> So, being recognized as a top company for women technologists, it obviously means there are many women who work there. But, what else can a woman technologist looking for a job expect at ThoughtWorks? >> So, we think about, not just the aspects of diversity, which is what does the makeup of your workforce look like, but also put equal if not more importance on inclusivity. So, you can go out and you can make all sorts of efforts to hire women or minorities into your company, but if you don't have a culture and an environment in which they feel welcome and they feel like they can succeed and they can bring themselves to work, then that success won't be very lasting. So, we've focused not only on the recruiting process but also our culture, our benefits, the environment in which we work. We are a software development company and we come from a history of agile software practices, which means that we work together in a very people-oriented and collaborative way. So, in some ways we had a little bit of a head start in that, by working in that way, our culture was already built to be more team-focused and collaborative and inclusive, so that was a good advantage for us when we got started. >> So, how else do you implement these best practices of the collaboration and the inclusivity? Because, I mean, it is one thing to say that we want everyone to have a voice at the table, but it's harder to pull off. >> It is, absolutely. So, a couple things that we've done over our history, one is just starting with open conversation. We talk a lot about unconscious bias, we do education and training through the workforce, we try to encourage those uncomfortable conversations that really create breakthroughs in understanding. We look for people that are open and curious in the interview process, and we feel like if you are open to having your views about the world challenged, that's a really good sign. So, that's kind of one step. Then, I think, when bad behavior arises, which it always does, it's how you react and how you deal with it. So, making it clear to everyone that behavior that excludes or belittles others on the team is not tolerated. That's not the kind of culture that we want to build. It's an ongoing process. 
>> So, how do you call out the bad behavior, because that's hard to do, particularly if you're a junior employee. >> Yes, so we try and create a safe environment where people feel like, if I have an issue with someone on my team, particularly if it's someone more senior than me, we have a complete open-door and flat organization. So, anyone can pick up the phone and call me or our CEO or whoever they feel comfortable talking to. I think, what happens is, when that happens and people see action being taken, whether it's feedback being given or a more serious action, then it reinforces the fact that it's okay to speak up and that you are going to be heard and listened to. >> One of the underlying themes of this conference is that women technologists have a real responsibility to have a voice in this industry, and to shape how the future of software progresses. Can you talk a little bit more about that, about what you've seen and observed and also the perspective of ThoughtWorks on this issue? >> Absolutely, we all have seen the power that technology has in transforming our society, and that is only going to grow over time. It's not going away. So, it really impacts every aspect of our life, whether it's healthcare or how we interact with our family or how we go to work every day. Having a diverse set of perspectives that reflects the makeup of our society is so important. I was really impressed by Dr. Fei-Fei Li's keynote on Wednesday morning-- >> She's at Stanford. >> Yeah, Stanford and at Google right now as well. She spoke about the importance of having diverse voices in the field of artificial intelligence. She said, no other technology reflects its designers more than AI, and it is so critical that we have that diverse set of voices that are involved in shaping that technology. >> Is it almost too much though? As a woman technologist, not only do you have to be a trailblazer and put up with a lot of bias and sexism in the industry, and then you have this added responsibility. What's your advice to women in the field? Particularly the young women here who are at their first Grace Hopper. >> Absolutely, our CEO-- Sorry, our CTO, Rebecca Parsons, often says that the reason that she put up with it for so many years is because she's a geek, and because she's passionate about technology. So, when you're in those trying times, being able to connect with your passion and know that you're making a difference is so important. Because, if it's just something that you view as a job, or a way to make a living, you don't have that level of passion to get you through some of the hardships. So, I think, for me, that sense of responsibility is kind of a motivating and driving force. The good news is it will get easier over time. As we make progress in our industry, you don't feel so alone. You start to have other women and other marginalized groups around you that you can connect with and share experiences. >> What are some of the most exciting projects you're working on at ThoughtWorks? >> We really try to cover a broad landscape of technology. We think of ourselves as early adopters that can spot the trends in the industry and help bring them into the enterprise. So, we're doing some really exciting things in the machine-learning space, around predictive maintenance, understanding when machine parts are going to fail and being able to repair them ahead of time. Things like understanding customer insights through data. I think those areas are emerging and super exciting. >> Excellent. 
What are you looking for? Are you here recruiting? >> Absolutely. >> And, with a top company sticker on your booth, I'm sure that you are highly sought after. What are you looking for in a candidate? >> We for a long time have articulated our strategy in three words: attitude, aptitude, and integrity. Because we feel like if we can find a person that has a passion for learning, the ability to learn, and the right attitude about that, we can work with that, right? The world of technology is changing so fast, so even if you know the tech of today, if you don't have that passion and ability to learn, you're not going to be able to keep up. So, we really look for people in terms of those character traits and those people are the kind of people that are successful and thrive at ThoughtWorks. >> If you look at the data, it looks as though there is a looming talent shortage. Are you worried about that at ThoughtWorks? What's your-- >> Absolutely. There is a huge talent gap. It's growing by the day. We see it at our clients as well as ourselves. For me, it really comes down to the responsibility of society as well as companies to invest in upscaling our workforce. We have seen some clients take that investment and realize that the skills they needed in their workforce a few years ago look very different from what they're going to need into the future. So, we believe strongly in investing in and training and upscaling our employees. We help work with our clients to do so as well. But, I think we can't rely on the existing educational system to create all of the talent that we're going to need. It's really going to take investment, I believe, from society and from companies. >> And on the job training. >> Absolutely. There's no replacement for that, right? You can do the kind of academic and educational studies but there's no replacement for once you get into the real world and you're with people and the day to day challenges arise. >> Excellent. Well, Joanna, thanks so much for coming on. It was a real pleasure talking to you. >> Thank you, it was my pleasure. >> We will have more from the Orange County Convention Center, the Grace Hopper Celebration of Women in Computing just after this. (light, electronic music)

Published Date : Oct 6 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Rebecca Knight | PERSON | 0.99+
Joanna | PERSON | 0.99+
Joanna Parke | PERSON | 0.99+
Rebecca Parsons | PERSON | 0.99+
Chicago | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
Wednesday morning | DATE | 0.99+
ThoughtWorks | ORGANIZATION | 0.99+
second year | QUANTITY | 0.99+
Faith Ilee | PERSON | 0.99+
Orlando, Florida | LOCATION | 0.99+
Anita Borg Institute | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
Stanford | ORGANIZATION | 0.99+
theCUBE | ORGANIZATION | 0.98+
Orange County Convention Center | LOCATION | 0.98+
three words | QUANTITY | 0.98+
today | DATE | 0.97+
one step | QUANTITY | 0.97+
North America | LOCATION | 0.96+
One | QUANTITY | 0.96+
Grace Hopper Conference | EVENT | 0.95+
one thing | QUANTITY | 0.94+
Grace Hopper | EVENT | 0.94+
couple of years ago | DATE | 0.92+
Grace Hopper Celebration of Women in Computing | EVENT | 0.9+
Dr. | PERSON | 0.87+
Celebration of Women in Computing | EVENT | 0.86+
nine years ago | DATE | 0.8+
Grace Hopper | PERSON | 0.78+
about eight | DATE | 0.77+
few years ago | DATE | 0.76+
one | QUANTITY | 0.64+
couple | QUANTITY | 0.61+
2017 | DATE | 0.51+

Telle Whitney, AnitaB.org, Grace Hopper Celebration of Women in Computing 2017


 

[Techno Music] >> Narrator: Live, from Orlando, Florida it's the Cube covering Grace Hopper's celebration of women in computing. Brought to you by SiliconANGLE Media >> Hey welcome back everybody, Jeff Frick here with the Cube. We're at the Grace Hopper Celebration of women in computing 2017, 18,000 women and men here at the Orlando Convention Center it gets bigger and bigger every year and we're really excited to have our next guest, the soon-to-be looking for a new job, and former CEO but still employed by AnitaB.org, Telle Whitney, the founder of this fantastic organization and really, the force behind turning it from, as you said, an okay non-profit to really a force. >> Yes So Telle, as always, fantastic to see you. >> Oh it's great to see you, glad to welcome you back and glad to have you here. >> Yes, thank you. So, interesting times, so you're going to be stepping down at the end of the year, you've passed the baton to Brenda. So as you kind of look back, get a moment to reflect, which I guess you can't do until January, they're still working you, what an unbelievable legacy, what an unbelievable baton that you are passing on for Brenda's stewardship for the next chapter. >> Yes, I mean, I've been CEO for the last 15 years and under that time period, we've grown into a global force with impact, well over 700,000 people. We have well over 100,000 people who participated with the Grace Hopper or the Grace Hopper India. It's grown, and what's been really exciting the last few days, is hearing the stories. >> Jeff: Right, right. >> Of how, the impact that this, the AnitaB.org has had on the lives of young women but also mid-career and senior executives. It's very inspiring to me. >> It is, it's fantastic, and I think the mid-career and more senior executive part of the story isn't as well-known, and we've talked to, Work Day was here, I think they said they had 140 people I think I talked to Google, I think they had like 180. And I asked them, I said, is there any other show, besides your own, that you bring that many people to from the company for their own professional development, and growth. And there's nothing like it. >> That's true. The reason why the Grace Hopper celebration has grown as significantly as it has is because more and more organizations, companies, bring a large part of their workforce. I mean, there are some companies that have brought up to 800 people, and sometimes even 1,000. >> Jeff: Wow >> And there's a reason why, because they see the impact that the conference has on retention and advancement of the women who work for them. >> And that's really a growing and increasing important part of the conversation, >> It is. >> Is retention, and two, getting the women who maybe left to have a baby, or talk about military veterans getting back in, so there's a whole group of people kind of outside of the traditional took my four years of college, I got a CS degree, now I need a job, that are also leveraging the benefits of this conference to make that way back in to tech. So important now as autonomous vehicles are coming on board and all these other things that are going to displace a bunch of traditional jobs. The message here is, you can actually get into CS later in life and find a successful career. >> Yes, we have a real diversity of attendees. So about a third of them are students, and that's really, they're brought here by their universities because that makes a difference. We have a great group from the government. 
So there's this real effort to bring state-of-the-art technology into our government, initially spearheaded by Megan Smith but really has grown. And the government brought quite a few women. And yes, we do have re-entry people. The companies are looking for women who are very interested in getting back in the workforce. The wonder about our profession, is that they're in desperate need of talented computer scientists. And so, because of that, more and more organizations are being innovative in how they reach out to different audiences. >> And outside of you, I don't know that anyone is more enthusiastic about this conference than Megan Smith. >> Yeah (laughs) >> She is a force of nature. We saw her last year, we were fortunate to get her on the Cube this year, which was really exciting. And she just brings so much energy. We're seeing so much activity on the government side, regardless of your partisanship, of using cloud, using new technology, and that's really driving, again, more innovation, more computing, and demand for more great people. >> Yes, we're very blessed that Megan has continued to come here every year. She came back this year, she sat on the main stage, and she has really been, her message to so many of the young women is that, consider government technology as something you do, at least for a while. And I think that that's a very important message if you think about how that impacts our lives. >> Right, for the good. >> Telle: Yes. >> And that was a big part of her message, she went through a classic legal resume, and some other classic resumes where you have that chapter in your career where you do go into government and you do make a contribution to something a little bit bigger than potentially your regular job. It does strike me though, how technology and software engineering specifically is such an unbelievable vehicle in which to change the world. The traditional barriers of distribution, access to capital, the amount of funding that you used to have to have to build a company, all those things are gone now through cloud, and the internet, and now you can write software and change the world pretty easily. >> Yes. Technology has the possibility of being equal access for anybody. Open-source, anybody can start to code through open-source. There are many ways for anybody, but particularly women to get back in. But I also like to think about many of the companies here who bring their diversity, they bring their senior executives, they bring this large number of women and they create this view across the entire company of how to create a company that's impactful as well as, you know, developing the products that they are invested in. >> Jeff: Right. >> I mean you can have impact in many different ways, through companies, through non-profits, through government, through many different ways. >> Right, and not only the diversity of the people, but one of the other things we love about this show is the diversity of the companies that are here. Like you said, as government, as I look out there's industrial equipment companies, there's entertainment companies, MLB is right across from us and has been there the three days. So it's really a fantastic display of this kind of horizontal impact of technology, and then of course, as we know, it does make better business to have diversity in teams. It's not about doing just the right thing, it's actually about having better bottom-line impact and better bottom-line results. And that's been proven time and time again. 
>> Well yes, and, so what I know is that every company is a technology company. If you think about the entire banking industry, they have this huge technology workforce. Certainly classic technology companies have a lot of engineers, but insurance, and banking, and almost anything. I mean, we have a lot increasing amount of retail, Target, Best Buy, places like that. >> Right. Okay so I tried to order in a horse so you could ride off into the sunset at the end of this interview, but they wouldn't let me get it through security. >> Okay >> But before I let you go, I'd just love to get your thoughts on Brenda, and the passing of the baton. How did you find her, what are some of the things that you feel comfortable, feel good about, beyond comfortable, to give her the mantle, the baton, if you will, for the next chapter of AnitaB.org? >> I've been very blessed to lead this organization for 15 years, and this is my baby. But there is nothing more heart-warming than to be able to talk to a visionary leader like Brenda. Brenda is extraordinary. She really believes in computer science for all. She believes that all women should be at the table creating technologies. She has a vision of where she wants to take it and yes, she just started last Sunday, so we have to give her a little time. (laughs) >> You were right into the deep end right? Swim! (laughs) >> But she is just, I mean, I just feel very blessed to have Brenda in my life and I will be there in any way that she needs for me to be there to work with her. But she is going to be a great leader. >> Oh absolutely. Well Telle as always, great, and as you said, you're more busy than maybe you expected to be here, so to find a few minutes to stop by the Cube again, thank you for inviting us to be here. It is really one of our favorite places to be every year. Finally my youngest daughter turns 18 next year, so I can bring her too. And congratulations for everything you've accomplished. >> I love to be here, thank you for coming. Glad we could talk. >> Alright, she's Telle Whitney, I'm Jeff Frick, if you're looking for a highly-qualified woman in tech, she might be on the market in 2018. (Telle laughs) Give me a call, I'll set you up. Alright, you're watching the Cube, from the Grace Hopper Celebration of women in computing. Thanks for watching. (techno music)

Published Date : Oct 6 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Megan Smith | PERSON | 0.99+
2018 | DATE | 0.99+
Jeff Frick | PERSON | 0.99+
Brenda | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
EMC | ORGANIZATION | 0.99+
Rob | PERSON | 0.99+
Rob Emsley | PERSON | 0.99+
Jeff | PERSON | 0.99+
Boston | LOCATION | 0.99+
Telle Whitney | PERSON | 0.99+
2017 | DATE | 0.99+
February 2020 | DATE | 0.99+
2019 | DATE | 0.99+
Dell | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
$3 billion | QUANTITY | 0.99+
15 years | QUANTITY | 0.99+
Beth | PERSON | 0.99+
Megan | PERSON | 0.99+
last year | DATE | 0.99+
two types | QUANTITY | 0.99+
four years | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Best Buy | ORGANIZATION | 0.99+
Orlando, Florida | LOCATION | 0.99+
Telle | PERSON | 0.99+
two | QUANTITY | 0.99+
14 days | QUANTITY | 0.99+
one | QUANTITY | 0.99+
140 people | QUANTITY | 0.99+
three | QUANTITY | 0.99+
three month | QUANTITY | 0.99+
three days | QUANTITY | 0.99+
18 months | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
18 | QUANTITY | 0.99+
180 | QUANTITY | 0.99+
Dell EMC Data Protection Division | ORGANIZATION | 0.99+
Dell EMC | ORGANIZATION | 0.99+
January | DATE | 0.99+
over 157% | QUANTITY | 0.99+
next year | DATE | 0.99+
18,000 women | QUANTITY | 0.98+
today | DATE | 0.98+
this year | DATE | 0.98+
Orlando Convention Center | LOCATION | 0.98+
Target | ORGANIZATION | 0.98+
four times | QUANTITY | 0.98+
a year | QUANTITY | 0.98+
SiliconANGLE Media | ORGANIZATION | 0.98+
two integrated appliances | QUANTITY | 0.97+
both | QUANTITY | 0.97+
last Sunday | DATE | 0.97+
MLB | ORGANIZATION | 0.97+
Grace Hopper | ORGANIZATION | 0.96+
90 plus billion dollar | QUANTITY | 0.96+
first scale | QUANTITY | 0.96+
up to 800 people | QUANTITY | 0.94+
AnitaB.org | ORGANIZATION | 0.93+
400 | COMMERCIAL_ITEM | 0.93+
SiliconANGLE | ORGANIZATION | 0.91+
Massachusets | LOCATION | 0.91+
Cube | COMMERCIAL_ITEM | 0.91+
X 400 | COMMERCIAL_ITEM | 0.9+

Breaking Analysis: Google's Point of View on Confidential Computing


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data and isolating data from apps in a fenced off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology and a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show, but before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing. I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit has long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. Arm, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services, and the entire code flow aren't directly addressed by memory encryption; rather, to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches. But the Confidential Computing Consortium was established in 2019 ostensibly to accelerate the market and influence standards. 
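Before turning to who sits on that consortium, a minimal sketch can make the "data in execution" threat model concrete. The Python below is purely illustrative and is not Google's, AMD's, or any vendor's implementation; the record format and the stand-in workload function are invented for the example, and the only real dependency is the open-source cryptography package. The point it shows: encryption already covers data at rest and in transit, but the plaintext still has to exist in ordinary memory the moment a workload computes on it, and that in-memory exposure is the gap confidential computing targets.

```python
# Illustrative sketch of the "data in use" gap -- not any vendor's implementation.
# Requires the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def stand_in_workload(record: bytes) -> int:
    # Placeholder for a real workload (analytics, scoring, indexing, ...).
    return len(record)

key = AESGCM.generate_key(bit_length=256)   # in practice held in a KMS / HSM
aead = AESGCM(key)
nonce = os.urandom(12)
record = b'{"account": "12345", "balance": 10250}'  # hypothetical sensitive record

# Data at rest / in transit: only ciphertext is ever exposed.
ciphertext = aead.encrypt(nonce, record, None)

# Data in use: to compute on the record it must be decrypted into memory.
# On a conventional host that plaintext is visible to a privileged OS,
# hypervisor, or operator; in a confidential VM or enclave, the CPU keeps
# those memory pages encrypted under keys the host software cannot read.
plaintext = aead.decrypt(nonce, ciphertext, None)
print(stand_in_workload(plaintext))
```

Note that confidential computing does not remove the decrypt step; it narrows who can observe the memory holding the plaintext while it is processed.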
Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS's words, but I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with this Annapurna acquisition. It was way ahead with Arm integration and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the confidential computing consortium is Google, along with many high profile names including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic, Nelly Porter is head of product for GCP confidential computing and encryption, and Dr. Patricia Florissi is the technical director for the office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google and again security or infrastructure securities that I usually own. And we are talking about encryption and when encryption and confidential computing is a part of portfolio in additional areas that I contribute together with my team to Google and our customers is secure software supply chain. Because you need to trust your software. Is it operate in your confidential environment to have end-to-end story about if you believe that your software and your environment doing what you expect, it's my role. >> Got it. Okay. Patricia? >> Well, I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions, and a lot of successful startups as well. And we have two main goals. First, we walk side by side with some of our largest, more strategic or most strategical customers and we help them solve complex engineering technical problems. And second, we advise Google and Google Cloud engineering, product management, and technology leaders on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that both of you. Let's get into it. So Nelly, what is confidential computing? From Google's perspective, how do you define it? >> Confidential computing is a tool and it's still one of the tools in our toolbox. And confidential computing is a way how we would help our customers to complete this very interesting end-to-end lifecycle of the data. And when customers bring in the data to cloud and want to protect it as they ingest it to the cloud, they protect it at rest when they store data in the cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they running them. 
And again, because data is not brought to cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again, there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end to end protection of our customer's data when they bring the workloads and data to cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential commuting matters, because at the end of the day, it reduces more and more the customer's thresh boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way, is a natural progression that you're using encryption to secure and protect the data. In the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other beneficials, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries. It applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud, and specifically double finance where you are, a customer is actually trying to get a finance on an asset, let's say a boat or a house, and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting. And I want to understand that a little bit more but I'm going to push you a little bit on this, Nelly, if I can because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this upfront, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption and it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, with this statement, but the most importantly is we mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not again the mechanism how confidential computing trying to again, execute and protect a customer's data and why it's so critically important because what confidential computing was able to do, it's in addition to isolate our tenants in multi-tenant environments the cloud covering to offer additional stronger isolation. They called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenant that's running on the same host but also us because they don't need to worry about against threats and more malicious attempts to penetrate the environment. 
So what confidential computing helps us offer our customers is stronger isolation between tenants in this multi-tenant environment, but also, and this is incredibly important, stronger isolation of our customers, the tenants, from us. We also write code; we as software providers will also make mistakes or have zero days, sometimes introduced by us, sometimes introduced by our adversaries. What I'm trying to say is that by creating this cryptographic layer of isolation between us and our tenants, and amongst those tenants, we're providing meaningful security to our customers and eliminating some of the worries they have about running in multi-tenant spaces, or even collaborating together on very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is a threat model often missed in these narratives. Operator access: yeah, maybe I trust my cloud provider, but if I can fence off your access, even better; I'll sleep better at night. Separating the code from the data, everybody, Arm, Intel, AMD, Nvidia and others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs; maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google, and now the industry's way of dealing with confidential computing, is to ensure that three main properties are preserved. Customers don't need to change their code. They can operate on those VMs exactly as they would with normal, non-confidential VMs. We give them this opportunity to lift and shift, with no changes to their apps, while keeping very, very low latency and scaling as any cloud can, something that Google actually pioneered in confidential computing. I think we need to open up and explain how this magic is actually done, because as I said, the whole entire system has to change to be able to provide this magic. I would start with the concept of a root of trust, where we ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody has changed my code at the lowest level of the system. We introduced this in 2017; it's called Titan. It's our own dedicated ASIC, a chip that sits on every single motherboard we have, and it ensures that your low-level firmware, your actual system code, your kernel, the most privileged parts of the system, are properly configured and not changed, not tampered with. We do that for everybody, confidential computing included. But for confidential computing, what we had to change is that we bring in AMD, or, again, future silicon vendors, and we have to trust their firmware, their way of dealing with our confidential environments. That's why we have an obligation to validate integrity, not only of our software and our firmware, but also of the firmware and software of our vendors, the silicon vendors. So when we boot this machine, as you can see, we validate that the integrity of the whole system is in place, meaning nobody has touched it, nobody has changed it, nobody has modified it. But then we have the AMD secure processor, a special ASIC, again dedicated hardware, that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker in a Hadoop or Spark capability. We offer all of that.
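To make this per-VM key mechanism concrete, here is a toy Python sketch of the idea just described: a simulated secure processor generates an ephemeral key for each VM, encrypts that VM's memory pages with it, and never exposes the key itself, so anything reading the raw pages from outside sees only ciphertext. This is purely an illustration of the concept, not Google's or AMD's actual design; the class and method names are invented, and it relies on the third-party Python `cryptography` package.

```python
# Toy model of per-VM memory encryption with ephemeral, non-exportable keys.
# Purely illustrative; requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


class SecureProcessor:
    """Simulates the dedicated hardware that holds per-VM keys."""

    def __init__(self):
        self._keys = {}  # vm_id -> key; never returned to callers

    def launch_vm(self, vm_id: str) -> None:
        # The key is generated inside the "hardware" and never exported.
        self._keys[vm_id] = AESGCM.generate_key(bit_length=256)

    def write_page(self, vm_id: str, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        ct = AESGCM(self._keys[vm_id]).encrypt(nonce, plaintext, None)
        return nonce + ct  # what actually lands in physical memory

    def read_page(self, vm_id: str, stored: bytes) -> bytes:
        nonce, ct = stored[:12], stored[12:]
        return AESGCM(self._keys[vm_id]).decrypt(nonce, ct, None)


sp = SecureProcessor()
sp.launch_vm("vm-1")
page = sp.write_page("vm-1", b"customer secret")

# Inside the VM: reads go through the memory path and decrypt cleanly.
assert sp.read_page("vm-1", page) == b"customer secret"

# A hypervisor or operator reading the same physical page sees only
# ciphertext, and there is no API that returns the key itself.
print(page.hex())
```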
And those keys are not available to us. They're the best keys in the encryption space, because when we talk about encryption, the first question I receive all the time is: where's the key, and who has access to the key? Because if you have access to the key, then it doesn't matter whether the data is encrypted or not. What makes confidential computing such revolutionary technology is that we, the cloud providers, don't have access to the keys. They sit in the hardware, wired to the memory controller. It means that when a hypervisor, which also knows about these wonderful capabilities, says it needs access to the memory that a particular VM is using, it cannot decrypt the data; it doesn't have access to the key, because those keys are random, ephemeral, per VM and, most importantly, held in hardware and not exportable. So now you have this very interesting property that neither other customers nor the cloud provider can get access to your memory. And again, as you can see, our customers don't need to change their applications; their VMs run exactly as they should, and inside the VM you see your memory in the clear, it's not encrypted for you. But if, God forbid, somebody tries to read it from outside of my confidential box, no, they will not be able to do it; they'll see only ciphertext. That's exactly what this combination of multiple hardware pieces and software pieces has to achieve. The OS is also modified, and it's modified in such a way as to provide integrity. Even the OS that you're running in your VM is not modifiable, and you, as the customer, can verify that. But the most interesting part, I guess, is how we ensure the performance of this environment, because you can imagine, Dave, that encrypting adds work, additional time, additional latency. We were able to mitigate all of that by providing very interesting capabilities in the OS itself. So our customers get no changes needed, fantastic performance, and scale as they would expect from a cloud provider like Google. >> Okay, thank you. Excellent. Appreciate that explanation. So, again, on the narrative here: you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance. Key management, as they say, is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, compared to the pre-confidential-computing days, what are the new guarantees that these hardware-based technologies provide to customers? >> So if I am a customer, I am saying I now have a full guarantee of confidentiality and integrity of the data and of the code. If you look at code and data confidentiality, the customer cares about whether their systems are protected from outside or unauthorized access, and, as we covered with Nelly, they are. Confidential computing ensures that the application and data internals remain secret, right? The code is looking at the data, and only in memory is the data decrypted, with a key that is ephemeral, per VM and generated on demand. Then you have the second point, code and data integrity, where customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that the application internals are not tampered with.
So the application, the workload as we call it, that is processing the data, has also not been tampered with and preserves its integrity. I would also say that this is all verifiable. You have attestation, and this attestation generates a log trail, and the log trail provides proof that integrity was preserved. And I think it also offers a guarantee of what we call sealing, this idea that the secrets have been preserved and not tampered with: confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, I think I heard you say that for applications it's transparent, you don't have to change the application, it essentially comes for free. And we showed various parts of the stack before. I'm curious as to what's affected, but more importantly, what is specifically Google's value add? How do partners participate in this ecosystem, or, said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was done by the community. Google very much operates in the open, so for our operating systems we work with the OS repositories and OS vendors to ensure that all the capabilities we need are part of the kernels, part of the releases, and available for customers to understand and even explore, if they enjoy reading a lot of code. We have also modified the host kernel, together with our silicon vendors, to support this capability, and that means working with the community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel Google contributed quite a bit: we moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing and of removing barriers. And now, I don't know if you noticed, Intel is also leaning in, announcing their Trust Domain Extensions, a very similar architecture. No surprise: again, a lot of work was done with our partners to convince them, work with them and make this capability available. The same with Arm: this year, actually last year, Arm announced their future design for confidential computing, called the Confidential Computing Architecture, and it's also influenced very heavily by similar ideas from Google and the industry overall. So there is a lot of work in the Confidential Computing Consortium that we are doing, for example, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. They want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data, workloads or secrets with them. So we are coming together as a community, and we have an attestation SIG, a community-based effort that we want to build and influence, working with Arm and every other cloud provider, to ensure that we can interoperate. It means that no matter where confidential workloads are hosted, they can exchange data in a secure, verifiable way that is controlled by customers.
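The verify-before-trust flow described here, measure the stack, sign a quote, check it before releasing anything sensitive, can be sketched at a toy level as follows. This is only an illustration of the general pattern (it also folds in the purpose check that comes up in the data-sovereignty discussion below); it is not the real attestation protocol of any vendor or of the Confidential Computing Consortium, and every name in it is invented. It uses the third-party Python `cryptography` package.

```python
# Toy remote-attestation flow: the workload environment produces a signed
# "quote" over its boot/software measurements; the data owner verifies the
# quote and a declared purpose against policy before releasing a data key.
# Purely illustrative; requires the third-party "cryptography" package.
import hashlib
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A hardware root of trust (e.g. a secure processor) holds the signing key.
root_of_trust = Ed25519PrivateKey.generate()
root_of_trust_pub = root_of_trust.public_key()


def measure(*components: bytes) -> bytes:
    """Hash the firmware/kernel/workload stack into one measurement."""
    h = hashlib.sha256()
    for c in components:
        h.update(hashlib.sha256(c).digest())
    return h.digest()


# --- inside the confidential environment ------------------------------------
booted = measure(b"firmware-v7", b"kernel-5.15-patched", b"fraud-model:v3")
quote = root_of_trust.sign(booted)  # attestation "quote" over the measurement

# --- on the data owner's side ------------------------------------------------
EXPECTED = measure(b"firmware-v7", b"kernel-5.15-patched", b"fraud-model:v3")
POLICY = {"allowed_purpose": "fraud-detection"}


def release_data_key(measurement: bytes, signature: bytes, purpose: str) -> bytes:
    root_of_trust_pub.verify(signature, measurement)   # raises if forged
    if measurement != EXPECTED:
        raise PermissionError("unexpected software stack")
    if purpose != POLICY["allowed_purpose"]:
        raise PermissionError("purpose not allowed by contract")
    return os.urandom(32)  # release/wrap the key only after all checks pass


key = release_data_key(booted, quote, purpose="fraud-detection")
print("data key released:", key.hex())
```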
And to do that, we need to continue what we are doing: working in the open and contributing our ideas and the ideas of our partners, so that confidential computing becomes what we believe it has to become, a utility. It doesn't need to be so special; that's what we want it to become. >> Thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing across the ecosystem and different regions, of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes that's problematic. I know there are a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're maybe out of alignment with the pace of technology. One of the frequent examples is: when you delete data, can you actually prove that data is deleted with a hundred percent certainty? You've got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses it all. That's why we want to step back and say: digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption and access to your data. It includes operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, so if there are any updates to the hardware, to the software stack, any operations, there is full transparency, full visibility. And then the third pillar is software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the cloud, and that you can use open source. Now let's take a deep dive into data sovereignty, which, by the way, is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically abides by the jurisdiction and the regulations of the place where it resides. Others say, hey, let's focus on data protection: we want to ensure the confidentiality, integrity and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when discussing data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to my data. This reminds me of security two decades ago, even a decade ago, when we started the security movement by putting in firewall protections and login access. But once you were in, you were able to do everything you wanted with the data; an insider had access to all the infrastructure, the data and the code. It's similar here, because with data sovereignty we care about where the data resides and who is operating on the data.
But the moment the data is being processed, I need to trust that the processing will abide by user control, by the policies I put in place for how my data is going to be used. If you look at a lot of the regulation today and a lot of the initiatives around the International Data Spaces Association, IDSA, and Gaia-X, there is a movement toward saying that the two parties, the provider of the data and the receiver of the data, are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, it will be used for the purposes that were intended and specified in the contract. And if you bring together, and this is the exciting part, confidential computing with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity safety of the confidential computing environment. That's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. It was a deep dive, brief but really detailed, so I appreciate that, especially the verification side of the enforcement. Last question. I met you two because, as part of my year-end prediction post, you sent in some predictions and I wasn't able to get to them in that post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in 2023, and what does the maturity curve look like this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction is that in five to seven years, as I said at the start, it will become a utility. It will become like TLS: ten years ago we couldn't believe that websites would all have certificates and that we would support encrypted traffic everywhere. Now we do, and it has become ubiquitous. That's exactly where confidential computing is heading. We're not there yet; it will take a few years of maturity, but we will get there. >> Thank you. And Patricia, what's your prediction? >> I will double down on that and say that in the very near future you will not be able to afford not having it. I believe that as digital sovereignty becomes ever more top of mind for sovereign states, for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It will become the default mode of operation, if you will. I like this comparison: if we talk to young technologists today, it's inconceivable to them that at some point in history, and I happen to have been alive then, we had data at rest that was not encrypted and data in transit that was not encrypted. I think it will be just as inconceivable, at some point in the near future, to have unencrypted data while in use. >> And I think the beauty of this industry is that because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover.
I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition, in our view, will moderate price hikes. And at the end of the day, this is under the covers technology that essentially will come for free. So we'll take it. I want to thank our guests today, Nelly and Patricia from Google, and thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio, Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com. Does some great editing for us, thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or dm me @DVellante. And you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (upbeat music)

Published Date : Feb 11 2023

Ali Ghodsi, Databricks | Cube Conversation Partner Exclusive


 

(outro music) >> Hey, I'm John Furrier, here with an exclusive interview with Ali Ghodsi, who's the CEO of Databricks. Ali, great to see you. This is a preview for Reinvent; we're going to launch this story, exclusive Databricks material, on the notes around the Reinvent keynotes. So great to see you. You know, you've been a partner of AWS for a very, very long time. I think I first interviewed you five years ago; you were one of the first to publicly declare that this was a place to build a company on, and not just to host an application, but to refactor capabilities to create essentially a platform in the cloud, on the cloud. Not just an ISV, Independent Software Vendor, kind of an old term; we're talking about real platform-like capability to change the game. Can you talk about your experience as an AWS partner? >> Yeah, look, so we started in 2013. I swiped my personal credit card on AWS and some of my co-founders did the same, and we started building. And we were excited because we just thought this was a much better way to launch a company: you can get to market much faster, launch your thing, and give end users much quicker access to what you're building. So we didn't really talk to anyone at AWS, we just swiped a credit card. Eventually they told us, "Hey, do you want to buy extra support? You're asking a lot of advanced questions. Maybe you want to buy our advanced support." And we said, no, no, no. We're very advanced ourselves, we know what we're doing, we're not going to buy any advanced support. So, you know, we just built this startup from nothing on AWS without even talking to anyone there. At some point, I think around 2017, they suddenly saw this company with maybe a hundred million in ARR pop up on their radar, driving massive amounts of compute and massive amounts of data. And it took a little while in the beginning just for us to get to know each other because, as I said, we were not on their radar and we weren't really looking, we were just doing our thing. Then over the years the partnership has deepened and deepened, and with, you know, Andy (indistinct) really leaning into the partnership, he mentioned us at Reinvent. And then we figured out a way to really integrate the two, the Databricks platform with AWS. Today it's an amazing partnership. We're directly connected with the general managers for the services, we're connected at the CEO level, the sellers get compensated for pushing Databricks, we have multiple offerings on their marketplace, and we have a native offering on AWS. We're prominently marketed, and we're also aligned vision-wise in what we're trying to do. So yeah, we've come a very, very long way. >> Do you consider yourself a SaaS app or an ISV, or do you see yourself more as a platform company, because you have customers? How would you categorize your category as a company? >> Well, it's a data platform, right? The strategy of Databricks is to take what is otherwise five or six services in the industry, or five or six different startups, and do them as part of one integrated data platform. So in one word, the strategy of Databricks is "unification." We call it the data lake house. But really the idea behind the data lake house is that of unification, or in more words, "The whole is greater than the sum of its parts."
So you could actually go and buy five, six services out there or actually use five, six services from the cloud vendors, stitch it together and it kind of resembles Databricks. Our power is in doing those integrated, together in a way in which it's really, really easy and simple to use for end users. So yeah, we're a data platform. I wouldn't, you know, ISV that's a old term, you know, Independent Software Vendor. You know, I think, you know, we have actually a whole slew of ISVs on top of Databricks, that integrate with our platform. And you know, in our marketplace as well as in our partner connect, we host those ISVs that then, you know, work on top of the data that we have in the Databricks, data lake house. >> You know, I think one of the things your journey has been great to document and watch from the beginning. I got to give you guys credit over there and props, congratulations. But I think you're the poster child as a company to what we see enterprises doing now. So go back in time when you guys swiped a credit card, you didn't need attending technical support because you guys had brains, you were refactoring, rethinking. It wasn't just banging out software, you had, you were doing some complex things. It wasn't like it was just write some software hosted on server. It was really a lot more. And as a result your business worth billions of dollars. I think 38 billion or something like that, big numbers, big numbers of great revenue growth as well, billions in revenue. You have customers, you have an ecosystem, you have data applications on top of Databricks. So in a way you're a cloud on top of the cloud. So is there a cloud on top of the cloud? So you have ISVs, Amazon has ISVs. Can you take us through what this means and at this point in history, because this seems to be an advanced version of benefits of platforming and refactoring, leveraging say AWS. >> Yeah, so look, when we started, there was really only one game in town. It was AWS. So it was one cloud. And the strategy of the company then was, well Amazon had this beautiful set of services that they're building bottom up, they have storage, compute, networking, and then they have databases and so on. But it's a lot of services. So let us not directly compete with AWS and try to take out one of their services. Let's not do that because frankly we can't. We were not of that size. They had the scale, they had the size and they were the only cloud vendor in town. So our strategy instead was, let's do something else. Let's not compete directly with say, a particular service they're building, let's take a different strategy. What if we had a unified holistic data platform, where it's just one integrated service end to end. So think of it as Microsoft office, which contains PowerPoint, and Word, and Excel and even Access, if you want to use it. What if we build that and AWS has this really amazing knack for releasing things, you know services, lots of them, every reinvent. And they're sort of a DevOps person's dream and you can stitch these together and you know you have to be technical. How do we elevate that and make it simpler and integrate it? That was our original strategy and it resonated with a segment of the market. And the reason it worked with AWS so that we wouldn't butt heads with AWS was because we weren't a direct replacement for this service or for that service, we were taking a different approach. 
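The "unification" idea described here can be pictured with a small, hypothetical example: a single open-format table that serves both warehouse-style SQL analytics and data-science workloads, instead of copies living in separate systems. The sketch below assumes a Spark session with Delta Lake available (for example, a Databricks cluster); the table, columns and values are invented purely for illustration.

```python
# One table, two workloads: BI-style SQL and ML-style feature preparation.
# Assumes a Spark environment with Delta Lake support (e.g. Databricks).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Land raw events once, as an open-format Delta table ("the lake").
events = spark.createDataFrame(
    [("u1", "checkout", 42.0), ("u2", "view", 0.0), ("u1", "view", 0.0)],
    ["user_id", "action", "amount"],
)
events.write.format("delta").mode("overwrite").saveAsTable("events")

# Warehouse-style SQL on the same table ("the house").
spark.sql("""
    SELECT user_id, SUM(amount) AS revenue
    FROM events
    WHERE action = 'checkout'
    GROUP BY user_id
""").show()

# Data-science / ML feature preparation on the very same table, with no copy
# to a separate warehouse and no export to a separate lake.
features = (
    spark.table("events")
    .groupBy("user_id")
    .agg(F.count("*").alias("n_events"), F.sum("amount").alias("spend"))
)
features.show()
```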
And AWS, credit goes to them, they're so customer obsessed that they would actually do what's right for the customer. So if the customer said, we want this unified thing, their sellers would actually say, okay, then you should use Databricks. They truly are customer obsessed in that way, and I really mean it, John. Things have changed over the years. They're not the only cloud anymore. Azure is real, GCP is real, there's also Alibaba. And now over 70% of our customers are on more than one cloud. So what we hear from them now is: not only do we want a simplified, unified thing, we want it to work across the clouds. Because those that are seriously considering multiple clouds don't want to use a service on cloud one and then a similar but slightly different service on cloud two, and have to do the work twice to make it all function. You know, John, it's hard enough as it is; this data and analytics stuff is not a walk in the park. It's not like you hire an administrator in the back office who clicks a button and now you're a data-driven, digitally transformed company. It's hard. If you have to do it again on a second cloud with a different set of services, and then again on a third cloud with yet another set of services, that's very, very costly. So the strategy has changed: how do we take that unified, simple approach and standardize it across the clouds, but then also integrate it as far down as we can on each of the clouds, so that you're not giving up any of the benefits that a particular cloud has? >> Yeah, I think one of the things that we see, and I want to get your reaction to this, is this rise of the super cloud, as we call it. I think you were involved in the Sky paper; I saw your position paper came out after we had introduced Super Cloud, which is great. Congratulations to the Berkeley team, wearing that hat here. But you guys are, I think, a driver of this because you're creating the need for these things. You're saying, okay, we went on one cloud with AWS, and you didn't hide that. And now you're publicly saying there are other clouds too, increased TAM for your business. And customers have multiple clouds in their infrastructure for the best of breed that they have. Okay, I get that. But there's still a challenge around the innovation and growth that's still around the corner. We still have a supply chain problem, we still have skill gaps. You guys at Databricks are unique, as are the other big examples of super clouds that are developing. Enterprises don't have the Databricks kind of talent; they need turnkey solutions. So Adam and the team at Amazon are promoting more solution-oriented approaches higher up the stack. You're starting to see, I won't say templates, but almost application-specific, headless, low-code, no-code capabilities to accelerate clients who want to write code for the modern era. And now, as you guys pointed out, with these common services you're pushing the envelope. You're saying, hey, I need to compete, and I don't want to go to my customers and make them staff for this cloud and this cloud and this cloud, because they don't have the staff, or if they do, they're very unique. So what's your reaction? Because this kind of shows your leadership as a partner of AWS and the clouds, but it also highlights, I think, what's coming.
But share your reaction. >> Yeah, look, first of all, I wish I could take credit for this, but I can't, because it's really the customers who decided to go with multiple clouds. It's not that Databricks pushed this, or that some other vendor, Snowflake or someone, pushed this and enterprises listened to us and picked two clouds. That's not how it happened. The enterprises picked two or three clouds themselves, and we can get into why, but they did that. So this largely just happened in the market. We as data platforms responded to what they're saying, which is: "I don't want to redo this again on the other cloud." So I think the writing is on the wall; it's super obvious what's going to happen next. They will say, "Any service I'm using, it better work exactly the same on all the clouds." That's what's going to happen. In the next five years, every enterprise will say, "I'm going to use the service, but you better make sure that this service works equally well on all of the clouds." And obviously the multicloud vendors like us are there to do that. But I actually think what you're going to see is the cloud vendors changing their existing services to make them work on the other clouds. That's what's going to happen, I think. >> Yeah, and I would add that, first of all, I agree with you. I think that's going to be a forcing function, because I think you're driving it. You guys are, in a way, one actor driving this because you're on the front end of it, and there are others, and there will be people following. But to me, if I'm a cloud vendor, I've got to differentiate. If I'm Adam Selipsky, I've got to say, "Hey, I've got to differentiate." I don't want to get stuck in the middle, so to speak. Am I going to innovate on the hardware, AKA the infrastructure, or am I going to innovate at the higher-level services? So what we're talking about here is a tale of two clouds within Amazon, for instance. Do I innovate on the silicon, get low-level into the physics, and squeeze performance out of the hardware and infrastructure? Or do I focus on ease of use at the top of the stack for the developers? So again, there's a tale of two clouds here. So I've got to ask you, how do they differentiate? That's number one. And number two, I never heard a developer ever say, "I want to run my app or workload on the slower cloud." Back when we had PCs you'd say, "I want the fastest processor." So again, you can have common-level services, but where is that performance differentiation with the cloud? What do the clouds do, in your opinion? >> Yeah, look, I think it's pretty clear, and this is no surprise: probably 70% or so of the revenue is in the lower infrastructure layers, compute, storage, networking. And they have to win that; they have to be competitive there. As you said, you could say, oh, I guess my CPUs are slower than the other cloud's, but who cares, I have amazing other services which only work on my cloud. That's not going to be a winning recipe. So I think all three are laser focused on: we're going to have specialized hardware and the nuts and bolts of the infrastructure, and we can do it better than the other clouds, for sure. And you can see lots of innovation happening there, right?
The Graviton chips, you know, we see huge price-performance benefits in those chips. I mean, it's real, right? It's basically a 20, 30% free lunch. Why wouldn't you go for it there? There's no downside, there's no "gotcha," no catch. And we see Azure doing the same thing now, they're also building their own chips, and we know that Google builds specialized machine learning chips, TPUs, Tensor Processing Units. So they're all focused on that. I don't think they can give up that layer and focus only on the higher levels if they had to pick bets. And I think actually, in the next few years, most of us have to be more deliberate and calculated in the picks we make. In the last five years, most of us have said, "We'll do all of it," you know. >> Well, you made a good bet with Spark. You know, Hadoop was a pretty obvious trend, everyone was jumping on that bandwagon, and you guys placed a big bet on Spark. Look what happened with you guys. So again, I love this betting kind of concept, because as the world matures, growth slows down and shifts, and that next wave of value coming in, AKA customers, is going to integrate with a new ecosystem, a new kind of partner network for AWS and the other clouds. But AWS is going to need to nurture the next Databricks. They're going to need to still provide that SaaS, ISV-like experience for, you know, basic software hosting or some application. But I've got to get your thoughts on this idea of multiple clouds, because if I'm a developer, the old days, within our decade, were about the full stack developer- >> It was two years ago, yeah (John laughing) >> This is a decade ago: full stack, and then the cloud came in, you kind of had the half stack and then you would do some things. It seems like the clouds are trying to say, we want to be the full stack, or not. Or is it still going to be, you know, I'm an application, like on a PC and a Mac, I'm going to write the same application for both kinds of hardware? What's your take on this? Are they trying to do full stack, or do you see them more like- >> Absolutely. I mean look, of course they're going there. They have over 300, I think Amazon has over 300 services, right? That's not just compute, storage, networking, it's the whole stack, right? But my key point is, I think they have to nail the core infrastructure, storage, compute, networking, because the three clouds that are competing are formidable companies with formidable balance sheets, and it doesn't look like any of them is going to throw in the towel and say, we give up. So I think it's going to intensify. And given that they get 70% of their revenue on that infrastructure layer, I think if they have to pick their bets, they'll focus on that infrastructure layer. The layer above, where they're also placing bets, they're doing that too, the full stack, right? But there, I think the demand will be: can you make that work on the other clouds? And therein lies an innovator's dilemma, because if I make it work on the other clouds, then I'm forgoing that 70% infrastructure revenue. I'm not getting it; the other cloud vendor is going to get it. So should I do that or not? Second, is the other cloud vendor going to be welcoming of me making my service work on their cloud if I am a competing cloud? And what kind of terms of service are they giving me? And am I really going to invest in doing that?
And I think right now we, you know, most, the vast, vast, vast majority of the services only work on the one cloud that you know, it's built on. It doesn't work on others, but this will shift. >> Yeah, I think the innovators dilemma is also very good point. And also add, it's an integrators dilemma too because now you talk about integration across services. So I believe that the super cloud movement's going to happen before Sky. And I think what explained by that, what you guys did and what other companies are doing by representing advanced, I call platform engineering, refactoring an existing market really fast, time to value and CAPEX is, I mean capital, market cap is going to be really fast. I think there's going to be an opportunity for those to emerge that's going to set the table for global multicloud ultimately in the future. So I think you're going to start to see the same pattern of what you guys did get in, leverage the hell out of it, use it, not in the way just to host, but to refactor and take down territory of markets. So number one, and then ultimately you get into, okay, I want to run some SLA across services, then there's a little bit more complication. I think that's where you guys put that beautiful paper out on Sky Computing. Okay, that makes sense. Now if you go to today's market, okay, I'm betting on Amazon because they're the best, this is the best cloud win scenario, not the most robust cloud. So if I'm a developer, I want the best. How do you look at their bet when it comes to data? Because now they've got machine learning, Swami's got a big keynote on Wednesday, I'm expecting to see a lot of AI and machine learning. I'm expecting to hear an end to end data story. This is what you do, so as a major partner, how do you view the moves Amazon's making and the bets they're making with data and machine learning and AI? >> First I want to lift off my hat to AWS for being customer obsessed. So I know that if a customer wants Databricks, I know that AWS and their sellers will actually help us get that customer deploy Databricks. Now which of the services is the customer going to pick? Are they going to pick ours or the end to end, what Swami is going to present on stage? Right? So that's the question we're getting. But I wanted to start with by just saying, their customer obsessed. So I think they're going to do the right thing for the customer and I see the evidence of it again and again and again. So kudos to them. They're amazing at this actually. Ultimately our bet is, customers want this to be simple, integrated, okay? So yes there are hundreds of services that together give you the end to end experience and they're very customizable that AWS gives you. But if you want just something simply integrated that also works across the clouds, then I think there's a special place for Databricks. And I think the lake house approach that we have, which is an integrated, completely integrated, we integrate data lakes with data warehouses, integrate workflows with machine learning, with real time processing, all these in one platform. I think there's going to be tailwinds because I think the most important thing that's going to happen in the next few years is that every customer is going to now be obsessed, given the recession and the environment we're in. How do I cut my costs? How do I cut my costs? 
And we learn this from the customers they're adopting the lake house because they're thinking, instead of using five vendors or three vendors, I can simplify it down to one with you and I can cut my cost. So I think that's going to be one of the main drivers of why people bet on the lake house because it helps them lower their TCO; Total Cost of Ownership. And it's as simple as that. Like I have three things right now. If I can get the same job done of those three with one, I'd rather do that. And by the way, if it's three or four across two clouds and I can just use one and it just works across two clouds, I'm going to do that. Because my boss is telling me I need to cut my budget. >> (indistinct) (John laughing) >> Yeah, and I'd rather not to do layoffs and they're asking me to do more. How can I get smaller budgets, not lay people off and do more? I have to cut, I have to optimize. What's happened in the last five, six years is there's been a huge sprawl of services and startups, you know, you know most of them, all these startups, all of them, all the activity, all the VC investments, well those companies sold their software, right? Even if a startup didn't make it big, you know, they still sold their software to some vendors. So the ecosystem is now full of lots and lots and lots and lots of different software. And right now people are looking, how do I consolidate, how do I simplify, how do I cut my costs? >> And you guys have a great solution. You're also an arms dealer and a innovator. So I have to ask this question, because you're a professor of the industry as well as at Berkeley, you've seen a lot of the historical innovations. If you look at the moment we're in right now with the recession, okay we had COVID, okay, it changed how people work, you know, people working at home, provisioning VLAN, all that (indistinct) infrastructure, okay, yeah, technology and cloud health. But we're in a recession. This is the first recession where the Amazon and the other cloud, mainly Amazon Web Services is a major economic puzzle in the piece. So they were never around before, even 2008, they were too small. They're now a major economic enabler, player, they're serving startups, enterprises, they have super clouds like you guys. They're a force and the people, their customers are cutting back but also they can also get faster. So agility is now an equation in the economic recovery. And I want to get your thoughts because you just brought that up. Customers can actually use the cloud and Databricks to actually get out of the recovery because no one's going to say, stop making profit or make more profit. So yeah, cut costs, be more efficient, but agility's also like, let's drive more revenue. So in this digital transformation, if you take this to conclusion, every company transforms, their company is the app. So their revenue is tied directly to their technology deployment. What's your reaction and comment to that because this is a new historical moment where cloud and scale and data, actually could be configured in a way to actually change the nature of a business in such a short time. And with the recession looming, no one's got time to wait. >> Yeah, absolutely. Look, the secular tailwind in the market is that of, you know, 10 years ago it was software is eating the world, now it's AI's going to eat all of software software. So more and more we're going to have, wherever you have software, which is everywhere now because it's eaten the world, it's going to be eaten up by AI and data. 
You know, AI doesn't exist without data so they're synonymous. You can't do machine learning if you don't have data. So yeah, you're going to see that everywhere and that automation will help people simplify things and cut down the costs and automate more things. And in the cloud you can also do that by changing your CAPEX to OPEX. So instead of I invest, you know, 10 million into a data center that I buy, I'm going to have headcount to manage the software. Why don't we change this to OPEX? And then they are going to optimize it. They want to lower the TCO because okay, it's in the cloud. but I do want the costs to be much lower that what they were in the previous years. Last five years, nobody cared. Who cares? You know what it costs. You know, there's a new brave world out there. Now there's like, no, it has to be efficient. So I think they're going to optimize it. And I think this lake house approach, which is an integration of the lakes and the warehouse, allows you to rationalize the two and simplify them. It allows you to basically rationalize away the data warehouse. So I think much faster we're going to see the, why do I need the data warehouse? If I can get the same thing done with the lake house for fraction of the cost, that's what's going to happen. I think there's going to be focus on that simplification. But I agree with you. Ultimately everyone knows, everybody's a software company. Every company out there is a software company and in the next 10 years, all of them are also going to be AI companies. So that is going to continue. >> (indistinct), dev's going to stop. And right sizing right now is a key economic forcing function. Final question for you and I really appreciate you taking the time. This year Reinvent, what's the bumper sticker in your mind around what's the most important industry dynamic, power dynamic, ecosystem dynamic that people should pay attention to as we move from the brave new world of okay, I see cloud, cloud operations. I need to really make it structurally change my business. How do I, what's the most important story? What's the bumper sticker in your mind for Reinvent? >> Bumper sticker? lake house 24. (John laughing) >> That's data (indistinct) bumper sticker. What's the- >> (indistinct) in the market. No, no, no, no. You know, it's, AWS talks about, you know, all of their services becoming a lake house because they want the center of the gravity to be S3, their lake. And they want all the services to directly work on that, so that's a lake house. We're Bumper see Microsoft with Synapse, modern, you know the modern intelligent data platform. Same thing there. We're going to see the same thing, we already seeing it on GCP with Big Lake and so on. So I actually think it's the how do I reduce my costs and the lake house integrates those two. So that's one of the main ways you can rationalize and simplify. You get in the lake house, which is the name itself is a (indistinct) of two things, right? Lake house, "lake" gives you the AI, "house" give you the database data warehouse. So you get your AI and you get your data warehousing in one place at the lower cost. So for me, the bumper sticker is lake house, you know, 24. >> All right. Awesome Ali, well thanks for the exclusive interview. Appreciate it and get to see you. Congratulations on your success and I know you guys are going to be fine. >> Awesome. Thank you John. It's always a pleasure. >> Always great to chat with you again. >> Likewise. >> You guys are a great team. 
We're big fans of what you guys have done. We think you're an example of what we call "super cloud." Which is getting the hype up and again your paper speaks to some of the innovation, which I agree with by the way. I think that that approach of not forcing standards is really smart. And I think that's absolutely correct, that having the market still innovate is going to be key. standards with- >> Yeah, I love it. We're big fans too, you know, you're doing awesome work. We'd love to continue the partnership. >> So, great, great Ali, thanks. >> Take care (outro music)

Published Date : Nov 23 2022

Ali Ghosdi, Databricks | AWS Partner Exclusive


 

(outro music) >> Hey, I'm John Furrier, here with an exclusive interview with Ali Ghodsi, who's the CEO of Databricks. Ali, great to see you. Preview for reinvent. We're going to launch this story, exclusive Databricks material on the notes, after the keynotes prior to the keynotes and after the keynotes that reinvent. So great to see you. You know, you've been a partner of AWS for a very, very long time. I think five years ago, I think I first interviewed you, you were one of the first to publicly declare that this was a place to build a company on and not just post an application, but refactor capabilities to create, essentially a platform in the cloud, on the cloud. Not just an ISV; Independent Software Vendor, kind of an old term, we're talking about real platform like capability to change the game. Can you talk about your experience as an AWS partner? >> Yeah, look, so we started in 2013. I swiped my personal credit card on AWS and some of my co-founders did the same. And we started building. And we were excited because we just thought this is a much better way to launch a company because you can just much faster get time to market and launch your thing and you can get the end users much quicker access to the thing you're building. So we didn't really talk to anyone at AWS, we just swiped a credit card. And eventually they told us, "Hey, do you want to buy extra support?" "You're asking a lot of advanced questions from us." "Maybe you want to buy our advanced support." And we said, no, no, no, no. We're very advanced ourselves, we know what we're doing. We're not going to buy any advanced support. So, you know, we just built this, you know, startup from nothing on AWS without even talking to anyone there. So at some point, I think around 2017, they suddenly saw this company with maybe a hundred million ARR pop up on their radar and it's driving massive amounts of compute, massive amounts of data. And it took a little bit in the beginning just us to get to know each other because as I said, it's like we were not on their radar and we weren't really looking, we were just doing our thing. And then over the years the partnership has deepened and deepened and deepened and then with, you know, Andy (indistinct) really leaning into the partnership, he mentioned us at Reinvent. And then we sort of figured out a way to really integrate the two service, the Databricks platform with AWS . And today it's an amazing partnership. You know, we directly connected with the general managers for the services. We're connected at the CEO level, you know, the sellers get compensated for pushing Databricks, we're, we have multiple offerings on their marketplace. We have a native offering on AWS. You know, we're prominently always sort of marketed and you know, we're aligned also vision wise in what we're trying to do. So yeah, we've come a very, very long way. >> Do you consider yourself a SaaS app or an ISV or do you see yourself more of a platform company because you have customers. How would you categorize your category as a company? >> Well, it's a data platform, right? And actually the, the strategy of the Databricks is take what's otherwise five, six services in the industry or five, six different startups, but do them as part of one data platform that's integrated. So in one word, the strategy of data bricks is "unification." We call it the data lake house. But really the idea behind the data lake house is that of unification, or in more words it's, "The whole is greater than the sum of its parts." 
So you could actually go and buy five, six services out there or actually use five, six services from the cloud vendors, stitch it together and it kind of resembles Databricks. Our power is in doing those integrated, together in a way in which it's really, really easy and simple to use for end users. So yeah, we're a data platform. I wouldn't, you know, ISV that's a old term, you know, Independent Software Vendor. You know, I think, you know, we have actually a whole slew of ISVs on top of Databricks, that integrate with our platform. And you know, in our marketplace as well as in our partner connect, we host those ISVs that then, you know, work on top of the data that we have in the Databricks, data lake house. >> You know, I think one of the things your journey has been great to document and watch from the beginning. I got to give you guys credit over there and props, congratulations. But I think you're the poster child as a company to what we see enterprises doing now. So go back in time when you guys swiped a credit card, you didn't need attending technical support because you guys had brains, you were refactoring, rethinking. It wasn't just banging out software, you had, you were doing some complex things. It wasn't like it was just write some software hosted on server. It was really a lot more. And as a result your business worth billions of dollars. I think 38 billion or something like that, big numbers, big numbers of great revenue growth as well, billions in revenue. You have customers, you have an ecosystem, you have data applications on top of Databricks. So in a way you're a cloud on top of the cloud. So is there a cloud on top of the cloud? So you have ISVs, Amazon has ISVs. Can you take us through what this means and at this point in history, because this seems to be an advanced version of benefits of platforming and refactoring, leveraging say AWS. >> Yeah, so look, when we started, there was really only one game in town. It was AWS. So it was one cloud. And the strategy of the company then was, well Amazon had this beautiful set of services that they're building bottom up, they have storage, compute, networking, and then they have databases and so on. But it's a lot of services. So let us not directly compete with AWS and try to take out one of their services. Let's not do that because frankly we can't. We were not of that size. They had the scale, they had the size and they were the only cloud vendor in town. So our strategy instead was, let's do something else. Let's not compete directly with say, a particular service they're building, let's take a different strategy. What if we had a unified holistic data platform, where it's just one integrated service end to end. So think of it as Microsoft office, which contains PowerPoint, and Word, and Excel and even Access, if you want to use it. What if we build that and AWS has this really amazing knack for releasing things, you know services, lots of them, every reinvent. And they're sort of a DevOps person's dream and you can stitch these together and you know you have to be technical. How do we elevate that and make it simpler and integrate it? That was our original strategy and it resonated with a segment of the market. And the reason it worked with AWS so that we wouldn't butt heads with AWS was because we weren't a direct replacement for this service or for that service, we were taking a different approach. 
And AWS, because credit goes to them, they're so customer obsessed, they would actually do what's right for the customer. So if the customer said we want this unified thing, their sellers would actually say, okay, so then you should use Databricks. So they truly are customer obsessed in that way. And I really mean it, John. Things have changed over the years. They're not the only cloud anymore. You know, Azure is real, GCP is real, there's also Alibaba. And now over 70% of our customers are on more than one cloud. So now what we hear from them is, not only do we want a simplified, unified thing, but we want it also to work across the clouds. Because those of them that are seriously considering multiple clouds, they don't want to use a service on cloud one and then use a similar service on cloud two, but it's a little bit different, and now they have to do twice the work to make it work. You know, John, it's hard enough as it is, like it's this data stuff and analytics. It's not a walk in the park, you know. It's not like you hire an administrator in the back office that clicks a button and, just like that, now you're a data driven, digitally transformed company. It's hard. If you now have to do it again on the second cloud with a different set of services, and then again on a third cloud with a different set of services, that's very, very costly. So the strategy then has changed to, how do we take that unified, simple approach and make it also the same and standardized across the clouds, but then also integrate it as far down as we can on each of the clouds? So that you're not giving up any of the benefits that the particular cloud has. >> Yeah, I think one of the things that we see, and I want to get your reaction to this, is this rise of the super cloud as we call it. I think you were involved in the Sky paper, I saw your position paper came out after we had introduced Super Cloud, which is great. Congratulations to the Berkeley team, wearing the hat here. But you guys are, I think, a driver of this because you're creating the need for these things. You're saying, okay, we went on one cloud with AWS and you didn't hide that. And now you're publicly saying there's other clouds too, increased TAM for your business. And customers have multiple clouds in their infrastructure for the best of breed that they have. Okay, get that. But there's still a challenge around the innovation, growth that's still around the corner. We still have a supply chain problem, we still have skill gaps. You know, you guys at Databricks are unique, as are other big examples of super clouds that are developing. Enterprises don't have the Databricks kind of talent. They need, they need turnkey solutions. So Adam and the team at Amazon are promoting, you know, more solution oriented approaches higher up on the stack. You're starting to see kind of like, I won't say templates, but you know, almost like application specific, headless, low code, no code capability to accelerate clients who are wanting to write code for the modern era. Right, so this kind of, and then now you, as you guys pointed out with these common services, you're pushing the envelope. So you're saying, hey, I need to compete, I don't want to go to my customers and have them have to staff for this cloud and this cloud and this cloud, because they don't have the staff. Or if they do, they're very unique. So what's your reaction? Because this kind of shows your leadership as a partner of AWS and the clouds, but also highlights I think what's coming. 
But please, share your reaction. >> Yeah, look, first of all, you know, I wish I could take credit for this but I can't, because it's really the customers that have decided to go on multiple clouds. You know, it's not Databricks that, you know, pushed this, or some other vendor, you know, Snowflake or someone, who pushed this and now enterprises listened to us and they picked two clouds. That's not how it happened. The enterprises picked two clouds or three clouds themselves, and we can get into why, but they did that. So this largely just happened in the market. We as data platforms responded to what they're then saying, which is they're saying, "I don't want to redo this again on the other cloud." So I think the writing is on the wall. I think it's super obvious what's going to happen next. They will say, "Any service I'm using, it better work exactly the same on all the clouds." You know, that's what's going to happen. So in the next five years, every enterprise will say, "I'm going to use the service, but you better make sure that this service works equally well on all of the clouds." And obviously the multicloud vendors like us are there to do that. But I actually think that what you're going to see happening is that you're going to see the cloud vendors changing the existing services that they have to make them work on the other clouds. That's what's going to happen, I think. >> Yeah, and I think I would add that, first of all, I agree with you. I think that's going to be a forcing function. Because I think you're driving it. You guys are, in a way, just one actor driving this, because you're on the front end of this, and there are others and there will be people following. But I think, to me, if I'm a cloud vendor, I got to differentiate. Adam, if I'm Adam Selipsky, I got to say, "Hey, I got to differentiate." So I don't want to get stuck in the middle, so to speak. Am I just going to innovate on the hardware, AKA infrastructure, or am I going to innovate at the higher level services? So what we're talking about here is the tale of two clouds within Amazon, for instance. So do I innovate on the silicon and get low level into the physics and squeeze performance out of the hardware and infrastructure? Or do I focus on ease of use at the top of the stack for the developers? So again, there's a tale of two clouds here. So I got to ask you, how do they differentiate? Number one, and number two, I never heard a developer ever say, "I want to run my app or workload on the slower cloud." So I mean, you know, back when we had PCs you wanted to go, "I want the fastest processor." So again, you can have common level services, but where is that performance differentiation with the cloud? What do the clouds do in your opinion? >> Yeah, look, I think it's pretty clear. I think that this is, you know, no surprise. Probably 70% or so of the revenue is in the lower infrastructure layers, compute, storage, networking. And they have to win that. They have to be competitive there. As you said, you can say, oh you know, I guess my CPUs are slower than the other cloud, but who cares? I have amazing other services which only work on my cloud by the way, right? That's not going to be a winning recipe. So I think all three are laser focused on, we're going to have specialized hardware and the nuts and bolts of the infrastructure, we can do it better than the other clouds for sure. And you can see lots of innovation happening there, right? 
The Graviton chips, you know, we see huge price performance benefits in those chips. I mean it's real, right? It's basically a 20, 30% free lunch. You know, why wouldn't you, why wouldn't you go for it there? There's no downside. You know, there's no "gotcha," no catch. But we see Azure doing the same thing now, they're also building their own chips, and we know that Google builds specialized machine learning chips, TPUs, Tensor Processing Units. So they're all laser focused on that. I don't think they can give that up, or focus on higher levels instead, if they had to pick bets. And I think actually in the next few years, most of us have to be more deliberate and calculated in the picks we do. I think in the last five years, most of us have said, "We'll do all of it." You know. >> Well you made a good bet with Spark, you know, Hadoop was a pretty obvious trend, everyone jumped on that bandwagon, and you guys picked a big bet with Spark. Look what happened with you guys. So again, I love this betting kind of concept because as the world matures, growth slows down and shifts, and that next wave of value coming in, AKA customers, they're going to integrate with a new ecosystem. A new kind of partner network for AWS and the other clouds. But with AWS, they're going to need to nurture the next Databricks. They're going to need to still provide that SaaS, ISV-like experience for, you know, basic software hosting or some application. But I got to get your thoughts on this idea of multiple clouds, because if I'm a developer, the old days was, old days, within our decade, full stack developer- >> It was two years ago, yeah (John laughing) >> This is a decade ago, full stack, and then the cloud came in, you kind of had the half stack and then you would do some things. It seems like the clouds are trying to say, we want to be the full stack, or not. Or is it still going to be, you know, I'm an application, like a PC and a Mac, I'm going to write the same application for both hardware. I mean what's your take on this? Are they trying to do full stack and you see them more like- >> Absolutely. I mean look, of course they're going to, they have, I mean I think Amazon has over 300 services, right? That's not just compute, storage, networking, it's the whole stack, right? But my key point is, I think they have to nail the core infrastructure: storage, compute, networking, because the three clouds that are there competing, they're formidable companies with formidable balance sheets, and it doesn't look like any of them is going to throw in the towel and say, we give up. So I think it's going to intensify. And given that they have 70% of revenue on that infrastructure layer, I think, if they have to pick their bets, they'll focus it on that infrastructure layer. The layer above, where they're also placing bets, they're doing that, the full stack, right? But there I think the demand will be, can you make that work on the other clouds? And therein lies an innovator's dilemma, because if I make it work on the other clouds, then I'm foregoing that 70% revenue of the infrastructure. I'm not getting it. The other cloud vendor is going to get it. So should I do that or not? Second, is the other cloud vendor going to be welcoming of me making my service work on their cloud if I am a competing cloud, right? And what kind of terms of service are they giving me? And am I going to really invest in doing that? 
And I think right now we, you know, most, the vast, vast, vast majority of the services only work on the one cloud that you know, it's built on. It doesn't work on others, but this will shift. >> Yeah, I think the innovators dilemma is also very good point. And also add, it's an integrators dilemma too because now you talk about integration across services. So I believe that the super cloud movement's going to happen before Sky. And I think what explained by that, what you guys did and what other companies are doing by representing advanced, I call platform engineering, refactoring an existing market really fast, time to value and CAPEX is, I mean capital, market cap is going to be really fast. I think there's going to be an opportunity for those to emerge that's going to set the table for global multicloud ultimately in the future. So I think you're going to start to see the same pattern of what you guys did get in, leverage the hell out of it, use it, not in the way just to host, but to refactor and take down territory of markets. So number one, and then ultimately you get into, okay, I want to run some SLA across services, then there's a little bit more complication. I think that's where you guys put that beautiful paper out on Sky Computing. Okay, that makes sense. Now if you go to today's market, okay, I'm betting on Amazon because they're the best, this is the best cloud win scenario, not the most robust cloud. So if I'm a developer, I want the best. How do you look at their bet when it comes to data? Because now they've got machine learning, Swami's got a big keynote on Wednesday, I'm expecting to see a lot of AI and machine learning. I'm expecting to hear an end to end data story. This is what you do, so as a major partner, how do you view the moves Amazon's making and the bets they're making with data and machine learning and AI? >> First I want to lift off my hat to AWS for being customer obsessed. So I know that if a customer wants Databricks, I know that AWS and their sellers will actually help us get that customer deploy Databricks. Now which of the services is the customer going to pick? Are they going to pick ours or the end to end, what Swami is going to present on stage? Right? So that's the question we're getting. But I wanted to start with by just saying, their customer obsessed. So I think they're going to do the right thing for the customer and I see the evidence of it again and again and again. So kudos to them. They're amazing at this actually. Ultimately our bet is, customers want this to be simple, integrated, okay? So yes there are hundreds of services that together give you the end to end experience and they're very customizable that AWS gives you. But if you want just something simply integrated that also works across the clouds, then I think there's a special place for Databricks. And I think the lake house approach that we have, which is an integrated, completely integrated, we integrate data lakes with data warehouses, integrate workflows with machine learning, with real time processing, all these in one platform. I think there's going to be tailwinds because I think the most important thing that's going to happen in the next few years is that every customer is going to now be obsessed, given the recession and the environment we're in. How do I cut my costs? How do I cut my costs? 
And we learn this from the customers they're adopting the lake house because they're thinking, instead of using five vendors or three vendors, I can simplify it down to one with you and I can cut my cost. So I think that's going to be one of the main drivers of why people bet on the lake house because it helps them lower their TCO; Total Cost of Ownership. And it's as simple as that. Like I have three things right now. If I can get the same job done of those three with one, I'd rather do that. And by the way, if it's three or four across two clouds and I can just use one and it just works across two clouds, I'm going to do that. Because my boss is telling me I need to cut my budget. >> (indistinct) (John laughing) >> Yeah, and I'd rather not to do layoffs and they're asking me to do more. How can I get smaller budgets, not lay people off and do more? I have to cut, I have to optimize. What's happened in the last five, six years is there's been a huge sprawl of services and startups, you know, you know most of them, all these startups, all of them, all the activity, all the VC investments, well those companies sold their software, right? Even if a startup didn't make it big, you know, they still sold their software to some vendors. So the ecosystem is now full of lots and lots and lots and lots of different software. And right now people are looking, how do I consolidate, how do I simplify, how do I cut my costs? >> And you guys have a great solution. You're also an arms dealer and a innovator. So I have to ask this question, because you're a professor of the industry as well as at Berkeley, you've seen a lot of the historical innovations. If you look at the moment we're in right now with the recession, okay we had COVID, okay, it changed how people work, you know, people working at home, provisioning VLAN, all that (indistinct) infrastructure, okay, yeah, technology and cloud health. But we're in a recession. This is the first recession where the Amazon and the other cloud, mainly Amazon Web Services is a major economic puzzle in the piece. So they were never around before, even 2008, they were too small. They're now a major economic enabler, player, they're serving startups, enterprises, they have super clouds like you guys. They're a force and the people, their customers are cutting back but also they can also get faster. So agility is now an equation in the economic recovery. And I want to get your thoughts because you just brought that up. Customers can actually use the cloud and Databricks to actually get out of the recovery because no one's going to say, stop making profit or make more profit. So yeah, cut costs, be more efficient, but agility's also like, let's drive more revenue. So in this digital transformation, if you take this to conclusion, every company transforms, their company is the app. So their revenue is tied directly to their technology deployment. What's your reaction and comment to that because this is a new historical moment where cloud and scale and data, actually could be configured in a way to actually change the nature of a business in such a short time. And with the recession looming, no one's got time to wait. >> Yeah, absolutely. Look, the secular tailwind in the market is that of, you know, 10 years ago it was software is eating the world, now it's AI's going to eat all of software software. So more and more we're going to have, wherever you have software, which is everywhere now because it's eaten the world, it's going to be eaten up by AI and data. 
You know, AI doesn't exist without data, so they're synonymous. You can't do machine learning if you don't have data. So yeah, you're going to see that everywhere, and that automation will help people simplify things and cut down the costs and automate more things. And in the cloud you can also do that by changing your CAPEX to OPEX. So instead of, I invest, you know, 10 million into a data center that I buy and I'm going to have headcount to manage the software, why don't we change this to OPEX? And then they are going to optimize it. They want to lower the TCO, because okay, it's in the cloud, but I do want the costs to be much lower than what they were in the previous years. Last five years, nobody cared. Who cares? You know what it costs. You know, there's a brave new world out there. Now it's like, no, it has to be efficient. So I think they're going to optimize it. And I think this lake house approach, which is an integration of the lakes and the warehouse, allows you to rationalize the two and simplify them. It allows you to basically rationalize away the data warehouse. So I think much faster we're going to see the, why do I need the data warehouse? If I can get the same thing done with the lake house for a fraction of the cost, that's what's going to happen. I think there's going to be focus on that simplification. But I agree with you. Ultimately everyone knows, everybody's a software company. Every company out there is a software company, and in the next 10 years, all of them are also going to be AI companies. So that is going to continue. >> (indistinct), dev's going to stop. And right sizing right now is a key economic forcing function. Final question for you, and I really appreciate you taking the time. This year at re:Invent, what's the bumper sticker in your mind around what's the most important industry dynamic, power dynamic, ecosystem dynamic that people should pay attention to as we move from the brave new world of, okay, I see cloud, cloud operations, I need to really make it structurally change my business. How do I, what's the most important story? What's the bumper sticker in your mind for re:Invent? >> Bumper sticker? Lake house 24. (John laughing) >> That's data (indistinct) bumper sticker. What's the- >> (indistinct) in the market. No, no, no, no. You know, AWS talks about, you know, all of their services becoming a lake house because they want the center of gravity to be S3, their lake. And they want all the services to directly work on that, so that's a lake house. We see Microsoft with Synapse, you know, the modern intelligent data platform. Same thing there. We're going to see the same thing, we're already seeing it on GCP with BigLake and so on. So I actually think it's the, how do I reduce my costs, and the lake house integrates those two. So that's one of the main ways you can rationalize and simplify. You get the lake house, where the name itself is a (indistinct) of two things, right? Lake house: "lake" gives you the AI, "house" gives you the database, the data warehouse. So you get your AI and you get your data warehousing in one place, at a lower cost. So for me, the bumper sticker is lake house, you know, 24. >> All right. Awesome, Ali, well thanks for the exclusive interview. Appreciate it and great to see you. Congratulations on your success and I know you guys are going to be fine. >> Awesome. Thank you John. It's always a pleasure. >> Always great to chat with you again. >> Likewise. >> You guys are a great team. 
We're big fans of what you guys have done. We think you're an example of what we call "super cloud." Which is getting the hype up and again your paper speaks to some of the innovation, which I agree with by the way. I think that that approach of not forcing standards is really smart. And I think that's absolutely correct, that having the market still innovate is going to be key. standards with- >> Yeah, I love it. We're big fans too, you know, you're doing awesome work. We'd love to continue the partnership. >> So, great, great Ali, thanks. >> Take care (outro music)
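To ground the "lake house" idea that runs through this conversation, here is a minimal sketch of the pattern Ali describes: one copy of the data in an open format, with warehouse-style SQL and data science tooling pointed at the same files. It assumes a PySpark session with the Delta Lake (delta-spark) package available; the bucket path, table, and column names are hypothetical, not anything Databricks or AWS ships.

```python
# Minimal lake house sketch. Assumptions: pyspark and delta-spark are installed,
# and an "events" dataset with user_id / amount fields exists -- all names here
# are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    # Standard delta-spark session configuration.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# One copy of the data, in an open format, on object storage (the "lake").
raw = spark.read.json("s3a://my-bucket/raw/events/")  # hypothetical path
raw.write.format("delta").mode("overwrite").save("s3a://my-bucket/delta/events")

# Warehouse-style SQL over the same files (the "house").
spark.sql(
    "CREATE TABLE IF NOT EXISTS events USING DELTA "
    "LOCATION 's3a://my-bucket/delta/events'"
)
spark.sql(
    "SELECT user_id, SUM(amount) AS total "
    "FROM events GROUP BY user_id ORDER BY total DESC LIMIT 10"
).show()

# The same table feeds data science / ML work without a second copy.
features = spark.read.format("delta").load("s3a://my-bucket/delta/events").toPandas()
```

The design choice the sketch illustrates is the one Ali keeps returning to: rather than maintaining a separate warehouse and a separate lake and keeping them in sync, SQL, analytics, and machine learning all read the same copy of the data.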

Published Date : Nov 23 2022

Alex Ellis, OpenFaaS | KubeCon + CloudNativeCon Europe 2022


 

(upbeat music) >> Announcer: TheCUBE presents KubeCon and CloudNativeCon Europe, 2022. Brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. >> Welcome to Valencia, Spain, a KubeCon, CloudNativeCon Europe, 2022. I'm your host, Keith Townsend alongside Paul Gillon, Senior Editor, Enterprise Architecture for SiliconANGLE. We are, I think at the half point way point this to be fair we've talked to a lot of folks in open source in general. What's the difference between open source communities and these closed source communities that we attend so so much? >> Well open source is just it's that it's open it's anybody can contribute. There are a set of rules that manage how your contributions are reflected in the code base. What has to be shared, what you can keep to yourself but the it's an entirely different vibe. You know, you go to a conventional conference where there's a lot of proprietary being sold and it's all about cash. It's all about money changing hands. It's all about doing the deal. And open source conferences I think are more, they're more transparent and yeah money changes hands, but it seems like the objective of the interaction is not to consummate a deal to the degree that it is at a more conventional computer conference. >> And I think that can create an uneven side effect. And we're going to talk about that a little bit with, honestly a friend of mine Alex Ellis, founder of OpenFaaS. Alex welcome back to the program. >> Thank you, good to see Keith. >> So how long you've been doing OpenFaaS? >> Well, I first had this idea that serverless and function should be run on your own hardware back in 2016. >> Wow and I remember seeing you at DockerCon EU, was that in 2017? >> Yeah, I think that's when we first met and Simon Foskett took us out to dinner and we got chatting. And I just remember you went back to your hotel room after the presentation. You just had your iPhone out and your headphones you were talking about how you tried to OpenWhisk and really struggled with it and OpenFaaS sort of got you where you needed to be to sort of get some value out of the solution. >> And I think that's the magic of these open source communities in open source conferences that you can try stuff, you can struggle with it, come to a conference either get some advice or go in another direction and try something like a OpenFaaS. But we're going to talk about the business perspective. >> Yeah. >> Give us some, like give us some hero numbers from the project. What types of organizations are using OpenFaaS and what are like the download and stars all those, the ways you guys measure project success. >> So there's a few ways that you hear this talked about at KubeCon specifically. And one of the metrics that you hear the most often is GitHub stars. Now a GitHub star means that somebody with their laptop like yourself has heard of a project or seen it on their phone and clicked a button that's it. There's not really an indication of adoption but of interest. And that might be fleeting and a blog post you might publish you might bump that up by 2000. And so OpenFaaS quite quickly got a lot of stars which encouraged me to go on and do more with it. And it's now just crossed 30,000 across the whole organization of about 40 different open source repositories. >> Wow that is a number. >> Now you are in ecosystem where Knative is also taken off. And can you distinguish your approach to serverless or FaaS to Knatives? >> Yes so, Knative isn't an approach to FaaS. 
That's simply put, and if you listen to Ville Aikas from the Knative project, he was working inside Google and wished that Kubernetes would do a little bit more than what it did. And so he started an initiative with some others to start bringing in more abstractions like auto scaling and revision management, so you can have two versions of code and shift traffic around. And that's really what they're trying to do, is add onto Kubernetes and make it do some of the things that a platform might do. Now OpenFaaS started from a different angle and, frankly, two years earlier. >> There was no Kubernetes when you started it. >> It kind of led in the space and built out that ecosystem. So the idea was, I was working with Lambda and AWS Alexa skills. I wanted to run them on my own hardware and I couldn't. And so OpenFaaS from the beginning started from that developer experience of: here's my code, run it for me. Knative is a set of extensions that may be a building block, but you're still pretty much working with Kubernetes. We get calls that come through. And actually recently, I can't tell you who they are, but there's a very large telecommunications provider in the US that was using OpenFaaS, like yourself heard of Knative, and in the hype they switched. And then they switched back again recently to OpenFaaS, and they've come to us for quite a large commercial deal. >> So did they find Knative to be more restrictive? >> No, it's the opposite. It's a lot less opinionated. It's more like building blocks and you are dealing with a lot more detail. It's a much bigger system to manage, but don't get me wrong. I mean the guys are very friendly. They have their sort of use cases that they pursue. Google's now donated the project to CNCF. And so they're running it that way. Now it doesn't mean that there aren't FaaS on top of it. Red Hat have a serverless product, VMware have one. But OpenFaaS, because it owns the whole stack, can get you something that's always been very lean, simple to use, to the point that Keith in his hotel room installed it and was productive with it in an evening without having to be a Kubernetes expert. 
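As a rough illustration of the "here's my code, run it for me" experience Alex describes, this is roughly what a handler looks like in an OpenFaaS python3 template; the function name and response text are made up, and the exact template layout can vary between OpenFaaS releases.

```python
# handler.py -- an OpenFaaS-style Python function in its simplest form.
# The python3 templates call handle() with the raw request body and send
# back whatever the function returns as the HTTP response.
def handle(req):
    """Echo a greeting for the supplied name (hypothetical example)."""
    name = req.strip() or "world"
    return f"Hello, {name}!"
```

From there the workflow is a handful of CLI calls, along the lines of faas-cli new --lang python3 hello, faas-cli up, and a curl against the gateway; sample functions from the store, like the ones Keith mentions later, can be pulled with faas-cli store deploy. Treat the exact flags as approximate, since they shift between versions.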
The main issue you have with open source is you don't have like the commercial software you talked about, the relationships. They don't tell you they're using it until it breaks. And then they may come in incognito with a personal email address asking for things. What they don't want to do often is lend their brands or support you. And so it is a big challenge. However, early on, when I met you, BT, live person the University of Washington, and a bunch of other companies had told us they were using it. We were having discussions with them took them to Kubecon and did talks with them. You can go and look at them in the video player. However, when I left my job in 2019 to work on this full time I went to them and I said, you know, use it in production it's useful for you. We've done a talk, we really understand the business value of how it saves you time. I haven't got a way to fund it and it won't exist unless you help they were like sucks to be you. >> Wow that's brutal. So, okay let me get this right. I remember the story 2019, you leave your job. You say I'm going to do OpenFaaS and support this project 100% of your time. If there's no one contributing to the project from a financial perspective how do you make money? I've always pitched open source because you're the first person that I've met that ran an open source project. And I always pitched them people like you who work on it on their side time. But they're not the Knatives of the world, the SDOs, they have full time developers. Sponsored by Google and Microsoft, etc. If you're not sponsored how do you make money off of open source? >> If this is the million dollar question, really? How do you make money from something that is completely free? Where all of the value has already been captured by a company and they have no incentive to support you build a relationship or send you money in any way. >> And no one has really figured it out. Arguably Red Hat is the only one that's pulled it off. >> Well, people do refer to Red Hat and they say the Red Hat model but I think that was a one off. And we quite, we can kind of agree about that in a business. However, I eventually accepted the fact that companies don't pay for something they can get for free. It took me a very long time to get around that because you know, with open source enthusiast built a huge community around this project, almost 400 people have contributed code to it over the years. And we have had full-time people working on it on and off. And there's some people who really support it in their working hours or at home on the weekends. But no, I had to really think, right, what am I going to offer? And to begin with it would support existing customers weren't interested. They're not really customers because they're consuming it as a project. So I needed to create a product because we understand we buy products. Initially I just couldn't find the right customers. And so many times I thought about giving up, leaving it behind, my family would've supported me with that as well. And they would've known exactly why even you would've done. And so what I started to do was offer my insights as a community leader, as a maintainer to companies like we've got here. So Casting one of my customers, CSIG one of my customers, Rancher R, DigitalOcean, a lot of the vendors you see here. 
And I was able to get a significant amount of money by lending my expertise and writing content that gave me enough buffer to give the doctors time to realize that maybe they do need support and go a bit further into production. And over the last 12 months, we've been signing six figure deals with existing users and new users alike in enterprise. >> For support >> For support, for licensing of new features that are close source and for consulting. >> So you have proprietary extensions. Also that are sort of enterprise class. Right and then also the consulting business, the support business which is a proven business model that has worked >> Is a proven business model. What it's not a proven business model is if you work hard enough, you deserve to be rewarded. >> Mmh. >> You have to go with the system. Winter comes after autumn. Summer comes after spring and you, it's no point saying why is it like that? That's the way it is. And if you go with it, you can benefit from it. And that's what the realization I had as much as I didn't want to do it. >> So you know this community, well you know there's other project founders out here thinking about making the leap. If you're giving advice to a project founder and they're thinking about making this leap, you know quitting their job and becoming the next Alex. And I think this is the perception that the misperception out there. >> Yes. >> You're, you're well known. There's a difference between being well known and well compensated. >> Yeah. >> What advice would you give those founders >> To be. >> Before they make the leap to say you know what I'm going to do my project full time. I'm going to lean on the generosity of the community. So there are some generous people in the community. You've done some really interesting things for individual like contributions etc but that's not enough. >> So look, I mean really you have to go back to the MBA mindset. What problem are you trying to solve? Who is your target customer? What do they care about? What do they eat and drink? When do they go to sleep? You really need to know who this is for. And then customize a journey for them so that they can come to you. And you need some way initially of funneling those people in qualifying them because not everybody that comes to a student or somebody doing a PhD is not your customer. >> Right, right. >> You need to understand sales. You need to understand a lot about business but you can work it out on your way. You know, I'm testament to that. And once you have people you then need something to sell them that might meet their needs and be prepared to tell them that what you've got isn't right for them. 'cause sometimes that's the one thing that will build integrity. >> That's very hard for community leaders. It's very hard for community leaders to say, no >> Absolutely so how do you help them over that hump? I think of what you've done. >> So you have to set some boundaries because as an open source developer and maintainer you want to help everybody that's there regardless. And I think for me it was taking some of the open source features that companies used not releasing them anymore in the open source edition, putting them into the paid developing new features based on what feedback we'd had, offering support as well but also understanding what is support. What do you need to offer? You may think you need a one hour SLA for a fix probably turns out that you could sell a three day response time or one day response time. 
And some people would want that and see value in it. But you're not going to know until you talk to your customers. >> I want to ask you, because this has been a particular interest of mine. It seems like managed services have been kind of the lifeline for pure open source companies. Enabling these companies to maintain their open source roots, but still have a revenue stream of delivering as a service. Is that a business model option you've looked at? >> There's three business models perhaps that are prevalent. One is OpenCore, which is roughly what I'm following. >> Right. >> Then there is SaaS, which is what you understand and then there's support on pure open source. So that's more like what Rancher does. Now if you think of a company like Buoyant that produces Linkerd they do a bit of both. So they don't have any close source pieces yet but they can host it for you or you can host it and they'll support you. And so I think if there's a way that you can put your product into a SaaS that makes it easier for them to run then you know go for it. However, we've OpenFaaS, remember what is the core problem we are solving, portability So why lock into my cloud? >> Take that option off the table, go ahead. >> It's been a long journey and I've been a fan since your start. I've seen the bumps and bruises and the scars get made. If you're open source leader and you're thinking about becoming as famous as Alex, hey you can do that, you can put in all the work become famous but if you want to make a living, solve a problem, understand what people are willing to pay for that problem and go out and sell it. Valuable lessons here on theCUBE. From Valencia, Spain I'm Keith Townsend along with Paul Gillon and you're watching theCUBE the leader in high-tech coverage. (Upbeat music)

Published Date : May 19 2022

Constance Caramanolis, Splunk | KubeCon + CloudNativeCon Europe 2020 - Virtual


 

>> Narrator: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hi I'm Stu Miniman and this is theCUBE's coverage of KubeCon, CloudNativeCon the 2020 European show of course happening virtually and that has put some unique challenges for the people running the show, really happy to welcome to the program she is one of the co-chairs of this event, and she is also a Principal Software Engineer at Splunk, Constance Caramanolis thank you so much for joining us. >> Hi, thank you for having me, I'm really excited to be here, it's definitely an interesting time. >> Alright, so Constance we know KubeCon it's a great community, robust everybody loves to get together there's some really interesting hallway conversations and so much going on, we've been watching, the four or five years we've been doing theCUBE at this show, just huge explosion of the breadth and depth of the content and of course, great people there. Just, if we could start with a little bit, your background, as I mentioned you're the co-chair, you work for Splunk by way of an acquisition, of Omnition try saying that three times fast, and Omnition you were telling me is a company that was bought really before it came out of stealth, but when it comes to the community itself, how long have you been involved in this community? What kind of led you to being co-chair? >> Yeah, I guess I've been involved with the community since 2017, so, I was at Lyft before Omnition Splunk, and I was lucky enough to be one of the first engineers, on Envoy you might've heard of Envoy, sorry I laugh at my own jokes. (laughing) Like my first exposure to KubeCon and seeing the CNCF community was KubeCon Austin and the thing that I was amazed by was actually you said it the hallway tracks, right? I would just see someone and be like, "Hey, like, I think I've seen your code review can I say hi?" And that started back on me at least a little bit involved in terms of talking to more people then they needed people I would work on a PR or in some of the community meetings and that was my first exposure to the community. And so I was involved in Envoy pretty actively involved in Envoy all the way until from 2016 until mid 2018 and then I switched projects and turning it left and did some other stuff and I came back into CNCF community, in OpenTelemetry as of last year, actually almost exactly a year ago now to work on making tracing, I'm going to say useful and the reason why I say useful is that usually people think of tracing as, not as important as metrics and logs, but there is so much to tracing that we tend to undervalue and that's why I got involved with OpenTelemetry and Omnition, because there's some really interesting ways that you could view tracing, use tracing, and you could answer a lot of questions that we have in our day-to-day and so that's kind of that's how I got involved in the second-round community and then ended up getting nominated to be on the co-chair and I obviously said yes, because this is an amazing opportunity to meet more people and have more of that hallway track. >> Alright, so definitely want to talk about OpenTracing, but let's talk about the event first, as we were talking about. >> Yeah. >> That community you always love the speakers, when they finish a session, they get mobbed by people doing questions. 
When you walk through the expo hall, you go see people so give us a little bit of insight as to how we're trying to replicate that experience, make sure that there's I don't know office hours for the speakers and just places and spaces for people to connect and meet people. >> Yeah, so I will say that like, part of the challenge with KubeCone EU was that it had already been meant to be an in person event and so we're changing it to virtual, isn't going to be as smooth as a KubeCon or we have the China event that's happening in a few weeks or at Boston, right that's still going on, like, those ones are being thought out a lot more as a proper virtual event. So a little bit of the awkwardness of, now everything is going to be online, right? It's like you can't actually shake someone's hand in a hallway but we are definitely trying to be cognizant of when I'm in terms of future load, like probably less content, right. It's harder to sit in front of a screen and listen to everything and so we know that we know we have enough bandwidth we're trying to find, different pieces of software that allow for better Q and A, right? Exactly, like the mobbing after session is go in as a speaker and one as attendee is sometimes like the best part about conferences is you get to like someone might've said something like, "Hey, like this little tidbit "I need to ask you more questions about this." So we're providing software to at least make that as smooth, and I'm putting this in quotation and as you'll be able to tell anyone who's watching as I speak with my hands. Right, so we're definitely trying to provide software to at least make that initial interaction as smooth as possible, maybe as easy as possible we know it's probably going to be a little bit bumpy just because I think it's also our first time, like everyone, every conference is facing this issue so it's going to be really interesting to see how the conference software evolves. It is things that we've talked about in terms of maybe offering their office hours, for that it's still something that like, I think it's going to be really just an open question for all of us, is that how do we maintain that community? And I think maybe we were talking or kind of when I was like planting the seed of a topic beforehand, it's like it's something I think that matters like, how do we actually define community? 'Cause so much of it has been defined off that hallway track or bumping into someone, right? And going into someone's booth and be like, like asking that question there, because it is a lot more less intimidating to ask something in person than is to ask it online when everyone gets to hear your question, right. I know I ask less questions online, I guess maybe one thing I want to say is that for now that am thinking about it is like, if you have a question please ask questions, right? If recording is done, if there's a recording for a talk, the speakers are usually made available online during the session or a bit afterwards, so please ask your questions when things come up, because that's going to be a really good way to, at least have a bit of that question there. 
And also don't be shy, please, even when I say like in terms of like, when it comes to review, code reviews, but if something's unintuitive or let's say, think about something else, like interact with it, say it or even ask that question on Twitter, if you're brave enough, I wouldn't but I also barely use Twitter, yeah I don't know it's a big open question I don't know what the community is going to look like and if it's going to be harder. >> Yeah, well, one of the things I know every, every time I go to the show conferences, when the keynote when it's always like, okay, "How many people is this your first time at the show?" And you look around and it's somewhere, third or half people attending for the first time. >> Yeah. I know I'm trying to remember if it was year and a half ago, or so there was created a kind of one-on-one track at the show to really help onboard and give people into the show because when the show started out, it was like okay, it was Kubernetes and a couple of other things now you've got the graduated, the incubated, the dozens of sandbox projects out there and then even more projects out there so, cloud-native is quite a broad topic, there is no wrong way where you can start and there's so many paths that you can go on. So any tips or things that we're doing this time, to kind of help broaden and welcome in those new participants? >> Yeah so there's two things, one is actually the one to attract is official for a KubeCon EU so we do have like, there's a few good talks in terms of like, how to approach KubeCon it was meant to originally be for a person but at least helping people in terms of general terms, right? 'Cause sometimes there's so much terminology that it feels like you need to carry, cloud-native dictionary around with you, doing that and giving suggestions there, so that's one of the first talks that's going to be able to watch on KubeCon so I highly suggest that, This is actually a really tough question because a lot of it would have been like, I guess it would have been for me, would have been in person be like, don't be afraid to like, if you see someone that, said something really interesting in a talk you attended, like, even if it's not after the question, just be like, "Hey, I thought what you said was really cool "and I just want to say I appreciate your work." Like expressing that appreciation and just even if it isn't like the most thoughtful question in the world just saying thank you or I appreciate you as a really good way to open things up because the people who are speaking are just as well most people are probably just as scared of going up there and sharing their knowledge as probably or of asking a question. So I think the main takeaway from that is don't be shy, like maybe do a nervous dance to get those jitters out and then after (laughing) and then ask that question or say like, thank you it's really nice to meet you. 
It's harder to have a virtual coffee, so hopefully they have their own teapot or coffee maker beside them, but offered you that, send an email I think, one thing that is very common and I have a hard time with this is that it's easy to get overwhelmed with how much content there is or you said it's just like, I first feel small and at least if everyone is focusing on Kubernetes, especially like a few years ago, at least and you're like, maybe that there are a lot of people who are really advanced but now that there's so many different people like so many people from all range of expertise in this subject matter experts, and interests that it's okay to be overwhelmed just be like, I need to take a step back because mentally attending like a few talks a day is like, I feel like it's taking like several exams 'cause there's so much information being bombarded on you and you're trying to process it so understand that you can't process it all in one day and that's okay, come back to it, right. It's a great thing is that all of these talks are recorded and so you can watch it another time, and I would say probably just choose like three or four talks that you're really excited about and listen to those, don't need to watch everything because as I said we can't process it all and that's okay and ask questions. >> Some great advice there because right, if we were there in person it was always, attend what you really want to see, are there speakers you want to engage with? Because you can go back and watch on demand that's been one of the great opportunities with the virtual events is you can have access on demand, you can poke and prod, personally I love that a lot of them you can adjust the speed of them so, if it's something that it's kind of an intro talk, I can crank it up to one and a half or 2X speed and get through more content or I can pause it, rewind if I'm not getting it. And the other opportunity is I tell you the last two or three years, when I'm at an event, I try to just spend my time, not looking at my phone, talking to people, but now there's the opportunity, hey, if I can be of help, if anybody in the community has a question or wants to get connected to somebody, we know a lot of people I'm easily reachable on Twitter and I'm not sitting on a plane or in the middle of something that being like, so there is just a great robust community out there, online, and it were great be a part of it. So speaking of projects, you mentioned OpenTelemetry, which is what, your day job works on it's been a really, interesting topic of course for those that don't know the history, there were actually two projects that merged, it was a OpenTracing and OpenCensus created OpenTelemetry, so why don't you bring us up to speed as to where we are with the project, and what people should be looking at at the show and throughout the rest of 2020? >> OpenTelemetry is very exciting, we just did our first beta release so for anyone who's been on the fence of, is OpenTelemetry getting traction, or is it something that you're like at, this is a really great time to want to get involved in OpenTelemetry and start looking into it, if it's as a viable project, but I guess should probably take a step back of what is OpenTelemetry, OpenTelemetry as you mentioned was the merging or the marriage of OpenTracing-OpenCensus, right? 
It was an acknowledgement that so many engineers were trying to solve the same problem, but as most of us knows, right, we are trying to solve the same problem, but we had two different implementations and we actually ended up having essentially a lot of waste of resources because we're all trying to solve the same problem, but then we're working on two different implementations. So that marriage was to address that because, right it's like if you look at all of the major players, all of the players on OpenTelemetry, right? They have a wide variety of vendor experience, right even as of speaking from the vendor hat, right vendors are really lucky that they get to work with so many customers and they get to see all these different use cases. Then there's also just so many actually end users who are using it and they have very peculiar use cases, too, even with a wide set of other people, they're not going to obviously have that, so OpenTelemetry gets to merge all of those different use cases into one, or I guess not into one, but like into a wide set of implementations, but at least it's maintained by a larger group instead of having two separate. And so the first goal was to unify tracing tracing is really far ahead in terms of implementation,, or several implementations of libraries, like Go, Java, Python, Ruby, like on other languages right now but quite a bit of lists there and there's even a collector too which some people might refer to as an agent, depending on what background they have. And so there's a lot of ways to one, implement tracing and also metrics for your services and also gather that data and manipulate it, right? 'Cause for example, tracings so tracing where it's like you can generate a lot of traces, but sometimes missing data and like the collector is a really great place to add data to that, so going back to the state of OpenTelemetry, OpenTelemetry since we just did a beta release, right, we're getting closer to GA. GA is something that we're tracking for at some point this year, no dates yet but it's something that we're really pushing towards, but we're starting to have a very stable API in terms of tracing a metric was on its way, log was all something we're wrapping up on. It is a really great opportunity to, all the different ways that we are that, we even say like service owners, applications, even business rate that we're trying to collect data and have visibility into our applications, this is a really great way to provide one common framework to generate all that data, to gather all that data and generate all that data. So it was really exciting and I don't know, we just want more users and why we say that is to the earlier point is that the more users that we have who are engaged with community, right if you want to open an issue, have a question if you want to set up a PR please do, like we really want more community engagement. It is a great time to do that because we are just starting to get traction, right? Like hopefully, hopefully in a year or two, like we are one of those really big, big projects right up on a CNCF KubeCon and it's like, let's see how much has grown. And it's a great time to join and help influence a project and so many chances for ownership, I know it's really exciting, the company-- >> Excellent well Constance, it's really exciting >> Yeah. 
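For readers who want to see what the tracing side of OpenTelemetry looks like in code, here is a small sketch using the Python SDK. The service name, span names, and attribute are invented for illustration, and because the project was still pre-GA at the time of this conversation, exact package and class names may differ between releases.

```python
# Minimal OpenTelemetry tracing sketch (Python SDK); treat it as illustrative,
# since class names have shifted slightly across pre-GA and post-GA releases.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a provider with an exporter. ConsoleSpanExporter just prints spans;
# a real deployment would export to an OpenTelemetry Collector instead.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name


def charge_card(order_id: str) -> None:
    # Each with-block creates a span; nesting gives the parent/child structure
    # that makes traces useful for answering "where did the time go?"
    with tracer.start_as_current_span("charge-card") as span:
        span.set_attribute("order.id", order_id)  # hypothetical attribute
        with tracer.start_as_current_span("call-payment-gateway"):
            pass  # the real work would happen here


charge_card("A-1001")
```

The collector Constance mentions sits between code like this and whatever backend you choose; swapping the ConsoleSpanExporter for an exporter that targets the collector is where the enrichment and routing she describes would happen.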
>> Congratulations on the progress there, I'm sure everybody's looking forward to, as you said, GA later this year. I want to give you the final word. Yourself and Vicky Cheung are the co-chairs for the event, so what's your real goal? What do you hope the takeaway is from this instance of the 2020 European show? Of course, virtual now instead of Amsterdam. >> I guess there are two parts. One, for the takeaway, is that it's probably going to be awkward, right? Especially, again, going back to the community, we don't have a lot of those in-person things, so this will be an awkward interaction, but it's a really great place for us to assess what a community means to us and how we interact with the community. So I think it's about going into it with an open mindset, of just knowing, don't set the expectations like any other KubeCon, because we know it won't be the same. We can't even have the after hours, like going out for coffee or drinks and other stuff, so be open to that being different, and also if you have ideas, share them with us, because we want to know how we can make it better. So expect that it's different, but it's still going to provide you with a lot of that content that you've been looking for, and we still want to make it as much of a welcoming experience for you as we can, so know that we're doing our best, we're open to feedback, and we're here for you. >> Excellent, well Constance, thank you so much for the work that you and the team have been doing. Absolutely one of the events that we always look forward to, thanks so much for joining us. >> Thank you for having me. >> Alright, lots more coverage of theCUBE at KubeCon + CloudNativeCon Europe 2020. I'm Stu Miniman, and thanks for watching. (soft music)

Published Date : Aug 18 2020

Steve Gordon, Red Hat | KubeCon + CloudNativeCon Europe 2020 – Virtual


 

>> Voice over: From around the globe, it's theCUBE with coverage of KubeCon and CloudNativeCon Europe 2020 virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and Ecosystem Partners. >> Hi, I'm Stu Miniman, and welcome back to theCUBE's coverage of KubeCon CloudNativeCon Europe for 2020. We get to talk to the participants in this great community and ecosystem where they are around the globe. And when you think back to the early days of containers, it was, containers, they're lightweight, they're small, they're going to obliterate virtualization, that was often the headline that we had. Of course, we know everything in IT tends to be additive. And here we are in 2020, and containers and virtual machines are living side by side, and often we'll see the back and forth that happens when we talk about virtualization and containers. To talk about that topic specifically, I'm happy to welcome to the program, first time guest, Steve Gordon. He's the director of product management at Red Hat. Steve, thanks so much for joining us. >> Thanks so much Stu, it's great to be here. >> All right, as I teed up, of course, virtualization was a wave that swept through the data center. It is a major piece, not only of what's in the data center, but even if you look at the public Clouds, often it was virtualization underneath there. Certain companies like Google, of course, really drove container adoption. And often when you hear people talk about, I built something CloudNative, that underlying piece of being containerized and then using an orchestration layer like Kubernetes is what they talk about. So maybe stop for a sec, Red Hat of course, heavily involved in virtualization and containers, how do you see that landscape, and what's the general conversation you have with customers as to how they make the choice and how the lines blur between those worlds? >> Yeah, so at Red Hat, I think we've been working on certainly the current iteration of virtualization with KVM for around 12 years, and myself a large portion of that. I think one thing that's always been constant is, while from the outside-in virtualization looks like it's been a fairly stable marketplace, it's always changing, it's always evolving. And what we're seeing right now is, as people are adopting containers, and even constructs built on top of containers, into their workflows, there is more interest and more desire around how can I combine these things, recognizing that still an enormous percentage of my workloads are out there running in virtual machines today, but I'm building new things around them that need to be able to interact with them and springboard off of that. So I think for the last couple of years, I'm sure you yourself have seen a number of different projects pop up in the opensource community around this intersection of containers and virtualization and how these technologies can complement each other. And certainly KubeVirt is one of the projects that we've started in this space, in reaction to both that general interest, but also the real customer problems that people have as they try and meld these two worlds. >> So Steve, at Red Hat Summit earlier this year, there was a lot of talk around container native virtualization. If you could just explain what that means, how that might be different from just virtualization in general, and we'll go from there. >> Sure, so back in, I think early 2017, late 2016, we started playing around with this idea.
We'd already seen the momentum around Kubernetes and, as a result, the way we architected OpenShift 3 at the time around Kubernetes. Kubernetes has this strength as an orchestration platform, but also as a shared provider of resources like storage, networking, et cetera. And we were really thinking about, when we look at virtualization and containers, some of these problems are very common regardless of what footprint the workload happens to fit into. So leveraging that strength of Kubernetes as an orchestration platform, we started looking at, what would it look like to orchestrate virtual machines on that same platform right next to our application containers? And the extension of that, the KubeVirt project, and what has ultimately become OpenShift virtualization, is based around that core idea of how can I make a traditional virtual machine, a full operating system, interact with and look exactly like a Kubernetes native construct that I can use from the same platform? I can manage it using the same constructs, I can interact with it using the same console, all of these kinds of ideas. And then on top of that, not just bring in workloads as they are, but enable really powerful workflows for people who are building a new application in containers that still needs some backend components, say a database that's sitting in a VM, or who are trying to integrate those virtual machines into new constructs, whether it's something like a pipeline or a service mesh. We're hearing a lot of questions around those things these days, where people don't want to just apply those things to brand new workloads, but figure out how they apply those constructs to the broader majority of their fleet of workloads that exist today. >> All right, so I believe back at Red Hat Summit, OpenShift virtualization was in beta. Where does the product or solution set stand today? >> Right, so at this year's KubeCon, we're happy to announce that OpenShift virtualization is moving to general availability. So it will be a fully supported part of OpenShift. And what that means is, you, as a subscriber to OpenShift, the platform, get virtualization as just an additional capability of that platform that you can enable as an operator from the operator hub, which is really a powerful thing for admins to be able to do. But it is also just really powerful in terms of the user experience. Once that operator is enabled on your cluster, the little tab shows up that shows you can now go and create a virtual machine. But you also still get all of the metrics and the shared networking and so on that go with that cluster, that underlie it all. And you can again do some really powerful things in terms of combining those constructs for both virtual machines and containers. >> When you talk about that line between virtualization and containers, a big question is, what does this mean for developers? How is it different from what they were using before? How do they engage and interact with their infrastructure today? >> Sure, so I think the way a lot of this current wave of technology got started for people, whether it was with Kubernetes or Docker before that, was that the easiest way they could grab compute capacity was to go to their virtual machine farm, whether that was their local virtualization estate at their company, or whether that was taking a credit card to a public Cloud, getting a virtual machine, and spinning up a container platform on top of that.
What we're now seeing, as that transitions into people building their workloads almost entirely around these container constructs, in some cases starting from scratch, is more interest in: how do I leverage that platform directly? How do I, as an application group, have more control over that platform? And in some cases, depending on the use case, like if they have demand for GPUs, for example, or other high-performance devices, there's that question of whether the virtualization layer between my physical host and my container is adding that much value. But then they still want to bring in the traditional workloads they have as well. So I think we've seen this gradual transition where there is a growing interest in reevaluating, how do we start with container based architectures? And then, as we transition towards more production scenarios and the growth in production scenarios, what tweaks do we make to that architecture? Does it still make sense to run all of that on top of virtual machines? Or does it make more sense to almost flip that equation as my workload mix gradually starts changing? >> Yeah, two thoughts come to mind on that. Number one is, are there specific applications out there, and I think about traditional VMs, often the Windows environments that we have there, is that some of the use case, to bring them over to containers? And then also, once I've gotten into the container environment, what are the steps to move forward? Because I have to expect that there's going to be some refactoring, some modernization, to take advantage of the innovation and pace of change, not just take it, containerize it, and leave it. >> Yeah, so certainly, there is an enormous amount of potential out there in terms of Windows workloads, and people are definitely trying to work out how they leverage those workloads in the context of an OpenShift and Kubernetes based environment. And Windows containers, obviously, is one way to address that. And certainly, that is very powerful in and of itself for bringing those workloads to OpenShift and Kubernetes, but it does have some constraints in terms of needing to be on a relatively recent version of Windows Server and so on for those workloads to run in that construct. So where OpenShift virtualization helps with that is we can actually take an existing virtual machine workload, bring that across, even if it's say Windows Server 2012, and run it on top of the OpenShift virtualization platform as a VM. And then if or when you start modernizing more of that application, you can start teasing that out into actual containers. And that's actually something, one of our very early demos at Red Hat Summit 2018, I think, was how you would go about doing that, and primarily we did that because it is a very powerful thing for customers to see how they can bring all those applications into this mix. The other aspect of that I'll mention is one of our financial services customers who we've been working with basically since that demo. They saw it from a hallway at Red Hat Summit and came and said, "Hey, we want to talk to you guys about that." One of their primary workloads is a Windows 10 style environment that they happen to be bringing in as well. And that's more in that construct of treating OpenShift almost as a pool of compute, which you can use for many different workload types, with Windows 10 being just one aspect of that.
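To make the idea of a virtual machine as a Kubernetes-native construct a bit more concrete, here is a minimal sketch that defines and creates a KubeVirt VirtualMachine object with the Python Kubernetes client. The names, namespace, demo disk image, and API version are illustrative assumptions rather than anything from the interview, and the exact fields vary across KubeVirt and OpenShift Virtualization releases.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

# A pared-down VirtualMachine definition; field names follow the upstream
# KubeVirt API, but verify them against the version shipped with your cluster.
vm = {
    "apiVersion": "kubevirt.io/v1alpha3",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-db", "namespace": "demo"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # demo image; a real migration would import the existing VM disk instead
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1alpha3",
    namespace="demo", plural="virtualmachines", body=vm,
)
```

The point of the sketch is that the virtual machine is just another API object: it can be listed, labeled, monitored, and wired into pipelines with the same tooling used for pods.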
And the other thing I'll say, in terms of the second part of the question, what do I need to do in terms of refactoring? We are very conscious of the fact that, if this is to provide value, you have to be able to bring in existing virtual machines with as minimal change as possible. So we do have a migration solution set that we've had for a number of years for bringing virtual machines to Linux virtualization stacks. We're expanding that to include OpenShift virtualization as a target, to help you bring in those existing virtual machine images. Where things do change a little bit is in terms of the operational approaches. Obviously, the admin console for those virtual machines is now OpenShift, and that does present a change right now. But we think it is a very powerful opportunity in terms of, as people get more and more production workloads into containers, it's going to become a lot more appealing to have a backup solution, for example, that can cater to both the virtual machine workloads as well as any stateful container workloads you may have, which do exist in increasing numbers. >> Well, I'm glad you brought up a stateful discussion, because as an industry we've spent a long time making sure that virtual machines have storage and networking that is reliable and performant. What should customers and operators be thinking about when they move to containers? Are there things that are different to manage, given this brings them into the OpenShift management plane? What else should I be thinking about? What do I need to do differently when I've embraced this? >> Yeah, so I think in terms of the things that a virtual machine expects, the two big ones that come to mind for me are networking and storage. The compute piece is still there obviously, but I think it is a little less complicated to solve, just because the OpenShift and broader Kubernetes community have done such a great job of addressing that piece, and that's really what attracted us to it in the first place. But on the networking side, certainly the expectations of a traditional virtual machine are a little bit different to the networking model of Kubernetes by default. But again, we've seen a lot of growth in container based applications, particularly in the context of CloudNative network functions, that have been pushing the boundaries of Kubernetes networking as well. That's resulted in projects like Multus, which allow us to give a virtual machine the kind of networking interface it expects, but also give it the option of using the pod networking natively, for some of those more powerful constructs that are native to Kubernetes. So that's one of those areas where you've got a mix of options, depending on how far you want to go from a modernization perspective, versus do I just want to bring this workload in and run it as it is, with my modernization built more around it in terms of the other container based things.
Then similarly in storage, it's an area where obviously at Red Hat we've been working closely with the OpenShift container storage team, but we also work with a number of ecosystem partners on not just how we certify their storage plugins and make sure they work well both for containers and virtual machines, but also how we push forward upstream efforts around things like the container storage interface specification, to allow for more powerful capabilities like snapshots, cloning, and so on, which we need for virtual machines but which are also very valuable for container based workloads as well. >> Steve, you've mentioned some of the reasons why customers were moving towards this environment. Now that you're GA, what learnings did you have during beta? Are there any other customer stories you could share that you've learned along this journey? >> Yeah, so one of the things I'll say is that there's no feedback like direct product-in-the-hands-of-customers feedback. And it's really been interesting to see the different ways that people have applied it, not necessarily having set out to apply it, but having gotten partway through their journey and realized, hey, I need this capability, you have something that looks pretty handy, and then having success with it. So in particular, in the telecommunications vertical, we've been working closely with a number of providers around the 5G rollouts and the 5G core in particular, where they've been focused on CloudNative network functions. And really what I mean by that is, the wave of technology and the push they're making around 5G is to take what they started with network function virtualization a step further, and build that next generation network around CloudNative technologies, including Kubernetes and OpenShift. And as they've been doing that, they have been finding that some of the vendors are more or less prepared for that transition. And that's where, while they've been able to leverage the power of containers for those applications that are ready, they're also able to leverage OpenShift virtualization as a transitionary step as they modernize the pieces that are taking a little bit longer. And that's where we've been able to run some applications, in terms of the load balancer, in terms of a carrier grade database, on top of OpenShift virtualization, which we probably wouldn't have set out to do this early in terms of our plan, but we were really able to react quickly to that customer demand and help them get that across the line. And I think that's a really powerful example where the end state may not necessarily be to run everything as a virtual machine forever, but they were still able to leverage this technology as a powerful tool in the context of a broader modernization effort. >> All right, well, Steve, thank you so much for giving us the updates. Congratulations on going GA with this solution. Definitely look forward to hearing more from the customers as they come. >> All right, thanks so much Stu. I appreciate it. >> All right, stay tuned for more coverage of KubeCon CloudNativeCon EU 2020, the virtual edition. I'm Stu Miniman. And thank you for watching theCUBE. (upbeat music)
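As a footnote to the storage discussion above, the snapshot capability in the container storage interface that Steve mentions surfaces in Kubernetes as a VolumeSnapshot object. Here is a minimal sketch, again with the Python Kubernetes client; the snapshot class and claim names are hypothetical, and the API group was still in beta around the time of this interview, so adjust the version for your cluster.

```python
from kubernetes import client, config

config.load_kube_config()

# Request a CSI snapshot of an existing PersistentVolumeClaim.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "rootdisk-snap", "namespace": "demo"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",  # hypothetical snapshot class
        "source": {"persistentVolumeClaimName": "legacy-db-rootdisk"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io", version="v1beta1",
    namespace="demo", plural="volumesnapshots", body=snapshot,
)
```

The same object works whether the claim backs a container or a virtual machine disk, which is why pushing these capabilities into CSI benefits both kinds of workloads.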

Published Date : Aug 18 2020


VMware 2019 Preview & 10 Year Reflection


 

>> From the Silicon Angle Media office in Boston Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. (upbeat music) >> Hello everybody, this is Dave Vellante with Stu Miniman and we're going to take a look back at ten years of theCUBE at VMworld and look forward to see what's coming next. So, as I say, this is theCUBE's 10th year at VMworld, that's VMworld, of course, 2019. And Stu, if you think about the VMware of 2010, when we first started, it's a dramatically different VMware today. Let's look back at 2010. Paul Maritz was running VMware, he set forth the vision of the software mainframe last decade, well, what does that mean, software mainframe? Highly integrated hardware and software that can run any workload, any application. That is the gauntlet that Tucci and Maritz laid down. A lot of people were skeptical. Fast forward 10 years, they've actually achieved that, I mean, essentially, it is the standard operating system, if you will, in the data center, but there's a lot more to the story. But you remember, at the time, Stu, it was a very complex environment. When something went wrong, you needed guys with lab coats to come in and figure out, you know, what was going on, the I/O blender problem, storage was a real bottleneck. So let's talk about that. >> Yeah, Dave, so much. First of all, hard to believe, 10 years, you know, think back to 2010, it was my first time being at VMworld, even though I started working with VMware back in 2002 when it was like, you know, a 100, 150 person company. Remember when vMotion first launched. But that first show that we went to, Dave, was in San Francisco, and most people didn't know theCUBE, heck, we were still figuring out exactly what theCUBE would be, and we brought in a bunch of our friends that were doing the CloudCamps in Silicon Valley, and we were talking about cloud. And there was this gap that we saw between, as you said, the challenges we were solving with VMware, which was fixing infrastructure, storage and networking had been broken, and how were we going to make sure that that worked in a virtual environment even better? But there were the early thought leaders that were talking about that future of cloud computing, which, today in 2019, looks like we had a good prediction. And, of course, where VMware is today, we're talking all about cloud. So, so many different eras and pieces and research that we did, you know, hundreds and hundreds of interviews that we've done at that show, it's definitely been one of our flagship shows and one of our favorites for guests and ecosystems and so much that we got to dig into at that event. >> So Tod Nielsen, who was the President and probably COO at the time, talked about the ecosystem. For every dollar spent on a VMware license, $15 was spent on the ecosystem. Even though they were owned by EMC, VMware was very, sort of, neutral to the ecosystem. You had what we called the storage cartel. It was certainly EMC, you know, but NetApp was right there, IBM, HP, you know, Dell had purchased EqualLogic, HDS was kind of there as well. These companies were the first to get the APIs, you remember, VASA and VAAI. So, we pushed VMware at the time, saying, "Look, you guys got a storage problem." And they said, "Well, we don't have a lot of resources, "we're going to let the ecosystem solve the problem, "here's an API, you guys figure it out." Which they largely did, but it took a long time. The other big thing you had in that 2010 timeframe was storage consolidation.
You had the bidding war between Dell and HP, which, ultimately, HP, under Donatelli's leadership, won that bidding war and acquired 3PAR >> Bought 3PAR >> for 2.4, 2.5 billion, and it forced Dell to buy Compellent. Subsequently, Isilon was acquired, Data Domain was acquired by EMC. So you had this consolidation of the early 2000s storage startups and then, still, storage was a major problem back then. But the big sea change was, two things happened in 2012. Pat Gelsinger took over as CEO, and VMware acquired Nicira, beat Cisco to the punch. Why did that change everything? >> Yeah, Dave, we talked a lot about storage, and how, you know, the ecosystem was changing this. Nicira, we knew it was a big deal. When I, you know, talked to my friends that were deep in networking, and I talked with Nicira, I was majorly impressed with what they were doing. But in this heterogeneous, and what now is the multi-cloud environment, networking needs to play a critical role. You see, you know, Cisco has clearly targeted that environment, and Nicira had some really smart people and some really fundamental technology underneath that would allow networking to go just beyond the virtual machine where it was before, the vSwitch. So, you know, that expansion, and actually, it took a little while for, you know, the Nicira acquisition to turn into NSX and for that product to gain maturity and adoption, but as Pat Gelsinger has said more recently, it is one of the key drivers for VMware, getting them beyond just the hypervisor itself. So, so much is happening, I mean, Dave, I look at the swings as, you know, you said, VMware didn't have enough resources, they were going to let the ecosystem do it. In the early days, it was, I chose a server provider, and, oh yeah, VMware kind of plays in it. So VMware really grew how much control and how much power they had in buying decisions, and we're going through more of that change now as they're partnering, and we're going to talk about AWS and Microsoft and Google as those pieces. And Pat driving that ship. The analogy we gave is, could Pat do for VMware what Intel had done for a long time, which is, you have a big ecosystem, and you slowly start eating away at some of that other functionality without alienating that ecosystem. And to Pat's credit, it's actually something that he's done quite well. There's been some ebbs and flows, there's pushback in the community. Those that remember things like the "vTax," when they rolled that out. You know, there's certain features that they rolled into the hypervisor that have had parts of the ecosystem gripe a little bit, but for the most part, VMware is still playing well with the ecosystem, even though, after the Dell acquisition of EMC, you know, we'll talk about this some more, that relationship between Dell and VMware is tighter than it ever was in the EMC days. >> So that led to the Software-Defined Data Center, which was the big, sort of, vision. VMware wanted to do to storage and networking what it had done to compute. And this started to set up the tension between VMware and Cisco, which, you know, lives on today. The other big mega trend, of course, was flash storage, which was coming into play. In many ways, that whole API gymnastics was a Band-Aid. But the other big piece of it is Pat Gelsinger was much more willing to integrate, you know, some of the EMC technologies, and now Dell technologies, into the VMware sort of stack.
>> Right, so Dave, you talked about all of those APIs. vVols was a huge multi-year initiative that VMware worked on, and all of the big storage players were talking about how that would allow them to deeply integrate and deliver virtualization-aware storage, or in some cases go out on their own and try to do that. But if you look at it, vVols was also what enabled VMware to do vSAN, and that is a little bit of how they can try to erode some of the storage piece, because vSAN today has the most customers in the hyperconverged infrastructure space and is continuing to grow, but they still have those storage partnerships. It didn't eliminate them, but it definitely adds some tension. >> Well it is important, because under EMC's ownership it was sort of a let 1,000 flowers bloom sort of strategy, and today you see Jeff Clarke coming in and consolidating the portfolios, saying, "Look, let's let VMware go hard with vSAN." So you're seeing a different type of governance structure, we'll talk about that. 2013 was a big year. That's the year they brought in Sanjay Poonen, they did the AirWatch acquisition, they took on what the industry called VDI, what VMware called EUC, End-User Computing. Citrix was the dominant player in that space, VMware was fumbling, frankly. Sanjay Poonen came in, the AirWatch acquisition, and now VMware is a leader in that space, so that was big. The other big thing in 2013 was, you know, the famous comment by Carl Eschenbach about, you know, if we lose to the book seller, we'll all lose. VMware came out with its cloud strategy, vCloud Air. I was there with the Wall Street analysts that day listening to Pat explain that, and we were talking afterwards to a number of the Wall Street analysts saying, "This really doesn't make a lot of sense." And then they sort of retreated on that, saying that it was going to be an accelerant, and it just was basically a failed cloud strategy. >> And Dave, that 2013 is also when they spun out Cloud Foundry and founded Pivotal. So, you know, this is where they took some of the pieces from EMC, the Greenplum piece, and they took some of the pieces from VMware, Spring and Cloud Foundry, and put those together. As we speak right now, there was just an SEC filing that VMware might suck them back in. Where I look at that, back in 2013, there was a huge gap between what VMware was doing on the infrastructure side and what Cloud Foundry was doing from the application modernization standpoint, and they had bought the Pivotal Labs piece to help people understand new programming models and everything along those lines. Today, in 2019, if you look at where VMware is going, the changes happening in containerization, the changes happening from the application down, they need to come together. The Achilles heel that I have seen from VMware for a long time is that VMware doesn't have enough of a tie to the applications, or help build the applications. Microsoft owns the applications, Oracle owns the applications. You know, there are all the ISVs that own the applications, and Pivotal, if they bring that back into VMware, it can help, but it made sense at the time to kind of spin that out because there weren't synergies between them.
And now you're seeing some more engineering, financial engineering, of having VMware essentially buy another, you know, Dell Silver Lake asset, which, you know, drove the stock price up 77% in a day that the Dow dropped 800 points. So I guess that works, kind of funny money. The other big trend sort of in that mid-part of this decade, hyperconverged, you know, really hit. Nutanix, who was at one point a strong partner of both VMware and Dell, was sort of hitting its groove swing. Fast forward to 2019, different situation, Nutanix really doesn't have a presence there. You know, people are looking at going beyond hyperconverged. So there's sort of the VMware ecosystem, sort of friendly posture has changed, they point fingers at each other. VMware says, "Well, it's Nutanix's fault." Nutanix will say it's VMware's fault. >> Right, so Dave, I pointed out, the Achilles heel for VMware might be that they don't have the closest tie to the application, but their greatest strength is, really, they are really the data center operating system, if you will. When we wrote out our research on Server SAN was before vSAN had gotten launched. It was where Nutanix, Scale Computing, SimpliVity, you know, Pivot3, and a few others were early in that space, but we stated in our research, if Microsoft and VMware get serious about that space, they can dominate. And we've seen, VMware came in strong, they do work with their partnerships. Of course, Dell, with the VxRail is their largest solution, but all of the other server providers, you know, have offerings and can put those together. And Microsoft, just last year, they kind of rebranded some of the Azure Stack as HCI and they're going strong in that space. So, absolutely, you know, strong presence in the data center platform, and that's what they're extending into their hybrid and multi-cloud offering, the VMware Cloud Solutions. >> So I want to get to some of the trends today, but just real quick, let's go through some of this. So 2015 was the big announcement in the fall where Dell was acquiring EMC, so we entered, really, the Dell era of VMware ownership in 2016. And the other piece that happened, really 2016 in the fall, but it went GA 2017, was the announcement AWS and VMware as the preferred partnership. Yes, AWS had a partnership with IBM, they've subsequently >> VMware had a partnership >> Yeah, sorry, VMware has a partnership with IBM for their cloud, subsequently VMware has done deals with Google and Microsoft, so there's, we now have entered the multi-cloud hybrid world. VMware capitulated on cloud, smart move, cleaned up its cloud strategy, cleaned that AirWatch mess. AWS also capitulated on hybrid. It's a term that they would never use, they don't use it necessarily a lot today, but they recognize that On Prem is a viable portion of the marketplace. And so now we've entered this new era of cloud, hybrid cloud, containers is the other big trend. People said, "Containers are going to really hurt VMware." You know, the jury's still out on that, VMware sort of pushes back on that. >> And Dave, just to put a point on that, you know, everybody, including us, spent a lot of time looking at this VMware Cloud on AWS partnership, and what does it mean, especially, to the parent, you know, Dell? How do they make that environment? 
And you've pointed out, Dave, that while VMware gets in those environments and gives themselves a very strong cloud strategy, AWS is the key partner, but of course, as you said, Microsoft Azure, Google Cloud, and all the server providers, we have a number of them including CenturyLink and Rackspace that they're partnering with, but we have to wait a little while before Amazon, when they announced their outpost solutions, VMware is a critical software piece, and you've got two flavors of the hardware. You can run the full AWS Stack, just like what they're running in their data center, but the alternative, of course, is VMware software running on Dell hardware. And we think that if VMware hadn't come in with a strong position with Amazon and their 600,000 customers, we're not sure that Amazon would have said, "Oh yeah, hey, you can run that same software stack "that you're running, but run some different hardware." So that's a good place for Dell to get in the environment, it helps kind of close out that story of VMware, Dell, and AWS and how the pieces fit together. >> Yeah, well so, by the way, earlier this week I privately mentioned to a Dell executive that one of the things I thought they should do was fold Pivotal into VMware. By the way, I think they should go further. I think they should look at RSA and Dell Boomi and SecureWorks, make VMware the mothership of software, and then really tie in Dell's hardware to VMware. That seems to me, Stu, the direction that they're going to try to gain an advantage on the balance of the ecosystem. I think VMware now is in a position of strength with, what, 5 or 600,000 customers. It feels like it's less ecosystem friendly than it used to be. >> Yeah, Dave, there's no doubt about it. HPE and IBM, who were two of the main companies that helped with VMware's ascendancy, do a lot of other things beyond VMware. Of course, IBM bought Red Hat, it is a key counterbalance to what VMware is doing in the multi-cloud. And Dave, to your point, absolutely, if you look at Dell's cloud strategy, they're number one offering is VMware, VMware cloud on Dell. Dell as the project dimension piece. All of these pieces do line up. I'll say, some of those pieces, absolutely, I would say, make sense to kind of pull in and shell together. I know one of the reasons they keep the security pieces at arm's length is just, you know, when something goes wrong in the security space, and it's not of the question of if, it's a question of when, they do have that arm's length to be able to keep that out and be able to remediate a little bit when something happens. >> So let's look at some of the things that we're following today. I think one of the big ones is, how will containers effect customer spending on VMware? We know people are concerned about the vTax. We also know that they're concerned about lock-in. And so, containers are this major force. Can VMware make containers a tailwind, or is it a headwind for them? >> So you look at all the acquisitions that they've made lately, Dave, CloudHealth is, from a management standpoint, in the public cloud. Heptio and Bitnami, targeting that cloud native space. Pair that with Cloud Foundry and you see, VMware and Pivotal together trying to go all-in on Kubernetes. So those 600,000 customers, VMware wants to be the group that educates you on containerization, Kubernetes, you know, how to build these new environments. For, you know, a lot of customers, it's attractive for them to just stay. 
"I have a relationship, "I have an enterprise licensing agreement, "I'm going to stay along with that." The question I would have is, if I want to do something in a modern way, is VMware really the best partner to choose from? Do they have the cost structure? A lot of these environments set up, you know, it's open source base, or I can work with my public cloud providers there, so why would I partner with VMware? Sure, they have a lot of smart people and they have expertise and we have a relationship, but what differentiates VMware, and is it worth paying for that licensing that they have, or will I look at alternatives? But as VMware grows their hybrid and multi-cloud deployments they absolutely are on the short list of, you know, strategic partners for most customers. >> The other big thing that we're watching is multi-cloud. I have said over and over that multi-cloud has largely been a symptom of multi-vendor. It's not necessarily, to date anyway, been a strategy of customers. Having said that, issues around security, governance, compliance have forced organizations and boards to say, "You know what, we need IT more involved, "let's make multi-cloud part of our strategy, "not only for governance and compliance "and making sure it adheres to the corporate edicts, "but also to put the right workload on the right cloud." So having some kind of strategy there is important. Who are the players there? Obviously VMware, I would say, right now, is the favorite because it's coming from a position of strength in the data center. Microsoft with it's software state, Cisco coming at it from a standpoint of network strength. Google, with Anthos, that announcement earlier this year, and, of course, Red Hat with IBM. Who's the company that I didn't mention in that list? >> Well, of course, you can't talk about cloud, Dave, without talking about AWS. So, as you stated before, they don't really want to talk about hybrid, hey, come on, multi-cloud, why would you do this? But any customer that has a multi-cloud environment, they've got AWS. And the VMware-AWS partnership is really interesting to watch. It will be, you know, where will Amazon grow in this environment as they find their customers are using multiple solutions? Amazon has lots of offerings to allow you leverage Kubernetes, but, for the most part, the messaging is still, "We are the best place for you, "if you do everything on us, "you're going to get better pricing "and all of these environments." But as you've said, Dave, we never get down to that homogeneous, you know, one vendor solution. It tends to be, you know, IT has always been this heterogeneous mess and you have different groups that purchase different things for different reasons, and we have not seen, yet, public cloud solving that for a lot of customers. If anything we often have many more silos in the clouds than we had in the data center before. >> Okay. Another big story that we're following, big trend, is the battle for networking. NSX, the software networking component, and then Cisco, who's got a combination of, obviously, hardware and software with ACI. You know, Stu, I got to say, Cisco a very impressive company. You know, 60+% market share, being able to hold that share for a long time. I've seen a lot of companies try to go up against Cisco. You know, the industry's littered with failures. It feels, however, like NSX is a disruptive force that's very hard for Cisco to deal with in a number of dimensions. We talked about multi-cloud, but networking in general. 
Cisco's still a major player, still, you know, owns the hardware infrastructure, obviously layering in its own software-defined strategy. But that seems to be a source of tension between the two companies. What's the customer perspective? >> Yeah, so first of all, Dave, Cisco, from a hardware perspective, is still going strong. There are some big competitors. Arista has been doing quite well into getting in, especially, a high performance, high speed environments, you know, Jayshree Ullal and that team, you know, very impressive public company that's doing quite well. >> Service providers that do really well there. >> Absolutely, but, absolutely, software is eating the world and it is impacting networking. Even when you look at Cisco's overall strategy, it is in the future. Cisco is not a networking company, they are a software company. The whole DevNet, you know, group that they have there is helping customers modernize, what we were talking about with Pivotal. Cisco is going there and helping customers create those new environments. But from a customer standpoint, they want simplicity. If my VMware is a big piece of my environment, I've probably started using NSX, NSX-T, some of these environments. As I go to my service providers, as I go to multi-cloud, that NSX piece inside my VMware cloud foundation starts to grow. I remember, Dave, a few years back, you know, Pat Gelsinger got up on a stage and was like, "This is the biggest collection of network administrators that we've ever seen!" And everybody's looking around and they're like, "Where? "We're virtualization people. "Oh, wait, just because we've got vNICs and vSwitches "and things like that." It still is a gap between kind of a hardcore networking people and the software state. But just like we see on storage, Dave, it's not like vSAN, despite it's thousands and thousands of customers, it is not the dominant player in storage. It's a big player, it's a great revenue stream, and it is expanding VMware beyond their core vSphere solutions. >> Back to Cisco real quickly. One of the things I'm very impressed with Cisco is the way in which they've developed infrastructures. Code with the DevNet group, how CCIEs are learning Python, and that's a very powerful sort of trend to watch. The other thing we're watching is VMware-AWS. How will it affect spending, you know, near-term, mid-term, long-term? Clearly it's been a momentum, you know, tailwind, for VMware today, but the questions remains, long-term, where will customers place their bets? Where will the spending be? We know that cloud is growing dramatically faster than On Prem, but it appears, at least in the near- to mid-term, for one, two, maybe three more cycles, maybe indefinitely, that the VMware-AWS relationship has been a real positive for VMware. >> Yeah, Dave, I think you stated it really well. When I talked to customers, they were a bit frozen a couple of years ago. "Ah, I know I need to do more in cloud, "but I have this environment, what do I do? "Do I stay with VMware, do I have to make a big change." And what VMware did, is they really opened things up and said, "Look, no, you can embrace cloud, and we're there for you. "We will be there to help be that bridge to the future, "if you will, so take your VMware environment, "do VMware cloud in lots of places, "and we will enable that." 
What we know today, the stat that we hear all the time, the old 80/20 we used to talk about was 80% keeping the lights on, now the 80% we hear about is, there's only 20% of workloads that are in public cloud today. It doesn't mean that that other 80% is going to flip overnight, but if you look over the next five to ten years, it could be a flip from 80/20 to 20/80. And as that shift happens, how much of that estate will stay under VMware licenses? Because the day after AWS made the announcement of VMware cloud on AWS, they offered some migration services. So if you just want to go on natively on the public cloud, you can do that. And Microsoft, Google, everybody has migration services, so use VMware for what I need to, but I might go more native cloud for some of those other environments. So we know it is going to continue to be a mix. Multi-cloud is what customers are doing today, and multi- and hybrid-cloud is what customers will be doing five years from now. >> The other big question we're watching is Outposts. Will VMware and Outposts get a larger share of wallet as a result of that partnership at the expense of other vendors? And so, remains to be seen, Outposts grabbed a lot of attention, that whole notion of same control plane, same hardware, same software, same data plane On Prem as in the Data Center, kind of like Oracle's same-same approach, but it's seemingly a logical one. Others are responding. Your thoughts on whether or not these two companies will dominate or the industry will respond or an equilibrium. >> Right, so first of all, right, that full same-same full stack has been something we've been talking about now, feels like for 10 years, Dave, with Oracle, IBM had a strategy on that, and you see that, but one of the things with VMware has strong strength. What they have over two decades of experiences on is making sure that I can have a software stack that can actually live in heterogeneous environments. So in the future, if we talk about if Kubernetes allows me to live in a multi-cloud environment, VMware might be able to give me some flexibility so that I can move from one hardware stack to another as I move from data centers to service providers to public clouds. So, absolutely, you know, one to watch. And VMware is smart. Amazon might be their number one partner, but they're lining up everywhere. When you see Sanjay Poonen up on stage with Thomas Kurian at Google Cloud talking about how Anthos in your data center very much requires VMware. You see Sachi Nodella up on stage talking about these kind of VMware partnerships. VMware is going to make sure that they live in all of these environments, just like they lived on all of the servers in the data center in the past. >> The other last two pieces that I want to touch on, and they're related is, as a result of Dell's ownership of VMware, are customers going to spend more with Dell? And it's clear that Dell is architecting a very tight relationship. You can see, first of all, Michael Dell putting Jeff Clarke in charge of everything Dell was brilliant, because, in a way, you know, Pat was kind of elevated as this superstar. And Michael Dell is the founder, and he's the leader of the company. So basically what he's created is this team of rivals. Now, you know, Jeff and Pat, they've worked together for decades, but very interesting. We saw them up on stage together, you know, last year, well I guess at Dell Technologies World, it was kind of awkward, but so, I love it. 
I love that tension of, It's very clear to me that Dell wants to integrate more tightly with VMware. It's the clear strategy, and they don't really care at this point if it's at the expense of the ecosystem. Let the ecosystem figure it out themselves. So that's one thing we're watching. Related to that is long-term, are customers going to spend more of their VMware dollars in the public cloud? Come back to Dell for a second. To me, AWS is by far the number one competitor of Dell, you know, that shift to the cloud. Clearly they've got other competitors, you know, NetApp, Huawei, you know, on and on and on, but AWS is the big one. How will cloud spending effect both Dell and AWS long-term? The numbers right now suggest that cloud's going to keep growing, $35, $40 billion run-rate company growing at 40% a year, whereas On Prem stuff's growing, you know, at best, single digits. So that trend really does favor the cloud guys. I talked to a Gartner analyst who tracks all this stuff. I said, "Can AWS continue to grow? It's so big." He said, "There's no reason, they can't stop. "The market's enormous." I tend to agree, what are your thoughts? >> Yeah, first of all, on the AWS, absolutely, I agree, Dave. They are still, if you look at the overall IT spend, AWS is still a small piece. They have, that lever that they have and the influence they have on the marketplace greatly outweighs the, you know, $30, $31 billion that they're at today, and absolutely they can keep growing. The one point, I think, what we've seen, the best success that Dell is having, it is the Dell and VMware really coming together, product development, go to market, the field is tightly, tightly, tightly alligned. The VxRail was the first real big push, and if they can do the same thing with the vCloud foundation, you know, VMware cloud on Dell hardware, that could be a real tailwind for Dell to try to grow faster as an infrastructure company, to grow more like the software companies or even the cloud companies will. Because we know, when we've run the numbers, Dave, private cloud is going to get a lot of dollars, even as public cloud continues its growth. >> I think the answer comes down to a couple things. Because right now we know that 80% of the spend and stall base is On Prem, 20% in the cloud. We're entering now the cloud 2.0, which introduces hybrid-cloud, On Prem, you know, connecting to clouds, multi-cloud, Kubernetes. So what it comes down to, to me Stu, is to what degree can Dell, VMware, and the ecosystem create that cloud experience in a hybrid world, number one? And number two, how will they be able to compete from a cost-structure standpoint? Dell's cost-structure is better than anybody else's in the On Prem world. I would argue that AWS's cost-structure is better, you know, relative to Dell, but remains to be seen. But really those two things, the cloud experience and the cost-structure, can they hold on, and how long can they hold on to that 80%? >> All right, so Dave here's the question I have for you. What are we talking about when we're talking about Dell plus VMware and even add in Pivotal? It's primarily hardware plus software. Who's the biggest in that multi-cloud space? It's IBM plus Red Hat, which you've stated emphatically, "This is a services play, and IBM has, you know, "just got, you know, services in their DNA, "and that could help supercharge where Red Hat's going "and the modernization." So is that a danger for Dell? If they bring in Pivotal, do they need to really ramp up that services? 
How do they do that? >> Yeah, I don't think it's a zero sum game, but I also don't think there's, it's five winners. I think that the leader, VMware right now would be my favorite, I think it's going to do very well. I think Red Hat has got, you know, a lot of good market momentum, I think they've got a captive install base, you know, with IBM and its large outsourcing business, and I think they can do pretty well, and I think number three could do okay. I think the other guys struggle. But it's so early, right now, in the hybrid-cloud world and the multi-cloud world, that if I were any one of those five I'd be going hard after it. We know Google's got the dollars, we know Microsoft has the software state, so I can see Microsoft actually doing quite well in that business, and could emerge as the, maybe they're not a long-shot right now, but they could be a, you know, three to one, four to one leader that comes out as the favorite. So, all right, we got to go. Stu, thanks very much for your insights. And thank you for watching and listening. We will be at VMworld 2019. Three days of coverage on theCUBE. Thanks for watching everybody, we'll see you next time. (upbeat music)

Published Date : Aug 15 2019


Brad Myles, Polaris | AWS Imagine Nonprofit 2019


 

>> Announcer: From Seattle, Washington, it's theCUBE! Covering AWS IMAGINE Nonprofit. Brought to you by Amazon Web Services. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're in the waterfront in Seattle, Washington, it's absolutely gorgeous here the last couple of days. We're here for the AWS IMAGINE Nonprofit event. We were here a couple weeks ago for the education event, now they have a whole separate track for nonprofits, and what's really cool about nonprofits is these people, these companies are attacking very, very big, ugly problems. It's not advertising, it's not click here and get something, these are big things, and one of the biggest issues is human trafficking. You probably hear a lot about it, it's way bigger than I ever thought it was, and we're really excited to have an expert in the field that, again, is using the power of AWS technology as well as their organization to help fight this cause. And we're excited to have Brad Myles, he is the CEO of Polaris and just coming off a keynote, we're hearing all about your keynote. So Brad, first off, welcome. >> Yeah, well thank you, thank you for having me. >> Absolutely, so Polaris, give us a little bit about kind of what's the mission for people that aren't familiar with the company. >> Yeah, so Polaris, we are a nonprofit that works full-time on this issue. We both combat the issue and try to get to long-term solutions, and respond to the issue and restore freedom to survivors by operating the National Human Trafficking Hotline for the United States, so, it's part kind of big data and long-term solutions, and it's part responding to day-to-day cases that break across the country every day. >> Right, in preparing for this interview and spending some time on the site there was just some amazing things that just jump right off the page. 24.9 million people are involved in this. Is that just domestically here in the States, or is that globally? >> That's a global number. So when you're thinking about human trafficking, think about three buckets. The first bucket is any child, 17 or younger, being exploited in the commercial sex trade. The second bucket is any adult, 18 or over, who's in the sex trade by force, fraud, or coercion. And the third bucket is anyone forced to work in some sort of other labor or service industry by force, fraud, or coercion. So you've got the child sex trafficking bucket, you've got the adult sex trafficking bucket, and then you've got all the labor trafficking bucket, right? You add up those three buckets globally, that's the number that the International Labour Organization came out and said 25 million around the world are those three buckets in a given year. >> Right, and I think again, going through the website, some of the just crazy discoveries, it's the child sex trafficking you can kind of understand that that's part of the problem, the adult sex trafficking. But you had like 25 different human trafficking business models, I forget the term that was used, for a whole host of things well beyond just the sex trade. It's a very big and unfortunately mature industry. >> Totally, yeah, so we, so the first thing that we do that we're kind of known for is operating the National Human Trafficking Hotline. The National Human Trafficking Hotline leads to having a giant data set on trafficking, it's 50,000 cases of trafficking that we've worked on. 
So then we analyzed that data set and came to the breakthrough conclusion that there are these 25 major forms, and almost any single call that we get in to the National Hotline is going to be one of those 25 types. And once you know that then the problem doesn't seem so overwhelming, it's not, you know, thousands of different types, it's these 25 things, so, it's 18 labor trafficking types and seven sex trafficking types. And it enables a little bit more granular analysis than just saying sex trafficking or labor trafficking which is kind of too broad and general. Let's get really specific about it, we're talking about these late night janitors, or we're talking about these people in agriculture, or we're talking about these women in illicit massage businesses. It enables the conversation to get more focused. >> Right, it's so interesting right, that's such a big piece of the big data trend that we see all over the place, right? It used to be, you know, you had old data, a sample of old data that you took an aggregate of and worked off the averages. And now, because of big data, and the other tools that we have today, now actually you can work on individual cases. So as you look at it from a kind of a big data point of view, what are some of the things that you're able to do? And that lead directly to, everyone's talking about the presentation that you just got off of, in terms of training people to look for specific behaviors that fit the patterns, so you can start to break some of these cases. >> Exactly, so, I think that the human trafficking field risks being too generic. So if you're just saying to the populace, "Look for trafficking, look for someone who's scared." People are like, that's not enough, that's too vague, it's kind of slipping through my fingers. But if you say, "In this particular type of trafficking, "with traveling magazine sales crews, "if someone comes to your door "trying to sell you a magazine with these specific signs." So now instead of talking about general red flag indicators across all 25 types, we're coming up with red flag indicators for each of the 25 types. So instead of speaking in aggregate we're getting really specific, it's almost like specific gene therapy. And the data analysis on our data set is enabling that to happen, which makes the trafficking field smarter, we could get smarter about where victims are recruited from, we could get smarter about intervention points, and we could get smarter about where survivors might have a moment to kind of get help and get out. >> Right, so I got to dig into the magazine salesperson, 'cause I think we've all had the kid-- >> Brad: Have you had a kid come to you yet? >> Absolutely, and you know, you think first they're hustlin' but their papers are kind of torn up, and they've got their little certificate, certification. How does that business model work? >> Yeah, so that's one of the 25 types, they're called mag crews. There was a New York Times article written by a journalist named Ian Urbina who really studied this and it came out a number of years ago. Then they made a movie about it called "American Honey," if you watch with a number of stars. But essentially this is a very long-standing business model, it goes back 30 or 40 years of like the door-to-door salesperson, and like trying to win sympathy from people going to door-to-door sales. And then these kind of predatory groups decided to prey on disaffected U.S. citizen youth that are kind of bored, or are kind of working a low-wage job. 
And so they go up to these kids and they say, "Tired of working at the Waffle House? "Well why don't you join our crew and travel the country, "and party every night, and you'll be outdoors every day, "and it's coed, you get to hang out with girls, "you get to hang out with guys, "we'll drink every night and all you have to do "is sell magazines during the day." And it's kind of this alluring pitch, and then the crews turn violent, and there's sometimes quotas on the crew, there's sometimes coercion on the crew. We get a lot of calls from kids who are abandoned by the crew. Where the crew says, "If you act up "or if you don't adhere to our rules, "we'll just drive away and leave you in this city." >> Wherever. >> Is the crews are very mobile they have this whole language, they call it kind of jumping territory. So they'll drive from like Kansas City to a nearby state, and we'll get this call from this kid, they're like, "I'm totally homeless, my crew just left me behind "because I kind of didn't obey one of the rules." So a lot of people, when they think of human trafficking they're not thinking of like U.S. citizen kids knocking on your door. And we're not saying that every single magazine crew is human trafficking, but we are saying that if there's force, and coercion, and fraud, and lies, and people feel like they can't leave, and people feel like they're being coerced to work, this is actually a form of human trafficking of U.S. citizen youth which is not very well-known but we hear about it on the Hotline quite a lot. >> Right, so then I wonder if you could tell us more about the Delta story 'cause most of the people that are going to be watching this interview weren't here today to hear your keynote. So I wonder if you can explain kind of that whole process where you identified a specific situation, you train people that are in a position to make a difference and in fact they're making a big difference. >> Yeah. So the first big report that we released based on the Hotline data was the 25 types, right? We decided to do a followup to that called Intersections, where we reached out to survivors of trafficking and we said, "Can you tell us about "the legitimate businesses that your trafficker used "while you were being trafficked?" And all these survivors were like, "Yeah, sure, "we'll tell you about social media, "we'll tell you about transportation, "we'll tell you about banks, "we'll tell you about hotels." And so we then identified these six major industries that traffickers use that are using legitimate companies, like rental car companies, and airlines, and ridesharing companies. So then we reached out to a number of those corporate partners and said, "You don't want this stuff on your services, right?" And Delta really just jumped at this, they were just like, "We take this incredibly seriously. "We want our whole workforce trained. "We don't want any trafficker to feel like "they can kind of get away with it on our flights. "We want to be a leader in transportation." And then they began taking all these steps. Their CEO, Ed Bastian, took it very seriously. They launched a whole corporate-wide taskforce across departments, they hosted listening sessions with survivor leaders so survivors could coach them, and then they started launching this whole strategy around training their flight attendants, and then training their whole workforce, and then supporting the National Human Trafficking Hotline, they made some monetary donations to Polaris. 
We get situations on the Hotline where someone is in a dangerous situation and needs to be flown across the country, like an escape flight almost, and Delta donated SkyMiles for us to give to survivors who are trying to flee a situation, who needs a flight. They can go to an airport and get on a flight for free that will fly them across the country. So it's almost like a modern day Underground Railroad, kind of flying people on planes. >> Jeff: Right, right. >> So they've just been an amazing partner, and they even then took the bold step of saying, "Well let's air a PSA on our flights "so the customer base can see this." So when you're on a Delta flight you'll see this PSA about human trafficking. And it just kept going and going and going. So it's now been about a five-year partnership and lots of great work together. >> And catching bad guys. >> Yeah, I mean, their publicity of the National Human Trafficking Hotline has led to a major increase in calls. Airport signage, more employees looking for it, and I actually do believe that the notion of flying, if you're going to be a trafficker, flying on a Delta flight is now a much more harrowing experience because everyone's kind of trained, and eyes and ears are looking. So you're going to pivot towards another airline that hasn't done that training yet, which now speaks to the need that once one member of an industry steps up, all different members of the industry need to follow suit. So we're encouraging a lot of the other airlines to do similar training and we're seeing some others do that, which is great. >> Yeah, and how much of it was from the CEO, or did he kind of come on after the fact, or was there kind of a champion catalyst that was pushing this through the organization, or is that often the case, or what do you find in terms of adoption of a company to help you on your mission? >> That's a great question. I mean, the bigger picture here is trafficking is a $150 billion industry, right? A group of small nonprofits and cops are not going to solve it on their own. We need the big businesses to enter the fight, because the big businesses have the resources, they have the brand, they have the customer base, they have the scale to make it a fair fight, right? So in the past few years we're seeing big businesses really enter the fight against trafficking, whether or not that's big data companies like AWS, whether or not that's social media companies, Facebook, whether or not that's hotel companies, like Wyndham and Marriott, airlines like Delta. And that's great because now the big hitters are joining the trafficking fight, and it happens in different ways, sometimes it's CEO-led, I think in the case of Delta, Ed Bastian really does take this issue very seriously, he was hosting events on this at his home, he's hosted roundtables of other CEOs in the Atlanta area like UPS, and Chick-fil-A, and Home Depot, and Coca-Cola, all those Atlanta-based CEOs know each other well, he'll host roundtables about that, and I think it was kind of CEO-led. But in other corporations it's one die hard champion who might be like a mid-level employee, or a director, who just says, "We really got to do this," and then they drive more CEO attention. So we've seen it happen both ways, whether or not it's top-down, or kind of middle-driven-up. 
But the big picture is if we could get some of the biggest corporations in the world to take this issue seriously, to ask questions about who they contract with, to ask questions about what's in their supply chain, to educate their workforce, to talk about this in front of their millions of customers, it just puts the fight against trafficking on steroids than a group of nonprofits would be able to do alone. So I think we're in a whole different realm of the fight now that business is at the table. >> And is that pretty much your strategy in terms of where you get the leverage, do you think? Is to execute via a lot of these well-resourced companies that are at this intersection point, I think that's a really interesting way to address the problem. >> Yeah, well, it's back to the 25 types, right? So the strategies depend on type. Like, I don't think big businesses being at the table are necessarily going to solve magazine sales crews, right? They're not necessarily going to solve begging on the street. But they can solve late night janitors that sometimes are trafficked, where lots of big companies are contracting with late night janitorial crews, and they come at 2:00 a.m., and they buff the floors, and they kind of change out the trash, and no one's there in the office building to see those workers, right? And so asking different questions of who you procure contracts with, to say, "Hey, before we contract with you guys, "we're going to need to ask you a couple questions "about where these workers got here, "and what these workers thought they were coming to do, "and we need to ID these workers." The person holding the purse strings, who's buying that contract, has the power to demand the conditions of that contract. Especially in agriculture and large retail buyers. So I think that big corporations, it's definitely part of the strategy for certain types, it's not going to solve other types of trafficking. But let's say banks and financial institutions, if they start asking different questions of who's banking with them, just like they've done with terrorism financing they could wipe out trafficking financing, could actually play a gigantic role in changing the course of how that type of trafficking exists. >> So we could talk all day, I'm sure, but we don't have time, but I'm just curious, what should people do, A, if they just see something suspicious, you know, reach out to one of these kids selling magazines, or begging on the street, or looking suspicious at an airport, so, A, that's the question. And then two, if people want to get involved more generically, whether in their company, or personally, how do they get involved? >> Yeah, so there are thousands of nonprofit groups across the country, Polaris is in touch with 3,000 of them. We're one of thousands. I would say find an organization in your area that you care about and volunteer, get involved, donate, figure out what they need. Our website is polarisproject.org, we have a national Referral Directory of organizations across the country, and so that's one way. The other way is the National Human Trafficking Hotline, the number, 1-888-373-7888. The Hotline depends on either survivors calling in directly as a lifeline, or community members calling in who saw something suspicious. 
So we get lots of calls from people who were getting their nails done, and the woman was crying and talking about how she's not being paid, or people who are out to eat as a family and they see something in the restaurant, or people who are traveling and they see something that doesn't make, kind of, quite sense in a hotel or an airport. So we need an army of eyes and ears calling tips into the National Human Trafficking Hotline and identifying these cases, and we need survivors to know the number themselves too so that they can call in on their own behalf. We need to respond to the problem in the short-term, help get these people connected to help, and then we need to do the long-term solutions which involves data, and business, and changing business practice, and all of that. But I do think that if people want to kind of educate themselves, polarisproject.org, there are some kind of meta-organizations, there's a group called Freedom United that's kind of starting a grassroots movement against trafficking, freedomunited.org. So lots of great organizations to look into, and this is a bipartisan issue, this is an issue that most people care about, it's one of the top headlines in the newspapers every day these days. And it's something that I think people in this country naturally care about because it references kind of the history of chattel slavery, and some of those forms of slavery that morphed but never really went away, and we're still fighting that same fight today. >> In terms of, you know, we're here at AWS IMAGINE, and they're obviously putting a lot of resources behind this, Teresa Carlson and the team. How are you using them, have you always been on AWS? Has that platform enabled you to accomplish your mission better? >> Yeah, oh for sure, I mean, Polaris crunches over 60 terabytes of data per day, of just like the computing that we're doing, right? >> Jeff: And what types of data are you crunching? >> It's the data associated with Hotline calls, we collect up to 150 variables on each Hotline call. The Hotline calls come in, we have this data set of 50,000 cases of trafficking with very sensitive data, and the protections of that data, the cybersecurity associated with that data, the storage of that data. So since 2017, Polaris has been in existence since 2002, so we're in our 17th year now, but starting three years ago in 2017 we started really partnering with AWS, where we're migrating more of our data onto AWS, building some AI tools with AWS to help us process Hotline calls more efficiently. And then talking about potentially moving our, all of our data storage onto AWS so that we don't have our own server racks in our office, we still need to go through a number of steps to get there. But having AWS at the table, and then talking about the Impact Computing team and this, like, real big data crunching of like millions of trafficking cases globally, we haven't even started talking about that yet but I think that's like a next stage. So for now, it's getting our data stronger, more secure, building some of those AI bots to help us with our work, and then potentially considering us moving completely serverless, and all of those things are conversations we're having with AWS, and thrilled that AWS is making this an issue to the point that it was prioritized and featured at this conference, which was a big deal, to get in front of the whole audience and do a keynote, and we're very, very grateful for that. 
>> And you mentioned there's so many organizations involved, are you guys doing data aggregation, data consolidation, sharing, I mean there must be with so many organizations, that adds a lot of complexity, and a lot of data silos, to steal classic kind of IT terms. Are you working towards some kind of unification around that, or how does that look in the future? >> We would love to get to the point where different organizations are sharing their data set. We'd love to get to the point where different organizations are using, like, a shared case management tool, and collecting the same data so it's apples to apples. There are different organizations, like, Thorn is doing some amazing big data-- >> Jeff: Right, we've had Thorn on a couple of times. >> How do we merge Polaris's data set with Thorn's data set? We're not doing that yet, right? I think we're only doing baby steps. But I think the AWS platform could enable potentially a merger of Thorn's data with Polaris's data in some sort of data lake, right? So that's a great idea, we would love to get to that. I think the field isn't there yet. The field has kind of been, like, tech-starved for a number of years, but in the past five years has made a lot of progress. The field is mostly kind of small shelters and groups responding to survivors, and so this notion of like infusing the trafficking field with data is somewhat of a new concept, but it's enabling us to think much bigger about what's possible. >> Well Brad, again, we could go on all day, you know, really thankful for what you're doing for a whole lot of people that we don't see, or maybe we see and we're not noticing, so thank you for that, and uh. >> Absolutely. >> Look forward to catching up when you move the ball a little bit further down the field. >> Yeah, thank you for having me on. It's a pleasure to be here. >> All right, my pleasure. He's Brad, I'm Jeff, you're watching theCUBE. We're at AWS IMAGINE Nonprofits, thanks for watching, we'll see you next time. (futuristic music)
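For readers curious what the "granular" typology analysis described above might look like mechanically, here is a minimal, purely hypothetical sketch. The record fields, type labels, and sample rows are invented for illustration; Polaris's actual hotline schema, which reportedly captures up to 150 variables per call, is not public.

# Hypothetical sketch of tallying hotline case records against a typology,
# in the spirit of the "25 types" analysis described above. Field names and
# sample data are invented; they do not reflect Polaris's real schema.
from collections import Counter

cases = [
    {"case_id": 1, "type": "traveling sales crew", "state": "MO"},
    {"case_id": 2, "type": "illicit massage business", "state": "CA"},
    {"case_id": 3, "type": "traveling sales crew", "state": "KS"},
    {"case_id": 4, "type": "agriculture", "state": "FL"},
]

by_type = Counter(case["type"] for case in cases)

# A per-type view is what lets an analyst publish red-flag indicators
# specific to, say, sales crews rather than generic warning signs.
for type_name, count in by_type.most_common():
    print(f"{type_name}: {count} case(s)")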

Published Date : Aug 13 2019


Recep Ozdag, Keysight | CUBEConversation


 

>> From our studios in the heart of Silicon Valley, Palo Alto, California, it is a CUBE Conversation.
>> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're in our Palo Alto studios for a CUBE Conversation. It's the middle of the summer, the conference season has slowed down a little bit, so we get a chance to do more CUBE Conversations, which is always great. Excited to have our next guest. He's Recep Ozdag, he's a VP and GM from Keysight. Recep, great to see you. >> Thank you for hosting us. >> Yeah. So we've had Marie on a couple of times, we had Bethany on a long time ago, before the acquisition. But for people that aren't familiar with Keysight, give us kind of a quick overview. >> Sure, sure. So I'm within the Ixia Solutions Group. Ixia really started, was founded, back in '97. It IPO'd around 2000, and really started as a test and measurement company. Quickly after the IPO it became the number one vendor in the space, grew quickly, and around 2012 and 2013 acquired two companies, Net Optics and Anue. Net Optics and Anue were in the visibility or monitoring space, selling taps, bypass switches and network packet brokers. So that formed the visibility group within Ixia. And then around 2017 Keysight acquired Ixia and we became ISG, or the Ixia Solutions Group. Now, Keysight is also a very large test and measurement company. It is the actual original HP startup that started in Palo Alto many years ago. And HP, of course, grew, um, it also started as a test and measurement company, then later on it got into printers and servers. HP spun off Agilent, and Agilent became the test and measurement company. And then around 2014, I would say, or '15, Agilent spun off the test and measurement portion, and that became Keysight; Agilent continued as a life sciences organization. So Keysight really got the name around 2014 after spinning off, and they acquired Ixia in 2017. So the majority of the business is test and measurement, but we do have that visibility and monitoring organization too. >> Okay, so you do the test and measurement really on devices, kind of pre-production, making sure these things are up to speed, and then you're actually doing the monitoring on live production systems? >> Mostly. The only thing that I would add is that now we are getting into live network testing too. We see that mostly in the service provider space: before you turn on the service, you need to make sure that all the devices and all the services have come up correctly. But also we're seeing it in enterprises too, particularly with security assessments, so breach assessments, attack simulations: is your IT organization really protecting the network? So we're seeing that become more and more important, and they're pulling in test, particularly for security, in that area too. So as you say, it's mostly device testing, but then that's extending to network infrastructure and security networks. >> Right. So you've been in the industry for a while, you were at Intel, you've been through a couple acquisitions, you've seen a lot of trends, and there's a lot of big macro things happening right now in the industry. It's exciting times, and one of the ones, actually, you just talked about it at Cisco Live a couple weeks ago, is edge computing. There's a lot of talk about edge, is edge the new cloud, you know, how much compute can move to the edge, what do you do in a crazy oilfield with hot temperatures and no power? I wonder if you can share some of your observations about edge, your kind of point of view as to where we're heading, and what should people be thinking about when they're considering, yeah, what does edge mean to my business?
>> Absolutely, absolutely. So when I say edge computing, I typically include IoT and edge networks, along with remote and branch offices, and obviously we can see the impact of IoT: security cameras, thermostats, smart homes, home automation, factory automation, hospital automation. Even planes have sensors on their engines right now for monitoring purposes and diagnostics. So that's one group. But then we know in our everyday lives, enterprises are growing very quickly, and they have remote and branch offices. More people are working remotely, more people are working from home, so that means that more data is being generated at the edge, whether it's with IoT sensors or the edge computing we see with oil and gas companies, and it doesn't really make sense to ship all of that data off. Just imagine a self-driving car: you need to capture a lot of data and you need to process it. You can't really just send it to the cloud, expect a decision to be made and then come back so that you turn left or right. You need to actually process all that data at the edge, where the source of the data is, and that means pushing more of that compute infrastructure closer to the source. That also means running business-critical applications closer to the source. And that means, you know, it's more of a massively distributed computing architecture. What happens is that you have to then reliably connect all these devices, so connectivity becomes important. But as you distribute compute as well as applications, your attack surface increases, right? Because all of these devices are very vulnerable. We're probably adding about 5,000,000 IoT devices every day to our networks, so that's a lot of IoT devices or edge devices that we connect, and many of these devices, you know, we don't really properly test. You probably know from your own home that you can just buy something and easily connect it to your WiFi; similarly, people buy something, go to their work and connect it to the WiFi there. Now that device is connected to your entire network, so a vulnerability in any of these devices exposes the entire network to that same vulnerability. So our attack surface is increasing, and connection reliability as well as security for all these devices is a challenge. So we enjoy edge computing, IoT, branch and remote offices, but it does pose those challenges, and that's what we're here to do with our tech partners, to solve these issues. >> Right, it's just interesting to me on the edge, because you still have kind of the three big, you know, compute things. You've got the networking, right, which is just going to be addressed by 5G and a lot better bandwidth and connectivity, but you still have store and you still have compute, and you've got to get those things power. So as you're thinking about the distribution of that compute and store at the edge versus in the cloud, you've got the latency issue. It seems like a pretty delicate balancing act, that people are going to have to tune these systems to figure out how much to allocate where, and you will have physical limitations at, you know, the power plant out in the middle of nowhere. >> It's a great point, and you typically get agility at the edge. Obviously you don't have power, because these devices are small.
Even if you take a remote or branch office with 50 to 100 employees, there's only so much compute that you have, but you need to be able to make decisions quickly there, so the agility is there. But obviously the vast amounts of compute and storage are more in your centralized data center, whether it's in your private cloud or your public cloud. So how do you make the compromise? When do you run applications at the edge, and when do you run applications in the cloud, private or public? It is, in fact, a compromise, and you might have to balance it, and it might change all the time, just as, you know, if you look at our traditional history of compute: we had the mainframes, which were centralized, and then it became distributed, then centralized, then distributed. So this changes all the time and you have to make decisions, which brings up the issue of, I would say, hybrid IT. You know, they have the same issue. A lot of enterprises have more of a hybrid IT strategy or multi-cloud: where do you run the applications? Even if you forget about the edge, even on-prem, do you run it on premises, do you run it in the public cloud, do you move it between cloud service providers? Even that is a small optimization problem. It's now even bigger with edge computing. >> Right. So the other thing that we've seen time and time again, a huge trend, is software-defined, um, we've seen it in the networking space, the compute space. Software-defined is such a big deal now, and you've seen that. So when you look at it from a test and measurement view, when people are building out these devices, obviously a ton of great functional capability is suddenly available to people, but in terms of challenges, and in terms of what you're thinking about in software-defined, from you guys, because you're testing and measuring all this stuff, what's the goodness, what's the badness, how should people really think about the challenges of software-defined to take advantage of the tremendous opportunity? >> That's a really good point. I would say that with software-defined networking, what we're really seeing is disaggregation. You typically had these monolithic devices that you would purchase from one vendor, and that one vendor would guarantee that everything just works perfectly. What software-defined networking allows, or has created, is this disaggregated model. Now you can take that monolithic appliance, and whether it's a server or hardware infrastructure, maybe you have a hypervisor or some software layer, hardware abstraction layers, and many, many layers. Well, if you're trying to get that to work reliably, this means that now, in a way, the responsibility is on you to make sure that you test all of these and make sure that everything just works together, because now we have choice. Which software packages should I install, and from which vendor? There are always slight differences. Which NIC vendor should I use, an FPGA SmartNIC or a regular NIC? You go up the layers: what kind of acceleration should I use, DPDK? There are so many options, and you are responsible. So with SDN you do get the advantage, the opportunity of choice, just like on our servers and our PCs, but this means that you do have to test everything and make sure that everything works. So this means more testing at the device level, more testing as the service is being brought up. So that's the pre-deployment stage.
And once you deploy the service, now you have to continually monitor it to make sure that it's working as you expected. So you get more choice, more diversity, and of course, with disaggregation, you can take advantage of improvements at the hardware layer or the software layer, so there's that disaggregation advantage. But it means more work on test as well as monitoring. So you know, there's always a compromise. >> A trade-off. Yeah, so a different topic is security. Um, we were at RSA this year, we were in the ForeScout booth and had a great chat with Michael DeCesare there. And he talked about, you know, you talked a little bit about increasing surface area for attack, and then, you know, we all know the statistics of how long it takes people to know that they've been breached, etcetera, etcetera. But Mike is funny, you know, they have a very simple sales pitch: they basically put their sniffer on your network and tell you that you've got eight times more devices on the network than you thought, because people are connecting all types of things. So when you look at, you know, kind of monitoring and test, especially with this increased surface area of all these IoT devices, especially with bring-your-own devices, and it's funny, the HVAC seems to be a really great place for bad guys to get in, and I heard the other day at a casino, a connected thermometer in a fish tank in the lobby was the access point. How is that kind of changing your guys' world? You know, how do you think about security? Because it seems like, in the end, everyone seems to be getting breached at some point in time, so it's almost more, how fast can you catch it, how do you minimize the damage, how do you take care of it, versus this assumption that you can stop the breaches? >> You know, that was a really good point that you mentioned at the end, which is, it's just better to assume that you will be breached at some point, and how quickly can you detect that? Because on average, I think, according to research, it takes an enterprise about six months. Of course, there are enterprises where it takes a couple of years before they realize, and, you know, we hear this on the news, about millions of records exposed, billions of dollars of market cap lost. ForeScout is a very close tech partner, and we typically deploy solutions together with these technology partners, whether it's APM, NPM, but very importantly security. And if you think about it, there are terabytes of data in the network. Typically, many of these tools look at the packet data, but you can't really just take those terabytes of data and just throw it at all the tools; it just becomes financially impossible to provide security and deploy such tools in a very large network. So this is where we come in: with the taps we access the data, and the packet brokers essentially groom it, filtering it down to maybe the tens or hundreds of gigs that are really, really important, and then we feed it to our tech partners such as ForeScout and many of the others. That way they can focus on providing security by looking at the packets that really matter. For example, you know, some solutions only need to look at the packet header; you don't really need to see the payload. So if somebody is streaming Netflix or YouTube, maybe you just need to send the first megabyte of data, not the whole hundreds of gigs of that video. That allows them to, it helps us increase the efficiency of that tool.
So the end customer can actually get a good ROI on that investment, and it allows ForeScout, or any of the tech partners, to look at what's really important and do a better job of investigating: hey, have I been hacked? And of course, it has to be stateful, meaning that it's not just looking at one data flow on one side; it's looking at the whole communication, so you can understand, what is this? A malicious application that has now downloaded other malicious applications and is infiltrating my system? Is it a DDoS attack? Is it a hack? There's a whole ecosystem of attacks, and that's why we have so many companies in this space, many startups. >> It's interesting, we had Tom Siebel on a little while ago, actually at an AWS event, and his explanation of what big data means is that there's no sampling anymore. And we often hear that, you know, prior to big data we would take a sample of data after the fact and then try to do some understanding, where now we have real-time streaming engines, so now we're getting all the data basically instantaneously and making decisions. But what you just bring out is that you don't necessarily want all the data all the time, because it can overwhelm, it stresses the system; there needs to be a much better management approach to that. And as I look at some of the notes, you know, you guys are now deploying 400 gigabit. >> That's right. >> Which is bananas, because it seems like only yesterday that 100 gigabit Ethernet was a big deal. Talk a little bit about, you know, the just hardcore technology changes that are impacting data centers and deployments, and as this bandwidth goes through the ceiling, what people are physically having to do to do it. >> Sure, sure. It's amazing how it took some time to go from 1 to 10 gig and then to 40 gig, but that time frame is getting shorter and shorter, from 40 to 100, and 100 to 400. I don't even know how we're going to get to the next phase, because the demand is there, and the demand is coming from a number of trends. One is really 5G, or the preparation for 5G. A lot of service providers have started to do trials and they're upgrading that infrastructure, because 5G is going to make it easier to access vast amounts of data quickly, and whenever you make something easy for the consumer, they will consume more of it. So that's one aspect of it: the preparation for 5G is increasing the need for bandwidth and an infrastructure overhaul. The other piece is that with virtualization we're generating more east-west traffic, but because we're distributed, with edge computing, that east-west traffic can still traverse data centers and geographies. So this means that it's not just contained within a server or within a rack; it actually goes to different locations. That also means your data center interconnect has to support 400 gig. So a lot of network equipment manufacturers, NEMs as we typically call them, are releasing or are about to release 400 gig devices. So on the test side, they use our solutions to test these devices, obviously, because they want to release them based on the standards and make sure that they work. So that's the pre-deployment phase.
But once these 400 gig devices are deployed, and it's typically service providers, though we're slowly starting to see large enterprises deploy it too, as I mentioned, because of virtualization and edge computing, then the question is, how do you make sure that your 400 gig infrastructure is operating at the capacity that you want, NPM, APM, as well as making sure you're providing security? So there's a pre-deployment phase that we help with on the test side, and then a post-deployment monitoring phase. But 5G is a big one; even though we haven't actually turned on 5G services yet, there's tremendous investment going on. In fact, Keysight, the larger organization, is helping with a lot of this device testing too. So it's not just Ixia but Keysight; it's consuming a lot of our time, just because we're having a lot of engagements on the cell phone side, uh, you know, the endpoint side. It's a very interesting time that we're living in, because the changes are becoming more and more frequent, and it's very hard to adapt and make sure that you're leading that wave.
>> In preparing for this, I saw you in another video, I can't remember which one it was, but your quote was, you know, they didn't create electricity by improving candles. Great line, I'm going to steal it, I'll give you credit. But as you look back, I mean, I don't think most people have really gotten the step function of 5G, you know, and they talk about 5G on your phone, but it's not about your phone. This is the first kind of network built for machines, that's right, machine data, the speed of machine data, and the quantity of machine data. As you sit back, kind of reflectively, again, you've been in this business for a while and you look at 5G, you're sitting around talking to your friends at a party, maybe some family members who aren't in the business, how do you tell them what this means? I mean, what are people not really seeing when they're just thinking it's just going to be a handset upgrade, where they're completely missing the boat? >> Yeah, I think for the regular consumer, they just think it's another handset. You know, I went from 3G to 4G, I saw a bump in speed, and, you know, some handset manufacturers are actually advertising 5G-capable handsets, so I'm just going to go out and buy another cell phone. But behind the curtain, there's this massive infrastructure overhaul that a lot of service providers are going through, and it's scary, because I would say that a lot of them are not necessarily prepared. The investment that's pouring in is staggering, and the help that they need is one area that we're trying to accommodate, because the cell towers are being replaced, the end devices are being replaced, the data centers are being upgraded, small cell sites, you know, how do you provide coverage? What is the killer use case? Most likely it's probably going to be manufacturing, just because it's, as you said, machine-to-machine communication, machine learning. That's where the connected hospitals, connected manufacturing will come into play, and it's just all this machine-to-machine communication, um, generating vast amounts of data, and that ties back to edge computing, where the edge is generating the data, but you then send some of that data, not all of it, but some of it, to a centralized cloud, and you develop essentially machine learning algorithms, which you then push back to the edge. The edge becomes more intelligent and we get better productivity. But it's all machine-to-machine communication; I would say that most of the 5G communication is going to be machine-to-machine communication. Some small portion will be the consumers just FaceTiming or messaging and streaming, but that's going to change. And of course we'll see other changes in our day-to-day lives. You know, a couple of companies attempted live gaming on the cloud in the past. It didn't really work out, just because the network latency was not there, but we'll see that too, and we're seeing some of the products coming out from the likes of Google and other companies where they're trying to push gaming into the cloud. It's something that was not really successful in the past, so those are things that I think consumers will see more of in their day-to-day lives. But the bigger impact is going to be for the enterprise. >> Alright, Recep, well thanks for taking some time and sharing your insight. You know, you guys get to see a lot of stuff, you've been in the industry for a while, you get to test all the new equipment that they're building, so you guys have a really interesting vantage point to watch these developments. Really exciting times. >> Thank you for inviting us. Great to be here. >> All right, he's Recep, I'm Jeff. You're watching theCUBE from our CUBE Studios in Palo Alto. Thanks for watching, we'll see you next time.

Published Date : Jun 20 2019


Cheryl Hung, CNCF | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, It's theCUBE, covering KubeCon, CloudNativeCon Europe 2019 Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Welcome back, we're in Barcelona, Spain and you're watching theCUBE, the worldwide leader in live tech coverage and this is KubeCon, CloudNativeCon. I'm Stu Miniman and my co-host for the two live days of coverage is Mr. Cory Quinn. And joining us was on the main stage yesterday, is Cheryl Hung who is the Director of Ecosystems at the Cloud Native Computing Foundation, or the acronym CNCF. Cheryl, welcome back to the program, thanks so much for joining us. >> Thank you, I always have a great time with theCUBE. >> So, first of all 7700 people here, one of the things that strikes me is we go to a lot of shows. We even do a decent amount of international shows. The community here is definitely global, and it's not, sometimes it's the same traveling pack, some person's like, "Well not quite as many people here "as were in Seattle." I'm like, well this isn't just the contributors all going and some of their friends and family. We've had on our program, thanks to the CNCF, and for some of the ecosystem, many of the customers here in Europe doing things, when we talk to people involved, it is obvious that it is a global community and it definitely shows here at the event, so great job on that. >> It's something that the CNCF really cares about because it's not just about one country or small set of countries, this is actually a global movement. There are businesses all over the globe that are in the middle of this transformational moment, so it's just really exciting to see it. I mean, I think of myself as being pretty involved with the Cloud Native community, but as I'm walking around the sponsor booths here today, there's a good 40-50% that I'm just not familiar with and that's quite surprising to me. I would've thought I'd knew almost all the companies around here, but it's always really fun to see the new companies coming in. >> Okay, so let's talk for a second about the diversity inclusion. One of the things is bringing in people that might not have been able to come on their own. Can you talk a little bit about that effort? And you've got some connection with that yourself. >> Yeah, yeah, so I care a lot about diversity in tech, and women in tech more specifically. One of the things that, I feel like this community has a lot of very visible women, so when I actually looked at the number of contributors by men and women, I was really shocked to find out it was 3%. It's kind of disappointing when you think about it. >> And what you're saying is it's 3% of all the contributors to all the projects in the CNCF. >> Exactly. If you look at the 36 projects, you look at the number of the people who've made issues, commits, comments, pull requests, it's 3% women and I think the CNCF has put a lot of effort into the, for example, the diversity scholarships, so bringing more than 300 people from under-represented groups to KubeCon, including 56 here in Barcelona, and it has a personal meaning to me because I really got my start through that diversity scholarship to KubeCon Berlin two years ago and when I first came to KubeCon Berlin, I knew nobody. But just that little first step can go a long way into getting people into feeling like they're part of the community and they have something valuable to give back. And then, once you're in, you're hooked on it and yeah, then it's a lot of fun. 
>> It's been said fairly frequently that talent is evenly distributed, but opportunity is not. As you take a look at the diversity inclusion efforts that the Cloud Native Computing Foundation is embarking on, how do you, what do you start evolving to next, and I ask that as two specific questions: One is do you have a target for next time, other than just larger than 3%? And, secondly, are you looking to actively expand the diversity scholarship program? And if so, how? >> Yeah, the diversity scholarship and the other initiatives around this are long-term initiatives. They're not going to pay off in the next three months; they're going to pay off in two year's time, three year's time. At least that's the hope, that's the goal. So we're always reliant on a lot of our sponsors. I mean, it's kind of a nice time at the moment because there's a lot of effort and willingness to be supportive of diversity in tech, and that means that we can offer more diversity scholarships to more people. But, I sometimes wonder, like I hope that this is not a, I hope that this is not a one-off thing will happen for five years and then people will lose interest. So, I think there are other things that need to happen. And one of the interesting things that I looked at recently is a GitHub survey, this was done in 2017, where they asked men and women how, the last time that you got help in open source, what was the source of that help? And women were, so women were just as likely to say they were interested in contributing, but they were half as likely to say that they had asked on a public forum, like a mailing list, and half as likely to say that they had received unsolicited help. So, I don't think it's something you can just say, right we'll look at individuals and make them do more, this is a community effort. We're all part of the same group of people, that we're trying to do the same, trying to work on the same things, and to do that, we need to get this mindset amongst the community that we need to reach out to more individuals and help them and pull them in, rather than saying well it's up to the CNCF to sort it out. >> Right, so Cheryl, another piece of the ecosystem that you're involved with is the end user piece. We've saw some of the interviews on the stage, as I've mentioned we've had some on the program, talk about the importance and the progress of end user participation in the CNCF. >> Yeah, so the CNCF was set up with these three bodies: the governing board, the technical oversight committee, and the end user community. And in theory, these three should be co-equal in power. At the moment, the end user community is probably lagging behind a little bit, but it's the reason that I joined the CNCF, the reason that my world exists, is to understand what the end users need and get them active and engaged in the community. So, my hope for the end user community is that end users who come in can see, not only the value of using these projects, but there's a path for them into becoming strong technical leaders and having actual influence in the projects beyond users, and then eventually, maybe contributing themselves and becoming leaders. >> Governance of open source projects has always been something of a challenge because it seems that in many respects, the most vocal people are often the ones who are afforded an unfortunate level of control, despite the fact that they may very well not speak for the common case. Instead they start adjudicating and advocating for corner cases. 
How, it seems that, at this point, based upon the sheer level of engagement you're seeing across enterprises and companies of all sizes, that that is clearly not the case. How do you, I guess, shape an ecosystem that has a healthy perspective on that? >> So, leadership in open source is very different from leadership in a typical corporate hierarchy. And leaders in open source are recognized not only because of their technical depth and their hands-on contribution, but for their ability to communicate with others and have the empathy and understand what other people need. So, the people who are seen as leaders in this community have become role models for others, and others kind of use that; to earn the actual trust of the community, you have to be very clearly making the right decisions and not doing it because you have an agenda in mind or because your employer wants you to do certain things. So I think that's gone a long way toward making sure that the ecosystem is really healthy and people really feel good about what they're doing. >> Cheryl, last thing is, could you give us, how are we helping end users get an on-ramp into this community? If you could just give us, kind of, a real quick, what's the CNCF doing, what are some of those on-ramps for those that aren't already on board? >> The three big challenges for end users right now are, number one, how do I navigate the ecosystem? Number two, how do I hire engineers? And number three, how do I make sure that my business strategy is aligned with Cloud Native? So, navigating the ecosystem is probably the trickiest one because there are so many channels, so many projects, and there's no central authority that you can go to and say, I've got this problem, am I doing the right thing? Can you help me get this feature into this project's road map? So, the CNCF has a lot of programs to ensure that end users can meet their peers, and especially companies who are, perhaps, 12 months ahead of them, and everybody's trying to go through the same journey right now, everyone has these common challenges. So if they can figure them out together and solve them together, then it just saves a lot of time and effort for everybody. On the hiring piece, the CNCF does a lot around marketing and PR and brand awareness, and there are companies here who have a booth who are not selling their products at all; they're just here because they want to be in front of the engineers who are most involved with open source and Kubernetes, and so the CNCF facilitates that, and to some extent subsidizes these end users to be at KubeCon. And then the third challenge is aligning your business strategy with Cloud Native. So, end users want to know these projects have longevity, that they're going to be here in five years' or 10 years' time, and for companies that want to get involved at that next level, for example serving on that technical oversight committee or being on the governing board, the CNCF can help end users have that level of impact and that level of engagement within the community. >> Alright so Cheryl, last word, any advice for people? What's the hottest job out there, that people are looking for? >> I've previously managed DevOps engineering teams, and finding people with real Kubernetes production experience right now is just really hard. 
And I would say that the first thing that you should do, if you have no experience at all in it, is look at the training programs, for example the CKA, Certified Kubernetes Administrator. You don't have to get the certification, but if you look at the curriculum and go through it step by step, you can understand the basic concepts, and after that point, get the production experience. There's no substitute for a year or two years of really running applications and monitoring and scaling them in production and dealing with fires. So, once you get to that point, it's a great place to be. >> Alright, well you heard it here. Cheryl, thanks so much for sharing everything on the ecosystem and diversity and inclusion. Really appreciate the updates. >> Thank you, really good to speak to you. >> For Corey Quinn, I'm Stu Miniman. We're getting towards the end of two days of live coverage here, and we'll be back with more shortly. Thanks for watching theCUBE. (upbeat music)
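As a companion to Cheryl's advice above about working through the CKA curriculum and then getting hands-on production experience, here is a minimal sketch of the kind of cluster inspection that practice usually starts with. It uses the official Kubernetes Python client; it assumes the client is installed (pip install kubernetes) and that a kubeconfig points at a reachable cluster, and the "default" namespace is only an example, not anything from the interview.

# A minimal sketch, assuming a kubeconfig that points at a reachable cluster.
from kubernetes import client, config

def main():
    config.load_kube_config()              # reads ~/.kube/config
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # Pods are the first concept the CKA curriculum walks through.
    for pod in core.list_namespaced_pod(namespace="default").items:
        print(f"pod {pod.metadata.name}: phase={pod.status.phase}")

    # Deployments and replica counts give a first feel for scaling and
    # self-healing, the production concerns Cheryl mentions above.
    for dep in apps.list_namespaced_deployment(namespace="default").items:
        ready = dep.status.ready_replicas or 0
        print(f"deployment {dep.metadata.name}: {ready}/{dep.spec.replicas} ready")

if __name__ == "__main__":
    main()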

Published Date : May 22 2019


Keynote Analysis | KubeCon + CloudNativeCon EU 2019


 

>> Live from Barcelona, Spain, it's theCUBE, covering KubeCon, CloudNativeCon Europe 2019, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hola Barcelona, I'm Stu Miniman and my guest host for this week is the one and only Corey Quinn, and you're watching theCUBE, the leader in live tech coverage. It's actually the fourth year we've been doing KubeCon and CloudNativeCon. This is KubeCon CloudNativeCon Barcelona 2019. We've got two days of wall to wall live coverage. Last year we were in Copenhagen, it was outside, a little bit windy, and we had this lovely silk above us. This time we are inside at the Fira. We've got some lovely Cube branding. The store with all the t-shirts and the little plushies of Phippy and all the animals are right down the row for us, and there are 7,700 people here. So I have been, I did the Austin show in 2017, did the Seattle show last year in 2018. We had done the Portland show in 2016, so it's my third time doing one of these, but Corey, it is your first time at one of these shows. Wait, this isn't an AWS show, so what are you doing here? >> I'm still trying to figure that out myself. When people invite me to go somewhere, "Do you know anything about insert topic here?" Absolutely, smile and bluff your way through. Eventually someone might call you on it, but that's tomorrow's problem, not quite today's. >> Yeah, I have this general rule of thumb: the less I know about something, the more I overdress to overcompensate. Oh, so here's the guy in the three piece suit. >> My primary skill is wearing a suit, everything else is just edging details. >> Alright, so let's set the stage for our audience here, Corey. As I've said, we've got the Foundation, we've got a lot of the big members, we've got some of the project people, but I'm really excited we actually have some excellent users here, because it is five years now since Kubernetes came onto the scene, of course built off of Borg from Google, and as Dan Kohn said in the opening keynote, he actually gave a nice historical lesson. The term he used is simultaneous invention, and basically those things that, you know, there are times where we argue, who created the light bulb first, or who did this and this? Because there were multiple times out there, and he said look, there were more than a dozen projects out there. >> Many of them open source or a little bit open as to these things like container orchestration, but it is Kubernetes that is the de facto standard today, and it's why so many people show up for this show, and there's such a large ecosystem around it. So you live in the Cloud world, what's your general view on CloudNative and Kubernetes and this whole kind of space? >> Well, going back to something you said a minute or two ago, I think there's something very strong to be said about this being defined by its users. I've never yet seen a successful paradigm takeoff in the world of technology that was vendor defined. At some point you wind up with these companies doing the digital equivalent of, here, we've crafted you this amazingly precise wrench, and you hand it to a user and the first thing they say is, wow, it's kind of a crappy hammer, but it's at least good for a first attempt. Tools are going to be used as users want to use them, and they define what the patterns look like. 
>> Yeah, so I'll give you the counterpoint there, because we understand if we ask users what they wanted, they wanted better buggy whips so we can go faster. To compare and contrast, what we had done a few years ago was OpenStack; it was user driven and it came out of NASA, and if it was good enough for the rocket scientists, it should be something that we can learn on, and Rackspace had done good and gave it to the open source community, and stepped back and let people use it. First of all, OpenStack, it's not dead, it's being used in the Telco world, it's being used outside of North America quite a bit, but we saw the kind of boom and bust of that. >> We are a long way past the heyday. >> The vendor ecosystem of OpenStack was, oh, it's an alternative to AWS, and maybe some way to get off the VMware licensing, and I've actually said it's funny if you listen to what happens in this ecosystem. Well, giving people the flexibility not to be totally locked in to AWS, and oh, it's built on Linux and therefore I might not want to have licensing from certain vendors. Still echoes from previously, but it is very different. >> Very much so, and I will say the world has changed. >> I was very involved in Eucalyptus, which was a bit of a different take on the idea, or the promise, of what OpenStack was going to be. What if you had cloud APIs in your own data center? In 2012 that seemed like a viable concern. The world we live in today of public cloud first for a lot of shops was by no means assured. >> Yeah, Marten Mickos, Cube alum by the way, fantastic leader, still heavily involved in open source. >> Very much so. >> One of those things I think he was a little bit ahead of his time on these. So Corey, one of the reasons, why are you here? You are here because I pulled you here, and we do pay you to be here as a host. You're not here for goodwill and that. Your customers are all users, and tend to be decent sized users, and they say Corey helps people with their Amazon bills. No, that's the AWS bills, not the, I have a pile of boxes with smiley faces on them, oh my God, what did I do around Christmas time. >> Exactly. >> So the discussion at the show is this whole hybrid and multi cloud world. When I talk to users they don't use those words. Cloud strategy, sure, my pile of applications, and how I'm updating some of them, and keeping some of them running, and working with that application portfolio and my data. All hugely important, but what do you hear from users, and where do things like cloud and multi cloud fit into their world? >> There are two basic archetypes of user that I tend to deal with. Because I deal with, as you mentioned, predominately large customers, you have the born in the cloud types who have more or less a single application. Picture a startup that hits meteoric growth and now is approaching or is in the IPO stage. They have a single application. They're generally all in on one provider, and the idea of going multi cloud is for auxiliary things. If we take a step back, for example, they're saying things like, oh, PagerDuty is a service that's not run by one of our major public cloud providers. There are a bunch of SaaS applications like that that factor in, but their infrastructure is predominately going to be based in one environment. The other large type of customer you'll tend to see is one of those multinational, very divisional organizations where they have a long legacy of being very data center first, because historically that was kind of the only option. 
And you'll start to see a bunch of different popup cloud providers inside those environments, but usually they stop at the line of business boundary, or very occasionally it's on a per workload basis. I'm not seeing people say, well, we're going to build this one application workload, and we want to be able to put that on Oracle Cloud, and Azure and GCP and AWS, and this thing that my cousin runs out of the Ozarks. No one wants to do that in the traditional sense, because as soon as you go down that path you are constrained to whatever the lowest common denominator across all those things is, and my cousin's data center in the Ozarks doesn't have a lot of frills. So you wind up trying to be able to deploy anywhere, but by doing that you are giving up any higher level offering. You are slowing yourself down. >> Yeah, the thing we've always been worried about is, back in the day when you talked about multi vendor, do we go by the standard, and then go to the least common denominator, and what has worked its way through the environment? That's what the customers want. Today, if I'm the user, agility is really one of the things that seems to be top of mind. What IT needs to do is respond to the speed of what the business needs, and a CloudNative environment, the way I look at it, has to be that lever to be able to help me deliver on the next thing, or change the thing, or update my thing to get that working. It was, so disclaimer, Red Hat is our headline sponsor here, we thank them for our presence, but actually it's a great conversation with OpenShift customers, and they didn't talk about OpenShift, OpenShift, OpenShift. They talk about their digital transformation. They talk about their data. They talk about the cool new things that they are able to do, and it was that the platform happened to be built on Kubernetes. That was the lever to help them do this. At the Google show where you were at, that was the same conversation we had, whether it is in GCP or whether it was in my own data center. You know, yes, we can do it with containers and everything like that. It was that lever to be able to help me modernize and run new apps and do it faster than I would've done it in the past. So it's that kind of progression that is interesting for me to hear, and there is this tendency now to be like, oh look, everybody is working together and it's a wonderful open source ecosystem. It's like, well look, the world today is definitely coopetition. Yes, you need to be up on stage, and if a customer says, I need to work with vendors A, B, C, and D, you better work with A, B, C, and D, or they will go and find an alternative, because there are alternatives out there. >> (Corey) Absolutely, and when a company embarks on a digital transformation and starts moving into public cloud, there are two reasons they are doing that. The first is for cost savings, in which case (laughs), let's talk, and the other is for a capability story, and you're not going to realize cost savings for a lot longer than you think you will. In any case, you are not going to realize the capability story if all you view public cloud as being is another place to run your VMs or now your containers. >> Yeah, so thank you. Corey, your title in your day job, you're a Cloud economist. >> I am, two words that no one can define. So no one calls me on it. >> Kubernetes, it's magical and free, right? >> That's what everyone tells me. It feels like right now we are sort of at peak hype as far as Kubernetes goes, and increasingly, whenever you see a technology that has gotten this level of adoption. We saw it with OpenStack, we've seen it with cloud, we've seen it with a bunch of things. We are starting to see it with Serverless as well. Where, what problem are you trying to solve? I'm not going to listen to the answer, today that answer is Kubernetes, and it seems like everyone's first project is their own resume. Great, there has to be a value proposition, there has to be a story for it, and I'm not suggesting that there isn't, but I think that it is being used as sort of an upscale snake oil in some cases, or serpent grease as we like to call it in some contexts. >> Yeah, and that's one of our jobs here, is to help extract the signal from the noise. We've got some good customers. We're going into the environment. One of the things I try to do in the open keynote is find that theme. Couple of years, for a couple of shows, it's been service mesh is the new hotness. We're talking about Istio, we're talking about Helm, we're talking about all these environments that say, okay, how do I pull together all the pieces of the application and manage that together? Because there's just, you know, moving up the stack, and getting closer to that application. We'll talk about Serverless in one of the other segments later this week, I'm sure, because you know there's the, okay, here Knative can help bridge that gap, but is that what I need? We talk a lot about Kubernetes, is how much is in the public cloud versus in my data center, and some of the guys we talk to, Serverless is in the public cloud. We'll call it functions as a service if you put it in your own data center, because while, yes, there are servers everywhere, if you actually manage those racks and everything like that, it probably doesn't make sense to call it Serverless. We try not to get into too many semantic arguments here on theCUBE. >> You can generally tend to run arbitrary code anywhere; the premise of Serverless, to my mind, is more about the event model, and you don't get that on-prem in the same way that you do in a large public cloud provider, and whether that is the right thing or not, I'm not prepared to say, but it's important for that to be understood as you are going down that path. >> So Corey, any themes that jumped out for you, or things that you want to poke at, at the show? For me, Kubernetes has really kind of crossed that chasm, and we do have large crowds. You can see the throngs of people behind us, and users that have great stories to tell, and CNCF itself, you know, has a lot of projects out there, and we're trying to make some sense of all those pieces. There's six now that have graduated, and Fluentd is the most recent, but a lot of interesting things from the sandbox, through that kind of incubating phase there, and we're going to dig into some of the pieces there. Some of them build on top of Kubernetes, some of them are just part of this whole Cloud Native ecosystem, and therefore related but don't necessarily need it, and can play in all these various worlds. What about you? >> For me, I want to dig a little bit more into the idea of multi cloud. I have been making a bit of a stink for the past year with the talk called the myth of multi cloud, where it's not something I generally advise as a best practice, and I'm holding to that fairly well, but what I want to do is have conversations with people who are pursuing multi cloud strategies and figure out, first, are they in fact pursuing the same thing, so we're defining our terms and talking on the same page, and secondly, I want to get a little more context and insight into why they are doing that, and what that looks like for them. Is it they want to be able to run different workloads in different places? Great, that's fair. The same workload run everywhere, on the lowest common denominator? Well, let's scratch below the surface a bit, and find out why that is. >> Yeah, and Corey, you're spot on, and no surprise, because you talk to users on this. From our research side on our team, we really say multi cloud or hybrid cloud. Hybrid cloud means you've got your own data centers, as opposed to multi cloud, which could be any of them. There's a little bit of a Venn diagram you could do between that. >> But I am prepared to be wrong as well. I'm a company of two people. I don't have a research department, that's called the spare time I get when I can't sleep at night. So I don't have data, I have anecdata. I can talk about individual use cases, but then I'm telling individual company stories that I'm generally not authorized to tell. So it's more a question now of starting to speak to a broader base. >> So just to finish on the thought from our team, it's everything from, I have all of these pieces and they're really not connected, and I'm just trying to get my arms around them, through some of the solutions. Like in the AWS world we're looking at the VMware on AWS, and the Outposts type of solution. That pull-out, or what Azure does with Azure Stack, and the like, or even companies like IBM and Oracle, where they have a stack that can be both in the public cloud and the private cloud. Those kind of fully integrated pieces, versus the, right now I'm just putting applications in certain areas, and then how do I manage data protection, how do I manage security across all these environments? It is a heterogeneous mess that we had, and I spent a lot of my career trying to help us break down those silos, get away from the cylinders of excellence as we called them, as we worked with more traditional IT. So how much are we fighting that? I will just tell you that most of the people we're going to have on theCUBE probably aren't going to want to get into that. They'll be happy to talk about their piece, and how they work with this broad, wonderful ecosystem, but we can drill into where Kubernetes fits. We've got the five year anniversary of Kubernetes. We'll be talking to some of the people that helped create this technology, and lots of the various pieces. So with that, Corey, want to give you the final take here, before we talk about the stickers, and some of the rest? >> Oh absolutely, I think it's a fascinating show. I think that they're the right people who are attending, to give valuable perspective that, quite frankly, you're not going to get almost anywhere else. It's just a fascinating blend of people from large companies, small companies, giant vendors, and of course the middleware types, who are trying to effectively stand between, in many cases, customers and the raw vendors, for a variety of very good reasons. Partner strategies are important. I'm very curious to see what that becomes, and how that tends to unfold in the next two days. >> Okay, so theCUBE, by the way, we're not only a broadcast, but we are part of the community. We understand this network, and that is why Corey and I, you know, we come with stickers. So we've got these lovely stickers in partnership with Women Who Go, that made this logo for us for the Seattle show, and I have a few left, so if you come on by. Corey has his platypus, Last Week in AWS. So come on by where we are, you get some stickers, and of course, hit us up on Twitter if you have any questions. We're always looking for the community and the network to help us with the data, and help us pull everything apart. So for Corey Quinn, I'm Stu Miniman. Two days of live wall to wall coverage will continue very soon, and thank you as always for watching theCUBE. (Fading Electronic Music)

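Picking up Corey's point above that Serverless is less about where the servers sit and more about the event model, here is a minimal sketch written in the AWS Lambda handler style. The handler(event, context) signature is the real Lambda convention; the S3-style event shape and the names used below are illustrative assumptions, not anything taken from the interview.

# A minimal sketch of the event-driven model: the platform invokes
# handler(event, context) once per event; there is no server or loop to manage.
# The event shape (an S3-style object-created notification) is illustrative only.
import json

def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # React to the event, e.g. kick off processing for the new object.
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}

if __name__ == "__main__":
    # Local smoke test with a hand-built event.
    fake_event = {"Records": [{"s3": {"bucket": {"name": "demo-bucket"},
                                      "object": {"key": "uploads/report.csv"}}}]}
    print(handler(fake_event, context=None))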
Published Date : May 21 2019


Nutanix .Next | NOLA | Day 1 | AM Keynote


 

>> PA Announcer: Off the plastic tab, and we'll turn on the colors. Welcome to New Orleans. ♪ This is it ♪ ♪ The part when I say I don't want ya ♪ ♪ I'm stronger than I've been before ♪ ♪ This is the part when I set your free ♪ (New Orleans jazz music) ("When the Saints Go Marching In") (rock music) >> PA Announcer: Ladies and gentleman, would you please welcome state of Louisiana chief design officer Matthew Vince and Choice Hotels director of infrastructure services Stacy Nigh. (rock music) >> Well good morning New Orleans, and welcome to my home state. My name is Matt Vince. I'm the chief design office for state of Louisiana. And it's my pleasure to welcome you all to .Next 2018. State of Louisiana is currently re-architecting our cloud infrastructure and Nutanix is the first domino to fall in our strategy to deliver better services to our citizens. >> And I'd like to second that warm welcome. I'm Stacy Nigh director of infrastructure services for Choice Hotels International. Now you may think you know Choice, but we don't own hotels. We're a technology company. And Nutanix is helping us innovate the way we operate to support our franchisees. This is my first visit to New Orleans and my first .Next. >> Well Stacy, you're in for a treat. New Orleans is known for its fabulous food and its marvelous music, but most importantly the free spirit. >> Well I can't wait, and speaking of free, it's my pleasure to introduce the Nutanix Freedom video, enjoy. ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ Ah, ah, ♪ ♪ Ah, ah, ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I lose everything, so I can sing ♪ ♪ Hallelujah I'm free ♪ ♪ I'm free, I'm free, I'm free, I'm free ♪ ♪ Gritting your teeth, you hold onto me ♪ ♪ It's never enough, I'm never complete ♪ ♪ Tell me to prove, expect me to lose ♪ ♪ I push it away, I'm trying to move ♪ ♪ I'm desperate to run, I'm desperate to leave ♪ ♪ If I lose it all, at least I'll be free ♪ ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> PA Announcer: Ladies and gentlemen, please welcome chief marketing officer Ben Gibson ♪ Ah, ah ♪ ♪ Ah, ah ♪ ♪ Hallelujah, I'm free ♪ >> Welcome, good morning. >> Audience: Good morning. >> And welcome to .Next 2018. There's no better way to open up a .Next conference than by hearing from two of our great customers. And Matthew, thank you for welcoming us to this beautiful, your beautiful state and city. And Stacy, this is your first .Next, and I know she's not alone because guess what It's my first .Next too. And I come properly attired. In the front row, you can see my Nutanix socks, and I think my Nutanix blue suit. And I know I'm not alone. I think over 5,000 people in attendance here today are also first timers at .Next. And if you are here for the first time, it's in the morning, let's get moving. I want you to stand up, so we can officially welcome you into the fold. Everyone stand up, first time. All right, welcome. (audience clapping) So you are all joining not just a conference here. This is truly a community. This is a community of the best and brightest in our industry I will humbly say that are coming together to share best ideas, to learn what's happening next, and in particular it's about forwarding not only your projects and your priorities but your careers. There's so much change happening in this industry. 
It's an opportunity to learn what's coming down the road and learn how you can best position yourself for this whole new world that's happening around cloud computing and modernizing data center environments. And this is not just a community, this is a movement. And it's a movement that started quite awhile ago, but the first .Next conference was in the quiet little town of Miami, and there was about 800 of you in attendance or so. So who in this hall here were at that first .Next conference in Miami? Let me hear from you. (audience members cheering) Yep, well to all of you grizzled veterans of the .Next experience, welcome back. You have started a movement that has grown and this year across many different .Next conferences all over the world, over 20,000 of your community members have come together. And we like to do it in distributed architecture fashion just like here in Nutanix. And so we've spread this movement all over the world with .Next conferences. And this is surging. We're also seeing just today the current count 61,000 certifications and climbing. Our Next community, close to 70,000 active members of our online community because .Next is about this big moment, and it's about every other day and every other week of the year, how we come together and explore. And my favorite stat of all. Here today in this hall amongst the record 5,500 registrations to .Next 2018 representing 71 countries in whole. So it's a global movement. Everyone, welcome. And you know when I got in Sunday night, I was looking at the tweets and the excitement was starting to build and started to see people like Adile coming from Casablanca. Adile wherever you are, welcome buddy. That's a long trip. Thank you so much for coming and being here with us today. I saw other folks coming from Geneva, from Denmark, from Japan, all over the world coming together for this moment. And we are accomplishing phenomenal things together. Because of your trust in us, and because of some early risk candidly that we have all taken together, we've created a movement in the market around modernizing data center environments, radically simplifying how we operate in the services we deliver to our businesses everyday. And this is a movement that we don't just know about this, but the industry is really taking notice. I love this chart. This is Gartner's inaugural hyperconvergence infrastructure magic quadrant chart. And I think if you see where Nutanix is positioned on there, I think you can agree that's a rout, that's a homerun, that's a mic drop so to speak. What do you guys think? (audience clapping) But here's the thing. It says Nutanix up there. We can honestly say this is a win for this hall here. Because, again, without your trust in us and what we've accomplished together and your partnership with us, we're not there. But we are there, and it is thanks to everyone in this hall. Together we have created, expanded, and truly made this market. Congratulations. And you know what, I think we're just getting started. The same innovation, the same catalyst that we drove into the market to converge storage network compute, the next horizon is around multi-cloud. The next horizon is around whether by accident or on purpose the strong move with different workloads moving into public cloud, some into private cloud moving back and forth, the promise of application mobility, the right workload on the right cloud platform with the right economics. Economics is key here. 
If any of you have a teenager out there, and they have a hold of your credit card, and they're doing something online or the like. You get some surprises at the end of the month. And that surprise comes in the form of spiraling public cloud costs. And this isn't to say we're not going to see a lot of workloads born and running in public cloud, but the opportunity is for us to take a path that regains control over infrastructure, regain control over workloads and where they're run. And the way I look at it for everyone in this hall, it's a journey we're on. It starts with modernizing those data center environments, continues with embracing the full cloud stack and the compelling opportunity to deliver that consumer experience to rapidly offer up enterprise compute services to your internal clients, lines of businesses and then out into the market. It's then about how you standardize across an enterprise cloud environment, that you're not just the infrastructure but the management, the automation, the control, and running any tier one application. I hear this everyday, and I've heard this a lot already this week about customers who are all in with this approach and running those tier one applications on Nutanix. And then it's the promise of not only hyperconverging infrastructure but hyperconverging multiple clouds. And if we do that, this journey the way we see it what we are doing is building your enterprise cloud. And your enterprise cloud is about the private cloud. It's about expanding and managing and taking back control of how you determine what workload to run where, and to make sure there's strong governance and control. And you're radically simplifying what could be an awfully complicated scenario if you don't reclaim and put your arms around that opportunity. Now how do we do this different than anyone else? And this is going to be a big theme that you're going to see from my good friend Sunil and his good friends on the product team. What are we doing together? We're taking all of that legacy complexity, that friction, that inability to be able to move fast because you're chained to old legacy environments. I'm talking to folks that have applications that are 40 years old, and they are concerned to touch them because they're not sure if they can react if their infrastructure can meet the demands of a new, modernized workload. We're making all that complexity invisible. And if all of that is invisible, it allows you to focus on what's next. And that indeed is the spirit of this conference. So if the what is enterprise cloud, and the how we do it different is by making infrastructure invisible, data centers, clouds, then why are we all here today? What is the binding principle that spiritually, that emotionally brings us all together? And we think it's a very simple, powerful word, and that word is freedom. And when we think about freedom, we think about as we work together the freedom to build the data center that you've always wanted to build. It's about freedom to run the applications where you choose based on the information and the context that wasn't available before. It's about the freedom of choice to choose the right cloud platform for the right application, and again to avoid a lot of these spiraling costs in unanticipated surprises whether it be around security, whether it be around economics or governance that come to the forefront. It's about the freedom to invent. It's why we got into this industry in the first place. We want to create. 
We want to build things, not keep the lights on, not be chained to mundane tasks day by day. And it's about the freedom to play. And I hear this time and time again. My favorite tweet from a Nutanix customer to this day is: just updated a lot of nodes at 38,000 feet on United Wifi, on my way to spend vacation with my family. Freedom to play. This to me is emotionally what brings us all together, and what you saw with the Freedom video earlier, and what you see here, is this new story, because we want to go out and spread the word and not only talk about the enterprise cloud, not only talk about how we do it better, but talk about why it's so compelling to be a part of this hall here today. Now just one note of housekeeping for everyone out there, because I don't want anyone to take a wrong turn as they come to this beautiful convention center here today. A lot of freedom going on in this convention center. As luck may have it, there's another conference going on a little bit down that way based on another high growth, disruptive industry. Now, MJBizCon Next, and by coincidence it's also called next. And I have to admire the creativity. I have to admire that we do share a, hey, high growth business model here. And in case you're not quite sure what this conference is about, I'm the head of marketing here, I have to show the tagline of this. And I read the tagline: from license to launch and beyond, the future of the, now if I can replace that blank with our industry, I don't know, to me it sounds like a new, cool Sunil product launch. Maybe launching a new subscription service or the like. Stay tuned, you never know. I think they're going to have a good time over there. I know we're going to have a wonderful week here both to learn as well as have a lot of fun, particularly in our customer appreciation event tonight. I want to spend a very few important moments on .Heart. .Heart is Nutanix's initiative to promote diversity in the technology arena. In particular, we have a focus on advancing the careers of women and young girls that we want to encourage to move into STEM and high tech careers. You have the opportunity to engage this week with this important initiative. Please roll the video, and let's learn more about how you can do so. >> Video Plays (electronic music) >> So all of you have received these .Heart tokens. You have the freedom to go and choose which of the four deserving charities can receive donations to really advance our cause. So I thank you for your engagement there. And this community is behind .Heart. And it's a very important one. So thank you for that. .Next is not the community, the moment it is, without our wonderful partners. These are our amazing sponsors. Yes, it's about sponsorship. It's also about how we integrate together, how we innovate together, and we're about an open community. And so I want to thank all of these names up here for your wonderful sponsorship of this event. I encourage everyone here in this room to spend time, get acquainted, get reacquainted, learn how we can make wonderful music happen together, wonderful music here in New Orleans happen together. .Next isn't .Next without a few cool surprises. Surprise number one, we have a contest. This is a still shot from the Freedom video you saw right before I came on. We have strategically placed a lucky seven Nutanix Easter eggs in this video. And if you go to Nutanix.com/freedom, watch the video. You may have to use the little scrubbing feature to slow down 'cause some of these happen quickly. 
You're going to find some fun, clever Easter eggs. List all seven, tweet that out, or as many as you can, tweet that out with hashtag nextconf, C, O, N, F, and we'll have a random drawing for an all expenses paid free trip to .Next 2019. And just to make sure everyone understands the Easter egg concept, there's an eighth one here that's actually someone that's quite famous in our circles. If you see on this still shot, there's someone in the back there with a red jacket on. That's not just anyone. We're targeting in here. That is our very own Julie O'Brien, our senior vice president of corporate marketing. And you're going to hear from Julie later on here at .Next. But Julie and her team are the engine and the creativity behind not only our new Freedom campaign but more importantly everything that you experience here this week. Julie and her team are amazing, and we can't wait for you to experience what they've pulled together for you. Another surprise: if you go and visit our Freedom booths and share your stories. So they're like video booths, you share your success stories, your partnerships, your journey that I talked about, you will be entered to win a beautiful Nutanix brand compliant, look at those beautiful colors, bicycle. And it's not just any bicycle. It's a beautiful bicycle made by our beautiful customer Trek. I actually have a Trek bike. I love cycling. Unfortunately, I'm not eligible, but all of you are. So please share your stories in the Freedom Nutanix booths and put yourself in the running, or in the cycling, to get this prize. One more thing I wanted to share here. Yesterday we had a great time. We had our inaugural Nutanix hackathon. This hackathon brought together folks that were in devops practices, many of you that are in this room. We sold out. We thought maybe we'd get four or five teams. We had to shut down at 14 teams that were paired together with a Nutanix mentor, and you coded. You used our REST APIs. You built new apps that integrated in with Prism and Calm. And it was wonderful to see this. Everyone I talked to had a great time on this. We had three winners. In third place, we had team Copper, or team bronze, but team Copper. Silver, Not That Special, they're very humble, kind of like one of our key mission statements. And the grand prize winner was We Did It All for the Cookies. And you saw them coming in on our Mardi Gras float here. We Did It All for the Cookies, they did this very creative job. They leveraged an Apple Watch. They were lighting up VMs at a moment's notice, utilizing a lot of their coding skills. Congratulations to all three; first, second, and third all receive $2,500. And then each of them were able to choose a charity to deliver another $2,500, including Ronald McDonald House for the winner, We Did It All for the McDonald Land Cookies, I suppose, to move forward. So look for us to do more of these kinds of events, because we want to bring together infrastructure and application development, and this is a great, I think, start for us in this community to be able to do so. With that, who's ready to hear from Dheeraj? You ready to hear from Dheeraj? (audience clapping) I'm ready to hear from Dheeraj, and not just 'cause I work for him. It is my distinct pleasure to welcome on the stage our CEO, cofounder and chairman Dheeraj Pandey. ("Free" by Broods) ♪ Hallelujah, I'm free ♪ >> Thank you Ben and good morning everyone. >> Audience: Good morning. >> Thank you so much for being here. 
It's just such an elation when I'm thinking about the Mardi Gras crowd that came here, the partners, the customers, the NTCs. I mean there are some great NTCs up there I could relate to because they're on Slack as well. How many of you are in the Nutanix internal Slack channel? Probably 5%. We would love to actually see this community grow from here, 'cause this is not the only event where we would love to meet you. We would love to actually do this in real time, bite size communication on our own internal Slack channel itself. Now today, we're going to talk about a lot of things, but a lot of hard things, a lot of things that take time to build and have evolved as the industry itself has evolved. And one of the hard things that I want to talk about is multi-cloud. Multi-cloud is a really hard problem 'cause it's full of paradoxes. It's really about doing things that you believe are opposites of each other. It's about frictionless, but it's also about governance. It's about being simple, and it's also about being secure at the same time. It's about delight, it's about reducing waste, it's about owning, and renting, and finally it's also about core and edge. How do you really make this big at a core data center, whether it's public or private? Or how do you really shrink it down to one or two nodes at the edge, because that's where your machines are, that's where your people are? So this is a really hard problem. And as you hear from Sunil and the gang there, you'll realize how we've actually evolved our solutions to really cater to some of these. One of the approaches that we have used to really solve some of these hard problems is to have machines do more, and I said a lot of things in those four words, have machines do more. Because if you double-click on that sentence, it really means we're letting design be at the core of this. And how do you really design data centers, how do you really design products for the data center that hush all the escalations, the details, the complexities, use machine learning and AI and, you know, figure out anomaly detection and correlations and pattern matching? There's a ton of things that you need to do to really have machines do more. But along the way, the important lesson is to make machines invisible, because when machines become invisible, it actually makes something else visible. It makes you visible. It makes governance visible. It makes applications visible, and it makes services visible. A lot of things, it makes teams visible, careers visible. So while we're really talking about invisibility of machines, we're talking about visibility of people. And that's how we really brought all of you together in this conference as well, because it makes all of us shine, including our products, and your careers, and your teams as well. And I try to define the word customer success. You know it's one of the favorite words that I'm actually using. We've just hired a great leader in customer success recently who's really going to focus on this relatively hard problem, yet another hard problem, of customer success. We think that customer success, true customer success, is possible when we have machines tend towards invisibility. But along the way, when we do that, we make humans tend towards freedom. So that's the real connection, the yin-yang of machines and humans, that Nutanix is really all about. And that's why design is at the core of this company. And when I say design, I mean reducing friction. And it's really about reducing friction. 
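A toy illustration of the anomaly-detection idea Dheeraj mentions when he talks about having machines do more. This is not a description of how Prism actually implements it; it simply flags metric samples that sit several standard deviations away from a rolling mean, and the window size and threshold below are arbitrary choices for the sketch.

# A toy sketch of anomaly detection: flag samples that sit more than
# `threshold` standard deviations away from a rolling mean. Illustrative only.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples that look anomalous."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= 5:                      # need a little history first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    # Flat-ish latency readings with one obvious spike at index 30.
    latencies = [10.0 + (i % 3) * 0.5 for i in range(60)]
    latencies[30] = 45.0
    print(list(detect_anomalies(latencies)))       # -> [(30, 45.0)]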
And everything we do, the most mundane of things which could be about migrating applications, spinning up VMs, self-service portals, automatic upgrades, and automatic scale out, and all the things we do is about reducing friction which really makes machines become invisible and humans gain freedom. Now one of the other convictions we have is how all of us are really tied at the hip. You know our success is tied to your success. If we make you successful, and when I say you, I really mean Main Street. Main Street being customers, and partners, and employees. If we make all of you successful, then we automatically become successful. And very coincidentally, Main Street and Wall Street are also tied in that very same relation as well. If we do a great job at Main Street, I think the Wall Street customer, i.e. the investor, will take care of itself. You'll have you know taken care of their success if we took care of Main Street success itself. And that's the narrative that our CFO Dustin Williams actually went and painted to our Wall Street investors two months ago at our investor day conference. We talked about a $3 billion number. We said look as a company, as a software company, we can go and achieve $3 billion in billings three years from now. And it was a telling moment for the company. It was really about talking about where we could be three years from now. But it was not based on a hunch. It was based on what we thought was customer success. Now realize that $3 billion in pure software. There's only 10 to 15 companies in the world that actually have that kind of software billings number itself. But at the core of this confidence was customer success, was the fact that we were doing a really good job of not over promising and under delivering but under promising starting with small systems and growing the trust of the customers over time. And this is one of the statistics we actually talk about is repeat business. The first dollar that a Global 2000 customer spends in Nutanix, and if we go and increase their trust 15 times by year six, and we hope to actually get 17 1/2 and 19 times more trust in the years seven and eight. It's very similar numbers for non Global 2000 as well. Again, we go and really hustle for customer success, start small, have you not worry about paying millions of dollars upfront. You know start with systems that pay as they grow, you pay as they grow, and that's the way we gain trust. We have the same non Global 2000 pay $6 1/2 for the first dollar they've actually spent on us. And with this, I think the most telling moment was when Dustin concluded. And this is key to this audience here as well. Is how the current cohorts which is this audience here and many of them were not here will actually carry the weight of $3 billion, more than 50% of it if we did a great job of customer success. If we were humble and honest and we really figured out what it meant to take care of you, and if we really understood what starting small was and having to gain the trust with you over time, we think that more than 50% of that billings will actually come from this audience here without even looking at new logos outside. So that's the trust of customer success for us, and it takes care of pretty much every customer not just the Main Street customer. It takes care of Wall Street customer. It takes care of employees. It takes care of partners as well. Now before I talk about technology and products, I want to take a step back 'cause many of you are new in this audience. 
And I think that it behooves us to really talk about the history of this company. Like we've done a lot of things that started out as science projects. In fact, I see some tweets out there and people actually laugh at Nutanix cloud. And this is where we were in 2012. So if you take a step back and think about where the company was almost seven, eight years ago, we were up against giants. There was a $30 billion industry around network attached storage, and storage area networks and blade servers, and hypervisors, and systems management software and so on. So what did we start out with? Very simple premise that we will collapse the architecture of the data center because three tier is wasteful and three tier is not delightful. It was a very simple hunch, we said we'll take rack mount servers, we'll put a layer of software on top of it, and that layer of software back then only did storage. It didn't do networks and security, and it ran on top of a well known hypervisor from VMware. And we said there's one non negotiable thing. The fact that the design must change. The control plane for this data center cannot be the old control plane. It has to be rethought through, and that's why Prism came about. Now we went and hustled hard to add more things to it. We said we need to make this diverse because it can't just be for one application. We need to make it CPU heavy, and memory heavy, and storage heavy, and flash heavy and so on. And we built a highly configurable HCI. Now all of them are actually configurable as you know of today. And this was not just innovation in technologies, it was innovation in business and sizing, capacity planning, quote to cash business processes. A lot of stuff that we had to do to make this highly configurable, so you can really scale capacity and performance independent of each other. Then in 2014, we did something that was very counterintuitive, but we've done this on, and on, and on again. People said why are you disrupting yourself? You know you've been doing a good job of shipping appliances, but we also had the conviction that HCI was not about hardware. It was about a form factor, but it was really about an operating system. And we started to compete with ourselves when we said you know what we'll do arm's length distribution, we'll do arm's length delivery of products when we give our software to our Dell partner, to Dell as a partner, a loyal partner. But at the same time, it was actually seen with a lot of skepticism. You know these guys are wondering how to really make themselves vanish because they're competing with themselves. But we also knew that if we didn't compete with ourselves someone else will. Now one of the most controversial decisions was really going and doing yet another hypervisor. In the year 2015, it was really preposterous to build yet another hypervisor. It was a very mature market. This was coming probably 15 years too late to the market, or at least 10 years too late to market. And most people said it shouldn't be done because hypervisor is a commodity. And that's the word we latched on to. That this commodity should not have to be paid for. It shouldn't have a team of people managing it. It should actually be part of your overall stack, but it should be invisible. Just like storage needs to be invisible, virtualization needs to be invisible. But it was a bold step, and I think you know at least when we look at our current numbers, 1/3rd of our customers are actually using AHV. 
At least every quarter that we look at it, our new deployments, at least 35% of it is actually being used on AHV itself. And again, a very preposterous thing to have said five years ago, four years ago to where we've actually come. Thank you so much for all of you who've believed in the fact that virtualization software must be invisible and therefore we should actually try out something that is called AHV today. Now we went and added Lenovo to our OEM mix, started to become even more of a software company in the year 2016. Went and added HP and Cisco in some of very large deals that we talk about in earnings call, our HP deals and Cisco deals. And some very large customers who have procured ELAs from us, enterprise license agreements from us where they want to mix and match hardware. They want to mix Dell hardware with HP hardware but have common standard Nutanix entitlements. And finally, I think this was another one of those moments where we say why should HCI be only limited to X86. You know this operating systems deserves to run on a non X86 architecture as well. And that gave birth to this idea of HCI and Power Systems from IBM. And we've done a great job of really innovating with them in the last three, four quarters. Some amazing innovation that has come out where you can now run AIX 7.x on Nutanix. And for the first time in the history of data center, you can actually have a single software not just a data plane but a control plane where you can manage an IBM farm, an Power farm, and open Power farm and an X86 farm from the same control plane and have you know the IBM farm feed storage to an Intel compute farm and vice versa. So really good things that we've actually done. Now along the way, something else was going on while we were really busy building the private cloud, we knew there was a new consumption model on computing itself. People were renting computing using credit cards. This is the era of the millennials. They were like really want to bypass people because at the end of the day, you know why can't computing be consumed the way like eCommerce is? And that devops movement made us realize that we need to add to our stack. That stack will now have other computing clouds that is AWS and Azure and GCP now. So similar to the way we did Prism. You know Prism was really about going and making hypervisors invisible. You know we went ahead and said we'll add Calm to our portfolio because Calm is now going to be what Prism was to us back when we were really dealing with multi hypervisor world. Now it's going to be multi-cloud world. You know it's one of those things we had a gut around, and we really come to expect a lot of feedback and real innovation. I mean yesterday when we had the hackathon. The center, the epicenter of the discussion was Calm, was how do you automate on multiple clouds without having to write a single line of code? So we've come a long way since the acquisition of Calm two years ago. I think it's going to be a strong pillar in our overall product portfolio itself. Now the word multi-cloud is going to be used and over used. In fact, it's going to be blurring its lines with the idea of hyperconvergence of clouds, you know what does it mean. We just hope that hyperconvergence, the way it's called today will morph to become hyperconverged clouds not just hyperconverged boxes which is a software defined infrastructure definition itself. But let's focus on the why of multi-cloud. Why do we think it can't all go into a public cloud itself? 
The one big reason is just laws of the land. There's data sovereignty and computing sovereignty, regulations and compliance, because of which you need to be where the government, the regulations, the compliance rules want you to be. And by the way, that's just one reason why the cloud will have to disperse itself. It can't just be 10, 20 large data centers around the world itself, because you have 200 plus countries and half of computing actually gets done outside the US itself. So it's a really important, very relevant point about the why of multi-cloud. The second one is just simple laws of physics. You know, if there are machines at the edge, and they're producing so much data, you can't bring all the data to the compute. You have to take the compute, which is stateless, it's an app. You take the app to where the data is, because the network is the enemy. The network has always been the enemy. And when we thought we'd made fatter networks, you've just produced more data as well. So this just goes without saying that you take something that's stateless, that's without gravity, that's lightweight, which is compute and the application, and push it close to where the data itself is. And the third one, which is related, is just latency reasons, you know? And it's not just about machine latency and electrons traveling at the speed of light, and you can't defy the speed of light. It's also about human latency. It's also about multiple teams saying we need to federate and delegate, and we need to push things down to where the teams are as opposed to expecting everybody to come to a very large computing power itself. So all in all, the way things are, there will be at least three different ways of looking at multi-cloud itself. There's a centralized core cloud. We all go and relate to this because we've seen large data centers and so on. And that's the back office workhorse. It will crunch numbers. It will do processing. It will do a ton of things that will go and produce results for, you know, how we run our businesses. But there's also the dispersal of the cloud, the ROBO cloud. And this is the front office server that's really serving. It's a cloud that's going to serve people. It's going to be closer to people, and that's what a ROBO cloud is. We have a ton of customers out here who actually use Nutanix in their ROBO environments as one node, two node, three node, five node servers, and it just collapses the entire server closet room in these ROBOs into something really, really small and minuscule. And finally, there's going to be another dispersed edge cloud because that's where the machines are, that's where the data is. And there's going to be an IoT machine fog, because we need to miniaturize computing to something even smaller, maybe something that can really land in the palm of your hand, a mini server which is a PC-like server, but you need to run everything that's enterprise grade. You should be able to go and upgrade them and monitor them and analyze them. You know, do enough computing up there, maybe event-based processing that can actually happen. In fact, there's some great innovation that we've done at the edge with IoT that I'd love for all of you to actually attend some sessions around as well. So with that being said, we have a hole in the stack. And that hole is probably one of the hardest problems that we've been trying to solve for the last two years. And Sunil will talk a lot about that. This idea of hybrid. The hybrid of multi-cloud is one of the hardest problems.
Why? Because we're talking about really blurring the lines between owning and renting, where you have a single-tenant environment, which is your data center, and a multi-tenant environment, which is the service provider's data center, and the two must look the same. And making the two look the same is that hard a problem, not just for burst-out capacity, not just for security, not just for identity but also for networks. Like how do you blur the lines between networks? How do you blur the lines for storage? How do you really blur the lines for a single pane of glass where you can think of availability zones that look highly symmetric even though they're not, because one of 'em is owned by you, and it's single-tenant. The other one is not owned by you, that's multi-tenant itself. So there are some really hard problems in hybrid that you'll hear Sunil talk about, and the team. And some great strides that we've actually made in the last 12 months of really working on Xi itself. And that completes the picture now in terms of how we believe the state of computing will be going forward. So what are the must-haves of a multi-cloud operating system? We talked about marketplace, which is catalogs and automation. There's a ton of orchestration that needs to be done for multi-cloud to come together, because now you have a self-service portal which is providing an eCommerce view. It's really about, you know, getting to do a lot of requests and workflows without having people come in the way, without even having tickets. There's no need for tickets if you can really start to think like a self-service portal, as if you're just transacting eCommerce with machines and portals themselves. Obviously the next one is networking and security. You need to blur the lines between on-prem and off-prem itself. These two play a huge role. And there's going to be a ton of details that you'll see Sunil talk about. But finally, what I want to focus on for the rest of the talk itself here is governance and compliance. This is a hard problem, and it's a hard problem because things have evolved. So I'm going to take a step back. Last 30 years of computing, how have consumption models changed? So think about it. 30 years ago, we were making decisions for 10 plus years, you know? Mainframe, at least 10 years, probably 20 plus years' worth of decisions. These were decisions that were extremely waterfall-ish. Make tens of millions of dollars' worth of investment for a device that we'd buy for at least 10 to 20 years. Now as we moved to client-server, that thing actually shrunk. Now you're talking about five years' worth of decisions, and these things were smaller. So there's a little bit more velocity in our decisions. We were not making as waterfall-ish decisions as we used to with mainframes. But still five years; talk about virtualized, three tier, maybe three to five year decisions. You know, they're still relatively big decisions that we were making with compute and storage and SAN fabrics and virtualization software and systems management software and so on. And here comes Nutanix, and we said no, no. We need to make it smaller. It has to become smaller because, you know, we need to make more agile decisions. We need to add machines every week, every month as opposed to adding, you know, machines every three to five years. And we need to be able to upgrade them at any point in time. You can do the upgrades every month if you had to, every week if you had to and so on. So really about more agility.
And yet, we were not complete, because there's another evolution going on off-prem in the public cloud where people are going and doing reserved instances. But more than that, they were doing on-demand stuff where now the decision was days to weeks. Some of these units of compute were being rented for days to weeks, not years. And if you needed something more, you'd shift a little to the left and use reserved instances. And then spot pricing, you could do spot pricing for hours, and finally lambda functions. Now you could do function-as-a-service where things could actually be running only for minutes, not even hours. So as you can see, there's a wide spectrum where when you move to the right, you get more elasticity, and when you move to the left, you're talking about predictable decision making. And in fact, it goes from minutes on one side to tens of years on the other itself. And we hope to actually go and blur the lines between where NTNX is today, where you see Nutanix right now, to where we really want to be with reserved instances and on demand. And that's the real ask of Nutanix. How do you take care of this discontinuity? Because when you're owning things, you actually end up here, and when you're renting things, you end up here. What does it mean to really blur the lines between these two? Because people do want to make decisions that are better than reserved instances in the public cloud. We'll talk about why reserved instances, which look like a proxy for Nutanix, are still very, very wasteful even though you might think they're delightful. So what does it mean for on-prem and off-prem? You know, you talk about cost governance, there's security compliance. These high velocity decisions we're actually making, you know, where sometimes you could be right on cost but wrong on security, but sometimes you could be right on security but wrong on cost. We need to really figure out how machines make some of these decisions for us, how software helps us decide do we have the right balance between cost governance and security compliance itself. And to get it right, we have introduced our first SaaS service called Beam. And to talk more about Beam, I want to introduce Vijay Rayapati, who's the general manager of Beam engineering, to come up on stage and talk about Beam itself. Thank you Vijay. (rock music) So you've been here a couple of months now? >> Yes. >> At the same time, you spent the last seven, eight years really handling AWS. Tell us more about it. >> Yeah, so we spent a lot of time over the last five years at Minjar trying to understand, you know, how customers are really consuming in this new world for their workloads. So essentially what we tried to do is understand the consumption models, workload patterns, and also build algorithms and apply intelligence to say how can we lower this cost and, you know, improve compliance of their workloads? And now with Nutanix what we're trying to do is how can we converge this consumption, right? Because what happens here is most customers start with on-demand kind of consumption thinking it's really easy, but the total cost of ownership is so high. As the workload elasticity increases, people go towards spot or autoscaling, but then you need a lot more automation, and that's something Calm can help them with. But as predictability of the workload increases, then you need to move towards reserved instances, right, to lower costs.
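To make the waste argument that follows concrete, here is a minimal back-of-the-envelope sketch in Python. The prices and utilization figures are illustrative assumptions, not Beam data or actual AWS rates; the point is only that a reserved instance you pay for around the clock can cost more per consumed hour than on demand once utilization drops.

```python
# Illustrative comparison of on-demand vs. reserved instance (RI) cost.
# All prices and utilization figures below are assumptions for the sake
# of the example, not actual cloud pricing or Beam data.

HOURS_PER_YEAR = 8760

def effective_cost_per_used_hour(hourly_rate, utilization):
    """Cost per hour actually consumed, given the fraction of hours used.

    An RI is billed for every hour of the term, whether or not it is used.
    """
    used_hours = HOURS_PER_YEAR * utilization
    total_cost = hourly_rate * HOURS_PER_YEAR
    return total_cost / used_hours

on_demand_rate = 0.10   # $/hour, hypothetical; billed only when running
ri_rate = 0.06          # $/hour equivalent for a 1-year commitment, hypothetical

for utilization in (1.0, 0.50, 0.25, 0.20):
    ri_effective = effective_cost_per_used_hour(ri_rate, utilization)
    print(f"RI at {utilization:>4.0%} utilization: "
          f"${ri_effective:.2f} per used hour vs ${on_demand_rate:.2f} on demand")

# At 25% utilization the RI works out to $0.24 per consumed hour, more than
# double the on-demand rate: "wasteful even though it looks delightful."
```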
>> And those are some of the things that you go and advise on with some of the software that you folks have actually written. >> But there's a lot of waste even in the reserved instances, because what happens is, while customers make these commitments for a year or three years, what we see across, like we track a billion dollars in public cloud consumption, you know, at Beam, and customers use 20%, 25% of their commitments, right? So how can you really apply, take the data of consumption, you know, apply intelligence to essentially reduce their, you know, overall cost of ownership. >> You said something that's very telling. You said reserved instances, even though they're supposed to save, are still only 20%, 25% utilized. >> Yes, because the workloads are very dynamic. And the next thing is you can't do hot add CPU or hot add memory because you're buying them for peak capacity. There is no converged scaling apart from scaling out as another node. >> So you actually sized it for peak, but then using 20%, 30%, you're still paying for the peak. >> That's right. >> Dheeraj: That can actually add up. >> That's what we're trying to say. How can we deliver visibility across clouds? You know, how can we deliver optimization across clouds and consumption models and bring the control while retaining that agility and demand elasticity? >> That's great. So you want to show us something? >> Yeah absolutely. So this is Beam, as Dheeraj just outlined, our first SaaS service. And this is my first .Next. And you know, glad to be here. So what you see here is a global consumption, you know, for a business across different clouds. Whether that's in a public cloud like Amazon, or Azure, or Nutanix. We kind of bring the consumption together for the month, the recent month, across your accounts and services and apply intelligence to say, you know, what is your spend efficiency across these clouds? Essentially there's a lot of intelligence that goes in to detect your workloads and consumption model to say if you're spending $100, how efficiently are you spending? How can you increase that? >> So you have a centralized view where you're looking at multiple clouds, and you know, you talk about maybe you can take an example of an account and start looking at it? >> Yes, let's go into a cloud provider. Like, you know, for this business, let's go and take a look at what's happening inside an Amazon cloud. Here we get into the deeper details of what's happening with the consumption of specific services as well as the utilization of both on demand and RI. You know, what can you do to lower your cost and detect your spend efficiency of a dollar, to see, you know, are there resources that are provisioned by teams for applications that are not being used, or are there resources that we should go and rightsize, because, you know, we have all this monitoring data, configuration data that we crunch through to basically detect this? >> I think there are billions of events that you look at every day. You're already looking at a billion dollars' worth of AWS spend. >> Right, right. >> So billions of events, billing, metering events every year to really figure out and optimize for them. >> So what we have here is a very popular international government organization. >> Dheeraj: Wow, so it looks like Russians are everywhere, the cloud is everywhere actually. >> Yes, it's quite popular. So when you bring your master account into Beam, we kind of detect all the linked accounts, you know, under that.
Then you can go and take a look not just at the organization level but, within it, at an account level. >> So these are child objects, you know. >> That's right. >> You can think of them as ephemeral accounts that you create because you don't want to be on the record when you're running spam on Facebook, for example. >> Right, let's go and take a look at what's happening inside a Facebook ad spend account. So we have, you know, consumption of the services. Let's go deeper into compute consumption, and you kind of see a trendline. You can do a lot of computing. As you see, looks like one campaign has ended. They started another campaign. >> Dheeraj: It looks like they're not stopping yet, man. There's a lot of money being made in Facebook right now. (Vijay laughing) >> So not only do you get visibility at, you know, compute as a service inside a cloud provider, you can go deeper inside compute and say, you know, what is a service that I'm really consuming inside compute along with the CPUs and stuff, right? What is my data transfer? You know, what is my network? What are my load balancers? So essentially you get much deeper visibility, you know, as a service, right. Because we have three goals for Beam. How can we deliver visibility across clouds? How can we deliver visibility across services? And how can we then deliver optimization? >> Well I think one thing that I just want to point out is how this SaaS application was an extremely teachable moment for me to learn about the different resources that people could use in the public cloud. So all of you who actually have not gone deep enough into the idea of public cloud, this could be a great app for you to learn about things, the resources, you know, things that you could do to save, and security, and things of that nature. >> Yeah. And we really believe in creating the single pane view, you know, to manage your optimization of a public cloud. You know, as Ben spoke about, as a business, you need to have freedom to use any cloud. And that's what Beam delivers. How can you make the right decision for the right workload to use any of the clouds of your choice? >> Dheeraj: How 'about databases? You talked about compute as well but are there other things we could look at? >> Vijay: Yes, let's go and take a look at database consumption. What you see here is, inside the Facebook ad spend account, they're using all databases except Oracle. >> Dheeraj: Wow, looks like Oracle sales folks have been active in Russia as well. (Vijay laughing) >> So what we're seeing here is a global view of, you know, what is your spend efficiency, which is kind of a scorecard for your business for the dollars that you're spending. And the great thing is Beam kind of brings it together, you know, through its intelligence and algorithms to detect, you know, how can you rightsize resources and how can you eliminate things that you're not using? And we deliver a one-click fix, right? Let's go and take a look at resources that are maybe provisioned for storage and not being used. We deliver the seamless one-click philosophy that Nutanix has to eliminate it. >> So one click, you can actually just pick some of these wasteful things that might be looking delightful because using public cloud, using credit cards, you can go in and just say click fix, and it takes care of things. >> Yeah, and not only remove the resources that are unused, but it can go and rightsize resources across your compute, databases, load balancers, even PaaS services, right?
And this is where the power of it kind of comes in for a business, whether you're using on-prem and off-prem. You know, how can you really converge that consumption across both? >> Dheeraj: So do you have something for Nutanix too? >> Vijay: Yes, so we have basically been working on Nutanix with something that we're going to deliver, you know, later this year. As you can see here, we're bringing together the consumption for Nutanix, you know, the services that you're using, the licensing and capacity that is available. And how can you also go and optimize within Nutanix environments >> That's great. >> for the next workload. Now let me quickly show you what we have on the compliance side. This is an extremely powerful thing that we've been working on for many years. What we deliver here, just like in cost governance, is a global view of your compliance across cloud providers. And the most powerful thing is you can go into a cloud provider, get the next level of visibility across cloud regimes for hundreds of policies. Not just policies but those policies across different regulatory compliances like HIPAA, PCI, CIS. And that's very powerful because-- >> So you're saying a lot of what you folks have done is codified these compliance checks in software to make sure that people can sleep better at night knowing that it's PCI, and HIPAA, and all that compliance actually comes together? >> And you can build this not just by cloud accounts, you can build them across cloud accounts, which is what we call security centers. Essentially you can go and take a deeper look at, you know, the things. We do a whole full body scan for your cloud infrastructure, whether it's Amazon AWS or Azure, and you can go and now, again, click to fix things, you know, things that had probably been provisioned that are violating the security compliance rules that should be there. Again, we have the same one-click philosophy to say how can you really remove things. >> So again, similar to the savings side, you're saying you can go and fix some of these security issues by just doing one click. >> Absolutely. So the idea is how can we give our people the freedom to get visibility and use the right cloud and take the decisions instantly through one click. That's what Beam delivers, you know, today. And you know, get really excited, and it's available at beam.nutanix.com. >> Our first SaaS service, ladies and gentlemen. Thank you so much for doing this, Vijay. It looks like there's going to be a talk here at 10:30. You'll talk more about the midterm elections there probably? >> Yes, so you can go and write your own security compliances as well, you know, within Beam, and a lot of powerful things you can do. >> Awesome, thank you so much, Vijay. I really appreciate it. (audience clapping) So as you see, there's a lot of work that we're doing to really make multi-cloud work, which is a hard problem. You know, think about the whole body of it: what about cost governance? What about security compliance? Obviously what about hybrid networks, and security, and storage, you know, compute, many of the things that you've actually heard from us, but we're taking it to a level where the business users can now understand the implications. A CFO's office can understand the implications of waste and delight. So what does customer success mean to us? You know, again, my favorite word in a long, long time is really go and figure out how do you make you, the customer, become operationally efficient.
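The point above about codifying compliance checks in software can be made concrete with a small sketch. This is not Beam's implementation, just an illustration of the general idea using boto3, the AWS SDK for Python: a single rule, of the kind a PCI or CIS policy set would contain, that flags S3 buckets whose ACLs grant access to public groups.

```python
# Illustrative compliance check (not Beam's code): flag S3 buckets whose ACL
# grants access to a public group. Rules like this can be codified per policy
# set (PCI, CIS, HIPAA, ...) and run continuously against an account.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets():
    s3 = boto3.client("s3")
    violations = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            grantee = grant.get("Grantee", {})
            if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
                violations.append((bucket["Name"], grant["Permission"]))
    return violations

if __name__ == "__main__":
    for name, permission in find_public_buckets():
        print(f"NON-COMPLIANT: bucket {name} grants {permission} to a public group")
```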
You know, there's a lot of stuff that we deliver through software that's completely uncovered. It's so latent, you don't even know you have it, but you've paid for it. So you've got to figure out what does it mean for you to really become operationally efficient, organizationally proficient. And it's really important for training, education, stuff that, you know, your people might think is so awkward to do in Nutanix, but it could've been way simpler if we just told you a place where you can go and read about it. Of course, I can just use one click here as opposed to doing things the old way. But most importantly, to make it financially accountable. So the end in all this is, again, one of the things that I think about all the time in building this company, because obviously there's a lot of stuff that we want to do to create, you know, things above the line, top line and everything else. There's also a bottom line. Delight and waste are two sides of the same coin. You know, when we're talking about developers who seek delight with public cloud, at the same time you're looking at IT folks who're trying to figure out governance. They're like look, you know, the CFO's office, the CIO's office, they're trying to figure out how to curb waste. These two things have to go hand in hand in this era of multi-cloud where we're talking about frictionless consumption but also governance that looks invisible. So I think, at the end of the day, this company will do a lot of stuff around one-click delight but also go and figure out how do you reduce waste, because there's so much waste, including for folks there who actually own Nutanix. There's so much software entitlement. There's so much waste in the public cloud itself that if we don't go and put our arms around it, it will not lead to customer success. So to talk more about this, the idea of delight and the idea of waste, I'd like to bring on board a person who, I think, you know, many of you have actually talked about as having delightful hair but probably wasted jokes. But I think he has wasted hair and delightful jokes. So ladies and gentlemen, you make the call. You're the jury. Sunil R.M.J. Potti. ("Free" by Broods) >> So that was the first time I came out from the bottom of a screen on a stage. I actually now know what it feels like to be a gopher. Who's that laughing loudly at the back? Okay, do we have the... Let's see. Okay, great. We're about 15 minutes late, so that means we're running right on time. That's normally how we roll at this conference. And we have about three customers and four demos. Like I think there's about three plus six, about nine folks coming onstage. So we'll have our own version of the parade as well on the main stage for the next 70 minutes. So let's just jump right into it. I think we've been pretty consistent in terms of our long-term plans since we started the company. And it's become a lot clearer over the last few years about our plans to essentially make computing invisible, as Dheeraj mentioned. We're doing this across multiple acts. We started with HCI. We call it making infrastructure invisible. We extended that to making data centers invisible. And then now we're in this mode of essentially extending it to converging clouds so that you can actually converge your consumption models.
And so today's conference, and essentially the theme that you're going to be seeing throughout the breakout sessions, is about a journey towards invisible clouds, but make sure that you internalize the fact that we're investing heavily in each of the three phases. It's not just about the hybrid cloud with Nutanix, it's about actually finishing the job of making infrastructure invisible, expanding that to kind of go after the full data center, and then of course embarking on some real meaningful things around invisible clouds, okay? And to start the session, I think, you know, the part that I wanted to make sure we're all on the same page on, because most of us in the room are still probably in this phase of the journey, is invisible infrastructure. And there, the three key products, and especially two of them that most of you guys know, are Acropolis and Prism. And they're sort of like the bedrock of our company. You know, especially Acropolis, which is about the web-scale architecture. Prism is about consumer-grade design. And with Acropolis now being really mature, it's in the seventh year of innovation. We still have more than half of our company in terms of R and D spend still on Acropolis and Prism. So our core product is still sort of where we think we have a significant differentiation. We're not going to let our foot off the pedal there. You know, every time somebody comes to me and says look, there's a new HCI vendor popping out or an existing HCI vendor out there, I ask a simple question to our customers saying show me 100 customers with 100 node deployments, and it will be very hard to find any other vendor out there that does the same thing. And that's the power of Acropolis, the core platform. And then there's, you know, the fact that the velocity associated with Acropolis continues at a fast pace. We came out with various new capabilities in 5.5 and 5.6, and one of the most complicated things to get right was to shrink our three-node cluster to a one-node, two-node deployment. Most of you actually had requirements on remote office, branch office, or the edge that actually gave us, you know, sort of the impetus to go design some new capabilities into our core OS to get this out. And associated with Acropolis and expanding into Prism, as you will see, the first couple of years of Prism were all about refactoring the user interface, doing a good job with automation. But more and more of the investments around Prism are going to be based on machine learning. And you've seen some variants of that over the last 12 months, and I can tell you that in the next 12 to 24 months, most of our investments around infrastructure operations are going to be driven by AI techniques, starting with most of our R and D spend also going into machine-learning algorithms. So when you talk about all the enhancements that have come with Prism, whether it be, you know, the management console changing to become much more automated, whether now we give you automatic rightsizing, anomaly detection, or a series of functionality that has gone into it, the real core sort of capabilities that we're putting into Prism and Acropolis are probably best served by looking at the quality of the product. You probably have seen this slide before. We started showing the number of nodes shipped by Nutanix two years ago at this conference. It was about 35,000 plus nodes at that time. And since then, obviously we've, you know, continued to grow.
And we would draw this line which was about enterprise-class quality. That for the number of bugs found as a percentage of nodes shipped, there's a certain line that's drawn. World-class companies do probably about 2% to 3% in the number of CFDs per node shipped. And we had just broken that number two years ago. And to give you guys an idea of how that curve has shown up, it's now currently at 0.95%. And so along with velocity, you know, this focus on being true to our roots of reliability and stability continues to be, you know, it's an internal challenge, but it's also some of the things that we keep a real focus on. And so between Acropolis and Prism, those are sort of like our core focus areas, to sort of give us the confidence that look, we have this really high bar that we're sort of keeping ourselves accountable to, which is about being the most advanced enterprise cloud OS on the planet. And we will keep it this way for the next 10 years. And to complement that, over a period of time of course, we've added a series of services. So these are services not just for VMs but also for files, blocks, containers, but all being delivered in that single one-click operations fashion. And to really talk more about it, and actually probably to show you the real deal there, it's my great pleasure to call our own version of Moses inside the company, most of you guys know him as Steve Poitras. Come on up, Steve. (audience clapping) (rock music) >> Thanks Sunil. >> You barely fit in that door, man. Okay, so what are we going to talk about today, Steve? >> Absolutely. So when we think about when Nutanix first got started, it was really focused around VDI deployments, smaller workloads. However over time as we've evolved the product, added additional capabilities and features, that's grown from VDI to business-critical applications as well as cloud-native apps. So let's go ahead and take a look. >> Sunil: And we'll start with like Oracle? >> Yeah, that's one of the key ones. So here we can see our Prism Central user interface, and we can see our Thor cluster, obviously speaking to the Avengers theme here. We can see this is doing right around 400,000 IOPS at around 360 microseconds latency. Now obviously Prism Central allows you to manage all of your Nutanix deployments, but this is just running on one single Nutanix cluster. So if we hop over here to our explore tab, we can see we have a few categories. We have some Kubernetes, some AFS, some XenDesktop as well as Oracle RAC. Now if we hop over to Oracle RAC, we're running a SLOB workload here. So obviously with Oracle enterprise applications, performance, consistency, and extremely low latency are very critical. So with this SLOB workload, we're running right around 300 microseconds of latency. >> Sunil: So this is what, how many node Oracle RAC cluster is this? >> Steve: This is a six-node Oracle RAC deployment. >> Sunil: Got it. And so what has gone into the product in recent releases to kind of make this happen? >> Yeah, so obviously on the hardware front, there's been a lot of evolution in storage mediums. So with the introduction of NVMe, persistent memory technologies like 3D XPoint, that's meant storage media has become a lot faster. Now to allow you to fully take advantage of that, that's where we've had to do a lot of optimizations within the storage stack. So with AHV, we have what we call AHV turbo mode which allows you to fully take advantage of those faster storage mediums at that much lower latency.
And then obviously on the networking front, technologies such as RDMA can be leveraged to optimize that network stack. >> Got it. So that was Oracle RAC running on a, you know, Nutanix cluster. It used to be a big deal a couple of years ago. Now we've got many customers doing that. On the same environment though, what we're going to show you is the advent of actually putting file services in the same scale-out environment. And you know, many of you in the audience probably know about AFS. We released it about 12 to 14 months ago. It's been one of our most popular new products of all time within Nutanix's history. And we had SMB support, which was for user file shares, VDI deployments, and it took a while to bake, to get to scale and reliability. And then in the last release, the recent release that we just shipped, we now added NFS support so that we can now go after the full-scale file server consolidation. So let's take a look at some of that stuff. >> Yep, let's do it. So hopping back over to Prism, we can see our Thor cluster here. Overall cluster-wide latency right around 360 microseconds. Now we'll hop down to our file server section. So here we can see we have our AFS file server hosting right about 16.2 million files. Now if you look at our shares and exports, we can see we have a mix of different shares. So one of the shares that you see there is home directories. This is an SMB share which is actually mapped and being leveraged by our VDI desktops for home folders, user profiles, things of that nature. We can also see this Oracle backup share here which is exposed to our RAC hosts via NFS. So RMAN is actually leveraging this to provide native database backups. >> Got it. So Oracle VMs, backup using files, or for any other file share requirements with AFS. Do we have the cluster also showing, I know, so I saw some Kubernetes as well on it. Let's talk about what we're thinking of doing there. >> Yep, let's do it. So if we think about cloud, cloud's obviously a big buzzword, and so are containers and Kubernetes. So with ACS 1.0 what we did is we introduced native support for Docker integration. >> And pause there. And we screwed up. (laughing) So just like the market took a left turn on Kubernetes, obviously we realized that, and now we're working on ACS 2.0 which is what we're going to talk about, right? >> Exactly. So with ACS 2.0, we've introduced native Kubernetes support. Now when I think about Kubernetes, there's really two core areas that come to mind. The first one is around native integration. So with that, we have our Kubernetes volume integration, we're obviously doing a lot of work on the networking front, and we'll continue to push there from an integration point of view. Now the other piece is around the actual deployment of Kubernetes. When we think about a lot of Nutanix administrators or IT admins, they may have never deployed Kubernetes before, so this could be a very daunting task. And true to the Nutanix nature, we not only want to make our platform simple and intuitive, we also want to do this for any ecosystem products. So with ACS 2.0, we've simplified the full Kubernetes deployment, and switching over to our ACS 2.0 interface, we can see this create cluster button. Now this actually pops up a full wizard.
This wizard will actually walk you through the full deployment process, gather the necessary inputs for you, and in a matter of a few clicks and a few minutes, we have a full Kubernetes deployment fully provisioned, the masters, the workers, all the networking fully done for you, very simple and intuitive. Now if we hop back over to Prism, we can see we have this ACS 2.0 Kubernetes category. Clicking on that, we can see we have eight instances of virtual machines. And these are Kubernetes virtual machines which have actually been deployed as part of this ACS 2.0 installer. Now one of the nice things is it makes the IT administrator's job very simple and easy to do. The deployment is straightforward; monitoring and management are very straightforward and simple. Now for the developer, the application architect, or engineers, they interface and interact with Kubernetes just like they would traditionally on any platform. >> Got it. So the goal of ACS is to ensure that the developer ecosystem still uses whatever tools they prefer while at the same time allowing this consolidation of containers along with VMs, all on that same, single runtime, right? So that's ACS. And then if you think about where the OS is going, there's still some open space at the end. And the open space has always been, look, if you just look at a public cloud, you look at blocks, files, containers, the most obvious sort of storage function that's left is objects. And that's the last horizon for us in completing the storage stack. And we're going to show you for the first time a preview of an upcoming product called the Acropolis Object Storage Services Stack. So let's talk a little bit about it and then maybe show the demo. >> Yeah, so just like we provided file services with AFS and block services with ABS, with OSS, or Object Storage Services, we provide native object storage compatibility and capability within the Nutanix platform. Now this provides a very simple, common S3 API. So any integrations you've done with S3, especially Kubernetes, you can actually leverage that out of the box when you've deployed this. Now if we hop back over to Prism, I'll go here to my object stores menu. And here we can see we have two existing object storage instances which are running. So you can deploy as many of these as you want to. Now just like the Kubernetes deployment, deploying a new object instance is very simple and easy to do. So here I'll actually name this instance Thor's Hammer. >> You do know he loses it, right? He hasn't seen the movies yet. >> Yeah, I don't want any spoilers yet. So once we've specified the name, we can choose our capacity. So here we'll just specify a large instance type. Obviously this could be any amount of storage. So if you have a 200-node Nutanix cluster with petabytes' worth of data, you could do that as well. Once we've selected that, we'll select our expected performance. And this is going to be the number of concurrent gets and puts, so essentially how many operations per second we want this instance to be able to facilitate. Once we've done that, the platform will actually automatically determine how many virtual machines it needs to deploy as well as the resources and specs for those. And once we've done that, we'll go ahead and click save.
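(A quick aside on the S3 compatibility mentioned above: because OSS exposes a common S3 API, standard S3 tooling should be usable against it simply by pointing the client at the object store's endpoint. A minimal sketch with boto3 follows; the endpoint URL, credentials, and bucket name are placeholders for illustration, not actual product values.)

```python
# Hedged sketch: using standard S3 tooling against an S3-compatible object
# store by overriding the endpoint URL. The endpoint, keys, and bucket name
# are placeholders; substitute whatever your object store instance exposes.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://oss.example.internal:9440",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Create a bucket, write an object, and list what is stored.
s3.create_bucket(Bucket="kafka-queue")
s3.put_object(Bucket="kafka-queue", Key="events/0001.pb", Body=b"serialized protobuf bytes")

for obj in s3.list_objects_v2(Bucket="kafka-queue").get("Contents", []):
    print(obj["Key"], obj["Size"])
```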
Now here we can see it's actually going through doing the deployment of the virtual machines, applying any necessary configuration, and in a matter of a few clicks and a few seconds, we actually have this Thor's Hammer object storage instance which is up and running. Now if we hop over to one of our existing object storage instances, we can see this has three buckets. So one for Kafka-queue, I'm actually using this for my Kafka cluster where I have right around 62 million objects, all storing protobufs. The second one there is Spark. So I actually have a Spark cluster running on our Kubernetes instance deployed via ACS 2.0. Now this is doing analytics on top of this data using S3 as a storage backend. Now for these objects, we support native versioning, native object encryption as well as WORM compliance. So if you want to have expiry periods, retention intervals, that sort of thing, we can do all that. >> Got it. So essentially what we've just shown you is that, with objects coming as well, the same OS can now support VMs, files, objects, containers, all on the same one-click operational fabric. And so that's in some way the real power of Nutanix, to still keep that consistency and scalability in place as we're covering each and every workload inside the enterprise. So before Steve gets off stage though, I wanted to talk to you guys a little bit about something. You know, how many of you have been to our Nutanix headquarters in San Jose, California? A few. I know there's like, I don't know, 4,000 or 5,000 people here. If you do come to the office, you know, when you land at San Jose Airport, on the way to long-term parking you'll pass our office. It's that close. And if you come to the fourth floor, you know, one of the cubes, that's where I sit. In the cube beside me is Steve. Steve sits in the cube beside me. And when I first joined the company, three or four years ago, if you went to Steve's cube, it no longer looks like this, but it used to have a lot of this stuff. It was like big containers of this. I remember the first time. Since I started joking about it, he started reducing it. And then Steve eventually got married, much to our surprise. (audience laughing) Much to his wife's surprise. And then he also had a baby as a bigger surprise. And if you come over to our office, and we welcome you, and you come to the fourth floor, find my cube or you'll find Steve's cube, it now looks like this. Okay, so thanks a lot, my man. >> Cool, thank you. >> Thanks so much. (audience clapping) >> So single OS, any workload. And like Steve, who's been with us for a while, it's my great pleasure to invite one of our favorite customers, Karen from CSC, who's also been with us for three to four years. And I'll share some fond memories about how she's been with the company for a while, how as partners we've really done a lot together. So without any further ado, let me bring up Karen. Come on up, Karen. (rock music) >> Thank you for having me. >> Yeah, thank you. So I remember, so how many of you guys were at Nutanix's first .Next in Miami? I know there was a question like that asked last time. Not too many. You missed it. We wish we could go back to that. We wouldn't fit three quarters of this crowd. But Karen was our first customer in the keynote in 2015. And we had just talked about that story at that time, where you'd just become a customer. Do you want to give us some recap of that? >> Sure.
So when we made the decision to move to hyperconverged infrastructure and chose Nutanix as our partner, we rapidly started to deploy. And what I mean by that is Sunil and some of the Nutanix executives had come out to visit with us and talk about their product on a Tuesday. And on a Wednesday, after making the decision, I picked up the phone and said, you know what, I've got to deploy for my VDI cluster. So four nodes showed up on Thursday. And from the time it was plugged in to moving over 300 VDIs and 50 terabytes of storage and turning it over to the business for use was less than three days. So it was a really excellent testament to how simple it is to start, and deploy, and utilize the Nutanix infrastructure. Now part of that was the delight that we experienced from our customers after that deployment. So we got phone calls where people were saying this report used to take so long that I'd go out and get a cup of coffee and come back, and read an article, and do some email, and then finally it would finish. Those reports are running in milliseconds now. It's one click. It's very, very simple, and we've delighted our customers. Now across that journey, we have gone from the simple workloads like VDIs to the much more complex workloads around Splunk and Hadoop. And what's really interesting about our Splunk deployment is we're handling over a billion events being logged every day. And the deployment is smaller than what we had with a three-tiered infrastructure. So when you hear people talk about waste and getting that out and getting to an invisible environment where you're just able to run it, that's what we were able to achieve with everything that we're running, from our public-facing websites to the back-office operations that we're using, which include Splunk and even most recently our Cloudera and Hadoop infrastructure. What it does is it's got 30 crawlers that go out on the internet and start bringing data back. So it comes back with over two terabytes of data every day. And then that environment ingests that data, does work against it, and responds to the business. And that again is something that's smaller than what we had on traditional infrastructure, and it's faster and more stable. >> Got it. And it covers a lot of use cases as well. You want to speak a few words on that? >> So the use cases, we're 90%, 95% deployed on Nutanix, and we're covering all of our use cases. So whether that's a customer-facing app or a back-office application. And what our business is doing is handling large portfolios of data for Fortune 500 companies and law firms. And these applications are all running with improved stability, reliability, and performance on the Nutanix infrastructure. >> And the plan going forward? >> So the plan going forward, you actually asked me that in Miami, and it's go global. So when we started in Miami with that first deployment, we had four nodes. We now have 283 nodes around the world, and we started with about 50 terabytes of data. We've now got 3.8 petabytes of data. And we're deployed across four data centers and six remote offices. And people ask me often, what is the value that we achieved? So simplification. It's all just easier, and it's all less expensive. Being able to scale with the business. So our Cloudera environment ended up with one day where it spiked to 1,000 times more load, 1,000 times, and it just responded. We had rally cries around improving productivity by six times.
So 600% improved productivity, and we were able to actually achieve that. The number you just saw on the slide that went by very, very fast was that we calculated a 40% reduction in total cost of ownership. We've exceeded that. And when we talk about waste, that other number on the board there is this: when I save the company one hour of maintenance activity or unplanned downtime in a month, and we're now able to do the majority of our maintenance activities without disrupting any of our business solutions, I'm saving $750,000 each time I save that one hour. >> Wow. All right, Karen from CSC. Thank you so much. That was great. Thank you. I mean, you know, some of these data points, frankly, as I started talking to Karen as well as some other customers, are pretty amazing in terms of the genuine value beyond financial value. Kind of like the emotional sort of benefits that good products deliver to some of our customers. And I think that's one of the core things that we take back into engineering, to keep ourselves honest on either velocity or quality, even hiring people and so forth. The more we touch customers' lives, the more we touch our partners' lives, the more it allows us to ensure that we can put ourselves in their shoes to kind of make sure that we're doing the right thing in terms of the product. So that was the first part, invisible infrastructure. And our goal, as we've always talked about, our true north, is to make sure that this single OS can be an exact replica, a truly modern, thoughtful but original design that brings the power of the public cloud, these AWS or GCP like architectures, into your mainstream enterprises. And so when we take that to the next level, which is about expanding the scope to go beyond invisible infrastructure to invisible data centers, it starts with a few things. Obviously, it starts with virtualization and a level of intelligent management, extends to automation, and then, as we'll talk about, we have to embark on encompassing the network. And that's what we'll talk about with Flow. But to start this, let me again go back to one of our core products, which is the bedrock of our, you know, opinionated design inside this company, which is Prism and Acropolis. And Prism, as I mentioned, comes with a ton of machine-learning based intelligence built into the product; in 5.6 we've done a ton of work. In fact, a lot of features are coming out now because PC, Prism Central, you know, has been decoupled from our mainstream release train and will continue to release on its own cadence. And the same thing when you actually flip it to AHV on its own train. Now AHV, two years ago it was all about can I use AHV for VDI? Can I use AHV for ROBO? Now I'm pretty clear about where you cannot use AHV. If you need memory overcommit, stay with VMware or something. If you need, you know, Metro, stay with another technology; else it's game on, right? And if you really look at the adoption of AHV in the mainstream enterprise, the customers now speak for themselves. These are all examples of large global enterprises with multimillion dollar ELAs in play that have now been switched over. Like, I'll give you a simple example here, and there's lots of these, and I'm sure many of you who are in the audience are in this camp, but when you look at the breakout sessions in the pods, you'll get a sense of this. But I'll give you one simple example. If you look at the online payment company, I'm pretty sure everybody's used it at one time or the other.
They had the world's largest private cloud on OpenStack, 21,000 nodes. And they were actually public about it three or four years ago. And in the last year and a half, they put us through a rigorous POC, testing, scale, hardening, and it's a full-blown AHV-only stack. And they've started cutting over. Obviously they're not there yet completely, but they're now literally in hundreds of nodes of deployment of Nutanix with AHV as their primary operating system. So it is primetime from a deployment perspective. And with that as the base, no cloud is complete without actually having self-service provisioning that truly drives one-click automation, and can you do that in this consumer-grade design? And Calm was acquired, as you guys know, in 2016. We had a choice in taking Calm. It was reasonably feature complete. It supported multiple clouds. It supported ESX, it supported brownfield, it supported AHV. I mean, they'd already done the integration with Nutanix even before the acquisition. And we had a choice. The choice was to go down the path of DynamicOps or some other products, where you take it for revenue or for acceleration, plop it into the ecosystem and sell it as this power-sucking alien on top of our stack, right? Or we took a step back, re-engineered the product, kept some of the core essence like the workflow engine, which was good, the automation, the object model and all, but refactored it to make it look like a natural extension of our operating system. And that's what we did with Calm. And we just launched it in December, and it's been one of our most popular new products, now flying off the shelves. If you saw the number of registrants, I got a notification of this for the breakout sessions: the number one session that has been preregistered, with over 500 people, the first two sessions are around Calm. And justifiably so, because it just lives up to its promise, and it'll take its time to kind of get to all the bells and whistles, all the capabilities that have come through with AHV or Acropolis in the past. But the feature functionality, the product market fit associated with Calm is dead on from the feedback that we receive. And so Calm itself is on its own rapid cadence. We had AWS and AHV in the first release. Three or four months later, we added ESX support. We added GCP support and a whole bunch of other capabilities, and I think the essence of Calm is, if you can combine Calm with private cloud automation but also extend it to multi-cloud automation, it really sets Nutanix on its first genuine path towards multi-cloud. But then, as I said, if you really fixate on a software defined data center message, we're not complete as a full-blown AWS or GCP like IaaS stack until we do the last horizon of networking. And you probably heard me say this before. You've heard Dheeraj and others talk about it before: our problem in networking isn't the same as in storage. Because the data plane in networking works. Good L2 switches from Cisco, Arista, and so forth. But the real problem in networking is in the control plane. When something goes wrong at a VM level in Nutanix, you're able to identify whether it's a storage problem or a compute problem, but we don't know whether it's a VLAN that's misconfigured, or there've been some packets dropped at the top of the rack. Well, that all ends now with Flow.
And with Flow, essentially what we've now done is take the work that we've been doing to create built-in visibility, add some network automation so that you can actually provision VLANs when you provision VMs, and then augment it with micro-segmentation policies, all built in this easy-to-use, easy-to-consume fashion. But we didn't stop there, because we've been talking about Flow, at least the capabilities, over the last year. We spent significant resources building it. But we realized that we needed an additional thing to augment its value, because the world of applications, especially discovering application topologies, is a hairy problem. And if we didn't address that, we wouldn't be fulfilling this ambition of providing one-click network segmentation. And so that's where Netsil comes in. Netsil might seem on the surface like yet another next generation application performance management tool. But the innovations that came from Netsil started off as a research project at the University of Pennsylvania. And in fact, most of the team that's at Nutanix right now is from the U Penn research group. And they took a really original, fresh look at how do you sit in a network in a scale-out fashion but still reverse engineer the packets, the flows through you, and then recreate this application topology. And recreate this not just on Nutanix, but do it seamlessly across multiple clouds. And to talk about the power of Flow augmented with Netsil, let's bring Rajiv back on stage. Rajiv. >> How you doing? >> Okay, so we're going to start with some Netsil stuff, right? >> Yeah, let's talk about Netsil and some of the amazing capabilities this acquisition's bringing to Nutanix. First of all, as you mentioned, Netsil's completely non-invasive. So it installs on the network, it does all its magic from there. There are no host agents, none of the complexity and compatibility issues that entails. It's also monitoring the network at layer seven. So it's actually doing deep packet inspection on all your application data, and can give you insights into services and APIs, which is very important for modern applications and the way they behave. To do all this, of course, performance is key. So Netsil's built around a completely distributed architecture that scales to really large workloads. Very exciting technology. We're going to use it in many different ways at Nutanix. And to give you a flavor of that, let me show you how we're thinking of integrating Flow and Netsil together, so micro-segmentation and Netsil. So to do that, we installed Netsil in one of our Google accounts. And that's what's up here now. It went out there. It discovered all the VMs we're running on that account. It created a map essentially of all their interactions, and you can see it's like a Google Maps view. I can zoom into it. I can look at various things running. I can see lots of HTTP servers over here, some databases. >> Sunil: And it also has stats, right? You can go, it actually-- >> It does. We can take a look at that for a second. There are some stats you can look at right away here. Things like transactions per second and latencies and so on. But if I wanted to micro-segment this application, it's not really clear how to do so. There's no real pattern over here. Taking the Google Maps analogy a little further, this kind of looks like the backstreets of Cairo or something. So let's do this step by step. Let me first filter down to one application. Right now I'm looking at about three or four different applications.
And Netsil integrates with the metadata, so this is what the clouds provide. So I can search all the tags that I have. So by doing that, I can zoom in on just the financial application. And when I do this, the view gets a little bit simpler, but there's still no real pattern. It's not clear how to micro-segment this, right? And this is where the power of Netsil comes in. This is a fairly naive view. This is what a tool operating at layer four, just looking at ports and TCP traffic, would give you. But by doing deep packet inspection, Netsil can get into the services layer. So instead of grouping these interactions by hostname, let's group them by service. So you group by service tier. And now you can see this is a much simpler picture. Now I have some patterns. I have a couple of load balancers, an HAProxy and an Nginx. I have a web application front end. I have some application servers running authentication services, search services, et cetera, a database, and a database replica. I could go ahead and micro-segment at this point. It's quite possible to do it at this point. But this is almost too granular a view. We actually don't usually want to micro-segment at the individual service level. You think more in terms of application tiers, the tiers that different services belong to. So let me go ahead and group this differently. Let me group this by app tier. And when I do that, a really simple picture emerges. I have a load balancing tier talking to a web application front end tier, an API tier, and a database tier. Four tiers in my application. And this is something I can work with. This is something that I can micro-segment fairly easily. So let's switch over to-- >> Before we do that though, do you guys see how he gave himself the pseudonym called Dom Toretto? >> Focus Sunil, focus. >> Yeah, for those guys, you know that's not the Avengers theme, man, that's the Fast and Furious theme. >> Rajiv: I think we're a year ahead. This is next year's theme. >> Got it, okay. So before we cut over from Netsil to Flow, do we want to talk a few words about the power of Flow, and what's available in 5.6? >> Sure, so Flow's been around since the 5.6 release. Actually some of the functionality came in before that. So it's got visibility into the network. It helps you debug problems with VLANs and so on. We had a lot of orchestration with other third party vendors, with load balancers, with switches, to make provisioning much simpler. And then of course with our most recent release, we GA'ed our micro-segmentation capabilities. And that of course is the most important feature we have in Flow right now. And if you look at how Flow policy is set up, it looks very similar to what we just saw with Netsil. So we have a load balancer talking to a web app, API, database. It's almost identical to what we saw just a moment ago. So while this policy was created manually, it is something that we can automate. And it is something that we will do in future releases. Right now, it's of course not been integrated at that level yet. So this was created manually. So one thing you'll notice over here is that the database tier doesn't get any direct traffic from the internet. All internet traffic goes to the load balancer; only specific services then talk to the database. So this policy right now is in monitoring mode. It's not actually being enforced. So let's see what happens if I try to attack the database. I start a hack against the database. And I have my trusty brute force password script over here.
It's trying the most common passwords against the database. And if I happen to choose a dictionary word or left the default passwords on, eventually it will log into the database. And when I go back over here in Flow what happens is it actually detects there's now an ongoing a flow, a flow that's outside of policy that's shown up. And it shows this in yellow. So right alongside the policy, I can visualize all the noncompliant flows. This makes it really easy for me now to make decisions, does this flow should it be part of the policy, should it not? In this particular case, obviously it should not be part of the policy. So let me just switch from monitoring mode to enforcement mode. I'll apply the policy, give it a second to propagate. The flow goes away. And if I go back to my script, you can see now the socket's timing out. I can no longer connect to the database. >> Sunil: Got it. So that's like one click segmentation and play right now? >> Absolutely. It's really, really simple. You can compare it to other products in the space. You can't get simpler than this. >> Got it. Why don't we got back and talk a little bit more about, so that's Flow. It's shipping now in 5.6 obviously. It'll come integrated with Netsil functionality as well as a variety of other enhancements in that next few releases. But Netsil does more than just simple topology discovery, right? >> Absolutely. So Netsil's actually gathering a lot of metrics from your network, from your host, all this goes through a data pipeline. It gets processed over there and then gets captured in a time series database. And then we can slice and dice that in various different ways. It can be used for all kinds of insights. So let's see how our application's behaving. So let me say I want to go into the API layer over here. And I instantly get a variety of metrics on how the application's behaving. I get the most requested endpoints. I get the average latency. It looks reasonably good. I get the average latency of the slowest endpoints. If I was having a performance problem, I would know exactly where to go focus on. Right now, things look very good, so we won't focus on that. But scrolling back up, I notice that we have a fairly high error rate happening. We have like 11.35% of our HTTP requests are generating errors, and that deserves some attention. And if I scroll down again, and I see the top five status codes I'm getting, almost 10% of my requests are generating 500 errors, HTTP 500 errors which are internal server errors. So there's something going on that's wrong with this application. So let's dig a little bit deeper into that. Let me go into my analytics workbench over here. And what I've plotted over here is how my HTTP requests are behaving over time. Let me filter down to just the 500 ones. That will make it easier. And I want the 500s. And I'll also group this by the service tier so that I can see which services are causing the problem. And the better view for this would be a bar graph. Yes, so once I do this, you can see that all the errors, all the 500 errors that we're seeing have been caused by the authentication service. So something's obviously wrong with that part of my application. I can go look at whether Active Directory is misbehaving and so on. So very quickly from a broad problem that I was getting a high HTTP error rate. In fact, usually you will discover there's this customer complaining about a lot of errors happening in your application. You can quickly narrow down to exactly what the cause was. >> Got it. 
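The narrowing-down Rajiv just walked through is essentially a filter-and-group query over per-request metrics. A rough equivalent in pandas, assuming a simple request log with a status code and a service tier column (an analogy only, not Netsil's query language), might look like this:

    import pandas as pd

    # Hypothetical per-request records like those derived from layer-7 inspection.
    requests = pd.DataFrame([
        {"service_tier": "web-frontend", "status": 200},
        {"service_tier": "api",          "status": 500},
        {"service_tier": "api",          "status": 500},
        {"service_tier": "database",     "status": 200},
        {"service_tier": "api",          "status": 200},
    ])

    # Overall error rate, then 500s grouped by tier to find the culprit.
    error_rate = (requests["status"] >= 500).mean() * 100
    per_tier_500s = (
        requests[requests["status"] == 500]
        .groupby("service_tier")
        .size()
        .sort_values(ascending=False)
    )

    print(f"HTTP error rate: {error_rate:.2f}%")
    print(per_tier_500s)  # the tier at the top is where to start debugging

The point of the demo is that the same drill-down happens interactively in the UI, without anyone having to export logs and write this kind of query by hand.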
This is what we mean by hyperconvergence of the network, which is if you can truly isolate network related problems and associate them with the rest of the hyperconverged infrastructure, then we've essentially started making real progress towards the next level of hyperconvergence. Anyway, thanks a lot, man. Great job. >> Thanks, man. (audience clapping) >> Here to talk about this evolution from invisible infrastructure to invisible data centers is another customer of ours that has embarked on this journey. And you know it's not just using Nutanix but a variety of other tools to actually fulfill sort of like the ambition of a full blown cloud stack within a financial organization. And to talk more about that, let me call Vijay onstage. Come on up, Vijay. (rock music) >> Hey. >> Thank you, sir. So Vijay looks way better in real life than in a picture by the way. >> Except a little bit of gray. >> Unlike me. So tell me a little bit about this cloud initiative. >> Yeah. So we've won the best cloud initiative award twice now, hosted by Incisive Media, a large magazine publisher. Basically, they host a bunch of, you know, various buy side and sell side firms, and you can submit projects in various categories. So we've won the best cloud twice now, 2015 and 2017. The 2015 award is when, you know, as part of our private cloud journey, we were laying the foundation for our private cloud, which is 100% based on hyperconverged infrastructure. So that was that award. And then in 2017, we've kind of built on that foundation and built more developer-centric next gen app services like PaaS, CaaS, SDN, SDS, CI/CD, et cetera. So we built a lot of those services on it, and the second award was really related to that. >> Got it. And a lot of this was obviously based on an infrastructure strategy with some guiding principles that you guys had about three or four years ago if I remember. >> Yeah, this is a great slide. I use it very often. At the core of our infrastructure strategy is how do we run IT as a business? I talk about this with my teams; they're very familiar with it. That's the mindset that I instill within the teams. The mission, the challenge, is the same, which is: how do we scale infrastructure while reducing total cost of ownership, improving time to market, improving client experience, and while we're doing that, not lose sight of reliability, stability, and security? That's the mission. Those are some of our guiding principles. Whenever we take on some large technology investments, we take 'em through those lenses. Obviously Nutanix went through those lenses when we invested in you guys many, many years ago. And you guys checked all the boxes. And you know initiatives change year on year, the mission remains the same. And more recently, the last few years, we've been focused on converged platforms, converged teams. We've actually reorganized our teams and aligned them closer to the platforms, moving closer to an SRE like concept. >> And then you've built out a full stack now across compute, storage, networking, all the way with various use cases in play? >> Yeah, and we're aggressively moving towards PaaS, CaaS as our method of either developing brand new cloud native applications or even containerizing existing applications. So the stack, you know, obviously built on Nutanix, SDS for software defined storage, compute and networking we've got SDN turned on. We've got, again, PaaS and CaaS built on this platform. And then finally, we've hooked our CI/CD tooling onto this.
And again, the big picture was always frictionless infrastructure which we're very close to now. You know 100% of our code deployments into this environment are automated. >> Got it. And so what's the net, net in terms of obviously the business takeaway here? >> Yeah so at Northern we don't do tech for tech. It has to be some business benefits, client benefits. There has to be some outcomes that we measure ourselves against, and these are some great metrics or great ways to look at if we're getting the outcomes from the investments we're making. So for example, infrastructure scale while reducing total cost of ownership. We're very focused on total cost of ownership. We, for example, there was a build team that was very focus on building servers, deploying applications. That team's gone down from I think 40, 45 people to about 15 people as one example, one metric. Another metric for reducing TCO is we've been able to absorb additional capacity without increasing operating expenses. So you're actually building capacity in scale within your operating model. So that's another example. Another example, right here you see on the screen. Faster time to market. We've got various types of applications at any given point that we're deploying. There's a next gen cloud native which go directly on PAS. But then a majority of the applications still need the traditional IS components. The time to market to deploy a complex multi environment, multi data center application, we've taken that down by 60%. So we can deliver server same day, but we can deliver entire environments, you know add it to backup, add it to DNS, and fully compliant within a couple of weeks which is you know something we measure very closely. >> Great job, man. I mean that's a compelling I think results. And in the journey obviously you got promoted a few times. >> Yep. >> All right, congratulations again. >> Thank you. >> Thanks Vijay. >> Hey Vijay, come back here. Actually we forgot our joke. So razzled by his data points there. So you're supposed to wear some shoes, right? >> I know my inner glitch. I was going to wear those sneakers, but I forgot them at the office maybe for the right reasons. But the story behind those florescent sneakers, I see they're focused on my shoes. But I picked those up two years ago at a Next event, and not my style. I took 'em to my office. They've been sitting in my office for the last couple years. >> Who's received shoes like these by the way? I'm sure you guys have received shoes like these. There's some real fans there. >> So again, I'm sure many of you liked them. I had 'em in my office. I've offered it to so many of my engineers. Are you size 11? Do you want these? And they're unclaimed? >> So that's the only feature of Nutanix that you-- >> That's the only thing that hasn't worked, other than that things are going extremely well. >> Good job, man. Thanks a lot. >> Thanks. >> Thanks Vijay. So as we get to the final phase which is obviously as we embark on this multi-cloud journey and the complexity that comes with it which Dheeraj hinted towards in his session. You know we have to take a cautious, thoughtful approach here because we don't want to over set expectations because this will take us five, 10 years to really do a good job like we've done in the first act. And the good news is that the market is also really, really early here. It's just a fact. 
And so we've taken a tiered approach to it, where we'll start the discussion with multi-cloud operations, and we've talked about the stack in the prior session, which is about looking across new clouds. So it's no longer Nutanix, Dell, Lenovo, HP, Cisco as the new quote, unquote platforms. It's Nutanix, Xi, GCP, AWS, Azure as the new platforms. That's how we're designing the fabric going forward. On top of that, you obviously have the hybrid OS both on the data plane side and control plane side. Then what you're seeing with the advent of Calm doing a marketplace and automation as well as Beam doing governance and compliance is the fact that you'll see more and more such capabilities of multi-cloud operations burnt into the platform. An example of that is Calm with the new 5.7 release that they had. The launch supports multiple clouds both inside and outside, but the fundamental premise of Calm in the multi-cloud use case is to enable you to choose the right cloud for the right workload. That's the automation part. On the governance part, and this we kind of went through in the last half an hour with Dheeraj and Vijay on stage, is something that's even more, if I can call it, you know, first order, because you get the provisioning and operations second. The first order is to say, look, whatever my developers have consumed off public cloud, I just need to first get our arms around it to make sure that, you know, what am I spending, am I secure, and then when I get comfortable, then I am able to actually expand on it. And that's the power of Beam. And both Beam and Calm will be the yin and yang for us in our multi-cloud portfolio. And we'll have new products to complement that down the road, right? But along the way, that's the whole private cloud, public cloud. They're the two ends of the barbell, and over time, and we've been working on Xi for awhile, is this conviction that we've built talking to many customers that there needs to be another type of cloud. And this type of a cloud has to feel like a public cloud. It has to be architected like a public cloud, be consumed like a public cloud, but it needs to be an extension of my data center. It should not require any changes to my tooling. It should not require any changes to my operational infrastructure, and it should not require lift and shift, and that's a super hard problem. And this problem is something that a chunk of our R and D team has been burning the midnight oil on for the last year and a half. Because look, this is not about taking our current OS, which does a good job of scaling, and plopping it into an Equinix or a third party data center and calling it a hybrid cloud. This is about rebuilding things in the OS so that we can deliver a true hybrid cloud, but at the same time, give that functionality back on premises so that even if you don't have a hybrid cloud, if you just have your own data centers, you'll still need new services like DR. And if you think about it, what are we doing? We're building a full blown multi-tenant virtual network designed in a modern way. Think about this as SDN 2.0, because we have 10 years worth of looking backwards on how GCP has done it, or how Amazon has done it, and now sort of embodying some of that so that we can actually give it as part of this cloud, but do it in a way that's a seamless extension of the data center, and then at the same time, provide new services that have never been delivered before. Everyone obviously does failover and failback in DR; it just takes months to do it.
Our goal is to do it in hours or minutes. But even things such as test. Imagine doing a DR test on demand for you business needs in the middle of the day. And that's the real bar that we've set for Xi that we are working towards in early access later this summer with GA later in the year. And to talk more about this, let me invite some of our core architects working on it, Melina and Rajiv. (rock music) Good to see you guys. >> You're messing up the names again. >> Oh Rajiv, Vinny, same thing, man. >> You need to back up your memory from Xi. >> Yeah, we should. Okay, so what are we going to talk about, Vinny? >> Yeah, exactly. So today we're going to talk about how Xi is pushing the envelope and beyond the state of the art as you were saying in the industry. As part of that, there's a whole bunch of things that we have done starting with taking a private cloud, seamlessly extending it to the public cloud, and then creating a hybrid cloud experience with one-click delight. We're going to show that. We've done a whole bunch of engineering work on making sure the operations and the tooling is identical on both sides. When you graduate from a private cloud to a hybrid cloud environment, you don't want the environments to be different. So we've copied the environment for you with zero manual intervention. And finally, building on top of that, we are delivering DR as a service with unprecedented simplicity with one-click failover, one-click failback. We're going to show you one click test today. So Melina, why don't we start with showing how you go from a private cloud, seamlessly extend it to consume Xi. >> Sounds good, thanks Vinny. Right now, you're looking at my Prism interface for my on premises cluster. In one-click, I'm going to be able to extend that to my Xi cloud services account. I'm doing this using my my Nutanix credential and a password manager. >> Vinny: So here as you notice all the Nutanix customers we have today, we have created an account for them in Xi by default. So you don't have to log in somewhere and create an account. It's there by default. >> Melina: And just like that we've gone ahead and extended my data center. But let's go take a look at the Xi side and log in again with my my Nutanix credentials. We'll see what we have over here. We're going to be able to see two availability zones, one for on premises and one for Xi right here. >> Vinny: Yeah as you see, using a log in account that you already knew mynutanix.com and 30 seconds in, you can see that you have a hybrid cloud view already. You have a private cloud availability zone that's your own Prism central data center view, and then a Xi availability zone. >> Sunil: Got it. >> Melina: Exactly. But of course we want to extend my network connection from on premises to my Xi networks as well. So let's take a look at our options there. We have two ways of doing this. Both are one-click experience. With direct connect, you can create a dedicated network connection between both environments, or VPN you can use a public internet and a VPN service. Let's go ahead and enable VPN in this environment. Here we have two options for how we want to enable our VPN. We can bring our own VPN and connect it, or we will deploy a VPN for you on premises. We'll do the option where we deploy the VPN in one-click. 
>> And this is another small sign or feature that we're building net new as part of Xi, but will be burned into our core Acropolis OS so that we can also be delivering this as a stand alone product for on premises deployment as well, right? So that's one of the other things to note as you guys look at the Xi functionality. The goal is to keep the OS capabilities the same on both sides. So even if I'm building a quote, unquote multi data center cloud, but it's just a private cloud, you'll still get all the benefits of Xi but in house. >> Exactly. And on this second step of the wizard, there's a few inputs around how you want the gateway configured, your VLAN information and routing and protocol configuration details. Let's go ahead and save it. >> Vinny: So right now, you know what's happening is we're taking the private network that our customers have on premises and extending it to a multi-tenant public cloud such that our customers can use their IP addresses, the subnets, and bring their own IP. And that is another step towards making sure the operation and tooling is kept consistent on both sides. >> Melina: Exactly. And just while you guys were talking, the VPN was successfully created on premises. And we can see the details right here. You can track details like the status of the connection, the gateway, as well as bandwidth information right in the same UI. >> Vinny: And networking is just tip of the iceberg of what we've had to work on to make sure that you get a consistent experience on both sides. So Melina, why don't we show some of the other things we've done? >> Melina: Sure, to talk about how we preserve entities from my on-premises to Xi, it's better to use my production environment. And first thing you might notice is the log in screen's a little bit different. But that's because I'm logging in using my ADFS credentials. The first thing we preserved was our users. In production, I'm running AD obviously on-prem. And now we can log in here with the same set of credentials. Let me just refresh this. >> And this is the Active Directory credential that our customers would have. They use it on-premises. And we allow the setting to be set on the Xi cloud services as well, so it's the same set of users that can access both sides. >> Got it. There's always going to be some networking problem onstage. It's meant to happen. >> There you go. >> Just launching it again here. I think it maybe timed out. This is a good sign that we're running on time with this presentation. >> Yeah, yeah, we're running ahead of time. >> Move the demos quicker, then we'll time out. So essentially when you log into Xi, you'll be able to see what are the environment capabilities that we have copied to the Xi environment. So for example, you just saw that the same user is being used to log in. But after the use logs in, you'll be able to see their images, for example, copied to the Xi side. You'll be able to see their policies and categories. You know when you define these policies on premises, you spend a lot of effort and create them. And now when you're extending to the public cloud, you don't want to do it again, right? So we've done a whole lot of syncing mechanisms making sure that the two sides are consistent. >> Got it. And on top of these policies, the next step is to also show capabilities to actually do failover and failback, but also do integrated testing as part of this compatibility. 
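As a rough illustration of the kind of inputs the VPN wizard Melina just stepped through collects, here is what the gateway request might boil down to as data. The field names and the submit helper are assumptions for this sketch, not the actual Xi or Prism API; the point is only that the on-prem VLAN, routing, and subnets travel to the public side unchanged.

    # Hypothetical one-click VPN gateway request; names are illustrative only.
    vpn_gateway_request = {
        "deployment": "nutanix-managed",      # vs. "bring-your-own-vpn"
        "on_prem": {
            "vlan_id": 120,
            "gateway_ip": "10.20.0.1/24",
        },
        "routing": {
            "protocol": "eBGP",               # or static routes
            "asn": 65100,
        },
        "preserve_subnets": ["10.20.0.0/24", "10.21.0.0/24"],
    }

    def submit(request):
        """Stand-in for the one-click action: validate and hand off to the service."""
        assert request["deployment"] in ("nutanix-managed", "bring-your-own-vpn")
        print("Creating VPN gateway with", request["routing"]["protocol"])

    submit(vpn_gateway_request)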
>> So one is you know just the basic job of making the environments consistent on two sides, but then it's also now talking about the data part, and that's what DR is about. So if you have a workload running on premises, we can take the data and replicate it using your policies that we've already synced. Once the data is available on the Xi side, at that point, you have to define a run book. And the run book essentially it's a recovery plan. And that says okay I already have the backups of my VMs in case of disaster. I can take my recovery plan and hit you know either failover or maybe a test. And then my application comes up. First of all, you'll talk about the boot order for your VMs to come up. You'll talk about networking mapping. Like when I'm running on-prem, you're using a particular subnet. You have an option of using the same subnet on the Xi side. >> Melina: There you go. >> What happened? >> Sunil: It's finally working.? >> Melina: Yeah. >> Vinny, you can stop talking. (audience clapping) By the way, this is logging into a live Xi data center. We have two regions West Coat, two data centers East Coast, two data centers. So everything that you're seeing is essentially coming off the mainstream Xi profile. >> Vinny: Melina, why don't we show the recovery plan. That's the most interesting piece here. >> Sure. The recovery plan is set up to help you specify how you want to recover your applications in the event of a failover or a test failover. And it specifies all sorts of details like the boot sequence for the VMs as well as network mappings. Some of the network mappings are things like the production network I have running on premises and how it maps to my production network on Xi or the test network to the test network. What's really cool here though is we're actually automatically creating your subnets on Xi from your on premises subnets. All that's part of the recovery plan. While we're on the screen, take a note of the .100 IP address. That's a floating IP address that I have set up to ensure that I'm going to be able to access my three tier web app that I have protected with this plan after a failover. So I'll be able to access it from the public internet really easily from my phone or check that it's all running. >> Right, so given how we make the environment consistent on both sides, now we're able to create a very simple DR experience including failover in one-click, failback. But we're going to show you test now. So Melina, let's talk about test because that's one of the most common operations you would do. Like some of our customers do it every month. But usually it's very hard. So let's see how the experience looks like in what we built. >> Sure. Test and failover are both one-click experiences as you know and come to expect from Nutanix. You can see it's failing over from my primary location to my recovery location. Now what we're doing right now is we're running a series of validation checks because we want to make sure that you have your network configured properly, and there's other configuration details in place for the test to be successful. Looks like the failover was initiated successfully. Now while that failover's happening though, let's make sure that I'm going to be able to access my three tier web app once it fails over. We'll do that by looking at my network policies that I've configured on my test network. Because I want to access the application from the public internet but only port 80. 
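Before the policy check, a quick sketch of what such a recovery plan amounts to as data. The structure and field names here are assumed for illustration, not the real Xi schema, but they capture the three ingredients just described: an ordered boot sequence, network mappings between the two sites, and the floating IP that becomes the public entry point after a failover.

    # Illustrative recovery-plan structure for the three tier web app.
    recovery_plan = {
        "name": "three-tier-web-app",
        "boot_sequence": [
            ["db-01"],                    # stage 1: database first
            ["app-01", "app-02"],         # stage 2: application servers
            ["web-01"],                   # stage 3: web front end last
        ],
        "network_mappings": {
            "prod-onprem": "prod-xi",
            "test-onprem": "test-xi",
        },
        "floating_ips": {"web-01": "206.80.146.100"},  # reachable after failover
    }

    # A test failover would walk the stages in order, powering on each group
    # only after the previous one is up.
    for stage, vms in enumerate(recovery_plan["boot_sequence"], start=1):
        print(f"stage {stage}: power on {', '.join(vms)}")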
And if we look here under our policies, you can see I have port 80 open to permit. So that's good. And if I needed to create a new one, I could in one click. But it looks like we're good to go. Let's go back and check the status of my recovery plan. We click in, and what's really cool here is you can actually see the individual tasks as they're being completed from that initial validation test to individual VMs being powered on as part of the recovery plan. >> And to give you guys an idea behind the scenes, the entire recovery plan is actually a set of workflows that are built on Calm's automation engine. So this is an example of where we're taking some of power of workflow and automation that Clam has come to be really strong at and burning that into how we actually operationalize many of these workflows for Xi. >> And so great, while you were explaining that, my three tier web app has restarted here on Xi right in front of you. And you can see here there's a floating IP that I mentioned early that .100 IP address. But let's go ahead and launch the console and make sure the application started up correctly. >> Vinny: Yeah, so that .100 IP address is a floating IP that's a publicly visible IP. So it's listed here, 206.80.146.100. And that's essentially anybody in the audience here can go use your laptop or your cell phone and hit that and start to work. >> Yeah so by the way, just to give you guys an idea while you guys maybe use the IP to kind of hit it, is a real set of VMs that we've just failed over from Nutanix's corporate data center into our West region. >> And this is running live on the Xi cloud. >> Yeah, you guys should all go and vote. I'm a little biased towards Xi, so vote for Xi. But all of them are really good features. >> Scroll up a little bit. Let's see where Xi is. >> Oh Xi's here. I'll scroll down a little bit, but keep the... >> Vinny: Yes. >> Sunil: You guys written a block or something? >> Melina: Oh good, it looks like Xi's winning. >> Sunil: Okay, great job, Melina. Thank you so much. >> Thank you, Melina. >> Melina: Thanks. >> Thank you, great job. Cool and calm under pressure. That's good. So that was Xi. What's something that you know we've been doing around you know in addition to taking say our own extended enterprise public cloud with Xi. You know we do recognize that there are a ton of workloads that are going to be residing on AWS, GCP, Azure. And to sort of really assist in the try and call it transformation of enterprises to choose the right cloud for the right workload. If you guys remember, we actually invested in a tool over last year which became actually quite like one of those products that took off based on you know groundswell movement. Most of you guys started using it. It's essentially extract for VMs. And it was this product that's obviously free. It's a tool. But it enables customers to really save tons of time to actually migrate from legacy environments to Nutanix. So we took that same framework, obviously re-platformed it for the multi-cloud world to kind of solve the problem of migrating from AWS or GCP to Nutanix or vice versa. >> Right, so you know, Sunil as you said, moving from a private cloud to the public cloud is a lift and shift, and it's a hard you know operation. But moving back is not only expensive, it's a very hard problem. None of the cloud vendors provide change block tracking capability. 
And what that means is when you have to move back from the cloud, you have an extended period of downtime because there's no way of figuring out what's changing while you're moving. So you have to keep it down. So what we've done with our app mobility product is we have made sure that, one, it's extremely simple to move back. Two, that the downtime that you'll have is as small as possible. So let me show you what we've done. >> Got it. >> So here is our app mobility capability. As you can see, on the left hand side we have a source environment and target environment. So I'm calling my AWS environment Asgard. And I can add more environments. It's very simple. I can select AWS and then put in my credentials for AWS. It essentially goes and discovers all the VMs that are running and all the regions that they're running in. Target environment, this is my Nutanix environment. I call it Earth. And I can add a target environment similarly, IP address and credentials, and we do the rest. Right, okay. Now migration plans. I have Bifrost 1 as my migration plan, and this is how migration works. First you create a plan and then say start seeding. And what it does is takes a snapshot of what's running in the cloud and starts migrating it to on-prem. Once it is on-prem and the difference between the two sides is minimal, it says I'm ready to cutover. At that time, you move it. But let me show you how you'd create a new migration plan. So let me name it, Bifrost 2. Okay so what I have to do is select a region, so US West 1, and target Earth as my cluster. This is my storage container there. And very quickly you can see these are the VMs that are running in US West 1 in AWS. I can select SQL Server one and two, go to next. Right now it's looking at the target Nutanix environment and seeing whether it has enough space or not. Once that's good, it gives me an option. And this is the step where it enables the Nutanix service of change block tracking overlaid on top of the cloud. There are two options: one is automatic, where you'll give us the credentials for your VMs, and we'll inject our capability there. Or you could do it manually. You could copy the command, either in a Windows VM or Linux VM, and run it once on the VM. And change block tracking is enabled from then on. Everything is seamless after that. Hit next. >> And while Vinny's setting it up, he said a few things there. I don't know if you guys caught it. One of the hardest problems in enabling seamless migration from public cloud to on-prem, which makes it harder than the other way around, is the fact that public cloud doesn't have things like change block tracking. You can't get delta copies. So one of the core innovations being built in this app mobility product is to provide that overlay capability across multiple clouds. >> Yeah, and the last step here was to select the target network where the VMs will come up on the Nutanix environment, and this is a summary of the migration plan. You can start it or just save it. I'm saving it because it takes time to do the seeding. I have the other plan which I'll actually show the cutover with. Okay so now this is Bifrost 1. It's ready to cutover. We started it four hours ago. And here you can see there's a SQL Server 003. Okay, now I would like to show the AWS environment. As you can see, SQL Server 003. This VM is actually running in AWS right now. And if you go to the Prism environment, and if my login works, right? So we can go into the virtual machine view, tables, and you see the VM is not there.
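The seed-then-cutover model Vinny described can be sketched as a small loop: a full copy while the source keeps running, deltas shipped via the change block tracking overlay until they are small, and downtime only for the final delta. The helpers below are placeholders under those assumptions, not the actual product code.

    import time

    def changed_blocks_since(last_sync):
        """Placeholder for the change-block-tracking overlay injected into the
        cloud VM; returns the set of blocks modified since the last sync."""
        return set()  # assume no further changes for this sketch

    def seed_and_cutover(copy_blocks, quiesce_vm, threshold=100):
        # Phase 1: seed a full snapshot while the source VM keeps running.
        last_sync = time.time()
        copy_blocks("full-snapshot")

        # Phase 2: keep shipping deltas until they are small enough to cut over.
        while True:
            delta = changed_blocks_since(last_sync)
            if len(delta) <= threshold:
                break
            copy_blocks(delta)
            last_sync = time.time()

        # Phase 3: brief downtime only for the final delta.
        quiesce_vm()
        copy_blocks(changed_blocks_since(last_sync))
        print("ready to power on the target VM on-prem")

    # Example wiring with no-op stand-ins:
    seed_and_cutover(copy_blocks=lambda blocks: None, quiesce_vm=lambda: None)

Without change block tracking on the source side, phase 2 is impossible, which is why moving out of a public cloud normally means keeping the workload down for the whole copy.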
Okay, so we go back to this, and we can hit cutover. So this is essentially telling our system, okay now it the time. Quiesce the VM running in AWS, take the last bit of changes that you have to the database, ship it to on-prem, and in on-prem now start you know configure the target VM and start bringing it up. So let's go and look at AWS and refresh that screen. And you should see, okay so the SQL server is now stopping. So that means it has quiesced and stopping the VM there. If you go back and look at the migration plan that we had, it says it's completed. So it has actually migrated all the data to the on-prem side. Go here on-prem, you see the production SQL server is running already. I can click launch console, and let's see. The Windows VM is already booting up. >> So essentially what Vinny just showed was a live cutover of an AWS VM to Nutanix on-premises. >> Yeah, and what we have done. (audience clapping) So essentially, this is about making two things possible, making it simple to migrate from cloud to on-prem, and making it painless so that the downtime you have is very minimal. >> Got it, great job, Vinny. I won't forget your name again. So last step. So to really talk about this, one of our favorite partners and customers has been in the cloud environment for a long time. And you know Jason who's the CTO of Cyxtera. And he'll introduce who Cyxtera is. Most of you guys are probably either using their assets or not without knowing their you know the new name. But is someone that was in the cloud before it was called cloud as one of the original founders and technologists behind Terremark, and then later as one of the chief architects of VMware's cloud. And then they started this new company about a year or so ago which I'll let Jason talk about. This journey that he's going to talk about is how a partner, slash customer is working with us to deliver net new transformations around the traditional industry of colo. Okay, to talk more about it, Jason, why don't you come up on stage, man? (rock music) Thank you, sir. All right so Cyxtera obviously a lot of people don't know the name. Maybe just give a 10 second summary of why you're so big already. >> Sure, so Cyxtera was formed, as you said, about a year ago through the acquisition of the CenturyLink data centers. >> Sunil: Which includes Savvis and a whole bunch of other assets. >> Yeah, there's a long history of those data centers, but we have all of them now as well as the software companies owned by Medina capital. So we're like the world's biggest startup now. So we have over 50 data centers around the world, about 3,500 customers, and a portfolio of security and analytics software. >> Sunil: Got it, and so you have this strategy of what we're calling revolutionizing colo deliver a cloud based-- >> Yeah so, colo hasn't really changed a lot in the last 20 years. And to be fair, a lot of what happens in data centers has to have a person physically go and do it. But there are some things that we can simplify and automate. So we want to make things more software driven, so that's what we're doing with the Cyxtera extensible data center or CXD. And to do that, we're deploying software defined networks in our facilities and developing automations so customers can go and provision data center services and the network connectivity through a portal or through REST APIs. >> Got it, and what's different now? 
I know there's a whole bunch of benefits with the integrated platform that one would not get in the traditional kind of on demand data center environment. >> Sure. So one of the first services we're launching on CXD is compute on demand, and it's powered by Nutanix. And we had to pick an HCI partner to launch with. And we looked at players in the space. And as you mentioned, there's actually a lot of them, more than I thought. And we had a lot of conversations, did a lot of testing in the lab, and Nutanix really stood out as the best choice. You know Nutanix has a lot of focus on things like ease of deployment. So it's very simple for us to automate deploying compute for customers. So we can use Foundation APIs to go configure the servers, and then we turn those over to the customer, which they can then manage through Prism. And something important to keep in mind here is that, you know, this isn't a managed service. This isn't infrastructure as a service. The customer has complete control over the Nutanix platform. So we're turning that over to them. It's connected to their network. They're using their IP addresses, you know, their tools and processes to operate this. So it was really important for the platform we picked to have a really good self-service story for things like, you know, lifecycle management. So with one-click upgrade, customers have total control over patches and upgrades. They don't have to call us to do it. You know they can drive that themselves. >> Got it. Any other final words around, like, what do you see of the partnership going forward? >> Well you know I think this would be a great platform for Xi, so I think we should probably talk about that. >> Yeah, yeah, we should talk about that separately. Thanks a lot, Jason. >> Thanks. >> All right, man. (audience clapping) So as we look at the full journey now between obviously from invisible infrastructure to invisible clouds, you know there is one thing though to take away beyond many updates that we've had so far. And the fact is that everything that I've talked about so far is about completing a full blown true IaaS stack, all the way from compute to storage, to virtualization, containers to network services, and so forth. But every public cloud, a true cloud in that sense, has a full blown layer of services that sits on top, either for traditional workloads or for new workloads, whether it be machine-learning, whether it be big data, you know, name it, right? And in the enterprise, if you think about it, many of these services are being provisioned or provided through a bunch of our partners. Like we have partnerships with Cloudera for big data and so forth. But then based on some customer feedback and a lot of attention to what we've seen play out in the industry, just like AWS, and GCP, and Azure, it's time for Nutanix to have an opinionated view of the PaaS stack. It's time for us to kind of move up the stack with our own offering that obviously adds value but provides some of our core competencies in data and takes it to the next level. And it's in that sense that we're actually launching Nutanix Era to simplify one of the hardest problems in enterprise IT, and short of saving you from true Oracle licensing, it solves various other Oracle problems by truly simplifying databases, much like what RDS did on AWS. Imagine enterprise RDS on demand, where you can provision and lifecycle manage your database with one click.
And to talk about this powerful new functionality, let me invite Bala and John on stage to give you one final demo. (rock music) Good to see you guys. >> Yep, thank you. >> All right, so we've got lots of folks here. They're all anxious to get to the next level. So this demo, really rock it. So what are we going to talk about? We're going to start with say maybe some database provisioning? Do you want to set it up? >> We have one dream, Sunil, one single dream to share with you, that is, what Nutanix is today for IT apps, we want to recreate that magic for devops and give those weekends and that freedom back to DBAs. >> Got it. Let's start with, what, provisioning? >> Bala: Yep, John. >> Yeah, we're going to get into provisioning. So provisioning databases inside the enterprise is a significant undertaking that usually involves a myriad of resources and could take days. It doesn't get any easier after that for the long-term maintenance, with things like upgrades and environment refreshes and so on. Bala and team have been working on this challenge for quite a while now. So we've architected Nutanix Era to cater to these enterprise use cases and make it one-click like you said. And Bala and I are so excited to finally show this to the world. We think it's actually Nutanix's best kept secret. >> Got it, all right man, let's take a look at it. >> So we're going to be provisioning a sales database today. It's a four-step workflow. The first part is choosing our database engine. And since it's our sales database, we want it to be highly available. So we'll do a two node RAC configuration. From there, it asks us where we want to land this service. We can either land it on an existing service that's already been provisioned, or if we're starting net new or for whatever reason, we can create a new service for it. The key thing here is we're not asking anybody how to do the work, we're asking what work you want done. And the other key thing here is we've architected this concept called profiles. So you tell us how much resources you need as well as what network type you want and what software revision you want. This is actually controlled by the DBAs, the compute administrators, and the network administrators, so they can set their standards, and the requester doesn't have to be a DBA. >> Sunil: Got it, okay, let's take a look. >> John: So if we go to the next piece here, it's going to personalize the database. The key thing here, again, is that we're not asking you how many data files you want or anything in that regard. So we're going to be provisioning this to Nutanix's best practices. And the key thing there is, just like these PaaS services, you don't have to read dozens of pages of best practice guides, it just does what's best for the platform. >> Sunil: Got it. And so these are a multitude of provisioning steps that normally one would take I guess hours if not days to provision an Oracle RAC database. >> John: Yeah, across multiple teams too. So if you think about the lifecycle, especially if you have onshore and offshore resources, I mean this might even be longer than days. >> Sunil: Got it. And then there are a few steps here, and we'll lead into potentially the Time Machine construct too? >> John: Yeah, so since this is a critical database, we want data protection. So we're going to be delivering that through a feature called Time Machines.
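Before moving on to data protection, here is a rough sketch of what the request John just assembled boils down to. The field names and the create_database helper are assumptions for illustration, not Era's API; the shape is the point: the requester states what they want, and the how comes from admin-owned profiles.

    # Illustrative Era-style provisioning request.
    provision_request = {
        "engine": "oracle",
        "topology": "2-node-rac",                    # highly available sales database
        "compute_profile": "gold-8vcpu-64gb",        # defined by the compute admin
        "network_profile": "prod-db-vlan",           # defined by the network admin
        "software_profile": "oracle-12.2-psu-jan",   # defined by the DBA
        "time_machine_sla": "default",
    }

    def create_database(request):
        """Stand-in for the one-click call; a real service would validate the
        profiles and kick off the automation tasks shown in the demo."""
        print(f"Provisioning {request['topology']} {request['engine']} database "
              f"using profile {request['software_profile']}")

    create_database(provision_request)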
We'll leave this at the defaults for now, but the key thing to note here is we've got SLAs that deliver both continuous data protection as well as telescoping checkpoints for historical recovery. >> Sunil: Got it. So that's provisioning. We've kicked off Oracle, what, a two node database and so forth? >> John: Yep, a two node database. So we've got a handful of tasks that this is going to automate. We'll check back in in a few minutes. >> Got it. Why don't we talk about the other aspects then, Bala, maybe around, one of the things that, you know, and I know many of you guys have seen this, is the fact that if you look at databases, especially Oracle, but in general even SQL and so forth, is the fact that, look, if you really simplified it to a developer, it should be as simple as I copy my production database, and I paste it to create my own dev instance. And whenever I need it, I need to obviously do it the opposite way, right? So that was the goal that we set out for ourselves to actually deliver this new PaaS service around Era for our customers. So you want to talk a little bit more about it? >> Sure Sunil. If you look at most of the data management functionality, it's pretty much like flavors of copy paste operations on database entities. But the trouble is the seemingly simple, innocuous operations of our daily lives become the most dreaded, complex, long running, error prone operations in the data center. So we actually planned to tame this complexity and bring consumer grade simplicity to these operations, and also make these clones extremely efficient without compromising the quality of service. And the best part is, the customers can enjoy these services not only for databases running on Nutanix, but also for databases running on third party systems. >> Got it. So let's take a look at this functionality of, I guess, snapshotting, clone and recovery that you've now built into the product. >> Right. So now if you see, the core feature of this whole product is something we call Time Machine. Time Machine lets the database administrators actually capture the database state to the granularity of seconds, and also lets them create clones, refresh them to any point in time, and also recover the databases if the databases are running on the same Nutanix platform. Let's take a look at the demo with the Time Machine. So here is our customer relationship management database, which is about 2.3 terabytes. If you see, the Time Machine has been active about four months, and the SLA has been set for continuous data protection of 30 days, and then it slowly tapers off to 30 days of daily backups and weekly backups and so on, so forth. On the right hand side, you will see different colors. The green color is pretty much your continuous data protection region, as we call it. That lets you go back to any point in time, to the granularity of seconds, within those 30 days. And then the discrete recovery points let you go back to any snapshot of the backup that is maintained there kind of stuff. In a way, you see this Time Machine is pretty much like your modern day car with self driving ability. All you need to do is set the goals, and the Time Machine will do whatever is needed to reach up to the goal kind of stuff. >> Sunil: So why don't we quickly do a snapshot? >> Bala: Yeah, sometimes you need to create a snapshot for backup purposes, so Time Machine has manual controls. All you need to do is give it a snapshot name.
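As an aside, the SLA behavior Bala described can be pictured as a simple lookup: if the requested recovery time falls inside the continuous window, logs can be replayed to the exact second; older times fall back to the nearest retained snapshot. The window lengths below are assumed for the sketch, not Era's actual defaults.

    from datetime import datetime, timedelta

    # Illustrative SLA: 30 days of continuous protection, then daily snapshots
    # out to 90 days (numbers assumed for this sketch).
    CONTINUOUS_WINDOW = timedelta(days=30)
    DAILY_WINDOW = timedelta(days=90)

    def recovery_method(requested: datetime, now: datetime) -> str:
        age = now - requested
        if age <= CONTINUOUS_WINDOW:
            # Log replay can hit the requested time to the granularity of seconds.
            return f"continuous: replay logs to {requested.isoformat()}"
        if age <= DAILY_WINDOW:
            # Fall back to the nearest retained daily snapshot.
            return f"discrete: restore snapshot taken on {requested.date().isoformat()}"
        return "outside retention"

    now = datetime(2018, 5, 9, 12, 0)
    print(recovery_method(datetime(2018, 5, 9, 3, 2), now))  # within continuous window
    print(recovery_method(datetime(2018, 3, 1), now))        # falls back to a daily snapshot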
And then you have the ability to actually persist this snapshot data into a third party or object store so that your durability and global data access requirements are met kind of stuff. So we kick off a snapshot operation. Let's look at what it is doing. If you see what the snapshot operation is going through, there is a step called quiescing the databases. Basically, we're using application-centric APIs, and here it's actually Oracle's RMAN. We are using Oracle's RMAN to quiesce the database and perform application consistent storage snapshots with Nutanix technology. Basically, we are fusing the application-centric APIs with the Nutanix platform while quiescing it. Just for a data point, if you have to use traditional technology and create a backup for this kind of size, it takes over four to six hours, whereas on Nutanix it's going to be a matter of seconds. So it almost looks like the snapshot is done. This is a fully consistent backup. You can pretty much use it for database restore kind of stuff. Maybe we'll do a clone demo and see how it goes. >> John: Yeah, let's go check it out. >> Bala: So for clone, again through the simplicity of a copy-paste style command, all you need to do is pick the time of your choice, maybe around three o'clock in the morning today. >> John: Yeah, let's go with 3:02. >> Bala: 3:02, okay. >> John: Yeah, why not? >> Bala: You select the time, and all you need to do is click on the clone. And most of the inputs that are needed for the clone process will be defaulted intelligently by us, right? And you have to make two choices, that is, do you want this clone to be created on a brand new database server VM, or do you want to place it on your existing server? So we'll go with a brand new server, and then all you need to do is just give the password for your new clone database, and then clone it kind of stuff. >> Sunil: And this is an example of personalizing the database so a developer can do that. >> Bala: Right. So here is the clone kicking in. And what this is trying to do is actually it's creating a database VM and then registering the database, restoring the snapshot, and then recovering the logs up to three o'clock in the morning, like what we just saw, and then actually giving back the database to the requester kind of stuff. >> Maybe one final thing, John. Do you want to show us the provisioned database that we kicked off? >> Yeah, it looks like it just finished a few seconds ago. So you can see all the tasks that we were talking about here before, from creating the virtual infrastructure, and provisioning the database infrastructure, and configuring data protection. So I can go access this database now. >> Again, just to highlight this, guys. What we just showed you is an Oracle two node instance provisioned live in a few minutes on Nutanix. And this is something that even in a public cloud, when you go to RDS on AWS or anything like that, you still can't provision Oracle RAC, by the way, right? But that's what you've seen now, and that's what the power of Nutanix Era is. Okay, all right? >> Thank you. >> Thanks. (audience clapping) >> And one final thing around, obviously when we're building this, it's built as a PaaS service. It's not meant just for operational benefits. And so one of the core design principles has been around being API first. You want to show that a little bit? >> Absolutely, Sunil, this whole product is built on an API first architecture.
Pretty much what we have seen today and all the functionality that we've been able to show today, everything is built on REST APIs, and you can pretty much integrate with a ServiceNow architecture and deliver that devops experience for your customers. We do have a plan for a full fledged self-service portal eventually, and then make it a proper service. >> Got it, great job, Bala. >> Thank you. >> Thanks, John. Good stuff, man. >> Thanks. >> All right. (audience clapping) So with Nutanix Era being this one-click provisioning, lifecycle management powered by APIs, I think what we're going to see is the fact that a lot of the products that we've talked about so far, while you know I've talked about things like Calm, Flow, AHV functionality that have all been released in 5.5, 5.6, a bunch of the other stuff is also coming shortly. So I would strongly encourage you guys to go spend time with them; you know, most of these products that we've talked about, in fact, all of the products that we've talked about, are going to be in the breakout sessions. We're going to go deep into them in the demos as well as in the pods. So spend some quality time not just on the stuff that's been shipping but also stuff that's coming out. And so one thing to keep in mind as a takeaway is that we're doing this all obviously with freedom as the goal. But from the products side, it has to be driven by choice, whether the choice is based on platforms, on hypervisors, or on consumption models, and even though we're starting with the management plane, eventually we'll go to the data plane of how do I actually provide a multi-cloud choice as well. And so as we wrap things up and we look at the five freedoms that Ben talked about, don't forget the sixth freedom, especially after six to seven p.m., where the whole goal, as a Nutanix family and extended family, is to make sure we mix it up. Okay, thank you so much, and we'll see you around. (audience clapping) >> PA Announcer: Ladies and gentlemen, this concludes our morning keynote session. Breakouts will begin in 15 minutes. ♪ To do what I want ♪

Published Date : May 9 2018


Dustin Kirkland, Canonical | KubeCon 2017


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering KubeCon and CloudNativeCon 2017. Brought to you by: Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Hey, welcome back everyone. And we're live here in Austin, Texas. This is theCUBE's exclusive coverage of the Cloud Native conference and KubeCon for Kubernetes Conference. This is for the Linux Foundation. This is theCUBE. I'm John Furrier, the co-founder of SiliconANGLE Media. My co-host, Stu Miniman. Our next guest is Dustin Kirkland, Vice President of Product at Ubuntu, Canonical. Welcome to theCUBE. >> Thank you, John. >> So you're the product guy. You get the keys to the kingdom, as they would say in the product circles. Man, what a best time to be-- >> Dustin: They always say that. I don't think I've heard that one. >> Well, the product guys are, well all the action's happening on the product side. >> Dustin: We're right in the middle of it. >> 'Cause you got to have a road map. You got to have a 20 mile stare on the next horizon while you go up into the pasture and deliver value, but you always got to be watching for it, always making decisions on what to do, when to ship product. Now you got the Cloud, things are happening at a very accelerated rate. And then you got to bring it out to the customers. >> That's right. >> You're livin' on both sides of the world. You got to look inside, you got to look outside. >> All three. There's the marketing angle too, which is what we're doing here right now. So there's engineering, sales, and this is the marketing. >> Alright so where are we with this? Because now you guys have always been on the front lines of open source. Great track record. Everyone knows the history there. What are the new things? What's the big aha moment at this event, the largest they've had ever. They're not even three years old. Why is this happening? >> I love seeing these events in my hometown Austin, Texas. So I hope we keep coming back. The aha moment is how application development is fundamentally changing. Cloud Native is the title of the Cloud Native Computing Foundation and the CloudNativeCon conference here. What does Cloud Native mean? It's a different form of writing applications. Just before we were talking about systems programming, right? That's not exactly Cloud Native. Cloud Native programming is writing to API's that are Cloud exposed API's, integrating with software as a service. Creating applications that have no intelligence, whatsoever, about what's underneath them, right? But taking advantage of that and all the ways that you would want and expect in a modern application. Fault tolerance, automatic updates, hyper security. Just security, security, security. That is the aha moment. The way applications are being developed is fundamentally changing. >> Interesting perspective we had on earlier. Lew Tucker from Cisco, (mumbles) in the (mumbles) History Museum, CTO at Cisco, and we have Kelsey Hightower, co-chair for this conference and also very active in the community. Yet, in the perspective, and I'll over simplify and generalize it, but basically was: Hey, that's been going on for 30 years, it's just different now. Tell us the old way and new way. Because the old way, you kind of described it: you're going to build your own stuff, full stack, building all parts of the stack and do a lot of stuff that you didn't want to do. And now you have more, especially time on your hands if DevOps and infrastructure as code starts to happen. 
But doesn't mean that networking goes away, doesn't mean storage goes away, that some new lines are forming. Describe that dynamic of what's new, and in the new way, what changes from the old way? >> Virtualization has brought about a different way of thinking about resources. Be those compute resources, chopping CPU's up into virtual CPU's, that's KVM, VMware. You mentioned network and storage. Now we virtualized both of those into software defined storage and software defined networking, right? We have things like OpenStack that brings that all together from an infrastructure perspective, and we now have Kubernetes that brings that to bear from an application perspective. Kubernetes helps you think about applications in a different way. I said that paradigm has changed. It's Kubernetes that helps implement that paradigm. So that developers can write an application to a container orchestrator like Kubernetes and take advantage of many of the advances we've made below that layer in the operating system and in the Cloud itself. So from that perspective the game has changed and the way you write your application is not the same as the monolithic app we might have written on an IBM or a traditional system. >> Dustin, you say monolithic app versus oh my gosh the multi layered cake that we have today. We were talking about the keynote this morning where CNCF went from four projects to 14 projects, you got Kubernetes, you got things like DSDU on top. Help us tease that out a little bit. What are the ones that, where's Canonical engaged? What are you hearing from customers? What are they excited about? What are they still looking for? >> In a somewhat self-serving way, I'll use this opportunity to explain exactly what we do in helping build that layered cake. It starts with the OS. We provide a great operating system, Ubuntu, that every developer would certainly know and understand and appreciate. That's the kernel, that's the systemd, that's the hypervisor, that's all the storage and drivers that makes an operating system work well on hardware. Lots of hardware: IBM, Dell, HP, Intel, all the rest. As well as in virtual machines, the public Clouds, Microsoft, Amazon, Google, VMware and others. So, we take care of that operating system perspective. Within the CNCF and within the Kubernetes ecosystem, it really starts with the Kubernetes distribution. So we provide a Kubernetes distribution, we call it Canonical's Distribution of Kubernetes, CDK. Which is open source Kubernetes with security patches applied. That's it. No special sauce, no extra proprietary extensions. It is open source Kubernetes. The reference platform for open source Kubernetes, 100% conformant. Now, once you have Kubernetes, as you say, "What are you hearing from customers?" We hear a lot of customers who want a Kubernetes. Once they have a Kubernetes, the next question is: "Now what do I do with it?" If they have applications that their developers have been writing to Google's Kubernetes Engine, GKE, or Amazon's Kubernetes Engine, the new one announced last week at re:Invent, AKS. Or Microsoft's Kubernetes Engine, Microsoft-- >> Microsoft's AKS, Amazon's EKS. A lot of TLA's out there, always. >> Thank you for the TLA dissection. If you've written the applications already, having your own Kubernetes is great, because then your applications simply port and run on that. And we help customers get there. However, if you haven't written your first application, that's where actually, most of the industry is today. 
They want a Kubernetes, but they're not sure why. So, to that end, we're helping bring some of the interesting workloads that exist, open source workloads, and putting those on top of Canonical Kubernetes. Yesterday, we press released a new product from Canonical, launched in conjunction with our partners at Rancher Labs, which is the Cloud Native platform. The Cloud Native platform is Ubuntu plus Kubernetes plus Rancher. That combination, we've heard from customers and from users of Ubuntu inside and out. Everyone's interested in a developer work flow that includes open-source Ubuntu, open-source Kubernetes and open-source Rancher, which really accelerates the velocity of development. And that end solution provides exactly that and it helps populate that Kubernetes with really interesting workloads. >> Dustin, so we know Sheng, Shannon and the team, they know a thing or two about building stacks with open source. We've talked with you many times, OpenStack. Give us a little bit of compare and contrast, what we've been doing with OpenStack with Canonical, very heavily involved, doing great there versus the Cloud Native stacking. >> If you know Shannon and Sheng, I think you can understand and appreciate why Mark, myself and the rest of the Canonical team are really excited about this partnership. We really see eye-to-eye on open source principles first. Deliver great open source experiences first. And then taking that to market with a product that revolves around support. Ultimately, developer adoption up front is what's important, and some of those developer applications will make their way into production in a mission critical sense. Which opens up support opportunities for both of us. And we certainly see eye-to-eye from that perspective. What we bring to bear is the Ubuntu ecosystem of developers. The Ubuntu OpenStack infrastructure as a service, where we've seen many of the world's largest organizations deploying their OpenStacks. Doing so on Ubuntu and with Ubuntu OpenStack. With the launch of Kubernetes and Canonical Kubernetes, many of those same organizations are running their own Kubernetes alongside OpenStack. Or, in some cases, on top of OpenStack. In a very few cases, instead of OpenStack, in very special cases, often at the Edge or in certain tiny Cloud or micro Cloud scenarios. In all of these we see Rancher as a really, really good partner in helping to accelerate that developer work flow. Enabling developers to write code, commit code to a GitHub repository, with full GitHub integration. Authenticate against an Active Directory with full RBAC controls. Everything that you would need in an enterprise to bring that application to bear, from concept, to development, to test, into production, and then the life cycle, once it gains its own life in production. >> What about the impact of customers? So, I'm an IT guy or I'm an architect and man, all this new stuff's comin' at me. I love my open source, I'm happy with my space. I don't want to touch it, don't want to break it, but I want to innovate. This whole world can be a little bit noisy and new to them. How do you have that conversation with that potential customer or customer where you say, look, we can get there. Use your app team, here's what you want to shape up to be, here's service meshes and pluggable, whoa, pluggable (mumbles)! So, again, how do you simplify that when you have conversations? What's the narrative? What's the conversation like? 
>> Usually our introduction into the organization of a Fortune 500 company is by the developers inside of that company who already know Ubuntu, who already have some experience with Kubernetes or have some experience with Rancher or any of those other-- >> So it's a bottoms up? >> Yeah, it's bottoms up. Absolutely, absolutely. The developer network around Ubuntu is far bigger than the organization that is Canonical. So that helps us with the intro. Once we're in there, and the developers write those first few apps, we do get the introductions to their IT director who then wants that comfy blanket. Customer support, maybe 24 by seven-- >> What's the experience like? Is it like going to the airport, go through TSA, and you got to take your shoes off, take your belt off. What kind of inspection, what kind of culture is it, because they want to move fast, but they got to be sure. There's always been the challenge when you have the internal advocate saying, "Look, if we want to go this way "this is going to be more the reality for companies." Developers are now major influencers. Not just some, here's the product, we made a decision and they ship it to 'em, it's shifted. >> If there's one thing that I've learned in this sort of product management assignment, I'm an engineer by trade, but as a product manager now for almost five years, is that you really have to look at the different verticals, and some verticals move at vastly different paces than other verticals. When we are in the telco space, we're in RFI's, requests for a quote or a request for information that may last months, nine months. And then go through entering into a procurement process that may last another nine months. And we're talking about 18 months in an industry here that is spinning up, we're talking about how fast this goes, which is vastly different than the work we do in Silicon Valley, right? With some of the largest dot-coms in the world that are built on Ubuntu, maybe on AWS or elsewhere. Their adoption curve is significantly different and the procurement angle is really different. What they're looking to buy often on the US West Coast is not so much support, but they're looking to guide your roadmap. We offer for customers of that size and scale a different set of products, something we call feature sponsorships, where those customers are less interested in 24 by seven telephone support and far more interested in sponsoring certain features into Ubuntu itself and helping drive the Ubuntu roadmap. We offer both of those as products and different verticals buy in different ways. We talked to media and entertainment, and the conversation's completely different. Oil and gas, conversation's completely different. >> So what are you doing here? What's the big effort at CloudNativeCon? >> So we've got a great booth and we're talking about Ubuntu as a pretty universal platform for almost anything you're doing in the Cloud. Whether that's on-prem infrastructure as a service, OpenStack. People can pooh-pooh OpenStack and pit OpenStack versus Kubernetes against one another. We cannot see it more differently-- >> Well no, I think it's more that it's got clarity on where the community's lines are, because apps guys are moving off OpenStack, that's natural. It's really found its home, OpenStack very relevant, huge production flow, I talk to Jonathan Bryce about this all the time. There's no pooh-poohing OpenStack. It's not like it's hurting. 
Just to clarify, OpenStack is not going anywhere, it's just that there's been some comments about OpenStack refugees going to (mumbles), but they're going there anyway! Do you agree? >> Yeah I agree, and that choice is there on Ubuntu. So infrastructure as a service, OpenStack's a fantastic platform, platform as a service or Cloud Native through Cloud Native development, Kubernetes is an excellent platform. We see those running side by side. Two racks of systems or a single rack. Half of those machines are OpenStack, half of those are Kubernetes, and the same IT department manages both. We see IT departments that are all in OpenStack. Their entire data center is OpenStack. And we see Kubernetes as one workload inside of that OpenStack. >> How do you see Kubernetes' impact on containers? A lot of people are pooh-poohing containers. But they're not going anywhere either. >> It's fundamental. >> The ecosystem's changing, certainly the roles of each part (mumbles) is exploding. How do you talk about that? What's your opinion on how containers are evolving? >> Containers are evolving, but they've been around for a very long time as well. Kubernetes has helped make containers consumable. And Docker to an extent, before that the work we've done around Linux containers, LXC and LXD, as well. All of those technologies are fundamental to it and it takes tight integration with the OS. >> Dustin, so I'm curious. One of the big challenges I hear that you face is the proliferation of deployments for customers. It's not just data center or even Cloud. Edge is now a very big piece of it. How do you think that containers help enable a little bit of that Cloud Native to go there, but what kind of stresses does that put on your product organization? >> Containers are adding fuel to the fire on both the Edge and the back end Cloud. What's exciting to me about the Edge is that every Edge device, every connected device is connected to something. What's it connected to, a Cloud somewhere. And that can be an OpenStack Cloud or a Kubernetes Cloud, that can be a public Cloud, that could be a private implementation of that Cloud. But every connected device, whether it's a car or a plane or a train or a printer or a drone, it's connected to something, it's connected to a bunch of services. We see containers being deployed on Ubuntu on those Edge devices, as the packaging format, as the application format, as the multi-tenancy layer that keeps one application from DOSing or attacking or being protected from another application on that Edge device. We also see containers running the microservices in the Cloud on Ubuntu there as well. The Edge to me, is extremely interesting in how it ties back to the Cloud and to be transparent here, Canonical's strategy and Canonical's play is actually quite strong here, with Ubuntu providing quite a bit of consistency across those two layers. So developers working on those applications on those devices, are often sitting right next to the developers working on those applications in the Cloud and both of them are seeing Ubuntu helping them go faster. >> Bottom line, where do you see the industry going and how do you guys fit into the next three years, what's your prediction? >> I'm going to go right back to what I was saying right there. That the connection between the Edge and the Cloud is our angle right there, and there is nothing that's stopping that right now. >> We were just talking with Joe Beda and our view is if it's a ubiquitous computing world, everything's an Edge. >> Yeah, that's right. 
That's exactly right. >> (mumbles) is an Edge. A light in a house is an Edge with a processor in it. >> So I think the data centers are getting smarter. You wanted a prediction for next year: The data center is getting smarter. We're seeing autonomous data centers. We see data centers using Metal as a Service, MAAS, to automatically provision those systems and manage those systems in a way that makes hardware look like a Cloud. >> AI and IOT, certainly two topics that are really hot trends that are very relevant, as changing storage and networking, those industries have to transform. Amazon's tele (mumbles), everything like Lambda and serverless, you're starting to see the infrastructure as code take shape. >> And that's what sits on top of Kubernetes. What's driving Kubernetes adoption are those AI, machine learning, artificial intelligence workloads. A lot of media and transcoding workloads are taking advantage of Kubernetes every day. >> Bottom line, that's software. Good software, smart software. Dustin, thanks so much for coming on theCube. We really appreciate it. Congratulations. Continued developer success. Good to have a great ecosystem. You guys have been successful for a very long time. As the world continues to be democratized with software, as it gets smarter, more pervasive, and Cloud computing, grid computing, Unigrid. Whatever it's called, it is all done by software and the Cloud. Thanks for coming on. It's theCube live coverage from Austin, Texas, here at KubeCon and CloudNativeCon 2017. I'm John Furrier, with Stu Miniman. We'll be back with more after this short break. (lively music)

Published Date : Dec 7 2017



Lew Tucker, Cisco | KubeCon 2017


 

>> Announcer: Live from Austin, Texas, it's theCUBE. Covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. >> Welcome back everyone, this is theCUBE live in Austin, Texas for our exclusive coverage at the CloudNative Conference and KubeCon with Kubernetes via theCUBE. theCUBE, we're live, and 8 years running, I'm John Furrier, the founder of SiliconANGLE Media, with my colleague, Stu Miniman. And I'm excited to have Cube alumni and distinguished industry legend, Lew Tucker, Vice President and CTO of Cloud Computing at Cisco Systems. Welcome back to theCUBE, great to see you. >> Great to be back, it's one of my favorite shows. >> Lew, we've had many conversations over the years, and it's always great to have you on because you have the cutting-edge perspective, but you have a historical view as well, you've seen many waves of innovation. And obviously you own lots of property in the Computer History Museum, your resume goes on and on. But, you got to admire this community. Three years old, it was you, me and JJ sitting around at OpenStack in Vancouver three and a half years ago, having a beer after the event one of these days, and we were talking about Kubernetes, and we were really riffing on orchestration and kind of shooting the arrow forward, kind of reading the tea leaves. And we were predicting inter-clouding, inter-networking, Cisco core competency, the notion of application developers wanting infrastructure as code. We didn't actually say microservices but we were kind of describing a world that would be microservices, and this awesomeness that's going on with the Cloud. What a ... [Lew] You were right. You were right. >> We were right, it wasn't me, it was the community. This is how communities operate. >> It is. I think that what we're seeing, and particularly in these open source communities, you're getting the best ideas. And therefore, a lot of people are looking at this future space, and then we bring the kids out of the communities, get the projects that we work together on it, and that's how we move it forward. >> You've been a great leader in the community, just want to give you some props for that, you deserve it, but more importantly is just the momentum going on right now. And I want to get your take, you're squinting through the growth, you're looking at the innovation, looking at the big picture, certainly from a Cisco perspective, but also as an industry participant. Where's the action? Obviously containers grew, that tide came in, a lot of boats floated up. We saw microservices boom, then we now, Kubernetes' getting better and better, multiple versions, it's - some say commoditized, some would say more inter-operable. Really, that's the connective tissue for multi-cloud. >> Exactly right. >> Do you see the same thing? Where's the action? >> So, cloud computing is going everywhere now. And so it's natural that we see one of the next phases of this is in the area of multi-cloud. The customers, they are in public cloud, they have private data centers where they want to run similar applications. They don't want to have a completely different environment. What they really want to see is a consistent environment across which they can deploy applications. And that consistent environment also has to have security policies, authentication services, and a lot of these things. 
And to really drive the innovation, what I find interesting is that the services that are coming now out of public cloud, whether it be AI or serverless, event-driven kinds of programming models, enterprises want to connect into them. And so one of the things I think that that leads to is that you're beginning to hear talk now, just beginning to hear it, which is this project called Istio. Which is a service mesh, because what that really allows -- >> John: What's the project name? >> It's called Istio. >> John: Istio. >> Lew: I-S-T-I-O. >> Okay. >> dot I-O. Everything is open source, it's a project that's contributed to by Google, and IBM, and Lyft, and now Cisco's getting involved in it, as well. And what it really plays into is this world of multi-cloud. That now we can actually access services in the public cloud from your own private data center, or from the public cloud, running applications in a public cloud, you can access services that are back in your data center. So it's really about this kind of application-level networking stack, that means that application developers can now off-load all of that heavy work to a service mesh, and therefore that'll accelerate application development. >> So it's interesting, I heard some talk about things like Envoy edge and service proxies, and service proxies have been a nice tool to kind of cobble together old legacy stuff, but now you're seeing stuff go to the next level. This data I heard in the keynote, I want to get your reaction 'cause this kind of jumps out at me. Lyft had created a mesh over hundreds of thousands of services over millions of transactions per second. Lyft. Uber's got some stuff on the monitoring side, Google's donated - This is large scale cloud guys who had to build their own stuff with open source, now contributing all this stuff back. This is the mesh you're talking about, correct? >> This is exactly right, yes. Because what we're seeing is, we've talked about microservices, and Kubernetes is about orchestration of containers. And that has accelerated application development and deploying it. But now the services, each one of those services still has all of this networking stuff they have to deal with. They have to deal with load balancing, they have to deal with retries, they have to deal with authentication. So instead, what is happening now, we're recognizing these common patterns, this is what the community does (mumbles). You see a common pattern, you abstract it, and you push that out into what is known as sidecars now, so that the application developer doesn't have to -- the application doesn't get changed when you need to change, like, 'bring up a couple more services over here' 'put this on a different cloud'. The individual components now are unaffected by that, because all of that work has been offloaded into a service mesh. >> Lew, bring us inside a little bit. Dig into that next level of kind of networking. 'Cause you speak, kind of networking administrator, running around the data center, you get everything from pulling cables to zoning and everything like that. Now it's multi-cloud, multi-service, everything's faster. Through all the architect, the person running it, automation ... We don't have an hour, but give us a little bit about what it means to be a networking person these days. >> Well, it's interesting, because one of the things that we know application developers did not want to become, is to be a network engineer. 
And yet to do a lot of what they had to do, they had to learn a lot of those skills. And instead they would rather set things up by policy. For example, they would like to be able to say, 'if I'm deploying now the version two of my application', it's a classic thing we talk about in this deal, 'the next version we want to just direct' '5% of the traffic to it, make sure it's okay' 'before we turn over the whole thing.' You should be able to do that at the application level, and through a service mesh that is built in networking at the application level, the application guys can do it. Now the role of the network engineer is still the same, they have to provide the basic infrastructure to allow that to happen. And for example, a lot of the infrastructure now is extending the Cloud from public cloud through the cloud VPN services that they have back into the data center. So Cisco, for example, is putting technologies that are running at AWS and at Google, and Azure, that allows that to come back into the data center. So we can run Cisco virtual routers in the Cloud, connected back up in the data center. So their standard networking policy that the networking engineers really want to see enforced, they can be assured that that's enforced, and then Istio layers on top of it. >> And that's decoupled from the application. >> Right. Right. >> This is what we've been talking about since 2010, our eighth year of theCUBE, infrastructure as code. This is what DevOps was all about, and now it's evolving mainstream. >> Absolutely right. You really want infrastructure to be as boring as possible. And capable and then secure. And now give a lot more control over to the application developer. And we also know, right now it's really based largely on Kubernetes, it's a great example, but that will connect into virtual machines, it will connect into legacy services. So all of this has to do with connecting all of those pieces that are today in an enterprise, moving to a public cloud. And that transition doesn't happen wholesale. You move a couple over. >> Lew, one thing. I want you to look back, John talked about - We interviewed a bunch of years in OpenStack. What's your take on the role of OpenStack today, is there still a role in OpenStack, and how's that kind of compare/contrast to what we're doing here? >> Happy to answer, because I actually am on both boards, I'm on the CNCF board and I'm on the OpenStack board, and I have contributors on my teams to both efforts across the board. And I think that the role that we're seeing of OpenStack is that OpenStack is evolving also, and it's becoming more embracive and it's becoming about open infrastructure. And it's really about, how do you create these open infrastructure plays. So it is about virtual machines, and containers, and bare metal, and setting up of those services. So Kubernetes works just great on top of OpenStack, and so now people get to have a choice, because one of the hard things I think for, mostly enterprise developers and everything else, is that the pace is changing so fast. So how do they try out some of the newer technologies that still can be connected back into the existing legacy systems? And that's why I think that we're seeing the role for OpenStack is to make that, you can put it with virtual machines, you can stand them up in there, and you can have the same virtual machines essentially running in the Cloud. 
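To make the '5% of the traffic' idea Lew describes above a little more concrete, here is a minimal, hedged sketch of the kind of weighted-routing policy a service mesh like Istio lets you declare, applied through the official kubernetes Python client. The service name, subsets, and namespace are hypothetical placeholders, and it assumes a cluster with Istio's v1alpha3 APIs installed and matching DestinationRule subsets already defined; it is an illustration of the pattern, not code from this interview.

# Hedged sketch of the canary pattern described above: send 5% of traffic to
# v2 of a service and 95% to v1, by policy, without changing application code.
# Names are hypothetical; assumes Istio v1alpha3 CRDs and existing subsets.
from kubernetes import client, config

config.load_kube_config()  # point at the target cluster
api = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-canary"},
    "spec": {
        "hosts": ["reviews"],  # hypothetical in-mesh service
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 95},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 5},
            ]
        }],
    },
}

# Create the VirtualService; Istio's sidecars then enforce the 95/5 split.
api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1alpha3",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)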
>> So virtual machines versus other approaches has come up as a trade off, we heard in the keynote, between cost - I mean, speed, and security. Security's super important. So let me get your thoughts on how that plays out, because we've got the pluggable logger tech, which is another big theme we heard in the keynote, which is essentially just meaning, having a very focused, leverageable piece of code that can be connected into Kubernetes. But with VM's now, some are saying VM's are slow when you're trying to do security, but you want slow, boring when you need it, but you want speed and secure when you need it, too. How do you get both out of that? >> Without being too geeky in terms of, a virtual machine is emulating an entire computer. And so it looks like a computer, so you're running your traditional applications on top of a virtual machine. The same as they would if they were running on what we call a bare metal machine. So that is by necessity, much heavier. You're bringing around a whole operating system and things like that. Containers -- >> And there's a role for that, too. >> There's absolutely a role for that. >> Now containers? >> But containers, then, are really much more about, it's an application packaging exercise, so that you can say, 'I'm going to run this application, I just want all its dependencies packaged up.' I'll assume there's an operating system there. I'm going to count on the fact that there's a single operating system. So you can spin up containers, they're much more lightweight, much more quickly. And now there's even things such as Kata Containers that are coming out of Intel, which is now merging those technologies. >> Male: The Clear Containers. >> Clear Containers, they came originally as Clear Containers, and now it's merging, because we're saying, 'we want the security and the protection that you get' 'with a virtual machine, tied into, like the VT-x' 'instruction set, in the hardware'. So you can get that level of security, assurances, but now you get the speed of containers. So, I think we're continuing to see the whole community evolving in this direction and making things easier for application developers, faster to do. They're increasing in scale, so management and orchestration - we talked about that three years ago, that that would be a big issue, and guess what? Of course it is. That's exactly what Kubernetes is addressing. >> And the role of the data is going to be critical, this is where a lot of people in the enterprise that we talked to, love the story, they love the narrative, but they're hearing things that they've never heard before and they kind of, slow down. So I'd like you to take a minute, Lew, and explain to the person watching, CIO, chief architect, network guy, whatever - what the hell is this Kubernetes hubbub about? What is Kubernetes, from your perspective? How would you wrap that up and describe the, what it is, and the impact to the customer? >> So, formally it's an orchestration of the container. So what that means is that, when you're developing an application, if you want it to be resilient, you want several instances of that application running, and you want traffic, then, to be load-balanced across it. Kubernetes provides that level of orchestration, to make sure there's always three running. If one fails, it can bring up another one. And it can do that completely automated. So it's a layer that really manages the deployment of containers. 
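To ground that description, here is a minimal sketch of what 'make sure there's always three running' looks like when expressed against the Kubernetes API, using the official kubernetes Python client. The deployment name, labels, and image are hypothetical placeholders; it is a generic illustration of the orchestration model, not anything from this interview.

# Minimal sketch of the orchestration behavior described above: declare that
# three replicas of a containerized app should always be running, and let
# Kubernetes replace any instance that fails. Names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three instances alive, restarting failed ones
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo-app", image="nginx:1.13")]
            ),
        ),
    ),
)

# Submit the desired state; the control plane then reconciles toward it.
apps.create_namespaced_deployment(namespace="default", body=deployment)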
As an application developer, you still write your application, you package it up into a container, could be a Docker container, and then you deploy it using Kubernetes in there. What is interesting, and I think that this is what we've recognized in this last year, I think, is that Kubernetes has a very simple networking model. Which is basically that of having a way to load-balance across multiple containers and keep them running. If you have anything more complicated about different services that you want to talk to from those containers, that may be different places in the universe, we don't have a mechanism for doing that. And everybody was having to write their own. So again, that's where the idea of a service mesh -- >> John: That's where the meshing comes in. >> That's where the mesh ... >> Hundreds and hundreds of services. >> Linkerd has been doing it for a while, Envoy. >> And Lyft and Uber, they had to do it because they had a massive explosion of devices. >> Right, exactly right. And so that's why getting together the code from Lyft and Envoy, adding a control plane to it, which is what Istio really is about, brings that out, too. >> Sounds like an operating system to me, but Lew, I got one more question for you. You mentioned, as you described it, Kubernetes, isn't that auto-scaling? If I'm familiar with AWS, isn't that just auto-scaling? Or is it auto-scaling for application instances? Or is auto-scaling more - defined differently? >> It does do the scaling part, it does the resiliency part, but it has a very simple model for that. And that's why you need to have other - but it's a beginning of that orchestration layer. >> Because at the container level, it has all those inherent problems. >> Right. And it can make sure to keep those containers alive and well, and manage the life cycle. >> John: And that's the difference. >> And that's the real difference. Whereas the auto-scaling from Amazon, as a service, is purely a networking capability then tied into bringing up new instances. >> So this is like auto-scaling on steroids. >> It is. But one of the differences also is that Kubernetes and what we're doing here is all open source. So you can run it anywhere. You don't get, a lot of people are very concerned about being locked in to, it used to be, you were locked into Oracle, or to Microsoft, or Java, on premise, things like that. >> Whatever proprietary operating system. >> And now they have concerns about being locked into these services that are in the public cloud providers. And what we're seeing now with Kubernetes, and we're seeing in almost everything around here, by open sourcing them, the advantage is now the enterprise can run the same technology inside, without being locked into a vendor, as they do in the public cloud. >> Lew, so we spent a bunch of time talking about multi-cloud. Some of the more interesting pieces is what's happening at the edge, and IOT. We've heard Cisco talking about it for many years, networking of course important. What's your take, what are you working on, with regards to that these days? >> There's a couple new trends that we've been, IOT is actually now really getting realized, I think, because it is pushing a lot of the computing out to the edge, whether it be in cell phone towers or base stations, retail stores, that kind of edge. At the same time, we're seeing this multi-cloud that we want the big services. If I want to use a machine learning service, I want to use it up in the cloud, and I need to now connect it back to those devices. 
So multi-cloud is really about addressing how do you develop applications that run across multiple environments, in the cloud, on the edge, in an IOT device. There's also, I think you've probably been hearing, serverless, and function as a service. These are, again, a lighter weight way to have kind of an event-driven model, so that if you have an IOT device and it just causes an event, you want to be able to spawn essentially a service, in the cloud, that only runs to process that one event, and then it goes away. So you're not paying to run instances of virtual machines or whatever, sitting there waiting for some event. You get a trigger, and you only pay - so it has this micro-billing capability as a part of it - so that you just can use only the resources. We finally realized the promise that we always had in cloud computing, which is that, pay for only what you need, for what you use. And so this is another way to do that. >> Lew, it's great to have you on theCUBE again, good to see you, great to get the update. I'd like to ask you one more final question to end the segment here. You always have your ear to the ground, reading the tea leaves, you have a unique skill to understand the tech at the root level. What's coming next? If we go back and we have these nice conversations where we're riffing on what's coming out in the next two, three years. It's unclear to some of the visionaries out there, so I got to ask you, what's going to be hot, what do you see emerging? As we saw Kubernetes and discussed, we couldn't have predicted this, I couldn't have. I knew it was going to be hot, I knew it was going to be big, but not this big, changing industry. What do you see out there? What would be the conversation you'd say, 'You know, we've got to watch this,' 'this is going to be a value creation opportunity,' 'enabling technology that's going to make a lot of things' 'flow nicely' - what kind of tech should ... >> Well, it may be a trite answer, 'cause I think a lot of people are seeing the same thing, is that we're actually laying the groundwork here, when we talk about multi-cloud, things that are distributed across multiple things. Accessing different services. I'm still a big believer in, it's going to be in the strength of those services. Whether they be speech-translation services, whether they be recommendation engines, whether it means big data services. Access to those services is what's going to be important. Three or four years from now, we're going to be talking about the intelligence -- >> Without a lot of heavy lifting to integrate it. >> Yes, that's exactly the point. We want it so that somebody can almost visually wire up these things, and take advantage of tremendously powerful machine-learning algorithms. That they don't want to have to hire the machine-learning experts to do it, they want to use that as a service. >> Slinging API, slinging services, wiring things up, sounds like it's an operating system to me. >> It's always an operating system at the end of the day. >> Lew Tucker, Vice President and CTO at Cisco Systems. Industry legend, on the board of CNCF, the fastest-growing organization, where projects equal products equals profit, and of course the OpenStack board. Lew, thanks for coming on theCUBE, I'm John Furrier with Stu Miniman, back here live in Austin for more live coverage of CloudNativeCon and KubeCon, after this short break. >> Lew: Thank you.
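As a hedged illustration of the event-driven, function-as-a-service model Lew describes just above (a small function that exists only to process one event and then goes away), here is a minimal sketch in the style of an AWS Lambda handler. The event fields and threshold are hypothetical placeholders; it is a generic example of the pattern, not Cisco's or AWS's code.

# Minimal sketch of a function-as-a-service handler: it runs only when an
# event arrives (say, a reading from an IOT device), does its work, returns,
# and nothing sits idle waiting between events. Event shape is hypothetical.
import json


def handler(event, context):
    # Pull a couple of fields out of the triggering event.
    device_id = event.get("device_id", "unknown")
    reading = event.get("temperature_c")

    # Decide whether the reading needs attention; a real system might call
    # another cloud service here (a queue, a database, a notification API).
    alert = reading is not None and reading > 80

    return {
        "statusCode": 200,
        "body": json.dumps({"device_id": device_id, "alert": alert}),
    }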

Published Date : Dec 6 2017



Kelly Mungary, Lions Gate & Bob Muglia, Snowflake Computing | AWS re:Invent 2017


 

>> Narrator: Live from Las Vegas, it's The Cube, covering AWS re:Invent 2017. Presented by AWS, Intel, and our ecosystem of partners. >> Bob: It's actually a little quieter here. >> Hey, welcome back to AWS re:Invent 2017. I am Lisa Martin. We're all very chatty. You can hear a lot of chatty folks behind us. This is day two of our continuing coverage. 42,000 people here, amazing. I'm Lisa Martin with my co-host Keith Townsend, and we're very excited to be joined by a Cube alumni Bob Muglia, CEO and President of Snowflake. >> Thank you. >> Lisa: Welcome back. >> Thank you, good to be back. >> And Kelly Mungary, the Director of Enterprise Data and Analytics from Lionsgate. A great use case from Snowflake. Thanks so much guys for joining us. So one of the hot things going on today at the event is your announcement Bob with AWS and Snowpipe. What is Snowpipe? How do customers get started with it? >> Great, well thanks. We're excited about Snowpipe. Snowpipe is a way of ingesting data into Snowflake in a streaming, continuous way. You simply can drop new data that's coming in into S3 and we'll ingest it for you automatically. Makes that super, super simple. Brings the data in continuously into your data warehouse, ensuring that you're always up to date and your analysts are getting the latest insights and the latest data. >> So, when you guys were founded, about five years ago, as the marketing says on your website, a complete data warehouse built for the Cloud. What was the opportunity back then? What did you see that was missing, and how has Snowflake evolved to really be a leader in this space? >> So you know, if you go back five years this was a time frame where no SQL was the big rage, and everybody was talking about how SQL was passe and it's something that you're not see in the future. Our founders had a different view, they had been working on true relational databases for almost 20 years, and they recognized the power of SQL and relational technology but they also saw that customers were experiencing significant limits with existing technology, and those limits really restricted what people could do. They saw in the Cloud and what Amazon had done the ability to build a all new database that takes advantage of the full elasticity and power of the Cloud to deliver whatever set of analytics capabilities that the business requires. However much data you want, however many queries simultaneously. Snowflake takes what you love about a relational database and removes all the limits, and allows you to operate in a very different way. And our founders had that vision five years ago, and really successfully executed on it. The product has worked beyond our dreams, and our customers, our response from our customers is what we get so excited about. >> So, the saying is "Data is the new oil". However, just as oil is really hard to drill for and find, finding the data to service up, to even put in a data lake to analyze has been a challenge. How did you guys go about identifying what data should even be streamed to Snowpipe? >> Well, yeah, that's a great question. I mean, in entertainment today, we're experiencing probably like in pretty much every type of business. A data explosion. We have, you know, streaming is big now. We have subscription data coming in, billing data, social media data, and on and on. And the thing is, it's not coming in a normal, regular format. It's coming in what we call a semi-structured, structured, json, xml. 
So, up until Snowflake came onto the scene with a truly Cloud based SAAS solution for data warehousing pretty much everyone was struggling to wrangle in all these data sets. Snowpipe is a great example of one of the avenues of bringing in these multiple data sets, merging them real time, and getting the analytics out to your business in an agile way that has never been seen before. >> So, can you talk a little bit about that experience? Kinda that day one up, you were taking these separate data sources, whether it's ERP solution, data from original content, merging that together and then being able to analyze that. What was that day one experience like? >> Well, you know, I gotta tell you, it evolves around a word, that word is "Yes", okay? And data architects and executives and leaders within pretty much every company are used to saying, "We'll get to that" and "We'll put it on the road map", "We could do that six months out", "Three months out". So what happened when I implemented Snowflake was I was just walking into meetings and going, "Yes". "You got it". "No worries, let's do it". >> Lisa: It liberated. >> Well, it's changes, it's not only liberating, it changes the individual's opportunities, the team's opportunities, the company's opportunities, and ultimately, revenue. So, I think it's just an amazing new way of approaching data warehousing. >> So Bob, can you talk a little bit about the partnership with AWS, and the power to bring that type of capability to customers? Data lakes are really hard to do that type of thing run a query against to get instant answers. Talk about the partnership with AWS to bring that type of capability. >> Well Amazon's been a fantastic partner of ours, and we really enjoy working with Amazon. We wind up working together with them to solve customer problems. Which is what I think is so fantastic. And with Snowflake, on top of Amazon, you can do what Kelly's saying. You can say yes, because all of a sudden you can now bring all of your data together in one place. Technology has limited, it's technology that has caused data to be in disparate silos. People don't want their data all scattered all over the place. It's all in these different places because limits to technology force people to do that. With the Cloud, and with what Amazon has done and with a product like Snowflake, you can bring all of that data together, and the thing that's interesting, where Kelly is going, is it can change the culture of a company, and the way people work. All of a sudden, data is not power. Data is available to everyone, and it's democratizing. Every person can work with data and help to bring the business forward. And it can really change the dynamics about the way people work. >> And Kelly, you just spoke at the multi-city Cloud Analytics Tour that Snowflake just did. You spoke in Santa Monica, one of my favorite places. You talked about a data driven culture. And we hear data driven in so many different conversations, but how did you actually go about facilitating a data driven culture. Who are some of the early adopters, and what business problems have you been able to solve by saying yes? >> Well, I can speak entertainment in general. I think that it's all about technology it's about talent, and it's about teaching. And with technology being the core of that. If we go back five years, six years, seven years, it was really hard to walk into a room, have an idea, a concept, around social media, around streaming data, around billing, around accounting. 
And to have an agile approach that you could bring together within a week or so forth. So what's happening is, now that we've implemented Snowflake on AWS and some of the other what I call dream tools on top of that. The dream stack, which includes Snowflake. It's more about integrating with the business. Now we can speak the same language with them. Now we can walk into a room and they're glad to see me now. And at the end of the day, it's new, it's all new. So, this is something that I say sometimes, in kidding, but it's actually true. It's as if Snowflake had a time traveler on staff that went forward in the future ten years to determine how things should be done in the big data space, and then came back and developed it. And that's how futuristic they are, but proven at the same time. And that allows us to cultivate that data driven culture within entertainment, because we have tools and we have the agile approach that the business is looking for. >> So, Kelly, I'm really interested, and I love the concept of making data available to everyone. That's been a theme of this conference from the keynote this morning, which is putting tools in builder's hands, and allowing builders to do what they do. >> Kelly: That's right. >> And we're always surprised at what users come back with. What's one of the biggest surprises from the use cases, now that you've enabled your users. >> Well, I'm gonna give you one that's based on AWS and Snowflake. A catch phrase you hear a lot of is "Data center of excellence", and a lot of us are trying to build out these data centers of excellence, but it's a little bit of an oxymoron to the fact that a data center of excellence is really about enabling your business and finding champions within marketing, within sales, within accounting, and giving them the ability to have self-service business intelligence, self-service data warehousing. The kinds of things that, again, we go back five, six years ago, you couldn't even have that conversation. I'll tell you today, I can walk into a room, and say, "Okay, who here is interested in learning "about data warehousing?". And there'll be somebody, "Okay, great". Within an hour, I'll have you being dangerous in terms of setting up, standing up, configuring and loading a data warehouse. That's unheard of, and it's all due to Snowflake and their new technology. >> I'd love to understand Bob, from your perspective. First of all, it sounds like you have a crystal ball according to Kelly, which is awesome. But second of all, collaboration, we talked about that earlier. Andy Jassy is very well known and very vocal about visiting customers every week. And I love their bottom, their backwards approach to, before building a product, to try to say, "What problem can we solve?". They're actually working with customers first. What are their requirements? Tell me a little bit Bob about the collaboration that Snowflake has with Lionsgate, or other customers. How are they helping to influence your crystal ball? >> You know what, this is where I think what Amazon has done, and Andy has done a fantastic job. There's so much to learn from them, and the customer centricity that Amazon has always had is something that we have really focused to bring into Snowflake, and really build deeply into our culture. I've sort of said many, many times, Snowflake is a value space company. Our values are important to us, they're prominent in our website. Our first value is we put our customer's first. 
What I'm most proud of is, every customer who has focused on deploying Snowflake has successfully deployed Snowflake, and we learn from them. We engage with them. We partner with them. All of our customers are our partners. Kelly and Lionsgate are examples of customers that we learn from every day, and it's such a rewarding thing to hear what they want to do. You look at Snowpipe and what Snowpipe is, that came from customers, we learned that from customers. You look at so many features, so many details. It's iterative learning with customers. And what's interesting about that, it's listening to customers, but it's also understanding what they do. One of the things that's interesting about Snowflake is that as a company we run Snowflake on Snowflake. All of our data is in Snowflake. All of our sales data, our financial data, our marketing data, our product support data, our engineering data. Every time a user runs a query, that query is logged in Snowflake and the metrics about it are logged. So what's interesting is, because it's all in one place, and it's all accessible, we can answer essentially any question about what's been done. And then, driving the culture to do that is an important thing. One of the things I do find interesting is, even at Snowflake, even at this data-centered company, even where everything is all centralized, I still find sometimes people don't reference it. And I'm constantly reinforcing that your intuition, you know, you're really smart, you're really intuitive, but you could be wrong. And if you can answer the question based on what's happened, what your customers are doing, because it's in the data, and you can get that answer quickly, it's a totally different world. And that's what you can do when you have a tool with the power of what Snowflake can deliver, is you could answer effectively any business question in just a matter of minutes, and that's transformative, it's transformative to the way people work. And that, to me, is what it means to build a data driven culture: to reinforce that the answer is inside what customers are doing. And so often, that is encapsulated in the data. >> Wow, your energy is incredible. We thank you so much Bob and Kelly for coming on and sharing your story. And I think a lot of our viewers are gonna learn some great lessons from both of you on collaboration and on transformation. So thanks so much for stopping by. >> Yeah. >> Thank you so much, we really enjoyed it. Thanks a lot. >> Likewise, great to meet you. >> Thanks Kelly. >> Thank you. >> For my co-host Keith Townsend, and for Kelly and Bob, I am Lisa Martin. You've been watching The Cube, live on day two, continuing coverage at AWS re:Invent 2017. Stick around, we have more great guests coming up. (upbeat music)
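To make Bob's "Snowflake on Snowflake" point concrete, here is a minimal sketch of what pulling and summarizing that query-history metadata from Python might look like. It is an illustration only, not Snowflake's or Lionsgate's actual code: the connection parameters are placeholders, and the ACCOUNT_USAGE view and column names used here should be verified against current Snowflake documentation.

```python
# Illustrative only: summarize recent query activity from Snowflake's own
# usage metadata ("answer any question about what's been done").
import snowflake.connector

conn = snowflake.connector.connect(
    user="ANALYST",            # placeholder credentials
    password="********",
    account="my_account",      # hypothetical account identifier
    warehouse="ANALYTICS_WH",
)

try:
    cur = conn.cursor()
    # Which users ran the most queries in the last 7 days, and how long did they run?
    cur.execute("""
        SELECT user_name,
               COUNT(*)                         AS query_count,
               AVG(total_elapsed_time) / 1000.0 AS avg_seconds
        FROM snowflake.account_usage.query_history
        WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
        GROUP BY user_name
        ORDER BY query_count DESC
        LIMIT 10
    """)
    for user_name, query_count, avg_seconds in cur:
        print(f"{user_name:<30} {query_count:>6} queries, avg {avg_seconds:.1f}s")
finally:
    conn.close()
```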

Published Date : Nov 29 2017

SUMMARY :

At AWS re:Invent 2017, hosts Lisa Martin and Keith Townsend talk with Bob Muglia, CEO and President of Snowflake, and Kelly Mungary of Lionsgate about cloud data warehousing on AWS. They discuss how consolidating disparate data sets in Snowflake lets the data team say "yes" to the business, Snowpipe for bringing in and merging data in real time, building a data-driven culture in entertainment through self-service data warehousing, and how customer feedback shapes the product, including Snowflake running Snowflake on Snowflake internally.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Kelly | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Keith Townsend | PERSON | 0.99+
Kelly Mungary | PERSON | 0.99+
Andy | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Lisa Martin | PERSON | 0.99+
Bob Muglia | PERSON | 0.99+
Bob | PERSON | 0.99+
Andy Jassy | PERSON | 0.99+
Santa Monica | LOCATION | 0.99+
Lionsgate | ORGANIZATION | 0.99+
Lisa | PERSON | 0.99+
Snowflake | ORGANIZATION | 0.99+
The Cube | TITLE | 0.99+
today | DATE | 0.99+
seven years | QUANTITY | 0.99+
six years | QUANTITY | 0.99+
42,000 people | QUANTITY | 0.99+
ten years | QUANTITY | 0.99+
five years ago | DATE | 0.99+
SQL | TITLE | 0.99+
Three months | QUANTITY | 0.98+
Snowflake | TITLE | 0.98+
six months | QUANTITY | 0.98+
five years | QUANTITY | 0.98+
Intel | ORGANIZATION | 0.98+
both | QUANTITY | 0.97+
first | QUANTITY | 0.97+
Lions Gate | ORGANIZATION | 0.97+
One | QUANTITY | 0.97+
one place | QUANTITY | 0.96+
Cube | ORGANIZATION | 0.96+
almost 20 years | QUANTITY | 0.96+
one | QUANTITY | 0.95+
First | QUANTITY | 0.95+
day two | QUANTITY | 0.95+
first value | QUANTITY | 0.95+
Snowpipe | ORGANIZATION | 0.93+
Las Vegas | LOCATION | 0.92+
Snowpipe | TITLE | 0.92+
Snowflake Computing | ORGANIZATION | 0.91+
six years ago | DATE | 0.9+

Sharad Singhal, The Machine & Matthias Becker, University of Bonn | HPE Discover Madrid 2017


 

>> Announcer: Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is theCUBE, the leader in live tech coverage, and my name is Dave Vellante, and I'm here with Peter Burris. This is day two of HPE Hewlett Packard Enterprise Discover in Madrid, this is their European version of a show that we also cover in Las Vegas, kind of a six month cadence of innovation and organizational evolution of HPE that we've been tracking now for several years. Sharad Singhal is here, he covers software architecture for The Machine at Hewlett Packard Enterprise, and Matthias Becker, who's a postdoctoral researcher at the University of Bonn. Gentlemen, thanks so much for coming on theCUBE. >> Thank you. >> No problem. >> You know, we talk a lot on theCUBE about how technology helps people make money or save money, but now we're talking about, you know, something even more important, right? We're talking about lives and the human condition and >> Peter: Hard problems to solve. >> Specifically, yeah, hard problems like Alzheimer's. So Sharad, why don't we start with you, maybe talk a little bit about what this initiative is all about, what the partnership is all about, what you guys are doing. >> So we started on a project called the Machine Project about three, three and a half years ago, and frankly at that time, the response we got from a lot of my colleagues in the IT industry was "You guys are crazy", (Dave laughs) right. We said we are looking at an enormous amount of data coming at us, we are looking at real time requirements on larger and larger processing coming up in front of us, and there is no way that the current architectures of the computing environments we create today are going to keep up with this huge flood of data. We have to rethink how we do computing, and the real question for those of us who are in research in Hewlett Packard Labs was, if we were to design a computer today, knowing what we do today, as opposed to what we knew 50 years ago, how would we design the computer? And this computer should not be something which solves problems for the past, this should be a computer which deals with problems in the future. So we are looking for something which would take us for the next 50 years, in terms of computing architectures and what we will do there. In the last three years we have gone from ideas and paper studies, paper designs, and things which were made out of plastic, to a real working system. Around Las Vegas time, we basically announced that we had the entire system working with actual applications running on it, 160 terabytes of memory all addressable from any processing core in 40 computing nodes around it. And the reason is, although we call it memory-driven computing, it's really thinking in terms of data-driven computing. The reason is that the data is now at the center of this computing architecture, as opposed to the processor, and any processor can refer to any part of the data directly, as if it were addressing local memory. This provides us with a degree of flexibility and freedom in compute that we never had before, and as a software person, when we started looking at this architecture, our answer was, well, we didn't know we could do this.
Now if, given now that I can do this and I assume that I can do this, all of us in the programmers started thinking differently, writing code differently, and we suddenly had essentially a toy to play with, if you will, as programmers, where we said, you know, this algorithm I had written off decades ago because it didn't work, but now I have enough memory that if I were to think about this algorithm today, I would do it differently. And all of a sudden, a new set of algorithms, a new set of programming possibilities opened up. We worked with a number of applications, ranging from just Spark on this kind of an environment, to how do you do large scale simulations, Monte Carlo simulations. And people talk about improvements in performance from something in the order of, oh I can get you a 30% improvement. We are saying in the example applications we saw anywhere from five, 10, 15 times better to something which where we are looking at financial analysis, risk management problems, which we can do 10,000 times faster. >> So many orders of magnitude. >> Many, many orders >> When you don't have to wait for the horrible storage stack. (laughs) >> That's right, right. And these kinds of results gave us the hope that as we look forward, all of us in these new computing architectures that we are thinking through right now, will take us through this data mountain, data tsunami that we are all facing, in terms of bringing all of the data back and essentially doing real-time work on those. >> Matthias, maybe you could describe the work that you're doing at the University of Bonn, specifically as it relates to Alzheimer's and how this technology gives you possible hope to solve some problems. >> So at the University of Bonn, we work very closely with the German Center for Neurodegenerative Diseases, and in their mission they are facing all diseases like Alzheimer's, Parkinson's, Multiple Sclerosis, and so on. And in particular Alzheimer's is a really serious disease and for many diseases like cancer, for example, the mortality rates improve, but for Alzheimer's, there's no improvement in sight. So there's a large population that is affected by it. There is really not much we currently can do, so the DZNE is focusing on their research efforts together with the German government in this direction, and one thing about Alzheimer's is that if you show the first symptoms, the disease has already been present for at least a decade. So if you really want to identify sources or biomarkers that will point you in this direction, once you see the first symptoms, it's already too late. So at the DZNE they have started on a cohort study. In the area around Bonn, they are now collecting the data from 30,000 volunteers. They are planning to follow them for 30 years, and in this process we generate a lot of data, so of course we do the usual surveys to learn a bit about them, we learn about their environments. 
But we also do very more detailed analysis, so we take blood samples and we analyze the complete genome, and also we acquire imaging data from the brain, so we do an MRI at an extremely high resolution with some very advanced machines we have, and all this data is accumulated because we do not only have to do this once, but we try to do that repeatedly for every one of the participants in the study, so that we can later analyze the time series when in 10 years someone develops Alzheimer's we can go back through the data and see, maybe there's something interesting in there, maybe there was one biomarker that we are looking for so that we can predict the disease better in advance. And with this pile of data that we are collecting, basically we need something new to analyze this data, and to deal with this, and when we heard about the machine, we though immediately this is a system that we would need. >> Let me see if I can put this in a little bit of context. So Dave lives in Massachusetts, I used to live there, in Framingham, Massachusetts, >> Dave: I was actually born in Framingham. >> You were born in Framingham. And one of the more famous studies is the Framingham Heart Study, which tracked people over many years and discovered things about heart disease and relationship between smoking and cancer, and other really interesting problems. But they used a paper-based study with an interview base, so for each of those kind of people, they might have collected, you know, maybe a megabyte, maybe a megabyte and a half of data. You just described a couple of gigabytes of data per person, 30,000, multiple years. So we're talking about being able to find patterns in data about individuals that would number in the petabytes over a period of time. Very rich detail that's possible, but if you don't have something that can help you do it, you've just collected a bunch of data that's just sitting there. So is that basically what you're trying to do with the machine is the ability to capture all this data, to then do something with it, so you can generate those important inferences. >> Exactly, so with all these large amounts of data we do not only compare the data sets for a single person, but once we find something interesting, we have also to compare the whole population that we have captured with each other. So there's really a lot of things we have to parse and compare. >> This brings together the idea that it's not just the volume of data. I also have to do analytics and cross all of that data together, right, so every time a scientist, one of the people who is doing biology studies or informatic studies asks a question, and they say, I have a hypothesis which this might be a reason for this particular evolution of the disease or occurrence of the disease, they then want to go through all of that data, and analyze it as as they are asking the question. Now if the amount of compute it takes to actually answer their questions takes me three days, I have lost my train of thought. But if I can get that answer in real time, then I get into this flow where I'm asking a question, seeing the answer, making a different hypothesis, seeing a different answer, and this is what my colleagues here were looking for. >> But if I think about, again, going back to the Framingham Heart Study, you know, I might do a query on a couple of related questions, and use a small amount of data. 
The technology to do that's been around, but when we start looking for patterns across brain scans with time series, we're not talking about a small problem, we're talking about an enormous sum of data that can be looked at in a lot of different ways. I got one other question for you related to this, because I gotta presume that there's the quid pro quo for getting people into the study, is that, you know, 30,000 people, is that you'll be able to help them and provide prescriptive advice about how to improve their health as you discover more about what's going on, have I got that right? >> So, we're trying to do that, but also there are limits to this, of course. >> Of course. >> For us it's basically collecting the data and people are really willing to donate everything they can from their health data to allow these large studies. >> To help future generations. >> So that's not necessarily quid pro quo. >> Okay, there isn't, okay. But still, the knowledge is enough for them. >> Yeah, their incentive is they're gonna help people who have this disease down the road. >> I mean if it is not me, if it helps society in general, people are willing to do a lot. >> Yeah of course. >> Oh sure. >> Now the machine is not a product yet that's shipping, right, so how do you get access to it, or is this sort of futures, or... >> When we started talking to one another about this, we actually did not have the prototype with us. But remember that when we started down this journey for the machine three years ago, we know back then that we would have hardware somewhere in the future, but as part of my responsibility, I had to deal with the fact that software has to be ready for this hardware. It does me no good to build hardware when there is no software to run on it. So we have actually been working at the software stack, how to think about applications on that software stack, using emulation and simulation environments, where we have some simulators with essentially instruction level simulator for what the machine does, or what that prototype would have done, and we were running code on top of those simulators. We also had performance simulators, where we'd say, if we write the application this way, this is how much we think we would gain in terms of performance, and all of those applications on all of that code we were writing was actually on our large memory machines, Superdome X to be precise. So by the time we started talking to them, we had these emulation environments available, we had experience using these emulation environments on our Superdome X platform. So when they came to us and started working with us, we took their software that they brought to us, and started working within those emulation environments to see how fast we could make those problems, even within those emulation environments. So that's how we started down this track, and most of the results we have shown in the study are all measured results that we are quoting inside this forum on the Superdome X platform. So even in that emulated environment, which is emulating the machine now, on course in the emulation Superdome X, for example, I can only hold 24 terabytes of data in memory. I say only 24 terabytes >> Only! because I'm looking at much larger systems, but an enormously large number of workloads fit very comfortably inside the 24 terabytes. 
And for those particular workloads, the programming techniques we are developing work at that scale, right, they won't scale beyond the 24 terabytes, but they'll certainly work at that scale. So between us we then started looking for problems, and I'll let Matthias comment on the problems that they brought to us, and then we can talk about how we actually solved those problems. >> So we work a lot with genomics data, and usually what we do is we have a pipeline, so we connect multiple tools, and we thought, okay, this architecture sounds really interesting to us, but if we want to get started with this, we should pose them a challenge. So, to see if they could convince us, we went through the literature and we took a tool that was advertised as the new optimal solution. Prior work was taking up to six days for processing; they were able to cut it to 22 minutes, and we thought, okay, this is a perfect challenge for our collaboration. So we went ahead and we took this tool, we put it on the Superdome X that was already running, and it ran in five minutes instead of just 22, and then we started modifying the code, and in the end we were able to shrink the time down to just 30 seconds, so that's two magnitudes faster. >> We took something which was... They were able to run in 22 minutes, and that had already been optimized by people in the field to say "I want this answer fast", and then when we moved it to our Superdome X platform, the platform is extremely capable. Hardware-wise it compares really well to other platforms which are out there. That time came down to five minutes, but that was just the beginning. And then as we modified the software based on the emulation results we were seeing underneath, we brought that time down to 13 seconds, which is a hundred times faster. We started this work with them in December of last year. It takes time to set up all of this environment, so the serious coding started in around March. By June we had a 9X improvement, which is already close to a factor of 10, and since June up to now, we have gotten another factor of 10 on that application. So we're now at 100X faster than what the application was able to do before. >> Dave: Two orders of magnitude in a year? >> Sharad: In a year. >> Okay, we're out of time, but where do you see this going? What is the ultimate outcome that you're hoping for? >> For us, we're really aiming to analyze our data in real time. Oftentimes when we have biological questions that we address, we analyze our data set, and then in a discussion a new question comes up, and we have to say, "Sorry, we have to process the data, come back in a week", and our idea is to be able to generate these answers instantaneously from our data. >> And those answers will lead to what? Just better care for individuals with Alzheimer's, or potentially, as you said, making Alzheimer's a memory. >> So the idea is to identify Alzheimer's long before the first symptoms are shown, because then you can start an effective treatment and you can have the biggest impact. Once the first symptoms are present, it's not getting any better. >> Well thank you for your great work, gentlemen, and best of luck on behalf of society, >> Thank you very much >> really appreciate you coming on theCUBE and sharing your story. You're welcome. All right, keep it right there, buddy. Peter and I will be back with our next guest right after this short break. This is theCUBE, you're watching live from Madrid, HPE Discover 2017. We'll be right back.
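A rough back-of-the-envelope sketch of the data volumes behind the DZNE cohort study discussed above, using only figures from the conversation (30,000 volunteers followed for 30 years, on the order of a couple of gigabytes per person per acquisition). The acquisition frequency is an assumption for illustration, and the estimate is deliberately conservative; the repeated high-resolution MRI and whole-genome data Matthias describes push the real totals, which he puts at petabytes per year, far higher.

```python
# Conservative sizing estimate for the cohort study (illustrative assumptions).
participants      = 30_000
years             = 30
gb_per_visit      = 2      # "a couple of gigabytes of data per person" per acquisition
visits_per_year   = 1      # assumed: roughly one full acquisition per participant per year

total_gb = participants * years * visits_per_year * gb_per_visit
print(f"~{total_gb / 1e6:.1f} PB over the study")   # ~1.8 PB, already petabyte scale
```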

Published Date : Nov 29 2017

SUMMARY :

At HPE Discover Madrid 2017, Dave Vellante and Peter Burris talk with Sharad Singhal of Hewlett Packard Enterprise and Matthias Becker of the University of Bonn about memory-driven computing and The Machine. They discuss the 160-terabyte shared-memory prototype, how large-memory systems change algorithm design, and the DZNE's 30,000-volunteer Alzheimer's cohort study, where a genomics pipeline that previously took 22 minutes was brought down to seconds, pointing toward real-time analysis of research data.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Neil | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Jonathan | PERSON | 0.99+
John | PERSON | 0.99+
Ajay Patel | PERSON | 0.99+
Dave | PERSON | 0.99+
$3 | QUANTITY | 0.99+
Peter Burris | PERSON | 0.99+
Jonathan Ebinger | PERSON | 0.99+
Anthony | PERSON | 0.99+
Mark Andreesen | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
Europe | LOCATION | 0.99+
Lisa Martin | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Yahoo | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Paul Gillin | PERSON | 0.99+
Matthias Becker | PERSON | 0.99+
Greg Sands | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Jennifer Meyer | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Target | ORGANIZATION | 0.99+
Blue Run Ventures | ORGANIZATION | 0.99+
Robert | PERSON | 0.99+
Paul Cormier | PERSON | 0.99+
Paul | PERSON | 0.99+
OVH | ORGANIZATION | 0.99+
Keith Townsend | PERSON | 0.99+
Peter | PERSON | 0.99+
California | LOCATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Sony | ORGANIZATION | 0.99+
VMware | ORGANIZATION | 0.99+
Andy Jassy | PERSON | 0.99+
Robin | PERSON | 0.99+
Red Cross | ORGANIZATION | 0.99+
Tom Anderson | PERSON | 0.99+
Andy Jazzy | PERSON | 0.99+
Korea | LOCATION | 0.99+
Howard | PERSON | 0.99+
Sharad Singhal | PERSON | 0.99+
DZNE | ORGANIZATION | 0.99+
U.S. | LOCATION | 0.99+
five minutes | QUANTITY | 0.99+
$2.7 million | QUANTITY | 0.99+
Tom | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Matthias | PERSON | 0.99+
Matt | PERSON | 0.99+
Boston | LOCATION | 0.99+
Jesse | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+

Armughan Ahmad, Dell EMC | Super Computing 2017


 

>> Announcer: From Denver, Colorado, it's theCUBE, covering Super Computing 17. Brought to you by Intel. (soft electronic music) Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're gettin' towards the end of the day here at Super Computing 2017 in Denver, Colorado. 12,000 people talkin' really about the outer limits of what you can do with compute power and lookin' out into the universe and black holes and all kinds of exciting stuff. We're kind of bringin' it back, right? We're all about democratization of technology for people to solve real problems. We're really excited to have our last guest of the day, bringin' the energy, Armughan Ahmad. He's SVP and GM, Hybrid Cloud and Ready Solutions for Dell EMC, and a many-time CUBE alum. Armughan, great to see you. >> Yeah, good to see you, Jeff. So, first off, just impressions of the show. 12,000 people, we had no idea. We've never been to this show before. This is great. >> This is a show that has been around. If you know the history of the show, this was an IEEE engineering show that actually turned into high-performance computing around research-based analytics and other things that came out of it. But, it's just grown. We're seeing now, yesterday the supercomputing top petaflops list was released here. So, it's fascinating. You have some of the brightest minds in the world that actually come to this event. 12,000 of them. >> Yeah, and Dell EMC is here in force, so a lot of announcements, a lot of excitement. What are you guys excited about participating in this type of show? >> Yeah, Jeff, so when we come to an event like this, HPC-- We know that HPC has also evolved from your traditional HPC, which was around modeling and simulation, and how it started from engineering to then clusters. It's now evolving more towards machine learning, deep learning, and artificial intelligence. So, what we announced here-- Yesterday, our press release went out. It was really related to how our strategy of advancing HPC, but also democratizing HPC, is working. So, on the advancing side, on the HPC side, the top 500 supercomputing list came out. We're powering some of the top 500 of those. One big one is TACC, the Texas Advanced Computing Center out of UT, the University of Texas. They now have, I believe, the number 12 spot in the top 500 supercomputers in the world, running at 8.2 petaflops of compute. >> So, a lot of zeros. I have no idea what a petaflop is. >> It's very, very big. It's very big. It's available for machine learning, but also eventually going to be available for deep learning. But, more importantly, we're also moving towards democratizing HPC, because we feel that democratizing is also very important, where HPC should not only be for the research and the academia, but it should also be focused towards the manufacturing customers, the financial customers, our commercial customers, so that they can actually take the complexity of HPC out, and that's where our-- We call it our HPC 2.0 strategy, of learning from the advancements that we continue to drive, to then also democratizing it for our customers. >> It's interesting, I think, back to the old days of Intel microprocessors getting better and better and better, and you had SPARC and you had Silicon Graphics, and these things that were way better. This huge differentiation. But, the Intel IA-32 just kept pluggin' along and it really begs the question, where is the distinction now? You have huge clusters of computers you can put together with virtualization.
Where is the difference between just a really big cluster and HPC and supercomputing? >> So, I think, if you look at HPC, HPC is also evolving, so let's look at the customer view, right? So, the other part of our announcement here was artificial intelligence, which is really, what is artificial intelligence? It's, if you look at a customer, a retailer, a retailer has-- They start with data, for example. You buy beer and chips at J's Retailer, for example. You come in and do that, and you usually used to run a SQL database, or you used to run an RDBMS database, and then that would basically tell you, these are the people who can purchase from me. You know their purchase history. But, then you evolved into BI, and then if that data got really, very large, you then had an HPC cluster, which would basically analyze a lot of that data for you, and show you trends and things. That would then tell you, you know what, these are my customers, this is how frequently they come in. But, now it's moving more towards machine learning and deep learning as well. So, as the data gets larger and larger, we're seeing data becoming larger, not just from social media, but your traditional computational frameworks, your traditional applications and others. We're finding that data is also growing at the edge, so by 2020, about 20 billion devices are going to wake up at the edge and start generating data. So, now, Internet data is going to look very small over the next three, four years, as the edge data comes up. So, you actually need to now start thinking of machine learning and deep learning a lot more. So, you asked the question, how do you see that evolving? So, you see an RDBMS, traditional SQL, evolving to BI. BI then evolves into either an HPC cluster or hadoop. Then, from HPC and hadoop, what do you do next? What you do next is you start to now feed predictive analytics into machine learning kind of solutions, and then once those predictive analytics are there, then you really, truly start thinking about the full deep learning frameworks. >> Right, well and clearly like the data in motion. I think it's funny, we used to make decisions on a sample of data in the past. Now, we have the opportunity to take all the data in real time and make those decisions with Kafka and Spark and Flink and all these crazy systems that are comin' into play. Makes Hadoop look ancient, tired, and yesterday, right? But, it's still valid, right? >> A lot of customers are still paying. Customers are using it, and that's where we feel we need to simplify the complex for our customers. That's why we announced our Machine Learning Ready Bundle and our Deep Learning Ready Bundle. We announced it with Intel and Nvidia together, because we feel like our customers either go the GPU route, which is your accelerator route. We announced-- You were talking to Ravi, from our server team, earlier, where he talked about the C4140, which has the quad GPU power, and it's perfect for deep learning. But, with Intel, we've also worked on the same, where we worked on the AI software with Intel. Why are we doing all of this? We're saying that if you thought that RDBMS was difficult, and if you thought that building a hadoop cluster or HPC was a little challenging and time consuming, as the customers move to machine learning and deep learning, you now have to think about the whole stack. So, let me explain the stack to you. You think of a compute, storage and network stack, then you think of-- The whole eternity.
Yeah, that's right, the whole eternity of our data center. Then you talk about our-- These frameworks, like Theano, Caffe, TensorFlow, right? These are new frameworks. They are machine learning and deep learning frameworks. They're open source and others. Then you go to libraries. Then you go to accelerators, which accelerators you choose, then you go to your operating systems. Now, you haven't even talked about your use case. Retail use case or genomic sequencing use case. All you're trying to do is now figure out whether TensorFlow works with this accelerator or does not work with this accelerator. Or, do Caffe and Theano work with this operating system or not? And, that is a complexity that is way more complex. So, that's where we felt that we really needed to launch these new solutions, and we prelaunched them here at Super Computing, because we feel the evolution of HPC towards AI is happening. We're going to start shipping these Ready Bundles for machine learning and deep learning in the first half of 2018. >> So, that's what the Ready Solutions are? You're basically putting the solution together for the client, then they can start-- You work together to build the application to fix whatever it is they're trying to do. >> That's exactly it. But, not just fix it. It's an outcome. So, I'm going to go back to the retailer. So, if you are the CEO of the biggest retailer and you are saying, hey, I don't just want to know who buys from me, I want to now do predictive analytics, which is not just who buys chips and beer, but who can I sell more things to, right? So, you now start thinking about demographic data. You start thinking about payroll data and other data that surrounds that-- You start feeding that data into it, so your machine now starts to learn a lot more of those frameworks, and then it can actually give you predictive analytics. But, imagine a day where you actually-- The machine or the deep learning AI actually tells you that it's not just who you want to sell chips and beer to, it's who's going to buy the 4K TV? You're makin' a lot of presumptions. Well, there you go, and the 4K-- But, I'm glad you're doin' the 4K TV. So, that's important, right? That is where our customers need to understand how predictive analytics are going to move towards cognitive analytics. So, this is complex, but we're trying to make that complex simple with these Ready Solutions for machine learning and deep learning. >> So, I want to just get your take on-- You've kind of talked about these three things a couple times, how you delineate between AI, machine learning, and deep learning. >> So, as I said, there is an evolution. I don't think a customer can achieve artificial intelligence unless they go through the whole crawl, walk, run space. There's no shortcuts there, right? What do you do? So, if you think about it, Mastercard is a great customer of ours. They do an incredible amount of transactions per day, (laughs) as you can imagine, right? In the millions. They want to do facial recognition at kiosks, or they're looking at different policies based on your buying behavior-- That, hey, Jeff doesn't buy $20,000 Rolexes every year. Maybe once every week, you know, (laughs) it just depends how your mood is. I was in the Emirates. Exactly, you were in Dubai (laughs). Then, you think about, his credit card is being used where? And, based on your behaviors, that's important. Now, think about, even for Mastercard, they have traditional RDBMS databases. They went to BI. They have high-performance computing clusters.
Then, they developed the hadoop cluster. So, what we did with them, we said okay. All that is good. That data that has been generated for you through customers and through internal IT organizations, those things are all very important. But, at the same time, now you need to start going through this data and start analyzing this data for predictive analytics. So, they had 1.2 million policies, for example, that they had to crunch. Now, think about 1.2 million policies on which they had to take decisions. One of the policies could be, hey, does Jeff go to Dubai to buy a Rolex or not? Or, does Jeff do these other patterns, or is Armughan taking his card and having a field day with it? So, those are policies that they feed into machine learning frameworks, and then machine learning actually gives them patterns so they can now see what your behavior is. Then, based on that, deep learning is what they eventually move to next. With deep learning, now you're not only talking about your behavior patterns on the credit card, but your entire other life data starts to-- Starts to also come into that. And now you're actually talking about catching something before the fraud happens; you can actually be a lot more predictive about it and cognitive about it. So, that's where we feel that our Ready Solutions around machine learning and deep learning are really geared towards, so taking HPC, then democratizing it, advancing it, and then now helping our customers move towards machine learning and deep learning, 'cause these buzzwords of AI are out there. If you're a financial institution and you're trying to figure out, who is that customer who's going to buy the next mortgage from you? Or, who are you going to lend to next? You want the machine and others to tell you this, not to take over your life, but to actually help you make these decisions so that your bottom line can go up along with your top line. Revenue and margins are important to every customer. >> It's amazing, on the credit card example, because people get so pissed if there's a false positive. With the amount of effort that they've put into keeping you from making fraudulent transactions, if your credit card ever gets denied, people go bananas, right? The behavior just is amazing. But, I want to ask you-- We're comin' to the end of 2017, which is hard to believe. Things are rolling at Dell EMC. Michael Dell, ever since he took that thing private, you could see the sparkle in his eye. We got him on a CUBE interview a few years back. A year from now, 2018. What are we going to talk about? What are your top priorities for 2018? >> So, number one, Michael continues to talk about how our vision is advancing human progress through technology, right? That's our vision. We want to get there. But, at the same time we know that we have to drive IT transformation, we have to drive workforce transformation, we have to drive digital transformation, and we have to drive security transformation. All those things are important because lots of customers-- I mean, Jeff, do you know, like 75% of the S&P 500 companies will not exist by 2027, because they're either not going to be able to make that shift from Blockbuster to Netflix, or from taxis to Uber-- It's happened to our friends at GE over the last little while. >> You can think about any customer-- That's what Michael did. Michael actually disrupted Dell with Dell Technologies and the acquisition of EMC and Pivotal and VMWare.
In a year from now, our strategy is really about edge to core to the cloud. We think the world is going to be all three, because the rise of 20 billion devices at the edge is going to require new computational frameworks. But, at the same time, people are going to bring them into the core, and then cloud will still exist. But, a lot of times-- Let me ask you, if you were driving an autonomous vehicle, do you want that data-- I'm an Edge guy. I know where you're going with this. It's not going to go, right? You want it at the edge, because data gravity is important. That's where we're going, so it's going to be huge. We feel data gravity is going to be big. We think core is going to be big. We think cloud's going to be big. And we really want to play in all three of those areas. >> That's when the speed of light is just too damn slow, in the car example. You don't want to send it to the data center and back. You don't want to send it to the data center, you want those decisions to be made at the edge. Your manufacturing floor needs to make the decision at the edge as well. You don't want a lot of that data going back to the cloud. All right, Armughan, thanks for bringing the energy to wrap up our day, and it's great to see you as always. Always good to see you guys, thank you. >> All right, this is Armughan, I'm Jeff Frick. You're watching theCUBE from Super Computing Summit 2017. Thanks for watching. We'll see you next time. (soft electronic music)
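As a purely illustrative sketch of the "who is going to buy the 4K TV?" style of predictive analytics Armughan walks through above, the snippet below trains a simple classifier on synthetic purchase-history features. It is not Dell EMC Ready Bundle code; the feature names and data are invented, and a real deployment would train on actual transaction, demographic, and loyalty data.

```python
# Illustrative only: a toy "likely buyer" model built on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical features: store visits per month, average basket size ($),
# and the share of spend that goes to electronics.
X = np.column_stack([
    rng.poisson(4, n),
    rng.gamma(2.0, 40.0, n),
    rng.beta(2, 5, n),
])
# Synthetic label: frequent visitors who skew toward electronics "buy the 4K TV".
y = ((X[:, 0] > 4) & (X[:, 2] > 0.3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")

# Score customers and surface the most likely buyers for a targeted campaign.
likely = model.predict_proba(X_test)[:, 1] > 0.8
print(f"{likely.sum()} of {len(X_test)} holdout customers flagged as likely 4K TV buyers")
```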

Published Date : Nov 16 2017

SUMMARY :

At Super Computing 2017 in Denver, Jeff Frick talks with Armughan Ahmad of Dell EMC about advancing and democratizing HPC. They cover the number-12 Top500 system at TACC, the PowerEdge C4140 with four Nvidia GPUs, the new Machine Learning and Deep Learning Ready Bundles built with Intel and Nvidia, the progression from RDBMS and BI to predictive and cognitive analytics in retail and financial services, and Dell EMC's edge-to-core-to-cloud strategy.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Michael | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Jeff | PERSON | 0.99+
Dubai | LOCATION | 0.99+
Armughan | PERSON | 0.99+
$20,000 | QUANTITY | 0.99+
Michael Dell | PERSON | 0.99+
EMC | ORGANIZATION | 0.99+
2018 | DATE | 0.99+
TAC | ORGANIZATION | 0.99+
Nvidia | ORGANIZATION | 0.99+
2027 | DATE | 0.99+
Armughan Ahmad | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
12,000 | QUANTITY | 0.99+
Emirates | LOCATION | 0.99+
75% | QUANTITY | 0.99+
Mastercard | ORGANIZATION | 0.99+
Netflix | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
Pivotal | ORGANIZATION | 0.99+
8.2 petaflops | QUANTITY | 0.99+
C4140 | COMMERCIAL_ITEM | 0.99+
12,000 people | QUANTITY | 0.99+
Texas Institute | ORGANIZATION | 0.99+
GE | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
1.2 million policies | QUANTITY | 0.99+
J's Retailer | ORGANIZATION | 0.99+
Denver, Colorado | LOCATION | 0.99+
Yesterday | DATE | 0.99+
500 super computers | QUANTITY | 0.99+
millions | QUANTITY | 0.99+
20 billion devices | QUANTITY | 0.99+
University of Texas | ORGANIZATION | 0.99+
VMWare | ORGANIZATION | 0.99+
Caffe | ORGANIZATION | 0.98+
Super Computing Summit 2017 | EVENT | 0.98+
yesterday | DATE | 0.98+
Dell EMC | ORGANIZATION | 0.98+
Uber | ORGANIZATION | 0.98+
Intel | ORGANIZATION | 0.98+
HBC | ORGANIZATION | 0.97+
Ravi | PERSON | 0.97+
about 20 billion devices | QUANTITY | 0.97+
end of 2017 | DATE | 0.97+
I32 | COMMERCIAL_ITEM | 0.97+
three | QUANTITY | 0.96+
CUBE | ORGANIZATION | 0.96+
first half of 2018 | DATE | 0.96+
Super Computing 17 | EVENT | 0.95+
Super Computing 2017 | EVENT | 0.95+
Deep Learning Ready Bundle | COMMERCIAL_ITEM | 0.94+
GM | ORGANIZATION | 0.94+
Hadoop | TITLE | 0.93+
three things | QUANTITY | 0.91+
S&P 500 | ORGANIZATION | 0.91+
SQL | TITLE | 0.9+
UT | ORGANIZATION | 0.9+
about 1.2 million policies | QUANTITY | 0.89+
first | QUANTITY | 0.89+
Rolex | ORGANIZATION | 0.89+
Hybrid Cloud | ORGANIZATION | 0.88+
Blockbuster | ORGANIZATION | 0.87+
Theano | ORGANIZATION | 0.86+
12 | QUANTITY | 0.86+
IEEE | ORGANIZATION | 0.85+

Bernie Spang, IBM & Wayne Glanfield, Red Bull Racing | Super Computing 2017


 

>> Announcer: From Denver, Colorado it's theCUBE. Covering Super Computing 17, brought to you by Intel. Welcome back everybody, Jeff Frick here with theCUBE. We're at Super Computing 2017 in Denver, Colorado talking about big big iron, we're talking about space and new frontiers, black holes, mapping the brain. That's all fine and dandy, but we're going to have a little bit more fun this next segment. We're excited to have our next guest Bernie Spang. He's a VP Software Defined Infrastructure for IBM. And his buddy and guest Wayne Glanfield HPC Manager for Red Bull Racing. And for those of you that don't know, that's not the pickup trucks, it's not the guy jumping out of space, this is the Formula One racing team. The fastest, most advanced race cars in the world. So gentlemen, first off welcome. Thank you. Thank you Jeff. So what is a race car company doing here for a super computing conference? Obviously we're very interested in high performance computing so traditionally we've used a wind tunnel to do our external aerodynamics. HPC allows us to do many many more iterations, design iterations of the car. So we can actually kind of get more iterations of the designs out there and make the car go faster very quicker. So that's great, you're not limited to how many times you can get it in the wind tunnel. The time you have in the wind tunnel. I'm sure there's all types of restrictions, cost and otherwise. There's lots of restrictions and both the wind tunnel and in HPC usage. So with HPC we're limited to 25 teraflops, which isn't many teraflops. 25 teraflops. >> Wayne: That's all. And Bernie, how did IBM get involved in Formula One racing? Well I mean our spectrum computing offerings are about virtualizing clusters to optimize efficiency, and the performance of the workloads. So our Spectrum LSF offering is used by manufacturers, designers to get ultimate efficiency out of the infrastructure. So with the Formula One restrictions on the teraflops you want to get as much work through that system as efficiently as you can. And that's where Spectrum computing comes in. That's great. And so again, back to the simulations. So not only can you just do simulations 'cause you got the capacity, but then you can customize it as you said I think before we turned on the cameras for specific tracks, specific race conditions. All types of variables that you couldn't do very easily in a traditional wind tunnel. Yes obviously it takes a lot longer to actually kind of develop, create, and rapid prototype the models and get them in the wind tunnel, and actually test them. And it's obviously much more expensive. So by having a HPC facility we can actually kind of do the design simulations in a virtual environment. So what's been kind of the ahah from that? Is it just simply more better faster data? Is there some other kind of transformational thing that you observed as a team when you started doing this type of simulation versus just physical simulation in a wind tunnel? We started using HPC and computational fluid dynamics about 12 years ago in anger. Traditionally it started out as a complementary tool to the wind tunnel. But now with the advances in HPC technology and software from IBM, it's actually beginning to overtake the wind tunnel. So it's actually kind of driving the way we design the car these days. That's great. So Bernie, working with super high end performance, right, where everything is really optimized to get that car to go a little bit faster, just a little bit faster. Right. 
Pretty exciting space to work in, you know, there's a lot of other great applications, aerospace, genomics, this and that. But this is kind of a fun thing you can actually put your hands on. Oh it's definitely fun, it's definitely fun being with the Red Bull Racing team, and with our clients when we brief them there. But we have commercial clients in automotive design, aeronautics, semiconductor manufacturing, where getting every bit of efficiency and performance out of their infrastructure is also important. Maybe they're not limited by rules, but they're limited by money, you know and the ability to investment. And their ability to get more out of the environment gives them a competitive advantage as well. And really what's interesting about racing, and a lot of sports is you get to witness the competition. We don't get to witness the competition between big companies day to day. You're not kind of watching it in those little micro instances. So the good thing is you get to learn a lot from such a focused, relatively small team as Red Bull Racing that you can apply to other things. So what are some of the learnings as you've got work with them that you've taken back? Well certainly they push the performance of the environment, and they push us, which is a great thing for us, and for our other clients who benefit. But one of the things I think that really stands out is the culture there of the entire team no matter what their role and function. From the driver on down to everybody else are focused on winning races and winning championships. And that team view of getting every bit of performance out of everything everybody does all the time really opened our thinking to being broader than just the scheduling of the IT infrastructure, it's also about making the design team more productive and taking steps out of the process, and anything we can do there. Inclusive of the storage management, and the data management over time. So it's not just the compute environment it's also the virtualized storage environment. Right, and just massive amounts of storage. You said not only are you running and generating, I'm just going to use boatloads 'cause I'm not sure which version of the flops you're going to use. But also you got historical data, and you have result data, and you have models that need to be tweaked, and continually upgraded so that you do better the following race. Exactly, I mean we're generating petabytes of data a year and I think one of the issues which is probably different from most industries is our workflows are incredibly complex. So we have up to 200 discrete job steps for each workflow to actually kind of produce a simulation. This is where the kind of IBM Spectrum product range actually helps us do that efficiently. If you imagine an aerospace engineer, or aerodynamics engineer trying to manually manage 200 individual job steps, it just wouldn't happen very efficiently. So this is where Spectrum scale actually kind of helps us do that. So you mentioned it briefly Bernie, but just a little bit more specifically. What are some of the other industries that you guys are showcasing that are leveraging the power of Spectrum to basically win their races. Yeah so and we talked about the infrastructure and manufacturing, but they're industrial clients. But also in financial services. So think in terms of risk analytics and financial models being an important area. Also healthcare life sciences. So molecular biology, finding new drugs. 
When you talk about the competition and who wins right. Genomics research and advances there. Again, you need a system and an infrastructure that can chew through vast amounts of data. Both the performance and the compute, as well as the longterm management with cost efficiency of huge volumes of data. And then you need that virtualized cluster so that you can run multiple workloads many times with an infrastructure that's running in 80%, 90% efficiency. You can't afford to have silos of clusters. Right we're seeing clients that have problems where they don't have this cluster virtualization software, have cluster creep, just like in the early days we had server sprawl, right? With a different app on a different server, and we needed to virtualize the servers. Well now we're seeing cluster creep. Right the Hadoop clusters and Spark clusters, and machine learning and deep learning clusters. As well as the traditional HPC workload. So what Spectrum computing does is virtualizes that shared cluster environment so that you can run all these different kind of workloads and drive up the efficiency of the environment. 'Cause efficiency is really the key right. You got to have efficiency that's what, that's really where cloud got its start, you know, kind of eating into the traditional space, right. There's a lot of inefficient stuff out there so you got to use your resources efficiently it's way too competitive. Correct well we're also seeing inefficiencies in the use of cloud, right. >> Jeff: Absolutely. So one of the features that we've added to the Spectrum computing recently is automated dynamic cloud bursting. So we have clients who say that they've got their scientists or their design engineers spinning up clusters in the cloud to run workloads, and then leaving the servers running, and they're paying the bill. So we built in automation where we push the workload and the data over the cloud, start the servers, run the workload. When the workload's done, spin down the servers and bring the data back to the user. And it's very cost effective that way. It's pretty fun everyone talks often about the spin up, but they forget to talk about the spin down. Well that's where the cost savings is, exactly. Alright so final words, Wayne, you know as you look forward, it's super a lot of technology in Formula One racing. You know kind of what's next, where do you guys go next in terms of trying to get another edge in Formula One racing for Red Bull specifically. I mean I'm hoping they reduce the restrictions on HPC so it can actually start using CFD and the software IBM provides in a serious manner. So it can actually start pushing the technologies way beyond where they are at the moment. It's really interesting that they, that as a restriction right, you think of like plates and size of the engine, and these types of things as the rule restrictions. But they're actually restricting based on data size, your use of high performance computing. They're trying to save money basically, but. It's crazy. So whether it's a rule or you know you're share holders, everybody's trying to save money. Alright so Bernie what are you looking at, sort of 2017 is coming to an end, it's hard for me to say that as you look forward to 2018 what are some of your priorities for 2018. Well the really important thing and we're hearing it at this conference, I'm talking with the analysts and with the clients. 
The next generation of HPC in analytics is what we're calling machine learning, deep learning, cognitive AI, whatever you want to call it. That's just the new generation of this workload. And our Spectrum Conductor offering and our new Deep Learning Impact capability to automate the training of deep learning models, so that you can more quickly get to an accurate model, like in hours or minutes, not days or weeks. That's going to be a huge breakthrough. And based on our early client experience this year, I think 2018 is going to be a breakout year for putting that to work in commercial enterprise use cases. Alright well I look forward to the briefing a year from now at Super Computing 2018. Absolutely. Alright Bernie, Wayne, thanks for taking a few minutes out of your day, appreciate it. You're welcome, thank you. Alright he's Bernie, he's Wayne, I'm Jeff Frick, we're talking Formula One Red Bull Racing here at Super Computing 2017. Thanks for watching.
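To make the "up to 200 discrete job steps per workflow" point from the conversation concrete, here is a minimal, illustrative dependency graph for one CFD design iteration and the order a scheduler would run it in. The step names are invented for illustration; in production, this kind of ordering, dependency handling, and cluster placement is what a workload manager such as IBM Spectrum LSF automates across many concurrent jobs.

```python
# Illustrative only: a toy CFD workflow expressed as step dependencies.
from graphlib import TopologicalSorter

# Each key is a job step; the set holds the steps that must finish first.
steps = {
    "prepare_geometry": set(),
    "generate_mesh":    {"prepare_geometry"},
    "partition_mesh":   {"generate_mesh"},
    "cfd_solve":        {"partition_mesh"},
    "extract_forces":   {"cfd_solve"},
    "render_flow_vis":  {"cfd_solve"},
    "report":           {"extract_forces", "render_flow_vis"},
}

for step in TopologicalSorter(steps).static_order():
    # In a real workflow each of these would be a cluster job submission
    # (with CPU, memory and GPU requirements), not a local print.
    print(f"submit: {step}")
```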

Published Date : Nov 16 2017

SUMMARY :

At Super Computing 2017, Jeff Frick talks with Bernie Spang of IBM and Wayne Glanfield of Red Bull Racing about high performance computing in Formula One. They discuss how CFD now rivals the wind tunnel under the sport's 25-teraflop restriction, how IBM Spectrum computing virtualizes clusters and manages workflows of up to 200 job steps, automated cloud bursting, and the shift of HPC workloads toward machine learning and deep learning.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Wayne | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Jeff | PERSON | 0.99+
Bernie | PERSON | 0.99+
Wayne Glanfield | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
90% | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
Bernie Spang | PERSON | 0.99+
2018 | DATE | 0.99+
25 teraflops | QUANTITY | 0.99+
Red Bull Racing | ORGANIZATION | 0.99+
2017 | DATE | 0.99+
Denver, Colorado | LOCATION | 0.99+
one | QUANTITY | 0.98+
Super Computing 17 | EVENT | 0.98+
Super Computing 2017 | EVENT | 0.98+
Intel | ORGANIZATION | 0.97+
each workflow | QUANTITY | 0.97+
Super Computing 2018 | EVENT | 0.97+
Formula One | EVENT | 0.96+
both | QUANTITY | 0.96+
Both | QUANTITY | 0.95+
this year | DATE | 0.92+
up to 200 discrete job steps | QUANTITY | 0.92+
a year | QUANTITY | 0.89+
Formula One | ORGANIZATION | 0.86+
about 12 years ago | DATE | 0.86+
first | QUANTITY | 0.84+
200 individual job steps | QUANTITY | 0.82+
Spectrum | OTHER | 0.79+
HPC | ORGANIZATION | 0.79+
Red Bull | EVENT | 0.79+
theCUBE | ORGANIZATION | 0.73+
petabytes | QUANTITY | 0.65+
Spark | TITLE | 0.61+
HPC | PERSON | 0.6+
issues | QUANTITY | 0.56+
features | QUANTITY | 0.52+
Spectrum | COMMERCIAL_ITEM | 0.5+
Spectrum | TITLE | 0.49+
Spectrum | ORGANIZATION | 0.44+
Bernie | LOCATION | 0.39+

Ravi Pendekanti, Dell EMC | Super Computing 2017


 

>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing '17, brought to you by Intel. Hey welcome back everybody, Jeff Frick here with theCUBE. We're at Super Computing 2017, Denver, Colorado, 12,000 people talking about big iron, big questions, big challenges. It's really an interesting take on computing, really out on the edge. The keynote was, literally, light years out in space, talking about predicting the future with quarks and all kinds of things, a little over my head for sure. But we're excited to kind of get back to the ground and we have Ravi Pendekanti. He's the Senior Vice President of Product Management and Marketing, Server Platforms, Dell EMC. It's a mouthful, Ravi, great to see you. Great to see you too Jeff, and thanks for having me here. Absolutely, so we were talking before we turned the cameras on. One of your big themes, which I love, is kind of democratizing this whole concept of high performance computing, so it's not just the academics answering the really, really, really big questions. You're absolutely right. I mean think about it Jeff, 20 years ago, even 10 years ago, when people talked about high performance computing, it was what I call being in the back alleys of research and development. There were a few research scientists working on it, but we're at a time in our journey towards helping humanity in a bigger way. HPC has found its way into almost every single mainstream industry you can think of. Whether it is fraud detection, you see MasterCard is using it for ensuring that they can see and detect any fraud that might be committed before the perpetrators come in and actually hack the system. Or if you get into life sciences, if you talk about genomics. I mean this is what might be good for our next set of generations, where they can probably go out and tweak some of the things in a genome sequence so that we don't have the same issues that we have had in the past. Right. Right? So, likewise, you can pick any favorite industry. I mean we are coming up to the holiday season soon. I know a lot of our customers are looking at how do they come up with the right schema to ensure that they can stock the right product and ensure that it is available for everyone at the right time? 'Cause timing is important. I don't think any kid wants to go with no toy and have the product ship later. So bottom line is, yes, we are looking at ensuring that HPC reaches every single industry you can think of. So how do you guys parse HPC versus a really big virtualized cluster? I mean there's so many ways that compute and storage have evolved, right? So now, with cloud and virtual cloud and private cloud and virtualization, you know, I can pull quite a bit of horsepower together to attack a problem. So how do you kind of cut the line between-- Navigate, yeah. --big, big compute, versus true HPC? HPC, it's interesting you ask. I'm actually glad you asked, because people think that it's just feeding CPU, or additional CPU will do the trick. It doesn't. The simple fact is, if you look at the amount of data that is being created. I'll give you a simple example. I mean, we are talking to one of the airlines right now, and they're interested in capturing all the data that comes through their flights. And one of the things they're doing is capturing all the data from their engines. 'Cause at the end of the day, you want to make sure that your engines are pristine as they're flying.
And every hour that an engine flies, I mean as an airplane flies, it creates about 20 terabytes of data. So if you have dual engines, which is what most flights are, in one hour they create about 40 terabytes of data. And there are supposedly about 38,000 flights in the air at any given time around the world. I mean, it's one huge data collection problem, right? I'm told it's like a real Godzilla number, so I'll let you do the computation. My point is, if you really look at the data, data has no value, right? What really is important is getting information out of it. The CPU, on the other side, has gotten to a time and a phase where it is hitting what I call the threshold of Moore's law. Moore's law was all about performance doubling every two years. But today, that performance is not sufficient, which is where auxiliary technologies need to be brought in. This is where the GPUs, the FPGAs...
>> Right, right.
>> Right. So when you think about these, that's where the HPC world takes off: you're augmenting your CPUs and your processors with additional auxiliary technology such as GPUs and FPGAs to ensure that you have more juice to go do this kind of analytics on the massive amounts of data that you and I and the rest of humanity are creating.
>> It's funny that you talk about that. We were just at a Western Digital event a little while ago, talking about the next generation of drives, and it was the same thing, where now it's this energy-assist method to change really the molecular way that it saves information to get more out of it. So that's kind of how you parse it. If you've got to juice the CPU, and kind of juice the traditional standard architecture, then you're moving into the realm of high performance computing.
>> Absolutely. I mean, this is why, Jeff, yesterday we launched the new PowerEdge C4140, right? The first of its kind in terms of the fact that it's got two Intel Xeon processors, but beyond that, it also can support four Nvidia GPUs. So now you're looking at a server that's got both the CPUs, to your earlier comment on processors, but is augmented by four GPUs, and that gives immense capacity to do this kind of high performance computing.
>> But as you said, it's not just compute, it's storage, it's networking, it's services, and then hopefully you package something together in a solution so I don't have to build the whole thing from scratch. You guys are making moves, right?
>> Oh, this is a perfect lead in, perfect lead in. I know my colleague Armagon will be talking to you guys shortly. What his team does is take all the building blocks we provide, such as the servers, obviously look at the networking and the storage elements, and then put them together to create what are called solutions. So you've got solutions which enable our customers to go back in and easily deploy a machine-learning or a deep-learning solution, where now our customers don't have to do what I call the heavy lift of trying to make sure that they understand how the different pieces integrate together. So the goal behind what we are doing at Dell EMC is to remove the guesswork so that our customers and partners can go out and spend their time deploying the solution. Whether it is for machine learning, deep learning, or pick your favorite industry, we can also verticalize it. So that's the beauty of what we are doing at Dell EMC.
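To put that "Godzilla number" in rough perspective, here is a quick back-of-the-envelope sketch using only the round figures quoted above (about 20 terabytes per engine per flight hour, two engines per aircraft, roughly 38,000 flights in the air at once); the result is only as precise as those inputs:

    #include <cstdio>

    int main() {
        // Round figures quoted in the interview; treat them as rough, illustrative inputs.
        const double tb_per_engine_hour = 20.0;    // ~20 TB generated per engine per flight hour
        const int    engines_per_plane  = 2;       // typical twin-engine airliner
        const double flights_in_air     = 38000;   // ~38,000 flights aloft at any given time

        const double tb_per_plane_hour = tb_per_engine_hour * engines_per_plane;  // ~40 TB/hour
        const double tb_fleet_hour     = tb_per_plane_hour * flights_in_air;      // worldwide TB/hour

        std::printf("Per aircraft : %.0f TB per hour\n", tb_per_plane_hour);
        std::printf("Worldwide    : %.0f TB per hour (~%.2f exabytes per hour)\n",
                    tb_fleet_hour, tb_fleet_hour / 1.0e6);
        return 0;
    }

At those rates the worldwide fleet would be generating on the order of 1.5 exabytes of engine telemetry per hour, which is the scale of collection problem being described.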
>> So the other thing we were talking about before we turned the cameras on is, I call them the "-ities" from my old Intel days: reliability, sustainability, serviceability. And you had a different phrase for it.
>> Ravi: Oh yes, I know you're talking about RAS. The RAS, right. Which is the reliability, availability, and serviceability.
>> Jeff: But you've got a new twist on it.
>> Oh we do. We're adding something very important, and we were just at a security show early this week, CyberConnect, and security now cuts through everything. Because it's no longer a walled garden, 'cause there are no walls.
>> There are no walls. It's really got to be baked into every layer of the solution.
>> Absolutely right. The reason is, if you really look at security, until a few years ago people used to think it's all about protecting yourself from external forces, but today we know that 40% of the hacks happen because of the internal, you know, system processes that we don't have in place. Or we could have a person with an intent to break in for whatever reason, so integrated security becomes part and parcel of what we do. This is where, as part of the 14G family, one of the things we said is we need to have integrated security built in. And along with that, we want to have the scalability, because no two workloads are the same, and we all know that the amount of data that's being created today is twice what it was last year for each of us, forget about everything else we are collecting. So when you think about it, we need integrated security, we need to have the scalability feature set, and we also want to make sure there is automation built in. These three main tenets that we talked about feed into what we call internally the mnemonic PARIS. And that's what I think, Jeff, to our earlier conversation, PARIS is all about. P is for best price performance. Anybody can choose to get the right performance or the best performance, but you don't want to shell out a ton of dollars. Likewise, you don't want to pay minimal dollars and try to get the best performance, that's not going to happen. I think there's a healthy balance in price performance, that's important. Availability is important. Interoperability, because as much as everybody thinks they can act on their own, it's nearly impossible, or it's impossible, that you can do it on your own.
>> Jeff: These are big customers, they've got a lot of systems.
>> You are. You need to have an ecosystem of partners and technologies that come together, and then, at the end of the day, you have to go out and have availability and serviceability, or security, to your point, security is important. So PARIS is about price performance, availability, reliability, interoperability, and security. I like it.
>> That's the way we design it.
>> It's much sexier than that. We'll drop in, like, an Eiffel Tower picture right now.
>> There you go, you should.
>> So Ravi, hard to believe we're at the end of 2017. If we get together a year from now at Super Computing 2018, what are some of your goals, what are some of your objectives for 2018? What are we going to be talking about a year from today?
>> Oh, well, looking into a crystal ball, as much as I can look into that, I think that--
>> Jeff: As much as you can disclose.
>> And as much as we can disclose, a few things I think are going to happen.
>> Jeff: Okay.
>> Number one, I think you will see people talk about, to where we started this conversation.
HPC has become mainstream, we talked about it, but the adoption of high performance computing, in my personal belief, is still not at the level it needs to be. So if you go down the next 12 to 18 months, let's say, I do think the adoption rates will be much higher than where we are. And we talk about security now, because it's a very topical subject, but as much as we are trying to emphasize to our partners and customers that you've got to think about security from ground zero, we still see a number of customers who are not ready. You know, some of the analyses show that nearly 40% of CIOs are not ready, nor do they truly understand, I should say, what it takes to have a secure system and a secure infrastructure. It's my humble belief that people will pay attention to it and move the needle on it. And we talked about, you know, four GPUs in our C4140; do anticipate that there will be a lot more auxiliary technology packed into it.
>> Sure, sure.
>> So that's essentially what I can say without spilling the beans too much.
>> Okay, all right, super. Ravi, thanks for taking a couple of minutes out of your day, appreciate it.
>> Thank you.
>> All right, he's Ravi, I'm Jeff Frick, you're watching theCUBE from Super Computing 2017 in Denver, Colorado. Thanks for watching. (techno music)

Published Date : Nov 16 2017

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jeff | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Ravi Pendekanti | PERSON | 0.99+
40% | QUANTITY | 0.99+
Ravi | PERSON | 0.99+
PARIS | ORGANIZATION | 0.99+
2018 | DATE | 0.99+
one hour | QUANTITY | 0.99+
Dell EMC | ORGANIZATION | 0.99+
12,000 people | QUANTITY | 0.99+
MasterCard | ORGANIZATION | 0.99+
C4140 | COMMERCIAL_ITEM | 0.99+
Nvidia | ORGANIZATION | 0.99+
twice | QUANTITY | 0.99+
each | QUANTITY | 0.99+
Denver, Colorado | LOCATION | 0.99+
both | QUANTITY | 0.99+
Armagon | ORGANIZATION | 0.99+
last year | DATE | 0.99+
today | DATE | 0.99+
about 20 terabytes | QUANTITY | 0.99+
Denver, | LOCATION | 0.98+
one | QUANTITY | 0.98+
Intel | ORGANIZATION | 0.98+
yesterday | DATE | 0.98+
about 38,000 flights | QUANTITY | 0.98+
early this week | DATE | 0.98+
PowerEdge | COMMERCIAL_ITEM | 0.97+
end | DATE | 0.97+
Eiffel Tower | LOCATION | 0.97+
10 years ago | DATE | 0.97+
nearly 40% | QUANTITY | 0.96+
two | QUANTITY | 0.95+
20 years ago | DATE | 0.95+
18 months | QUANTITY | 0.95+
three main tenets | QUANTITY | 0.94+
first | QUANTITY | 0.94+
four | QUANTITY | 0.93+
Super Computing '17 | EVENT | 0.92+
One | QUANTITY | 0.92+
every two years | QUANTITY | 0.92+
Super Computing 2017 | EVENT | 0.91+
12 | QUANTITY | 0.89+
2017 | DATE | 0.88+
few years ago | DATE | 0.86+
Moore | PERSON | 0.86+
Xeon | COMMERCIAL_ITEM | 0.85+
Western Digital | ORGANIZATION | 0.84+
about 40 terabytes of data | QUANTITY | 0.83+
Super Computing 2018 | EVENT | 0.82+
two workloads | QUANTITY | 0.81+
dual | QUANTITY | 0.76+
a year | QUANTITY | 0.74+
a ton of dollars | QUANTITY | 0.74+
14G | ORGANIZATION | 0.7+
single | QUANTITY | 0.66+
every hour | QUANTITY | 0.65+
ground zero | QUANTITY | 0.64+
HPC | ORGANIZATION | 0.6+
Colorado | LOCATION | 0.56+
doubles | QUANTITY | 0.56+
P | ORGANIZATION | 0.54+
CyberConnect | ORGANIZATION | 0.49+
theCUBE | ORGANIZATION | 0.49+
RAS | OTHER | 0.34+

Susan Bobholz, Intel | Super Computing 2017


 

>> [Announcer] From Denver, Colorado, it's the Cube covering Super Computing 17, brought to you by Intel. (techno music)
>> Welcome back, everybody, Jeff Frick with the Cube. We are at Super Computing 2017 here in Denver, Colorado. 12,000 people talking about big iron, heavy lifting, stars, future mapping the brain, all kinds of big applications. We're here, first time ever for the Cube, great to be here. We're excited for our next guest. She's Susan Bobholz, she's the Fabric Alliance Manager for Omni-Path at Intel. Susan, welcome.
>> Thank you.
>> So what is Omni-Path, for those that don't know?
>> Omni-Path is Intel's high performance fabric. What it does is it allows you to connect systems and make big, huge supercomputers.
>> Okay, so for the royal three-headed horsemen of compute, store, and networking, you're really into data center networking, connecting the compute and the store.
>> Exactly, correct, yes.
>> Okay. How long has this product been around?
>> We started shipping 18 months ago.
>> Oh, so pretty new?
>> Very new.
>> Great, okay, and the target market, I'm guessing, has something to do with high performance computing.
>> (laughing) Yes, our target market is high performance computing, but we're also seeing a lot of deployments in artificial intelligence now.
>> Okay, and so what's different? Why did Intel feel compelled that they needed to come out with a new connectivity solution?
>> We were getting people telling us they were concerned that the existing solutions were becoming too expensive and weren't going to scale into the future, so they said, Intel, can you do something about it, so we did. We made a couple of strategic acquisitions, we combined that with some of our own IP, and came up with Omni-Path. Omni-Path is very much a proprietary protocol, but we use all the same software interfaces as InfiniBand, so your software applications just run.
>> Okay, so to the machines it looks like InfiniBand?
>> Yes.
>> Just plug and play and run.
>> Very much so, it's very similar.
>> Okay, what are some of the attributes that make it so special?
>> The reason it's really going very well is the price performance benefits. We have equal to, or better, performance than InfiniBand today, but we also have our switch technology at 48 ports versus InfiniBand's 36 ports. So that means you can build denser clusters in less space with fewer cables, lower power, total cost of ownership goes down, and that's why people are buying it.
>> Really fits into the data center strategy that Intel's executing very aggressively right now.
>> Fits very nicely, absolutely, yes, very much so.
>> Okay, awesome, so what are your thoughts here at the show? Any announcements, anything that you've seen that's of interest?
>> Oh yeah, so, a couple things. We've really had good luck on the Top 500 list. 60% of the servers that are running 100 gigabit fabrics in the Top 500 list are connected via Omni-Path.
>> What percentage again?
>> 60%.
>> 60?
>> Yes.
>> You've only been at it for 18 months?
>> Yes, exactly.
>> Impressive.
>> Very, very good. We've got systems in the Top 10 already. Some of the Top 10 systems in the world are using Omni-Path.
>> Is it rip and replace, do you find, or are these new systems that people are putting in?
>> Yeah, these are new systems. Usually when somebody's got a system they like and run, they don't want to touch it.
>> Right.
>> These are people saying, I need a new system. I need more power, I need more oomph.
They have the money, the budget, they want to put in something new, and that's when they look to Omni-Path.
>> Okay, so what are you working on now, what's kind of next for Omni-Path?
>> What's next for us is we are announcing a new, higher-density switch technology. So for your director-class switches, which are the really big ones, rather than having 768 ports you can now go to 1152, and that means, again, denser topologies, lower power, less cabling; it reduces your total cost of ownership.
>> Right, I think you just answered my question, but I'm going to ask you anyway.
>> (laughs) Okay.
>> We talked a little bit before we turned the camera on about AI and some of the really unique challenges of AI, and that was part of the motivation behind this product. So what are some of the special attributes of AI that really require this type of connectivity?
>> It's very much what you see even with high performance computing. You need low latency, you need high bandwidth. It's the same technologies, and in fact, in a lot of cases, it's the same systems. Sometimes they're running a software load that is HPC focused, and sometimes they're running a software load that is artificial intelligence focused. But they have the same exact needs.
>> Okay.
>> Do it fast, do it quick.
>> Right, right, that's why I said you already answered the question. Higher density, more computing, more storing, faster.
>> Exactly, right, exactly.
>> And price performance. All right, good, so if we come back a year from now for Super Computing 2018, which I guess is in Dallas in November, they just announced. What are we going to be talking about, what are some of your priorities and the team's priorities as you look ahead to 2018?
>> Oh, we're continuing to advance the Omni-Path technology with software and additional capabilities moving forward, so we're hoping to have some really cool announcements next year.
>> All right, well, we'll look forward to it, and we'll see you in Dallas in a year.
>> Thanks, Cube.
>> All right, she's Susan, and I'm Jeff. You're watching the Cube from Super Computing 2017. Thanks for watching, see ya next time. (techno music)
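To make the 48-port versus 36-port point, and the jump from 768 to 1152 director ports, a bit more concrete: switch radix determines how many hosts a fabric can reach in a given number of tiers. The sketch below is a generic non-blocking two-level fat-tree calculation, not a description of Omni-Path's actual internal topology, but it shows where the "denser clusters, fewer cables" argument comes from:

    #include <cstdio>

    // Generic non-blocking two-level (leaf/spine) fat-tree math, not Omni-Path-specific:
    // each r-port leaf switch dedicates r/2 ports to hosts and r/2 to spine uplinks,
    // and each of the r/2 spine switches reaches r leaves, so hosts = r * (r / 2).
    static int max_hosts(int radix)    { return radix * (radix / 2); }
    static int switch_count(int radix) { return radix + radix / 2; }  // r leaves + r/2 spines

    int main() {
        const int radices[] = {36, 48};
        for (int radix : radices) {
            std::printf("%d-port switches: up to %4d hosts in two tiers, using %d switches\n",
                        radix, max_hosts(radix), switch_count(radix));
        }
        return 0;
    }

With 48-port building blocks the two-tier ceiling works out to 1152 hosts (matching the director-class port count mentioned above) versus 648 with 36-port switches, so a cluster of a given size needs fewer switches, fewer tiers, and fewer cables.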

Published Date : Nov 15 2017

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Susan Bobholtz | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Jeff | PERSON | 0.99+
Susan Bobholz | PERSON | 0.99+
Dallas | LOCATION | 0.99+
18 months | QUANTITY | 0.99+
November | DATE | 0.99+
Susan | PERSON | 0.99+
2018 | DATE | 0.99+
36 ports | QUANTITY | 0.99+
60% | QUANTITY | 0.99+
12,000 people | QUANTITY | 0.99+
Cube | PERSON | 0.99+
next year | DATE | 0.99+
100 gigabyte | QUANTITY | 0.99+
Intel | ORGANIZATION | 0.99+
Denver, Colorado | LOCATION | 0.99+
48 ports | QUANTITY | 0.99+
768 ports | QUANTITY | 0.99+
60 | QUANTITY | 0.98+
first time | QUANTITY | 0.97+
Cube | COMMERCIAL_ITEM | 0.97+
18 months ago | DATE | 0.97+
Super Computing 2017 | EVENT | 0.96+
today | DATE | 0.92+
InfiniBand | TITLE | 0.91+
Top 10 | QUANTITY | 0.91+
1152 | QUANTITY | 0.91+
Super Computing 17 | EVENT | 0.91+
Top 10 systems | QUANTITY | 0.85+
a year | QUANTITY | 0.82+
three-headed | QUANTITY | 0.8+
Path | OTHER | 0.79+
Super Computing | EVENT | 0.76+
Top | QUANTITY | 0.72+
Omni-Path | TITLE | 0.72+
Omni-Path | OTHER | 0.72+
Omni-Path | COMMERCIAL_ITEM | 0.71+
Omni | TITLE | 0.59+
Omni | ORGANIZATION | 0.58+
Omni-Path | ORGANIZATION | 0.57+
couple | QUANTITY | 0.5+
-Path | OTHER | 0.49+
Path | ORGANIZATION | 0.3+
500 | OTHER | 0.29+

Jim Wu, Falcon Computing | Super Computing 2017


 

>> Announcer: From Denver, Colorado, it's theCUBE covering Super Computing '17. Brought to you by Intel. (upbeat techno music)
>> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at Super Computing 2017 in Denver, Colorado. It's our first trip to the show, 12,000 people, a lot of exciting stuff going on, big iron, big lifting, heavy duty compute. We're excited to have our next guest on. He's Jim Wu, he's the Director of Customer Experience for Falcon Computing. Jim, welcome.
>> Thank you. Good to see you.
>> So, what does Falcon do, for people that aren't familiar with the company?
>> Yeah, Falcon is an early-stage startup focused on FPGA-based acceleration development. Our vision is to allow software engineers to develop FPGA-based accelerators without FPGA expertise.
>> Right, and you just said you closed your B round. So, congratulations on that.
>> Jim: Thank you. Yeah, very exciting.
>> So, it's a pretty interesting concept. To really bring the capability to traditional software engineers to program for hardware. That's kind of a new concept. What do you think? 'Cause it brings the power of a hardware system, but the flexibility of a software system.
>> Yeah, so today, developing FPGA accelerators is very challenging. Today, for acceleration, people use very low-level languages like Verilog and VHDL to develop FPGA accelerators, which is very time consuming, very labor-intensive. So our goal is to liberate them to use a C/C++-based design flow, to give them an environment that they are familiar with in C/C++. So now not only can they improve their productivity, we also do a lot of automatic optimization under the hood to give them the best accelerator results.
>> Right, so that really opens up the ecosystem well beyond the relatively small ecosystem that knows how to program the hardware.
>> Definitely, that's what we are hoping to see. We want to put the tool in the hands of all software programmers. They can use it in the cloud. They can use it on premises.
>> Okay. So what's the name of your product? And how does it fit within the stack? I know we've got the Intel microprocessor under the covers, we've got the accelerator, we've got the cards. There's a lot of pieces to the puzzle.
>> Jim: Yeah. So where does Falcon fit? Our main product is a compiler, called the Merlin Compiler.
>> Jeff: Okay.
>> It's a pure C and C++ flow that enables software programmers to design FPGA-based accelerators without any knowledge of FPGA. And it's highly integrated with Intel development tools, so users don't even need to learn anything about the Intel development environment. They can just use their C++ development environment. Then in the end, we give them the host code as well as the FPGA binaries, so they can run on the FPGA to see accelerated applications.
>> Okay, and how long has Merlin been GA?
>> Actually, we'll be GA early next year.
>> Early next year. So finishing, doing the final polish here and there.
>> Yes. So in this quarter, we are investing heavily in a lot of ease-of-use features.
>> Okay.
>> We have most of the features we want in the tool, but we're still lacking a bit in terms of ease-of-use.
>> Jeff: Okay.
>> So we are enhancing our reporting capabilities, we are enhancing our profiling capabilities. We want it to really, truly feel like a traditional C++-based development environment for software application engineers.
>> Okay, that's fine. You want to get it done, right, before you ship it out the door? So you have some Alpha programs going on?
>> Some Beta programs with some really early adopters?
>> Yeah, exactly. So today we provide a 14-day free trial to any customers who are interested. You can set it up in your enterprise or you can set it up in the cloud.
>> Okay.
>> We provide it wherever you want your work done.
>> Okay. And so you'll support all the cloud service providers, the big public clouds, all the private clouds. All the traditional data servers as well.
>> Right. So, we are twice already on Aduplas as well as Alibaba Cloud. So we are working on bringing the tool to other public cloud providers as well.
>> Right. So what is some of the early feedback you're getting from some of the people you're talking to, as to where this is going to make the biggest impact? What type of application space has just been waiting for this solution?
>> So our Merlin Compiler is a productivity tool, so any space where FPGA can traditionally play well, that's where we want to be. So like encryption, decryption, video codecs, compression, decompression. Those kinds of applications are very suitable for FPGA. Traditionally they could only be developed by hardware engineers. Now with the Merlin Compiler, all of these software engineers can use the Merlin Compiler to do all of these applications.
>> Okay. And when is the GA getting out? I know it's coming. When is it coming, approximately?
>> So probably first quarter of 2018.
>> Okay, that's just right around the corner.
>> Exactly.
>> Alright, super. And again, a little bit about the company: how many people are you? A little bit of the background on the founders.
>> So we have about 30 employees at the moment. We have offices in Santa Clara, which is our headquarters. We also have an office in Los Angeles, as well as Beijing, China.
>> Okay, great. Alright, well, Jim, thanks for taking a few minutes. We'll be looking for GA in a couple of months, and wish you nothing but the best success.
>> Okay, thank you so much, Jeff.
>> Alright, he's Jim Wu, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. (upbeat techno music)
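To make the "software engineers writing FPGA accelerators in C/C++" idea concrete, here is a purely illustrative kernel of the kind such a flow starts from: an ordinary C++ function with a regular loop. No Merlin-specific pragmas, command-line options, or generated host code are shown, since those details aren't covered in this interview; the point is simply that the input is plain C++ rather than Verilog or VHDL:

    #include <cstddef>
    #include <cstdio>

    // Illustrative only: an ordinary C++ loop kernel of the kind a C/C++-to-FPGA
    // (high-level synthesis) flow can turn into a hardware accelerator by pipelining
    // and unrolling the loop. No Merlin Compiler pragmas or APIs are shown here.
    constexpr std::size_t N = 1024;

    void vector_scale_add(const float a[N], const float b[N], float alpha, float out[N]) {
        for (std::size_t i = 0; i < N; ++i) {
            out[i] = alpha * a[i] + b[i];  // one multiply-add per element; easy to pipeline in hardware
        }
    }

    int main() {
        static float a[N], b[N], out[N];
        for (std::size_t i = 0; i < N; ++i) { a[i] = float(i); b[i] = 1.0f; }
        vector_scale_add(a, b, 2.0f, out);        // on a CPU this runs as-is;
        std::printf("out[10] = %.1f\n", out[10]); // an HLS flow would offload the same source
        return 0;
    }

The same source still compiles and runs on a CPU, which is what makes this kind of flow approachable for software engineers who have never touched a hardware description language.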

Published Date : Nov 14 2017

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jim Wu | PERSON | 0.99+
Jim | PERSON | 0.99+
Jeff | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Santa Clara | LOCATION | 0.99+
Beijing | LOCATION | 0.99+
Los Angeles | LOCATION | 0.99+
14 day | QUANTITY | 0.99+
today | DATE | 0.99+
Falcon | ORGANIZATION | 0.99+
first quarter of 2018 | DATE | 0.99+
12,000 people | QUANTITY | 0.99+
Denver, Colorado | LOCATION | 0.99+
twice | QUANTITY | 0.99+
first trip | QUANTITY | 0.99+
C++ | TITLE | 0.99+
Early next year | DATE | 0.98+
Intel | ORGANIZATION | 0.98+
Super Computing '17 | EVENT | 0.98+
early next year | DATE | 0.98+
2017 | DATE | 0.98+
GA | LOCATION | 0.97+
Jim Lu | PERSON | 0.97+
Falcon Company | ORGANIZATION | 0.97+
about 30 employees | QUANTITY | 0.97+
Super Computing 2017 | EVENT | 0.97+
APGA | TITLE | 0.94+
this quarter | DATE | 0.94+
theCUBE | ORGANIZATION | 0.94+
C | TITLE | 0.92+
Aduplas | ORGANIZATION | 0.91+
C/C+ | TITLE | 0.9+
C+ | TITLE | 0.87+
Alibaba Cloud | ORGANIZATION | 0.84+
APGA | ORGANIZATION | 0.82+
Falcon Computing | ORGANIZATION | 0.81+
China | LOCATION | 0.76+
Merlin | TITLE | 0.71+
Merlin Compiler | TITLE | 0.65+
Merlin | ORGANIZATION | 0.64+
FPGA | ORGANIZATION | 0.62+
Super | EVENT | 0.61+
GA | ORGANIZATION | 0.61+
Verilog | TITLE | 0.54+