Scott Feldman, SAP HANA & Leonardo Community | SAP SAPPHIRE NOW 2018
>> From Orlando, Florida, it's theCUBE. Covering SAP SAPPHIRE NOW 2018. Brought to you by NetApp. >> Hey, welcome to theCUBE. I'm Lisa Martin, on the ground at SAPPHIRE NOW 2018, in the NetApp booth with Keith Townsend for the day. Keith and I are joined by Scott Feldman, the Global Head of SAP HANA and Leonardo Communities. Scott, welcome to theCUBE. >> Thank you, great to be here. >> So, communities, plural. Why are... Tell us about the communities at SAP. Why is there specifically an SAP HANA community, before we get into Leonardo? >> Okay, well it's kinda fun, because you start one community and then they say, "Well, go do another community." So you do one, and it's like, okay, do another one. So we have, at SAP, a global community that runs on the SAP.com platform. That's for everybody. That's for all customers, all partners, all analysts, everybody. That's normally called the SAP Community. What we realized back in, around 2012 or 2013, is that we wanted to have a special place where our SAP HANA early-adopter customers could go and join and network with each other on an online presence, right, and then have an opportunity to share their knowledge with each other and get more information from SAP. So we created a separate community on SAP HANA. It's actually a pretty easy URL, it's called SAPHANACommunity.com. It's pretty simple to remember. And now, we've been doing this for about five, six years. >> So talk to us about what's unique about the HANA community, outside of the technology. The SAP Community in general is already a pretty big, very active community. >> Correct. >> But what was the call-out, or what were the results of creating the HANA community? >> Great, and that's a great question.
So what's really interesting about the SAP HANA community is that the topic and coverage of the content is specifically related to SAP HANA, data management, database tools and technologies, analytics, and other surrounding areas that are connected to the HANA platform as an anchor. So we have provided, over the past five years, almost 300 webinars of content on SAP HANA technology. A lot of that content has come from SAP product managers, a lot of it's come from solution experts; partners as well have provided content. And it's in the form of webinar frameworks as well as whitepapers and other content that's on there. Now, the people that join the community, which is all free, by the way, for the customers that join, are mainly our SAP customers. Now I'm proud to tell you, here at SAPPHIRE 2018, we're at over 6,100 or so members, globally, of the SAP HANA community. And what's really great about that is, you know, relative to the millions of people in some other communities, it seems like, you know, 6,000 plus is a small number. But you have to keep in mind that it's very targeted, right? So the people that are through the door and are members of the community on SAP HANA Jam, we have it on our SAP Jam site, which is hosted on the SAP Cloud Platform. These are people that really are interested in that topic. And they really wanna learn about SAP HANA and the technology surrounding SAP HANA. So they're very, very highly qualified, high-quality people. >> Very engaged, it sounds like. >> Absolutely. >> So, speaking of that, this morning during Bill McDermott's keynote, he mentioned 23,000 HANA customers. >> Yes. >> You mentioned 6,000 actively engaging in your community. >> Yes. >> Collaboration was a big theme of this morning, talking about how this is not grandpa's CRM anymore, what SAP is doing to break that status quo.
How influential are those customers engaging in the HANA community to its development and its evolution? >> That's a fantastic question. So what's happened is the community... Think of it almost like a pyramid. So the large, vast number of people who have joined the community for interest in the topics have mostly consumed information; they are kinda the baseline of the pyramid. Some of those customers have some great stories to tell. Okay, so what we did was we started a webinar series in 2013 called Spotlight. And I'll take credit for the name, actually, 'cause we call it the SAP HANA Spotlight. And essentially, what we're doing is, imagine the customer in a spotlight where they're sharing their journey. They're sharing their SAP HANA story and their journey. So we launched that a number of years ago, and now we've done almost 80 separate HANA Spotlight webinars with customers that are sharing their stories. Well, we even took it one step further beyond that. In 2013, some of the executives from our early-adopting customers for SAP HANA came over to SAP and they said, "Gee, SAP, we're betting our career "and our company survival "on this new technology called SAP HANA," back in 2013. And they basically came to us and said, "We wanna have a council." So we wanna have a council of influence, so that we have an opportunity to get together, share stories, share our journeys with each other, get to know who the other customers are that are also early adopters and are embarking on this journey with us together, and then, more importantly, to answer your question, feed that information back to SAP development so that we could, back at SAP, improve the product and come out with some additional features and functions and make it even better. Well, that was 2013. Our very first meeting was up in Canada, in a suburb of Toronto, at one of our customer locations. We had 13 people in that meeting.
Today, fast-forward six years, we're at over 750 members of an executive community, so these are C-levels, VPs, senior IT, and chief architects that are in our community globally. We've done 24 meetings, I'm about to schedule the 25th meeting, and I've globalized that. And the customers, I thought they would've been tired of these kinds of meetings, they love it. They absolutely love it. So again, going back to that analogy, this is kind of the high peak point of the pyramid. We get the executives that are making these decisions, and we talk about thought leadership. We don't talk about features and functionality. We do talk about road maps, we talk about investments that they need to make, and we anchor it again on the SAP HANA platform, but we're bringing in other technologies and components like analytics or SAP Leonardo, right, or S/4HANA, right. Now that it's announced, we'll bring in C/4HANA. So we'll cover other topics as well, and of course the cloud platform. >> So you set it up, rinse and repeat, now we're at Leonardo. >> Rinse and repeat. Rinse and repeat. >> What is, first off, what is Leonardo? Great name, I love the name. But what is it? >> So SAP Leonardo is a methodology. It's an opportunity for our customers to co-design, co-invent, and get engaged in the design thinking process to understand how data, and we talked about this today, how data and how knowledge can enable an intelligent enterprise. And it's a process. So that's what people need to understand, and customers work with us at SAP, and they can go to the SAP Leonardo booth areas at the conferences and see as many use cases as they wish. But essentially it's a foundation. It's an understanding of, how do I take where I am today, from my own understanding of how I operate my business, and where do I need to go? What is my next-gen process? Where do I need to be in five years to be that thought leader, and how do I get there? So how do I take data that I know and data that I don't know?
We have, I just ran into one of our customers... We run a program out of our team as well called the SAP Innovation Awards. It started off as the HANA Innovation Awards, and now we cover all technologies and all topics for customer innovation. So SAP Leonardo, cloud platform solutions, SAP HANA solutions, data management solutions, these are all innovative offerings. We just announced all the winners; we actually have a ceremony tomorrow night where all the winners that have been announced are gonna be receiving their trophies. We've been doing this for many years. What's interesting about that is all the innovative projects that are coming from the customers: programs, projects, innovations. What are they doing? How are they co-innovating? Are they co-innovating with SAP? Are they doing smart farming? We have one winner that's actually doing smart farming, micro-crop planting to understand soil composition. And humidity and moisture composition is different even if you go one meter away, one meter, which is nothing. >> You're right. >> For the Americans listening, it's three feet. (everyone laughs) And that's pretty close. And they can actually combine different crop plantings based on soil conditions and compositions, and this is all being monitored in the SAP HANA cloud. So this is really phenomenal. >> Yeah, that would be. >> And we love these kinds of stories. And what we're doing now, as you can imagine. You're probably gonna ask me, how do you connect the dots? Well, it was pretty easy to connect the dots. We have the customers that have presented these great programs. They've created these great values that they're providing to their industry, right? And they're great wins and successes. And we're leveraging those customers in the community as thought leaders. And we're also doing sessions like that. I'd like to get them on theCUBE. Have them talk about some of the things >> That would be great. >> that they're doing. >> We would have fun.
We love customer stories. >> I love it. I think it would be phenomenal. >> So, let's talk about the dynamics of running a community program that's featured around a product. And HANA, very straightforward, is about the tech; a lot of it was speeds and feeds that transitioned into solutions. >> Right. >> When you start out with something as ambitious as the Leonardo framework, are the dynamics different? Like, what is the community like? >> A little bit, 'cause SAP HANA is the foundation. And we saw this today at the keynotes. Bill's keynote was phenomenal, and we saw how he was positioning this, and it's all about the intelligent enterprise with SAP HANA as a foundation; it's fantastic. And we've been doing this for a lot of years. But what do we do to build upon that? When we established the foundational community for SAP HANA, people started coming in and wanting to understand everything about the HANA community. We did a couple of fundamental things. Number one, we connected with the SAP HANA Academy. And I'll give a shout out to my friends at the academy, I love them to death, and we've been partnering with them for five plus years. The SAP HANA Academy is a YouTube site of thousands of videos on how to do anything. It could be data management, it could be data hub, it could be Vora, which is connected to Hadoop. It could be SAP HANA. It could be analytics. And there's thousands, literally thousands of videos on how to do just about anything that you want, connected to the community. So the people on the SAP HANA Academy team have presented content, webinars on our community broadcasts, at least for the last... This year they did one; they do like two or three every year, for the last number of years. What we did with SAP Leonardo was, Leonardo can be thought of as a combination of the technologies.
So we have, as you know, with machine learning, IoT, blockchain, right, analytics and a whole bunch of other things, design thinking methodologies that are in Leonardo. So what we did is we took a lot of that and created a series of webinars and content. We just finished something called the SAP Digital Transformation Series featuring SAP Leonardo, in conjunction with ASUG, the Americas' SAP Users' Group, that's our co-conference sponsor here, and we love them to death. And what we did was a 14-part webinar series. We had thousands of people come onto these calls, and each call covered a topic. For example, Mala, who's our president, she did, what is the overview of Leonardo? How do we do this? We covered analytics with Mike Flannagan. Maricel covered design thinking. And then we went from there. Then we covered the solutions themselves. What is IoT, what is blockchain, what is machine learning? How do you understand what these things do and how they impact your organization? Then we took it one step further. We went into the industry solutions. So the partners are developing industry solutions. The industry accelerators, we talked a little bit earlier, there's a press release that just came out on that, on some of the..
Next step, second half of the year, is we want those customer stories out there. So those 80 or so webinars that I mentioned that we did with our customer Spotlights, we want those Spotlights now. So we'll focus those... Anybody watching, give me those Spotlights. We want those stories. We want the customers to really articulate their story, their challenges, their successes, their wins, what they're doing with the SAP technology that-- >> You're preaching to the choir; as a customer marketing person, there's no better value-- >> Isn't it great? >> Brand validation, than the voice of the customer. Speaking of brand validation, I heard this morning that Bill McDermott announced that you guys are now number 17 on the top 100 global most valuable brands. >> Absolutely. >> He wants to be in the top 10. >> And we're proud of that. I'm part of that team. >> Up four. You're doing this with a tremendous amount of partners, as you mentioned, partners. We're in the NetApp booth. >> Correct. >> Talk to us about what SAP and NetApp are doing in the community to enable this amazing amount of education that you're doing. >> So that's a great question. I mean, SAP wouldn't be where it is today, and I've been with SAP for (chuckles) I don't wanna say the number of years, but people watch me and they know I've been at SAP a long time. It's like you can't say Scott Feldman without SAP. So it's been kind of anchored in for a long time. It's sort of the blood, the blue blood runs in the DNA, you know. It's just kind of fun. But some of the partners that we've worked with in the communities have taken it to another step. NetApp is one of those. And I love working with NetApp. They're a strategic technology provider and a fantastic global partner with SAP. I know you just heard from RJ, who did an interview; we work a lot with him and his team as well, Roland and the rest of the team.
And what NetApp has done is they've made another strategic investment with us in the communities, for the HANA community and the Leonardo community, such that they're a named sponsor partner. And what's really nice about that is we have a special spot, and if you go to the SAPHANACommunity.com site, or if you're already a member, or the other one is, you can guess, SAPLeonardoCommunity.com, very similar, right? If you go to either one of those sites, you'll find that there's a spot for partners that are specific to that community, that have taken the next step to add additional value. NetApp is there, there's a page. And what we've done is we've created a page with all the NetApp content on, what is NetApp's contribution on SAP HANA and Leonardo? Where is the value proposition? Why NetApp? What are they doing with SAP? Where are the links that we can go to for all the content that NetApp has provided to us to post in that community? And not only that, NetApp is also an outstanding, upstanding member of the SAP HANA Council community, 'cause they also run SAP. And, in addition to that, NetApp is a strategic partner that provides webinar content for SAP, for the community. So, about once a quarter, there'll be a webinar that is sponsored by NetApp, and now I'm bugging them a little bit to get the customers in front of the webinar so we can have these little-- >> There must be some NetApp-SAP Customer Spotlights just waiting to come to the surface, right? >> Oh, absolutely. And we're doing them in small snippets, so what's really great about that, it's kinda like this discussion that we're having, these small chunks. 'Cause I think the new wave of doing things, >> Snackable content. >> And I could certainly tell you're from the generation that's just maybe a little bit younger, is that they don't have time to sit down and watch a webinar for one hour. But they'll take it in 20-minute doses. They'll just be like, "Man, give me "all the 20-minute webinars you want."
It's like, just give me a chunk and I'll take it, and boom. I really want that. So that's been a lot of fun. So NetApp's been a fantastic strategic partner, and we'll continue to partner with them moving forward. >> So I'm hearing a lot of collaboration, a lot of participation, energy just radiating, I think, off from the main stage-- >> Oh, I don't like the community, just do the watch, uncles love it. >> From the main stage to what you're talking about, with what you guys are doing, and I love to hear that the customers are being recognized for their innovation. Not just-- >> They are, yeah. >> Transforming their businesses, new revenue streams, new business models, but leveraging their partners, like SAP, like NetApp, to become the intelligent enterprise and change industries. >> Absolutely, Lisa. And they're becoming the thought leaders of their own industry. So if you want to become a leader or a thought leader in your own specific industry, join the SAP HANA Community, make the investments in SAP Leonardo, work with SAP, work with NetApp, and like Bill says, let's get it done. >> Let's get it done. Scott, thanks so much for stopping by and chatting with Keith and me this morning. >> Thank you for your time, it's been my pleasure. >> And enjoy the rest of the event. >> I look forward to it. >> All right. Lisa Martin with Keith Townsend on theCUBE, from the NetApp booth at SAP SAPPHIRE NOW 2018. Thanks for watching. (funky music)
Itamar Ankorion, Qlik & Peter MacDonald, Snowflake | AWS re:Invent 2022
(upbeat music) >> Hello, welcome back to theCUBE's AWS re:Invent 2022 coverage. I'm John Furrier, host of theCUBE. Got a great lineup here: Itamar Ankorion, SVP of Technology Alliances at Qlik, and Peter MacDonald, Vice President of Cloud Partnerships and Business Development at Snowflake. We're going to talk about bringing SAP data to life with a joint Snowflake, Qlik, and AWS solution. Gentlemen, thanks for coming on theCUBE. Really appreciate it. >> Thank you. >> Thank you, great meeting you, John. >> Just to get started, introduce yourselves to the audience, then we're going to jump into what you guys are doing together, a unique relationship here, really compelling solution in cloud. Big story about applications and scale this year. Let's introduce yourselves. Peter, we'll start with you. >> Great. I'm Peter MacDonald. I am vice president of Cloud Partners and business development here at Snowflake. On the Cloud Partner side, that means I manage the AWS relationship, along with Microsoft and Google Cloud: what we do together in terms of complementary products, GTM, co-selling, things like that. Importantly, that includes working with other third parties like Qlik for joint solutions. On business development, it's negotiating custom commercial partnerships with large companies like Salesforce and Dell, and smaller companies, some from our venture portfolio. >> Thanks Peter, and hi John. It's great to be back here. So I'm Itamar Ankorion, and I'm the senior vice president responsible for technology alliances here at Qlik. With that, I own strategic alliances, including our key partners in the cloud, including Snowflake and AWS. I've been in the data and analytics enterprise software market for 20 plus years, and my main focus is product management, marketing, alliances, and business development. I joined Qlik about three and a half years ago through the acquisition of Attunity, which is now the foundation for Qlik data integration.
So again, we focus in my team on creating joint solution alignment with our key partners to provide more value to our customers. >> Great to have both you guys, senior executives in the industry, on theCUBE here, talking about data. Obviously bringing SAP data to life is the theme of this segment, but this re:Invent, it's all about the data, the big data end-to-end story, a lot about data being intrinsic, as the CEO said on stage, in organizations in all aspects. Take a minute to explain what you guys are doing from a company standpoint, Snowflake and Qlik, and the solutions. Why here at AWS? Peter, we'll start with you at Snowflake: what you guys do as a company, your mission, your focus. >> That was great, John. Yeah, so here at Snowflake, we focus on the data platform, and until recently, data platforms required expensive on-prem hardware appliances. And despite all that expense, customers had capacity constraints, expensive maintenance, and limited functionality that all impeded these organizations from reaching their goals. Snowflake is a cloud-native SaaS platform, and we've become so successful because we've addressed these pain points and have other new special features. For example, securely sharing data across both the organization and the value chain without copying the data, support for new data types such as JSON and other semi-structured data, and also advances in database data governance. Snowflake integrates with complementary AWS services and other partner products, so we can enable holistic solutions that include, for example, here, both Qlik and AWS SageMaker and Comprehend, and bring those to joint customers. Our customers want to convert data into insights along with advanced analytics platforms and AI. That is how they make holistic data-driven solutions that will give them competitive advantage.
With Snowflake, our approach is to focus on customer solutions that leverage data from existing systems such as SAP, wherever they are, in the cloud or on-premise. And to do this, we leverage partners like Qlik and AWS to help customers transform their businesses. We provide customers with a premier data analytics platform as a result. Itamar, why don't you talk about Qlik a little bit, and then we can dive into the specific SAP solution here and some trends. >> Sounds great, Peter. So Qlik provides modern data integration and analytics software used by over 38,000 customers worldwide. Our focus is to help our customers turn data into value and help them close the gap from data all the way through insight and action. We offer Qlik Data Integration and Qlik Data Analytics. Qlik Data Integration helps automate the data pipelines to deliver data to where they want to use it, in real-time, and make the data ready for analytics. And then Qlik Data Analytics is a robust platform for analytics and business intelligence, which has been a leader in the Gartner Magic Quadrant for over 11 years now in the market. And both of these come together into what we call Qlik Cloud, which is our SaaS-based platform, providing a more seamless way to consume all these services and accelerate time to value with customer solutions. In terms of partnerships, both Snowflake and AWS are very strategic to us here at Qlik, so we have made very comprehensive investments to ensure a strong joint value proposition that we can bring to our mutual customers: everything from aligning our roadmaps through optimizing and validating integrations, collaborating on best practices, and packaging joint solutions like the one we'll talk about today. And with that investment, we are an elite-level, top-level partner with Snowflake. We validate that our technology is Snowflake Ready across the entire product set, and we have hundreds of joint customers together. And with AWS, we've also partnered for a long time.
We're here at re:Invent. We've been here since the first re:Invent, the inaugural one, so that kind of gives you an idea of how long we've been working with AWS. We provide very comprehensive integration with AWS data analytics services, and we have several competencies, ranging from data analytics to migration and modernization. So that's our focus, and again, we're excited about working with Snowflake and AWS to bring solutions together to market. >> Well, I'm looking forward to unpacking the solutions specifically, and congratulations on the continued success of both your companies. We've been following them obviously for a very long time and seeing the platform evolve beyond just SaaS, and a lot more going on in cloud these days, kind of next generation emerging. You know, we're seeing a lot of macro trends that are going to be powering some of the things we're going to get into real quickly. But before we get into the solution, what are some of those power dynamics in the industry that you're seeing, and trends specifically, that are impacting your customers, that are taking us down this road of getting more out of the data, and specifically the SAP data, but in general trends and dynamics? What are you hearing from your customers? Why do they care? Why are they going down this road? Peter, we'll start with you. >> Yeah, I'll go ahead and start. Thanks. Yeah, I'd say we continue to see customers being very eager to transform their businesses, and they know they need to leverage technology and data to do so. They're also increasingly depending upon the cloud to bring that agility, that elasticity, the new functionality necessary to react in real-time to ever-evolving customer needs. You look at what's happened over the last three years, and boy, the macro environment, customers, it's all changing so fast. With our partnerships with AWS and Qlik, we've been able to bring to market innovative solutions like the one we're announcing today that spans all three companies.
It provides a holistic solution and an integrated solution for our customers. >> Itamar, let's get into it. You've been with theCUBE, you've seen the journey, you have your own journey, many, many years, you've seen the waves. What's going on now? I mean, what's the big wave? What's the dynamic powering this trend? >> Yeah, in a nutshell I'll call it, it's all about time. You know, it's time to value and it's about real-time data. I'll kind of talk about that a bit. So, I mean, you hear a lot about data being the new oil, but definitely, we see more and more customers seeing data as their critical enabler for innovation and digital transformation. They look for ways to monetize data. They look at data as the way in which they can innovate and bring different value to the customers. So we see customers wanting to use more data, so they can get more value from data. We definitely see them wanting to do it faster than before. And we definitely see them looking for agility and automation as ways to accelerate time to value, and also reduce overall costs. I did mention real-time data, so we definitely see more and more customers that want to be able to act and make decisions based on fresh data. So yesterday's data is just not good enough. >> John: Yeah. >> It's got to be down to the hour, down to the minutes, and sometimes even lower than that. And then I think we're also seeing customers look to their core business systems where they have a lot of value, like SAP, like the mainframe, and thinking, okay, our core data is there, how can we get more value from this data? So those are key things we see all the time with customers. >> Yeah, we did a big editorial segment this year on what we called data as code. Data as code is kind of a riff on infrastructure as code, and you start to see data proliferating into all aspects, fresh data.
It's not just where you store it, it's how you share it, it's how you turn it into an application, intrinsically involved in all aspects. This is the big theme this year, and that's driving all the conversations here at re:Invent. And I'm guaranteeing you, it's going to happen for another five to 10 years. It's not stopping. So I got to get into the solution. You guys mentioned SAP, and you've announced the solution by Qlik, Snowflake and AWS for your customers using SAP. Can you share more about this solution? What's unique about it? Why is it important and why now? Peter, Itamar, we'll start with you first. >> Let me jump in, this is really, I'll jump in because I'm excited. We're very excited about this solution, and it's also, by the way, a solution where we've seen proven customer success. So to your point, it's ready to scale; it's starting, and I think we're going to see a lot of companies doing this over the next few years. But before we jump to the solution, let me maybe take a few minutes just to clarify the need, why we're seeing customers jump to do this. So customers that use SAP, they use it to manage the core of their business. So think order processing and management, finance, inventory, supply chain, and so much more. So if you're running SAP in your company, that data creates a great opportunity for you to drive innovation and modernization. So what we see customers want to do, they want to do more with their data, and more means they want to take SAP with non-SAP data and use it together to drive new insights. They want to use real-time data to drive real-time analytics, which they couldn't do to date. They want to bring together descriptive with predictive analytics, so adding machine learning and AI to drive more value from the data. And naturally they want to do it faster, so find ways to iterate faster on their solutions, and have freedom with the data and agility.
And I think this is really where cloud data platforms like Snowflake and AWS, you know, bring that value to be able to drive that. Now, to do that you need to unlock the SAP data, which is also a lot of where Qlik comes in, because the typical challenges these customers run into are the complexity inherent in SAP data: tens of thousands of tables, proprietary formats, complex data models, licensing restrictions, and more. Then you have the performance issues they usually run into: how do we handle the throughput, the volumes, while maintaining low latency and impact? Where do we find the knowledge to really understand how to get all this done? So these are the things we looked at when we came together to create a solution and make it unique. So when you think about its uniqueness, because we put together a lot, I'll go through the three, four key things that come together to make this unique. First is about data delivery. How do you handle the SAP data delivery? So how do you get it from ECC, from HANA, from S/4HANA? How do you deliver the data and the metadata, and how does that integrate well into Snowflake? And what we've done is we've focused a lot on optimizing that process and the continuous ingestion, so the real-time ingestion of the data, in a way that works really well with the Snowflake data cloud. Second thing is we looked at SAP data transformation. So once the data arrives at Snowflake, how do we turn it into being analytics-ready? So that's where data transformation and data warehouse automation come in, and these are all elements of this solution. So creating derivative datasets, creating data marts, and all of that is done by, again, creating an optimized integration that pushes down SQL-based transformations so they can be processed inside Snowflake, leveraging its powerful engine.
And then the third element is bringing together data visualization and analytics that can take all the data that's now organized inside Snowflake, bring other data in, bring machine learning from SageMaker, and then you create a seamless integration to bring analytic applications to life. So these are all things we put together in the solution. And maybe the last point is we actually took the next step with this and created something we refer to as solution accelerators, which we're really, really keen about. Think about these as prepackaged templates for common business analytics needs like order to cash, finance, inventory. And we can dig into that a little more later, but this brings the next level of value to the customers, all built into this joint solution. >> Yeah, I want to get to the accelerators, but real quick, Peter, your reaction to the solution, what's unique about it? And obviously with Snowflake, we've been seeing the progression of data applications, more developers developing on top of Snowflake. Data as code kind of implies a developer ecosystem. This is kind of interesting. I mean, you've got the partnering with Qlik and AWS, it's kind of developer-like thinking, a real solution. What's unique about this SAP solution that's different than what customers can get anywhere else, or not? >> Yeah, well listen, I think first of all, you have to start with the idea of the solution. This is three companies coming together to build a holistic solution that is all about, you know, creating a great opportunity to turn SAP data into value, as Itamar was talking about. That's really what we're talking about here, and there's a lot of technology underneath it. I'll talk more about the Snowflake technology, what's involved here, and then cover some of the AWS pieces as well. But you know, we're focusing on getting that value out and accelerating time to value for our joint customers.
As Itamar was saying, you know, there's a lot of complexity with the SAP data and a lot of value there. How can we manage that in a prepackaged way, bringing together best-of-breed solutions with proven capabilities, and bringing this to market quickly for our joint customers? You know, Snowflake and AWS have been strong partners for a number of years now, and that's not only in how Snowflake runs on top of AWS, but also how we integrate with their complementary analytics and other products. And so, you know, we want to be able to leverage those in addition to what Qlik is bringing in terms of the data transformations and bringing data out of SAP, and the visualization as well. All very critical. And then we want to bring in the predictive analytics that AWS brings and what SageMaker brings. We'll talk about that a little bit later on. Some of the technologies that we're leveraging are some of our latest cutting-edge technologies that really make things easier for both our partners and our customers. For example, Qlik leverages Snowflake's recently released Snowpark for Python functionality to push down those data transformations from Qlik into Snowflake that Itamar's mentioning. And we also leverage Snowpark for integrations with Amazon SageMaker. There's a lot of great new technology that just makes this easy and compelling for customers. >> I think that's the big word, easy button, here for what may look like a complex kind of integration. Kind of turnkey, really, really compelling example of the modern era we're living in, as we always say in theCUBE. You mentioned accelerators, SAP accelerators. Can you give an example of how that works with the technology from the third-party providers to deliver this business value, Itamar, 'cause that was an interesting comment. What's the example? Give an example of this acceleration. >> Yes, certainly.
I think this is something that really makes this truly, truly unique in the industry, and again, a great opportunity for customers. So we kind of talked earlier about how there's a lot that needs to be done with SAP data to turn it into value. And these accelerators, as the name suggests, are designed to do just that, to kind of jumpstart the process and reduce the time and the risk involved in such a project. So again, these are pre-packaged templates. We basically took a lot of knowledge, a lot of configurations, best practices about how to get things done, and we put 'em together. So think about all the steps. It includes things like data extraction, so already knowing which tables, all the relevant tables that you need to get data from in the context of the solution you're looking for, say, like order to cash, and we'll get back to that one. How do you continuously deliver that data into Snowflake in an efficient manner, handling things like data type mappings, metadata naming conventions, and transformations? The data models you build, all the way to data mart definitions and all the transformations that the data needs to go through, moving through steps until it's fully analytics-ready. And then on top of that, even adding a library of comprehensive analytic dashboards and integrations with machine learning and AI, and putting all of that together in a way that's pre-integrated and tested to work with Snowflake and AWS. So this is where, again, you get this entire recipe that's ready. So take, for example, I think I mentioned order to cash. So again, all these things I just talked about, I mean, for those who are not familiar, order to cash is a critical business process for every organization, especially if you're in retail, manufacturing, enterprise. It's a big... This is where, you know, it starts with booking a sales order, followed by fulfilling the order, billing the customer, then managing the accounts receivable when the customer actually pays, right?
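As a toy illustration of those order-to-cash milestones, here is how an order's current stage and an open-receivables figure might be derived. The record layout is invented for the example, not the accelerator's actual schema:

```python
# Each order carries timestamps for the order-to-cash milestones
# described above (booked -> fulfilled -> billed -> paid);
# None means that step hasn't happened yet. Fields are hypothetical.
orders = [
    {"id": 1, "amount": 500.0, "fulfilled": "2022-11-01", "billed": "2022-11-02", "paid": "2022-11-20"},
    {"id": 2, "amount": 300.0, "fulfilled": "2022-11-05", "billed": "2022-11-06", "paid": None},
    {"id": 3, "amount": 150.0, "fulfilled": None,         "billed": None,         "paid": None},
]

def stage(order: dict) -> str:
    """Return the furthest milestone an order has reached."""
    for milestone in ("paid", "billed", "fulfilled"):
        if order[milestone]:
            return milestone
    return "booked"

# Accounts receivable: billed but not yet paid.
open_receivables = sum(o["amount"] for o in orders if o["billed"] and not o["paid"])

print([stage(o) for o in orders])  # ['paid', 'billed', 'booked']
print(open_receivables)            # 300.0
```

In the actual accelerator these measures would live as SQL views and dashboards over the continuously replicated SAP tables; the plain-Python version just makes the lifecycle logic concrete.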
So in this whole process, you've got sales order fulfillment and billing, which impact customer satisfaction, and you've got receivables and payments, which impact working capital and cash liquidity. So again, as a result, this order-to-cash process is the lifeblood for many businesses, and it's critical to optimize and understand. So the solution accelerator we created specifically for order to cash takes care of understanding all these aspects and the data that needs to come with it. So everything we outlined before to make the data available in Snowflake in a way that's really useful for downstream analytics, along with dashboards that are already common for that use case. So again, this enables customers to gain real-time visibility into their sales orders, fulfillment, and accounts receivable performance. That's what the accelerators are all about. And very similarly, we have another one, for example, for finance analytics, right? So this will optimize financial data reporting and help customers get insights into P&L, financial risk and stability. Or inventory analytics, which helps with, you know, improved planning and inventory management, utilization, increased efficiencies, you know, in the supply chain. So again, these accelerators really help customers get a jumpstart and move faster with their solutions. >> Peter, this is the easy button we just talked about, getting things going, you know, getting the ball rolling, getting some acceleration. A big part of this is the three companies coming together doing this. >> Yeah, and to build on what Itamar just said, the SAP data obviously has tremendous value. Those sales orders, distribution data, financial data. Bringing that into Snowflake makes it easily accessible, but it also enables it to be combined with other data, which is one of the things that Snowflake does so well. So you can get a full view of the end-to-end process and the business overall.
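That combination of SAP and non-SAP data is, at its core, a join once both datasets land in the warehouse. A small sketch, again with SQLite standing in for Snowflake; the SAP order table and the non-SAP CRM feed are invented for illustration:

```python
import sqlite3

# SQLite stands in for Snowflake's engine; sap_orders is a simplified
# SAP order extract, crm_accounts a hypothetical non-SAP CRM dataset.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sap_orders (order_id INT, account TEXT, net_value REAL);
    CREATE TABLE crm_accounts (account TEXT, segment TEXT);
    INSERT INTO sap_orders VALUES (1, 'ACME', 500.0), (2, 'Globex', 125.0), (3, 'ACME', 80.0);
    INSERT INTO crm_accounts VALUES ('ACME', 'enterprise'), ('Globex', 'mid-market');
""")

# One query spanning both sources: SAP revenue broken down by CRM segment,
# the kind of cross-system view neither system gives on its own.
rows = conn.execute("""
    SELECT c.segment, SUM(o.net_value) AS revenue
    FROM sap_orders o JOIN crm_accounts c ON c.account = o.account
    GROUP BY c.segment ORDER BY c.segment
""").fetchall()
print(rows)  # [('enterprise', 580.0), ('mid-market', 125.0)]
```

Once the SAP extract and the external feed share a warehouse, that full end-to-end view is a query rather than an integration project.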
You know, for example, I'll just take one example that may not come to mind right away: looking at the impact of weather conditions on supply chain logistics is relevant and material and of interest to our customers. How do you bring those different data sets together in an easy way? Bringing the data out of SAP, bringing maybe other data out of other systems through Qlik or through Snowflake, directly bringing data in from our data marketplace, and bringing that all together to make it work. You know, fundamentally, the organizational silos and data fragmentation that exist otherwise make it really difficult to drive modern analytics projects, and that in turn limits the value that our customers are getting from SAP data and these other data sets. We want to enable that and unleash it. >> Yeah, time to value. This is great stuff. Itamar, final question. You know, how are customers using this? What do you have? I'm sure you have customer examples already using the solution. Can you share what these examples look like, the use cases and the value? >> Oh yeah, absolutely. Thank you. Happy to. We have customers across different sectors. You see manufacturing, retail, energy, oil and gas, CPG. So again, customers in those sectors typically have SAP, so we have customers in all of them. A great example is Siemens Energy. Siemens Energy is a global provider of gas and power services. You know, over what, 28 billion, 30 billion in revenue, 90,000 employees. They operate globally in over 90 countries. So they've used SAP HANA as a core system, so it's running on premises in multiple locations around the world. And what they were looking for is a way to bring all this data together so they can innovate with it. And the thing is, as Peter mentioned earlier, not just the SAP data, but also bringing other data from other systems to bring it together for more value.
That includes finance data, logistics data, customer CRM data. So they bring data from over 20 different SAP systems, okay, with Qlik data integration feeding that into Snowflake in under 20 minutes, 24/7, 365 days a year. Okay, they get data from over 20,000 tables, you know, hundreds of millions of records going in daily. So it is a great example of the type of scale, scalability, agility, and speed that they can get to drive this kind of innovation. So that's a great example with Siemens. You know, another one that comes to mind is a global manufacturer. Very similar scenario, but you know, they're using it for real-time executive reporting. So it's more like visibility into the production data, as well as for financial analytics. So think about everything from audit to tax to innovative financial intelligence, because all the data's coming from SAP. >> It's a great time to be in the data business again. It keeps getting better and better. There's more data coming. It's not stopping, you know, it's growing so fast, it keeps coming. Every year, it's the same story, Peter. It's like it doesn't stop coming. As we wrap up here, let's just get customers some information on how to get started. I mean, obviously you're starting to see the accelerators, it's a great program there. What a great partnership between the two companies and AWS. How can customers get started to learn about the solution and take advantage of it, getting more out of their SAP data, Peter? >> Yeah, I think the first place to go to is talk to Snowflake, talk to AWS, talk to our account executives that are assigned to your account. Reach out to them and they will be able to educate you on the solution. We have it packaged up very nicely and it can be deployed very, very quickly. >> Well gentlemen, thank you so much for coming on. Appreciate the conversation. Great overview of the partnership between, you know, Snowflake and Qlik and AWS on a joint solution.
You know, getting more out of the SAP data. It's really kind of a key, key solution, bringing SAP data to life. Thanks for coming on theCUBE. Appreciate it. >> Thank you. >> Thank you John. >> Okay, this is theCUBE coverage here at RE:Invent 2022. I'm John Furrier, your host of theCUBE. Thanks for watching. (upbeat music)
Peter MacDonald & Itamar Ankorion | AWS re:Invent 2022
(upbeat music) >> Hello, welcome back to theCUBE's AWS re:Invent 2022 coverage. I'm John Furrier, host of theCUBE. Got a great lineup here: Itamar Ankorion, SVP of Technology Alliances at Qlik, and Peter MacDonald, Vice President of Cloud Partnerships and Business Development at Snowflake. We're going to talk about bringing SAP data to life with a joint Snowflake, Qlik and AWS solution. Gentlemen, thanks for coming on theCUBE. Really appreciate it. >> Thank you. >> Thank you, great meeting you, John. >> Just to get started, introduce yourselves to the audience, then we're going to jump into what you guys are doing together, a unique relationship here, a really compelling solution in cloud. Big story about applications and scale this year. Let's introduce yourselves. Peter, we'll start with you. >> Great. I'm Peter MacDonald. I am vice president of Cloud Partners and business development here at Snowflake. On the Cloud Partner side, that means I manage the AWS relationship along with Microsoft and Google Cloud: what we do together in terms of complementary products, GTM, co-selling, things like that. Importantly, working with other third parties like Qlik for joint solutions. On business development, it's negotiating custom commercial partnerships with large companies like Salesforce and Dell, and smaller companies, some from our venture portfolio. >> Thanks Peter, and hi John. It's great to be back here. So I'm Itamar Ankorion, and I'm the senior vice president responsible for technology alliances here at Qlik. With that, I own strategic alliances, including our key partners in the cloud, including Snowflake and AWS. I've been in the data and analytics enterprise software market for 20 plus years, and my main focus is product management, marketing, alliances, and business development. I joined Qlik about three and a half years ago through the acquisition of Attunity, which is now the foundation for Qlik data integration.
So again, we focus in my team on creating joint solution alignment with our key partners to provide more value to our customers. >> Great to have both you guys, senior executives in the industry, on theCUBE here talking about data. Obviously bringing SAP data to life is the theme of this segment, but this re:Invent, it's all about the data, the big data end-to-end story, a lot about data being intrinsic, as the CEO said on stage, in organizations, in all aspects. Take a minute to explain what you guys are doing from a company standpoint, Snowflake and Qlik and the solutions, why here at AWS. Peter, we'll start with you at Snowflake: what you guys do as a company, your mission, your focus. >> That was great, John. Yeah, so here at Snowflake, we focus on the data platform, and until recently, data platforms required expensive on-prem hardware appliances. And despite all that expense, customers had capacity constraints, expensive maintenance, and limited functionality that all impeded these organizations from reaching their goals. Snowflake is a cloud-native SaaS platform, and we've become so successful because we've addressed these pain points and have other new special features. For example, securely sharing data across both the organization and the value chain without copying the data, support for new data types such as JSON and semi-structured data, and also advances in database data governance. Snowflake integrates with complementary AWS services and other partner products, so we can enable holistic solutions that include, for example here, both Qlik and AWS SageMaker and Comprehend, and bring those to joint customers. Our customers want to convert data into insights, along with advanced analytics platforms and AI. That is how they make holistic data-driven solutions that will give them competitive advantage.
With Snowflake, our approach is to focus on customer solutions that leverage data from existing systems such as SAP, wherever they are, in the cloud or on-premise. And to do this, we leverage partners like Qlik and AWS to help customers transform their businesses. We provide customers with a premier data analytics platform as a result. Itamar, why don't you talk about Qlik a little bit, and then we can dive into the specific SAP solution here and some trends. >> Sounds great, Peter. So Qlik provides modern data integration and analytics software used by over 38,000 customers worldwide. Our focus is to help our customers turn data into value and help them close the gap from data all the way through to insight and action. We offer Qlik data integration and Qlik data analytics. Qlik data integration helps automate the data pipelines to deliver data to where they want to use it in real-time and make the data ready for analytics, and then Qlik data analytics is a robust platform for analytics and business intelligence that has been a leader in the Gartner Magic Quadrant for over 11 years now in the market. And both of these come together into what we call Qlik Cloud, which is our SaaS-based platform, providing a more seamless way to consume all these services and accelerate time to value with customer solutions. In terms of partnerships, both Snowflake and AWS are very strategic to us here at Qlik, so we have a very comprehensive investment to ensure a strong joint value proposition that we can bring to our mutual customers: everything from aligning our roadmaps through optimizing and validating integrations, collaborating on best practices, and packaging joint solutions like the one we'll talk about today. And with that investment, we are an elite-level, top-level partner with Snowflake. We validate that our technology is Snowflake-ready across the entire product set, and we have hundreds of joint customers together. And with AWS, we've also partnered for a long time.
We're here at re:Invent, and we've been here since the first, inaugural one, so it kind of gives you an idea of how long we've been working with AWS. We provide very comprehensive integration with AWS data analytics services, and we have several competencies ranging from data analytics to migration and modernization. So that's our focus, and again, we're excited about working with Snowflake and AWS to bring solutions together to market. >> Well, I'm looking forward to unpacking the solutions specifically, and congratulations on the continued success of both your companies. We've been following them obviously for a very long time, seeing the platforms evolve beyond just SaaS, and a lot more going on in cloud these days, kind of a next generation emerging. You know, we're seeing a lot of macro trends that are going to be powering some of the things we're going to get into real quickly. But before we get into the solution, what are some of those power dynamics and trends in the industry that you're seeing that are impacting your customers and taking us down this road of getting more out of the data, and specifically the SAP data, but in general, trends and dynamics? What are you hearing from your customers? Why do they care? Why are they going down this road? Peter, we'll start with you. >> Yeah, I'll go ahead and start. Thanks. Yeah, I'd say we continue to see customers being very eager to transform their businesses, and they know they need to leverage technology and data to do so. They're also increasingly depending upon the cloud to bring that agility, that elasticity, the new functionality necessary to react in real-time to ever-evolving customer needs. You look at what's happened over the last three years, and boy, the macro environment, customers, it's all changing so fast. With our partnerships with AWS and Qlik, we've been able to bring to market innovative solutions like the one we're announcing today that spans all three companies.
It provides a holistic solution and an integrated solution for our customers.
It's not just where you store it, it's how you share it, it's how you turn it into an application intrinsically involved in all aspects. This is the big theme this year and that's driving all the conversations here at RE:Invent. And I'm guaranteeing you, it's going to happen for another five and 10 years. It's not stopping. So I got to get into the solution, you guys mentioned SAP and you've announced the solution by Qlik, Snowflake and AWS for your customers using SAP. Can you share more about this solution? What's unique about it? Why is it important and why now? Peter, Itamar, we'll start with you first. >> Let me jump in, this is really, I'll jump because I'm excited. We're very excited about this solution and it's also a solution by the way and again, we've seen proven customer success with it. So to your point, it's ready to scale, it's starting, I think we're going to see a lot of companies doing this over the next few years. But before we jump to the solution, let me maybe take a few minutes just to clarify the need, why we're seeing, why we're seeing customers jump to do this. So customers that use SAP, they use it to manage the core of their business. So think order processing, management, finance, inventory, supply chain, and so much more. So if you're running SAP in your company, that data creates a great opportunity for you to drive innovation and modernization. So what we see customers want to do, they want to do more with their data and more means they want to take SAP with non-SAP data and use it together to drive new insights. They want to use real-time data to drive real-time analytics, which they couldn't do to date. They want to bring together descriptive with predictive analytics. So adding machine learning in AI to drive more value from the data. And naturally they want to do it faster. So find ways to iterate faster on their solutions, have freedom with the data and agility. 
And I think this is really where cloud data platforms like Snowflake and AWS, you know, bring that value to be able to drive that. Now to do that you need to unlock the SAP data, which is a lot of also where Qlik comes in because typical challenges these customers run into is the complexity, inherent in SAP data. Tens of thousands of tables, proprietary formats, complex data models, licensing restrictions, and more than, you have performance issues, they usually run into how do we handle the throughput, the volumes while maintaining lower latency and impact. Where do we find knowledge to really understand how to get all this done? So these are the things we've looked at when we came together to create a solution and make it unique. So when you think about its uniqueness, because we put together a lot, and I'll go through three, four key things that come together to make this unique. First is about data delivery. How do you have the SAP data delivery? So how do you get it from ECC, from HANA from S/4HANA, how do you deliver the data and the metadata and how that integration well into Snowflake. And what we've done is we've focused a lot on optimizing that process and the continuous ingestion, so the real-time ingestion of the data in a way that works really well with the Snowflake system, data cloud. Second thing is we looked at SAP data transformation, so once the data arrives at Snowflake, how do we turn it into being analytics ready? So that's where data transformation and data worth automation come in. And these are all elements of this solution. So creating derivative datasets, creating data marts, and all of that is done by again, creating an optimized integration that pushes down SQL based transformations, so they can be processed inside Snowflake, leveraging its powerful engine. 
And then the third element is bringing together data visualization analytics that can also take all the data now that in organizing inside Snowflake, bring other data in, bring machine learning from SageMaker, and then you go to create a seamless integration to bring analytic applications to life. So these are all things we put together in the solution. And maybe the last point is we actually took the next step with this and we created something we refer to as solution accelerators, which we're really, really keen about. Think about this as prepackaged templates for common business analytic needs like order to cash, finance, inventory. And we can either dig into that a little more later, but this gets the next level of value to the customers all built into this joint solution. >> Yeah, I want to get to the accelerators, but real quick, Peter, your reaction to the solution, what's unique about it? And obviously Snowflake, we've been seeing the progression data applications, more developers developing on top of Snowflake, data as code kind of implies developer ecosystem. This is kind of interesting. I mean, you got partnering with Qlik and AWS, it's kind of a developer-like thinking real solution. What's unique about this SAP solution that's, that's different than what customers can get anywhere else or not? >> Yeah, well listen, I think first of all, you have to start with the idea of the solution. This are three companies coming together to build a holistic solution that is all about, you know, creating a great opportunity to turn SAP data into value this is Itamar was talking about, that's really what we're talking about here and there's a lot of technology underneath it. I'll talk more about the Snowflake technology, what's involved here, and then cover some of the AWS pieces as well. But you know, we're focusing on getting that value out and accelerating time to value for our joint customers. 
As Itamar was saying, you know, there's a lot of complexity with the SAP data and a lot of value there. How can we manage that in a prepackaged way, bringing together best of breed solutions with proven capabilities and bringing this to market quickly for our joint customers. You know, Snowflake and AWS have been strong partners for a number of years now, and that's not only on how Snowflake runs on top of AWS, but also how we integrate with their complementary analytics and then all products. And so, you know, we want to be able to leverage those in addition to what Qlik is bringing in terms of the data transformations, bringing data out of SAP in the visualization as well. All very critical. And then we want to bring in the predictive analytics, AWS brings and what Sage brings. We'll talk about that a little bit later on. Some of the technologies that we're leveraging are some of our latest cutting edge technologies that really make things easier for both our partners and our customers. For example, Qlik leverages Snowflakes recently released Snowpark for Python functionality to push down those data transformations from clicking the Snowflake that Itamar's mentioning. And while we also leverage Snowpark for integrations with Amazon SageMaker, but there's a lot of great new technology that just makes this easy and compelling for customers. >> I think that's the big word, easy button here for what may look like a complex kind of integration, kind of turnkey, really, really compelling example of the modern era we're living in, as we always say in theCUBE. You mentioned accelerators, SAP accelerators. Can you give an example of how that works with the technology from the third party providers to deliver this business value Itamar, 'cause that was an interesting comment. What's the example? Give an example of this acceleration. >> Yes, certainly. 
I think this is something that really makes this truly, truly unique in the industry and, again, a great opportunity for customers. So we kind of talked earlier about how there's a lot of things that need to be done with SAP data to turn it to value. And these accelerators, as the name suggests, are designed to do just that, to kind of jumpstart the process and reduce the time and the risk involved in such a project. So again, these are pre-packaged templates. We basically took a lot of knowledge, a lot of configurations, best practices about how to get things done, and we put 'em together. So think about all the steps. It includes things like data extraction, so already knowing which tables, all the relevant tables that you need to get data from in the context of the solution you're looking for, say like order to cash, we'll get back to that one. How do you continuously deliver that data into Snowflake in an efficient manner, handling things like data type mappings, metadata, naming conventions and transformations, the data models you build, all the way to data mart definitions and all the transformations that the data needs to go through, moving through steps until it's fully analytics-ready. And then on top of that, even adding a library of comprehensive analytic dashboards and integrations with machine learning and AI, and putting all of that in a way that's pre-integrated and tested to work with Snowflake and AWS. So this is where, again, you get this entire recipe that's ready. So take for example, I think I mentioned order to cash. So again, all these things I just talked about, I mean, for those who are not familiar, I mean, order to cash is a critical business process for every organization. So especially if you're in retail, manufacturing, enterprise, it's a big... This is where, you know, starting with booking a sales order, followed by fulfilling the order, billing the customer, then managing the accounts receivable when the customer actually pays, right?
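As a rough mental model of what one of these prepackaged templates bundles, the pieces Itamar lists (source tables, type mappings, naming conventions, data marts, dashboards) could be captured in a structure like the one below. This is invented for illustration; it is not Qlik's actual template format, though the SAP table names (VBAK/VBAP order documents, VBRK billing, BSID open receivables) are the real ones for an order-to-cash flow.

```python
# A sketch of the contents of a "solution accelerator" template, following
# the steps described in the interview. Structure is illustrative only.

ORDER_TO_CASH_TEMPLATE = {
    "source_tables": ["VBAK", "VBAP", "VBRK", "BSID"],   # SAP order/billing/AR tables
    "type_mappings": {"CURR": "NUMBER(15,2)", "DATS": "DATE"},
    "naming_convention": "snake_case",
    "data_marts": ["sales_orders", "billing", "accounts_receivable"],
    "dashboards": ["order_cycle_time", "dso", "open_receivables"],
}

def validate_template(template):
    """Check that a template carries every stage of the recipe."""
    required = {"source_tables", "type_mappings", "data_marts", "dashboards"}
    return required.issubset(template)

print(validate_template(ORDER_TO_CASH_TEMPLATE))  # True
```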
So in this whole process, you've got sales order fulfillment and billing that impact customer satisfaction, and you've got receivables and payments that impact working capital and cash liquidity. So again, as a result, this order to cash process is a lifeblood for many businesses, and it's critical to optimize and understand. So the solution accelerator we created specifically for order to cash takes care of understanding all these aspects and the data that needs to come with it. So everything we outlined before to make the data available in Snowflake in a way that's really useful for downstream analytics, along with dashboards that are already common for that, for that use case. So again, this enables customers to gain real-time visibility into their sales orders, fulfillment, accounts receivable performance. That's what the accelerators are all about. And very similarly, we have another one, for example, for finance analytics, right? So this will optimize financial data reporting, helps customers get insights into P&L, financial risk and stability, or inventory analytics that helps with, you know, improved planning and inventory management, utilization, increased efficiencies, you know, in the supply chain. So again, these accelerators really help customers get a jumpstart and move faster with their solutions. >> Peter, this is the easy button we just talked about, getting things going, you know, get the ball rolling, get some acceleration. A big part of this is the three companies coming together doing this. >> Yeah, and to build on what Itamar just said, the SAP data obviously has tremendous value. Those sales orders, distribution data, financial data, bringing that into Snowflake makes it easily accessible, but it also enables it to be combined with other data too, which is one of the things that Snowflake does so well. So you can get a full view of the end-to-end process and the business overall.
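One concrete receivables metric an order-to-cash dashboard would surface, tying directly to the working-capital point above, is Days Sales Outstanding. A minimal sketch with invented sample figures:

```python
# Days Sales Outstanding (DSO): how many days of sales are tied up in
# receivables. The classic formula; the sample figures are made up.

def days_sales_outstanding(accounts_receivable, credit_sales, period_days=90):
    """AR divided by credit sales over the period, times days in the period."""
    return accounts_receivable / credit_sales * period_days

dso = days_sales_outstanding(accounts_receivable=1_200_000,
                             credit_sales=4_500_000, period_days=90)
print(round(dso, 1))  # 24.0
```

A falling DSO means customers are paying faster, which is exactly the cash-liquidity lever described above.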
You know, for example, I'll just take one, you know, one example that, that may not come to mind right away, but you know, looking at the impact of weather conditions on supply chain logistics is relevant and material and of interest to our customers. How do you bring those different data sets together in an easy way, bringing the data out of SAP, bringing maybe other data out of other systems through Qlik or through Snowflake, directly bringing data in from our data marketplace, and bringing that all together to make it work? You know, fundamentally, the organizational silos and the data fragmentation that exist otherwise make it really difficult to drive modern analytics projects. And that in turn limits the value that our customers are getting from SAP data and these other data sets. We want to enable that and unleash it. >> Yeah, time to value. This is great stuff. Itamar, final question: you know, which customers are using this? I'm sure you have customer examples already using the solution. Can you share kind of what these examples look like in the use cases and the value? >> Oh yeah, absolutely. Thank you. Happy to. We have customers across different, different sectors. You see manufacturing, retail, energy, oil and gas, CPG. So again, customers in those segments, those sectors, typically have SAP. So we have customers in all of them. A great example is like Siemens Energy. Siemens Energy is a global provider of gas and power services. You know, over what, 28 billion, 30 billion in revenue. 90,000 employees. They operate globally in over 90 countries. So they've used SAP HANA as a core system, so it's running on premises, multiple locations around the world. And what they were looking for is a way to bring all this data together so they can innovate with it. And the thing is, as Peter mentioned earlier, not just the SAP data, but also bringing other data from other systems to bring it together for more value.
That includes finance data, logistics data, customer CRM data. So they bring data from over 20 different SAP systems, okay, with Qlik data integration, feeding that into Snowflake in under 20 minutes, 24/7, 365, you know, days a year. Okay, they get data from over 20,000 tables, you know, hundreds of millions of records daily going in. So it is a great example of the type of scale, agility and speed that they can get to drive this kind of innovation. So that's a great example with Siemens. You know, another one that comes to mind is a global manufacturer. Very similar scenario, but you know, they're using it for real-time executive reporting. So it's more like visibility into the production data as well as for financial analytics. So think about everything from audit to tax to innovative financial intelligence, because all the data's coming from SAP. >> It's a great time to be in the data business again. It keeps getting better and better. There's more data coming. It's not stopping, you know, it's growing so fast, it keeps coming. Every year, it's the same story, Peter. It's like, it doesn't stop coming. As we wrap up here, let's just get customers some information on how to get started. I mean, obviously you're starting to see the accelerators, it's a great program there. What a great partnership between the two companies and AWS. How can customers get started to learn about the solution and take advantage of it, getting more out of their SAP data, Peter? >> Yeah, I think the first place to go to is talk to Snowflake, talk to AWS, talk to our account executives that are assigned to your account. Reach out to them and they will be able to educate you on the solution. We have it packaged up very nicely, and it can be deployed very, very quickly. >> Well gentlemen, thank you so much for coming on. Appreciate the conversation. Great overview of the partnership between, you know, Snowflake and Qlik and AWS on a joint solution.
You know, getting more out of the SAP data. It's really kind of a key, key solution, bringing SAP data to life. Thanks for coming on theCUBE. Appreciate it. >> Thank you. >> Thank you John. >> Okay, this is theCUBE coverage here at re:Invent 2022. I'm John Furrier, your host of theCUBE. Thanks for watching. (upbeat music)
Breaking Analysis: VMware Explore 2022 will mark the start of a Supercloud journey
>> From the Cube studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While the precise direction of VMware's future is unknown, given the planned Broadcom acquisition, one thing is clear. The topic of what Broadcom plans will not be the main focus of the agenda at the upcoming VMware Explore event next week in San Francisco. We believe that despite any uncertainty, VMware will lay out for its customers what it sees as its future. And that future is multi-cloud or cross-cloud services, what we call Supercloud. Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we drill into the latest survey data on VMware from ETR. And we'll share with you the next iteration of the Supercloud definition based on feedback from dozens of contributors. And we'll give you our take on what to expect next week at VMware Explore 2022. Well, VMware is maturing. You can see it in the numbers. VMware had a solid quarter, which was announced just this week, beating earnings and growing the top line by 6%. But it's clear from its financials and the ETR data that we're showing here that VMware's halcyon glory days are behind it. This chart shows the spending profile from ETR's July survey of nearly 1,500 IT buyers and CIOs. The survey included 722 VMware customers, with the green bars showing elevated spending momentum, i.e., growth, either new or growing at more than 6%. And the red bars show lower spending, either down 6% or worse, or defections. The gray bars, that's the flat spending crowd, and it really tells a story. Look, nobody's throwing away their VMware platforms. They're just not investing as rapidly as in previous years. The blue line shows net score, or spending momentum, and subtracts the reds from the greens. The yellow line shows market penetration or pervasiveness in the survey. So the data is pretty clear. It's steady, but it's not remarkable.
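For readers unfamiliar with the ETR methodology described above, net score is simple arithmetic over the survey buckets: the share of elevated spenders minus the share of lower spenders. A small sketch with illustrative shares (not actual ETR data):

```python
# ETR-style net score: (new adoption + growth > 6%) minus
# (decline > 6% + defections). Input shares are invented for illustration.

def net_score(adoption, growing, flat, declining, defecting):
    """All inputs are shares of respondents summing to 1.0."""
    total = adoption + growing + flat + declining + defecting
    assert abs(total - 1.0) < 1e-9, "shares must sum to 1"
    return (adoption + growing) - (declining + defecting)

score = net_score(adoption=0.05, growing=0.30, flat=0.50,
                  declining=0.10, defecting=0.05)
print(f"{score:.0%}")  # 20%
```

Note that a large flat-spending bucket, like VMware's, drags net score toward zero even when very few customers defect, which is exactly the "steady but not remarkable" pattern described.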
Now, the timing of the acquisition, quite rightly, is quite good, I would say. Now, this next chart shows the net score and pervasiveness juxtaposed on an XY graph and breaks down the VMware portfolio in those dimensions, the product portfolio. And you can see the dominance of respondents citing VMware as the platform. They might not know exactly which services they use, but they just respond VMware. That's on the X axis. You can see it way to the right. And the spending momentum, or the net score, is on the Y axis. That red dotted line at 40%, that indicates elevated levels, and only VMware Cloud on AWS is above that line. Notably, Tanzu has jumped up significantly from previous quarters, with the rest of the portfolio showing steady, as you would expect from a maturing platform. Only Carbon Black is hovering in the red zone, kind of ironic given the name. We believe that VMware is going to be a major player in cross-cloud services, what we refer to as Supercloud. For months, we've been refining the concept and the definition. At Supercloud '22, we had discussions with more than 30 technology and business experts, and we've gathered input from many more. Based on that feedback, here's the definition we've landed on. It's somewhat refined from our earlier definition that we published a couple weeks ago. Supercloud is an emerging computing architecture that comprises a set of services abstracted from the underlying primitives of hyperscale clouds, e.g. compute, storage, networking, security, and other native resources, to create a global system spanning more than one cloud. Supercloud has three essential properties, three deployment models, and three service models. So what are those essential elements, those properties? We've simplified the picture from our last report. We show them here. I'll review them briefly. We're not going to go super in depth here because we've covered this topic a lot. But Supercloud, it runs on more than one cloud.
It creates that common or identical experience across clouds. It contains a necessary capability that we call a superPaaS that acts as a cloud interpreter, and it has metadata intelligence to optimize for a specific purpose. We'll publish this definition in detail. So again, we're not going to spend a ton of time here today. Now, we've identified three deployment models for Supercloud. The first is a single instantiation, where a control plane runs on one cloud but supports interactions with multiple other clouds. An example we use is a Kubernetes cluster management service that runs on one cloud but can deploy and manage clusters on other clouds. The second model is a multi-cloud, multi-region instantiation, where a full stack of services is instantiated on multiple clouds and multiple cloud regions with a common interface across them. We've used Cohesity as one example of this. And then a single global instance that spans multiple cloud providers. That's our Snowflake example. Again, we'll publish this in detail. So we're not going to spend a ton of time here today. Finally, the service models. The feedback we've had is IaaS, PaaS, and SaaS work fine to describe the service models for Supercloud. NetApp's Cloud Volumes is a good example in IaaS. VMware Cloud Foundation and what we expect at VMware Explore is a good PaaS example. And SAP HANA Cloud is a good example of SaaS running as a Supercloud service. That's the SAP HANA multi-cloud. So what is it that we expect from VMware Explore 2022? Well, along with what will be an exciting and speculation-filled gathering of the VMware community at the Moscone Center, we believe VMware will lay out its future architectural direction. And we expect it will fit the Supercloud definition that we just described. We think VMware will show its hand on a set of cross-cloud services and will promise a common experience for users and developers alike.
As we talked about at Supercloud '22, VMware kind of wants to have its cake, eat it too, and lose weight. And by that, we mean that it will not only abstract the underlying primitives of each of the individual clouds, but if developers want access to them, it will allow that and actually facilitate that. Now, we don't expect VMware to use the term Supercloud, but it will be a cross-cloud, multi-cloud services model that they put forth, we think, at VMware Explore. With IaaS comprising compute, storage, and networking, a very strong emphasis, we believe, on security, of course, governance, and a comprehensive set of data protection services. Now, very importantly, we believe Tanzu will play a leading role in any announcements this coming week, as a purpose-built PaaS layer, specifically designed to create a common experience across clouds for data and application services. This, we believe, will be VMware's most significant offering to date in cross-cloud services. And it will position VMware to be a leader in what we call Supercloud. Now, while it remains to be seen what Broadcom exactly intends to do with VMware, we've speculated, others have speculated. We think this Supercloud is a substantial market opportunity generally and for VMware specifically. Look, if you don't own a public cloud, and very few companies do, in the tech business, we believe you better be supporting the build-out of superclouds or building a supercloud yourself on top of hyperscale infrastructure. And we believe that as cloud matures, hyperscalers will increasingly see cross-cloud services as an opportunity. We asked David Floyer to take a stab at a market model for Supercloud. He's really good at these types of things. What he did is he took the known players in cloud and estimated their IaaS and PaaS cloud services, their total revenue, and then took a percentage. So this is a superset of just the public cloud and the hyperscalers.
And then what he did is he took a percentage to fit the Supercloud definition, as we just shared above. He then added another 20% on top to cover the long tail of Other. Other, over time, is most likely going to grow to, let's say, 30%. That's kind of how these markets work. Okay, so this is obviously an estimate, but it's an informed estimate by an individual who has done this many, many times and is pretty well respected in these types of forecasts, these long-term forecasts. Now, by the definition we just shared, Supercloud revenue was estimated at about $3 billion in 2022 worldwide, growing to nearly $80 billion by 2030. Now remember, there's not one Supercloud market. It comprises a bunch of purpose-built superclouds that solve a specific problem. But the common attribute is it's built on top of hyperscale infrastructure. So overall cloud services, including Supercloud, peak by the end of the decade. But Supercloud continues to grow and will take a higher percentage of the cloud market. The reasoning here is that the market will change, and compute will increasingly become distributed and embedded into edge devices, such as automobiles and robots and factory equipment, et cetera, and not necessarily be a discrete... I mean, it still will be, of course, but it's not going to be as much of a discrete component that is consumed via services like EC2; that will mature. And this will be a key shift to watch in spending dynamics and, really importantly, computing economics, the things we've talked about around Arm and edge and AI inferencing and new low-cost computing architectures at the edge. We're talking not the near edge, like Lowe's and Home Depot, we're talking far edge and embedded devices. Now, whether this becomes a seamless part of Supercloud remains to be seen.
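The mechanics of the model described above (a share-of-revenue filter plus a 20% long-tail uplift, and the implied growth of the $3 billion-to-$80 billion estimate) can be reproduced back-of-envelope. The $250 billion base and 1% share below are placeholders chosen only to hit the episode's 2022 number; they are not David Floyer's actual inputs.

```python
# Back-of-envelope reconstruction of the market-model mechanics:
# filter hyperscaler IaaS/PaaS revenue by a Supercloud share, add ~20%
# for the long tail of "Other", and compute the implied CAGR of the
# $3B (2022) -> $80B (2030) estimate. Inputs are placeholders.

def supercloud_estimate(iaas_paas_revenue, supercloud_share, long_tail_uplift=0.20):
    """Apply the share-of-revenue filter, then the long-tail uplift."""
    base = iaas_paas_revenue * supercloud_share
    return base * (1 + long_tail_uplift)

def cagr(start_value, end_value, years):
    """Compound annual growth rate."""
    return (end_value / start_value) ** (1 / years) - 1

est_2022 = supercloud_estimate(iaas_paas_revenue=250e9, supercloud_share=0.01)
print(f"2022 estimate: ${est_2022 / 1e9:.1f}B")               # $3.0B
print(f"implied CAGR: {cagr(3e9, 80e9, 2030 - 2022):.1%}")    # 50.7%
```

A roughly 50% compound annual growth rate is what it takes to get from $3 billion to $80 billion in eight years, which gives a sense of how aggressive the forecast is.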
Look, that's how we see it, the current and the future state of Supercloud, and we're committed to keeping the discussion going with an inclusive model that gathers input from all parts of the industry. Okay, that's it for today. Thanks to Alex Morrison, who's on production, and he also manages the podcast. Ken Schiffman, as well, is on production in our Boston office. Kristin Martin and Cheryl Knight, they help us get the word out on social media and in our newsletters. And Rob Hof is our editor in chief over at SiliconANGLE and does some helpful editing. Thank you, all. Remember, these episodes are all available as podcasts, wherever you listen. All you got to do is search Breaking Analysis Podcast. I publish each week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com or DM me @Dvellante, or comment on our LinkedIn posts. Please do check out etr.ai. They've got some great enterprise survey research. So please go there and poke around, and if you need any assistance, let them know. This is Dave Vellante for the Cube Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (lively music)
single instantiation | QUANTITY | 0.9+ |
Video exclusive: Oracle adds more wood to the MySQL HeatWave fire
(upbeat music) >> When Oracle acquired Sun in 2009, it paid $5.6 billion net of Sun's cash and debt. Now, I argued at the time that Oracle got one of the best deals in the history of enterprise tech, and I got a lot of grief for saying that because Sun had a declining business, it was losing money, and its revenue was under serious pressure as it tried to hang on for dear life. But Safra Catz understood that Oracle could pare Sun's lower-profit and lagging businesses, like its low-end x86 product lines, and even if Sun's revenue was cut in half, because Oracle has such a high revenue multiple as a software company, it could almost instantly generate $25 to $30 billion in shareholder value on paper. In addition, it was a catalyst for Oracle to initiate its highly differentiated engineering systems business, and was actually the precursor to Oracle's cloud. Oracle saw that it could capture high-margin dollars that used to go to partners like HP, its original Exadata partner, and get paid for the full stack across infrastructure, middleware, database, and application software, when it eventually got really serious about cloud. Now, there was also a major technology angle to this story. Remember Sun's tagline, "the network is the computer"? Well, they should have just called it cloud. Through the Sun acquisition, Oracle also got a couple of key technologies: Java, the number one programming language in the world, and MySQL, a key ingredient of the LAMP stack, that's Linux, Apache, MySQL and PHP, Perl or Python, on which the internet is basically built, and which is used by many cloud services like Facebook, Twitter, WordPress, Flickr, Amazon Aurora, and many other examples, including, by the way, MariaDB, which is a fork of MySQL created by MySQL's creator, basically in protest of Oracle's acquisition; the drama is Oscar-worthy. It gets even better.
In 2020, Oracle began introducing a new version of MySQL called MySQL HeatWave, and since late 2020 it's been in sort of a super cycle, rolling out three new releases in less than a year and a half in an attempt to expand its TAM and compete in new markets. Now, we covered the release of MySQL Autopilot, which uses machine learning to automate management functions. And we also covered the benchmarketing that Oracle produced against Snowflake, AWS, Azure, and Google. And Oracle's at it again with HeatWave, adding machine learning into its database capabilities, along with previously available integrations of OLAP and OLTP. This, of course, is in line with Oracle's converged database philosophy, which, as we've reported, is different from other cloud database providers, most notably Amazon, which takes the right-tool-for-the-right-job approach and chooses database specialization over a one-size-fits-all strategy. Now, we've asked Oracle to come on theCUBE and explain these moves, and I'm pleased to welcome back Nipun Agarwal, who's the senior vice president for MySQL Database and HeatWave at Oracle. And today, in this video exclusive, we'll discuss machine learning, other new capabilities around elasticity and compression, and then any benchmark data that Nipun wants to share. Nipun's been a leading advocate of the HeatWave program. He's led engineering in that team for over 10 years, and he has over 185 patents in database technologies. Welcome back to the show, Nipun. Great to see you again. Thanks for coming on. >> Thank you, Dave. Very happy to be back. >> Yeah, now for those who may not have kept up with the news, maybe to kick things off you could give us an overview of what MySQL HeatWave actually is so that we're all on the same page. >> Sure, Dave. MySQL HeatWave is a fully managed MySQL database service from Oracle, and it has a built-in query accelerator called HeatWave, and that's the part which is unique.
So with MySQL HeatWave, customers of MySQL get a single database which they can use for transactional processing, for analytics, and for mixed workloads, because traditionally MySQL has been designed and optimized for transaction processing. So in the past, when customers had to run analytics with the MySQL-based service, they would need to move the data out of MySQL into some other database for running analytics. So they would end up with two different databases, and it would take some time to move the data out of MySQL into this other system. With MySQL HeatWave, we have solved this problem, and customers now have a single MySQL database for all their applications, and they can get the good performance of analytics without any changes to their MySQL application. >> Now, it's no secret that a lot of times, you know, queries are not, you know, most efficiently written, and critics of MySQL HeatWave will claim that this product is very memory and cluster intensive, it has a heavy footprint that adds to cost. How do you answer that, Nipun? >> Right, so for offering any database service in the cloud there are two dimensions, performance and cost, and we have been very cognizant of both of them. So it is indeed the case that HeatWave is an in-memory query accelerator, which is why we get very good performance, but it is also the case that we have optimized HeatWave for commodity cloud services. So for instance, we use the least expensive compute. We use the least expensive storage. So what I would suggest is, for the customers who would like to know what is the price-performance advantage of HeatWave: compared to any database we have benchmarked against, Redshift, Snowflake, Google BigQuery, Azure Synapse, HeatWave is significantly faster and significantly lower price on a multitude of workloads. So not only is it an in-memory database and optimized for that, but we have also optimized it for commodity cloud services, which makes it much lower price than the competition.
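The "no changes to the application" point rests on HeatWave acting as a secondary engine behind stock MySQL. The sketch below shows the workflow as we understand it from Oracle's documentation; treat the exact statements as assumptions to verify against the MySQL HeatWave docs, and the `orders` table is hypothetical.

```python
# Sketch of the HeatWave offload workflow: attach a table to the RAPID
# secondary engine, load it into the cluster, and the same analytic SQL
# the application already issues is accelerated automatically.
# Statements follow our reading of Oracle's docs; verify before use.

heatwave_setup = [
    "ALTER TABLE orders SECONDARY_ENGINE = RAPID;",  # mark table for HeatWave
    "ALTER TABLE orders SECONDARY_LOAD;",            # load data into the cluster
    # Unchanged application SQL, now eligible for offload:
    "SELECT o_status, SUM(o_totalprice) FROM orders GROUP BY o_status;",
]
for stmt in heatwave_setup:
    print(stmt)
```

The key design point is that the optimizer, not the application, decides when a query is offloaded, which is why no application changes are needed.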
>> Well, at the end of the day, it's customers that sort of decide what the truth is. So to date, what's been the customer reaction? Are they moving from other clouds, from on-prem environments, both? Why, you know, what are you seeing? >> Right, so we are definitely seeing a whole bunch of migrations of customers who are running MySQL on-premise to the cloud, to MySQL HeatWave. That's definitely happening. What is also very interesting is we are seeing that a very large percentage of customers, more than half the customers who are coming to MySQL HeatWave, are migrating from other clouds. We have a lot of migrations coming from AWS Aurora, migrations from Redshift, migrations from RDS MySQL, Teradata, SAP HANA, right. So we are seeing migrations from a whole bunch of other databases and other cloud services to MySQL HeatWave. And the main reasons we are told why customers are migrating from other databases to MySQL HeatWave are lower cost, better performance, and no change to their application, because many of these services, like AWS Aurora, are compatible with MySQL. So when customers try MySQL HeatWave, not only do they get better performance at a lower cost, but they find that they can migrate their application without any changes, and that's a big incentive for them. >> Great, thank you, Nipun. So can you give us some names? Are there some real-world examples of these customers that have migrated to MySQL HeatWave that you can share? >> Oh, absolutely, I'll give you a few names. Stutor.com, this is an educational SaaS provider based out of Brazil. They were using Google BigQuery, and when they migrated to MySQL HeatWave, they found a 300X, right, 300 times improvement in performance, and it lowered their cost by 85 (audio cut out). Another example is Neovera.
They offer cybersecurity solutions, and they were running their application on an on-premise version of MySQL. When they migrated to MySQL HeatWave, their application improved in performance by 300 times and their cost reduced by 80%, right. So by going from on-premise to MySQL HeatWave, they reduced the cost by 80%, improved performance by 300 times. We are Glass, another customer based out of Brazil. They were running on AWS EC2, and when they migrated, within hours they found that there was a significant improvement, like, you know, over 5X improvement in database performance, and they were able to accommodate a very large virtual event, which had more than a million visitors. Another example, Genius Sonority. They are a game designer in Japan, and when they moved to MySQL HeatWave, they found a 90 times improvement in performance. And there are many, many more, like a lot of migrations, again, from, like, you know, Aurora, Redshift and many other databases as well. And consistently what we hear is (audio cut out) getting much better performance at a much lower cost without any change to their application. >> Great, thank you. You know, when I ask that question, a lot of times I get, "Well, I can't name the customer name," but I got to give Oracle credit, a lot of times you guys have them at your fingertips. So you're not the only one, but it's somewhat rare in this industry. So, okay, so you got some good feedback from those customers that did migrate to MySQL HeatWave. What else did they tell you that they wanted? Did they, you know, kind of share a wishlist and some of the white space that you guys should be working on? What'd they tell you? >> Right, so as customers are moving more data into MySQL HeatWave, as they're consolidating more data into MySQL HeatWave, customers want to run other kinds of processing with this data.
A very popular one is (audio cut out) So we have had multiple customers who told us that they wanted to run machine learning with data which is stored in MySQL HeatWave, and for that they have to extract the data out of MySQL (audio cut out). So that was the first feedback we got. Second thing is MySQL HeatWave is a highly scalable system. What that means is that as you add more nodes to a HeatWave cluster, the performance of the system improves almost linearly. But currently customers need to perform some manual steps to add nodes to a cluster or to reduce the cluster size. So that was other feedback we got, that people wanted this thing to be automated. Third thing is that we have shown, in previous results, that HeatWave is significantly faster and significantly lower priced compared to competitive services. So we got feedback from customers asking can we trade off some performance to get even lower cost, and that's what we have looked at. And then finally, we have some results on various data sizes with TPC-H. Customers wanted to see if we can offer some more data points as to how HeatWave performs on other kinds of workloads. And that's what we've been working on for the past several months. >> Okay, Nipun, we're going to get into some of that, but, so how did you go about addressing these requirements? >> Right, so the first thing is we are announcing support for in-database machine learning, meaning that customers who have their data inside MySQL HeatWave can now run training, inference, and prediction all inside the database without the data or the model ever having to leave the database. So that's how we address the first one. Second thing is we are offering support for real time elasticity, meaning that customers can scale up or scale down to any number of nodes. This requires no manual intervention on the part of the user, and for the entire duration of the resize operation, the system is fully available.
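The resize behavior described here can be pictured with a toy rebalancing sketch. This is purely illustrative (the interview doesn't describe HeatWave's internal data placement), but it shows the two properties being claimed: scaling to any target node count, and an evenly balanced layout afterwards.

```python
def rebalance(shard_count: int, node_count: int) -> list[int]:
    # Spread shard_count data shards as evenly as possible over node_count nodes.
    base, extra = divmod(shard_count, node_count)
    # The first `extra` nodes each hold one extra shard.
    return [base + 1 if i < extra else base for i in range(node_count)]

def resize(shard_count: int, new_node_count: int) -> list[int]:
    # Resize the cluster to any node count and return the new shard layout.
    return rebalance(shard_count, new_node_count)

layout = resize(100, 7)  # scale to 7 nodes -- any count, not just powers of two
print(layout)            # [15, 15, 14, 14, 14, 14, 14]
assert sum(layout) == 100               # no data lost across the resize
assert max(layout) - min(layout) <= 1   # fully balanced afterwards
```

A real system would additionally stream shards between nodes while staying online, which is the availability property the interview emphasizes.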
Third, in terms of cost, we have doubled the amount of data that can be processed per node. So if you look at a HeatWave cluster, the size of the cluster determines the cost. So by doubling the amount of data that can be processed per node, we have effectively reduced the cluster size which is required for running a given workload, which means it reduces the cost to the customer by half. And finally, we have also run the TPC-DS workload on HeatWave and compared it with other vendors. So now customers can have another data point in terms of the performance and the cost comparison of HeatWave with other services. >> All right, and I promise, I'm going to ask you about the benchmarks, but I want to come back and drill into these a bit. How is HeatWave ML different from competitive offerings? Take, for instance, Redshift ML, for example. >> Sure, okay, so this is a good comparison. Let's start with, let's say, Redshift ML. There are some systems, like, you know, Snowflake, which don't even offer any processing of machine learning inside the database, and they expect customers to write a whole bunch of code, in say Python or Java, to do machine learning. Redshift ML does have integration with SQL. That's a good start. However, when customers of Redshift need to run machine learning and they invoke Redshift ML, it makes a call to another service, SageMaker, right, so the data needs to be exported to a different service. The model is generated, and the model is also outside Redshift. With HeatWave ML, the data always resides inside the MySQL database service. We are able to generate models. We are able to train the models, run inference, run explanations, all inside the MySQL HeatWave service. So the data, or the model, never have to leave the database, which means that both the data and the models can now be secured by the same access control mechanisms as the rest of the data. So that's the first part, that there is no need for any ETL.
The second aspect is the automation. Training is a very important part of machine learning, right, and it impacts the quality of the predictions and such. So traditionally, customers would employ data scientists to influence the training process so that it's done right. And even in the case of Redshift ML, the users are expected to provide a lot of parameters to the training process. So the second thing which we have worked on with HeatWave ML is that it is fully automated. There is absolutely no user intervention required for training. Third is in terms of performance. So one of the things we are very, very sensitive to is performance, because performance determines the eventual cost to the customer. So again, in some benchmarks which we have published, and these are all available on GitHub, we are showing how HeatWave ML is 25 times faster than Redshift ML, and here's the kicker, at 1% of the cost. So four benefits: the data all remains secure inside the database service, it's fully automated, much faster, much lower cost than the competition. >> All right, thank you Nipun. Now, so there's a lot of talk these days about explainability and AI. You know, the system can very accurately tell you that it's a cat, you know, or for you Silicon Valley fans, it's a hot dog or not a hot dog, but they can't tell you how the system got there. So what is explainability, and why should people care about it? >> Right, so when we were talking to customers about what they would like from a machine learning based solution, one of the feedbacks we got is that enterprises are a little slow or averse to adopting machine learning, because it seems to be, you know, like magic, right? And enterprises have the obligation to be able to explain, or to provide an answer to their customers as to why the database made a certain choice. With a rule based solution it's simple, it's a rule based thing, and you know what the logic was.
So the reason explanations are important is because customers want to know why the system made a certain prediction. One of the important characteristics of HeatWave ML is that any model which is generated by HeatWave ML can be explained, and we can do both global explanations, or model explanations, as well as local explanations. So when the system makes a specific prediction using HeatWave ML, the user can find out why the system made such a prediction. So for instance, if someone is being denied a loan, the user can figure out what were the attributes, what were the features which led to that decision. So this ensures, like, you know, fairness, and many times there is also a need for regulatory compliance, where users have a right to know. So we feel that explanations are very important for enterprise workloads, and that's why every model which is generated by HeatWave ML can be explained. >> Now I got to give Snowflake some props, you know, this whole idea of separating compute from storage, but also bringing the database to the cloud and driving elasticity. So that's been a key enabler and has solved a lot of problems, in particular the snake swallowing the basketball problem, as I often say. But what about elasticity, and elasticity in real time? How is your version, and there's a lot of companies chasing this, how is your approach to an elastic cloud database service different from what others are promoting these days? >> Right, so a couple of characteristics. One is that we have now fully automated the process of elasticity, meaning that if a user wants to scale up or scale down, the only thing they need to specify is the eventual size of the cluster and the system completely takes care of it transparently. But then there are a few characteristics which are very unique. So for instance, we can scale up or scale down to any number of nodes.
Whereas in the case of Snowflake, the number of nodes someone can scale up or scale down to are powers of two. So if a user needs 70 CPUs, well, their choice is either 64 or 128. So by providing this flexibility with MySQL HeatWave, customers get a custom fit. So they can get a cluster which is optimized for their specific workload. So that's the first thing, flexibility of scaling up or down to any number of nodes. The second thing is that after the operation is completed, the system is fully balanced, meaning the data across the various nodes is fully balanced. That is not the case with many solutions. So for instance, in the case of Redshift, after the resize operation is done, the user is expected to manually balance the data, which can be very cumbersome. And the third aspect is that while the resize operation is going on, the HeatWave cluster is completely available for queries, for DMLs, for loading more data. That is, again, not the case with Redshift. With Redshift, suppose the operation takes 10 to 15 minutes, during that window of time, the system is not available for writes, and for a big chunk of that time, the system is not even available for queries, which is very limiting. So the advantages we have are: fully flexible, the system is in a balanced state, and the system is completely available for the entire duration of the operation. >> Yeah, I guess you got that hypergranularity, which, you know, sometimes they say, "Well, t-shirt sizes are good enough," but then I think of myself, some t-shirts fit me better than others, so. Okay, I saw on the announcement that you have this lower price point for customers. How did you actually achieve this? Could you give us some details around that please? >> Sure, so there are two things we are announcing with this service which lower the cost for the customers. The first thing is that we have doubled the amount of data that can be processed by a HeatWave node.
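The cluster-size arithmetic this implies can be sketched directly. The per-node capacities below are hypothetical numbers chosen for illustration, but the point holds generally: the cluster size, and therefore the cost, scales with how much data each node can handle.

```python
import math

def nodes_needed(data_tb: float, capacity_tb_per_node: float) -> int:
    # Smallest cluster that can hold the workload at the given per-node capacity.
    return math.ceil(data_tb / capacity_tb_per_node)

DATA_TB = 32                           # hypothetical workload size
before = nodes_needed(DATA_TB, 1.0)    # assumed old capacity: 1 TB per node
after = nodes_needed(DATA_TB, 2.0)     # capacity doubled: 2 TB per node
print(before, after)                   # 32 16
assert after * 2 == before             # half the nodes, hence half the cluster cost
```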
So if we have doubled the amount of data which can be processed by a node, the cluster size which is required by customers reduces to half, and that's why the cost drops to half. The way we have managed to do this is by two things. One is support for Bloom filters, which reduces the amount of intermediate memory. And second is we compress the base data. So these are the two techniques we have used to process more data per node. The second way by which we are lowering the cost for the customers is by supporting pause and resume of HeatWave. Many times you find that customers of HeatWave and other services want to run some queries or some workloads for some duration of time, but then they don't need the cluster for a few hours. Now with the support for pause and resume, customers can pause the cluster and the HeatWave cluster instantaneously stops. And when they resume, not only do we fetch the data at a very, like, you know, quick pace from the object store, but we also preserve all the statistics which are used by Autopilot. So both the data and the metadata are fetched extremely fast from the object store. So with these two capabilities, we feel that it'll drive down the cost to our customers even more. >> Got it, thank you. Okay, I promised I was going to get to the benchmarks. Let's have it. How do you compare with others, but specifically cloud databases? I mean, and how do we know these benchmarks are real? My friends at EMC, back in the day, they were brilliant at doing benchmarks. They would produce these beautiful PowerPoint charts, but it was kind of opaque, but what do you say to that? >> Right, so there are multiple things I would say. The first thing is that this time we have published two benchmarks, one is for machine learning and the other is for SQL analytics. All the benchmarks, including the scripts which we have used, are available on GitHub.
So we have full transparency, and we invite and encourage customers or other service providers to download the scripts, to download the benchmarks, and see if they get any different results, right. So what we are seeing, we have published it for other people to try and validate. That's the first part. Now for machine learning, there hasn't been a precedent for enterprise benchmarks, so we took open data sets and we have published benchmarks for those, right? So both for classification as well as for regression, we have run the training times, and that's where we find that HeatWave ML is 25 times faster than Redshift ML at one percent of the cost. So fully transparent, available. For SQL analytics, in the past we have shown comparisons with TPC-H. So we would show TPC-H across various databases, across various data sizes. This time we decided to use TPC-DS. The advantage of TPC-DS over TPC-H is that it has a larger number of queries, the queries are more complex, the schema is more complex, and there is a lot more data skew. So it represents a different class of workloads, which is very interesting. So these are queries derived from the TPC-DS benchmark. So the numbers we have published this time are for 10 terabyte TPC-DS, and we are comparing with all four major services: Redshift, Snowflake, Google BigQuery, Azure Synapse. And in all the cases, HeatWave is significantly faster and significantly lower priced. Now one of the things I want to point out is that when we are doing the cost comparison with other vendors, we are being overly fair. For instance, the cost of HeatWave includes the cost of both the MySQL node as well as the HeatWave node, and with this setup, customers can run transaction processing, analytics, as well as machine learning. So the price captures all of it. Whereas with the other vendors, the comparison is only for the analytic queries, right?
So if customers wanted to run OLTP, you would need to add the cost of that database. Or if customers wanted to run machine learning, you would need to add the cost of that service. Furthermore, in the case of HeatWave, we are quoting pay as you go price, whereas for other vendors, like, you know, Redshift, and, you know, where applicable, we are quoting the one year, fully paid upfront cost. So it's, you know, a very fair comparison. So in terms of the numbers though, price performance for TPC-DS, we are about 4.8 times better price performance compared to Redshift. We are 14.4 times better price performance compared to Snowflake, 13 times better than Google BigQuery, and 15 times better than Synapse. So across the board, we are significantly faster and significantly lower priced. And as I said, all of these scripts are available on GitHub for people to try for themselves. >> Okay, all right, I get it. So I think what you're saying is, you could have said this is what it's going to cost for you to do both analytics and transaction processing on a competitive platform versus what it takes to do that on Oracle MySQL HeatWave, but you're not doing that. You're saying, let's take them head on in their sweet spot of analytics, or OLTP separately, and you're saying you still beat them. Okay, so you got this one database service in your cloud that supports transactions and analytics and machine learning. How much do you estimate you're saving companies with this integrated approach versus the alternative of, kind of what I called upfront, the right tool for the right job, and admittedly having to use ETL tools. How can you quantify that? >> Right, so, okay. The numbers, like I said, right, at the end of the day, in a cloud service, price performance is the metric which gives a sense as to how much the customers are going to save.
So for instance, for a TPC-DS workload, if we are 14 times better price performance than Snowflake, it means that our cost is going to be 1/14th of what customers would pay for Snowflake. Now, in addition, other costs, in terms of migrating the data, having to manage two different databases, having to pay for another service for, you know, machine learning, that's all extra, and that depends upon what tools customers are using or what other services they're using for transaction processing or for machine learning. But these numbers themselves, right, they're very, very compelling. If we are 1/5th the cost of Redshift, right, or 1/14th of Snowflake, these numbers themselves are very, very compelling. And that's the reason we are seeing so many of these migrations from these databases to MySQL HeatWave. >> Okay, great, thank you. Our last question: in the Q3 earnings call for fiscal 22, Larry Ellison said that "MySQL HeatWave is coming soon on AWS," and that caught a lot of people's attention. That's not like Oracle. I mean, people might say maybe that's an indication that you're not having success moving customers to OCI, so you got to go to other clouds, which by the way I applaud, but any comments on that? >> Yep, this is very much like Oracle. So if you look at one of the big reasons for the success of the Oracle database, and why the Oracle database is the most popular database, it's because the Oracle database runs on all the platforms, and that has been the case from day one. So very akin to that, the idea is that there's a lot of value in MySQL HeatWave, and we want to make sure that we can offer the same value to the customers of MySQL running on any cloud, whether it's OCI, whether it's AWS, or any other cloud.
So this shows how confident we are in our offering, and we believe that in other clouds as well, customers will find significant advantage by having a single database which is much faster and much lower priced than what alternatives they currently have. So this shows how confident we are about our products and services. >> Well, that's great. I mean, obviously for you, you're in the MySQL group. You love that, right? The more places you can run, the better it is for you, of course, and your customers. Okay, Nipun, we got to leave it there. As always it's great to have you on theCUBE, really appreciate your time. Thanks for coming on and sharing the new innovations. Congratulations on all the progress you're making here. You're doing a great job. >> Thank you, Dave, and thank you for the opportunity. >> All right, and thank you for watching this CUBE conversation with Dave Vellante for theCUBE, your leader in enterprise tech coverage. We'll see you next time. (upbeat music)
Hannah Sperling, SAP | WiDS 2022
>> Hey everyone. Welcome back to theCUBE's live coverage of the Women in Data Science Worldwide Conference, WiDS 2022. I'm Lisa Martin, coming to you from Stanford University at the Arrillaga Alumni Center. And I'm pleased to welcome my next guest. Hannah Sperling joins me, business process intelligence, or BPI, academic and research alliances at SAP. Welcome to the program. >> Hi, thank you so much for having me. >> So you just flew in from Germany. >> I did, last week. Yeah. Long way away. I'm very excited to be here. Uh, but before we get started, I would like to say that I feel very fortunate to be able to be here, and that my heart and wishes still go out to people that might be in more difficult situations right now. >> I agree. It's one of my favorite things about WiDS, the community that it's grown into. There's going to be about 100,000 people that will be involved annually in WiDS, but you walk into the Arrillaga Alumni Center and you feel this energy from all the women here, from what Margot and team started seven years ago to what it has become. I happened to be able to listen to one of the panels this morning, and they were talking about something that's just so important for everyone to hear, not just women: the importance of mentors and sponsors, and being able to kind of build your own personal board of directors. Talk to me about some of the mentors that you've had in the past and some of the ones that you have at SAP now. >> Yeah. Thank you. Um, that's actually a great starting point. So maybe I'll talk a bit about how I got involved in tech. Yeah. So SAP is a global software company, but I actually studied business, and I was hired directly from university, uh, around four years ago. And that was to join SAP's analytics department. And I've always had a weird thing for databases, even when I was in my undergrad.
Um, I did enjoy working with data, and so, working in analytics with those teams and some people mentoring me, I got into database modeling and eventually ventured even further into development. I was working in analytics development for a couple of years. And yeah, I'm still with a global software provider now, which brought me to women in data science, because now I'm also involved in research again, because, yeah, for some reason I couldn't get enough of that. Um, maybe learn about the stuff that I didn't do in my undergrad. >> And post-grad now, um, researching at university, and, um, yeah, one big part in at least European data science efforts, um, is the topic of sensitive data and data privacy considerations. And this is, um, also a topic very close to my heart, because you can only manage what you measure, right? But if everybody is afraid to touch certain pieces of sensitive data, I think we might not get to where we want to be as fast as we possibly could be. And so I've been really getting into data anonymization procedures, because I think if we could render workforce data usable, especially when it comes to increasing diversity in STEM or in technology jobs, we should really be, um, letting the data speak. >> And letting the data speak. I like that. One of the things they were talking about this morning was the bias in data, the challenges that presents. And I've had some interesting conversations on theCUBE today about data in health care, data in transportation equity. Where do you, what do you think, if we think of International Women's Day, which is tomorrow, breaking the bias is the theme. Where do you think we are, from your perspective, on breaking the bias that's across all these different data sets? >> Right. So I guess as somebody working with data on a daily basis, I'm sometimes amazed at how many people still seem to think that data can be unbiased.
And this was actually touched upon also in the first keynote, which I very much enjoyed, uh, talking about human centered data science. People that believe that you can take the human factor out of any effort related to analysis, um, are definitely on the wrong path. So I feel like the sooner we realize that we need to take into account certain biases that will definitely be there, because data is humanly generated, um, the closer we're going to get to something that represents reality better and might help us to change reality for the better as well, because we don't want to stick with the status quo. And any time you look at data, it's definitely gonna be a backward looking effort. So I think the first step is to be aware of that and not to strive for complete objectivity, but to understand and come to terms with the fact, just as it was mentioned in the equity panel, that that is logically impossible, right? >> You bring up a really important point. It's important to understand that that is not possible, but what can we work with? What is possible? What can we get to? Where do you think we are on the journey of being able to get there? >> I think that initiatives like WiDS are playing an important role in making that better and increasing that awareness. There's a big trend around explainability and interpretability, um, in AI that you see, not just in Europe, but worldwide, because I think the awareness around those topics is increasing. And that will then, um, also show you the blind spots that you may still have, no matter how much you think about, um, uh, the context. Um, one thing that we still need to get a lot better at, though, is including everybody in these types of projects, because otherwise you're always going to have a certain selection in terms of perspectives that you're getting. >> Right. That thought diversity, there's so much value in thought diversity.
That's something that I think, I first started talking about thought diversity at a WiDS conference a few years ago, and really understanding the impact that that can make on every industry. >> Totally. And I love this example of, I think it was a soap dispenser, one of these really early examples of how technology, if you don't watch out for these, um, human centered considerations, how technology can, can go wrong and just, um, perpetuate bias. So a soap dispenser that would only recognize the hand if it was a certain, uh, light skin type that would, you know, be placed underneath it. So it's simple examples like that, um, that I think beautifully illustrate what we need to watch out for when we design automatic decision aids, for example, because anywhere you don't have a human checking what's ultimately decided upon, you might end up with much more grave examples. >> Right? No, it's, it's, I agree. Cecilia Aragon gave the talk this morning on human centered AI. I was able to interview her a couple of weeks ago for WiDS, and a very inspiring woman, and an author herself, but she brought up a great point about it's the humans and the AI working together. You can't ditch the humans completely, to your point. There are things that will go wrong. I think that sends a good message that it's not going to be AI taking jobs, but we have to have those two components working together. >> Yeah. And maybe to also refer to the panel discussion we heard, um, on, on equity, um, I very much liked Professor Bowles' point, um, and how she emphasized that we're never gonna get to this perfectly objective state.
And then also during that panel, um, uh, a data scientist said that 80% of her work is still cleaning the data, most likely because I feel sometimes there is this, um, uh, almost mysticism around the role of a data scientist. It sounds really catchy and cool, but, um, there's so many different aspects of work in data science that I feel it's hard to put that all in a nutshell, narrowed down to one role. Um, I think in the end, if you enjoy working with data, and maybe you can even combine that with a certain domain that you're particularly interested in, be it sustainability, or, you know, urban planning, whatever, that is the perfect match.
That is something I would definitely stay true to. >> It is. And having that passion that goes along with that also can be very impactful. So you love data. You talked about that, you said you had a strange love for databases. Where do you, where do you want to go from where you are now? How much more deeply are you going to dive into the world of data? >> That's a good question, because I would, at this point, definitely not consider myself a data scientist, but I feel like, you know, taking baby steps, I'm maybe on a path to becoming one in the future. Um, and so being at university, uh, again gives me, gives me the opportunity to dive back into certain courses, and I've done, you know, smaller data science projects. Um, and I was actually amazed at, and this was touched on in a panel as well earlier, um, how outdated so many, um, really frequently used data sets in the realm of research are. You know, AI, machine learning research, all these models that you feed with these super outdated data sets. And that's, to me, like something I can relate to. Um, and then when you go down that path, you come back to the sort of data engineering path that I really enjoy. So I could see myself, you know, keeping on working on that, the whole data privacy and analytics area, both topics that are very close to my heart, and I think they can be combined. They're not opposites. >> Data privacy is a really interesting topic. We're seeing so many, you know, GDPR, how many years old is that now, a few years? And we've got other countries and states within the United States; for example, California has CCPA, which will become CPRA next year. And it's expanding the definition of what private, sensitive data is. So companies have to be sensitive to that, but it's a huge challenge to do so, because there's so much potential that can come from the data. Yet we've got that personal aspect, that sensitive aspect, that they have to be aware of, otherwise there's huge fines. Totally. Where do you think we are with that in terms of kind of compliance? >> So, um, I think in the past years we've seen quite a few, uh, rather shocking examples, um, in the United States, for instance, where, um, yeah, personal data was used, or proxies, um, that led to, uh, detrimental outcomes. Um, in Europe, thanks to the strong data regulations, I think, um, we haven't had as many problems, but here the question remains, well, where do you draw the line? And, you know, how do you design this trade-off between increasing efficiency, um, making business applications better, for example, in the case of SAP, um, while protecting the individual, uh, privacy rights of, of people. So, um, I guess in one way, SAP is in an easier position, because we deal with business data. So anybody who doesn't want to care about the human element maybe would like to, you know, try building models on machine generated data first.
And I was actually impressed by the sophisticated newness of legislation in, in that area. And the plan is for the future to tie the rules around the use of data science, to the specific objectives of the project. And I think that's the only way to go because of the data's out there it's going to be used. Right. We've sort of learned that and true anonymization might not even be possible because of the amount of data that's out there. So I think this approach of, um, trying to limit the, the projects in terms of, you know, um, looking at what do they want to achieve, not just for an individual company, but also for us as a society, think that needs to play a much bigger role in any data-related projects where >>You said getting true anonymization isn't really feasible. Where are we though on the anonymization pathway, >>If you will. I mean, it always, it's always the cost benefit trade off, right? Because if the question is not interesting enough, so if you're not going to allocate enough resources in trying to reverse engineer out an old, the tie to an individual, for example, sticking true to this, um, anonymization example, um, nobody's going to do it right. We live in a world where there's data everywhere. So I feel like that that's not going to be our problem. Um, and that is why this approach of trying to look at the objectives of a project come in, because, you know, um, sometimes maybe we're just lucky that it's not valuable enough to figure out certain details about our personal lives so that nobody will try, because I am sure that if people, data scientists tried hard enough, um, I wonder if there's challenges they wouldn't be able to solve. >>And there has been companies that have, you know, put out data sets that were supposedly anonymized. And then, um, it wasn't actually that hard to make interferences and in the, in the panel and equity one lab, one last thought about that. 
We heard Jessica speak about construction, and how she was trying to use synthetic data because it's so hard to get the real data, and the challenge of getting the synthetic data to mimic the true data. The question came up of sensors in the household and so on. That is obviously a huge opportunity, but as somebody who's very sensitive when it comes to privacy considerations, straight away I'm thinking: if we generate all this data, what if somebody uses it for the wrong reasons, which might not be better urban planning for all different communities, but simple profit maximization? Right? So this is something that's also very dear to my heart, and I'm definitely going to go down that path further. >> Well, Hannah, it's been great having you on the program. Congratulations on being a WiDS ambassador. I'm sure there are going to be a lot of great lessons and experiences that you'll take back to Germany from here. Thank you so much, we appreciate your time. For Hannah Sperling, I'm Lisa Martin. You're watching theCUBE's live coverage of the Women in Data Science conference 2022. Stick around, I'll be right back with my next guest.
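[Editor's note: the re-identification risk Sperling describes, "supposedly anonymized" data sets that still allow inferences, can be made concrete with a small k-anonymity check over quasi-identifiers. The records and column names below are hypothetical, purely for illustration.]

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by the quasi-identifier
    columns. A k of 1 means at least one person is unique on those columns
    and can be re-identified by linking against an outside data set."""
    groups = Counter(tuple(r[c] for c in quasi_identifiers) for r in records)
    return min(groups.values())

# A "supposedly anonymized" release: names removed, but ZIP code,
# birth year and gender remain (hypothetical data).
release = [
    {"zip": "94305", "born": 1985, "gender": "F", "diagnosis": "flu"},
    {"zip": "94305", "born": 1985, "gender": "F", "diagnosis": "asthma"},
    {"zip": "94041", "born": 1990, "gender": "M", "diagnosis": "flu"},
]

print(k_anonymity(release, ["zip", "born", "gender"]))  # 1: the 94041 record is unique
```

Raising k means generalizing or suppressing the quasi-identifiers (coarser ZIP codes, age bands), which is exactly the cost-benefit trade-off discussed above.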
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Hannah | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Cecilia Aragon | PERSON | 0.99+ |
Hannah Sperling | PERSON | 0.99+ |
Jessica | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Germany | LOCATION | 0.99+ |
80% | QUANTITY | 0.99+ |
United States | LOCATION | 0.99+ |
2020 | DATE | 0.99+ |
Bowles | PERSON | 0.99+ |
next year | DATE | 0.99+ |
today | DATE | 0.99+ |
seven years ago | DATE | 0.99+ |
first step | QUANTITY | 0.99+ |
one role | QUANTITY | 0.99+ |
SAP | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
last week | DATE | 0.99+ |
first keynote | QUANTITY | 0.99+ |
European commission | ORGANIZATION | 0.98+ |
first | QUANTITY | 0.98+ |
two components | QUANTITY | 0.98+ |
One | QUANTITY | 0.97+ |
SAP HANA | TITLE | 0.97+ |
one | QUANTITY | 0.96+ |
this morning | DATE | 0.95+ |
around four years ago | DATE | 0.94+ |
both topics | QUANTITY | 0.94+ |
100,000 people | QUANTITY | 0.93+ |
four winds | QUANTITY | 0.93+ |
international women's day | EVENT | 0.91+ |
California | LOCATION | 0.9+ |
GDPR | TITLE | 0.89+ |
one way | QUANTITY | 0.88+ |
couple of weeks ago | DATE | 0.87+ |
few years ago | DATE | 0.87+ |
2022 | DATE | 0.86+ |
Stanford university | ORGANIZATION | 0.84+ |
European | OTHER | 0.82+ |
Arriaga | ORGANIZATION | 0.8+ |
CPRA | ORGANIZATION | 0.8+ |
Wood | PERSON | 0.78+ |
one thing | QUANTITY | 0.75+ |
one last | QUANTITY | 0.74+ |
one of | QUANTITY | 0.74+ |
QS | EVENT | 0.72+ |
CCPA | ORGANIZATION | 0.69+ |
years | DATE | 0.6+ |
Margo | PERSON | 0.6+ |
about | QUANTITY | 0.54+ |
years | QUANTITY | 0.52+ |
WiDS | EVENT | 0.47+ |
Wiz | ORGANIZATION | 0.39+ |
Ashish Palekar & Cami Tavares | AWS Storage Day 2021
(upbeat music) >> Welcome back to theCUBE's continuous coverage of AWS Storage Day. My name is Dave Vellante, and we're here from Seattle. We're going to look at the really hard workloads, those business and mission critical workloads with the most sensitive data. They're harder to move to the cloud, they're hardened, and they have a lot of technical debt. And the blocker in some cases has been storage. Ashish Palekar is here. He's the general manager of EBS snapshots, and he's joined by Cami Tavares, who's a senior manager of product management for Amazon EBS. Folks, good to see you. >> Ashish: Good to see you again, Dave. >> Dave: Nice to see you again, Ashish. So first of all, let's start with EBS. People might not be familiar; everybody knows S3, it's famous, but how are customers using EBS? What do we need to know? >> Yeah, it's super important to get the basics right. We have a pretty broad storage portfolio. You talked about S3 and S3 Glacier, which are object and archival storage. We have EFS and FSx, which cover the file side, and then you have a whole host of data transfer services. Now, when we think about block, we think of really four things. We think about EBS, which is persistent storage for EC2 instances. We think about snapshots, which are backups of EBS volumes. Then we think about instance storage, which is storage directly attached to an instance, with a life cycle similar to that of the instance. Last but not least, data services: things like our elastic volumes capability or fast snapshot restore. So the answer to your question really is that EBS is persistent storage for EC2. If you've used EC2 instances, you've likely used EBS volumes. They serve as boot volumes and as data volumes, and really cover a wide gamut of workloads, from relational and NoSQL databases to file serving and media encoding. It really covers the gamut of workloads.
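[Editor's note: a toy sketch of the "snapshots are backups of EBS volumes" point above. Real EBS snapshots are incremental, storing only the blocks that changed since the previous snapshot; the block-map representation here is a simplification for illustration, not the actual implementation.]

```python
def incremental_snapshot(prev_blocks, curr_blocks):
    """Blocks an incremental EBS-style snapshot must actually store:
    those added or changed since the previous snapshot. Unchanged
    blocks are referenced from earlier snapshots, not copied again."""
    return {i: data for i, data in curr_blocks.items()
            if prev_blocks.get(i) != data}

vol_t0 = {0: "aa", 1: "bb", 2: "cc"}
vol_t1 = {0: "aa", 1: "bd", 2: "cc", 3: "ee"}  # block 1 rewritten, block 3 added

delta = incremental_snapshot(vol_t0, vol_t1)
print(sorted(delta))  # [1, 3]
```

This is why frequent snapshots of a mostly idle volume stay cheap: the second and later snapshots only pay for the changed blocks.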
Dave: So when I heard "SAN in the cloud," I laughed out loud, because I think of a box, a bunch of switches, and this complicated network, and now you're turning it into an API. I was like, okay. So you've made some announcements that support SAN in the cloud. What can you tell us about them? >> Ashish: Yeah. SANs, storage area networks, are really the external arrays that customers buy and connect their performance critical and mission critical workloads to. With block storage and with EBS, we had a bunch of customers come to us and say, "I'm thinking about moving those kinds of workloads to the cloud. What do you have?" And really what they were looking for is the performance, availability, and durability characteristics that they would get from their traditional SANs on premises. So that's what the team embarked on, and what we launched at re:Invent, and then made generally available in July, is io2 Block Express. io2 Block Express is a complete ground-up reinvention of our storage product offering, and it gives customers the same availability, durability, and performance characteristics, which we'll go into a little later, that they're used to on premises. The other thing we realized is that it's not enough to have the volume; you need an instance that can drive that kind of throughput and IOPS. So, coupled with our friends in EC2, we launched R5b, which triples the IOPS and throughput you can get from a single instance to EBS storage. When you couple the sub-millisecond latency, capacity, and performance that you get from io2 Block Express with R5b, what we hear from customers is that it gives them the performance, availability, and durability characteristics to move their mission critical and business critical workloads from on premises into the cloud.
Dave: Thank you for that. So Cami, if I think about the prevailing way in which storage works: I drop off a box at the loading dock, and then I really don't know what happens. There may be a service organization that's more intimate with the customer, but I don't really see the innovations and the use cases they're applied to. Cloud's different; you live it every day. You guys always talk about customer-inspired innovation. So what are you seeing in terms of how people are using this capability, and what innovations are they driving? >> Cami: Yeah, so when you look at the EBS portfolio and its evolution over the years, you can really see that it was driven by customer need. We have different volume types with very specific performance characteristics, built to meet the unique needs of customer workloads. So I'll tell you a little bit about some of our specific volume types to illustrate that evolution. Starting with our general purpose volumes: we have many customers using these volumes today. They're looking for high performance at a low cost, for all kinds of transactional workloads, low-latency interactive applications, and boot volumes, as Ashish mentioned. The customers using these general purpose volumes tell us they really like that balance of cost and performance. Customers also told us, "Listen, I have more demanding applications that need higher performance. I need more IOPS, more throughput." Looking at that need, we were really talking about IO-intensive applications like SAP HANA and Oracle, databases that require higher durability. So we took that customer feedback and launched our provisioned IOPS io2 volume. With that volume, you get five nines of durability and four times the IOPS that you would get with general purpose volumes.
So it's a really compelling offering. Again, customers came to us and said, "This is great, but I need more performance: more IOPS, more throughput, more storage than I can get with a single io2 volume." These were the mission critical applications you mentioned, SAP HANA, Oracle. And what we often saw customers doing is striping together multiple io2 volumes to get maximum performance, but with the most demanding applications it very quickly got to a point where they had more io2 volumes than they wanted to manage. So we took that feedback to heart, completely reinvented the underlying EBS hardware and the software and networking stacks, and launched Block Express. With Block Express, you can get four times the IOPS, throughput, and storage that you would get with a single io2 volume. So it's a really compelling offering for customers. >> Dave: If I had to go back and ask what the catalyst was, what was the sort of business climate that really drove the decision here? Was it that people were just fed up with, I'll use the phrase, the undifferentiated heavy lifting around SANs? Was it COVID-driven? What was the climate? >> You know, it's important to recognize that in today's business climate, every business is a data business, and block storage is really a foundational part of that. With SAN in the cloud specifically, we have seen enterprises buying these traditional hardware arrays for on-premises SANs for several years, and it's a very expensive investment: this year alone, they're spending over $22 billion on SANs. With that old on-premises SAN model, you would probably spend a lot of time doing upfront capacity planning, trying to figure out how much storage you might need, and in the end you'd probably end up overbuying for peak demand, because you really don't want to get stuck without what you need to scale your business.
And so now with Block Express, you don't have to do that anymore. You pay for what you need today, and you can increase your storage as your business needs change. So that's cost, and cost is a very important factor. But really, when we talk to customers and enterprises looking for SAN in the cloud, the number one reason they want to move their SANs and these mission critical workloads to the cloud is agility and speed. It's really transformational for businesses to be able to change the experience for their own customers and innovate at a much faster pace, and with the Block Express product you get to do that much faster. You can go from an idea to an implementation orders of magnitude faster. Before, if you had these workloads on premises, it would take you several weeks just to get the hardware, and then you'd have to build all this surrounding infrastructure to get it up and running. Now you don't have to do that anymore. You get your storage in minutes, and if you change your mind, if your business needs or your workloads change, you can modify your EBS volume types without interrupting your workload. >> Dave: Thank you for that. Now, I know storage admins who say, "Don't touch my SAN, I'm not moving it." This is a big decision for a lot of people. So, a two-part question: why now, and what do people need to know? And give us the north star; close it out with where you see the future. >> Ashish: Yeah, so I'll kick things off, and then Cami, do jump in. So first off, the volume is one part of the story, right? With io2 Block Express, I think we've given customers an extremely compelling offering on which to build their mission critical and business critical applications. We talked about the R5b instance type in terms of giving that instance-level performance, but all of this sits on the foundation of AWS, in terms of availability zones and regions.
So you think about the constructs, and we talk about them in terms of building blocks, but our building blocks are really availability zones and regions. That gives you the core availability infrastructure you need to build your mission critical and business critical applications. You then layer on top of that our regional footprint, and now you can spin up those workloads globally if you need to. And last but not least, once you're in AWS, you have access to other services, be it AI, be it ML, be it our relational database services, so you can start to shed the undifferentiated heavy lifting. You really get the smorgasbord, from the availability footprint to the global footprint, all the way up to the service stack you get access to. >> Dave: So that's really thinking out of the box. We're out of time. Cami, we'll give you the last word. >> Cami: I just want to say, if you want to learn more about EBS, there's a deep dive session with our principal engineer, Marc Olson, later today. So definitely join that. >> Dave: Folks, thanks so much for coming to theCUBE. >> (in chorus) Thank you. >> Thank you for watching. Keep it right there for more great content from AWS Storage Day, from Seattle.
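[Editor's note: the striping arithmetic behind the "four times" point Tavares makes can be sketched as below. The per-volume IOPS limits are approximate published figures, and the 200,000-IOPS target is hypothetical.]

```python
import math

IO2_MAX_IOPS = 64_000                       # per-volume io2 limit (approximate, per AWS docs)
BLOCK_EXPRESS_MAX_IOPS = 4 * IO2_MAX_IOPS   # the "four times" a single io2 volume

def volumes_needed(target_iops, per_volume_max):
    """How many volumes you would have to stripe together to hit a target."""
    return math.ceil(target_iops / per_volume_max)

target = 200_000  # hypothetical mission critical database requirement
print(volumes_needed(target, IO2_MAX_IOPS))            # 4 striped io2 volumes
print(volumes_needed(target, BLOCK_EXPRESS_MAX_IOPS))  # 1 Block Express volume
```

The management burden Tavares describes is the volume count on the left: four striped volumes to configure, monitor, and snapshot consistently, versus one.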
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
Ashish Palekar | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Ashish | PERSON | 0.99+ |
Cami Tavares | PERSON | 0.99+ |
Marc Olson | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
Cami | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
EBS | ORGANIZATION | 0.99+ |
two-part | QUANTITY | 0.99+ |
one part | QUANTITY | 0.99+ |
July | DATE | 0.99+ |
over $22 billion | QUANTITY | 0.99+ |
EC2 | TITLE | 0.99+ |
FSX | TITLE | 0.99+ |
EFS | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
EBS | TITLE | 0.98+ |
four times | QUANTITY | 0.98+ |
IO2 block express | TITLE | 0.97+ |
Oracle | ORGANIZATION | 0.96+ |
today | DATE | 0.94+ |
five nines | QUANTITY | 0.93+ |
this year | DATE | 0.92+ |
SQL | TITLE | 0.92+ |
theCUBE | ORGANIZATION | 0.92+ |
single | QUANTITY | 0.91+ |
later today | DATE | 0.87+ |
SAP HANA | TITLE | 0.86+ |
four things | QUANTITY | 0.86+ |
single instance | QUANTITY | 0.85+ |
R5b | OTHER | 0.85+ |
block express | TITLE | 0.84+ |
block express | ORGANIZATION | 0.76+ |
S3 | TITLE | 0.75+ |
Amazon EBS | ORGANIZATION | 0.74+ |
one | QUANTITY | 0.71+ |
AWS Storage Day 2021 | EVENT | 0.69+ |
GEd | ORGANIZATION | 0.63+ |
storage day | EVENT | 0.59+ |
star | LOCATION | 0.58+ |
several weeks | QUANTITY | 0.56+ |
COVID | OTHER | 0.53+ |
S3 | COMMERCIAL_ITEM | 0.51+ |
IO2 | TITLE | 0.44+ |
Talor Holloway, Advent One | IBM Think 2021
>> From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. Welcome back everyone to theCUBE's coverage of IBM Think 2021 virtual. I'm John Furrier, your host of theCUBE. Our next guest is Talor Holloway, chief technology officer at Advent One. Talor, welcome to theCUBE, from down under in Australia; we're in Palo Alto, California. How are you? >> Well, thanks John, thanks very much. Glad to be on here. >> We love the virtual CUBE at these virtual events; we get to talk to people really quickly with a click. Great conversation here around hybrid cloud, multi cloud, and all things enterprise software. Before we get started, I want to take a minute to explain what you guys do at Advent One. What's the main focus? >> Yeah, so look, we have a lot of customers in different verticals, so generally what we provide depends on the particular industry the customer is in. But generally speaking, we see a lot of demand for operational efficiency, helping our clients tackle cybersecurity risks, adopt cloud, and set themselves up to modernize their applications. >> And this has been a big wave coming in, for sure, with cloud and scale. So I've got to ask you, what are the main challenges you guys are solving for your customers, and how are you helping them overcome those in a transformative, innovative way? >> Yeah, look, I think helping our clients improve their security posture is a big one. We're finding as well that our customers are gaining a lot of operational efficiency by adopting open source technology. Red Hat's an important partner of ours, as is IBM, and we're seeing customers move away from some more proprietary solutions. Automation is a big focus for us as well.
We've had some great outcomes with our clients, helping them automate, stand up environments and data operations a lot more quickly and easily, and apply standards across multiple areas of their IT estate. >> What are some of the solutions you're doing with IBM's portfolio on the infrastructure side? You've got Red Hat, you've got a lot of open source to meet the needs of clients. What are the main ones? >> Yeah, I think on the storage side we'd probably help our clients tackle the expanding structured and, particularly, unstructured data they're trying to take control of. So looking at Spectrum Scale and those types of products for unstructured data is a good example, and then FlashSystem for block storage and more run-of-the-mill environments. We've helped our clients consolidate and modernize on IBM Power Systems. Having Red Hat, both as a Linux operating system and with OpenShift as a container platform, really helps there, and Red Hat also provides a management overlay, which has been great with what we do on IBM Power. We've been working on a few different use cases on Power in particular. More recently, SAP HANA is a big one, where we've had some success with clients migrating HANA onto IBM Power Systems. And we've also helped customers improve environments on the other end of the scale, such as IBM i. We still have a large number of customers with IBM i, and how do we help them? Some are moving to cloud in one way or another, others are consuming some kind of IaaS, and we can wrap a managed service around it to help them through. >> So I've got to ask you the question: as CTO, you play with a lot of technologies, and Kubernetes has become this lingua franca, this kind of middleware orchestration layer for containers. But when you walk into a client's environment, you don't have to name names, but you usually see one of two pictures: they need some serious help, or they've got their act together. Either way, both are opportunities for hybrid cloud. How do you evaluate the environment when you walk into those two scenarios? What are some of the conversations you have with those clients? Can you take me through a day in the life of both scenarios: the ones that are like, "I can't get the job done, am I even on the right team?", and the other ones, like, "we're grooving, we're kicking butt"? >> Yeah, so look, to start off with, you try to take a somewhat technology-agnostic view, and just sit down and listen to what they're trying to achieve and how they're going. For customers who have got it, as you say, all nailed down, and things are going really well, it's just really understanding what we can do to help, and whether there's an opportunity for us to help at all. Generally speaking, there's always going to be something, and if someone is going really well, they might just want help with a bespoke use case or something very specific. On the other end of the scale, where a customer is pretty early on and starting to struggle, we generally try to help them not boil the ocean all at once: just get some wins, pick some key use cases, deliver some value back, and then grow from there. Going into a customer and trying to do everything at once tends to be a challenge.
Just understand what the priorities are and help them get going. >> What's the impact been of Red Hat in your customer base? A lot of overlap, some overlap, no overlap? Coming together? What's the general trend you're seeing, and what's the reaction been? >> Yeah, I think it's been really good. Obviously IBM has a lot of focus on Cloud Paks, where they're bringing their software onto Red Hat OpenShift to run on multiple clouds, so I think that's one we'll see a lot more of over time. Also, helping customers automate their IT operations with Ansible is one we do quite a lot of, and there are some really bespoke use cases we've done with that, as well as standardized ones. So helping with day two operations and all that sort of thing. There have also been some really out-there things customers have needed to automate that were a challenge for them, and being able to use open source tools to do it has worked really well. We've had some good wins there. >> You know, I want to ask you about architecture, and I'll simplify it for the sake of DevOps: segmentation, hybrid clouds, take a programmable infrastructure, and then you've got modern applications that need to have AI. Some have said, even on theCUBE and other broadcasts, that if you don't have AI you're going to be at a handicap: some machine learning, some data has to be in there. You can probably see AI in mostly everything. As you go in and try to architect that out for customers, and help them get to a hybrid cloud infrastructure with a real modern application front end using data, what's the playbook? Do you have any best practices or examples you can share, or scenarios or visions that you see playing out? >> I think the first one is obviously making sure the customer's data is in the right place. They might want to use some machine learning in one particular cloud provider while they've got a lot of their applications and data in another, so how do we help them make it mobile, able to move data from one cloud to another, or back into their own data center? There's a lot of that. I think we spend a lot of time with customers to get the architecture right, and also to make sure it's secure from end to end. If they're moving things into one or more public clouds, as well as maybe their own data center, we make sure connectivity is all set up properly and all the security requirements are met. So we look at it from a high-level design point of view: what the target state is going to be versus the current state, really taking into account security, performance, connectivity, and those sorts of things, to make sure they're going to have a good result. >> You know, one of the things you mentioned, and this comes up a lot in my interviews with partners of IBM: beyond the normal stuff, they always comment about IBM's credibility, and one thing that comes out pretty consistently is their experience in verticals. They have such a track record in verticals, and this is where AI and machine learning data has to be very much scoped in on the vertical; you can't generalize and have a general purpose data plane inside a vertically specialized focus. How do you see that evolving? How does IBM play there, with this horizontally scalable mindset of a hybrid model, both on premises and in the cloud, while still providing that intimacy with the data to fuel the machine learning, or NLP, or power that AI, which seems to be critical? >> Yeah, I think there are a lot of services where public cloud providers are bringing out new things all the time, and some of it is pre-canned and easy to consume.
I think what I've observed IBM being really good at is handling some of those really bespoke use cases. If you have a particular vertical with a challenge, there are going to be pre-canned things you can go and consume, but if you need to do something custom, that can be quite challenging: how do they build something quite specific for a particular industry, and then be able to repeat it afterwards? For us, that's obviously something we're very interested in. >> Yeah, Talor, I love chatting with you, and I love getting the low-down. Also, people might not know you're the co-author of a book on performance with IBM Power Systems. So I've got to ask you, since I've got you here, and I don't mean to put you on the spot: can you share your vision, or any kind of anecdotal observation, as people start to put together their architecture? Again, beauty's in the eye of the beholder, every environment is different, but still, hybrid is a distributed computing concept. Is there a KPI, is there a best practice, for a manager or systems architect to keep an eye on what good is, and how good becomes better? Because day two operations becomes a super important concept; we're seeing some call it AIOps. Okay, I'm provisioning stuff out on a hybrid cloud operational environment, but then day two hits, and things happen as more stuff enters the equation. What's your vision on KPIs and management? What do you keep track of? >> Yeah, I think obviously attention to detail is really important, to be able to build things properly. A good KPI, particularly in a managed service area, that I'm curious to understand is how often you actually have to log into the systems you're managing. If you're logging in and remoting into servers and all that sort of thing all the time, your automation and configuration management is not set up properly.
So really, a good and interesting KPI is: how often do you log into things? If something went wrong, would you sooner go and build another one and shoot the one that failed, or go and restore from backup? So, thinking about how well things are automated, and whether things are immutable, using infrastructure as code: those are things that are really important when you look at how something is going to be scalable and easy to manage going forward. What I hate to see is where someone builds something, automates it all in the first place, and is then too scared to run it again afterwards in case it breaks something. >> It's funny, the next generation of leaders probably won't even know: "hey, Talor and John, they had to log into systems back in the day." It could be a story they tell their kids. But no, that's a good metric. So let's go to the next level of automation: what's the low-hanging fruit? Because you're getting at really the killer app there, which is self-healing systems and good networks that are programmable, but automation will define more value. What's your take? >> I think the main thing is where you move from a model of starting small and automating individual things, which could be patching or system provisioning or anything like that, to being able to drive everything through Git. So instead of having a written-up paper change request, "I'm going to change your system," and all the rest of it, it really should be driven through a pull request, with build pipelines that go and make the change in development, make sure it's successful, and then push it into production.
That's really where I think you want to get to: you can have a lot of people collaborating really well on a particular project or customer, while also having some guardrails around what happens, and some level of governance, rather than it being a free-for-all. >> Okay, final question. Where do you see Advent One headed? What are your future plans to continue to be a leading IT services provider for IBM's infrastructure portfolio? >> I think it comes down to people in the end: really making sure we partner with our clients, are well positioned to understand what they want to achieve, and have the expertise in our team to bring to the table to help them do it. I think open source is a key enabler to help our clients adopt a hybrid cloud model, as I touched on earlier, as well as to make use of multiple clouds where it makes sense. From a managed service perspective, I think everyone is calling themselves a next-generation managed service provider, but what that means for us is to provide a differentiated managed service and have the strong technical expertise to back it up. >> Talor Holloway, chief technology officer at Advent One, videoing in remotely from down under in Australia. I'm John Furrier in Palo Alto with theCUBE's coverage of IBM Think. Talor, thanks for joining me today on theCUBE. >> Thank you very much. >> Okay, more CUBE coverage to come. Thanks for watching.
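[Editor's note: Holloway's log-in KPI and "rebuild rather than hand-patch" stance can be sketched as a drift check against a desired state defined in code. The state keys and values below are hypothetical, not the output of any real configuration tool.]

```python
def remediation(server, desired_state):
    """The KPI in miniature: if a server has drifted from the state
    defined in code, the automated answer is 'rebuild from code',
    never 'log in and hand-patch the pet'."""
    drift = {k: v for k, v in desired_state.items() if server.get(k) != v}
    return ("rebuild", sorted(drift)) if drift else ("no action", [])

desired = {"os": "rhel8", "pkg_openssl": "1.1.1k", "sshd_root_login": "no"}
drifted = {"os": "rhel8", "pkg_openssl": "1.1.1g", "sshd_root_login": "yes"}

print(remediation(drifted, desired))        # ('rebuild', ['pkg_openssl', 'sshd_root_login'])
print(remediation(dict(desired), desired))  # ('no action', [])
```

In the Git-driven model described above, changing `desired` happens via a pull request, and a pipeline applies it to development before production; a team that trusts this loop never needs to log in, which is exactly the metric.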
SUMMARY :
Taylor Holloway, CTO of Advent One, joins John Furrier for theCUBE's coverage of IBM Think 2021. Holloway describes how Advent One helps clients improve their security posture, adopt open source, and automate IT operations, drawing on IBM's infrastructure portfolio: Spectrum Scale for unstructured data, flash systems for block storage, and IBM Power Systems running Red Hat and OpenShift, including SAP HANA migrations onto Power. He advises starting small with automation rather than boiling the ocean, driving all change through Git and pull requests with pipelines and guardrails, and treating how often engineers must log into systems as a telling KPI. Looking ahead, he sees people, open source, and a differentiated managed service as the keys to helping clients adopt hybrid and multicloud.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Australia | LOCATION | 0.99+ |
Taylor Holloway | PERSON | 0.99+ |
today | DATE | 0.99+ |
taylor | PERSON | 0.99+ |
Talor Holloway | PERSON | 0.99+ |
Tyler | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Taylor | PERSON | 0.99+ |
two scenarios | QUANTITY | 0.99+ |
taylor Holloway | PERSON | 0.99+ |
Think 2021 | COMMERCIAL_ITEM | 0.99+ |
john | PERSON | 0.99+ |
next year | DATE | 0.99+ |
both scenarios | QUANTITY | 0.99+ |
IBM Power Systems | ORGANIZATION | 0.98+ |
two pictures | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Palo alto California | LOCATION | 0.97+ |
Red Hat | TITLE | 0.96+ |
first one | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
Palo alto | ORGANIZATION | 0.92+ |
both opportunities | QUANTITY | 0.92+ |
two hits | QUANTITY | 0.9+ |
red hat | TITLE | 0.88+ |
Think | COMMERCIAL_ITEM | 0.83+ |
john ferrier | PERSON | 0.82+ |
advent one | ORGANIZATION | 0.82+ |
one cloud | QUANTITY | 0.79+ |
one way | QUANTITY | 0.78+ |
Lynx | TITLE | 0.75+ |
two operations | QUANTITY | 0.69+ |
BMS | ORGANIZATION | 0.68+ |
Chief | PERSON | 0.67+ |
2021 | DATE | 0.63+ |
SAP Hana | TITLE | 0.63+ |
Muhanna | TITLE | 0.58+ |
cloud | QUANTITY | 0.54+ |
Advent One | ORGANIZATION | 0.53+ |
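Entity tables like the one above are straightforward to consume programmatically. A minimal sketch that parses `Entity | Category | Confidence` rows into records and filters on score, where treating the trailing `+` as a formatting quirk of how these confidences are written is my assumption:

```python
def parse_entities(rows):
    """Parse 'Entity | Category | Confidence' rows into records."""
    records = []
    for row in rows:
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        if len(cells) < 3 or set(cells[0]) <= {"-"}:
            continue  # skip blank lines and separator rows like ---|---|---
        entity, category, confidence = cells[0], cells[1], cells[2]
        records.append({
            "entity": entity,
            "category": category,
            # scores are written like '0.99+'; keep the numeric part
            "confidence": float(confidence.rstrip("+")),
        })
    return records

rows = [
    "IBM | ORGANIZATION | 0.99+ |",
    "---|---|---|",
    "Australia | LOCATION | 0.99+ |",
    "one cloud | QUANTITY | 0.79+ |",
]
high = [r["entity"] for r in parse_entities(rows) if r["confidence"] >= 0.9]
print(high)  # ['IBM', 'Australia']
```

Filtering on confidence like this is also a quick way to separate the solid extractions (IBM, Australia) from the noisy low-score ones (mis-transcribed names, stray quantities) visible in the table.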
IBM16 Leo LaBranche VTT
(upbeat music) >> Narrator: From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome to theCUBE's digital coverage of IBM Think 2021. I'm Lisa Martin. Joining me next is Leo LaBranche, Director of Global Strategic Initiatives at AWS. Leo, welcome to theCUBE. >> Thank you, happy to be here. >> So talk to me about AWS and IBM. What's going on with the relationship? What are some of the things that are significant for both partners? >> Yeah, absolutely. IBM's relationship with us really started around 2016. I would say it was a little bit more opportunistic at the time. We knew there was an opportunity to go to market together, and we knew there were some great things we could do for our customers, but we hadn't quite cracked the code, so to speak, on when and where and why we were going to partner at that point. Fast forward into the 2017 to 2019 timeframe, and we became, I'd say, a lot more intentional about how we were going to go to market and where we were going to invest. Areas such as SAP were early ones that we identified, and I'd say the ball really started rolling in the 2018 timeframe. A combination of different things occurred, you know, the acquisition of Red Hat, obviously. Red Hat was a very significant partner with AWS prior to the acquisition. Post acquisition, you combine that with ramping up a workforce focused on AWS, along with a number of different competencies that IBM really invested in, around migration, as an example, or SAP. And, you know, the ball really started to roll quickly after that. I'd say over the last 18 months or so we've both invested significantly in the relationship, expanding around the world really, and joining resources and capability to make sure we go to market in an intentional way rather than an opportunistic one. >> Oh, go ahead.
So I'd say so far that's absolutely been paying off, in that we are seeing a number of wins all around the world, across a broad set of industries as well as a broad set of technologies. So, you know, the strength of IBM's consulting services in particular, but also their software, combined with the strength of our platform, has really proven to be successful for our customers. >> So you said it started in 2016 and really started taking shape in the last couple of years with the Red Hat acquisition. Talk to me about what's in this for customers. I imagine customers that are expanding or needing to move workloads into the cloud, or maybe taking more of a hybrid cloud approach. What are some of the big benefits that customers are going to gain from this partnership? >> Yeah, absolutely. The reality is IBM has a long and storied history and relationship with their customers, right? They run and manage many of the workloads. They really know the customer's business incredibly well. They have domain expertise in industry, and the technology expertise from a professional services perspective to really help navigate the waters and determine what the right strategy is around moving to the cloud. You combine that with the depth and breadth of the skills, capabilities, and services that AWS provides, and the fact that IBM has invested significantly in making sure their professional services are deeply steeped in our technology and capabilities. It's a great combination of really understanding the customer's needs plus, honestly, the art of the possible when it comes to the technology we provide. Together, they can really accelerate the move to the cloud and mitigate its risk. >> That risk mitigation is key. So AWS recently launched, if I'm going to get this right, Red Hat OpenShift Service on AWS, or ROSA. Can you talk to me a little bit about ROSA? >> Yeah. So Red Hat is obviously very well known and widely adopted within the enterprise.
We have built a fully managed service around Red Hat on AWS. What that means is you'll have access to essentially the capabilities that Red Hat would normally provide, but all containerized within a solution that gives you access to AWS services, right? The other benefit is that normally you would have a multi-vendor invoicing and cost model, where you get billed by Red Hat, billed by Amazon, billed by IBM. In this case it's essentially a holistic service in which there's a single invoicing and vendor relationship. So it's a combination of the capabilities that would normally be provided via Red Hat, combined with access to the cloud and all the interfaces and capabilities around OpenShift, et cetera, plus a more interesting and beneficial commercial model. >> So streamlined pricing models and a streamlined operating model for customers. Talk to me about some of the customers that have adopted it. Give me a look into some of the industries where you've seen good adoption and some of the results that they're gaining so far.
>> Yeah, absolutely. So, no big surprise, right? The existing customer base that currently uses Red Hat Linux and some of the OpenShift options that are out today are the right customers to potentially look at this when it comes to moving forward. You know, industry-wise, certainly there are areas within financial services, banking, insurance, et cetera.
We're also seeing some around manufacturing, a little less so, and some in media and telco as well. So it's a broad swath, and the applicability of Red Hat and OpenShift is somewhat universal, but the early customer base has largely been in those three areas. >> I'm curious, who are the key target audiences? Are these Red Hat customers, AWS customers, IBM customers, or all three? >> Yeah, I mean, there isn't necessarily one perfect customer that we're looking for, as much as existing customers that are currently using Linux or using Red Hat. If a customer currently has a relationship with either AWS or IBM, there's an opportunity to look at it from any of those angles. If you're already on cloud, or you've already experienced AWS in some shape or form, there's an opportunity to leverage ROSA to further expand that capability and also gain some more flexibility, so to speak. If you're already using IBM as a professional services provider and advisory firm, then they absolutely have the expertise and understanding of this product set to help you understand how it could be best leveraged. So you can look at it from any of those dimensions. If it's a customer that's completely new to all of us, then we're happy to talk to you, but it's something that will definitely take a little more explanation to understand why you should or shouldn't consider this multicloud OpenShift-type solution.
>> Absolutely. AWS, with SAP as with many of our services, is really looking to give you all the options you could conceivably need or want in order to engage in cloud migration and transformation. For SAP specifically, right, there are a number of different options. You could go for a lift and shift, or upgrade from any of a number of databases to Suite on HANA, or potentially look to modernize and leverage cloud services post migration as well. And then the sort of final pinnacle of that is a complete transformation to S/4 or S/4HANA. As far as why AWS specifically, beyond just choice: from a cost perspective it's pretty compelling, and we have some pretty compelling business cases and use cases around the cost savings that come when you move from an on-premises SAP implementation to the cloud. Beyond that, the cloud migration itself is usually an opportunity to condense or reduce the number of instances you're paying for from an SAP perspective, which further reduces cost. From a reliability perspective, you know, AWS is the world's most secure, extensive, reliable cloud infrastructure, right? Any of the instances that you put on AWS are fairly instantly provisioned in such a way that they are provided across multiple of what we call Availability Zones, which gives you the resiliency and stability that really no other cloud provider can provide. On the security front, this is really a unique position, in that you have the depth of AWS's security services plus the numerous years of professional services work that IBM has done in the security space. You know, they have roughly 8,000 or so cybersecurity experts within IBM. So the combination of their expertise in security plus the security of our platform is a great combination. I'd say the final one is around performance, right?
AWS offers many more cloud native options around certified SAP instances, specifically all the way from 256 gigabyte option all the way up to 24 terabytes which is the largest of its kind. And as those who have implemented SAP know it's a very resource intensive. So having the ability to do that from a performance perspective is a key differentiator for sure. >> Talk to me from your opinion about why IBM for SAP on AWS, why should customers go that direction for their projects? >> Yeah, you know, IBM has over 40 years of experience in implementing SAP for their customers right. And they've done, I think it's over 6,000 SAP migrations, 40,000 global SAP consultants around the world. Right, so from a capability and depth of experience, you know, there's a lot of nuance to doing it. SAP implementation, particularly one that's then moving from on-prem to the cloud. You know, they've got the experience right. Beyond that they have industry specific solutions that are pre-configured. So I think that there's 12 industry specific solutions pre-configured for SAP, it allows, you know roughly 20 to 30% acceleration when it comes to implementation of platforms. So combination of just depth of experience, depth of capability combined with these solutions to accelerate are all key reasons for sure. >> The acceleration you bring up, sorry is interesting because we saw in the last year the acceleration of digital transformation projects and businesses needing to pivot again and again, and again to figure out how to survive and be successful in this very dynamic market in which we're still living. Anything industry-wise specific that you saw that was really driving the acceleration and the use cases for ROSA in the last year? >> Yeah so, you know SAP, we saw an interesting trend as a result of what's everyone's been experiencing in the last year with COVID, et cetera. 
You know, many organizations postponed large ERP implementations and large SAP migrations, because of what you just said, right. They weren't entirely sure what would need to be done in order to survive either a competitive threats or more just the global threats that were occurring. So what we saw was, really none of the transformations went away. They, were put on hold for a period of time let's say six to nine months ago maybe even a year ago almost. In lieu of I would say more top line revenue generating or innovative type solutions that maybe were focused specifically at, you know, the changing dynamic with COVID. Since then we've seen a combination of those new ideas, right? Combination of the new innovation around healthcare of course, but also public sector and, you know a lot around employment and then engagement there. We've seen a combination of those new ideas and new innovations with the original goal of optimizing transforming SAP ERP, et cetera. And then combining the two to allow access to the data, that sits inside the SAP implementation the SAP. Combine the data in SAP with all these new innovations and then ultimately use that to sort of capitalize on what the future businesses are going to be. That's been huge, it's been very interesting to see some organizations completely change their business model over the course of the last 12 months. In ways they probably had never intended to before right? But it's, absolutely become an opportunity in a time of a lot of challenges. >> Agreed there are silver linings and we've seen a lot of those interesting opportunities to your point that businesses probably would never have come up with had there not been a forcing function like we've been living with. Leo thank you for joining me today. Talking to me about what's going on with IBM and AWS. We'll be excited to follow what happens with ROSA as it continues to roll out. And we appreciate you joining us on the program. 
>> Absolutely thank you for your time. >> For Leo Labrunch I'm Lisa Martin. You're watching theCUBE's digital coverage of IBM think 2021. (upbeat music)
SUMMARY :
brought to you by IBM. Welcome to theCUBE's digital What are some of the and I'd say the ball really in that we are seeing a number in the last couple of years, depth and breadth of the skills if I'm going to get this right. So it's combination of capabilities that Give me a look into some of the it's like insanely loud. Lisa just finished the question. Man's voice: I'll cut it question, I know the answer just the ability of customers the examples I know of could just say, you know, so I don't have the customer stories yet. around the corner, some of the I think they went around the corner. and I'll re-ask the question. Lisa: All right, so talk to me about, and some of the options are the key target audiences from any of the angles. Talk to me about why when So having the ability to do that of nuance to doing it. and the use cases for that sits inside the SAP Talking to me about what's of IBM think 2021.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Leo LaBranche | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Teresa | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Theresa | PERSON | 0.99+ |
Leo | PERSON | 0.99+ |
March | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
two minutes | QUANTITY | 0.99+ |
2019 | DATE | 0.99+ |
2018 | DATE | 0.99+ |
2017 | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Leo Labrunch | PERSON | 0.99+ |
ROSA | PERSON | 0.99+ |
OpenShift | TITLE | 0.99+ |
256 gigabyte | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
both partners | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
2021 | DATE | 0.99+ |
S four HANA | TITLE | 0.99+ |
one second | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
Red Hat | ORGANIZATION | 0.98+ |
a year ago | DATE | 0.98+ |
30% | QUANTITY | 0.98+ |
nine months ago | DATE | 0.98+ |
over 40 years | QUANTITY | 0.98+ |
S four | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
Red Hat | TITLE | 0.97+ |
ROSA | ORGANIZATION | 0.95+ |
IBM16 Leo LaBranche VCUBE
>> Narrator: From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome to theCUBE's digital coverage of IBM Think 2021. I'm Lisa Martin. Joining me next is Leo LaBranche, Director of Global Strategic Initiatives at AWS. Leo, welcome to theCUBE. >> Thank you, happy to be here. >> So talk to me about AWS and IBM. What's going on with the relationship? What are some of the things that are significant for both partners? >> Yeah, absolutely. IBM's relationship with us really started around 2016. I would say it was a little bit more opportunistic at the time. We knew there was an opportunity to go to market together, and we knew there were some great things we could do for our customers, but we hadn't quite cracked the code, so to speak, on when and where and why we were going to partner at that point. Fast forward to the 2017 to 2019 timeframe, and we became a lot more intentional about how we were going to go to market and where we were going to invest; areas such as SAP were early ones that we identified. The ball really started rolling in the 2018 timeframe. A number of different things came together: the acquisition of Red Hat, obviously. Red Hat was a very significant partner with AWS prior to the acquisition, and post-acquisition you combine that with ramping up a workforce focused on AWS, plus a number of competencies that IBM really invested in, around migration as an example, or SAP, and the ball started rolling quickly after that. Over the last 18 months or so, we've both invested significantly in the relationship, expanded it around the world, and joined resources and capabilities to make sure that we go to market in an intentional, partnered way rather than an opportunistic one. >> Oh, go ahead.
>> So I'd say, so far, that's absolutely been paying off, in that we're seeing a number of wins all around the world, across a broad set of industries as well as a broad set of technologies. The strength of IBM's consulting services in particular, but also their software, combined with the strength of our platform, has really proven to be successful for our customers. >> So it started in 2016 and really took shape in the last couple of years with that Red Hat acquisition. Talk to me about what's in this for customers. I imagine customers that are expanding or needing to move workloads into the cloud, or maybe taking more of a hybrid cloud approach. What are some of the big benefits that customers are going to gain from this partnership? >> Yeah, absolutely. The reality is IBM has a long and storied history and relationship with their customers, right? They run and manage many of the workloads, they know the customer's business incredibly well, and they have domain expertise in industry plus the technology expertise, from a professional services perspective, to really help navigate the waters and determine the right strategy for moving to the cloud. You combine that with the depth and breadth of the skills, capabilities, and services that AWS provides, and the fact that IBM has invested significantly in making sure their professional services are deeply steeped in our technology and capabilities. It's a great combination of really understanding the customer's needs plus, honestly, the art of the possible when it comes to the technology we provide. Together they can both accelerate the move to the cloud and mitigate its risk. >> That risk mitigation is key. So AWS recently launched, if I'm going to get this right, Red Hat OpenShift Service on AWS, or ROSA. Can you talk to me a little bit about ROSA? >> Yeah. Red Hat is obviously very well known and widely adopted within the enterprise.
We have built a fully managed service around Red Hat on AWS. What that means is you'll have access to essentially the capabilities that Red Hat would normally provide, but containerized within a solution that gives you access to AWS services, right. The other benefit: normally you would get a multi-vendor invoicing and cost model, where you get billed from Red Hat, billed from Amazon, billed from IBM. In this case, it's essentially a holistic service with a single invoicing and vendor relationship. So it's a combination of the capabilities that would normally be provided via Red Hat, combined with access to the cloud and all the interfaces and capabilities around OpenShift, et cetera, plus a more interesting and beneficial commercial model. >> So streamlined pricing models and a streamlined operating model for customers. Talk to me about some of the customers that have adopted it. Give me a look into some of the industries where you've seen good adoption and some of the results that they're gaining so far.
>> Teresa: Yeah, no I mean, there there's a product page and stuff, it's really about just the ability of customers to be able to run those solutions on the AWS console it is really the, the gist of it. And that it's fully integrated. >> I'm not sure some of the examples I know of are publicly refrenceable. >> Lisa: That's okay, you could just say, you know, customer in XYZ industry, that's totally fair. I'm not so worried about that. >> Teresa: Yeah I don't know if so ROSA. Lisa, ROSA was just launched in March and so it's brand new so I don't have the customer stories yet. So that's why I don't have them listed for Leo. >> Lisa: Oh, that's fine, that's totally fine. Maybe we can talk about, you know, since the launch was just around the corner, some of the things that have been going on, the momentum interest from customers, questions conversations can be more like that as you're launching the GTM. >> Yeah, and there's certainly a couple of industries that they have targeted I'm going to go with that as well as a couple of customers, like, >> Teresa: Thank you, Lisa. >> Lisa: Sure, of course. >> I think they went around the corner. (Lisa laughs) >> Lisa: All right, let me know and I'll re-ask the question. I'll tweak it a little bit. >> Yeah, go ahead. >> Lisa: All right, so talk to me about, ROSA just launched very recently. Talk to me about customer interest, adoption. Maybe some of the industries in particular if you're seeing any industry that's kind of really leading edge here and taking advantage of this new managed service. >> Yeah, absolutely, so no big surprise, right? The the existing customer base that currently uses Red Hat Linux, and some of the options in OpenShift, et cetera that are out today are then the right customers to potentially look at this when it comes to moving forward. You know, industry-wise certainly there are areas within financial services, banking, insurance, et cetera. 
We're also seeing some around manufacturing, a little less so, and some in media and telco as well. So it's a broad swath; the applicability of Red Hat and OpenShift is somewhat universal, but the early customer base has largely been in those three areas. >> I'm curious, what are the key target audiences? Are these Red Hat customers, AWS customers, IBM customers, all three? >> Yeah. I mean, there isn't necessarily a perfect customer that we're looking for, so much as existing customers that are currently using Linux or Red Hat. If a customer currently has a relationship with either AWS or IBM, there's an opportunity to look at it from any of those angles. If you're already on cloud, or you've already experienced AWS in some shape or form, there's an opportunity to leverage ROSA to further expand that capability and gain some more flexibility, so to speak. If you're already using IBM as a professional services provider and advisory firm, they absolutely have the expertise and understanding of this product set to help you understand how it could best be leveraged. So you can look at it from either of those dimensions. If it's a customer that's completely new to all of us, we're happy to talk to you, but it will take a little more explanation to understand why you should, or shouldn't, consider this multicloud OpenShift-type solution. >> Got it. Let's shift gears a bit and talk about SAP. When customers look to migrate SAP workloads to the cloud and evaluate cloud providers, those are really big, challenging, strategic decisions for leadership to make. Talk to me about why, when you're in those conversations, AWS is the best choice.
>> Absolutely. AWS, with SAP as with many of our services, is really looking to give you all the options that you could conceivably need or want in order to engage in cloud migration and transformation. For SAP specifically, there are a number of different options. You could go for a lift and shift, or upgrade from any database to Suite on HANA. You could potentially look to modernize and leverage cloud services post-migration as well. And then the final pinnacle of that is a complete transformation to S/4 or S/4HANA. As far as why AWS specifically, beyond just choice: from a cost perspective, it's pretty compelling. We have some pretty compelling business cases and use cases around the cost savings that come when you move from an on-premises SAP implementation to the cloud. Beyond that, the cloud migration itself is usually an opportunity to condense or reduce the number of instances you're paying for from an SAP perspective, which further reduces cost. From a reliability perspective, AWS is the world's most secure, extensive, reliable cloud infrastructure. Any of the instances that you put on AWS are, I'd say, fairly instantly provisioned in such a way that they are spread across multiple of what we call Availability Zones, which gives you the resiliency and stability that really no other cloud provider can match. On the security front, this is really a unique position: AWS plus IBM's depth in security services. IBM has done numerous years of professional services work in the security space, and they have roughly 8,000 or so cybersecurity experts. So the combination of their expertise in security plus the security of our platform is a great combination. I'd say the final one is around performance, right?
AWS offers many more cloud-native options around certified SAP instances, all the way from a 256 gigabyte option up to 24 terabytes, which is the largest of its kind. And as those who have implemented SAP know, it's very resource-intensive, so having that headroom from a performance perspective is a key differentiator for sure. >> Talk to me, from your perspective, about why IBM for SAP on AWS. Why should customers go that direction for their projects? >> Yeah, you know, IBM has over 40 years of experience in implementing SAP for their customers, right? They've done, I think, over 6,000 SAP migrations, and they have 40,000 SAP consultants around the globe. So from a capability and depth-of-experience perspective, there's a lot of nuance to doing an SAP implementation, particularly one that's moving from on-prem to the cloud, and they've got that experience. Beyond that, they have industry-specific solutions that are pre-configured. I think there are 12 industry-specific solutions pre-configured for SAP, which allow roughly 20 to 30% acceleration when it comes to implementation. So the combination of depth of experience and depth of capability, together with these solutions to accelerate, are all key reasons for sure. >> The acceleration you bring up is interesting, because in the last year we saw the acceleration of digital transformation projects, with businesses needing to pivot again and again and again to figure out how to survive and be successful in this very dynamic market in which we're still living. Anything industry-specific that you saw that was really driving the acceleration and the use cases for ROSA in the last year? >> Yeah. With SAP, we saw an interesting trend as a result of what everyone's been experiencing in the last year with COVID, et cetera.
You know, many organizations postponed large ERP implementations and large SAP migrations because of exactly what you just said: they weren't entirely sure what would need to be done in order to survive either competitive threats or, more broadly, the global threats that were occurring. What we saw was that really none of the transformations went away. They were put on hold for a period of time, say six to nine months ago, maybe even almost a year ago, in lieu of, I would say, more top-line, revenue-generating or innovative solutions that were focused specifically at the changing dynamics of COVID. Since then, we've seen a combination of those new ideas, right? New innovation around healthcare, of course, but also the public sector, and a lot around employment and engagement there. We've seen those new ideas and new innovations combined with the original goal of optimizing and transforming SAP and ERP, et cetera, and then combining the two to allow access to the data that sits inside the SAP implementation. Combine the data in SAP with all these new innovations, and then ultimately use that to capitalize on what the future business is going to be. That's been huge. It's been very interesting to see some organizations completely change their business model over the course of the last 12 months, in ways they probably had never intended before, right? But it's absolutely become an opportunity in a time of a lot of challenges. >> Agreed, there are silver linings, and we've seen a lot of those interesting opportunities, to your point, that businesses probably would never have come up with had there not been a forcing function like the one we've been living with. Leo, thank you for joining me today and talking to me about what's going on with IBM and AWS. We'll be excited to follow what happens with ROSA as it continues to roll out, and we appreciate you joining us on the program.
>> Absolutely, thank you for your time. >> For Leo LaBranche, I'm Lisa Martin. You're watching theCUBE's digital coverage of IBM Think 2021. (upbeat music)
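Leo's certified-instance range, from a 256 gigabyte option up to 24 terabytes, implies a simple right-sizing exercise: pick the smallest instance whose memory covers the HANA working set. Here is a minimal sketch of that selection logic; the size list is illustrative, not AWS's actual certified-instance catalog.

```python
# Sketch of memory right-sizing for an SAP HANA migration: choose the
# smallest listed size that covers the requirement. The size list below
# is illustrative, not AWS's actual certified catalog.
SIZES_GB = [256, 512, 1024, 2048, 4096, 9216, 18432, 24576]

def smallest_fit(required_gb):
    """Return the smallest listed memory size >= required_gb."""
    for size in sorted(SIZES_GB):
        if size >= required_gb:
            return size
    raise ValueError("requirement exceeds the largest available size")

# A 3 TB working set lands on the 4 TB option; 24 TB is the top of the
# range Leo cites.
```

The same shape of lookup applies to any tiered sizing decision: the candidate list changes, the selection rule does not.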
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Leo LaBranche | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Teresa | PERSON | 0.99+ |
Leo | PERSON | 0.99+ |
Theresa | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
two minutes | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
March | DATE | 0.99+ |
2018 | DATE | 0.99+ |
2019 | DATE | 0.99+ |
2017 | DATE | 0.99+ |
256 gigabyte | QUANTITY | 0.99+ |
Leo Labrunch | PERSON | 0.99+ |
12 industry | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
S four HANA | TITLE | 0.99+ |
OpenShift | TITLE | 0.99+ |
both partners | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
a year ago | DATE | 0.99+ |
today | DATE | 0.98+ |
over 40 years | QUANTITY | 0.98+ |
2021 | DATE | 0.98+ |
ROSA | ORGANIZATION | 0.98+ |
Red Hat | TITLE | 0.98+ |
S four | TITLE | 0.98+ |
one second | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
ROSA | PERSON | 0.98+ |
30% | QUANTITY | 0.98+ |
nine months ago | DATE | 0.97+ |
SAP HANA | TITLE | 0.97+ |
Red Hat Linux | TITLE | 0.97+ |
Compute Session 05
>> Thank you for joining us today for this session entitled Deploy Any Workload as a Service: When General Purpose Technology Isn't Enough. This session is on our HPE GreenLake platform. My name is Mark Seamans, and I'm a member of our GreenLake cloud services team. I'll be leading you through the material today, which will include both a slide presentation and an interactive demo of what your initial experience interacting with our GreenLake system looks like. So, let's go ahead and get started. One of the things we've noticed over the last decade, and I'm sure that you have as well, has been the tremendous focus on accelerating business while concurrently trying to increase agility and reduce costs. One of the ways a lot of businesses have gone about doing that has been leveraging cloud-based technology, and in many cases that's involved moving some of their workloads to the public cloud. With that much said, though, while organizations have been able to enjoy the cost control and agility associated with the public cloud, what we've seen is that the easy-to-move workloads have been moved, but a significant amount of workloads, as much as 70% in many cases, still remain on-prem. And there are reasons for that. In some cases it's due to data privacy and security concerns. Other times it's due to latency, of really needing high-performance access to data. And other times it's related to the interconnected nature of systems: you need a whole bunch of systems which together form an overall experience, and they need to be located close together.
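The screening Mark describes, workloads staying on-prem for data privacy, latency, or tight coupling, amounts to a simple rule: a workload is an easy cloud candidate only when none of those blockers apply. A toy sketch of that rule, with invented field names:

```python
# Toy sketch of the workload screening described above: a workload is an
# easy cloud candidate only if none of the named blockers apply.
# The blocker names and workload fields are invented for illustration.
BLOCKERS = ("data_privacy", "low_latency", "tightly_coupled")

def cloud_candidate(workload):
    """Return (is_candidate, list_of_blockers) for a workload dict."""
    reasons = [b for b in BLOCKERS if workload.get(b)]
    return (not reasons, reasons)

easy, _ = cloud_candidate({"name": "web frontend"})
hard, why = cloud_candidate({"name": "trading db", "low_latency": True})
```

Run against an inventory, a rule like this is one rough way to see why the "easy" 30% moved first and the rest stayed put.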
So, one of the challenges that we've worked with customers on, and have actually built our GreenLake solution to address, is this idea of achieving a cloud-like experience for all of your apps and data in a way that delivers the best of the public cloud along with that same type of experience on premises. As you think about some of the challenges customers are trying to address, one is this idea of agility: being able to move quickly, and being able to take a set of IT resources that you have and deploy them for different use cases and different models. One of the things we had a strong focus on as we built GreenLake is providing a common foundation, a common framework, to deliver that kind of agility. The next one is the term on the top right, scale. One of the words you may hear as cloud is talked about is this notion of elasticity, the ability to have something stretch and get larger on an on-demand basis. That's another challenge on premises that we've really tried to work through, and you'll see how we've addressed it. Now, obviously, you can achieve scale if you just put a ton of equipment in place, much more than you might need at any given time, but with that comes a lot of cost. So as you think about wanting an agile and flexible system, what you'd also like is something where the cost flexes as your needs grow, and is elastic in that it can get larger and then smaller again as needed. We'll talk about how we do that with our GreenLake solution. And then finally it's complexity: trying to abstract away from people all the complexity it takes to build these systems, and provide a single interface, a single experience, for people to manage all of their IT assets.
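The elastic-cost idea above, pay that flexes with use while capacity is staged ahead of need, reduces to a simple metering rule. Here is a hedged sketch; the rates, units, and threshold are invented for illustration and are not actual GreenLake terms.

```python
# Sketch of consumption-style metering: bill only for measured use, and
# flag a capacity expansion once utilization crosses a threshold.
# Rates, units, and the 70% threshold are illustrative, not GreenLake terms.
def meter(used_tb, installed_tb, rate_per_tb, threshold=0.70):
    utilization = used_tb / installed_tb
    return {
        "monthly_charge": round(used_tb * rate_per_tb, 2),   # pay per use
        "utilization": round(utilization, 2),
        "stage_more_capacity": utilization >= threshold,     # grow buffer
    }

bill = meter(used_tb=72, installed_tb=100, rate_per_tb=25.0)
# 72% utilization crosses the threshold, so extra capacity would be
# staged (and not billed until it is actually consumed).
```

The key design point is that the charge is driven by `used_tb`, not `installed_tb`: installed-but-unused buffer capacity costs nothing until it is consumed.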
So we do that through this solution called HPE GreenLake, and we call it the cloud that comes to you. What we're really trying to do here is take the notion of the cloud as a place, which is how people have thought about the public cloud, and turn it into the idea of the cloud as an experience. Regardless of whether it's in the public cloud, running on premises, or, as is the case with GreenLake, a mixture of those, maybe even a mixture of multiple public clouds with an on-prem experience, the cloud becomes something you experience and leverage, as opposed to a place where you have an account. That can include edge computing combined with co-location or data-center-based computing; it can include equipment in your own data center; and certainly it can include resources in the public cloud. So, let's take a look at how we deliver that experience and what some of the benefits are as we put these solutions in place. As you think about why you'd want to do this and the benefits you get from GreenLake, what we've seen, both in working with customers and in studies done with analysts, is that the benefits are numerous, but they come in the areas shown here. One is time to deployment: once you get this flexible and easy-to-manage environment in place, with what we'll show you are pre-built, pre-configured, managed-as-a-service solutions, your time to deployment for new workloads can shrink dramatically. The next: with these pre-configured solutions, combining the hardware and software technology with a set of managed services through our GreenLake managed services team, you can dramatically reduce the risk of putting a new workload in place.
So for example, if you wanted to deploy virtual desktop infrastructure, and maybe you haven't done that in the past, you can leverage a GreenLake VDI solution along with GreenLake management services to very predictably and very reliably put that solution in place. You're up and running, focusing on the needs of your users, with dramatically lowered risk, because the solution is built on a pre-validated, pre-certified foundation. I talked earlier about the idea that with GreenLake you have flexibility in scaling your use of the resources up, even though they're systems that may be in your data center or a colo, and also scaling it back down. So if you have workloads that flex over time, maybe an end-of-month or end-of-quarter cycle where certain workloads get larger and then smaller again, GreenLake's consumption-based billing means your costs can flow as your use of the systems flows. I'll show you a screen in just a few minutes that illustrates what that looks like. The last piece is a single pane of glass for control and insight into what's going on, and by that we mean not just cost, but also system utilization. You'll see in one of the screens I'll show that there's a utilization report covering all of your GreenLake resources, which you can view at any time. For example, with storage, as your capacity is consumed over time and you generate more data, the system will tell you, hey, you're getting up to about 60, 70% utilized. At that point, we would work with you to automatically deploy additional storage capacity, even though you won't be paying for it yet, so it's ready as your needs grow. So, what are some of these services that we deliver as part of GreenLake?
Well, they range, and you see here a portfolio of the services we offer. If you start at the bottom, it's foundational things: compute as a service, and I'll show you examples of that today, networking as a service, hyper-converged infrastructure as a service. Working our way up the stack, we move from basic services to platform services, things like VMware and containers as a service. And at the top layer, we offer complete solutions for targeted workloads. So if your need was, for example, to run machine learning and AI, and you wanted a complete environment put in place that you could leverage and consume on an as-a-service, consumption basis, we've got our MLOps solution that delivers that. Similarly, as I mentioned earlier, there's VDI for virtual desktops, or a solution for SAP HANA. So the solutions range from basic compute at the foundation all the way up to complete workload solutions, and the portfolio is expanding all the time. As you'll see, you can go to our hpe.com site and see a complete catalog of all the available GreenLake services. Let's take a minute and drill in on that MLOps solution, and look at how it fits together and what makes it up. GreenLake for MLOps is a fast path for data scientists. It's oriented around the needs of data scientists in your organization who want to get in and start analyzing data for advantage in your business. What comes with an MLOps solution from GreenLake starts, at the left side of the slide here, with a fully curated hardware platform: GPU-based nodes, data-science-optimized hardware, and all the storage and performance you're going to need to run these workloads at scale.
So that's one piece of it: a curated hardware stack for machine learning. Next, in the software component, we've pre-validated a whole bunch of the common stack elements that you would need. So beyond operating systems, things for doing continuous integration, and things like TensorFlow and Jupyter notebooks, are already pre-validated and delivered with this solution. So the tools that your data scientists will need come with this, ready to go, out of the box. And then finally, as this solution gets delivered, there's a services component to it beyond just us installing this full thing and delivering a complete solution to you: the GreenLake management services options, where our services teams can work side by side with data scientists to assist them in getting up to speed on the solution, leveraging the tools, and understanding best practices, if you want that assistance for deploying MLOps. And the whole thing's delivered as a service. Similarly, we have solutions for other workloads like SAP HANA that would leverage, again, different compute building blocks, but always in a way that's done for workload-optimized solutions and best practice as we build up that stack. And so your experience in consuming this is always consistent, but what's running under the hood isn't just a generic solution that you might see in, for example, a public cloud environment; it's a best practice, hardware-optimized, software-optimized environment built for each one of the workloads that we can deploy. So what I'd like to do at this point is actually show you what the process is like for actually specifying a GreenLake solution. And maybe we'll take a look at compute as our example today. So what I've got here is a browser experience. I'm just in my web browser, on the hpe.com website, in the GreenLake section, and I've actually clicked on this services menu and I'm going to go ahead and scroll down.
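The idea of a pre-validated software stack can be sketched as a simple manifest check. TensorFlow and Jupyter are named in the talk; the CI tool and everything else here are assumptions for illustration, not HPE's actual component list:

```python
# Toy manifest check for the kind of pre-validated MLOps stack
# described above. Component names are assumptions (e.g. Jenkins
# standing in for unspecified CI tooling), not HPE's deliverable.

PREVALIDATED_STACK = {
    "os": "linux",
    "ci": "jenkins",          # continuous-integration tooling (assumed)
    "framework": "tensorflow",  # named in the talk
    "notebooks": "jupyter",     # named in the talk
}

def missing_components(installed: set) -> set:
    """Return the pre-validated components not yet present in an environment."""
    return set(PREVALIDATED_STACK.values()) - installed

# A fresh environment with only the OS would still need the ML tooling:
gaps = missing_components({"linux"})
```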
And one of the things you can see here is that catalog of GreenLake services that I referenced. So, just like we showed you on the slide, this is that catalog of services that you can consume. I'm going to go to compute and we'll go about quoting a GreenLake compute solution. So we see when I clicked on that, one of the options I have is to get a price in my inbox. And I'll click on that to go in here to our GreenLake quick quote environment where, in my case here for our demonstration, I'll specify that I'd like to add to my GreenLake environment some additional general compute capability for some workloads that I might like to run. If I click on this, I go in and you notice here that I'm not going to specify server types. I'm really going to tell the system about the types of workloads that I'd like to run and the characteristics of those workloads. So for example, my workload choices would be adaptable performance or maybe density-optimized compute for highly scalable and high performance computing requirements. So, I'll select adaptable performance. I have a choice of processor types; in my case, I'll pick Intel. And I then say how many servers for the workloads that I want to run would be part of the solution. Again, in my case, maybe we'll quote a 20 server configuration. Now, as we think about the plans here, what you can see is we're really looking at the different options in terms of a balanced performance and price option, which is the recommended option. But if I knew that the workloads I was going to run were more performance optimized, I could simply click on that option, and the system under the hood does all the work to reconfigure the system. I'm not having to pick individual server options, as you see. So once I've picked between cost optimized, balanced, or performance, I can go in here and select the rest of the options.
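The quick-quote choices walked through here (workload profile, processor, server count, plan) can be modeled as a small configuration object. This is a hypothetical sketch; the field names and allowed values are taken loosely from the demo narration, not from any actual HPE API:

```python
# Hypothetical model of the GreenLake quick-quote inputs described
# above. Names and allowed values are illustrative assumptions.

from dataclasses import dataclass

WORKLOAD_PROFILES = {"adaptable_performance", "density_optimized"}
PLANS = {"cost_optimized", "balanced", "performance"}

@dataclass
class QuickQuote:
    workload: str
    processor: str
    servers: int
    plan: str = "balanced"  # "balanced" stands in for the recommended default

    def __post_init__(self):
        # Validate choices up front, mirroring how the web form only
        # offers a fixed menu of workload profiles and plans.
        if self.workload not in WORKLOAD_PROFILES:
            raise ValueError(f"unknown workload profile: {self.workload}")
        if self.plan not in PLANS:
            raise ValueError(f"unknown plan: {self.plan}")

# The configuration from the demo: adaptable performance, Intel, 20 servers.
quote = QuickQuote(workload="adaptable_performance", processor="Intel", servers=20)
```

Note the design point the demo emphasizes: the customer specifies workload characteristics, and translating that into individual server SKUs is the system's job, not the buyer's.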
Now, we'll start at the top right, and you see here, from a services perspective, this is where I specify how much services content and services assistance I'd like, all the way from just doing proactive metering of my solution through actual workload deployment assistance, versus me physically managing the equipment myself. The other piece I'll focus on is this variable usage. And this comes back to how much variable capacity, additional capacity, I'd like to have available in my data center for this solution. So if I know that my capacity flex could be larger in the future, as I flex up and down, I might pick a slightly larger amount of flex capacity at my location as part of this solution. With that, I'd select that workload. And the last step would be, I could click on get price, and this whole thing will be packaged up and sent to you in terms of the price of the solution and any other details that you might like to see. And I encourage you to go out to hpe.com and go through this process yourself for one of the workloads that might be of interest to you, to get a flavor of that experience. So if we move forward, once you've deployed your GreenLake solution, one of the things you see here is that single pane of glass experience in terms of managing the system, right? We've got a single panel that, all in one place, provides you access to your cost information for billing and what's driving that billing. In the middle of the top center, you can see we've got information on capacity planning, but then we can actually drill in and look at additional things like the services we offer around continuous compliance, capacity planning data for you to view and see how things like storage are filling, and cost control information with recommendations around how you could reduce or minimize your costs based on the usage profile that you have.
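The consumption billing model described throughout this section, a committed baseline plus variable flex capacity billed by actual use, can be sketched in one function. Rates and unit sizes below are made-up illustration values, not GreenLake pricing:

```python
# Sketch of consumption billing as described above: bill for at
# least the baseline commitment, plus any flex use above it.
# Rates and quantities are illustrative assumptions.

def monthly_bill(used_units: float, baseline_units: float,
                 rate_per_unit: float) -> float:
    """Charge the baseline floor, or actual usage when it exceeds the baseline."""
    return max(used_units, baseline_units) * rate_per_unit

# Costs flow up with an end-of-quarter spike, then back down to the
# baseline floor in a quiet month.
spike = monthly_bill(used_units=120, baseline_units=100, rate_per_unit=2.0)
quiet = monthly_bill(used_units=80, baseline_units=100, rate_per_unit=2.0)
```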
So, all of this is a fully integrated experience that can span components running both on-premise and also incorporating services that could be in the public cloud. Now, when we think about who's using this and why it's becoming attractive, you can imagine, just looking at this capability, that this ability to blend public cloud capabilities with on-premise or co-location private data center capabilities provides tremendous power and tremendous flexibility for users. And so we're seeing this adopted broadly as kind of a new way; people are looking to take the advantages of cloud, but bring them into a much more self-managed or on-premise experience. And some example customers here include deployments in the automotive field, both at Porsche and, over on the right, at Zenseact, which is the autonomous driving division of Volvo, where they're doing research with tremendous amounts of data to produce the best possible autonomous driving experience. And then in the center, Danfoss, who is one of the world's leading manufacturers of both electric and hydraulic control components. And so as they produce components themselves that drive optimized management of physical infrastructure, power, liquids and cooling, they're leveraging GreenLake for the same type of control and best practice deployment of their data centers and of their IT infrastructure. So again, somebody who's innovating in their own world, taking advantage of compute innovations to get the benefits of the cloud and the flexibility of a cloud-like environment, but running within their own premises. And it's not just those three customers, clearly. I mean, what we're seeing is, as you see on the slide, it's a unique solution in the market today. It provides the true benefits of the cloud, but with your own on-premise experience, and it provides expertise in terms of services to help you take best advantage of it.
And if you look at the adoption by customers, over a thousand customers in 50 countries have now deployed GreenLake based solutions as the foundation on which they're building their next generation IT architecture. So, there's a lot of unique capabilities that, as we built GreenLake, we put in place that really make this a single pane of glass and a very, very unified and elegant experience. So as we kind of wrap up, there's three things I want to call your attention to. One, GreenLake, which we focused a lot on today. I'd also like to call your attention to the Pointnext services, which are an extension of those GreenLake services that I talked about earlier, but there's a much broader portfolio of what Pointnext can do in delivering value for your organization. And then again, HPE Financial Services, who, much like what we do with GreenLake in this as-a-service consumption environment, can provide a lot of financial flexibility in other models and other use cases. So, I'd encourage you to take time to learn about each of those three areas. And then there's obviously many, many resources available online. And again, there's some that are listed here, but as a single takeaway from this slide, I encourage you to go to hpe.com. If you're interested in GreenLake, click on our GreenLake icon and you can take yourself through that quoting experience for what would be interesting, and certainly as well for our compute solutions, there's a tremendous amount of information about the leading solutions that HPE brings to market. So with that, I hope that's been an informative session. Thank you for spending a little bit of time with us today, and hopefully you'll take some time to learn more about GreenLake and how it might be a benefit for you within your organization. Thanks again.
Marc Staimer, Dragon Slayer Consulting & David Floyer, Wikibon | December 2020
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi everyone, this is Dave Vellante, and welcome to this CUBE conversation where we're going to dig into the area of cloud databases. Gartner just published a series of research in this space, and it's really a growing market, rapidly growing, with a lot of new players, obviously the big three cloud players. And with me are three experts in the field, two long-time industry analysts. Marc Staimer is the founder, president, and key principal at Dragon Slayer Consulting. And he's joined by David Floyer, the CTO of Wikibon. Gentlemen, great to see you. Thanks for coming on theCUBE. >> Good to be here. >> Great to see you too, Dave. >> Marc, coming from the great Northwest, I think first time on theCUBE, so it's really great to have you. So let me set this up. As I said, you know, Gartner published these, you know, three giant tomes. These are, you know, publicly available documents on the web. I know you guys have been through them, you know, several hours of reading. And so, night... (Dave chuckles) Good nighttime reading. The three documents are where they identify critical capabilities for cloud database management systems. And the first one we're going to talk about is operational use cases. So we're talking about, you know, transaction-oriented workloads, ERP, financials. The second one was analytical use cases, sort of an emerging space to really try to, you know, cover the data warehouse space and the like. And, of course, the third is the famous Gartner Magic Quadrant, which we're going to talk about. So, Marc, let me start with you. You've dug into this research. Just at a high level, you know, what did you take away from it? >> Generally, if you look at all the players in the space, they all have some basic good capabilities.
What I mean by that is, ultimately, when you have a transactional or an analytical database in the cloud, the goal is not to have to manage the database. Now they have different levels of where that goes, as to how much you have to manage or what you have to manage. But ultimately, they all handle the basic administrative, or the pedantic, tasks that DBAs have to do: the patching, the tuning, the upgrading. All of that is done by the service provider. So that's the number one thing they all aim at. From that point on, every database has different capabilities, and some will automate a whole bunch more than others, and will have different primary focuses. So it comes down to what you're looking for or what you need. And ultimately what I've learned from end users is that what they think they need upfront is not what they end up needing as they implement. >> David, anything you'd add to that, based on your reading of the Gartner work? >> Yes. It's a thorough piece of work. It's taking on a huge number of different types of uses and sizes of companies. And I think those are two parameters which really change how companies would look at it. If you're a Fortune 500 or Fortune 2000 type company, you're going to need a broader range of features, and you will need to deal with size and complexity in a much greater sense, and probably a lot of higher levels of availability, and reliability, and recoverability. Again, on the workload side, there are different types of workload, and as well as the two transactional and analytic workloads, I think there's an emerging type of workload which is going to be very important for future applications, where you want to combine transactional with analytic in real time, in order to automate business processes at a higher level, to make the business processes synchronous as opposed to asynchronous. And that degree of granularity, I think, is missed in a broader view of these companies and what they offer.
It's in my view trying in some ways to not compare like with like from a customer point of view. >> So the very nuance, what you talked about, let's get into it; maybe that'll become clear to the audience. So like I said, these are very detailed research notes. There were several, I'll say, analyst cooks in the kitchen, including Henry Cook, whom I don't know, but four other contributing analysts, two of whom are CUBE alum, Don Feinberg and Merv Adrian, both really, you know, awesome researchers. And Rick Greenwald, along with Adam Ronthal. And these are public documents; you can go on the web and search for these. So I wonder if we could just look at some of the data and bring up... Guys, bring up slide one here. And so we'll first look at the operational side, and they broke it into four use cases: the traditional transaction use cases, the augmented transaction processing, stream/event processing, and operational intelligence. And so we're going to show you, there's a lot of data here. So what Gartner did is they essentially evaluated critical capabilities, or think of features and functions, and gave them a weighting and then a rating. It was a weighting-and-rating methodology. The rating was on a scale of one to five, and then they weighted the importance of the features based on their assessment and on talking to the many customers they talk to. So you can see here on the first chart, we're showing both the traditional transactions and the augmented transactions, and, you know, the first thing that jumps out at you guys is that, you know, Oracle with Autonomous is off the charts, far ahead of anybody else on this. And actually guys, if you just bring up slide number two, we'll take a look at the stream/event processing and operational intelligence use cases. And you can see, again, you know, Oracle has a big lead.
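The weighting-and-rating methodology described here reduces to a weighted average: each capability gets a 1-5 rating, the weights sum to 1, and the product score is the weighted sum. The numbers below are made-up illustration values, not Gartner's actual weights or ratings:

```python
# Toy version of a critical-capabilities score as described above:
# per-feature ratings on a 1-5 scale, importance weights summing to
# 1.0, and a weighted-sum product score. Numbers are illustrative.

def capability_score(ratings: dict, weights: dict) -> float:
    """Weighted sum of feature ratings; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(ratings[feature] * weights[feature] for feature in weights)

ratings = {"availability": 5, "scalability": 4, "security": 3}
weights = {"availability": 0.5, "scalability": 0.3, "security": 0.2}
score = capability_score(ratings, weights)  # 5*0.5 + 4*0.3 + 3*0.2 = 4.3
```

The same ratings produce different scores under different use-case weightings, which is why one vendor can lead one use case and trail another.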
And I don't want to necessarily go through every vendor here, but guys, if you don't mind going back to the first slide, 'cause I think this is really, you know, the core of transaction processing. So let's look at this: you've got Oracle, you've got SAP HANA. You know, right there, interestingly, Amazon Web Services with Aurora, you know, IBM Db2, which, you know, goes back to the good old days, you know, down the list. But so, let me again start with Marc. So why is that? I mean, I guess this is no surprise, Oracle still owns Mission-Critical for the database space. They earned that years ago. They won that, you know, over the likes of Db2 and, you know, Informix and Sybase, and, you know, they emerged as number one there. But what do you make of this data, Marc? >> If you look at this data in a vacuum, you're looking at specific functionality; I think you need to look at all the slides in total. And the reason I bring that up is because I agree with what David said earlier, in that the use case that's becoming more prevalent is the integration of transaction and analytics. And more importantly, it's not just your traditional data warehouse, but it's AI analytics. It's big data analytics. Users are finding that they need more than just simple reporting. They need more in-depth analytics so that they can get more actionable insights into their data, where they can react in real time. And so if you look at it just as a transaction, that's great. If you look at it just as a data warehouse, that's great, or analytics, that's fine, if you have a very narrow use case. But I think what we're looking at today is not so narrow. It's sort of like, if you bought a streaming device and it only streams Netflix, and then you need to get another streaming device 'cause you want to watch Amazon Prime. You're not going to do that; you want one that does all of it, and that's kind of what's missing from this data.
So I agree that the data is good, but I don't think it's looking at it in a total, encompassing manner. >> Well, so before we get off the horses on the track, 'cause I love to do that (Dave chuckles), let's just kind of talk about that. So Marc, you're putting forth the... You guys seem to agree on the premise that a database that can do more than just one thing is of appeal to customers. I suppose that certainly makes sense from a cost standpoint. But, you know, guys, feel free to flip back and forth between slides one and two. But you can see SAP HANA, and I'm not sure what cloud that's running on, it's probably running on a combination of clouds, but, you know, scoring very strongly. I thought, you know, Aurora, you know, given AWS says it's one of the fastest growing services in history, and they've got it ahead of Db2 just on functionality, which is pretty impressive. I love Google Spanner, you know, love what they're trying to accomplish there. You know, you go down to Microsoft, they're kind of the... They're always a good-enough database, and that's how they succeed, et cetera, et cetera. But David, it sounds like you agree with Marc. I would think, though, Amazon kind of doesn't agree, 'cause they're like horses for courses. >> I agree. >> Yeah, yeah. >> So I wonder if you could comment on that. >> Well, I want to comment on two vectors. The first vector is the size of customer, you know, a mid-sized customer versus a Global 2000 or Global 500 customer. For the smaller customer, that's the heart of AWS, and they are taking their applications and putting pretty well everything into their cloud, the one cloud, and Aurora is a good choice. But when you start to get to requirements, as you do in larger companies, for very high levels of availability, the functionality is not there. You're not comparing apples... apples with apples; it's two very different things.
So from a tier one functionality point of view, IBM Db2 and Oracle have far greater capability for recovery and all the features that they've built in over there. >> Because of their... You mean 'cause of the maturity, right? Maturity and... >> Because of their... Because of their focus on transaction and recovery, et cetera. >> So SAP though, HANA, I mean, that's, you know... (David talks indistinctly) And then... >> Yeah, yeah. >> And then I wanted your comments on that, either of you or both of you. I mean, SAP, I think, has a stated goal of basically getting its customers off Oracle by 2024, and there's always this back-and-forth >> Yes, yes. >> between the two companies. Larry has said that ain't going to happen. You know, Amazon, we know, still runs on Oracle. It's very hard to migrate Mission-Critical; David, you and I know this well, Marc, you as well. So, you know, people often say, well, everybody wants to get off Oracle, it's too expensive, blah, blah, blah. But we talk to a lot of Oracle customers, and they're very happy with the reliability, availability, recoverability feature set. I mean, the core of Oracle seems pretty stable. >> Yes. >> But I wonder if you guys could comment on that, maybe Marc you go first. >> Sure. I've recently done some in-depth comparisons of Oracle and Aurora, and all their other RDS services, and Snowflake and Google and a variety of them. And ultimately what surprised me is, you made a statement that it costs too much; it actually comes in at half of Aurora in most cases. And it comes in at less than half of Snowflake in most cases, which surprised me. But no matter how you configure it, ultimately, based on a couple of things, each vendor is focused on different aspects of what they do. Take Snowflake, for example: they're on the analytical side, they don't do any transaction processing. But... >> Yeah, so if I can... Sorry to interrupt. Guys, if you could bring up the next slide, that would be great.
So that would be slide three, because now we get into the analytical piece, Marc, that you're talking about; that's what Snowflake's specialty is. So please carry on. >> Yeah, and what they're focused on is sharing data among customers. So if, for example, you're an automobile manufacturer and you've got a huge supply chain, you can share the data, without copying the data, with any of your suppliers that are on Snowflake. Now, can you do that with the other data warehouses? Yes, you can, but that's the focal point for Snowflake; that's where they're aiming it. Whereas, let's say, the focal point for Oracle is going to be performance. And their performance affects cost, 'cause the higher the performance, the less you're paying on the payment scale, because you're paying per second for the CPUs that you're using. Same thing on Snowflake, but the performance is higher, therefore you use less. I mean, there's a whole bunch of things that come into this, but at the end of the day what I've found is Oracle tends to be a lot less expensive than the prevailing wisdom. >> So let's talk value for a second, because you said something there, that yeah, the other databases can do that, what Snowflake is doing. But my understanding of what Snowflake is doing is they built this global data mesh across multiple clouds. So not only are they compatible with Google or AWS or Azure, but essentially you sign up for Snowflake and then you can share data with anybody else in the Snowflake cloud; that, I think, is unique. And I know, >> Marc: Yes. >> Redshift, for instance, just announced, you know, Redshift data sharing, and I believe it's just within, you know, clusters within a customer, as opposed to across an ecosystem. And I think that's where the network effect is pretty compelling for Snowflake. So independent of costs, you and I can debate about costs and, you know, the tra...
The lack of transparency, because with AWS you don't know what the bill is going to be at the end of the month, and that's the same thing with Snowflake. But I find that... And by the way guys, you can flip through slides three and four, because we've got... Let me just take a quick break: you have data warehouse and logical data warehouse, and then on the next slide, slide four, you've got data science, deep learning, and operational intelligence use cases. And you can see, you know, Teradata... Teradata came up in the mid 1980s and dominated in that space. Oracle does very well there. You can see Snowflake pop up, SAP with the Data Warehouse, Amazon with Redshift. You know, Google with BigQuery gets a lot of high marks from people. You know, Cloudera is in there, you know, so you see some of those names. But so, Marc and David, to me, that's a different strategy. They're not trying to be just a better data warehouse, an easier data warehouse. They're trying to create, Snowflake that is, an incremental opportunity, as opposed to necessarily going after, for example, Oracle. David, your thoughts. >> Yeah, I absolutely agree. I mean, ease of use is a primary benefit for Snowflake. It enables you to do stuff very easily. It enables you to take data without ETL, without any of the complexity. It enables you to share a number of resources across many different users and be able to bring in what that particular user, or part of the company, wants. So in terms of where they're focusing, they've got a tremendous ease of use, a tremendous focus on what the customer wants. And you pointed out yourself the restrictions there are on doing that both within Oracle and AWS. So yes, they have really focused very, very hard on that. Again, for the future, they are bringing in a lot of additional functions. They're bringing in Python into it... not Python, JSON into the database.
They can extend the database itself, whether they go the whole hog and put in transaction as well, that's probably something they may be thinking about but not at the moment. >> Well, but they, you know, they obviously have to have TAM expansion designs because Marc, I mean, you know, if they just get a 100% of the data warehouse market, they're probably at a third of their stock market valuation. So they had better have, you know, a roadmap and plans to extend there. But I want to come back Marc to this notion of, you know, the right tool for the right job, or, you know, best of breed for a specific, the right specific, you know horse for course, versus this kind of notion of all in one, I mean, they're two different ends of the spectrum. You're seeing, you know, Oracle obviously very successful based on these ratings and based on, you know their track record. And Amazon, I think I lost count of the number of data stores (Dave chuckles) with Redshift and Aurora and Dynamo, and, you know, on and on and on. (Marc talks indistinctly) So they clearly want to have that, you know, primitive, you know, different APIs for each access, completely different philosophies it's like Democrats or Republicans. Marc your thoughts as to who ultimately wins in the marketplace. >> Well, it's hard to say who is ultimately going to win, but if I look at Amazon, Amazon is an all-cart type of system. If you need time series, you go with their time series database. If you need a data warehouse, you go with Redshift. If you need transaction, you go with one of the RDS databases. If you need JSON, you go with a different database. Everything is a different, unique database. Moving data between these databases is far from simple. If you need to do a analytics on one database from another, you're going to use other services that cost money. So yeah, each one will do what they say it's going to do but it's going to end up costing you a lot of money when you do any kind of integration. 
And you're going to add complexity and you're going to have errors. There's all sorts of issues there. So if you need more than one, probably not your best route to go, but if you need just one, it's fine. And if, and on Snowflake, you raise the issue that they're going to have to add transactions, they're going to have to rewrite their database. They have no indexes whatsoever in Snowflake. I mean, part of the simplicity that David talked about is because they had to cut corners, which makes sense. If you're focused on the data warehouse you cut out the indexes, great. You don't need them. But if you're going to do transactions, you kind of need them. So you're going to have to do some more work there. So... >> Well... So, you know, I don't know. I have a different take on that guys. I think that, I'm not sure if Snowflake will add transactions. I think maybe, you know, their hope is that the market that they're creating is big enough. I mean, I have a different view of this in that, I think the data architecture is going to change over the next 10 years. As opposed to having a monolithic system where everything goes through that big data platform, the data warehouse and the data lake. I actually see what Snowflake is trying to do and, you know, I'm sure others will join them, is to put data in the hands of product builders, data product builders or data service builders. I think they're betting that that market is incremental and maybe they don't try to take on... I think it would maybe be a mistake to try to take on Oracle. Oracle is just too strong. I wonder David, if you could comment. So it's interesting to see how strong Gartner rated Oracle in cloud database, 'cause you don't... I mean, okay, Oracle has got OCI, but you know, you think a cloud, you think Google, or Amazon, Microsoft and Google. But if I have a transaction database running on Oracle, very risky to move that, right? And so we've seen that, it's interesting. 
Amazon's a big customer of Oracle, Salesforce is a big customer of Oracle. You know, Larry is very outspoken about those companies. SAP customers are many, most are using Oracle. I don't, you know, it's not likely that they're going anywhere. My question to you, David, is first of all, why do they want to go to the cloud? And if they do go to the cloud, is it logical that the least risky approach is to stay with Oracle, if you're an Oracle customer, or Db2, if you're an IBM customer, and then move those other workloads that can move whether it's more data warehouse oriented or incremental transaction work that could be done in a Aurora? >> I think the first point, why should Oracle go to the cloud? Why has it gone to the cloud? And if there is a... >> Moreso... Moreso why would customers of Oracle... >> Why would customers want to... >> That's really the question. >> Well, Oracle have got Oracle Cloud@Customer and that is a very powerful way of doing it. Where exactly the same Oracle system is running on premise or in the cloud. You can have it where you want, you can have them joined together. That's unique. That's unique in the marketplace. So that gives them a very special place in large customers that have data in many different places. The second point is that moving data is very expensive. Marc was making that point earlier on. Moving data from one place to another place between two different databases is a very expensive architecture. Having the data in one place where you don't have to move it where you can go directly to it, gives you enormous capabilities for a single database, single database type. And I'm sure that from a transact... From an analytic point of view, that's where Snowflake is going, to a large single database. But where Oracle is going to is where, you combine both the transactional and the other one. 
And as you say, the cost of migration of databases is incredibly high, especially transaction databases, especially large complex transaction databases. >> So... >> And it takes a long time. So at least a two year... And it took five years for Amazon to actually succeed in getting a lot of their stuff over. And for five years they could have been doing an awful lot more with the people that they used to bring it over. So it was a marketing decision as opposed to a rational business decision. >> It's the holy grail of the vendors, they all want your data in their database. That's why Amazon puts so much effort into it. Oracle is, you know, in obviously a very strong position. It's got growth in its new stuff, its old stuff... The problem Oracle has, like many of the legacy vendors, is that the size of the install base is so large and it's shrinking. The legacy stuff is shrinking. The new stuff is growing very, very fast, but it's not large enough yet to offset that; you see that in all the earnings. So very positive news on, you know, the cloud database, and they just got to work through that transition. Let's bring up slide number five, because Marc, this is to me the most interesting. So we've just shown all this detailed analysis from Gartner. And then you look at the Magic Quadrant for cloud databases. And, you know, despite Amazon being behind, you know, Oracle, or Teradata, or whomever in every one of these ratings, they're up to the right. Now, of course, Gartner will caveat this and say it doesn't necessarily mean you're the best, but of course, everybody wants to be in the upper right. We all know that, but it doesn't necessarily mean that you should go buy that database; I agree with what Gartner is saying. But look at Amazon, Microsoft and Google are like one, two and three. And then of course, you've got Oracle up there and then, you know, the others.
So I found that very curious. It is like there was a dissonance between the hardcore ratings and then the positions in the Magic Quadrant. Why do you think that is, Marc? >> It, you know, it didn't surprise me in the least because of the way that Gartner does its Magic Quadrants. How high up you go in the vertical is very much tied to the amount of revenue you get in that specific category for which they're doing the Magic Quadrant. It doesn't have to do with any of the revenue from anywhere else, just that specific quadrant, that specific type of market. So when I look at it, a big chunk of Oracle's revenue still comes from on-prem, not in the cloud, and you're looking just at the cloud revenue. Now on the right side, moving to the right of the quadrant, that's based on functionality, capabilities, the resilience, other things other than revenue. So visionary says, hey, how far are you on the visionary side? Now, how they weight that again comes down to Gartner's experts and how they want to weight it and what makes more sense to them. But from my point of view, the right side is as important as the vertical side, 'cause the vertical side doesn't measure the growth rate either. And if we look at these, some of these are growing much faster than the others. For example, Snowflake is growing incredibly fast, and that doesn't reflect in these numbers from my perspective. >> Dave: I agree. >> Oracle is growing incredibly fast in the cloud. As David pointed out earlier, it's not just in their cloud where they're growing, but it's Cloud@Customer, which is basically an extension of their cloud. I don't know if that's included in these numbers or not on the revenue side. So there are a number of factors... >> Should it be, in your opinion, Marc? Would you include that in your definition of cloud? >> Yeah. >> The things that are hybrid and on-prem, would that count as cloud... >> Yes. >> Well especially... Well, again, it depends on the hybrid.
For example, if you have your own license, on your own hardware, but it connects to the cloud, no, I wouldn't include that. If you have a subscription license and subscription hardware that you don't own, but it's owned by the cloud provider, and it connects with the cloud as well, that I would. >> Interesting. Well, you know, to your point about growth, you're right. I mean, it's probably looking at, you know, revenues looking, you know, backwards; from guys like Snowflake, it will be double, you know, by the next one of these. It's also interesting to me on the horizontal axis to see Cloudera and Databricks further to the right than Snowflake, because that's kind of the data lake cloud. >> It is. >> And then of course, you've got, you know, the other... I mean, database used to be boring, so... (David laughs) It's such a hot market space here. (Marc talks indistinctly) David, your final thoughts on all this stuff. What does the customer take away here? What should I... What should my cloud database management strategy be? >> Well, I was positive about Oracle, so let's take some of the negatives of Oracle. First of all, they don't make it very easy to run on other platforms. They have put in terms and conditions which make it very difficult to run on AWS, for example; you get double counts on the licenses, et cetera. So they haven't played well... >> Those are negotiable, by the way. Those... You bring it up on the customer. You can negotiate that one. >> Can be, yes. They can be, yes, if you're big enough they are negotiable. But Oracle certainly hasn't made it easy to work with other plat... Other clouds. What they did very... >> How about Microsoft? >> Well, no, that is exactly what I was going to say. Oracle, with adjacent workloads, has been working very well with Microsoft, and you can then use Microsoft Azure and use a database adjacent in the same data center, integrated very nicely indeed.
And I think Oracle has got to do that with AWS, and it's got to do that with Google as well. It's got to provide a service for people to run things where they want to run them, not just on the Oracle cloud. If they did that, that would, in my opinion, be a very strong move and would make the capabilities available in many more places. >> Right. Awesome. Hey Marc, thanks so much for coming to theCUBE. Thank you, David, as well, and thanks to Gartner for doing all this great research and making it public on the web. If you just search critical capabilities for cloud database management systems for operational use cases, that's a mouthful, and then do the same for analytical use cases, and the Magic Quadrant, there's the third doc for cloud database management systems. You'll get about two hours of reading, and I learned a lot, and I learned a lot here too. I appreciate the context, guys. Thanks so much. >> My pleasure. All right, thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)
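Marc's explanation of how the two quadrant axes are built — the vertical tied to revenue in the rated category, the horizontal a weighted blend of capability scores — can be sketched as a toy scoring model. Every vendor, score, and weight below is invented for illustration; Gartner's actual methodology is far more involved:

```python
# Toy sketch of Marc's point about the Magic Quadrant axes: the vertical
# ("ability to execute") leans heavily on revenue in the rated category,
# while the horizontal ("completeness of vision") is a weighted blend of
# capability scores. Every vendor, score, and weight here is invented.

def quadrant_position(category_revenue_score, capability_scores, weights):
    """Return (x, y): x = weighted capability average, y = revenue-driven score."""
    x = sum(s * w for s, w in zip(capability_scores, weights)) / sum(weights)
    return (x, category_revenue_score)

# A large incumbent: big category revenue, middling capability ratings.
incumbent = quadrant_position(4.5, [3.0, 3.5, 4.0], [2, 1, 1])
# A fast-growing challenger: small revenue today, strong capabilities.
# Note that growth rate enters neither axis, which was Marc's complaint.
challenger = quadrant_position(2.0, [4.5, 5.0, 4.5], [2, 1, 1])

print(incumbent)   # (3.375, 4.5)  -> higher up the quadrant
print(challenger)  # (4.625, 2.0)  -> further right, but lower
```

The takeaway matches the dissonance Dave flagged: a vendor can out-score a rival on every capability and still land lower in the quadrant, purely on category revenue.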
Monica Kumar & Bala Kuchibhotla, Nutanix | Introducing a New Era in Database Management
>> Narrator: From around the globe. It's theCUBE with digital coverage of A New Era In Database Management. Brought to you by Nutanix. >> Hi, I'm Stu Miniman. And welcome to this special presentation with Nutanix. We're talking about A New Era In Database Management. To help us dig into it, first of all, I have the Senior Vice President and General Manager of Nutanix Era Databases and Business Critical Applications, that is Bala Kuchibhotla. And one of our other CUBE alums, Monica Kumar, who's an SVP also with Nutanix. Bala, Monica, thank you so much for joining us. >> Thank you, thank you so... >> Great to be here. >> All right, so first of all, Bala, a new Era. We have a little bit of a pun. You've got me with some puns there. Of course we know that the database for Nutanix solution is Era. So, we always like to bring out the news first. Why don't you tell us, what does this mean? What is Nutanix announcing today? >> Awesome. Thank you, Stu. Yeah, so today's a very big day for us. I'm super excited to inform all of us and our audience that we are announcing the Era 2.0 GA bits; customers can download them and start playing with them. So what's new with Nutanix Era 2.0? As you know, 1.0 is a single-cluster solution, meaning the customers have to have a Nutanix cluster and then run their databases on that same cluster. But with Era 2.0, it becomes a multi-cluster solution. It's not just a multi-cluster solution, but customers can enjoy databases across clusters. That means that they can have their Always On Availability Groups SQL servers, their Postgres servers across Nutanix clusters. That means that they can spread across Availability Zones. Now, the most interesting point of this is, it's not just across clusters; customers can place these clusters in the cloud. That is AWS.
You can have a Nutanix cluster in AWS, and then the primary production clusters maybe on Nutanix, the primary enterprise cloud kind of stuff. That's number one. Number two, we have extended our data management capabilities, our data management platform capabilities, with what we call the global time machine. Global time machine with data access management. Like a racing river, you need to harness the racing river by constructing a dam and then harness it for multiple purposes, either irrigation projects or hydroelectric projects kind of stuff. You need to do similar things for your data in an enterprise company. You need to make sure that the right persons get the right amount of data, so that you don't kind of give all production data to everyone in the company. At the same time, they also need it accessible; with one click they can get the database, the data they want. So that's the data access management. Imagine a QA person only gets the sanitized snapshots or sanitized database backups for them to create their copies. And then we are extending our database engine portfolio too, to introduce SAP HANA to the thing. As you know, we support Oracle today, Postgres, MySQL, MariaDB, SQL Server. I'm excited to inform that we are introducing SAP HANA. Our customers can do one-click sandbox creation in an environment for SAP HANA on the platform. And lastly, I'm super excited to inform that we are becoming a Postgres vendor. We are willing to give 24 by seven, 365-day support for the Postgres database engine that's provisioned through the Nutanix Era platform. So this way the customers can enjoy the engine, platform, service all together in one single shot, with a single company that they can call and get the support they want. I'm super duper excited that this is going to give the customers a truly multicloud, multi-cluster data management platform. Thank you. >> Yeah. And I'll just add to that too.
It's fantastic that we are now offering this new capability. I just want to kind of remind our audience that Nutanix for many years has been providing the foundation, the infrastructure software, where you can run all these multiple workloads, including databases, today. And what we're doing with Era is fantastic because now we are giving our customers the ability to take that database that they run on top of Nutanix and provide that as a service now. So now we are talking to a whole different organization here. It's database administrators, it's teams that run databases, it's teams that care about data and providing access to data in organizations. >> Well, first of all, congratulations. I've talked for a couple of years to the teams at Nutanix, especially some of the people working on PostgreSQL, really exciting stuff, and you've both seen really the unlocking of databases. It used to be, we talked about, I have one database, it's kind of the one that everything runs on. Now, customers, they have more databases. You talked about that flexibility then, of where we run it. We'd love to hear, maybe Monica we start with you. You talk about the customers, what does this really mean for them? Because these are our most mission-critical applications we talk about; we're not just throwing our databases around. I don't wake up in the morning and say, oh, let me move it to this cloud and put it in this data center. This needs to be reliable. I need to have access to the data. I need to be able to work with it. So, what does this really mean? And what does it unlock for your customers? >> Yes, absolutely, I love to talk about this topic. I mean, if you think about databases, they are a means to an end. And in this case, the end is being able to mine insights from the data and then make meaningful decisions based on that. So when we talk to customers, it's really clear that data has now become one of the most valuable assets that an organization owns.
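The data access management idea Bala outlined — the QA persona only ever gets sanitized snapshots, never raw production data — reduces to a policy check before any copy is handed out. A minimal sketch; the roles, copy kinds, and function here are hypothetical illustrations, not Era's real API:

```python
# Hypothetical sketch of persona-based data access management: each role may
# only receive certain kinds of database copies (Bala's example: QA gets
# sanitized snapshots only, never production data). Names are invented.

ACCESS_POLICY = {
    "dba":       {"production", "sanitized", "masked"},
    "qa":        {"sanitized"},
    "developer": {"sanitized", "masked"},
}

def request_copy(role, copy_kind):
    """Grant or refuse a database copy based on the role's policy."""
    allowed = ACCESS_POLICY.get(role, set())
    if copy_kind not in allowed:
        raise PermissionError(f"{role!r} may not receive a {copy_kind!r} copy")
    return f"clone-{copy_kind}-for-{role}"

print(request_copy("qa", "sanitized"))   # qa gets a sanitized clone
# request_copy("qa", "production")       # would raise PermissionError
```

The point of centralizing the check is the "right person, right data" goal from the interview: the policy lives in one place instead of in every DBA's head.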
Well, of course, in addition to the employees that are part of the organization and our customers, data is one of the most important assets. But for most organizations, the challenge they face is a lot of data gets collected. And in fact, we've heard numbers thrown around for many years, like almost 80% of the world's data has been created in the last three or four years. And data is doubling every two years in terms of volume. Well, guess what? Data gets collected. It sits there, and organizations are struggling to get access to it with the right performance, the right security and regulation compliance, the reliability, availability. By persona, developers need certain access, analysts need different access, lines of business need different access. So what we see is organizations are struggling in getting access to data at the right time, by the right person on the team, when they need it. And I think that's where database as a service is critical. It's not just about having the database software, which is of course important, but how you make that service available to your stakeholders, to developers, to lines of business, within the SLAs that they demand. So is it instant? How quickly can you make it available? How quickly can you have access to data and do something meaningful with it, and mine the insights for a smarter business? And then the one thing I'd like to add is that's where IT and business really come together. That's the glue. If you think about it today, what is the glue between an IT organization and a business organization? It's the data. And that's where they're really coming together to say, how can we together deliver the right service? So you, the business owner, can deliver the right outcome for our business. >> That's very true. Maybe I'll just add a couple of comments there. What we're trying to do is we are trying to bring the cloud experience, the RDS-like experience, to the enterprise cloud and then the hybrid cloud.
So the customers will now have a choice of cloud. They don't need to be locked into a particular cloud, and at the same time they enjoy the true cloud utility experience. We help customers create clouds, database clouds, either by themselves, if they're big enough to manage the cloud themselves, or they can partner with GSIs like Wipro or HCL and then create a completely managed database service kind of stuff. So this brings cloud neutrality, portability for customers, and gives them the choice on their terms, Stu. >> Well Bala, absolutely, we've seen huge growth in managed services as you've said; maybe bring us inside a little bit. What does this free up for customers? What we've said for so long is that back when HCI first started, some of the storage administrators might bristle because you were taking things away from them. It was like, no, we're going to free you up to do other things that, as Monica said, deliver more business value, not mapping LUNs and doing that. How about from the DBA standpoint? What is some of that repetitive, undifferentiated heavy lifting that we're going to take away from them so that they can focus on the business value? >> Yep. Thank you, Stu. So think about this. We all do copy paste operations on laptops. Something of that sort happens in the data center at a much larger scale. Meaning the same kind of copy paste operation happens to databases at petabytes and terabytes of scale. Hundreds of petabytes. It has become the most dreaded, complex, long-running, error-prone operation. Why should it be that way? Why should the DBAs spend time on all these mundane tasks and then get busy for every cloning operation? It's a two-day job for me, every backup job; provisioning is like a whole job, it takes like three days. We can take away this undifferentiated heavy lifting and then let the DBAs focus on designing the cloud for them. Looking at the database tuning, design, data modeling, ML aspects of the data kind of stuff.
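The petabyte-scale "copy paste" Bala describes is exactly what copy-on-write cloning sidesteps: a clone starts as a reference to shared snapshot blocks, and only blocks that get written are ever copied. A toy model of the general technique — this is an illustration, not Nutanix's actual implementation:

```python
# Toy copy-on-write clone: the clone shares the base snapshot's blocks until
# it writes, so creating it is instant regardless of database size. This
# sketches the general technique behind snapshot-based cloning, nothing more.

class Snapshot:
    def __init__(self, blocks):
        self.blocks = blocks          # immutable base blocks

class Clone:
    def __init__(self, snapshot):
        self.base = snapshot          # O(1): just a reference, no data copied
        self.overlay = {}             # only modified blocks live here

    def read(self, i):
        return self.overlay.get(i, self.base.blocks[i])

    def write(self, i, data):
        self.overlay[i] = data        # copy-on-write: base stays untouched

base = Snapshot(["prod-block"] * 1_000_000)   # stand-in for terabytes
clone = Clone(base)                            # "provisioned" instantly
clone.write(42, "test-data")

print(clone.read(42))        # test-data
print(clone.read(0))         # prod-block (still shared with the base)
print(len(clone.overlay))    # 1 -- only one block was actually copied
```

Creating the clone never touches the million base blocks, which is why cloning drops from a two-day copy job to effectively a metadata operation.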
So we are freeing up the database ops people, in a way that they can design the database cloud and make sure their energy is focused on high-value things, more towards the business kind of stuff. >> Yeah. And you know, automation is really important. What you were talking about is automating mundane grunt work. Like, IT spends 80% of its time in maintaining systems. So then where is the time for innovation? So if we can automate stuff that's repetitive, stuff that the machine can do, the software can do, why not? And I think that's what our database as a service offering does. And I would add this: the big thing our database as a service does really is provide IT organizations and DB organizations a way to manage heterogeneous databases too. It's not like, here's my environment for Postgres. Here's my environment for MySQL. Here's my environment for Oracle. Here's my environment for SQL Server. Now with a single offering, a single tool, you can manage your heterogeneous environment across different clouds, an on-premises cloud or a public cloud environment. So I think that's the beauty we are talking about with Nutanix's Era. It truly, truly gives organizations that single environment to manage heterogeneous databases, apply the same automation and the ease of management across all these different environments. >> Yeah. I'll just add one comment to that. In a true managed PaaS, obviously, customers in like a single shop go to public cloud, just click through, and then they get the database endpoint. And then someone is managing the database for them. But if you look at the enterprise data centers, they need to bring that enterprise governance and structure to these databases. It's not like anyone can do anything to any of these databases.
So we are kind of getting the best of both: the needed enterprise governance by these enterprise people, at the same time bringing the convenience for the application teams and developers who want to consume these databases like a utility. So bringing the cloud experience, bringing the enterprise governance. At the same time, I'm super confident we can cut down the cost. So that is what Nutanix Era is all about, across all the clouds, including the enterprise cloud. >> Well, Bala, being simpler and being less expensive are some of the original promises of the cloud that don't necessarily always come out there. So, that's super important. One of the other things, you talk about these hybrid environments. I want to understand these environments: if I'm in the public cloud, can I still leverage some of the services that are in the public cloud? So, if I want to run some analytics, if I want to use some of the phenomenal services that are coming out every day, is that something that can be done in this environment? >> Yeah, beautiful. Thank you, Stu. So we are seeing customers of two categories. There is the public cloud customer, completely born in public cloud, cloud-native services. They realize that for every database they're maintaining five or seven different copies, and the management of these copies is prohibitive, just because every copy is a full copy in the public cloud. Meaning you take a backup snapshot and restore it; your meter, like a New York taxi, it starts running for the EBS that you are looking at, kind of stuff. So they can leverage Nutanix clusters and then have a highly efficient cloning capability so that they can cut down some of these costs for these secondary environments that I talk about. What we call it is copy data management; that's one kind of use case. The other kind of customers that we are seeing, for them cloud is a phenomenon. There's no way around it, people have to move to cloud.
That's something that happens as a C-level mandate. These customers are enjoying their database experience on our enterprise cloud. But when they try to go to these big hyperscalers, they are seeing the disconnect, that they're not able to enjoy some of the things that they are seeing on the enterprise cloud with us. So in this transition, they are talking to us: can you get this kind of functionality with the Nutanix platform onto some of these big hyperscalers? So there are customers moving on both sides. Some customers that are in public cloud get to enjoy our facilities like copy data management on Nutanix. Customers that are on-prem but have a mandate to go to public cloud, with our hybrid cloud strategy, get to enjoy the same kind of convenience that they are seeing on enterprise and bring the same kind of governance that they are used to. So those are the kinds of customers we see. Yeah. >> Yeah. Monica, I want to go back to something you talked about: customers dealing with that heterogeneous environment they have. It reminds me of a lot of the themes that we talked about at nutanix.next, because customers have multiple clouds they're using, which requires different skillsets, different tooling. It's that simplicity layer that Nutanix has been working to deliver since day one. What are you hearing from your customers? How are they doing with this? And especially in the database world, what are some of those challenges that they're really facing that we're looking to help solve with this solution today? >> Yeah. I mean, if you think about it, what customers, at least in our experience, what they want or what they're looking for is this modern cloud platform that can really work across multiple cloud environments. 'Cause people don't want to change running, let's say, an Oracle database on-prem on a certain stack and then use a whole different stack to run the Oracle database in the cloud. What they want is the same exact foundation.
So that they can, for sure, have the right performance, availability, reliability; the applications don't have to be rewritten on top of the Oracle database. They want to preserve all of that, but they want the flexibility to be able to run that cloud platform wherever they choose to. So that's one. So modernizing and choosing the right cloud platform is definitely very important to our customers, but you nailed it on the head, Stu. It's then about, how do you manage it? How do you operate it on a daily basis? And that's where our customers are struggling, with multiple types of tools out there, a custom tool for every single environment. And that's what they don't want. They want to be able to manage simply, across multiple environments, using the same tools and skillsets. And again, and I'm going to beat the same drum, but that's where Nutanix shines. That's a design principle: it's the exact same technology foundation that you provide to customers to run any applications. In this case it happens to be databases. The exact same foundation you can use to run databases on-prem or in the cloud. And then on top of that, using Era, boom! Simple management, simple operations, simple provisioning, simple copy data management, simple patching; all of that becomes easy using just a single framework to manage and operate. And I will tell you this, when we talk to customers, what is it that DBAs and database teams are struggling with? They're struggling with SLAs and performance and scalability, that's one. Number two, they're struggling with keeping it up and running and fulfilling the demands of the stakeholders, because they cannot keep up with how many databases they need to keep provisioning and patching and updating. So at Nutanix now we are actually solving both those problems with the platform. We are solving the problem of a very specific SLA that we can deliver in any cloud. And with Era, we're solving the issue of that operational complexity.
We're making it really easy. So again, IT stakeholders, DBAs, can fulfill the demands of the business stakeholders and really help them monetize the data. >> Yeah. I'll just add on with one concrete example too. Like, we have a big financial customer, they want to run Postgres. They are looking at the public cloud: can we do a managed services kind of stuff? But you look at this: the cost difference between Postgres on your company infrastructure versus managed services is almost like 3X to 4X. Now, with the Nutanix platform and Era, we were able to show that they can do it at a much reduced cost, a managed database service experience, including their DBA cost and including the cloud administration cost. Like, we added the infrastructure picture, we added the people who are going to manage the cloud, the internal cloud, and the experience being plus-plus of what they can see in public cloud. That's what makes the big difference. And this is where data sovereignty, data control, compliance and infrastructure governance, all these things coupled with the cloud experience, is where customers really see the value of Era and the enterprise cloud, and with an extension to the public cloud, with our hybrid cloud strategy. If they want to move this workload to public cloud, they can do it. So, today with AWS clusters and tomorrow with our Azure clusters. So that gives them that kind of insurance, not getting locked in by a big hyperscaler, but at the same time enjoying the cloud experience. That's what big customers are looking for. >> Alright Bala, all the things you laid out here, what's the availability of Era 2.0? >> Era 2.0 is actually available today. The customers can download the bits and enjoy. We already have bunches of beta customers who are trying it out, big telco companies, financial companies, and even big companies that manage big pensions kind of stuff. People are looking to us.
In fact, there are customers who are asking, when is this available for Azure clusters, so that we can move some of our workloads there and manage the databases in Azure clusters. So it is available, and I'm looking forward to great feedback from our customers. And I'm hoping that it will solve some of their major critical problems. And in the process they get the best of Nutanix. >> Monica, last question I have for you. This doesn't seem like it's necessarily the same traditional infrastructure go-to-market for a solution like this. If I think back, people think of HCI, it was like, oh well, it was kind of a new box. We know Nutanix is a software company. More of what you do today is subscription based. So, maybe if you could talk a little bit to just how Nutanix goes to market with a solution like this. >> Yeah. And you know what, maybe people don't realize it, but I'm hoping a lot of people do, that Nutanix is not just an infrastructure company anymore. In the last many years we've developed a full cloud platform. Not only do we offer the infrastructure services with hyperconverged infrastructure, which is now really the foundation, it's the hybrid cloud infrastructure. As you know, Stu, we talked to you a month ago, and we talked about the evolution of HCI to really becoming the hybrid cloud infrastructure. But in addition to that, we also offer other data center services around storage, DR, networking. We also offer DevOps services with application provisioning, automation, application orchestration, and then of course the database services that we're talking about today, and we offer desktop services. So Nutanix has really evolved in the last few years into a complete cloud platform, really focusing on the applications and workloads that run on top of the infrastructure stack. So not just the infrastructure layer, but how can we be the best platform to run your databases?
Your end-user computing workloads, your analytics applications, your enterprise applications, cloud-native applications. So that's what this is. And databases is one of our most successful workloads; it runs on Nutanix very well because of the way the infrastructure software is architected. It's really great at scale and high performance because, again, of our superior architecture. And now with Era, it's a tool, it's all in one. Now it's also about really simplifying the management of databases and delivering them speedily and with agility to drive innovation in the organizations. >> Yep. Thank you, Monica. I'll just add a couple of lines of comments to that. GTM for databases with Era 2.0 is going to be a challenge. Historically we are seen as an infrastructure company, but the beauty of databases is they're so adjacent to the infrastructure, the storage, so the language becomes slightly easier. And in fact, this holistic way of looking at solving the problem at the solution level rather than the infrastructure level helps us to go to a different kind of buyer, different kinds of decision makers, and we are learning. And I can tell you confidently the kind of progress that we have seen in about one year, the kind of customers that we are winning. And we are proving that we can bring a big difference to them. Though there is the challenge of GTM, speaking the language of databases, the sheer nature of the cloud platform, the way the hyperscalers work, that's the kind of language that we take: you can run your solution, and here is how you can cut down your database backup time from hours to less than a minute. Here's how you can cut down your patching from 16 hours to less than one hour. Here is how you can cut down your provisioning time from multiple weeks to a matter of minutes. That holistic way of approaching it, coupled with the power of the platform, is really making the big difference for us.
And I usually ask everyone I meet, can you give us an opportunity to cut down your database cost, the TCO, the total cost of operations, by close to 50%? That gets them excited; they lean in and say, how do you plan to do it? And then we go about how we do it, and we do a deep dive and a POC and all of it. So I'm excited. I think this is going to be a big play for Nutanix. We're going to make a big difference. >> Absolutely. Well, Bala, congratulations to the team. Monica, both of you, thank you so much for joining, really excited for all the announcements. >> Thank you so much. >> Thank you >> Stay with us. We're going to dig in a little bit more with one more interview for this product launch of the new Era and database management from Nutanix. I'm Stu Miniman, as always, thank you for watching theCUBE. (cool music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Monica | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Monica Kumar | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
$3X | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
DBS | ORGANIZATION | 0.99+ |
two day | QUANTITY | 0.99+ |
$4X | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Postgres | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
24 | QUANTITY | 0.99+ |
16 hours | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
three days | QUANTITY | 0.99+ |
Bala Kuchibhotla | PERSON | 0.99+ |
less than one hour | QUANTITY | 0.99+ |
Bala | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
SAP HANA | TITLE | 0.99+ |
365 day | QUANTITY | 0.99+ |
Stu Minimam | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Hundreds of petabytes | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.98+ |
Azure | TITLE | 0.98+ |
a month ago | DATE | 0.98+ |
today | DATE | 0.98+ |
SQL | TITLE | 0.98+ |
two categories | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
SAP HANA | TITLE | 0.98+ |
HCI | ORGANIZATION | 0.98+ |
single cluster | QUANTITY | 0.97+ |
seven different copies | QUANTITY | 0.96+ |
single shop | QUANTITY | 0.96+ |
almost 80% | QUANTITY | 0.96+ |
nutanix.next | ORGANIZATION | 0.96+ |
seven | QUANTITY | 0.96+ |
single framework | QUANTITY | 0.95+ |
one kind | QUANTITY | 0.94+ |
Eratory dot two | TITLE | 0.94+ |
every two years | QUANTITY | 0.93+ |
Tarkan Maner & Rajiv Mirani, Nutanix | Global .NEXT Digital Experience 2020
>> Narrator: From around the globe, it's theCUBE with coverage of the Global .NEXT Digital Experience brought to you by Nutanix. >> Welcome back, I'm Stu Miniman and this is theCUBE's coverage of the Nutanix .NEXT Digital Experience. We've got two of the c-suite here to really dig into some of the strategy and partnerships talked about at their annual user conference. Happy to welcome back to the program two of our CUBE alumni: first of all, we have Tarkan Maner, the Chief Customer Officer at Nutanix, and joining us also Rajiv Mirani, the Chief Technology Officer, CTO. Rajiv, Tarkan, great to see you both. Thanks so much for joining us on theCUBE. >> Great to be back. >> Good to see you. >> All right. So Tarkan, talk about a number of announcements. You had some big partner executives up on stage. As I just talked with Monica about, Scott Guthrie wearing the signature red polo; you had Kirk Skaugen from Lenovo, of course a real growing partnership with Nutanix; and a bunch of others. And even, my understanding is, the partner program for how you go to market has gone through a lot of change. So there's a whole lot of stuff to go into; we don't need to tackle all the partnerships here upfront, but give us some of the highlights from your standpoint. >> I'll tell you this: my dear friend Rajiv and I have been really busy. The last few months, and the last 12 months, have been super, super busy for us. And as you know, among the latest announcements we made are the new $750 million investment from Bain Capital, and amazing FY20 results, big Q4 results. And obviously in the last few months, big announcements with AWS as part of our hybrid multicloud vision, and obviously Rajiv and I are making several announcements, product announcements, partner announcements at .NEXT. So at a high level, I know Rajiv is going to cover this a little bit more in detail, but we covered everything under these three premises: run better, run faster, and run anywhere.
Without stealing the thunder from Rajiv, I just want to give you a little bit at a high level. What excites us a lot is obviously the customer and partner intimacy, all this new IP innovation and the announcements, and also very strong, very tight operational results; that operational execution makes the company really special as an independent software vendor in this multicloud era. Obviously, we are the only true independent software vendor, in a sense, running this business with fast growth. Tied to that announcement chain, we made this big announcement of the Azure partnership: our Nutanix portfolio, under the Nutanix Clusters brand, is now available as a bare-metal service on Azure, after AWS. The partnership with Azure is new; we just announced the first angle of it, and limited-access customers are taking a look at the service. We're going to have a public preview in a few months, and more to come. And obviously we're not going to stop there; we have tons of work going on with other cloud providers as well. Tied to that, obviously, is a big focus on our Citrix partnership globally around our end user computing business. As Rajiv will outline further, our portfolio on top of our digital infrastructure ties together the data center services, DevOps services, and end user computing services, so the Citrix partnership becomes a big one, and obviously we're tying the Lenovo and HPE partnerships to these things as the core platforms to run that business. It's creating tons of opportunity, and I'll cover it in a bit more detail later, but one other partnership we are also focusing on is our Google partnership on desktop as a service. So these are all coming together around data center, DevOps, and end user computing services, on top of that amazing infrastructure Rajiv and team have built over the past 10 years. I see Rajiv as one of our co-founders, in a sense, working side by side with one another. So the business is obviously booming on multiple fronts.
This FY20 was a great starting point, with all this investment, the Bain Capital $750 million, big execution, the subscription transition, the software transition. And obviously these cloud partnerships are going to make big differences moving forward. >> Yeah, so Rajiv, I want to build off what Tarkan was just saying there, that really coming together. When I heard the strategy, run better, run faster, run anywhere, it really pulled together some of the threads I've been watching at Nutanix the last couple of years. There have been some SaaS solutions where it was like, wait, I don't understand how that ties back to really the core of what Nutanix does. And of course, Nutanix is more than just an HCI company; it's software, and that simplicity and the experience, as your team has always said, trying to make things invisible. But help, if you would, kind of lay out: there are a lot of announcements, but architecturally there were some significant changes to the core, as well as, if I'm reading it right, it feels like the portfolio has a little bit more cohesion than I was seeing a year or so ago. >> Yeah, actually the theme around all these announcements is the same really; it's this ability to run any application, whether it's the most demanding traditional applications, the SAP HANAs, the Epics and so on, but also the more modern cloud native applications. For any kind of application, we want the best platform. We want a platform that's simple, seamless, and secure, but we want to be able to run every application, and we want to run it with great performance. So if you look at the announcements that are being made around strengthening the core with the Block Store, adding things like virtual networking, as well as announcements we made around building Karbon Platform Services, essentially making it easier for developers to build applications in a new cloud native way but still have the choice of running them on premises or in the cloud, we believe we have the best platform for all of that.
And then of course you want to give customers the optionality to run these applications anywhere they want, whether that's a private cloud, their own private data centers and service providers, or in the public cloud and the hyperscalers. So we give them that whole range of choices, and you can see that all the announcements fit into that one theme: any application, anywhere. That's basically it. >> Well, I'd like you to build just a little bit more on the application piece. The developer conversation is something we've been hearing from Nutanix the last couple of years. We've seen you in the cloud native space; of course, Karbon is your Kubernetes offering. So the line I used a couple of years ago at .NEXT was: modernize the platform, then you can modernize all of your applications on top of it. So where does Nutanix touch the developer? How does building new apps and modernizing my apps tie into the Nutanix discussion? >> Yeah, great question, Stu. So last year we introduced Karbon for the first time. And if you look at Karbon, the initial offering was really targeted at an IT audience, right? The goal was basically to make Kubernetes management itself very easy for the IT professional. So essentially, whether you were creating a Nutanix, sorry, a Karbon cluster, or scaling it out, or upgrading Kubernetes itself, we wanted to make that part of the life cycle very, very simple for IT. For the developer, we offered the vanilla Kubernetes experience. And this was something that developers asked us for again and again: don't go around mucking with Kubernetes itself, we want vanilla Kubernetes, we want to use our kubectl or the tools that we're used to. So don't go fork off and build your own idiosyncratic Kubernetes distribution. That's the last thing we want.
So we had a good platform already, but then we wanted to take the next step, because very few applications today are self-contained, in the sense that they run entirely within themselves without dependencies on external services, especially when you're building in the cloud. Suppose you're building on Amazon: you have access to RDS to manage your databases, you don't have to manage them yourself. Your object stores, data pipelines, all kinds of platform services are available, which really can accelerate development of your own applications, right? So we took a stand and said, look, this is good. This is important. We want to give developers the same kind of services, but we want to make it much more democratic, in the sense that we want them to be able to run these applications anywhere, not just on AWS or not just on GCP. And that's really the genesis of Karbon Platform Services. We've taken the most common services people use in the cloud and made them available to run anywhere: public cloud, private cloud, anywhere. So we think it's very exciting. >> Tarkan, you and I had a discussion with one of your partners on how this hybrid cloud scenario is playing out at HPE Discover, of course, with the GreenLake solution. I'm curious, from your standpoint, all the things that Rajiv was just talking about, that's a real change. If you think about kind of the traditional infrastructure people, they're needing to move up the stack. You've got partnerships with the hyperscalers. So help explain a little bit the ripple effect: as Nutanix helps customers simplify and modernize, how can your partners and your channel still participate? >> So perfect. Look, as you heard from Rajiv, this is all coming together super nicely.
As Rajiv outlined: the data center operations and services, the DevOps services to enable that faster time-to-market capability with the Kubernetes offering, and the end user services, our desktop services, on top of that classical, industry-leading, record-breaking digital infrastructure, the hybrid cloud infrastructure we call it today. The wording evolved a little bit, as you remember; we used to call it hyperconverged infrastructure, and now we call it the hybrid cloud infrastructure, in a sense. All those pieces are coming together nicely, end to end, unlike any other vendor, and from a software-only perspective. We're not owned by a hardware company, which is making a huge difference; it gives us a tremendous level of flexibility, democratization, and freedom of choice. Cloud to us is basically not a destination, it's an operating model. You heard me say this before, as Rajiv also said. So in our strategy, when you look at it, Stu, we have a three-pronged approach on top of our on-prem marketplace: 17,000+ customers, 7,000+ channel and strategic partners. Also part of this big announcement is the new partner program we call Elevate. Under the Elevate brand we're bringing all the channel partners, ISVs, platform partners, hyperscalers, telcos and xSPs, and our go-to-market partners into one bucket, where we manage them and simplify the incentives. It's a very simple way to execute, alongside Chris Kaddaras, our Chief Revenue Officer, as well as Christian Alvarez, our Chief Partner Officer, so to speak, on the global channels, working together tightly with our organization on the product front to deliver this. So one key point I want to share with you, tying to what Rajiv said earlier on the multicloud front: obviously, we realize customers are looking for freedom of choice. So we have our own cloud, Nutanix cloud, under the Xi brand.
X-I, the Xi brand, which is basically our own logistics, our own serviceability and payment capabilities, and our software, running out of colocation partnerships like Equinix, delivering that software as a service. We started with disaster recovery as a service, a very fast growing business. Now we've announced our GreenLake partnership with HPE; on the backend, that data center as a service might actually be HPE GreenLake, if the customer wants it. So that partnership creates huge opportunities for us. Obviously, on top of that, we have these telco and xSP partnerships, as we're announcing partnerships with some amazing service providers like OVH. You heard today from our customer at Société Générale: they are not only using AWS and Azure, Nutanix on-prem, and Nutanix Clusters on Azure and AWS for their internal departments, but they also use a local service provider in France for data gravity and data security reasons, a French company dealing with French business and data centers, with that kind of data governance requirement within the country, within the borders of France. So in that context we also have the service provider partnerships coming in; we're going to announce a partnership with OVHcloud, which is a big deal for us. And tying to this, as Rajiv talked about, is our Clusters portfolio, our portfolio basically running on-prem and on AWS and Azure. And we're not going to stop there, obviously; we'll give choice to the customers. So as Rajiv said, basically, Nutanix can run anywhere. On top of that, we announced just today with Capgemini a new dev/test environment as a service, where Rajiv's portfolio, end to end, data center, DevOps, and some of the end user computing capabilities, can run for dev/test purposes as a service on the Capgemini cloud. We have similar partnerships with HCL, similar partnerships with (indistinct), and we're super excited for this .NEXT and FY21 because of those reasons.
>> Rajiv, one of the real challenges we've had for a long time is, I want to be able to have that optionality. I want to be able to live in any environment. I don't want to be stuck in an environment, but I want to be able to take advantage of the innovation and the functionality that's there. Can you give us a little bit of insight? How do you make sure that Nutanix can live in these environments, like the new Azure partnership, and keep the Nutanix experience, yet I can take advantage of, whether it be AI or some other capabilities, what a Google, an Amazon, or a Microsoft has? How do you balance that? You have to integrate with all of these partners, yet not lock out the features that they keep adding. >> Right, absolutely, that's a great point, Stu. And that's something we pride ourselves on, that we're not taking shortcuts. We're not trying to create our own bubble in these hyperscalers, where we run in an isolated environment and can't interact with the rest of the services they offer. And that's primarily why we have spent the time and the effort to integrate closely with their virtual networking, with the services that they provide, and essentially offer the best of both worlds. We take the Nutanix stack, the entire software stack, everything we build from top to bottom, and make it available. So the same experience is there with upgrades and Prism; the same experience is available on-prem and in the cloud. But at the same time, as you said, we want people to have full-speed access to cloud services. There are things the cloud is doing that would be very difficult for anybody else to do. I mean, the kind of thing that, say, Google does with AI, or Azure does with databases; it's remarkable what these guys are doing, and you want to take advantage of those services. So for us, it's very, very important that access is not constrained in any way, but also that customers have the time to make this journey, right? If they want to move to cloud today, they can do that.
And then they can refactor and redevelop their applications over time and start consuming these services. So it's not an all-or-nothing proposition. It's not that you have to refactor and rewrite before you can move forward. That's been extremely important for us, and it's really topical right now, especially with this pandemic. I think one thing all of IT has realized is that you have to be agile. You have to be able to react to things in timeframes you never thought you needed to, right? So it's not just disaster recovery, but the amount of effort that's gone in the last few months into enabling a distributed workforce; who thought it would happen so quickly? It's the kind of agility, the optionality, that we are giving to customers that really makes it possible. >> Yeah, absolutely. Right now, things are moving pretty fast. So let me let both of you have the final word. Give us a little bit of a viewpoint, as things are moving fast: what's on the plate? What should we be expecting to see from Nutanix and your ecosystem through the rest of 2020, Tarkan? >> So look, you've heard it from us, Stu. I know you're talking to multiple folks and you've had these discussions with us, end to end: for a company to be successful, customer and partner intimacy, IP innovation, and execution and operational excellence, obviously all three things need to come together. So in a sense, Stu, we just need to keep moving. I give this analogy a lot: as Benjamin Franklin says, human beings are divided into three categories, you know? The first one is those who are immovable; they never move. The second category is those who are movable; you can move them if you try hard. And obviously the third category is those who just move.
Not only themselves, but they move others. In a sense, and in a nice way to refer to Benjamin Franklin, one of the key founders of the US: the founders of this company, folks like Rajiv and other executives, and some of the newcomers, have built a culture which just keeps moving, and in the last 12 months you've seen some of this. And obviously, going back to the announcements: AWS, now Azure, the Capgemini announcement of dev/test as a service around some of the portfolio that Rajiv talked about, our Google partnership on desktop as a service, a deep focus on Citrix globally with Azure, Google, and ourselves, on-prem and off-prem. And obviously some of the big moves we're making with some of the customers are going to continue. This is just the beginning. I mean, literally, Rajiv and I are doing these .NEXT conferences, announcements, and so on, and we're actually doing calls right now to basically execute for the next 12 months. We're planning the next 12 months' execution. So we're super excited. Now with this new Bain Capital investment, and also the partnerships and the product, we're ready to rock and roll. So look forward to seeing you soon, Stu, and we're going to have more news to cover with you. >> Yeah, exactly right, Tarkan. I think, as Tarkan said, we are at the beginning of a journey right now. I think the way hybrid cloud is now becoming seamless opens up so many possibilities for customers, things that were never possible before. Most people, when they talk hybrid cloud, are talking about fairly separate environments: some applications running in the public cloud, some running on premises. Applications that are themselves hybrid, that run across clouds, or that can burst from one to the other, or can move around with both app and data mobility: I think the possibilities are huge. And it's going to be many years before we see the full potential of this platform.
>> Well Rajiv and Tarkan, thank you so much for sharing all of the updates, congratulations on the progress, and absolutely look forward to catching up in the near future and watching the journey. >> Thanks, Stu. >> Thank you, Stu. >> And stay with us for more coverage here from the Nutanix .NEXT digital experience. I'm Stu Miniman, and as always, thank you for watching theCUBE. (bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Tarkan | PERSON | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Rajiv | PERSON | 0.99+ |
HCL | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Chris Kaddaras | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Monica | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
France | LOCATION | 0.99+ |
Benjamin Franklin | PERSON | 0.99+ |
Equinix | ORGANIZATION | 0.99+ |
Christian Alvarez | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
$750 million | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Rajiv Mirani | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Tarkan Maner | PERSON | 0.99+ |
Scott Guthrie | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
Capgemini | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
OBH | ORGANIZATION | 0.99+ |
third category | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
20 results | QUANTITY | 0.99+ |
OVHS | ORGANIZATION | 0.99+ |
SAP HANA | TITLE | 0.99+ |
Second category | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
GreenLake | ORGANIZATION | 0.99+ |
first angle | QUANTITY | 0.99+ |
two | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
both worlds | QUANTITY | 0.98+ |
Nut | ORGANIZATION | 0.98+ |
Bain Capital | ORGANIZATION | 0.98+ |
first time | QUANTITY | 0.97+ |
X-I | ORGANIZATION | 0.97+ |
June Yang, Google and Shailesh Shukla, Google | Google Cloud Next OnAir '20
>> Announcer: From around the globe, it's theCUBE. Covering Google Cloud Next On Air '20. >> Hi, I'm Stu Miniman. And this is theCUBE's coverage of Google Cloud Next On Air. One of the weeks that they had for the show is to dig deep into infrastructure, of course one of the foundational pieces when we talk about cloud, so I'm happy to welcome to the program two of the general managers, for both compute and networking. First of all, welcoming back one of our CUBE alumni, June Yang, who's the vice president of compute, and also welcoming Shailesh Shukla, who's the vice president and general manager of networking, both with Google Cloud. Thank you both so much for joining us. >> Great to be here. >> Great to be here, thanks for inviting us Stu. >> So June, if I can start with you: one of the themes I heard in the keynote that you gave during the infrastructure week was talking about meeting customers where they are. How do I get, you know, all of my applications where they need to be? Obviously some customers are building new applications, some I'm doing as SaaS, but for many of them, I have to ask, how do I get from where I am to where I want to be, and then start taking advantage of cloud and modernization and new capabilities? So if you could: what's new when it comes to migration from a Google Cloud standpoint? And, you know, give us a little bit of insight as to what you're hearing from your customers. >> Yeah, definitely happy to do so. I think for many of our customers, migration is really the first step, right? A lot of the applications are on premises today, so the goal is really, how do I move from on-prem to the cloud? So to that extent, I think we have announced a number of capabilities. And one of the programs that is very exciting, that we have just launched, is called the RAMP program, which stands for Google Cloud Rapid Assessment and Migration Program.
So it's really kind of bundling a holistic approach of programs, tooling, as well as incentives altogether, to really help customers with that kind of journey, right? And then also on the product side, we have introduced a number of new capabilities to really ease that transition for customers moving from on-premises to the cloud as well. One of the things we just announced is Google Cloud VMware Engine. And this is really, you know, built as a native service inside Google as a (indistinct), to allow customers to run VMware as a service on top of Google infrastructure. So customers can easily take what's running on premises on VMware today and move it to the cloud with really no change whatsoever, a true lift and shift. And your other point is really about modernization, right? Because for most of our customers coming in today, it's not just about running this the way it is; it's also, how do I extract value out of this kind of capability? So we built this as a service so that customers can easily start using services like BigQuery, to be able to extract data and insights out of this, to give them additional advantages and to create new services and things like that. And for other customers who might want to leverage our AI and ML capabilities, that's at their fingertips as well. So it's just really trying to make that process super easy. Another kind of class of workloads we see is really around SAP, right? That's the bread and butter for many enterprises. So customers are moving those out into the cloud, and we've seen many examples where we really allow customers to take the data that's sitting in SAP HANA and extract more value out of it.
Home Depot is a great example of those, where they're able to leverage BigQuery to take, you know, their stockouts and some of the inventory management really to the next level, and really give the customer a much better experience at the end of the day. So those are just a few things that we're doing on that side, to make it easy for a customer to lift and shift, and then be able to modernize along the way. >> Well yeah, June, I would like to dig in a little bit on the VMware piece that you talked about. I've been talking VMware a bit lately, talking to some of their customers leveraging the VMware cloud offerings, and that modernization is so important, because the traditional way you think about virtualization was, I stick something in a VM and I leave it there, and of course customers want to be able to take advantage of the innovation and changes in the cloud. So it seems like things like your analytics and AI would be a natural fit for VMware customers to then get access to those services that you're offering. >> Yeah, absolutely. We have lots of customers; that's kind of one of the differentiators that customers are looking for, right? I can buy my VMware in a variety of places, but I want to be able to take it to the next level: how do I use data as my differentiator? You know, one of the core missions, as part of the Google mission, is really, how do we help customers digitally transform and reimagine their business with data-powered innovation? And that's one key piece we want to focus on, and this is part of the reason why we built this as really a native service inside of Google Cloud, so that you're going through the same console, accessing VMware Engine, accessing BigQuery, accessing networking, firewalls, and so forth, all really seamlessly. And so it makes it really easy to be able to extend and modernize.
>> All right, well, June, one of the other things anytime we come to a cloud event is we know that there are going to be updates in some of the primary offerings. So when it comes to compute and storage, I know there are a number of announcements there, probably more than we'll be able to cover here, but give us some of the highlights. >> Yeah, let me give some highlights. I mean, at the core of this is really Google Compute Engine, and we're very excited, we've introduced a number of new, what we call, VM families, right? Essentially different VM instances that are catered towards different use cases and different kinds of workloads. So for example, we launched the N2D VMs; this is a set of VMs on AMD technology, and they really provide an excellent price-performance benefit for customers who choose to go down that particular path. We also just introduced our A2 VM family; this is our GPU accelerator-optimized VM. We're the first ones in the market to introduce the NVIDIA Ampere A100. So for lots of customers who are really interested in, you know, using GPUs to do their ML and AI types of analysis, this is a big help, because it's got better performance compared to the previous generation, so they can run their models faster and turn around insights faster. >> Wonderful. Shailesh, of course we want to hear about the networking components too. You know, Google is very well known, everybody leverages Google's network and global reach, so how about the update from your network side? >> Absolutely, Stu. Let me give you a set of updates that we announced at the Next conference. So first of all, as you know, many customers choose Google Cloud for the scale, the reach, the performance, and the elasticity that we provide, which ultimately results in a better user experience or customer experience. And the backbone of all of this capability is our private global backbone network, right?
Which all of our cloud customers benefit from. Networking is extremely important to advance our customers' digital journeys, the ones that June talked about, migration and modernization, as well as security, right? So to that end, we made several announcements; let's talk about some of them. First, we announced a new subsea cable called Grace Hopper, which will actually run between the U.S. on one side and the UK on the other, with Spain on another leg. It's equipped with about 16 fiber pairs, it will get completed in 2022, and it will allow for significant new capacity between the U.S. and Europe, right? Second, Google Cloud CDN: it's one of our most popular and fastest-growing service offerings. It now offers the capability to serve content from on-prem as well as other clouds, especially for hybrid and multicloud deployments. This provides a tremendous amount of flexibility in where the content can be placed, and in overall content and application delivery. Third, we announced the expansion of our partnership with Cisco: we have announced this notion of Cisco SD-WAN Cloud Hub with Google Cloud. It's one of the first in the industry to actually create an automated, end-to-end solution that intelligently and securely, you know, connects or bridges enterprise networks to any workload across multiple clouds and to other locations. Fourth, we announced new capabilities in the Network Intelligence Center. It's a platform that provides customers with unmatched visibility into their networks, along with proactive network verification, security recommendations, and so on. There were two specific modules there, around firewall insights and a performance dashboard, that we announced in addition to the three that already existed.
And finally, we have a range of really powerful announcements on the security front. As you know, security is one of our top priorities, and our infrastructure and products are designed, built, and operated with end-to-end security as a core design principle. Let me give you a few highlights. First, to make firewall management easier for customers who manage firewalls across multiple organizations, we announced hierarchical firewall policies. Second, to enable better security capability, we announced Packet Mirroring, something we introduced earlier in the year that is now GA, which allows customers to collect and inspect network traffic across multiple machine types without any overhead. Third, together with our compute and security teams, we announced the capability we call Confidential VMs, which offer the ability to encrypt data while it is being processed. We have always had the capability to encrypt data at rest and in motion; now we are the first in the industry to announce the ability to encrypt data even while it is being processed, so we are really pleased to offer that as part of our confidential computing portfolio. We also announced a managed service around our Cloud Armor security portfolio, covering DDoS, web application, and bot protection, called Cloud Armor Managed Protection. And finally, we announced a capability called Private Service Connect that allows customers to connect effortlessly to other Google Cloud services, or to third-party SaaS applications, while keeping their traffic secure and private across the broader internet. So we were really pleased to announce a number of very critical products, capabilities, and partnerships, such as the one with Cisco, to further the modernization and migration journeys of our customers.
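For readers curious what the Confidential VM capability Shailesh mentions looks like in practice, here is a hedged gcloud sketch. The instance name, zone, and image are placeholder assumptions; at launch, Confidential VMs ran on the AMD-based N2D machine types and required terminating, rather than live-migrating, the instance on host maintenance:

```shell
# Illustrative only: creates a Confidential VM that keeps
# memory encrypted while in use (AMD SEV).
gcloud compute instances create demo-confidential-vm \
  --zone=us-central1-a \
  --machine-type=n2d-standard-4 \
  --confidential-compute \
  --maintenance-policy=TERMINATE \
  --image-family=ubuntu-2004-lts \
  --image-project=ubuntu-os-cloud
```

Data at rest and in transit remain encrypted as before; the `--confidential-compute` flag adds the third leg, encryption of data while it is being processed.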
>> Yeah, one note I will make for our audience: check the details on the website. I know some of the security features are now in beta, while many of the others are now generally available. Shailesh, the follow-up question I have for you is that when I look at 2020, internet traffic patterns have changed drastically. We saw a very rapid shift as everyone needed to work from home, and there have been a lot of stresses and strains on the network. When I hear things like your CDN or your SD-WAN partnership with Cisco, I have to think there's an impact there. What are you seeing? What are you hearing from your customers? How are you helping them work through these rapid changes so they can still give people the performance and reliability of traffic where they need it, when they need it? >> Right, absolutely. This is a very important question and a very important topic. When we saw the impact of COVID, and as you know, Google's mission is to continue to be helpful to our customers, we invested, and continue to invest, in building out our CDN capability, our interconnects, and the capacity in our network infrastructure, in order to better support, for example, distance learning, video conferencing, e-commerce, financial services, and so on. We are proud to say that we were able to support a very significant expansion in overall traffic on a global basis, across Google Cloud and Google's network, without a hitch. In addition, there are other areas where we have been looking to help our customers. For example, high-performance computing is a very interesting capability that many customers are using for things such as COVID research.
A good example is Northeastern University in Boston, which has been using thousands of preemptible virtual machines on Google Cloud to power very large-scale, data-driven models and simulations to figure out how travel restrictions and social distancing will actually impact the spread of the virus. That's an example of the way we are trying to be helpful in the broader global situation. >> Great. June, I have to imagine that on the infrastructure side generally there have been a number of other ways Google Cloud has been helping your customers. Any other examples you'd like to share? >> Yeah, absolutely. If you look at the COVID impact, it has impacted different industries quite differently. We've seen certain industries whose demand just skyrocketed overnight. For example, take one of our internal customers, Google itself: Google Meet, which is Google's video conferencing service, saw a 30X increase over the last few months since COVID started, and this is all running on Google infrastructure. We've seen a similar pattern for a number of our customers in the media and entertainment area, and certainly in video conferencing and so forth. We've been able to scale to meet these key customers' demand and make sure they have the agility they need to meet the demand from their own customers, so we're definitely very proud to be part of this effort to enable folks to work from home, study from home, and so on. And for some customers, business continuity is really a big deal, given the whole work-from-home mandate.
So for example, one of our customers, Telus International, a Canadian telecommunications company, had to transition tens of thousands of employees to a work-from-home model immediately because of COVID. They were able to work with Google Cloud and our partner itopia, which specializes in virtual desktops and applications, and literally overnight, in 24 hours, they deployed fully configured virtual desktop environments from Google Cloud and allowed their employees to get back to serving customers. That's just one example; there are hundreds and thousands more like it, and it's been very heartening for Google to be part of this and to be helpful to our customers. >> Great. Well, I want to let both of you have the final word. When you're talking to customers here in 2020, how should they be thinking of Google Cloud? How do you make sure you're helping them, and differentiating from some of the other solutions in the environment? Maybe, June, we could start with you. >> Sure. At Google Cloud, our goal is to make it easy for anyone, whether you're a big enterprise or a small startup, to build your applications, to innovate and harness the power of data to extract additional information and insights, and to scale your business. As an infrastructure provider, we want to deliver the best infrastructure to run all customers' applications on a global basis, reliably and securely. It's definitely getting more and more complicated as we spread our capacity across different locations; it gets more complicated from a logistics perspective as well. So we want to do the heavy lifting around the infrastructure, so that customers can simply consume our infrastructure as a service and focus on their businesses, not worry about the infrastructure side.
So, you know, that's our goal: we'll do the plumbing work and allow customers to innovate on top of that. >> Right. June, you said that very well. Distributed infrastructure is a key part of our strategy to help our customers. In addition, we also provide the platform capability, essentially a digital transformation platform that manages data at scale to help develop and modernize applications. And finally, we layer on top of that a suite of industry-specific solutions that deliver these digital capabilities across each of the key verticals, such as financial services, telecommunications, media and entertainment, retail, healthcare, et cetera. So that's how, by combining infrastructure, platform, and solutions, we are able to help customers in their modernization journeys. >> All right, June and Shailesh, thank you so much for sharing the updates. Congratulations to your teams on the progress, and we absolutely look forward to hearing more in the future. >> Great, thank you, Stu. >> Thank you, Stu. >> All right, and stay tuned for more coverage of Google Cloud Next OnAir '20. I'm Stu Miniman; thank you for watching theCUBE. (upbeat music)
Raj Verma, MemSQL | CUBEConversation, August 2020
>> From the cube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. Welcome to this CUBE Conversation. I'm Lisa Martin, pleased to be joined once again by the co-CEO of MemSQL, Raj Verma. Raj, welcome back to the program. >> Thank you very much, Lisa. Great to see you as always. >> It's great to see you as well. I always enjoy our conversations. So why don't you start off, because something that's been in the news the last couple of months besides COVID is that one of your competitors, Snowflake, confidentially filed IPO documents with the SEC a couple of months ago. I just wanted to get your perspective: from a market standpoint, what does that signify? >> Yeah. Firstly, congratulations to the Snowflake team. You know, I have a bunch of friends there, including John McMahon, who is on the board. I remember having a conversation with him about seven years ago when it was just starting off, and I'm just so glad for him and Bob Muglia and, as I said, a bunch of my friends who are there. They've executed brilliantly, and I'm thrilled for that. We are hearing what the outcomes are likely to be, and it just seems like it's going to be a great IPO. I think what it signifies is, firstly, that if you have great technology and you execute well, good things happen, and there's enough room for innovation here. So that is one; the second aspect, and I think the more important one, is that it signifies a change of thought in the database market. >> If you really look at it, and if my memory serves me right, in the last two decades, probably two and a half decades, we had just one company go public in the database space, and that was MongoDB. That was in, I think, October 2017, and then two and a half years.
So, three years on, we've seen another one, and from what we know of the industry, there are going to be a couple more that go out in the next 18 to 24 months as well. So the fact is that there was an iron grip on the database market for more than two decades: it was Oracle, IBM, a bit of Sybase and SAP. And now there are a bunch of companies that are helping solve the problems of tomorrow with the technology of the moment. >> And Snowflake is a primary example of that. So that's good change; a changing of the guard is good. I do think the incumbents are going to find it harder and harder going forward. And if you really look at the evolution of the database market, the first sort of workloads that moved to the cloud were the developer workloads, and the big beneficiary of that was the NoSQL movement. The one company that executed the best, in my opinion, was MongoDB, and they were the big beneficiary of that movement to the cloud. The second was the very large data warehouse market, and a big beneficiary of that has been Snowflake; BigQuery is the other one as well. However, the biggest tsunami of data that we are seeing move to the cloud is the operational data, which is the marriage of historical data with real-time data to give you real-time insights, or what we call insights on the 'now'.
And giving customers choice so that they can choose what's best for the business is going to be, it's going to be great. And me are going to see seven to 10 really good database companies in large, in the next decade. And we surely hope them secret as one of them of, we definitely have the, have the potential to be one of them. >>You have the market, we have the product, we have the customers. So, you know, as I tell my team, it's up to us as to what we make of it. And, um, you know, we don't worry that much about competition. You did mention snowflake being advantage station. We, yeah, sure. You know, we do compete on certain opportunities. However, their value proposition is a little more single-threaded than ours. So they are more than the Datavail house space are. Our vision of the board is that, uh, you know, you should have a single store for data, whether it's database house, whether it's developer data or whether it's operational data or DP data. And, uh, you know, watch this space from orders. We make somebody exciting announcements. >>So dig into that a little bit more because some of the news and the commentary Raj in the last, maybe six weeks since the snowflake, um, IPO confidential information was released was, is the enterprise data warehouse dead. And you just had a couple of interesting things we're talking about now, we're seeing this momentum, huge second database to go public in two and a half bigots. That's huge, but that's also signifying to a point you made earlier. There's, there's a shift. So memes SQL isn't, we're not talking about an EDW. We're talking about operational real time. How do you see that if you're not looking in the rear view mirror, those competitors, how do you see that market and the opportunities? >>Yeah, I, I don't think the data warehouse market is dead at thought. 
I think the very fact that, you know, smoke makers going out at whatever valuation they go out, which is, you know, tens of billions of dollars is, um, is a testimony to the fact that, you know, it's a fancy ad master. This is what it is. I mean, data warehouses have existed for decades and, uh, there is a better way of doing it. So it's a fancy of mousetrap and, and that's great. I mean, that's way to money and it's clearly been demonstrated. Now what we are saying is that I think that is a better way to manage the organization's data rather than having them categorized in buckets of, you know, data warehouse, data developer, data DP, or transactional data, you know, uh, analytical data. Is there a way to imagine the future where there is one single database that you can quit eat, or data warehouse workloads for operational workloads, for OLTB work acknowledge and gain insights. And that's not a fancier mousetrap that is a data strategy reimagine. And, uh, and that's our mission. That's our purpose in life right now and are very excited about it's going to be hard. It's not, it's not a given it's a hard problem to solve. Otherwise, if you can solve it before we have the, uh, we have the goods to deliver and the talent, the deliberate, and, um, we are, we are trying it out with some very, very marquee customers. So we've been very excited about, >>Well, changing of the guard, as you mentioned, is hard. The opposite is easy, the opposite, you know, ignoring and not wanting to get out of that comfort zone. That's taken the easy route in my opinion. So it seems like we've got in the market, this, this significant changing of the guard, not just in, you know, what some of your competition is doing, but also from a customer's perspective, how do you help customers, especially institutions that have been around for decades and decades and decades pivot quickly so that the changing of the guard doesn't wipe them out. >>Yeah. Um, I actually think slightly differently. 
I think the changing of the guard wipes out a customer only if they stick to, or are resistant to, the fact that there is a changing of the guard. As we said in our previous conversation, if you stick to the decisions of yesterday, you will not see the sunrise of tomorrow. So I do think that the changing of the guard is a symbolism, not even a symbolism but a statement, to our customers that there is a better way of doing what they are doing to solve tomorrow's problems, and that it doesn't have to be the Oracles, the DB2s, and the Sybases of the world. So that's one aspect of it. The second thing is, as I've always said, we're not really that obsessed about competition. >> The competition will do what they do. We are really very focused on having an impact, hopefully a positive impact, on our customers in the shortest period of time. And if we can't do it, then, you know, I've had conversations with a few of them saying, maybe we're not the company for you. It's not as if I only point to the successful customers and ignore the unsuccessful ones; the fact is that in certain places there isn't an organizational alignment, and you don't succeed. However, we have, in the last 14 months or so, made tremendous investments into ease of use, flexibility of architecture, which is hybrid, and shrinking the total time to value for our customers. Because I believe that if you do these three things, you will have a positive impact on the customer in the shortest amount of time, and you will endear yourself to them. And I think that is more important than worrying needlessly about competition; the competition will do what they do.
But if you keep your customers happy by having a positive impact, success is only a matter of time. >> Customers and employees are essential to that. But I like that you talked about customer obsession, because you see it all over the place; many people use it as a descriptor of themselves on their LinkedIn profiles, for example. For it actually to be meaningful, you talked about the whole objective being to make an impact for your customers. How do you define that, so that it's not just, I don't want to say a marketing term, but something everyone claims? The proof of being customer obsessed is right there in the pudding. >> It's easy to say we are customer obsessed; I mean, no organization is going to say we don't care about our customers. Of course we all want our customers to be successful, and saying that is easy. Having a cultural value that we put our customers first would have been easy too, but we didn't choose to do that. What we said is: how do you have an impact on your customer in the shortest amount of time? We have now designed every process at MemSQL to align with those words. Every decision that we have to make essentially passes through the lens of what is in the best interest of our customer, and what will get us to have a positive impact on the customer in the shortest amount of time; that is the deciding factor for us.
And we continue to take decisions and refine our processes do, as I said, huh, impact on our customers in the shortest amount of time. Now, obsessiveness, a lot of times is seen as a negative in the current society that we live in. And there's a reason for that because the, they view view obsession, but I view obsession and aggression is that is a punishing expression, which is really akin to just being cruel, you know, leading by fear and all the rest of it, which is as no place in any organization. >>And I actually think that in society at large, nothing, I believe that doesn't have any place in society. And then there's something which I dumb as instrumentalists, which is, this is where we were. This is where we are. This is where we are going and how do we track our progress on a daily, weekly, monthly basis? And if we, aren't sort of getting to that level that we believe we should get to, if our customers, aren't seeing the value of dramas in the shortest amount of time, what is it that we need to do better? Um, is that obsession, our instrumental aggression is, is, is what we are all about. And that brings with it a level of intensity, which is not what everyone, but then when you are, you know, challenging the institutions which have, uh, you know, the also has to speak for naked, it's gonna take a Herculean effort to ask them. And, uh, you know, the, the basically believed that instrumental aggression in terms of the, uh, you know, having an impact on customer in the shop to smile at time is gonna get us there. And a, and B are glad to have people who actually believe in that. And, uh, and that's why we've made tremendous progress over the course of last, uh, two years. >>So instrumental aggression. Interesting. How you talked about that, it's a provocative statement, but the way that you talk about it almost seems it's a prescriptive, very strategic, well thought out type of moving the business forward, busting through the old guard. 
Cause let's face it, you know, the big guys, the Oracles they're there, they're not easy for customers to rip and replace, but instrumental aggression seems to kind of go hand in hand with the changing of the guard. You've got to embrace one to be able to deliver the other, right. >>Yeah. So ducks, I think even a fever inventing something new. Um, I mean, yeah, it just requires instrumental aggression, I believe is a, uh, uh, anchor core to most successful organizations, whether in IP or anywhere else. That is a, that is a site to that obsession. And not, I'm not talking about instrumental aggression here, but I'm really talking about the obsession to succeed, uh, which, uh, you know, gave rise to what I think someone called us brilliant jerks and all the rest of it, because that is the sort of negative side of off obsession. And I think the challenge of leadership in our times is how do you foster the positivity of obsession, which needs to change a garden? And that's the instrumental aggression as a, as a tool to, to go there. And how do you prevent the negative side of it, which says that the end justifies the means and, and that's just not true. >>Uh, there is, there is something that's right, and there's something that's wrong. And, uh, and if that is made very clear that the end does not justify the meanings, it creates a lot of trust between, um, Austin, our customers, also not employees. And when their inherent trust, um, happens, then you foster, as I said, the positive side of obsession and, um, get away from the negative side of obsession that you've seen in certain very, very large companies. Now, the one thing that instrumental aggression and obsession brings to a company is that, uh, it makes a lot of people uncomfortable, and this is what I continue to tell. Um, our, our employees and my audience is, um, you know, be comfortable being uncomfortable because what you're trying to do is odd. And it's going to take a, as I say, a Herculean effort. 
So let's, uh, let's be comfortable being uncomfortable, uh, and have fun doing it. If there's, uh, how many people get a chance to change, uh, industry, which was dominated by a few bears and have such a positive impact, not only on our estimates, but society at large. And, uh, I think it's a privilege. Pressure is a privilege. And, uh, I'm grateful for the opportunity that's been afforded to me and to my colleagues. And, uh, >>It's a great way. Sorry. That's a great way of looking at it. Pressure is a privilege. If you think about, I love what you said, I always say, get, you know, get comfortably uncomfortable. It is a heart in any aspect, whether it's your workouts or your discipline, you know, working from home, it's a hard thing to do to your point. There's a lot of positivity that can come from it. If we think of what's happening this week alone and the U S political climate changing of the old guard, we've got Kamala Harris as our first female VP nominee and how many years, but also from a diversity angle, from a women leadership perspective, blowing the door wide open. >>It's great to see that, um, you know, we have someone that my daughter's going to look up to and say that, uh, you know, yes, there is, there is a place for us in society and we can have a meaningful contribution to society. So I actually think that San Antonio versus nomination is, um, you know, it's a simple ism of change of God, for sure. Um, I have no political agendas, um, at all. Then you can see how it pans out in November, but the one thing is for sure, but it's going to make a lot of people uncomfortable, a change of God, or this makes a lot of people. And, and, uh, and you know, I was reflecting back on something else and in everything that I've actually achieved, which is, is something I'm proud of. I had to go through a zone, but I was extremely uncomfortable. >>Uh, Gould only happens when you have uncomfortable, um, girl to happens in your conference room. 
And, um, whether it's, um, you know, running them sequel, uh, or are having a society change, uh, if you stick to your comfort zone, you stick to your prejudices and viruses because it's just comfortable there, there's a, uh, wanting to be awkward. And, uh, and, and I think that that's that essential change of God. As I said, at the cost of repeating myself will make a lot of people uncomfortable, but I honestly believe will move the society forward. And, uh, yeah, I, um, I couldn't be more proud of, uh, having a California San Diego would be nominated and it's a, she brings diversity multicultural. And what I loved about it was, you know, we talk about culture and all the rest of it. And she, she was talking about how our parents who were both, uh, uh, at the Berkeley when she was growing up, we were picking up from and she be, you know, in our, in our prime going to protests and Valley. >>And so it was just, uh, it was ingrained in her to be able to challenge the status school and move the society forward. And, uh, you know, she was comfortable being uncomfortable when she was in that, you know, added that. And that's good. Maybe not. I think we sort of, uh, yeah, I, yeah, let's see, let's see what November brings to us, but, um, I think just a nomination has, uh, exchanged a lot of things and, uh, if it's not this time, it can be the next time, but at the time off the bat, but you're going to have a woman by woman president in my lifetime. Um, that's um, I minced about them, uh, and that's just great. >>Well, I should hope so too. And there's so many, I know we've got to wrap here, but so many different data points that show that that technology company actually, companies, excuse me, with women in leadership position are significantly 10, 20% more profitable. So the changing of the guard is hard as you said, but it's time to get uncomfortable. And this is a great example of that as well as the culture that you have at mem sequel Raja. 
It's always a pleasure and a philosophical time talking with you. I thank you for joining me on the cube today. >>Thank you me since I'm just stay safe, though. >>You as well for my guest, Raj Burma, I'm Lisa Martin. Thank you for watching this cube conversation.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Raj Burma | PERSON | 0.99+ |
Bob Mobileye | PERSON | 0.99+ |
Raj Verma | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John McMahon | PERSON | 0.99+ |
October, 2017 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
August 2020 | DATE | 0.99+ |
Kamala Harris | PERSON | 0.99+ |
seven | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
Lee | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
November | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
Raj | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
next decade | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
second thing | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
one company | QUANTITY | 0.98+ |
two years | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
second aspect | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
two and a half years | QUANTITY | 0.98+ |
Sundays | DATE | 0.98+ |
tomorrow | DATE | 0.98+ |
Firstly | QUANTITY | 0.98+ |
second | QUANTITY | 0.98+ |
tens of billions of dollars | QUANTITY | 0.97+ |
Mongo | ORGANIZATION | 0.97+ |
three things | QUANTITY | 0.97+ |
two and a half buckets | QUANTITY | 0.96+ |
San Antonio | LOCATION | 0.96+ |
more than two decades | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
Boston | LOCATION | 0.95+ |
single store | QUANTITY | 0.94+ |
second database | QUANTITY | 0.94+ |
one aspect | QUANTITY | 0.93+ |
24 months | QUANTITY | 0.93+ |
decades | QUANTITY | 0.93+ |
10, 20% | QUANTITY | 0.92+ |
seven years ago | DATE | 0.92+ |
Raja | TITLE | 0.92+ |
two and a half bigots | QUANTITY | 0.91+ |
Berkeley | LOCATION | 0.9+ |
Oracles | ORGANIZATION | 0.9+ |
SAP HANA | TITLE | 0.88+ |
couple | QUANTITY | 0.88+ |
Moisey | ORGANIZATION | 0.88+ |
last couple of months | DATE | 0.86+ |
firstly | QUANTITY | 0.86+ |
couple months ago | DATE | 0.86+ |
one single database | QUANTITY | 0.83+ |
six weeks | QUANTITY | 0.83+ |
SQL | TITLE | 0.81+ |
Sybase | ORGANIZATION | 0.8+ |
California San Diego | LOCATION | 0.8+ |
God | PERSON | 0.8+ |
10 really good database companies | QUANTITY | 0.79+ |
last 14 months | DATE | 0.79+ |
U | LOCATION | 0.78+ |
first female VP | QUANTITY | 0.75+ |
Lindy | ORGANIZATION | 0.74+ |
OLTB | ORGANIZATION | 0.73+ |
single | QUANTITY | 0.73+ |
Austin | ORGANIZATION | 0.72+ |
one thing | QUANTITY | 0.7+ |
next 18 months | DATE | 0.68+ |
COVID | ORGANIZATION | 0.67+ |
last two decades | DATE | 0.63+ |
MemSQL | ORGANIZATION | 0.6+ |
about | DATE | 0.56+ |
Mongol | ORGANIZATION | 0.44+ |
Paul Sustman, Veritas | CUBE Conversation, June 2020
>> Woman: From the cube studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a cube conversation. >> Hi, I'm Stu Miniman and welcome to this cube conversation. We're going to be digging in, talking about how storage, in the software world, is moving forward to cloud-native containerized environments. Happy to welcome to the program a first-time guest, Paul Sustman. He is the product manager for InfoScale storage and availability products with Veritas. Paul, thank you so much for joining us. >> Hey, thanks for having me on. I'm really excited to talk about what we're doing for support for containers and Kubernetes. >> All right, so, Veritas, I think most people should be familiar with Veritas when it comes to the storage world, of course, a strong and long history. Why don't you level set us first on InfoScale. I've got way too much history going back to things like Veritas Volume Manager and the like, but InfoScale today in 2020, how should we be thinking of it, and what position does it have out in the marketplace? >> Yeah. First off, InfoScale is a product that's used by very critical infrastructure, the top enterprises: the top 11 out of 12 airline reservation systems, the top 19 out of 20 investment banks, right? These are companies that use InfoScale to drive their business, not just an application, but actually keep their business available and operational. So, we've had a long legacy. You talked about some of the history. We were formerly known as Storage Foundation. Going back 25 years, Veritas Storage Foundation, as it was known at that time, was one of the first virtualization technologies, where we virtualized storage for hard drives, right? That's where the volume management came in.
We added support for many different file systems, both clustered or shared storage as well as non-shared storage, came out with support for Unix-to-Linux migrations, added support for virtualization technologies, and came out with a lot of optimizations for storage efficiency and performance. And we've been building upon that legacy ever since. We've recently come out with a lot of support for the AWS cloud as well as the Azure cloud, and support for SAP HANA as well as SAP NetWeaver for Azure. And we have customers who are now migrating their SAP environments up into the cloud. So, long history here. We came out with Docker support back in 2016 for Docker containers. We made a bet that Docker was going to win. We actually built our NetBackup Flex appliances around the Docker platform. It turns out that wasn't quite accurate. It turns out Kubernetes won. There are some standards now that have come out around storage and networking interfaces, and the world has shifted and is picking up that standardized platform. So we're doing the same. What we're doing is a couple of different things. First off, we are coming out with a persistent storage solution leveraging the CSI storage interface. And we're coming out with a high availability solution, which leverages some of our legacy code around VCS and around the service group technology; we have an intelligent monitoring framework to monitor what's going on inside the container. And we're going to be adding that technology into InfoScale and releasing it later this year. So that's what we're actively working on. I'm really excited about the fact that we're able to bring forward this legacy that we have, where we've done it incredibly well on physical environments and virtual environments and as customers move to the cloud, to also support containers. We're seeing that mission critical applications are starting to move to containers.
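As a concrete illustration of the CSI-based persistent storage idea described above, here is a minimal sketch of how an application might request persistent storage from a CSI-backed storage class in Kubernetes, built as a plain Python dict. This is an assumption-laden example: the provisioner and storage class names below are hypothetical placeholders, not the actual InfoScale driver names.

```python
import json

# Hypothetical CSI provisioner name -- the real InfoScale CSI driver name may differ.
PROVISIONER = "csi.infoscale.example.com"

def make_pvc(name: str, size_gi: int, storage_class: str) -> dict:
    """Build a Kubernetes PersistentVolumeClaim manifest as a dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
            "storageClassName": storage_class,
        },
    }

# Example: a claim a database pod could mount instead of relying on NFS.
pvc = make_pvc("db-data", 100, "infoscale-fast")
print(json.dumps(pvc, indent=2))
```

The point of the CSI standard is exactly this decoupling: the application only names a storage class, and whichever CSI plugin backs that class provisions and attaches the volume.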
We're having a large number of our customers come to us and say, "What's your roadmap? Where are you going on containers? We've been talking about the Flex appliance, the NetBackup appliance, where we delivered support for that years ago." And they're looking to actively start moving some of those mission critical apps. But what they're seeing is that the container environment is missing a lot of the enterprise capabilities that exist on physical platforms. >> Paul. >> Yeah. >> Paul, if I could, so yeah, I'm glad we got the news in here. (mumbles) if we can level set our customers a little bit. >> Sure. >> On the marketplace here. So, I think back to server virtualization and VMware. We spent about a decade as an industry going from "yeah, it's supported and it works" to "how do we really optimize it, and make sure it is really supported?" When you talk about cloud environments, talk about containerization, we've gone through a maturation journey there also, and in some ways it's gone a little bit faster, and we've learned from the past, but it has been a journey we've been on. So, you talk about how Docker helped really bring containers to the masses, and the enterprise especially. But maybe give us a little bit as to, you threw out a couple of things like interfaces that are supported to enable storage, and how Kubernetes fits into things. Help us understand how it's not just supporting the environment, but making sure things are optimized and take advantage of the feature functionality that people are looking for, and why they go to these containerized Kubernetes environments. >> Yeah. That's a great question. So, first off, IDC called out that containerization actually has the potential of replacing what VMware has done around VMs and virtual machines, and I think there are several driving factors for container adoption, right.
It comes down to that term "cattle, not pets," which is often used around containers, where you're able to manage things at larger scale, or a larger number of items. And it comes down to the fact that the container itself is a much smaller image size than a VM. It's a fraction of the size of a VM, and that makes it possible to be more agile. It makes it possible to have a higher density of containers versus VMs. It makes it easier to manage as well. And because of that, there's faster adoption with developers, with speed and efficiency coming about where developers are making changes quicker in a container environment. And that's very appealing to customers. So, we're seeing a lot of interest in containers. The applications that went there first were not the typical mission critical applications, but more web-type applications that didn't have a dependency on persistent data. The data was temporal. But what we're seeing now is, as adoption happens more and more in the container environment, and as people realize that there are a lot of advantages to a container versus a VM, they're looking to take those applications and lift and shift them to a container environment, to take advantage of those benefits. So, that's what we're seeing right now. >> Yeah. It's really interesting, right. You know, Paul, when you look at that virtualization adoption, what a VM really did is it brought the whole operating system along with it. So, inside that we have not only the operating system, but typically one application, though there could be more, as opposed to a container, which gets closer to that atomic unit of the application, or, if it's a microservices architecture, it might just be a service inside there. So, I guess that brings us to the point: when you talk about storage, what I really care about, I care about my data, I care about my applications. As you mentioned, often there are different types of applications.
Developers are building new applications using containers, as an example. Help us understand where Veritas and InfoScale fit in, what applications you're supporting today in a containerized environment. And are there any places where you're saying, "Hey, this is what you should do with containers, and, at least for certain enterprise environments, maybe we're not quite ready for certain things here yet"? >> Yeah. So let me take a step back. If you look at the maturity in that technology shift, in my opinion, we're at a point today with containers where we were early on with VMs. Early on with VMs, a lot of people were saying that those virtual machines were not really suitable for production code, not suitable for mission critical applications; you really should run those on dedicated hardware. What we've seen is actually a shift with VMs, where people run pretty much everything on VMs now. It's your first platform by default, instead of a physical server. And now the same thing is kind of happening with cloud as well. In containers, what we're seeing is that the early adopters weren't looking for those critical or enterprise data requirements, things like security and scale and performance. They were okay with the status quo. But as people start to move the things that drive their business, or that they're going to run their business on, they really need those requirements. They need the same set of enterprise capabilities that exist today on VMs and on physical environments, or even in the cloud. There are a lot of capabilities in the cloud: it's very secure, it's very resilient, the data is very durable. Those capabilities exist there, but on containers, they've been lacking until recently. And so what we're doing is trying to bring those same capabilities that our customers are used to, for those customers, as they're moving those mission-critical applications to containers.
>> Excellent, so, let's talk about the services that InfoScale offers. When we first moved to cloud, there were some that thought, "Oh, hey, wait, maybe I don't need to think about things like high availability and data protection, I'll just architect for the cloud that way." I think we know, from a security standpoint, it's a shared responsibility model that everybody understands. When it comes to containerization also, I'm often architecting things differently. So, I have to think about things a little bit differently, but I don't think it removes the need for some of the services that we typically see from solutions like you offer from Veritas. Maybe give us a little bit of understanding as to: is it the same, is it a little bit different, and what is needed in today's new architecture? >> Yeah. That's a great question. So, if you look at containers, and start reading a lot of the documentation around Kubernetes, what they claim and what they point out is that the underlying storage is responsible for the high availability of the storage. It's not the requirement of the application, it's not the requirement of the IT administrator; they (mumbles) push it back on the storage. And if you look at the way storage is used or consumed with containers, there are really two types of storage. There is block-level storage, which is presented from the disk array. The challenge with block-level storage by itself is that there's no data management right there. What ends up happening is that the database does the data management, and the database, in order to compensate for that lack of data management, often ends up oversubscribed. So, you present too much storage, or the database ends up wasting space. On the other side of things, the common use cases are around files, and the most common way that most people use storage with containers is actually leveraging NFS.
NFS was never designed for mission critical applications. It's really designed for very small IO, and it won't guarantee or maintain write consistency. If you have multiple applications accessing the same share, who knows who's going to actually win. Somebody will win, and it might not be who you want to win. So, you have data corruption or data integrity issues with NFS, not to mention huge performance challenges with NFS. Again, it was never designed for mission critical applications. And so those are areas where our customers have looked to us in the past, and look to us right now, to present storage which is very high performance and very highly available, and is often replicated across the metro or across geo locations, across availability zones, to other data centers, so that you have multiple redundant copies, and so that you just don't lose data, right. That's something that we've done really well with InfoScale, and we've done that for applications that require shared resources, and we've done that for applications that require their own repository, their own data store. So, it's an opportunity for customers to have storage which is persistent, highly available, and higher performance for use with their containers, other than NFS or block storage. >> Excellent. Well, we know that with storage, as we always used to joke, the only constant is change, and in the cloud native world, we know that accelerating change is the norm. Give us the final takeaway: when they think of InfoScale for Kubernetes and containers, how should we think about Veritas, and what differentiates you from the rest of the marketplace? >> Yeah. If you look at it, it's really simple. I mean, we have a solution which works very well for storage: very high performance, very highly available, scales really well.
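The multiple-writer problem described above is essentially a lost-update race. NFS semantics are more involved than this, but as a hedged, language-level analogy, here is a sketch of two uncoordinated clients clobbering each other's update, and the same update made safe by serializing it:

```python
import threading

class Share:
    """Stand-in for a shared resource, e.g. a counter stored on a file share."""
    def __init__(self):
        self.value = 0

# Without coordination, a read-modify-write race loses an update.
barrier = threading.Barrier(2)

def unsafe_update(share):
    v = share.value      # both clients read the same old value...
    barrier.wait()       # force the racy interleaving deterministically
    share.value = v + 1  # ...so one increment is silently lost

share = Share()
threads = [threading.Thread(target=unsafe_update, args=(share,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(share.value)  # 1, not 2: an update was lost

# With a lock serializing the read-modify-write, both updates survive.
lock = threading.Lock()

def safe_update(share):
    with lock:
        v = share.value
        share.value = v + 1

share2 = Share()
threads = [threading.Thread(target=safe_update, args=(share2,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(share2.value)  # 2
```

A storage layer aimed at mission critical applications has to provide the equivalent of that serialization (and crash-safe durability) for every client on the share, which is exactly where plain NFS falls short.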
We are going to be releasing a plugin for Kubernetes that will install on storage nodes and make that storage persistent and available to the application running as a container. We're also taking the technology that we've built around our availability suite and carrying some of that technology forward into containers. Now, understanding that Kubernetes does the orchestration, our key differentiation is that we're going to be monitoring the dependencies of what's critical for that application, right? All the mount points, the network interfaces, all the different processes that make up that critical application. We'll be monitoring those applications actually inside the container, and then working with Kubernetes and collaborating as far as orchestration goes, so we'll tell Kubernetes when it needs to restart the container or restart a pod. Lots of advantages come with that solution. And the way we're building it, again, it integrates with Kubernetes. We monitor what's going on inside the container, and we'll notify Kubernetes of an event change, and we'll do that instantaneously. Kubernetes looks at the pod; it doesn't look inside the container, right. It doesn't look at the processes, it doesn't look at the mount points. So, the pod might be available, but within the container itself you might have lost a process, you might have lost one of the containers, one of your dependencies might have gone away. And we're taking that same availability offering that we've done very well with in the physical environment, in the cloud, and in virtual environments, and bringing it forward to containers. >> Excellent. Paul, any minimum requirements? Kubernetes, of course, being open source, there are dozens of distributions out there. So, if I choose >> Paul: Yeah. >> any of the native services from the public cloud providers or from my vendor of choice, I don't have to be, like, on 1.16 or 1.17 to get this? What are the considerations there?
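Kubernetes restarts a pod based on pod-level signals, but, as described above, it does not by itself watch the processes and mount points inside the container. A minimal sketch of what such an in-container dependency check could look like (this is an illustration of the idea only, not Veritas's actual agent framework):

```python
import os

def process_alive(pid: int) -> bool:
    """Signal 0 performs an existence/permission check without killing the process."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True   # process exists but is owned by another user
    return True

def dependencies_healthy(pids, mount_points) -> bool:
    """True only if every watched process is alive and every mount point is mounted."""
    return (all(process_alive(p) for p in pids)
            and all(os.path.ismount(m) for m in mount_points))

# Example: watch our own process and the root filesystem mount.
print(dependencies_healthy([os.getpid()], ["/"]))
```

A monitor like this, running inside the container, could drive a Kubernetes liveness probe or notify the orchestrator when a dependency disappears, even while the pod itself still looks healthy from the outside.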
>> Well, the latest version I think is 1.18; they're coming out with 1.19 soon. (murmurs) Kubernetes, in my view, came out with the standards. They came out with a standard network interface and a standard storage interface. We're leveraging those standards, and we're building a plugin towards that standard. That same plugin will be used in Kubernetes and OpenShift and VMware, as well as all the different cloud container offerings. So, our intention is to support all those. We'll be supporting Kubernetes on day one, out of the box for Linux platforms, with all the same storage capabilities that we have with InfoScale, and with the same agent framework and monitoring framework that we have with InfoScale for our availability as well. >> Excellent. Well, Paul Sustman, thank you so much. It's been great to watch the maturation of the storage environments in the container and Kubernetes world. Thanks so much for joining us. >> Thank you. Thanks for having me. >> All right, I'm Stu Miniman and thank you for watching the cube. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul Sustman | PERSON | 0.99+ |
Paul Sustman | PERSON | 0.99+ |
June 2020 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
2016 | DATE | 0.99+ |
Paul | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
20 investment banks | QUANTITY | 0.99+ |
first platform | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two types | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
12 airline reservation systems | QUANTITY | 0.98+ |
1.18 | OTHER | 0.98+ |
First time | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
one application | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
first | QUANTITY | 0.97+ |
later this year | DATE | 0.97+ |
one | QUANTITY | 0.96+ |
Boston | LOCATION | 0.96+ |
Linux | TITLE | 0.96+ |
OpenShift | TITLE | 0.96+ |
VMware | TITLE | 0.92+ |
SAP HANA | TITLE | 0.92+ |
Unix | TITLE | 0.92+ |
25 years | QUANTITY | 0.91+ |
IDC | ORGANIZATION | 0.91+ |
SAP netWeaver | TITLE | 0.91+ |
NFS | TITLE | 0.9+ |
Docker | TITLE | 0.89+ |
about a decade | QUANTITY | 0.87+ |
flex | ORGANIZATION | 0.86+ |
1.19 | OTHER | 0.86+ |
dozens of distributions | QUANTITY | 0.82+ |
1.17 | OTHER | 0.8+ |
day one | QUANTITY | 0.8+ |
Azure | TITLE | 0.77+ |
11 | QUANTITY | 0.76+ |
one of | QUANTITY | 0.75+ |
Kubernetes | ORGANIZATION | 0.72+ |
Azure cloud | TITLE | 0.72+ |
19 | QUANTITY | 0.72+ |
1.16 | OTHER | 0.7+ |
infoscale | TITLE | 0.65+ |
Veritas | PERSON | 0.61+ |
Docker | ORGANIZATION | 0.61+ |
years | DATE | 0.57+ |
foundation | ORGANIZATION | 0.55+ |
SAP | ORGANIZATION | 0.4+ |
containers | QUANTITY | 0.35+ |
Neil MacDonald, HPE | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi everybody, this is Dave Vellante, and welcome back to theCUBE's coverage of HPE Discover 2020, the Virtual Experience. TheCUBE has been virtualized, as we like to say. I'm very happy to welcome in Neil MacDonald, he's the General Manager for Compute at HPE. Great to see you again Neil, wish we were face to face, but this will have to do. >> Very well, it's great to see you Dave. Next time we'll do this face to face. >> Next time, hopefully next year. We'll see how things are going, but I hope you're safe and your family's all good, and I say it's good to talk to you. You know, we've talked before many times, and it's interesting to note the whole parlance in our industry is changing, even, you know, Compute in your title, and no longer do we think about it as just sort of servers or a box; you guys are moving to this as-a-service notion. Really, it's kind of fundamental, or poignant, that we see this really entering this next decade. It's not going to be the same as last decade, is it? >> No, I think our customers are increasingly looking at delivering outcomes to their customers in their lines of business, and Compute can take many forms to do that, and it's exciting to see the evolution in the technologies that we're delivering and the consumption models that our customers are increasingly taking advantage of, such as GreenLake. >> Yes, so Antonio obviously in his keynote made a big deal, as in previous keynotes, about GreenLake, a lot of themes on, you know, the cloud economy and as a service. I wonder if you could share with our audience, you know, what are the critical aspects that we should know really around GreenLake?
Well, GreenLake is growing tremendously for us. We have around a thousand customers delivering infrastructure through the GreenLake offerings, and that's backed by 5,000 people in the company around the world who are tuning and optimizing and taking care of that infrastructure for those customers. There's billions of dollars of total contract value under GreenLake right now, and it's accelerating in the current climate, because really what GreenLake is all about is flexibility. The flexibility to scale up, to scale down, the ability to pay as you use the infrastructure, which in the current environment is incredibly helpful for conserving cash, and boosting both operational flexibility with the technology, but also financial flexibility in our customers' operations. The other big advantage of course of GreenLake is it frees up talent. Most companies have a world of challenges in freeing up their talent to work on really impactful business transformation initiatives; we've seen in the last couple of quarters an even greater acceleration of digital transformation work, for example, and if all of your talent is tied up in managing the existing infrastructure, then that's a drain on your ability to transform, and in some industries, even survive right now. So GreenLake can help with all of those elements, and with all of the pressure from COVID, it's actually becoming even more consumed by more and more customers around the world, it's-
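The pay-as-you-use idea mentioned above can be sketched in a few lines. Note this is a hypothetical illustration of consumption pricing with a reserved minimum; the numbers and the reserve structure are assumptions for the sketch, not HPE's actual GreenLake contract terms:

```python
def monthly_charge(used_units: float, reserved_units: float, unit_price: float) -> float:
    """Consumption billing with a reserved minimum: pay for max(used, reserved)."""
    return max(used_units, reserved_units) * unit_price

# Under the reserve, you pay the committed minimum...
print(monthly_charge(80, 100, 2.0))   # 200.0
# ...and usage above the reserve is metered, so scaling up costs cash only when used.
print(monthly_charge(130, 100, 2.0))  # 260.0
```

The contrast with a traditional purchase is that over-provisioned capacity in this model sits on the provider's side of the meter rather than as sunk capital expense.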
>> Well you're absolutely right Dave organizations that had not already embarked on a digital transformation, have rapidly learned in our current situation that it's not an optional activity. Those that were already on that path are having to move faster, and those that weren't are having to develop those strategies very rapidly in order to transform their business and to survive. And the really new thing about GreenLake and the other service offerings that we provide in that context is how it can accelerate the deployment. Many companies for example, have had to deal with VDI deployments in order to enable many more of their workforce to be productive when they can't be in the office or in the facility and a solution like GreenLake can really help enable very rapid deployment and build up but not just VDI many other workloads in high performance Compute or in SAP HANA for example, are all areas that we're bringing value to customers through that kind of as a service offering. Yeah, a couple of examples Nokia software is using GreenLake to accelerate their research and development as they drive the leadership and the 5G revolution, and they're doing that at a fraction of the cost of the public cloud. We've got Zanotti, which has built a private cloud for artificial intelligence and HPC is being used to develop the next generation of autonomous software for cars. And finally, we've got also Portion from Arctic who have built a fully managed hybrid cloud environment to accelerate all the application development without having to bear the traditional costs of an over-provisioned complex infrastructure. So all of our customers are relying on that because Compute and Innovation is just at the core of the digital transformations that everybody is embarked on as they modernize their businesses right now and it's exciting to be able to be part of that and to be able to do there, to help. 
>> So of course in the tech business innovation is the you know the main spring of growth and change, which is constant in our industry and I have a panel this week with Doctor Go talking about swarm learning in AI, and that's some organic innovation that HPE is doing, but as well, you've done some, M&A as well. Recently, you guys announced and we covered it a pretty major investment in Pensando Systems. I wonder if you could talk a little bit about what, that means to the Compute business specifically in, HPE customers generally. >> So that partnership with Pensando was really exciting, and it's great to see the momentum that its building in delivering value to our customers, at the end of the day we've been successful with Pensando in building that momentum in very highly regulated industries and the value that is really intrinsic to Pensando is the simplifying of the network architecture. Traditionally, when you would manage an enterprise network environment, you would create centralized devices for services like load balancing or firewalls and other security functionality and all the traffic in the data center would be going back and forth, tromboning across the infrastructure as you sought to secure your underlying Compute. The beauty of the Pensando technology is that we actually push that functionality all the way out to the edge at the server so whether those servers are in a data center, whether they're in a colocation facility, whether they're on the edge, we can deliver all of that security service that would traditionally be required in centralized expensive, complex, unique devices that were specific to each individual purpose, and essentially make that a software defined set of services running in each node of your infrastructure, which means that as you scale your infrastructure, you don't have a bottleneck. You're just scaling that security capability with the scaling of your computer infrastructure. 
It takes traffic off your core networks, which gives you some benefits there, but fundamentally it's about a much more scalable, responsive cost-efficient approach to managing the security of the traffic in your networks and securing the Compute end points within your infrastructure. And it's really exciting to see that being picked up, in financial services and healthcare, and other segments that have you know, very high standards, with respect to security and infrastructure management, which is a great complement to the technology from Pensando and the partnership that we have with Pensando and HPE. >> And it's compact too we should share with our audience it's basically a card, that you stick inside of a server correct Neil? >> That's exactly right. Pensando's PCIe card together with HPE servers, puts that security functionality in the server, exactly where your data is being processed and the power of that is several fold, it avoids the tromboning that we talked about back across the whole network every time you've got to go to a centralized security appliance, it eliminates those complex single purpose appliances from the infrastructure, and that of course means that the failure domain is much smaller cause your failure demands a single server, but it also means that as you scale your infrastructure, your security infrastructure scales with the servers. So you have a much simpler network architecture, and as I say, that's being delivered in environments with very high standards for security, which is a really a great endorsement of the Pensando technology and the partnership that HPE and Pensando will have in bringing that technology to market for our customers. >> So if I understand it correctly, the Pensando is qualified for Pro-Lite, Appollo and in Edgelines. My question is, so if I'm one of those customers today, what's in it for me? 
Are they sort of hopping on this for existing infrastructure, or is it part of, sort of new digital initiatives, I wonder if you could explain. >> So if you were looking to build out infrastructure for the future, then you would ask yourself, why would you continue to carry forward legacy architectures in your network with these very expensive custom appliances for each security function? Why not embrace a software defined approach that pushes that to the edge of your network whether the edge are in course or are actually out on the edge or in your data centers, you can have that security functionality embedded within your Compute infrastructure, taking advantage of Pensandos technologies. >> So obviously things have changed is specifically in the security space, people are talking about this work from home, and this remote access being a permanent or even a quasi-permanent situation. So I wonder if we could talk about the edge and specifically where Aruba fits in the edge, how Pensando compliments. What's HPE's vision with regard to how this evolves and maybe how it's been supercharged with the COVID pandemic. >> So we're very fortunate to have the Aruba intelligent edge technology in the HPE portfolio. And the power of that technology is its focus on the analysis of data and the development of solutions at the site of the data generated. Increasingly the data volumes are such that they're going to have to be dealt with at the edge and given that, you need to be building edge infrastructure that is capable enough and secure enough for that to be the case. 
And so we've got a great complement between the intelligent edge technology within the Aruba portfolio, with all of the incredible management capabilities that are in those platforms, combined with technologies like Pensando and our HPE Compute platforms. Together they bring the ability to build a very cohesive, secure, scalable infrastructure that tackles the challenges of having to do this compute at the edge, but still being able to do it in both a secure and easily managed way, and that's the power of the combination of Aruba, HPE Compute and Pensando. >> Well, with the expanded threat surface, with people working from home, organizations are obviously very concerned about compliance, and being able to enforce consistent policies across this sort of new network. So I think what you're talking about is, it's very important that you have a cohesive system from a security standpoint, you're not just bolting on some solution at the tail end. Your comments? >> Well, security always depends on all the links in the chain, and one of the most critical links in the chain is the security of the actual compute itself. And within the HPE Compute platforms, we've done a lot of work to build very differentiated and exclusive capability with our hardware, a Silicon Root of Trust, which is built directly into silicon. And that enables us to ensure the integrity of the entire boot chain, locking down the security of the platform in ways that can't be done with some of the other hardware approaches that are prevalent in the industry. And that's actually brought some benefit in financial terms to our customers, because of the certifications that are enabled and the Cyber Catalyst designations that we've earned for the platforms.
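The root-of-trust idea Neil describes, each boot stage verified against a known-good measurement anchored in hardware before the next stage runs, can be sketched conceptually. This is a generic chain-of-trust illustration, not HPE's actual implementation; the stage names and blobs are invented:

```python
import hashlib

# Generic chain-of-trust sketch (not HPE's implementation): every boot stage
# is measured (hashed), and the measurement must match a known-good value
# anchored in immutable hardware before the next stage is trusted to run.

def measure(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def provision(stages: dict) -> dict:
    """Record the known-good digest of every boot stage (done at build time)."""
    return {name: measure(blob) for name, blob in stages.items()}

def verify_boot(stages: dict, expected: dict) -> bool:
    """Walk the boot chain in order; refuse to continue on any mismatch."""
    for name, blob in stages.items():
        if measure(blob) != expected.get(name):
            print(f"integrity failure at stage: {name}")
            return False
    return True

# Hypothetical firmware images for three boot stages.
firmware = {"bootloader": b"stage1-v2", "uefi": b"stage2-v2", "os": b"kernel-v5"}
golden = provision(firmware)
print(verify_boot(firmware, golden))                      # True
print(verify_boot(dict(firmware, uefi=b"evil"), golden))  # False
```

The point of anchoring `golden` in silicon rather than in mutable firmware is that an attacker who compromises one stage cannot also rewrite the expected measurements.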
>> So we also know, from listening to your announcements with Pensando and just observing security in general, that this notion of micro-segmentation is very important, being able to have increased granularity as opposed to kind of a blob. Maybe you could explain why that's important, you know, the "so what" behind micro-segmentation, if you will. >> Well, it's all about minimizing the threat perimeter on any given device, and if you can minimize the vectors through which your infrastructure will interact on the network, then you can provide additional layers of security. And that's the power of having your security functionality right down at the edge, because you can have a security processor sitting right in the server, providing great security at the node level. You're no longer relying on the network management and getting all of that right, and you also have much greater flexibility, because in a software-defined environment you can easily push the policies that are relevant for the individual pieces of infrastructure in an automated, policy-driven way, rather than having to rely on someone in network security getting the manual configuration of that infrastructure correct to protect the individual nodes. And if you take that kind of approach, and you embed that kind of technology in servers which are fundamentally robust in terms of security, because of the Silicon Root of Trust that we've embedded across our platform portfolio, whether that's ProLiant or Synergy or BladeSystem or Edgeline, you get a tremendous combination as a result of these technologies, and as I mentioned, the Cyber Catalyst designation is a proof point of that. Last year there were over 150 security products put forward for the Cyber Catalyst designation, and only a handful were actually awarded, I think 17, of which two were HPE Compute and Aruba. And the power of that is that many organizations now have to deal with insurance for cybersecurity events.
And the Cyber Catalyst designation can actually lead to lower premiums: the choice of infrastructure that you've made, such as HPE Compute, can actually enable you to have a lower cost of insuring your organization against cybersecurity issues, because infrastructure matters, and the choice of infrastructure with the right innovation in it is a really critical choice for organizations moving forward, in security and in so many other ways. >> Yeah, you mentioned a lot of things there. Software-defined, that's going to enable automation and scale. You talked about the perimeter, you know, the traditional moat around the castle, that's gone, there is no perimeter anymore, it's everywhere. So that whole, you know, weakest link in the chain. And then the other thing you talked about was the layers, very important when you're talking to security practitioners, building layers in. So all of this, security in particular, is factoring into customer buying decisions, isn't it? >> Well, security is incredibly important for so many of our customers across many industries, and having the ability to meet those security needs head-on is really critical. We've been very successful in leveraging these technologies for many customers in many different industries. You know, one example is we've recently won multiple deals with the Defense Information Systems Agency, who you will imagine have very high standards for security, worth hundreds of millions of dollars of infrastructure. So there's a great endorsement from the customer set, who are taking advantage of these technologies and finding that they deliver great benefits for them in the operational security of their infrastructure. >> Yeah, Neil, if I could ask you a question on the edge.
I mean, as somebody who is, you know, with a company that is really at the heart of technology, and I'm sure you're constantly looking at new companies, M&A, et cetera, you know, inventing tech, I want to ask you about the architectures for the edge. Just thinking about a lot of data at the edge: not all the data is going to come back to the data center or the cloud, and there's going to be a lot of AI inferencing going on in real time or near real time. Do you guys see different architectures emerging to support that edge, I mean from a compute standpoint, or is it going to be traditional architectures that support that? >> It's clearly an evolving architectural approach, because for the longest time, infrastructure was built with some kind of hub, you know, whether that's some data center or in the cloud, that all of the devices at the edge would essentially be calling home to. So edge devices historically have been very focused on connectivity, on acquisition of data, and then sending that data back for some kind of processing and action at some centralized location. And the reality is that given the amount of data being generated at the edge now, and given the capability even of the most modern networks, it's simply not possible to be moving those kinds of data volumes all the way back to some remote processing environment, and then communicating a decision for action all the way back up to the edge. First of all, the networks can't handle the volume of data involved if every device in the world was doing that, and secondly, the latencies are too slow. They're not fast enough in order to be able to take the action needed at the edge.
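Neil's bandwidth-and-latency argument can be sketched with a back-of-envelope calculation. All of the traffic and latency figures below are illustrative assumptions, not HPE or customer numbers:

```python
# Back-of-envelope: why shipping all edge data to a central site breaks down.
# Every figure here is an illustrative assumption, not a vendor number.

EDGE_DEVICES = 1_000        # devices at one edge site
MBPS_PER_DEVICE = 4         # each producing ~4 Mbit/s (e.g. a camera feed)
WAN_UPLINK_MBPS = 1_000     # a 1 Gbit/s uplink back to a central site

generated = EDGE_DEVICES * MBPS_PER_DEVICE
print(f"edge generates {generated} Mbit/s vs {WAN_UPLINK_MBPS} Mbit/s uplink")
print("uplink saturated:", generated > WAN_UPLINK_MBPS)

# Latency: a local control loop with a 20 ms deadline can't absorb an
# 80 ms WAN round trip to a remote processing environment.
WAN_RTT_MS, DEADLINE_MS = 80, 20
print("deadline missed if processed centrally:", WAN_RTT_MS > DEADLINE_MS)
```

Even with generous uplink assumptions, the generated traffic exceeds the pipe by 4x in this sketch, which is the volume half of the argument; the round-trip time exceeding the action deadline is the latency half.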
So that means that you have to countenance systems at the edge that are actually storing data, that are actually computing upon data. And a lot of edge systems historically have evolved from very proprietary, very vertically integrated systems to rack-mounted, PC-controller-based systems with some form of IP connectivity back to some central processing environment. And the reality is that if you build your infrastructure that way, you finish up with a very unmanageable fleet, you finish up with a very fragmented, disjointed infrastructure. And our perspective is that companies that are going to be successful in the future have to think of themselves as taking an edge-to-cloud approach. They have to be pursuing this in a way that views the edge, the data center, and the cloud as part of an integrated continuum, which enables the movement of data when needed. You heard about swarm learning when you talked with my colleague Dr. Goh, where there's a balance of what is computed where in the infrastructure, and there are so many other examples, but you need to be able to move compute to where the data is, and you need to be able to do that efficiently, with a unified approach to the architecture. And that's where assets like the HPE Data Fabric come into play, which enable that kind of unification across the different locations of equipment. It also means you need to think differently about the actual building blocks themselves. In a lot of edge environments, if you take a classic 19-inch rack-mount compute device that was originally designed for the data center, it's simply not the right kind of infrastructure.
So that's why we have offerings like the Edgeline portfolio and the HPE products there, because they're designed to operate in those environments, with different environmentals than you find in the data center, and with different interfaces to systems of action and systems of control than you'd typically find in a data center environment, yet still bringing many of the security benefits and the manageability benefits that we've talked about earlier in our conversation today, Dave. So it's definitely going to be an evolving, new architectural approach at the edge, and companies that are thoughtful about their choice of infrastructure are going to be much more successful than those that take a more incremental approach, and we're excited to be there to help our customers on that journey. >> Yeah, Neil, it's a very exciting time. I mean, you know, much of the innovation in the last decade was found inside the data center, and in your world a lot of times, you know, inside the server itself. But what you're describing is this end-to-end system across the network, and that systems view, and there's going to be a ton of innovation there, and we're very excited for you. Thanks so much for coming on theCube, it was great to see you again. >> It is great to be here, and we're just excited to be here to help our customers, giving them the best value for their workloads, whether that's taking advantage of GreenLake, taking advantage of the innovative security technologies that we've talked about, or being the edge-to-cloud platform as a service company that can help our customers transform in this distributed world, from the edge to the data center to the cloud. Thanks for having me, Dave. >> You're very welcome, awesome summary, and it's always good to see you, Neil. Thank you for watching, everybody, this is David Vellante for theCube. Our coverage of the HPE Discover 2020 Virtual Experience will be right back after this short break. (soft upbeat music)
VxRail: Taking HCI to Extremes
>> Announcer: From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi, I'm Stu Miniman, and welcome to this special presentation. We have a launch from Dell Technologies, updates to the VxRail family. We're going to do things a little bit different here. We actually have a launch video from Shannon Champion of Dell Technologies, and the way we do things a lot of times is, analysts get a little preview, or when you're watching things you might have questions on it. So, rather than me just walking through it, or you wondering yourself, I actually brought in a couple of Dell Technologies experts, two of our Cube alumni. Happy to welcome back to the program Jon Siegal, he is the Vice President of Product Marketing, and Chad Dunn, who's the Vice President of Product Management, both of them with Dell Technologies. Gentlemen, thanks so much for joining us. >> Good to see you, Stu. >> Great to be here. >> All right, and so what we're going to do is we're going to be rolling the video here. I've got a button I'm going to press, Andrew will stop it here, and then we'll kind of dig in a little bit, go into some questions when we're all done. We're actually holding a crowd chat, where you will be able to ask your questions, talk to the experts and everything. And so, a little bit different way to do a product announcement. Hope you enjoy it. And with that, it's VxRail. Taking HCI to the extremes is the theme. We'll see what that means and everything. But without any further ado, let's let Shannon take the video away. >> Hello, and welcome. My name is Shannon Champion, and I'm looking forward to taking you through what's new with VxRail. Let's get started. We have a lot to talk about. Our launch covers new announcements addressing use cases across the Core, Edge and Cloud, and spans both new hardware platforms and options, as well as the latest in software innovations. So let's jump right in.
Before we talk about our announcements, let's talk about where customers are adopting VxRail today. First of all, on behalf of the entire Dell Technologies and VxRail teams, I want to thank each of our over 8,000 customers, big and small, in virtually every industry, who've chosen VxRail to address a broad range of workloads, deploying nearly 100,000 nodes today. Thank you. Our promise to you is that we will add new functionality, improve serviceability, and support new use cases, so that we deliver the most value to you, whether in the Core, at the Edge or for the Cloud. In the Core, VxRail from day one has been a catalyst to accelerate IT transformation. Many of our customers started here, and many will continue to leverage VxRail to simply extend and enhance your VMware environment. Now we can support even more demanding applications, such as in-memory databases like SAP HANA, and more AI and ML applications, with support for more and more powerful GPUs. At the Edge, video surveillance, which also uses GPUs, by the way, is an example of a popular use case leveraging VxRail alongside external storage. And right now, we all know the enhanced role that IT is playing, and as it relates to VDI, VxRail has always been a great option for that. In the Cloud, it's all about Kubernetes, and how Dell Technologies Cloud Platform, which is VCF on VxRail, can deliver consistent infrastructure for both traditional and Cloud native applications. And we're doing that together with VMware. VxRail is the only jointly engineered HCI system built with VMware for VMware environments, designed to enhance the native VMware experience. This joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers. >> Alright, so Shannon talked a bit about the important role of IT, of course, right now with the global pandemic going on. It's really putting essential platforms to the test.
So I'd really love to hear what both of you are hearing from customers. Also, VDI, of course: in the early days it was "HCI only does VDI." Now we know there are many solutions, but remote work is putting that back front and center. So, Jon, why don't we start with you on the, what is... (muffled speaking) >> Absolutely. So first of all, Stu, thank you. I want to do a shout out to our VxRail customers around the world. It's really been humbling, inspiring, and just amazing to see the impact our VxRail customers around the world are having on human progress here. Just for a few examples, there are genomics companies that we have running VxRail that have rolled out testing at scale. We also have research universities out in the Netherlands doing antibody detection. The US Navy has stood up a floating hospital to, of course, care for those in need. So "we are here to help," that's been our message to our customers, but it's amazing to see how much they're helping society during this. So just a pleasure there. But as you mentioned, just to hit on the VDI comments, so to your point, HCI, VxRail, VDI, that was an initial use case years ago. And it's been great to see how many of our existing VxRail customers have been able to pivot very quickly, leveraging VxRail to help bring their remote workforce online and support them with their existing VxRail, because VxRail is flexible, it is agile, able to support those multiple workloads. And in addition to that, we've also rolled out some new VDI bundles to make it simpler and more cost-effective for customers, catering to everything from knowledge workers to multimedia workers. You name it, you know, from 250 desktops up to 1,000. But again, back to your point, VxRail, HCI, is well beyond VDI; it crossed the chasm a couple years ago, actually.
And VDI now is less than a third of the typical workloads of our customers out there. It now supports a range of workloads, as you heard from Shannon, whether it's video surveillance, whether it's general purpose, all the way to mission critical applications now with SAP HANA. So this has changed the game for sure, and the range of workloads and the flexibility of VxRail are really helping our existing customers during this pandemic. >> Yeah, I agree with you, Jon. We've seen customers really embrace HCI for a number of workloads in their environments, from the ones that we all knew and loved back in the initial days of HCI, now to the mission critical things, and to Cloud native workloads as well, and the sort of efficiencies that customers are able to get from HCI. And specifically, VxRail gives them that ability to pivot when these, shall we say, unexpected circumstances arise. And I think that that's informing their decisions and their opinions on what their IT strategies look like as they move forward. They want that same level of agility and ability to react quickly with their overall infrastructure. >> Excellent. Now I want to get into the announcements. Actually, your team gave me access to the CIO from the city of Amarillo, so maybe my team can dig up that footage, talk about how fast they pivoted, using VxRail to really spin up things fast. So let's hear the announcements first, and then we definitely want to share that customer story a little bit later. So let's get to the actual news that Shannon's going to share. >> Okay, now what's new? I am pleased to announce a number of exciting updates and new platforms to further enable IT modernization across Core, Edge and Cloud.
I will cover each of these announcements in more detail, demonstrating how only VxRail can offer the breadth of platform configurations, automation, orchestration and Lifecycle Management across a fully integrated hardware and software full stack, with consistent, simplified operations, to address the broadest range of traditional and modern applications. I'll start with hybrid Cloud and recap what you may have seen in the Dell Technologies Cloud announcements just a few weeks ago, related to VMware Cloud Foundation on VxRail. Then I'll cover two brand new VxRail hardware platforms and additional options. And finally, I'll circle back to talk about the latest enhancements to our VxRail HCI system software capabilities for Lifecycle Management. Let's get started with our new Cloud offerings based on VxRail. VxRail is the HCI foundation for Dell Technologies Cloud Platform, bringing automation and financial models similar to public Cloud to On-premises environments. VMware recently introduced Cloud Foundation 4.0, which is based on vSphere 7.0. As you likely know by now, vSphere 7.0 was definitely an exciting and highly anticipated release. In keeping with our synchronous release commitment, we introduced VxRail 7.0 based on vSphere 7.0 in late April, which was within 30 days of VMware's release. Two key areas that VMware focused on were embedding containers and Kubernetes into vSphere, unifying them with virtual machines, and improving the work experience for vSphere administrators with vSphere Lifecycle Manager, or vLCM. I'll address the second point in terms of how VxRail fits in in a moment. With VCF 4 with Tanzu, based on vSphere 7.0, customers now have access to a hybrid Cloud platform that supports native Kubernetes workloads and management, as well as your traditional VM-based workloads. So containers are now first class citizens of your private Cloud, alongside traditional VMs, and this is now available with VCF 4.0 on VxRail 7.0.
VxRail's tight integration with VMware Cloud Foundation delivers a simple and direct path not only to the hybrid Cloud, but also to deliver Kubernetes at Cloud scale with one complete automated platform. The second Cloud announcement is also exciting. Recent VCF 4.0 networking advancements have made it easier than ever to get started with hybrid Cloud, because we're now able to offer a more accessible consolidated architecture. And with that, Dell Technologies Cloud Platform can now be deployed with a four-node configuration, lowering the cost of an entry level hybrid Cloud. This enables customers to start smaller and grow their Cloud deployment over time. VCF on VxRail can now be deployed in two different ways. For small environments, customers can utilize a consolidated architecture, which starts with just four nodes. Since the management and workload domains share resources in this architecture, it's ideal for getting started with an entry level Cloud to run general purpose virtualized workloads with a smaller entry point, both in terms of required infrastructure footprint as well as cost, but still with a consistent Cloud operating model. For larger environments, where dedicated resources and role-based access control to separate different sets of workloads are usually preferred, you can choose to deploy a standard architecture, which starts at eight nodes, for independent management and workload domains. A standard implementation is ideal for customers running applications that require dedicated workload domains; that includes Horizon VDI and vSphere with Kubernetes. >> Alright, Jon, there's definitely been a lot of interest in our community around everything that VMware is doing with vSphere 7.0.
I understand that if you want to use the Kubernetes piece, it's VCF that enables that. So we've seen the announcements, and Dell partnering in there. Help us connect that story between, really, the VMware strategy and how they talk about Cloud, and where VxRail fits in that overall Dell Technologies Cloud story. >> Absolutely. So first of all, Stu, VxRail of course is integral to the Dell Technologies Cloud strategy. It's been: VCF on VxRail equals the Dell Technologies Cloud Platform. And this is our flagship on-prem Cloud offering, with which we've been able to enable operational consistency across any Cloud, whether it's on-prem, at the Edge or in the public Cloud. And we've seen the Dell Tech Cloud Platform embraced by customers for a couple key reasons. One is it offers the fastest hybrid Cloud deployment in the market, and this is really thanks to a new subscription offer that we're now offering, where in less than 14 days it can be stood up and running. And really, the Dell Tech Cloud does bring a lot of flexibility in terms of consumption models overall when it comes to VxRail. Secondly, I would say, is fast and easy upgrades. This is what VxRail brings to the table for all workloads, if you will, and it's especially critical in the Cloud. So the full automation of Lifecycle Management across the hardware and software stack, across the VMware software stack, and in the Dell software and hardware supporting that, together this enables essentially the third thing, which is that customers can just relax. They can rest assured that their infrastructure will be continuously validated, and always be in a continuously validated state. And these are the kinds of value propositions that together really fit well with any on-prem Cloud. Now, you take what Shannon just mentioned, and the fact that now you can build and run modern applications on the same VxRail infrastructure alongside traditional applications; this is a game changer. >> Yeah, I love it.
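The consolidated-versus-standard sizing rule Shannon laid out (consolidated starts at four nodes with shared management and workload domains; standard starts at eight with independent domains) could be captured in a small helper like this. The node thresholds come from the launch talk; the function itself is a hypothetical sketch, not a Dell or VMware tool:

```python
def vcf_architecture(nodes: int, dedicated_workload_domains: bool) -> str:
    """Pick a VCF-on-VxRail deployment architecture (hypothetical helper).

    Consolidated: management and workload domains share resources, min 4 nodes.
    Standard: independent management and workload domains, min 8 nodes.
    """
    if dedicated_workload_domains:
        if nodes < 8:
            raise ValueError("standard architecture starts at 8 nodes")
        return "standard"       # independent management + workload domains
    if nodes < 4:
        raise ValueError("consolidated architecture starts at 4 nodes")
    return "consolidated"       # management and workload share resources

print(vcf_architecture(4, dedicated_workload_domains=False))  # consolidated
print(vcf_architecture(8, dedicated_workload_domains=True))   # standard
```

The design choice mirrors the talk: workload isolation (dedicated domains with role-based access control) is what forces the larger standard footprint, while the consolidated entry point trades isolation for a lower cost of entry.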
I remember in the early days talking with Dunn about HCI, how does that fit in with the Cloud discussion, and the line I've used the last couple years is: modernize the platform, then you can modernize the application. So as companies are doing their full modernization, this plays into what you're talking about. All right, we can let Shannon continue, and we can get some more before we dig into some more analysis. >> That's good. >> Let's talk about new hardware platforms and updates that result in literally thousands of potential new configuration options, covering a wide breadth of modern and traditional application needs across a range of use cases. First up, I am incredibly excited to announce a brand new Dell EMC VxRail series, the D Series. This is a ruggedized, durable platform that delivers the full power of VxRail for workloads at the Edge, in challenging environments, or for space constrained areas. VxRail D Series offers the same compelling benefits as the rest of the VxRail portfolio, with simplicity, agility and Lifecycle Management, but in a lightweight, durable, short-depth form factor, at only 20 inches, that's extremely temperature-resilient, shock resistant, and easily portable. It even meets milspec standards. That means you have the full power of lifecycle automation with VxRail HCI system software and 24 by seven single point of support, enabling you to rapidly react to business needs, no matter the location or how harsh the conditions. So whether you're deploying a data center at a mobile command base, running real-time GPS mapping on the go, or implementing video surveillance in remote areas, you can ensure availability, integrity and confidence for every workload with the new VxRail ruggedized D Series.
I remember seeing, Dell servers ruggedized, of course, Edge, really important growth to build on what Jon was talking about, Cloud. So, Chad, bring us inside, what was driving this piece of the offering? >> Sure Stu. Yeah, yeah, having been at the hardware platforms that can go out into some of these remote locations is really important. And that's being driven by the fact that customers are looking for compute performance and storage out at some of these Edges or some of the more exotic locations. whether that's manufacturing plants, oil rigs, submarine ships, military applications, places that we've never heard of. But it's also about extending that operational simplicity of the the sort of way that you're managing your data center that has VxRails you're managing your Edges the same way using the same set of tools. You don't need to learn anything else. So operational simplicity is absolutely key here. But in those locations, you can take a product that's designed for a data center where definitely controlling power cooling space and take it some of these places where you get sand blowing or seven to zero temperatures, could be Baghdad or it could be Ketchikan, Alaska. So we built this D series that was able to go to those extreme locations with extreme heat, extreme cold, extreme altitude, but still offer that operational simplicity. Now military is one of those applications for the rugged platform. If you look at the resistance that it has to heat, it operates at a 45 degrees Celsius or 113 degrees Fahrenheit range, but it can do an excursion up to 55 C or 131 degrees Fahrenheit for up to eight hours. It's also resistant to heat sand, dust, vibration, it's very lightweight, short depth, in fact, it's only 20 inches deep. This is a smallest form factor, obviously that we have in the VxRail family. And it's also built to be able to withstand sudden shocks certified to withstand 40 G's of shock and operation of the 15,000 feet of elevation. Pretty high. 
And this is sort of like where skydivers go when they want the real thrill of skydiving, where you actually need oxygen to breathe at that altitude. They're milspec-certified: MIL-STD-810G, which I keep right beside my bed and read every night. And it comes with the VxRail STIG hardening package, which packages scripts so that you can auto-lock-down the VxRail environment. And we've got a few other certifications that are on the roadmap now, for naval shock requirements, EMI and radiation immunity, and so on. >> Yeah, it's funny, I remember when we first launched, it was like, "Oh, well, everything's going to white boxes, and it's going to be massive, no differentiation between everything out there." If you look at what you're offering, and if you look at how public Clouds build their things, as I've said for a few years, there's pure optimization. So you need scale, you need similarities, but you also need to fit some very specific requirements in lots of places. So, interesting stuff. Yeah, certifications always keep your teams busy. Alright, let's get back to Shannon for more of the report.
And Intel Optane DC persistent memory is here, and it offers high performance and significantly increased memory capacity with data persistence at an affordable price. Data persistence is a critical feature that maintains data integrity, even when power is lost, enabling quicker recovery and less downtime. With support for Intel Optane DC persistent memory, customers can expand memory-intensive workloads and use cases like SAP HANA. Alright, let's finally dig into our HCI system software, which is the core differentiation for VxRail regardless of your workload or platform choice. Our joint engineering with VMware and investments in VxRail HCI system software innovation together deliver an optimized operational experience at reduced risk for our customers. Under the covers, VxRail offers best in class hardware, married with VMware HCI software, either vSAN or VCF. But what makes us different stems from our investments to integrate the two. Dell Technologies has a dedicated VxRail team of about 400 people to build, market, sell and support a fully integrated hyperconverged system. That team has also developed our unique VxRail HCI system software, which is a suite of integrated software elements that extend VMware native capabilities to deliver a seamless, automated operational experience that customers cannot find elsewhere. The key components of VxRail HCI system software, shown around the arc here, include VxRail Manager, full stack lifecycle management, ecosystem connectors, and support. I don't have time to get into all the details of these elements today, but if you're interested in learning more, I encourage you to meet our experts, and I will tell you how to do that in a moment. I touched on LCM being a key feature of vSphere 7.0 earlier, and I'd like to take the opportunity to expand on that a bit in the context of VxRail Lifecycle Management.
The LCM adds valuable automation to the execution of updates for customers, but it doesn't eliminate the manual work still needed to define and package the updates and validate all of the components prior to applying them. With VxRail, customers have all of these areas addressed automatically on their behalf, freeing them to put their time into other important functions for their business. Customers tell us that Lifecycle Management continues to be a major source of the maintenance effort they put into their infrastructure, that it tends to lead to overburdened IT staff, that it can cause disruptions to the business if not managed effectively, and that it isn't the most efficient economically. Automation of Lifecycle Management in VxRail results in the utmost simplicity from a customer experience perspective, and offers operational freedom from maintaining infrastructure. But as shown here, our customers not only realize greater IT team efficiencies, they have also reduced downtime with fewer unplanned outages, and reduced overall cost of operations. With VxRail HCI system software, intelligent Lifecycle Management upgrades of the fully integrated hardware and software stack are automated, keeping clusters in continuously validated states while minimizing risks and operational costs. How do we ensure continuously validated states for VxRail? VxRail labs execute an extensive, automated, repeatable process on every firmware and software upgrade and patch to ensure clusters are in continuously validated states of the customer's choosing across their VxRail environment. The VxRail labs are constantly testing, analyzing, optimizing, and sequencing all of the components in the upgrade to execute in a single package for the full stack. All the while, VxRail is backed by Dell EMC's world class services and support, with a single point of contact for both hardware and software.
IT productivity skyrockets with single-click, non-disruptive upgrades of the fully integrated hardware and software stack, without the need to do extensive research and testing, taking you to the next VxRail version of your choice while always in a continuously validated state. You can also confidently execute automated VxRail upgrades no matter what hardware generation or node types are in the cluster; they don't have to all be the same. And upgrades with VxRail are faster and more efficient with leapfrogging: simply choose any VxRail version you desire, and be assured you will get there in a validated state while seamlessly bypassing any other release in between. Only VxRail can do that. >> All right, so Chad, the lifecycle management piece that Shannon was just talking about is not the sexiest, it's often underappreciated. There's not only the years of experience, but the continuous work you're doing; it reminds me back to the early vSAN deployments versus VxRail, jointly developed, jointly tested between Dell and VMware. So bring us inside why, in 2020, Lifecycle Management is still a very important piece, especially in the VxRail family line. >> Yes, Stu, I think it's sexy, but I'm a pretty big nerd. (all laughing) Yeah, this has really always been our bread and butter. And in fact, it gets even more important the larger the deployments become, when you start to look at data centers full of VxRails and all the different hardware, software and firmware combinations that could exist out there. It's really the value that you get out of that VxRail HCI system software that Shannon was talking about, and how it's optimized around the VMware use case. Very tightly integrated with each VMware component, of course, and the intelligence of being able to do all the firmware, all of the drivers, all the software, all together, is tremendous value to our customers. But to deliver that we really need to make a fairly large investment.
So as Shannon mentioned, we run about 25,000 hours of testing across each major release; for patches, express patches, it's about 7,000 hours for each of those. So, obviously, there's a lot of parallelism, and we're always developing new test scenarios for each release that we need to build in as we introduce new functionality. And one of the key things that we're able to do, as Shannon mentioned, is to be able to leapfrog releases and get you to that next validated state. We've got about 100 engineers just working on creating and executing those test cases on a continuous basis, and obviously, a huge amount of automation. And we've talked about that investment to execute those tests; that's north of $60 million of investment in our lab. In fact, we've got just over 2000 VxRail units in our testbed across the US, Shanghai, China and Cork, Ireland. So a massive amount of testing of each of those components to make sure that they operate together in a validated state. >> Yeah, well, absolutely, it's super important not only for the day one, but the day two deployments. But I think this is actually a great place for us to bring in that customer that Dell gave me access to. So we've got the CIO of Amarillo, Texas; he was an existing VxRail customer. And he's going to explain what happened, how he needed to react really fast to support the work-from-home initiative, as well as, we get to hear in his words the value of what Lifecycle Management means. So Andrew, if we could queue up that customer segment, please? >> It's been massive, and it's been interesting to see the IT team absorb it. As we mature, I think they embrace the ability to be innovative and to work with our departments. But this instance really justified why I was driving progress so fervently, why it was so urgent today. Three years ago, the answer would have been no. We wouldn't have been in a place where we could adapt. With VxRail in place, in a week we spun up hundreds of instances.
We spun up a 75-person call center in a day and a half for our public health. We rolled out multiple applications for public health so they could do remote clinics. It's given us the flexibility to be able to roll out new solutions very quickly and be very adaptive. And it's not only been apparent to my team, but it's really made an impact on the business. And now what I'm seeing is that those of my customers that were a little lagging or a little conservative are understanding the impact of modernizing the way they do business, because it makes them adaptable as well. >> Alright, so great, Richard, you talked a bunch about the efficiencies that IT put in place, and you talked about how fast you spun up these new VDI instances and the need to be able to do things much simpler. So how does the overall Lifecycle Management fit into this discussion? >> It makes it so much easier. In the old environment, one, it took a lot of man hours to make change, and it was very disruptive when we did make change. It overburdened, I guess that's the word I'm looking for, it really overburdened our staff and caused disruption to business. That wasn't cost efficient. And then simple things like, I've worked for multi-billion dollar companies where we had massive QA environments that replicated production; you simply can't afford that at local government. Having this sort of environment lets me do a scaled-down QA environment and still get the benefit of rolling out non-disruptive change. As I said earlier, it's allowed us to take all of those cycles that we were spending on Lifecycle Management, because it's greatly simplified, and move those resources and reskill them in other areas where we can actually have more impact on the business. It's hard to be innovative when 100% of your cycles are just keeping the ship afloat.
>> All right, well, nothing better than hearing it straight from the end user, the public sector reacting very fast to COVID-19. And if you heard him, he said that before he had run this project, he would not have been able to respond. So I think everybody out there understands: if I didn't actually have access to the latest technology, it would be much harder. All right, I'm looking forward to doing the CrowdChat, letting everybody else dig in with questions and follow up a little bit more, but I believe Shannon's got one more announcement for us though. Let's roll the final video clip. >> In our latest software release, VxRail 4.7.510, we continue to add new automation and self-service features. New functionality enables you to schedule and run upgrade health checks in advance of upgrades, to ensure clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade windows, as they can be assured the clusters will seamlessly upgrade within that window. Of course, running health checks on a regular basis also helps ensure that your clusters are always ready for unscheduled patches and security updates. We are also offering more flexibility in getting all nodes or clusters to a common release level, with the ability to reimage nodes or clusters to a specific VxRail version, or down-rev one or more nodes that may have shipped at a higher revision than the existing cluster. This enables you to easily choose your validated state when adding new nodes or repurposing nodes in a cluster. To sum up all of our announcements: whether you are accelerating data center modernization, extending HCI to harsh Edge environments, or deploying an on-premises Dell Technologies Cloud platform to create a developer-ready Kubernetes infrastructure, VxRail is there, delivering a turnkey experience that enables you to continuously innovate, realize operational freedom and predictably evolve.
VxRail provides an extensive breadth of platform configurations, automation and Lifecycle Management across the integrated hardware and software full stack, and consistent hybrid Cloud operations, to address the broadest range of traditional and modern applications across Core, Edge and Cloud. I now invite you to engage with us. First, the virtual passport program is an opportunity to have some fun while learning about VxRail's new features and functionality, and score some sweet digital swag while you're at it, delivered via an augmented reality app. All you need is your device. So go to vxrail.is/passport to get started. And secondly, if you have any questions about anything I talked about or want a deeper conversation, we encourage you to join one of our exclusive VxRail Meet The Experts sessions, available for a limited time, first come first served. Just go to vxrail.is/expertsession to learn more. >> All right, well, obviously, with everyone being remote, there's different ways we're looking to engage. So we've got the CrowdChat right after this. But Jon, give us a little bit more as to how Dell's making sure to stay in close contact with customers and what you've got for options for them. >> Yeah, absolutely. So as Shannon said, since we didn't do Tech World this year in person, where we could have those great in-person interactions and answer questions, whether it's in the booth or in meeting rooms, we are going to have these Meet The Experts sessions over the next couple weeks, and we're going to put our best and brightest from our technical community out there and make them accessible to everyone. So again, definitely encourage you: we're trying new things here in this virtual environment to ensure that we can still stay in touch, answer questions, be responsive, and we're really looking forward to having these conversations over the next couple of weeks. >> All right, well, Jon and Chad, thank you so much.
We definitely look forward to the conversation here, and to continuing it. If you're here live, definitely go down below and join in; if you're watching this on demand, you can see the full transcript of it at crowdchat.net/vxrailrocks. For myself, Shannon on the video, Jon, Chad, and Andrew, the man in the booth there, thank you so much for watching, and go ahead and join the CrowdChat.
VxRail: Taking HCI to Extremes
>> Announcer: From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi, I'm Stu Miniman, and welcome to this special presentation. We have a launch from Dell Technologies: updates to the VxRail family. We're going to do things a little bit different here. We actually have a launch video from Shannon Champion of Dell Technologies. And the way we do things a lot of times is, analysts get a little preview, and when you're watching things, you might have questions on it. So, rather than me just watching it, or you watching it yourself, I actually brought in a couple of Dell Technologies experts, two of our Cube alumni. Happy to welcome you back to the program: Jon Siegal, he is the Vice President of Product Marketing, and Chad Dunn, who's the Vice President of Product Management, both of them with Dell Technologies. Gentlemen, thanks so much for joining us. >> Good to see you Stu. >> Great to be here. >> All right, and so what we're going to do is we're going to be rolling the video here. I've got a button I'm going to press, Andrew will stop it here and then we'll kind of dig in a little bit, go into some questions when we're all done. We're actually holding a crowd chat, where you will be able to ask your questions, talk to the experts and everything. And so, a little bit different way to do a product announcement. Hope you enjoy it. And with that, it's "VxRail: Taking HCI to the Extremes", that's the theme. We'll see what that means and everything. But without any further ado, let's let Shannon take the video away. >> Hello, and welcome. My name is Shannon Champion, and I'm looking forward to taking you through what's new with VxRail. Let's get started. We have a lot to talk about. Our launch covers new announcements addressing use cases across the Core, Edge and Cloud, and spans both new hardware platforms and options, as well as the latest in software innovations. So let's jump right in.
Before we talk about our announcements, let's talk about where customers are adopting VxRail today. First of all, on behalf of the entire Dell Technologies and VxRail teams, I want to thank each of our over 8000 customers, big and small, in virtually every industry, who've chosen VxRail to address a broad range of workloads, deploying nearly 100,000 nodes today. Thank you. Our promise to you is that we will add new functionality, improve serviceability, and support new use cases, so that we deliver the most value to you, whether in the Core, at the Edge or for the Cloud. In the Core, VxRail from day one has been a catalyst to accelerate IT transformation. Many of our customers started here, and many will continue to leverage VxRail to simply extend and enhance their VMware environment. Now we can support even more demanding applications, such as in-memory databases like SAP HANA, and more AI and ML applications, with support for more and more powerful GPUs. At the Edge, video surveillance, which also uses GPUs, by the way, is an example of a popular use case leveraging VxRail alongside external storage. And right now we all know the enhanced role that IT is playing. And as it relates to VDI, VxRail has always been a great option for that. In the Cloud, it's all about Kubernetes, and how Dell Technologies Cloud platform, which is VCF on VxRail, can deliver consistent infrastructure for both traditional and Cloud native applications. And we're doing that together with VMware. VxRail is the only jointly engineered HCI system built with VMware for VMware environments, designed to enhance the native VMware experience. This joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers. >> Alright, so Shannon talked a bit about the important role of IT, of course, right now, with the global pandemic going on. It's really calling on essential services and putting platforms to the test.
So, I'd really love to hear what both of you are hearing from customers. Also, VDI, of course, in the early days, it was "HCI only does VDI". Now, we know there are many solutions, but remote work is putting that back front and center. So, Jon, why don't we start with you on what you're hearing? (muffled speaking) >> Absolutely. So first of all, Stu, thank you, I want to do a shout out to our VxRail customers around the world. It's really been humbling, inspiring, and just amazing to see the impact our VxRail customers around the world are having on human progress here. Just for a few examples, there are genomics companies that we have running VxRail that have rolled out testing at scale. We also have research universities out in the Netherlands doing antibody detection. The US Navy has stood up a floating hospital to, of course, care for those in need. So "we are here to help", that's been our message to our customers, but it's amazing to see how much they're helping society during this. So just a pleasure there. But as you mentioned, just to hit on the VDI comments, to your point too, HCI, VxRail, VDI, that was an initial use case years ago. And it's been great to see how many of our existing VxRail customers have been able to pivot very quickly, leveraging VxRail to help bring their remote workforce online and support them with their existing VxRail, because VxRail is flexible and agile, able to support those multiple workloads. And in addition to that, we've also rolled out some new VDI bundles to make it simpler and more cost effective for customers, catering to everything from knowledge workers to multimedia workers, you name it, from 250 desktops up to 1000. But again, back to your point, VxRail, HCI, is well beyond VDI; it crossed the chasm a couple years ago actually.
And VDI now is less than a third of the typical workloads of any of our customers out there. It supports now a range of workloads, as you heard from Shannon, whether it's video surveillance, whether it's general purpose, all the way to mission critical applications now with SAP HANA. So, this has changed the game for sure. But it's the range of workloads and the flexibility of VxRail which is really helping our existing customers during this pandemic. >> Yeah, I agree with you, Jon, we've seen customers really embrace HCI for a number of workloads in their environments, from the ones that we all knew and loved back in the initial days of HCI, now to the mission critical things, now to Cloud native workloads as well, and the sort of efficiencies that customers are able to get from HCI. And specifically, VxRail gives them that ability to pivot when these, shall we say, unexpected circumstances arise. And I think that that's informing their decisions and their opinions on what their IT strategies look like as they move forward. They want that same level of agility, and ability to react quickly, with their overall infrastructure. >> Excellent. Now I want to get into the announcements. Actually, your team gave me access to the CIO from the city of Amarillo, so maybe my team can dig up that footage; he talks about how fast they pivoted, using VxRail to really spin things up fast. So let's hear from the announcement first, and then we definitely want to share that customer story a little bit later. So let's get to the actual news that Shannon's going to share.
I will cover each of these announcements in more detail, demonstrating how only VxRail can offer the breadth of platform configurations, automation, orchestration and Lifecycle Management across a fully integrated hardware and software full stack, with consistent, simplified operations, to address the broadest range of traditional and modern applications. I'll start with hybrid Cloud and recap what you may have seen in the Dell Technologies Cloud announcements just a few weeks ago related to VMware Cloud Foundation on VxRail. Then I'll cover two brand new VxRail hardware platforms and additional options. And finally I'll circle back to talk about the latest enhancements to our VxRail HCI system software capabilities for Lifecycle Management. Let's get started with our new Cloud offerings based on VxRail. VxRail is the HCI foundation for Dell Technologies Cloud Platform, bringing automation and financial models similar to public Cloud to on-premises environments. VMware recently introduced Cloud Foundation 4.0, which is based on vSphere 7.0. As you likely know by now, vSphere 7.0 was definitely an exciting and highly anticipated release. In keeping with our synchronous release commitment, we introduced VxRail 7.0 based on vSphere 7.0 in late April, which was within 30 days of VMware's release. Two key areas that VMware focused on were embedding containers and Kubernetes into vSphere, unifying them with virtual machines, and the second is improving the work experience for vSphere administrators with vSphere Lifecycle Manager, or VLCM. I'll address the second point a bit, in terms of how VxRail fits in, in a moment. With VCF 4 with Tanzu, based on vSphere 7.0, customers now have access to a hybrid Cloud platform that supports native Kubernetes workloads and management, as well as your traditional VM-based workloads. So containers are now first class citizens of your private Cloud alongside traditional VMs, and this is now available with VCF 4.0 on VxRail 7.0.
VxRail's tight integration with VMware Cloud Foundation delivers a simple and direct path not only to the hybrid Cloud, but also to delivering Kubernetes at Cloud scale with one complete automated platform. The second Cloud announcement is also exciting. Recent VCF 4 networking advancements have made it easier than ever to get started with hybrid Cloud, because we're now able to offer a more accessible consolidated architecture. And with that, Dell Technologies Cloud Platform can now be deployed with a four-node configuration, lowering the cost of an entry level hybrid Cloud. This enables customers to start smaller and grow their Cloud deployment over time. VCF on VxRail can now be deployed in two different ways. For small environments, customers can utilize a consolidated architecture, which starts with just four nodes. Since the management and workload domains share resources in this architecture, it's ideal for getting started with an entry level Cloud to run general purpose virtualized workloads with a smaller entry point, both in terms of required infrastructure footprint as well as cost, but still with a consistent Cloud operating model. For larger environments, where dedicated resources and role-based access control to separate different sets of workloads are usually preferred, you can choose to deploy a standard architecture, which starts at eight nodes, for independent management and workload domains. A standard implementation is ideal for customers running applications that require dedicated workload domains; that includes Horizon VDI and vSphere with Kubernetes. >> Alright, Jon, there's definitely been a lot of interest in our community around everything that VMware is doing with vSphere 7.0.
We understand that if you want to use the Kubernetes piece, it's VCF that enables that. So we've seen the announcements, Dell partnering in there. Help us connect that story between really the VMware strategy, how they talk about Cloud, and where VxRail fits in that overall Dell Tech Cloud story. >> Absolutely. So first of all, Stu, VxRail of course is integral to the Dell Tech Cloud strategy. It's been VCF on VxRail equals the Dell Tech Cloud platform, and this is our flagship on-prem Cloud offering that we've been able to use to enable operational consistency across any Cloud, whether it's on-prem, at the Edge or in the public Cloud. And we've seen the Dell Tech Cloud Platform embraced by customers for a couple key reasons. One is it offers the fastest hybrid Cloud deployment in the market, and this is really thanks to a new subscription offer that we're now offering out there, where in less than 14 days it can be stood up and running. And really, the Dell Tech Cloud does bring a lot of flexibility in terms of consumption models overall when it comes to VxRail. Secondly, I would say, is fast and easy upgrades. This is what VxRail brings to the table for all workloads, if you will, and it's especially critical in the Cloud: the full automation of Lifecycle Management across the hardware and software stack, across the VMware software stack and the Dell software and hardware supporting that. Together, this enables essentially the third thing, which is customers can just relax. They can rest assured that their infrastructure will be continuously validated, and always be in a continuously validated state. And those three value propositions together really fit well with any on-prem Cloud. Now you take what Shannon just mentioned, the fact that now you can build and run modern applications on the same VxRail infrastructure alongside traditional applications: this is a game changer.
I remember in the early days talking with Dunn about CI, how does that fit in with Cloud discussion and the line I've used the last couple years is, modernize the platform, then you can modernize the application. So as companies are doing their full modernization, then this plays into what you're talking about. All right, we can let Shannon continue, we can get some more before we dig into some more analysis. >> That's good. >> Let's talk about new hardware platforms and updates. that result in literally thousands of potential new configuration options. covering a wide breadth of modern and traditional application needs across a range of the actual use cases. First up, I am incredibly excited to announce a brand new Dell EMC VxRail series, the D series. This is a ruggedized durable platform that delivers the full power of VxRail for workloads at the Edge in challenging environments or for space constrained areas. VxRail D series offers the same compelling benefits as the rest of the VxRail portfolio with simplicity, agility and lifecycle management. But in a lightweight short depth at only 20 inches, it's adorable form factor that's extremely temperature-resilient, shock resistant, and easily portable. It even meets milspec standards. That means you have the full power of lifecycle automation with VxRail HCI system software and 24 by seven single point of support, enabling you to rapidly react to business needs, no matter the location or how harsh the conditions. So whether you're deploying a data center at a mobile command base, running real-time GPS mapping on the go, or implementing video surveillance in remote areas, you can ensure availability, integrity and confidence for every workload with the new VxRail ruggedized D series. >> All right, Chad we would love for you to bring us in a little bit that what customer requirement for bringing this to market. 
I remember seeing, Dell servers ruggedized, of course, Edge, really important growth to build on what Jon was talking about, Cloud. So, Chad, bring us inside, what was driving this piece of the offering? >> Sure Stu. Yeah, yeah, having been at the hardware platforms that can go out into some of these remote locations is really important. And that's being driven by the fact that customers are looking for compute performance and storage out at some of these Edges or some of the more exotic locations. whether that's manufacturing plants, oil rigs, submarine ships, military applications, places that we've never heard of. But it's also about extending that operational simplicity of the the sort of way that you're managing your data center that has VxRails you're managing your Edges the same way using the same set of tools. You don't need to learn anything else. So operational simplicity is absolutely key here. But in those locations, you can take a product that's designed for a data center where definitely controlling power cooling space and take it some of these places where you get sand blowing or seven to zero temperatures, could be Baghdad or it could be Ketchikan, Alaska. So we built this D series that was able to go to those extreme locations with extreme heat, extreme cold, extreme altitude, but still offer that operational simplicity. Now military is one of those applications for the rugged platform. If you look at the resistance that it has to heat, it operates at a 45 degrees Celsius or 113 degrees Fahrenheit range, but it can do an excursion up to 55 C or 131 degrees Fahrenheit for up to eight hours. It's also resistant to heat sand, dust, vibration, it's very lightweight, short depth, in fact, it's only 20 inches deep. This is a smallest form factor, obviously that we have in the VxRail family. And it's also built to be able to withstand sudden shocks certified to withstand 40 G's of shock and operation of the 15,000 feet of elevation. Pretty high. 
And this is sort of like wherever skydivers go to when they want the real thrill of skydiving, where you actually need oxygen to be at that altitude. They're milspec-certified, so, MIL-STD-810G, which I keep right beside my bed and read every night. And it comes with a VxRail STIG hardening package, which is packaging scripts so that you can auto lock down the VxRail environment. And we've got a few other certifications that are on the roadmap now for naval shock requirements, EMI and radiation immunity as well. >> Yeah, it's funny, I remember when we first launched, it was like, "Oh, well everything's going to white boxes, and it's going to be massive, no differentiation between everything out there." If you look at what you're offering, if you look at how public Clouds build their things, what I've called it the last few years is, there's pure optimization. So you need the scale, you need similarities, but you need to fit some very specific requirements, lots of places, so, interesting stuff. Yeah, certifications always keep your teams busy. Alright, let's get back to Shannon to continue on the report. >> We are also introducing three other hardware-based additions. First, a new VxRail E Series model based, for the first time, on AMD EPYC processors. These single-socket 1U nodes offer dual-socket performance with CPU options that scale from eight to 64 cores, up to a terabyte of memory and multiple storage options, making it an ideal platform for desktop VDI, analytics and computer-aided design. Next, the addition of the latest Nvidia Quadro RTX GPUs brings the most significant advancement in computer graphics in over a decade to professional workflows. Designers and artists across industries can now expand the boundary of what's possible, working with the largest and most complex graphics rendering, deep learning and visual computing workloads.
And Intel Optane DC persistent memory is here, and it offers high performance and significantly increased memory capacity with data persistence at an affordable price. Data persistence is a critical feature that maintains data integrity, even when power is lost, enabling quicker recovery and less downtime. With support for Intel Optane DC persistent memory, customers can expand in-memory intensive workloads and use cases like SAP HANA. Alright, let's finally dig into our HCI system software, which is the core differentiation for VxRail regardless of your workload or platform choice. Our joint engineering with VMware and investments in VxRail HCI system software innovation together deliver an optimized operational experience at reduced risk for our customers. Under the covers, VxRail offers best-in-class hardware, married with VMware HCI software, either vSAN or VCF. But what makes us different stems from our investments to integrate the two. Dell Technologies has a dedicated VxRail team of about 400 people to build, market, sell and support a fully integrated hyperconverged system. That team has also developed our unique VxRail HCI system software, which is a suite of integrated software elements that extend VMware native capabilities to deliver a seamless, automated operational experience that customers cannot find elsewhere. The key components of VxRail HCI system software, shown around the arc here, include VxRail Manager, full-stack lifecycle management, ecosystem connectors, and support. I don't have time to get into all the details of these elements today, but if you're interested in learning more, I encourage you to meet our experts. And I will tell you how to do that in a moment. I touched on LCM being a key feature of vSphere 7.0 earlier, and I'd like to take the opportunity to expand on that a bit in the context of VxRail Lifecycle Management.
The vSphere LCM adds valuable automation to the execution of updates for customers, but it doesn't eliminate the manual work still needed to define and package the updates and validate all of the components prior to applying them. With VxRail, customers have all of these areas addressed automatically on their behalf, freeing them to put their time into other important functions for their business. Customers tell us that lifecycle management continues to be a major source of the maintenance effort they put into their infrastructure, that it tends to lead to overburdened IT staff, that it can cause disruptions to the business if not managed effectively, and that it isn't the most efficient economically. Automation of Lifecycle Management in VxRail results in the utmost simplicity from a customer experience perspective, and offers operational freedom from maintaining infrastructure. But as shown here, our customers not only realize greater IT team efficiencies, they have also reduced downtime with fewer unplanned outages, and reduced overall cost of operations. With VxRail HCI system software, intelligent Lifecycle Management upgrades of the fully integrated hardware and software stack are automated, keeping clusters in continuously validated states while minimizing risks and operational costs. How do we ensure continuously validated states for VxRail? VxRail labs execute an extensive, automated, repeatable process on every firmware and software upgrade and patch to ensure clusters are in continuously validated states of the customer's choosing across their VxRail environment. The VxRail labs are constantly testing, analyzing, optimizing, and sequencing all of the components in the upgrade to execute in a single package for the full stack. All the while, VxRail is backed by Dell EMC's world-class services and support, with a single point of contact for both hardware and software.
IT productivity skyrockets with single-click, non-disruptive upgrades of the fully integrated hardware and software stack, without the need to do extensive research and testing, taking you to the next VxRail version of your choice while always staying in a continuously validated state. You can also confidently execute automated VxRail upgrades no matter what hardware generation or node types are in the cluster. They don't have to all be the same. And upgrades with VxRail are faster and more efficient with leapfrogging: simply choose any VxRail version you desire, and be assured you will get there in a validated state while seamlessly bypassing any other release in between. Only VxRail can do that. >> All right, so Chad, the lifecycle management piece that Shannon was just talking about is not the sexiest; it's often underappreciated. There's not only the years of experience, but the continuous work you're doing. It reminds me back to the early vSAN deployments versus VxRail, jointly developed and jointly tested between Dell and VMware. So bring us inside why, in 2020, Lifecycle Management is still a very important piece, especially in the VMware family line. >> Yes, Stu, I think it's sexy, but I'm a pretty big nerd. (all laughing) Yeah, this has really always been our bread and butter. And in fact, it gets even more important the larger the deployments become, when you start to look at data centers full of VxRails and all the different hardware, software, firmware combinations that could exist out there. It's really the value that you get out of that VxRail HCI system software that Shannon was talking about and how it's optimized around the VMware use case. Very tightly integrated with each VMware component, of course, and the intelligence of being able to do all the firmware, all of the drivers, all the software all together is of tremendous value to our customers. But to deliver that, we really need to make a fairly large investment.
So as Shannon mentioned, we run about 25,000 hours of testing across each major release; for patches, express patches, that's about 7,000 hours for each of those. So, obviously, there's a lot of parallelism. And we're always developing new test scenarios for each release that we need to build in as we introduce new functionality. And one of the key things that we're able to do, as Shannon mentioned, is to be able to leapfrog releases and get you to that next validated state. We've got about 100 engineers just working on creating and executing those test cases on a continuous basis and, obviously, a huge amount of automation. And we've talked about that investment to execute those tests. That's north of $60 million of investment in our lab. In fact, we've got just over 2,000 VxRail units in our testbed across the US, Shanghai, China and Cork, Ireland. So a massive amount of testing of each of those components to make sure that they operate together in a validated state. >> Yeah, well, absolutely, it's super important not only for the day one, but the day two deployments. But I think this is actually a great place for us to bring in that customer that Dell gave me access to. So we've got the CIO of Amarillo, Texas; he was an existing VxRail customer. And he's going to explain what happened as to how he needed to react really fast to support the work-from-home initiative, as well as we get to hear, in his words, the value of what Lifecycle Management means. So Andrew, if we could queue up that customer segment, please? >> It's been massive, and it's been interesting to see the IT team absorb it. As we mature, I think they embrace the ability to be innovative and to work with our departments. But this instance really justified why I was driving progress so fervently, why it was so urgent today. Three years ago, the answer would have been no. We wouldn't have been in a place where we could adapt. With VxRail in place, in a week we spun up hundreds of instances.
We spun up a 75-person call center in a day and a half for our public health. We rolled out multiple applications for public health so they could do remote clinics. It's given us the flexibility to be able to roll out new solutions very quickly and be very adaptive. And it's not only been apparent to my team, but it's really made an impact on the business. And now what I'm seeing is those of my customers that were a little lagging or a little conservative are understanding the impact of modernizing the way they do business, because it makes them adaptable as well. >> Alright, so great, Richard. You talked a bunch about the efficiencies that the IT team put in place, and about how fast you spun up these new VDI instances and the need to be able to do things much simpler. So how does the overall Lifecycle Management fit into this discussion? >> It makes it so much easier. In the old environment, one, it took a lot of man-hours to make change. It was very disruptive when we did make change; it overburdened, I guess that's the word I'm looking for, it really overburdened our staff and caused disruption to the business. That wasn't cost-efficient. And then simple things like, I've worked for multi-billion-dollar companies where we had massive QA environments that replicated production; we simply can't afford that in local government. Having this sort of environment lets me do a scaled-down QA environment and still get the benefit of rolling out non-disruptive change. As I said earlier, it's allowed us to take all of those cycles that we were spending on Lifecycle Management, because it's greatly simplified, and move those resources and reskill them in other areas where we can actually have more impact on the business. It's hard to be innovative when 100% of your cycles are just keeping the ship afloat.
>> All right, well, nothing better than hearing it straight from the end user: public sector reacting very fast to COVID-19. And if you heard him, he said that before he had run this project, he would not have been able to respond. So I think everybody out there understands: if I didn't actually have access to the latest technology, it would be much harder. All right, I'm looking forward to doing the CrowdChat, letting everybody else dig in with questions and follow up a little bit more, but I believe there's one more announcement they've got for us. Let's roll the final video clip. >> In our latest software release, VxRail 4.7.510, we continue to add new automation and self-service features. New functionality enables you to schedule and run upgrade health checks in advance of upgrades, to ensure clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade windows, as they can be assured the clusters will seamlessly upgrade within that window. Of course, running health checks on a regular basis also helps ensure that your clusters are always ready for unscheduled patches and security updates. We are also offering more flexibility in getting all nodes or clusters to a common release level, with the ability to reimage nodes or clusters to a specific VxRail version, or down-rev one or more nodes that may have shipped at a higher rev than the existing cluster. This enables you to easily choose your validated state when adding new nodes or repurposing nodes in a cluster. To sum up all of our announcements: whether you are accelerating data center modernization, extending HCI to harsh Edge environments, or deploying an on-premises Dell Technologies Cloud platform to create a developer-ready Kubernetes infrastructure, VxRail is there, delivering a turn-key experience that enables you to continuously innovate, realize operational freedom and predictably evolve.
VxRail provides an extensive breadth of platform configurations, automation and Lifecycle Management across the integrated hardware and software full stack, and consistent hybrid Cloud operations, to address the broadest range of traditional and modern applications across Core, Edge and Cloud. I now invite you to engage with us. First, the virtual passport program is an opportunity to have some fun while learning about VxRail new features and functionality, and score some sweet digital swag while you're at it, delivered via an augmented reality app. All you need is your device. So go to vxrail.is/passport to get started. And secondly, if you have any questions about anything I talked about or want a deeper conversation, we encourage you to join one of our exclusive VxRail Meet The Experts sessions, available for a limited time, first come, first served. Just go to vxrail.is/expertsession to learn more. >> All right, well, obviously, with everyone being remote, there's different ways we're looking to engage. So we've got the CrowdChat right after this. But Jon, give us a little bit more as to how Dell's making sure to stay in close contact with customers and what you've got for options for them. >> Yeah, absolutely. So as Shannon said, in lieu of having Dell Technologies World this year in person, where we could have those great in-person interactions and answer questions, whether it's in the booth or in meeting rooms, we are going to have these Meet The Experts sessions over the next couple weeks, and we're going to put our best and brightest from our technical community and make them accessible to everyone out there. So again, definitely encourage you. We're trying new things here in this virtual environment to ensure that we can still stay in touch, answer questions, be responsive, and we're really looking forward to having these conversations over the next couple of weeks. >> All right, well, Jon and Chad, thank you so much.
We definitely look forward to the conversation here and continuing it. If you're here live, definitely go down below and join in; if you're watching this on demand, you can see the full transcript of it at crowdchat.net/vxrailrocks. For myself, Shannon on the video, Jon, Chad, Andrew, our man in the booth there, thank you so much for watching, and go ahead and join the CrowdChat.
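The lifecycle-management ideas discussed above, continuously validated states plus release leapfrogging, can be sketched as a small toy model. Everything below (the release numbers, component versions, and validation matrix) is invented for illustration and is not the actual VxRail HCI system software; it only shows why validating whole bundles lets an upgrade jump straight to a target release.

```python
# Hypothetical illustration of "continuously validated states" and release
# leapfrogging. Release names and component versions are invented.

# Ordered catalog of releases; each entry maps a release to the component
# versions that were validated together as a single bundle.
VALIDATED_STATES = {
    "4.7.300": {"esxi": "6.7-u3",  "vsan": "6.7", "firmware": "1.8"},
    "4.7.410": {"esxi": "6.7-u3a", "vsan": "6.7", "firmware": "2.0"},
    "4.7.510": {"esxi": "6.7-u3b", "vsan": "6.7", "firmware": "2.1"},
    "7.0.000": {"esxi": "7.0",     "vsan": "7.0", "firmware": "2.2"},
}
RELEASE_ORDER = list(VALIDATED_STATES)

def is_validated(release, inventory):
    """True if a cluster's component inventory matches the validated bundle."""
    return VALIDATED_STATES.get(release) == inventory

def upgrade_plan(current, target):
    """Leapfrog: jump straight to the target bundle, skipping the releases
    in between, because each bundle is validated as a whole."""
    ci, ti = RELEASE_ORDER.index(current), RELEASE_ORDER.index(target)
    if ti <= ci:
        raise ValueError("target must be newer than current")
    return {"apply": target, "skipped": RELEASE_ORDER[ci + 1:ti]}

plan = upgrade_plan("4.7.300", "7.0.000")
print(plan["apply"])    # 7.0.000
print(plan["skipped"])  # ['4.7.410', '4.7.510']
```

The point of the model is that because every bundle is validated as a unit, the plan never needs to step through the releases it bypasses.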
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Richard | PERSON | 0.99+ |
Jon | PERSON | 0.99+ |
Shannon | PERSON | 0.99+ |
Andrew | PERSON | 0.99+ |
Jon Siegal | PERSON | 0.99+ |
Chad Dunn | PERSON | 0.99+ |
Chad | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
15,000 feet | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
US | LOCATION | 0.99+ |
40 G | QUANTITY | 0.99+ |
Netherlands | LOCATION | 0.99+ |
Tom Xu | PERSON | 0.99+ |
$60 million | QUANTITY | 0.99+ |
US Navy | ORGANIZATION | 0.99+ |
131 degrees Fahrenheit | QUANTITY | 0.99+ |
Baghdad | LOCATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
113 degrees Fahrenheit | QUANTITY | 0.99+ |
vSphere 7.0 | TITLE | 0.99+ |
75-person | QUANTITY | 0.99+ |
China | LOCATION | 0.99+ |
vSphere | TITLE | 0.99+ |
45 degrees Celsius | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
VxRail | TITLE | 0.99+ |
30 days | QUANTITY | 0.99+ |
Shanghai | LOCATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
second | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
eight | QUANTITY | 0.99+ |
VxRail 7.0 | TITLE | 0.99+ |
Amarillo | LOCATION | 0.99+ |
less than 14 days | QUANTITY | 0.99+ |
Delta Cloud | TITLE | 0.99+ |
late April | DATE | 0.99+ |
Delta | ORGANIZATION | 0.99+ |
20 inches | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
24 | QUANTITY | 0.99+ |
SAP HANA | TITLE | 0.99+ |
seven | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
VxRail E Series | COMMERCIAL_ITEM | 0.99+ |
each | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
a day and a half | QUANTITY | 0.98+ |
about 400 people | QUANTITY | 0.98+ |
Vicente Moranta, IBM | SUSECON Digital '20
(upbeat music) >> Narrator: From around the globe, it's theCUBE, with coverage of SUSECON Digital. Brought to you by SUSE. >> Stu: Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of SUSECON Digital '20. Welcome to the program Vincente Moranta, who is the Vice President of Offering Management of Enterprise Linux Workloads on Power. Vincente, pleasure to see you, thanks for joining us. >> Vincente: Hey Stu, and thank you for having me. >> All right, so we know that SUSE lives on a lot of platforms. We're going to talk a bit about applications specifically, primarily SAP. Give us a little bit, Vincente, about what you're working on, and the relevance to the partnership with SUSE. >> Sure, absolutely. So, for the last five years I've been responsible for offering management at IBM, focused on solutions that live on IBM Power Systems. In particular, we started with SAP HANA, and obviously SAP and SUSE, with their fantastic relationship, was a big part of that and continues to be as we have grown the platform for the last five years. >> Excellent. So, SAP of course, critical workload; we've been seeing SAP go through that transformation. So, help us understand: what work needs to be done to integrate these things and make sure that companies can run their business?
And when we started with SAP HANA, as I mentioned, the customers in the market who were doing HANA on X86 platforms were limited to certain set of capabilities, certain set of support statements, and things like that. And a big part of that was bare metal implementations which still to this day remain the most popular way to deploy HANA in an X86 environment. But when we got together with SUSE and with SAP and we started the partnership around HANA, the thing that became very clear was that customers needed flexibility. They needed to be able to adapt to changing environments, very interesting challenges that they were trying to tackle with these HANA projects. But the capabilities of the servers that they were using, were not allowing them to have that flexibility. And then, even if SUSE was trying to do certain things and give some flexibility to those clients, if the infrastructure cannot handle it, or vice verse, it really just is a one-party trick and it doesn't work. So the focus with SUSE, almost from the beginning, has been on tool innovation. And we've been able to accomplish really amazing things together with them and SAP. Things that could not have been possible without that very strong collaboration. And one of them that is very recent, is shared processor pool. Right? In a world where HANA is deployed bare-metal systems, IBM Power is always doing virtualization, and together with SUSE, we were able to come up with a solution. And with SAP, obviously. That allowed customers to share source in a virtual way across many HANA instances. So completely revolutionizing the DCO and the ROI for clients working with HANA. Without trading out any of the resiliency, any of the performance, and everything else. So, that's the balance that a lot of these customers are looking for is flexibility, and better returns, especially now more than ever. Without trading out all of the things that they need for an S/4 HANA project or an ERP or a BW project. 
>> You talked about the flexibility and the returns that customers get on this. I wonder, if you step back for a second, where is this hitting on a CIO's priority list? What has changed in today's Cloud era? A couple weeks ago, IBM Think was going on, and we heard a lot about customers and how they're going through their journey to the cloud. We know there's a lot of options there, so, for SAP solutions specifically, there's a lot of ways that we can do this. So how does a CIO figure out what the best solution is for their skill-set and the technology partner that they work with? >> Yeah, I think at a high level, what the CIOs are facing nowadays is, kind of, it's a good time to be a CIO, I think, because you get a chance to have a broad range of deployment options without having to trade out on the features. I'm sure some CIOs will disagree and will say there's plenty of other challenges that are making their lives complicated. But if we just focus on the fact that you can deploy HANA - you can deploy it in the cloud, you can deploy it in hybrid, you can deploy it on premises. And the best part, especially with our capabilities and together with SUSE, is that the CIO doesn't have to make a choice or trade out things that they'd have to lose if they pick one or the other. I think that is what helps them to feel comfortable to go into SAP and be able to adapt. If a project becomes too large, or the data transfer requirements become too complicated or too expensive, it's easy enough to bring it back and maybe leave dev/test in a cloud and move the rest of the production environment to on premises. Through a number of partnerships that we have done over the last few years, there's a number of very large MSPs and CSPs, including SAP HANA Enterprise Cloud - HEC - and very soon IBM Cloud as well, who can provide all of these capabilities that SUSE and Power allow for a HANA deployment to be done in a cloud.
So from our perspective, even though I'm a hardware guy, and some people may think I only care about the on-premises business, the reality is, when a customer, or a CIO as you were asking, is trying to make a decision, we don't want that CIO to be thinking they have to make a decision between IBM supporting them only if it's on premises or only if it's in the Cloud. We can do both. And it's not a hard trade-off to decide. You can start with one, you can go to the other one. We can have capacity for them, like we're doing with SAP HEC today, SAP HANA Enterprise Cloud. They're using Power9 technology. The customers benefit regardless of which deployment option they choose, both with SUSE underneath it. I think we're trying to make it simpler for them to make those choices without infrastructure becoming the sticking point. >> Yeah, and you talked about the support that users can get, of course, from IBM. At SUSECON, a lot of the discussion is about the community there. >> Absolutely. >> So, what can you tell us about that? You've got thousands of customers that are running SAP HANA on Power; how do you help them rally together and be part of (muted)? >> Yeah, so, you and I have known each other for a while, and I think when we started working together at a prior company, it was around communities of practice and the organizational network and social network. A big part of what we have done is just going with that same approach of just connecting people with people. Right? Connecting people from SUSE with people from IBM, with clients, and trying to foster valuable interaction between those clients. Whether it's TechU, IBM TechU Conferences, SAP TechEd, SUSECON, you name it, we're always kind of looking for ways to bring people together.
And I'll put in a plug for a client entity, a client council called the SAP Power Customer Council, which is a group of clients that decided on their own to get together and bring in other customers who are doing SAP deployments on AIX, on Linux, obviously with SUSE and HANA, and come together once a year. We also have almost monthly interlocks and workshops with them. But that is one way where the SUSE folks, IBM Power, SAP Development, all come together with a whole bunch of clients, and they're giving us feedback, but also identifying things for us to work on next. From a support perspective, as you said, we have thousands of clients nowadays, and the really fantastic thing has been very few issues, and the issues that we have had, SUSE, SAP and IBM, all three of us together, have been able to resolve to the customers' satisfaction. So it just kind of demonstrates that regardless of where something is invented (SUSE with SLES, SAP with HANA, us with our hardware and our hypervisors), when it comes to the clients, we all work very closely together for their success. >> Great. Those feedback loops are so critically important to everyone involved. I guess last thing: maybe you've got a customer example that might highlight the partnership between IBM and SUSE? >> Yeah, there's a number of them, and we have, I think, over 60 public references together with SUSE of clients who are doing SAP HANA with SUSE on Power. But a couple that come to mind: obviously Robert Bosch is a fantastic client for all of us, a fantastic partner. And they've been with us almost from the very beginning, together with SUSE and together with us. And they helped us to identify early on some things that they would like to be able to see supported, some capabilities that they expected to be able to have, especially given that Bosch had a strong knowledge of IBM technology, IBM product.
And they wanted to be able to apply some of the same capabilities around Live Partition Mobility and large-size LPARs for HANA and things like that. And they worked very closely with SUSE and with us, and with SAP, to not just give us the requirement, but really help us to identify, okay, how should this work? Right, it's not just creating the technology and adding more and more features, but how do we integrate it, how do we integrate it into Bosch, who had created a fantastic self-provisioning type of a portal for all of their clients, all of their internal entities around the world. That was really cool, and it really kind of helped us to highlight how we could integrate into the tools, monitoring, and reporting, etc., that our clients have. Another example, if I can, is Richemont. Richemont International is based in Geneva. Luxury brand. And Helga Delterad, who was the Director of IT at the time, kind of came to me and gave me a challenge. He said, "Look, I love HANA on Power. I love that we can do all of these things with it. But I really would like to be able to share processors across multiple HANA instances. That would really reduce the bill. It would really reduce the cost. And Richemont would be able to achieve a much quicker return on investment than we had anticipated." So, he gave us a challenge. The challenge went to everybody. It went to SUSE, to us and to SAP; we all got together, and again with Helga being the executive sponsor on the client side, he really kind of worked with all of us, brought us together, and it was a power-of-the-possible type of situation that is now generally available to all clients. And it's thanks to Helga, thanks to Richemont, who brought us together and gave us that challenge. >> Excellent. Well, Vicente Moranta, great to catch up with you. Thanks so much for sharing the update on IBM Power and the partnership with SUSE. >> Thanks Stu. >> All right, we'll be back with more coverage from SUSECON Digital '20.
I'm Stu Miniman and as always, thank you for watching theCUBE. (upbeat music plays)
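The shared processor pool idea discussed in the interview above can be made concrete with a toy sizing model. The instance names, core counts, and demand snapshots below are invented for illustration; real PowerVM entitlement, capping, and HANA sizing rules are considerably more involved. The sketch only shows why pooling cores across instances whose peaks do not coincide can need fewer cores than dedicating peak capacity to each.

```python
# Toy model of a shared processor pool: instead of dedicating peak-sized
# capacity to every SAP HANA instance, instances draw from a common pool
# sized for their worst observed combined demand. All numbers are invented.

def dedicated_cores(instances):
    """Bare-metal-style sizing: every instance reserves its own peak."""
    return sum(inst["peak"] for inst in instances)

def pooled_cores(samples):
    """Pool sizing: enough cores for the worst concurrent demand, where
    `samples` is a list of per-instance demand snapshots over time."""
    return max(sum(snapshot) for snapshot in samples)

hana = [{"name": "prod", "peak": 16},
        {"name": "bw",   "peak": 12},
        {"name": "dev",  "peak": 8}]
# Demand snapshots (cores) for prod, bw, dev at three points in time;
# the peaks happen at different times, which is what the pool exploits.
demand = [(16, 4, 1), (6, 12, 2), (5, 3, 8)]

print(dedicated_cores(hana))   # 36
print(pooled_cores(demand))    # 21
```

In this made-up scenario the pool needs 21 cores instead of 36, which is the kind of TCO effect the interview attributes to shared processor pools.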
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Helga Delterad | PERSON | 0.99+ |
Vincente Moranta | PERSON | 0.99+ |
Vincenta Morante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Vincente | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Geneva | LOCATION | 0.99+ |
Vicente Moranta | PERSON | 0.99+ |
HANA | TITLE | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Helga | PERSON | 0.99+ |
Robert Bosch | PERSON | 0.99+ |
SUSE | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
Bosch | ORGANIZATION | 0.99+ |
SUSE | ORGANIZATION | 0.99+ |
Richemont | ORGANIZATION | 0.99+ |
Both | QUANTITY | 0.99+ |
SAP HANA | TITLE | 0.99+ |
SLES | TITLE | 0.99+ |
SAP | ORGANIZATION | 0.98+ |
Couple weeks ago | DATE | 0.98+ |
IBM Power | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
SUSECON | ORGANIZATION | 0.98+ |
Richemont International | ORGANIZATION | 0.98+ |
SAP Power Customer Council | ORGANIZATION | 0.97+ |
Linux | TITLE | 0.97+ |
three | QUANTITY | 0.96+ |
one way | QUANTITY | 0.96+ |
SAP HANA Enterprise Cloud | TITLE | 0.96+ |
today | DATE | 0.95+ |
over 60 public references | QUANTITY | 0.95+ |
once a year | QUANTITY | 0.95+ |
AIX | TITLE | 0.94+ |
X86 | TITLE | 0.94+ |
SAP | TITLE | 0.92+ |
S/4 HANA | TITLE | 0.92+ |
Cloud | TITLE | 0.91+ |
SAP HANA Enterprise | TITLE | 0.91+ |
couple | QUANTITY | 0.88+ |
SAP HEC | TITLE | 0.88+ |
Richemont | PERSON | 0.87+ |
Live Partition Mobility | TITLE | 0.87+ |
Enterprise Linux | ORGANIZATION | 0.86+ |
DCO | TITLE | 0.86+ |
Jared Rosoff & Kit Colbert, VMware | CUBEConversation, April 2020
(upbeat music) >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We are having a very special Cube Conversation and kind of the ongoing unveil, if you will, of the new VMware vSphere 7.0. We're going to get a little bit more of a technical deep-dive here today and we're excited to have a longtime CUBE alumni. Kit Colbert here is the VP and CTO of Cloud Platform at VMware. Kit, great to see you. >> Yeah, happy to be here. And new to theCUBE, Jared Rosoff. He's a Senior Director of Product Management at VMware and I'm guessing had a whole lot to do with this build. So Jared, first off, congratulations for birthing this new release and great to have you on board. >> Thanks, feels pretty great, great to be here. >> All right, so let's just jump into it. From kind of a technical aspect, what is so different about vSphere 7? >> Yeah, great. So vSphere 7 bakes Kubernetes right into the virtualization platform. And so this means that as a developer, I can now use Kubernetes to actually provision and control workloads inside of my vSphere environment. And it means as an IT admin, I'm actually able to deliver Kubernetes and containers to my developers really easily right on top of the platform I already run. >> So I think we had kind of a sneaking suspicion that that might be coming with the acquisition of the Heptio team. So really exciting news, and I think Kit, you teased it out quite a bit at VMworld last year about really enabling customers to deploy workloads across environments, regardless of whether that's on-prem, public cloud, this public cloud, that public cloud, so this really is the realization of that vision. >> It is, yeah. So we talked at VMworld about Project Pacific, right, this technology preview. And as Jared mentioned, what that was, was how do we take Kubernetes and really build it into vSphere? As you know, we've had a hybrid cloud vision for quite a while now.
How do we proliferate vSphere to as many different locations as possible? Now part of the broader VMware Cloud Foundation portfolio. And you know, as we've gotten more and more of these instances in the cloud, on premises, at the edge, with service providers, there's a secondary question of how do we actually evolve that platform so it can support not just the existing workloads, but also modern workloads as well. >> Right. All right, so I think he brought some pictures for us, a little demo. So why don't we, >> Yeah. Why don't we jump over >> Yeah, let's dive into it. to there and let's see what it looks like? You guys can cue up the demo. >> Jared: Yeah, so we're going to start off looking at a developer actually working with the new VMware Cloud Foundation 4 and vSphere 7. So what you're seeing here is the developer's actually using Kubernetes to deploy Kubernetes. The self-eating watermelon, right? So the developer uses this Kubernetes declarative syntax where they can describe a whole Kubernetes cluster. And the whole developer experience now is driven by Kubernetes. They can use the kubectl tool and all of the ecosystem of Kubernetes APIs and tool chains to provision workloads right into vSphere. And so, that's not just provisioning workloads though, this is also key to the developer being able to explore the things they've already deployed. So go look at, hey, what's the IP address that got allocated to that? Or what's the CPU load on this workload I just deployed? On top of Kubernetes, we've integrated a Container Registry into vSphere. So here we see a developer pushing and pulling container images. And you know, one of the amazing things about this is from an infrastructure as code standpoint, now, the developer's infrastructure as well as their software is all unified in source control.
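As a rough sketch of the declarative flow Jared describes, here is what a cluster spec might look like as structured data, with a helper deriving the node count the platform would reconcile toward. The API group, kind, and field names here are hypothetical stand-ins for illustration, not the actual vSphere 7 manifest schema.

```python
# Illustrative sketch of "Kubernetes deploying Kubernetes": a developer
# writes a declarative cluster spec and submits it through a Kubernetes-style
# API. All kind/field names below are hypothetical, not the real schema.

def make_cluster_manifest(name, control_plane_count, worker_count, storage_class):
    """Build a declarative manifest describing a whole guest Kubernetes cluster."""
    return {
        "apiVersion": "example.vmware.com/v1",   # hypothetical API group
        "kind": "GuestCluster",                  # hypothetical kind
        "metadata": {"name": name, "namespace": "dev-team-a"},
        "spec": {
            "topology": {
                "controlPlane": {"count": control_plane_count},
                "workers": {"count": worker_count},
            },
            "storageClass": storage_class,
        },
    }

def desired_node_count(manifest):
    """Total nodes the platform should reconcile the cluster toward."""
    topo = manifest["spec"]["topology"]
    return topo["controlPlane"]["count"] + topo["workers"]["count"]

manifest = make_cluster_manifest("demo-cluster", 3, 5, "vsan-default")
print(desired_node_count(manifest))  # -> 8
```

Because the spec is plain data, it can be checked into source control alongside the application code, which is exactly the infrastructure-as-code point made here.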
I can check in not just my code, but also the description of the Kubernetes environment and storage and networking and all the things that are required to run that app. So now we're looking at a sort of a side-by-side view, where on the right hand side is the developer continuing to deploy some pieces of their application. And on the left hand side, we see vCenter. And what's key here is that as the developer deploys new things through Kubernetes, those are showing up right inside of the vCenter console. And so the developer and IT are seeing exactly the same things with the same names. And so this means when a developer calls, their IT department says, hey, I got a problem with my database. We don't spend the next hour trying to figure out which VM they're talking about. They got the same name, they see the same information. So what we're going to do is that, you know, we're going to push the the developer screen aside and start digging into the vSphere experience. And you know, what you'll see here is that vCenter is the vCenter you've already known and love, but what's different is that now it's much more application focused. So here we see a new screen inside of vCenter, vSphere namespaces. And so, these vSphere namespaces represent whole logical applications, like the whole distributed system now is a single object inside of vCenter. And when I click into one of these apps, this is a managed object inside of vSphere. I can click on permissions, and I can decide which developers have the permission to deploy or read the configuration of one of these namespaces. I can hook this into my Active Directory infrastructure. So I can use the same corporate credentials to access the system. I tap into all my existing storage. So this platform works with all of the existing vSphere storage providers. I can use storage policy based management to provide storage for Kubernetes. And it's hooked in with things like DRS, right? 
So I can define quotas and limits for CPU and memory, and all of that's going to be enforced by DRS inside the cluster. And again, as an admin, I'm just using vSphere. But to the developer, they're getting a whole Kubernetes experience out of this platform. Now, vSphere also now sucks in all this information from the Kubernetes environment. So besides seeing the VMs and things the developers have deployed, I can see all of the desired state specifications, all the different Kubernetes objects that the developers have created. The compute, network and storage objects, they're all integrated right inside the vCenter console. And so once again from a diagnostics and troubleshooting perspective, this data's invaluable. It often saves hours just in trying to figure out what we're even talking about when we're trying to resolve an issue. So as you can see, this is all baked right into vCenter. The vCenter experience isn't transformed a lot. We get a lot of VI admins who look at this and say, where's the Kubernetes? And they're surprised, they like, they've been managing Kubernetes all this time, it just looks like the vSphere experience they've already got. But all those Kubernetes objects, the pods and containers, Kubernetes clusters, load balancer, storage, they're all represented right there natively in the vCenter UI. And so we're able to take all of that and make it work for your existing VI admins. >> Well that's a, that's pretty wild, you know. It really builds off the vision that again, I think you kind of outlined, Kit, teased out it at VMworld which was the IT still sees vSphere, which is what they want to see, what they're used to seeing, but devs see Kubernetes. And really bringing those together in a unified environment so that, depending on what your job is, and what you're working on, that's what you're going to see and that's kind of unified environment. >> Yep. 
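The namespace quotas and limits described above can be modeled, very loosely, like this. This is an illustrative toy with invented names, not the vSphere API; the real enforcement happens inside the platform and DRS, not in application code.

```python
# Toy model of a vSphere-namespace-style quota: an admin sets CPU/memory
# limits on a namespace, and workload placement is rejected once the
# quota would be exceeded. Purely illustrative.

class Namespace:
    def __init__(self, name, cpu_limit, mem_limit_gb):
        self.name = name
        self.cpu_limit = cpu_limit
        self.mem_limit_gb = mem_limit_gb
        self.workloads = []  # list of (name, cpu, mem_gb)

    def used(self):
        """Current (cpu, mem_gb) consumption across all workloads."""
        cpu = sum(w[1] for w in self.workloads)
        mem = sum(w[2] for w in self.workloads)
        return cpu, mem

    def deploy(self, name, cpu, mem_gb):
        """Admit a workload only if it fits inside the namespace quota."""
        used_cpu, used_mem = self.used()
        if used_cpu + cpu > self.cpu_limit or used_mem + mem_gb > self.mem_limit_gb:
            raise RuntimeError(f"quota exceeded in namespace {self.name}")
        self.workloads.append((name, cpu, mem_gb))

ns = Namespace("dev-team-a", cpu_limit=16, mem_limit_gb=64)
ns.deploy("web", cpu=4, mem_gb=8)
ns.deploy("db", cpu=8, mem_gb=32)
print(ns.used())  # -> (12, 40)
```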
Yeah, as the demo showed, it is still vSphere at the center, but now there's two different experiences that you can have interacting with vSphere. The Kubernetes based one, which is of course great for developers and DevOps type folks, as well as a traditional vSphere interface, APIs, which is great for VI admins and IT operations. >> Right. And then, and really, it was interesting too. You teased out a lot. That was a good little preview if people knew what they were watching, but you talked about really cloud journey, and kind of this bifurcation of kind of classical school apps that are running in their classic VMs and then kind of the modern, you know, cloud native applications built on Kubernetes. And you outlined a really interesting thing that people often talk about the two ends of the spectrum and getting from one to the other but not really about kind of the messy middle, if you will. And this is really enabling people to pick where along that spectrum they can move their workloads or move their apps. >> Yeah, no. I think we think a lot about it like that. That we look at, we talk to customers and all of them have very clear visions on where they want to go. Their future state architecture. And that involves embracing cloud, it involves modernizing applications. And you know, as you mentioned, it's challenging for them because I think what a lot of customers see is this kind of, these two extremes. Either you're here where you are, with kind of the old current world, and you got the bright nirvana future on the far end there. And they believe that the only way to get there is to kind of make a leap from one side to the other. That you have to kind of change everything out from underneath you. And that's obviously very expensive, very time consuming and very error-prone as well. There's a lot of things that can go wrong there. 
And so I think what we're doing differently at VMware is really, to your point, is you call it the messy middle, I would say it's more like how do we offer stepping stones along that journey? Rather than making this one giant leap, we had to invest all this time and resources. How can we enable people to make smaller incremental steps each of which have a lot of business value but don't have a huge amount of cost? >> Right. And it's really enabling kind of this next gen application where there's a lot of things that are different about it but one of the fundamental things is where now the application defines the resources that it needs to operate versus the resources defining kind of the capabilities of what the application can do and that's where everybody is moving as quickly as makes sense, as you said, not all applications need to make that move but most of them should and most of them are and most of them are at least making that journey. So you see that? >> Yeah, definitely. I mean, I think that certainly this is one of the big evolutions we're making in vSphere from looking historically at how we managed infrastructure, one of the things we enable in vSphere 7 is how we manage applications, right? So a lot of the things you would do in infrastructure management of setting up security rules or encryption settings or you know, your resource allocation, you would do this in terms of your physical and virtual infrastructure. You talk about it in terms of this VM is going to be encrypted or this VM is going to have this Firewall rule. And what we do in vSphere 7 is elevate all of that to application centric management. So you actually look at an application and say I want this application to be constrained to this much CPU. Or I want this application to have these security rules on it. And so that shifts the focus of management really up to the application level. >> Jeff: Right. 
>> Yeah, and like, I would kind of even zoom back a little bit there and say, you know, if you look back, one thing we did with something like VSAN, before that, people had to put policies on a LUN, you know, an actual storage LUN and a storage array. And then by virtue of a workload being placed on that array, it inherited certain policies, right? And so VSAN really turned that around and allows you to put the policy on the VM. But what Jared's talking about now is that for a modern workload, a modern workload's not a single VM, it's a collection of different things. We got some containers in there, some VMs, probably distributed, maybe even some on-prem, some in the cloud, and so how do you start managing that more holistically? And this notion of really having an application as a first-class entity that you can now manage inside of vSphere, it's a really powerful and very simplifying one. >> Right. And why this is important is because it's this application centric point of view which enables the digital transformation that people are talking about all the time. That's a nice big word, but the rubber hits the road is how do you execute and deliver applications, and more importantly, how do you continue to evolve them and change them based on either customer demands or competitive demands or just changes in the marketplace? >> Yeah, well you look at something like a modern app that maybe has a hundred VMs that are part of it and you take something like compliance, right? So today, if I want to check if this app is compliant, I got to go look at every individual VM and make sure it's locked down, and hardened, and secured the right way. But now instead, what I can do is I can just look at that one application object inside of vCenter, set the right security settings on that, and I can be assured that all the different objects inside of it are going to inherit that stuff. So it really simplifies that. 
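The compliance example above, where one policy set on the application object is inherited by every member VM, can be sketched as a tiny model. The names and structure are invented for illustration; this is not the vCenter data model.

```python
# Toy model of application-centric policy: set a policy once on the
# application object and every member VM/container inherits it, instead
# of configuring a hundred VMs one by one. Illustrative only.

class Application:
    def __init__(self, name):
        self.name = name
        self.policy = {}    # e.g. {"encrypted": True, "firewall": "strict"}
        self.members = []   # VM / container names belonging to this app

    def set_policy(self, **settings):
        """One policy change at the application level covers every member."""
        self.policy.update(settings)

    def effective_policy(self, member):
        """Every member inherits the application-level policy."""
        assert member in self.members
        return dict(self.policy)

app = Application("storefront")
app.members = [f"vm-{i}" for i in range(100)]   # a hundred-VM app
app.set_policy(encrypted=True, firewall="strict")
print(app.effective_policy("vm-42"))  # -> {'encrypted': True, 'firewall': 'strict'}
```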
It also makes it so that that admin can handle much larger applications. You know, if you think about vCenter today you might log in and see a thousand VMs in your inventory. When you log in with vSphere 7, what you see is a few dozen applications. So a single admin can manage a much larger pool of infrastructure, many more applications than they could before because we automate so much of that operation. >> And it's not just the scale part, which is obviously really important, but it's also the rate of change. And this notion of how do we enable developers to get what they want to get done, done, i.e., building applications, while at the same time enabling the IT operations teams to put the right sort of guardrails in place around compliance and security, performance concerns, these sorts of elements. And so by being able to have the IT operations team really manage that logical application at that more abstract level and then have the developer be able to push in new containers or new VMs or whatever they need inside of that abstraction, it actually allows those two teams to work actually together and work together better. They're not stepping over each other but in fact now, they can both get what they need to get done, done, and do so as quickly as possible but while also being safe and in compliance and so forth. >> Right. So there's a lot more to this. This is a very significant release, right? Again, lot of foreshadowing if you go out and read the tea leaves, it's a pretty significant, you know, kind of re-architecture of many parts of vSphere. So beyond the Kubernetes, you know, kind of what are some of the other things that are coming out in this very significant release? >> Yeah, that's a great question because we tend to talk a lot about Kubernetes, what was Project Pacific but is now just part of vSphere, and certainly that is a very large aspect of it but to your point, vSphere 7 is a massive release with all sorts of other features. 
And so instead of a demo here, let's pull up some slides and we'll take a look at what's there. So outside of Kubernetes, there's kind of three main categories that we think about when we look at vSphere 7. So the first one is simplified lifecycle management. And then really focus on security is the second one, and then applications as well, but both including the cloud native apps that couldn't fit in the Kubernetes bucket as well as others. And so we go on the first one, the first column there, there's a ton of stuff that we're doing around simplifying lifecycle. So let's go to the next slide here where we can dive in a little bit more to the specifics. So we have this new technology, vSphere life cycle management, vLCM, and the idea here is how do we dramatically simplify upgrades, life cycle management of the ESX clusters and ESX hosts? How do we make them more declarative with a single image that you can now specify for an entire cluster. We find that a lot of our vSphere admins, especially at larger scales, have a really tough time doing this. There's a lot of in and outs today, it's somewhat tricky to do. And so we want to make it really really simple and really easy to automate as well. >> Right. So if you're doing Kubernetes on Kubernetes, I suppose you're going to have automation on automation, right? Because upgrading to the seven is probably not an inconsequential task. >> And yeah, and going forward and allowing, you know, as we start moving to deliver a lot of this great vSphere functionality at a more rapid clip, how do we enable our customers to take advantage of all those great things we're putting out there as well? >> Right. Next big thing you talk about is security. >> Yep. >> And we just got back from RSA, thank goodness we got that show in before all the madness started. >> Yep. >> But everyone always talked about security's got to be baked in from the bottom to the top. So talk about kind of the changes in the security. 
>> So, we've done a lot of things around security. Things around identity federation, things around simplifying certificate management, you know, dramatic simplifications there across the board. One I want to focus on here on the next slide is actually what we call vSphere Trust Authority. And so with that one, what we're looking at here is how do we reduce the potential attack surfaces and really ensure there's a trusted computing base? When we talk to customers, what we find is that they're nervous about a lot of different threats including even internal ones, right? How do they know all the folks that work for them can be fully trusted? And obviously if you're hiring someone, you somewhat trust them but you know, how do you implement the concept of least privilege? Right? >> Right. >> Jeff: Or zero trust, right, is a very hot topic in security. >> Yeah, exactly. >> So the idea with trust authority is that we can specify a small number of physical ESX hosts that you can really lock down and ensure are fully secure. Those can be managed by a special vCenter server which is in turn very locked down, only a few people have access to it. And then those hosts and that vCenter can then manage other hosts that are untrusted and can use attestation to actually prove that okay, these untrusted hosts haven't been modified, we know they're okay, so they're okay to actually run workloads on, they're okay to put data on, and that sort of thing. So it's this kind of like building block approach to ensure that businesses can have a very small trust base off of which they can build to include their entire vSphere environment. >> Right. And then the third kind of leg of the stool is, you know, just better leveraging, you know, kind of a more complex asset ecosystem, if you will, with things like FPGAs and GPUs and you know,
kind of all of the various components that power these different applications, from which now the application can draw the appropriate resources as needed, so you've done a lot of work there as well. >> Yeah, there's a ton of innovation happening in the hardware space. As you mentioned, all sorts of accelerators coming out. We all know about GPUs, and obviously what they can do for machine learning and AI type use cases, not to mention 3-D rendering. But you know, FPGAs and all sorts of other things coming down the pike as well there. And so what we found is that as customers try to roll these out, they have a lot of the same problems that we saw in the very early days of virtualization. I.e., silos of specialized hardware that different teams were using. And you know, what you find is all the things we found before. You find very low utilization rates, inability to automate that, inability to manage that well, put in security and compliance and so forth. And so this is really the reality that we see at most customers. And it's funny because you think, well wow, shouldn't we be past this? As an industry, shouldn't we have solved this already? You know, we did this with virtualization. But as it turns out, the virtualization we did was for compute, and then storage and network, but now we really need to virtualize all these accelerators. And so that's where this Bitfusion technology that we're including now with vSphere really comes to the forefront. So if you see in the current slide we're showing here, the challenge is just these separate pools of infrastructure, how do you manage all that? And so if we go to the next slide, what we see is that with Bitfusion, you can do the same thing that we saw with compute virtualization. You can now pool all these different silos of infrastructure together so they become one big pool of GPU infrastructure that anyone in an organization can use. We can, you know, have multiple people sharing a GPU.
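The pooling idea just described, many accelerators behaving as one shared pool that clients can claim slices of, can be sketched like this. This is a toy allocator for illustration only; Bitfusion's actual mechanism works quite differently and is far more involved.

```python
# Toy sketch of GPU pooling: accelerators from many hosts form one pool,
# and clients claim fractional shares dynamically instead of owning
# dedicated, underutilized silos. Illustrative only.

class GpuPool:
    def __init__(self, gpu_count):
        self.capacity = float(gpu_count)   # total GPUs in the shared pool
        self.allocations = {}              # client -> fraction of pool held

    def available(self):
        return self.capacity - sum(self.allocations.values())

    def claim(self, client, fraction):
        """Claim a fractional GPU share (e.g. 0.5 = half a GPU)."""
        if fraction > self.available():
            raise RuntimeError("pool exhausted")
        self.allocations[client] = self.allocations.get(client, 0.0) + fraction

    def release(self, client):
        """Return a client's share to the pool for others to use."""
        self.allocations.pop(client, None)

pool = GpuPool(gpu_count=4)
pool.claim("ml-train", 2.0)    # a training job takes two full GPUs
pool.claim("notebook", 0.5)    # a notebook shares half a GPU
print(pool.available())  # -> 1.5
```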
We can do it very dynamically. And the great part of it is that it's really easy for these folks to use. They don't even need to think about it. In fact, it integrates seamlessly with their existing workflows. >> So it's pretty interesting 'cause the classifications of the assets now are much larger, much varied, and much more workload specific, right? That's really the opportunity slash challenge that you guys are addressing. >> They are. >> A lot more diverse, yep. And so like, you know, a couple other things just, now, I don't have a slide on it, but just things we're doing to our base capabilities. Things around DRS and vMotion. Really massive evolutions there as well to support a lot of these bigger workloads, right? So you look at some of the massive SAP HANA or Oracle databases. And how do we ensure that vMotion can scale to handle those without impacting their performance or anything else there. Making DRS smarter about how it does load balancing and so forth. >> Jeff: Right. >> So a lot of the stuff is not just kind of brand new, cool new accelerator stuff, but it's also how do we ensure the core apps people have already been running for many years, we continue to keep up with the innovation and scale there as well. >> Right. All right, so Jared, I give you the last word. You've been working on this for a while, there's a whole bunch of admins that have to sit and punch keys. What do you tell them, what should they be excited about, what are you excited for them in this new release? >> I think what I'm excited about is how, you know, IT can really be an enabler of the transformation of modern apps, right? I think today you look at a lot of these organizations and what ends up happening is the app team ends up sort of building their own infrastructure on top of IT's infrastructure, right? And so now I think we can shift that story around.
I think that there's, you know, there's an interesting conversation that a lot of IT departments and app dev teams are going to be having over the next couple years about how do we really offload some of these infrastructure tasks from the dev team, make you more productive, give you better performance, availability, disaster recovery, and these kinds of capabilities. >> Awesome. Well, Jared, congratulations, again, to both of you, for getting the release out. I'm sure it was a heavy lift and it's always good to get it out in the world and let people play with it, and thanks for sharing a little bit more of a technical deep-dive. I'm sure there's a ton more resources for people that even want to go down into the weeds. So thanks for stopping by. >> Thank you. >> Thank you. >> All right, he's Jared, he's Kit, I'm Jeff. You're watching theCUBE. We're in the Palo Alto studios. Thanks for watching and we'll see you next time. (upbeat music)
VMware VOD (embargo until 4/2)
(bright upbeat music) >> Hello and welcome to the Palo Alto Studios, theCUBE. I'm John Furrier, we're here for a special Cube Conversation and special report, big news from VMware to discuss the launch of the availability of vSphere 7. I'm here with Krish Prasad, SVP and General Manager of the vSphere Business and Cloud Platform Business Unit. And Paul Turner, VP of vSphere Product Management. Guys, thanks for coming in and talking about the big news. >> Thank you for having us. >> You guys announced some interesting things back in March around containers, Kubernetes and vSphere. Krish, tell us about the hard news, what's being announced? >> Today we are announcing the general availability of vSphere 7. John, it's by far the biggest release that we have done in the last 10 years. We premiered it as Project Pacific a few months ago. With this release, we are putting Kubernetes native support into the vSphere platform. What that allows us to do is give customers the ability to run both modern applications based on Kubernetes and containers, as well as traditional VM based applications on the same platform. And it also allows the IT departments to provide their developers a cloud operating model using the VMware Cloud Foundation that is powered by this release. This is a key part of our (murmurs) portfolio solutions and products that we announced this year. And it is targeted fully at the developers of modern applications. >> And the specific news is vSphere... >> Seven is generally available. >> Generally available, vSphere 7? >> Yes. >> So on the trend line here, the relevance is what? What's the big trend line that this is riding? Obviously we saw the announcements at VMworld last year, and throughout the year there's been a lot of buzz. Pat Gelsinger says, "There's a big wave here with Kubernetes." What does this announcement mean for you guys with the marketplace trend?
Yes, what Kubernetes is really about is people trying to have an agile operation; they're trying to modernize their IT applications. And the best way to do that is to build off your current platform, expand it, and make it an innovative, agile platform for you to run Kubernetes applications and VM applications together. And not just that, customers are also looking at being able to manage a hybrid cloud environment, both on-prem and public cloud together. So they want to be able to evolve and modernize their application stack, but also modernize their infrastructure stack, which means hybrid cloud operations with innovative applications, Kubernetes or container-based applications, and VMs. >> What's exciting about this trend, Krish, we were talking about this at VMworld last year, we had many conversations around cloud native, but you're seeing cloud native becoming the operating model for modern business. I mean, this is really the move to the cloud. If you look at the successful enterprises, leaving the suppliers, the on-premises piece, if not moved to the cloud native marketplace technologies, the on-premise isn't effective. So it's not so much on-premises going away, we know it's not, but it's turning into cloud native. This is the move to the cloud generally, this is a big wave. >> Yeah, absolutely. I mean, John, if you think about it, on-premise we have significant market share, we are by far the leader in the market. And so what we are trying to do with this is to allow customers to use the current platform they are using, but bring their modern application development on top of the same platform. Today, customers tend to set up stacks which are different, so you have a Kubernetes stack, you have a stack for the traditional applications, you have operators and administrators who are specialized in Kubernetes on one side, and you have the traditional VM operators on the other side.
With this move, what we are saying is that you can be on the same common platform, you can have the same administrators who are used to administering the environment that you already had, and at the same time, offer the developers what they like, which is Kubernetes dial-tone, so that they can come and deploy their applications on the same platform that you use for traditional applications. >> Yeah, Paul, Pat said Kubernetes can be the dial-tone of the internet. Most millennials might not even know what dial-tone is. But what he meant is that's the key fabric that's going to orchestrate. And we've heard over the years skill gap, skill gap, not a lot of skills out there. But when you look at the reality of skills gap, it's really about skills gaps and shortages, not enough people. Most CIOs and chief information security officers that we talk to say, I don't want to fork my development teams, I don't want to have three separate teams, I don't have to, I want to have automation, I want an operating model that's not going to be fragmented. This kind of speaks to this whole idea of interoperability and multi cloud. This seems to be the next big wave behind hybrid. >> I think it is the next big wave, the thing that customers are looking for is a cloud operating model. They like the ability for developers to be able to invoke new services on demand in a very agile way. And we want to bring that cloud operating model to on-prem, to Google Cloud, to Amazon Cloud, to Microsoft Cloud, to any of our VCPP partners. You get the same cloud operating experience. And it's all driven by a Kubernetes-based dial-tone that's effective and available within this platform. So by bringing a single infrastructure platform that can run in this hybrid manner, and give you the cloud operating agility the developers are looking for, that's what's key in version seven. >> What does Pat Gelsinger mean when he says Kubernetes is the dial-tone of the internet? Does he mean always on?
Or what does he mean specifically? Just that it's always available? What's the meaning behind that phrase? >> The first thing he means is that developers can come to the infrastructure, which is the VMware Cloud Foundation, and be able to work with a set of APIs that are Kubernetes APIs. So developers understand that; they are looking for that. They understand that dial-tone, right? And you come to our VMware Cloud Foundation that runs across all these clouds, and you get the same API set that you can use to deploy that application. >> Okay, so let's get into the value here of vSphere 7. How does VMware and vSphere 7 specifically help customers? Is it just bolting Kubernetes onto vSphere? Some will say that's simple, or (murmurs). You're running product management; no, it's not that easy. Some people say, "He is bolting Kubernetes onto vSphere." >> It's not that easy. So, one of the things: if anybody has actually tried deploying Kubernetes, first, it's highly complicated. And so definitely one of the things that we're bringing is, you call it a bolt-on, but it's certainly not like that; we are making it incredibly simple. You talked about IT operational shortages. Customers want to be able to deploy Kubernetes environments in a very simple way. The easiest way that you can do that is take your existing environment, which runs 90% of IT, and just turn on the Kubernetes dial-tone, and it is as simple as that. Now, it's much more than that. In version seven, as well, we're bringing in a couple of things that are very important. You also have to be able to manage at scale, just like you would in the cloud; you want to be able to have infrastructure almost self-manage, and upgrade and lifecycle-manage itself. And so we're bringing in a new way of managing infrastructure so that you can manage large-scale environments, both on-premises and public cloud, at scale. And then, associated with that as well, you must make it secure.
So there's a lot of enhancements we're building into the platform around what we call intrinsic security, which is: how can we actually build a truly trusted platform for your developers and IT? >> I was just going to touch on your point about the shortage of IT staff, and how we are addressing that here. The way we are addressing it is that the IT administrators who are used to administering vSphere can continue to administer this enhanced platform with Kubernetes the same way they administered the older releases, so they don't have to learn anything new. They are just working the same way. We are not changing any tools, processes, or technologies. >> So same as it was before? >> Same as before. >> More capability. >> More capability. And developers can come in and they see new capabilities around Kubernetes. So it's the best of both worlds. >> And what was the pain point that you guys are solving? Obviously the ease of use is critical, and operationally, I get that. As you look at cloud native developer cycles, infrastructure as code means app developers, on the other side, taking advantage of it. What's the real pain point that you guys are solving with vSphere 7? >> So I think it's multiple factors. First, we've talked about agility a few times; DevOps is a real trend inside IT organizations. They need to be able to build and deliver applications much quicker; they need to be able to respond to the business. And to do that, what they need is infrastructure that is on demand. So what we're really doing in the core Kubernetes enablement is allowing that on-demand fulfillment of infrastructure, so you get that agility that you need. But it's not just tied to modern applications. It's also all of your existing business applications and your monitoring applications on one platform, which means that you've got a very simple and low-cost way of managing large-scale IT infrastructure.
So that's a huge piece as well. And then I do want to emphasize a couple of other things. We're also bringing in new capabilities for AI and ML applications, and for SAP HANA databases, where we can actually scale to some of the largest business applications out there. And you have all of the capabilities, like the GPU awareness and FPGA awareness that we built into the platform, so that you can truly run this as the fastest accelerated platform for your most extreme applications. So you've got the ability to run those applications, as well as your Kubernetes and container-based applications. >> That's the accelerated application innovation piece of the announcement, right? >> That's right, yeah. It's quite powerful that we've actually brought basically new hardware awareness into the product and exposed that to your developers, whether that's through containers or through VMs. >> Krish, I want to get your thoughts on the ecosystem and then the community, but I want to just dig into one feature you mentioned. I get the lifecycle improvement, I get the application acceleration innovation, but the intrinsic security is interesting. Could you take a minute to explain what that is? >> Yeah, so there are a few different aspects. One is looking at how we can actually provide a trusted environment. And that means that you need to have a way of key management where even your administrator is not able to get the keys to the kingdom, as we would call it. You want to have a controlled environment; some of the worst security challenges inside some of these companies have come from their internal IT staff. So you've got to have a way that you can run a trusted environment independently. We've got vSphere Trust Authority, which we released in version seven, that actually gives you a secure environment for managing your keys to the kingdom, effectively your certificates. So you've got this continuous runtime.
Now, not only that, we've actually gone and taken our Carbon Black features, and we're building in full support for Carbon Black into the platform, so that you've got native security of even your application ecosystem. >> Yeah, that's been coming up a lot in conversations, the Carbon Black and the security piece. Krish, obviously vSphere everywhere, having that operating model makes a lot of sense, but you have a lot of touch points: you've got cloud, hyperscalers, you've got the edge, you've got partners. >> We have that dominant market share on private cloud. We are on Amazon, as you know, Azure, Google, IBM Cloud, Oracle Cloud. So on all the major clouds, there is a vSphere stack running. So it allows customers, if you think about it, to have the same operating model irrespective of where their workload is residing. They can set policies, components, security; they set it once, and it applies to all their environments across this hybrid cloud, and it's all supported by our VMware Cloud Foundation, which is powered by vSphere 7. >> Yeah, I think having the cloud API-based, having connection points, and having that reliable, easy-to-use operating model is critical. Alright guys, so let's summarize the announcement. What should people take away from vSphere 7? What is the bottom line? What does it really mean? (Paul laughs) >> If we look at it for developers, we are democratizing Kubernetes. 90% of IT environments out there are already running vSphere. We are bringing to every one of those vSphere environments, and to all of the virtual infrastructure administrators, the ability to manage Kubernetes environments; you can manage it by simply upgrading your environment. That's a really nice position, rather than having independent kinds of environments you need to manage. So I think that is one of the key things that's in here.
The other thing, though: I don't think there's any other platform out there, other than vSphere, that can run in your data center, in Google's, in Amazon's, in Microsoft's, and in thousands of VCPP partners. You have one hybrid platform that you can run with. And that's got operational benefits, that's got efficiency benefits, that's got agility benefits. >> Great. >> Yeah, I would just add to that and say that, look, we want to meet customers where they are in their journey. And we want to enable them to make business decisions without technology getting in the way. And I think the announcement that we made today, with vSphere 7, is going to help them accelerate their digital transformation journey without making trade-offs on people, process, and technology. And there is more to come. Look, we are laser-focused on making our platform the best in the industry for running all kinds of applications, and the best platform for hybrid and multi cloud. And so you will see more capabilities coming in the future. Stay tuned. >> Well, one final question on this news announcement, which is awesome. vSphere is a core product for you guys. If I'm the customer, tell me why it's going to be important five years from now. >> Because of what I just said: it is the only platform that is going to be running across all the public clouds, which will allow you to have an operational model that is consistent across the clouds. So think about it. If you go Amazon native, and then you have a workload in Azure, you're going to have different tools, different processes, different people trained to work with those clouds. But when you come to VMware and you use our Cloud Foundation, you have one operating model across all these environments, and that's going to be game-changing. >> Great stuff, great stuff. Thanks for unpacking that for us. Congratulations on the announcement. >> Thank you. >> This has been a vSphere 7 news special report, inside theCUBE Conversation. I'm John Furrier. Thanks for watching.
(upbeat music) >> Hey, welcome back everybody, Jeff Frick here with theCube. We are having a very special Cube Conversation and kind of the ongoing unveil, if you will, of the new VMware vSphere 7.0. We're going to get a little bit more of a technical deep dive here today, and we're excited to have longtime Cube alumni Kit Colbert here; he is the VP and CTO of Cloud Platform at VMware. Kit, great to see you. And new to theCube, Jared Rosoff. He's a Senior Director of Product Management at VMware, and I'm guessing had a whole lot to do with this build. So Jared, first off, congratulations for birthing this new release. And great to have you on board. >> Feels pretty good, great to be here. >> All right, so let's just jump into it. From kind of a technical aspect, what is so different about vSphere 7? >> Yeah, great. So vSphere 7 bakes Kubernetes right into the virtualization platform. And so this means that as a developer, I can now use Kubernetes to actually provision and control workloads inside of my vSphere environment. And it means as an IT admin, I'm actually able to deliver Kubernetes and containers to my developers really easily, right on top of the platform I already run. >> So I think we had kind of a sneaking suspicion that might be coming with the acquisition of the Heptio team. So really exciting news. And I think, Kit, you teased it out quite a bit at VMworld last year, about really enabling customers to deploy workloads across environments, regardless of whether that's on-prem, public cloud, this public cloud, that public cloud. So this really is the realization of that vision. >> It is, yeah. So we talked at VMworld about Project Pacific, this technology preview, and as Jared mentioned, what that was is how do we take Kubernetes and really build it into vSphere. As you know, we've had a hybrid cloud vision for quite a while now: how do we proliferate vSphere to as many different locations as possible, now part of the broader VMware Cloud Foundation portfolio.
And as we've gotten more and more of these instances in the cloud, on-premises, at the edge, with service providers, there's a secondary question: how do we actually evolve that platform so it can support not just the existing workloads, but also modern workloads as well? >> All right. So I think you brought some pictures for us, a little demo. So why don't we (murmurs) and let's see what it looks like. Can you guys cue up the demo? >> Narrator: So we're going to start off looking at a developer actually working with the new VMware Cloud Foundation 4 and vSphere 7. So what you're seeing here is a developer actually using Kubernetes to deploy Kubernetes. (all laughing) So the developer uses this Kubernetes declarative syntax, where they can describe a whole Kubernetes cluster. And the whole developer experience now is driven by Kubernetes. They can use the kubectl tool and all of the ecosystem of Kubernetes APIs and tool chains to provision workloads right into vSphere. And that's not just provisioning workloads, though. This is also key to the developer being able to explore the things they've already deployed: so go look at, hey, what's the IP address that got allocated to that? Or what's the CPU load on this workload I just deployed? On top of Kubernetes, we've integrated a container registry into vSphere. So here we see a developer pushing and pulling container images. And one of the amazing things about this, from an infrastructure-as-code standpoint, is that the developer's infrastructure as well as their software is all unified in source control. I can check in not just my code, but also the description of the Kubernetes environment and storage and networking and all the things that are required to run that app. So now we're looking at sort of a side-by-side view, where on the right-hand side is the developer continuing to deploy some pieces of their application, and on the left-hand side, we see vCenter.
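The declarative flow the demo narrates, where developers describe desired state and the platform converges actual state toward it, is the core Kubernetes idea. A minimal sketch of that reconciliation pattern, assuming nothing about VMware's or Kubernetes' actual internals (all names and specs here are invented for illustration):

```python
# Illustrative only: a toy reconciler comparing a declared desired state
# (as it might live in source control) against live state, and computing
# the actions needed to converge. Not VMware or Kubernetes code.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to move `actual` to match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")
        elif actual[name] != spec:
            actions.append(f"update {name}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Desired state, checked into source control alongside the app code.
desired = {
    "web": {"replicas": 3, "image": "registry.local/web:1.2"},
    "db":  {"replicas": 1, "image": "registry.local/db:5.7"},
}
# Actual state, as reported back by the platform.
actual = {
    "web":   {"replicas": 2, "image": "registry.local/web:1.2"},
    "cache": {"replicas": 1, "image": "registry.local/redis:6"},
}

print(reconcile(desired, actual))
```

Running the loop repeatedly until it returns an empty action list is, in spirit, what the demo means by "the whole developer experience is driven by Kubernetes."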
And what's key here is that as the developer deploys new things through Kubernetes, those are showing up right inside of the vCenter console. And so the developer and IT are seeing exactly the same things, the same names, and so this means when a developer calls their IT department and says, "Hey, I got a problem with my database," we don't spend the next hour trying to figure out which VM they're talking about. They've got the same name; they see the same information. So what we're going to do now is push the developer screen aside and start digging into the vSphere experience. And what you'll see here is that vCenter is the vCenter you already know and love, but what's different is that now it's much more application focused. So here we see a new screen inside of vCenter: vSphere Namespaces. And these vSphere namespaces represent whole logical applications, like the whole distributed system, now as a single object inside of vCenter. And when I click into one of these apps, this is a managed object inside of vSphere. I can click on permissions, and I can decide which developers have the permission to deploy or read the configuration of one of these namespaces. I can hook this into my Active Directory infrastructure, so I can use the same corporate credentials to access the system. I tap into all my existing storage: this platform works with all of the existing vSphere storage providers. I can use storage policy-based management to provide storage for Kubernetes. And it's hooked in with things like DRS, right? So I can define quotas and limits for CPU and memory, and all of that's going to be enforced by DRS inside the cluster. And again, as an admin, I'm just using vSphere, but to the developer, they're getting a whole Kubernetes experience out of this platform. Now, vSphere also sucks in all this information from the Kubernetes environment.
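The namespace model described above, one logical application object carrying permissions and CPU/memory quotas for everything deployed inside it, can be sketched roughly like this. This is purely illustrative Python; the class and method names are invented and are not the vSphere API:

```python
# A toy "namespace" that enforces who may deploy and how much capacity
# the application as a whole may consume. Hypothetical names throughout.

class Namespace:
    def __init__(self, name, cpu_limit, mem_limit_gb, editors):
        self.name = name
        self.cpu_limit = cpu_limit          # total vCPUs allowed in this namespace
        self.mem_limit_gb = mem_limit_gb    # total memory allowed
        self.editors = set(editors)         # users permitted to deploy here
        self.workloads = []                 # (workload name, cpu, mem_gb)

    def deploy(self, user, workload, cpu, mem_gb):
        # Permission check: only designated editors may deploy.
        if user not in self.editors:
            raise PermissionError(f"{user} cannot deploy into {self.name}")
        # Quota check: the whole application stays within its limits.
        used_cpu = sum(w[1] for w in self.workloads)
        used_mem = sum(w[2] for w in self.workloads)
        if used_cpu + cpu > self.cpu_limit or used_mem + mem_gb > self.mem_limit_gb:
            raise RuntimeError(f"quota exceeded in {self.name}")
        self.workloads.append((workload, cpu, mem_gb))

ns = Namespace("orders-app", cpu_limit=8, mem_limit_gb=32, editors={"alice"})
ns.deploy("alice", "orders-db", cpu=4, mem_gb=16)  # fits within quota
```

The point of the sketch is the shape of the model: permissions and quotas live once, on the application-level object, rather than on each VM or pod individually.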
So besides seeing the VMs and things that developers have deployed, I can see all of the desired state specifications, all the different Kubernetes objects that the developers have created; the compute, network, and storage objects are all integrated right inside the vCenter console. And so once again, from a diagnostics and troubleshooting perspective, this data is invaluable. It often saves hours just trying to figure out what we're even talking about, let alone trying to resolve an issue. So, as you can see, this is all baked right into vCenter. The vCenter experience isn't transformed a lot; we get a lot of VI admins who look at this and say, "Where's the Kubernetes?" And they're surprised. They've been managing Kubernetes all this time, and it just looks like the vSphere experience they've already got. But all those Kubernetes objects, the pods and containers, Kubernetes clusters, load balancers, storage, they're all represented right there natively in the vCenter UI. And so we're able to take all of that and make it work for your existing VI admins. >> Well, it's pretty wild. It really builds off the vision that, again, I think Kit teased out at VMworld, which was: IT still sees vSphere, which is what they want to see, what they're used to seeing, but (murmurs) see Kubernetes, and really bringing those together in a unified environment. So that, depending on what your job is and what you're working on, that's what you're going to see in this kind of unified environment. >> Yeah, as the demo showed, (clears throat) it is still vSphere at the center, but now there are two different experiences that you can have interacting with vSphere: the Kubernetes-based one, which is of course great for developers and DevOps type folks, as well as the traditional vSphere interface and APIs, which are great for VI admins and IT operations. >> And then it's really interesting too, you teased that out a lot.
That was a good little preview for people who knew what they were watching. But you talked about really the cloud journey, and kind of this bifurcation of classic old-school apps that are running in their classic VMs, and then the modern, cloud native applications built on Kubernetes. And you outlined a really interesting thing: people often talk about the two ends of the spectrum, and getting from one to the other, but not really about kind of the messy middle, if you will. And this is really enabling people to pick where along that spectrum they can move their workloads or move their apps. >> Yeah, I think we think a lot about it like that. We talk to customers, and all of them have very clear visions on where they want to go, their future state architecture. And that involves embracing cloud and modernizing applications. And as you mentioned, it's challenging for them, because I think what a lot of customers see is these two extremes: either you're here where you are, in kind of the old current world, or you've got the bright Nirvana future on the far end there. And they believe that the only way to get there is to make a leap from one side to the other, that they have to kind of change everything out from underneath them. And that's obviously very expensive, very time-consuming, and very error-prone as well; there are a lot of things that can go wrong there. And so I think what we're doing differently at VMware is really, to your point, as you call it, the messy middle: how do we offer stepping stones along that journey? Rather than making this one giant leap where you have to invest all this time and resources, how can we enable people to make smaller incremental steps, each of which has a lot of business value, but doesn't have a huge amount of cost? >> And it's really enabling kind of this next-gen application, where there are a lot of things that are different about it.
But one of the fundamental things is that now the application defines the resources that it needs to operate, versus the resources defining the capabilities of what the application can do. And that's where everybody is moving, as quickly as makes sense. As you said, not all applications need to make that move, but most of them should, and most of them are at least making that journey. Do you see that? >> Yeah, definitely. I mean, I think that's certainly one of the big evolutions we're making in vSphere. From looking historically at how we managed infrastructure, one of the things we enable in vSphere 7 is how we manage applications. So a lot of the things you would do in infrastructure management, setting up security rules or encryption settings or resource allocation, you would do in terms of your physical and virtual infrastructure. You'd talk about it in terms of, this VM is going to be encrypted, or this VM is going to have this firewall rule. And what we do in vSphere 7 is elevate all of that to application-centric management. So you actually look at an application and say, I want this application to be constrained to this much CPU, or I want this application to have these security rules on it. And so that shifts the focus of management really up to the application level. >> And I can even zoom back a little bit there and say, if you look back, one thing we did was something like vSAN. Before that, people had to put policies on a LUN, an actual storage LUN, and a storage array, and then by virtue of a workload being placed on that array, it inherited certain policies. And so vSAN turned that around: it allows you to put the policy on the VM. But what Jared is talking about now is that a modern workload is not a single VM; it's a collection of different things. You've got some containers in there, some VMs, probably distributed, maybe even some on-prem, some in the cloud.
And so how do you start managing that more holistically? And this notion of really having an application as a first-class entity that you can now manage inside of vSphere is really powerful, and a great simplification. >> And why this is important is because it's this application-centric point of view which enables the digital transformation that people are talking about all the time. That's a nice big word, but where the rubber hits the road is how do you execute and deliver applications, and more importantly, how do you continue to evolve them and change them, based on either customer demands or competitive demands, or just changes in the marketplace? >> Yeah, when you look at something like a modern app that maybe has 100 VMs that are part of it, and you take something like compliance: today, if I want to check if this app is compliant, I've got to go look at every individual VM and make sure it's locked down, hardened, and secured the right way. But now, instead, what I can do is look at that one application object inside of vCenter, set the right security settings on that, and I can be assured that all the different objects inside of it are going to inherit that stuff. So it really simplifies that. It also makes it so that that admin can handle much larger applications. If you think about vCenter today, you might log in and see 1,000 VMs in your inventory. When you log in with vSphere 7, what you see is a few dozen applications. So a single admin can manage a much larger pool of infrastructure, many more applications than they could before, because we automate so much of that operation. >> And it's not just the scale part, which is obviously really important, but it's also the rate of change. And this notion of how do we enable developers to get what they want to get done, done, i.e.
building applications, while at the same time enabling the IT operations teams to put the right sort of guardrails in place around compliance, security, and performance concerns, these sorts of elements. And so, by being able to have the IT operations team really manage that logical application at that more abstract level, and then have the developer be able to push in new containers or new VMs or whatever they need inside of that abstraction, it actually allows those two teams to work together better. They're not stepping over each other. In fact, now they can both get what they need to get done, done, and do so as quickly as possible, while also being safe and in compliance, and so forth. >> So there's a lot more to this; this is a very significant release, right? Again, a lot of foreshadowing if you go out and read the tea leaves; it's a pretty significant re-architecture of many, many parts of vSphere. So beyond the Kubernetes, what are some of the other things that are coming out in this very significant release? >> Yeah, that's a great question, because we tend to talk a lot about Kubernetes, what was Project Pacific but is now just part of vSphere, and certainly that is a very large aspect of it. But to your point, vSphere 7 is a massive release with all sorts of other features. And so there is a demo here; let's pull up some slides, and we're ready to take a look at what's there. So, outside of Kubernetes, there are kind of three main categories that we think about when we look at vSphere 7. The first one is simplified lifecycle management. Then there's a real focus on security as a second one, and then applications as well, including both the cloud native apps that fit in the Kubernetes bucket and others. And so if we go to the first one, the first column there, there's a ton of stuff that we're doing around simplifying lifecycle.
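Jared's compliance example a moment earlier, setting security settings once on the application object so that every VM and container inside inherits them, can be sketched in miniature. These names are hypothetical and do not reflect vSphere's actual object model:

```python
# Toy model of application-centric policy: one policy on the app object,
# inherited by all members, so compliance is one check instead of 100.

DEFAULTS = {"encrypted": False, "firewall": "open"}  # platform defaults

class Application:
    """A logical application: one policy, many member VMs/containers."""
    def __init__(self, name, policy=None):
        self.name = name
        self.policy = policy or {}
        self.members = []  # VMs and containers belonging to this app

    def effective_policy(self):
        # App-level settings override the platform defaults for every member.
        return {**DEFAULTS, **self.policy}

    def compliant(self, required):
        # One app-level check stands in for auditing each member individually.
        eff = self.effective_policy()
        return all(eff.get(k) == v for k, v in required.items())

app = Application("billing", policy={"encrypted": True, "firewall": "deny-all"})
app.members += ["billing-vm-01", "billing-pod-api"]
print(app.compliant({"encrypted": True}))  # one check covers all members
```

The design point is that the audit surface shrinks from every object to every application, which is what makes the "few dozen applications instead of 1,000 VMs" view manageable.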
So let's go to the next slide, where we can dive in a little bit more to the specifics. So we have this new technology, vSphere Lifecycle Manager, vLCM. And the idea here is: how do we dramatically simplify upgrades and lifecycle management of the ESXi clusters and ESXi hosts? How do we make them more declarative, with a single image you can now specify for an entire cluster? We find that a lot of our vSphere admins, especially at larger scales, have a really tough time doing this. There are a lot of ins and outs today; it's somewhat tricky to do. And so we want to make it really, really simple and really easy to automate as well. >> So if you're doing Kubernetes on Kubernetes, I suppose you're going to have automation on automation, because upgrading to version seven is probably not an inconsequential task. >> Yeah, and going forward, as we start moving to deliver a lot of this great vSphere functionality at a more rapid clip, how do we enable our customers to take advantage of all those great things we're putting out there as well? >> The next big thing you talk about is security. >> Yep. >> We just got back from RSA. Thank goodness we got that show in before all the badness started. But everyone always talks about how security has got to be baked in from the bottom to the top. Talk about kind of the changes in security. >> So we've done a lot of things around security: things around identity federation, things around simplifying certificate management, dramatic simplifications there across the board. What I want to focus on here, on the next slide, is actually what we call vSphere Trust Authority. And with that one, what we're looking at is: how do we reduce the potential attack surfaces and really ensure there's a trusted computing base? When we talk to customers, what we find is that they're nervous about a lot of different threats, including even internal ones, right? How do they know all the folks that work for them can be fully trusted?
And obviously, if you're hiring someone, you somewhat trust them, but how do you implement the concept of least privilege? >> Jeff: Or zero trust (murmurs) >> Exactly. So the idea with Trust Authority is that we can specify a small number of physical ESXi hosts that you can really lock down and ensure are fully secure. Those can be managed by a special vCenter Server, which is in turn very locked down; only a few people have access to it. And then those hosts and that vCenter can manage other hosts that are untrusted, and can use attestation to actually prove that, okay, these untrusted hosts haven't been modified. We know they're okay, so they're okay to actually run workloads on, or okay to put data on, and that sort of thing. So it's this kind of building-block approach to ensure that businesses can have a very small trust base off of which they can build out their entire vSphere environment. >> And then the third leg of the stool is just better leveraging kind of a more complex asset ecosystem, if you will, with things like FPGAs and GPUs, and all of the various components that power these different applications, where now the application can draw the appropriate resources as needed. So you've done a lot of work there as well. >> Yeah, there's a ton of innovation happening in the hardware space. As you mentioned, all sorts of accelerators are coming out. We all know about GPUs, and obviously what they can do for machine learning and AI type use cases, not to mention 3D rendering. But there are FPGAs and all sorts of other things coming down the pike as well. And so what we found is that as customers try to roll these out, they have a lot of the same problems that we saw in the very early days of virtualization, i.e. silos of specialized hardware that different teams were using.
And what you find is all the things we found before: very low utilization rates, inability to automate, inability to manage it well, to put security and compliance around it, and so forth. And so this is really the reality that we see in most customers, and it's funny because sometimes you think, "Wow, shouldn't we be past this? As an industry, shouldn't we have solved this already? We did this with virtualization." But as it turns out, the virtualization we did was for compute, and then storage and network. Now we really need to virtualize all these accelerators. And so that's where this Bitfusion technology that we're including now with vSphere really comes to the forefront. So if you see in the current slide we're showing here, the challenge is just these separate pools of infrastructure: how do you manage all that? And so if we go to the next slide, what we see is that, with Bitfusion, you can do the same thing that we saw with compute virtualization: you can now pool all these different silos of infrastructure together, so they become one big pool of GPU infrastructure that anyone in an organization can use. We can have multiple people sharing a GPU; we can do it very dynamically. And the great part of it is that it's really easy for these folks to use. They don't even need to think about it; in fact, it integrates seamlessly with their existing workflows. >> So it frees things up, and it's pretty cheap, because the classes of assets now are much larger, much more varied, and much more workload-specific, right? That's really the opportunity slash challenge there. >> They are a lot more diverse. And a couple of other things, I don't have a slide on them, but just things we're doing to our base capabilities, things around DRS and vMotion: really massive evolutions there as well, to support a lot of these bigger workloads.
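The accelerator-pooling idea Kit describes, turning per-team GPU silos into one shared pool that jobs draw fractional shares from, can be sketched as follows. This is not Bitfusion's actual mechanism; it is a hypothetical toy allocator to illustrate why pooling raises utilization:

```python
# Toy shared-GPU pool: jobs request a fraction of a GPU, and shares are
# packed onto devices first-fit, so idle capacity isn't stranded per team.

class GpuPool:
    def __init__(self, num_gpus):
        # Remaining fractional capacity per GPU (1.0 == a whole GPU free).
        self.free = [1.0] * num_gpus

    def allocate(self, fraction):
        """Grant `fraction` of a GPU from the first device with room."""
        for i, cap in enumerate(self.free):
            if cap >= fraction:
                self.free[i] = round(cap - fraction, 6)
                return i  # index of the GPU backing this share
        raise RuntimeError("pool exhausted")

pool = GpuPool(num_gpus=2)
a = pool.allocate(0.5)   # two half-GPU jobs share GPU 0
b = pool.allocate(0.5)
c = pool.allocate(0.75)  # lands on GPU 1
```

With silos, those three jobs might have needed three dedicated cards; with a pool, two cards serve all of them, which is the utilization argument in the passage above.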
So you look at some of the massive SAP HANA or Oracle databases: how do we ensure that vMotion can scale to handle those without impacting their performance or anything else there? Making DRS smarter about how it does load balancing, and so forth. So a lot of this stuff is not just the brand-new, cool accelerator stuff; it's also how do we ensure that the core, which people have already been running for many years, continues to keep up with the innovation and scale there as well. >> All right. So Jared, I'll give you the last word. You've been working on this for a while. There's a whole bunch of admins that have to sit and punch keys. What do you tell them? What should they be excited about? What are you excited for them about in this new release? >> I think what I'm excited about is how IT can really be an enabler of the transformation of modern apps. Today, you look at all of these organizations, and what ends up happening is the app team ends up sort of building their own infrastructure on top of IT infrastructure. And so now, I think we can shift that story around. I think there's an interesting conversation that a lot of IT departments and app dev teams are going to be having over the next couple of years about how to really offload some of these infrastructure tasks from the dev team: make them more productive, give them better performance, availability, disaster recovery, and these kinds of capabilities. >> Awesome. Well, Jared and Kit, congratulations to both of you for getting the release out. I'm sure it was a heavy lift, and it's always good to get it out in the world and let people play with it. And thanks for sharing a little bit more of a technical deep dive into this; there are a ton more resources for people that do want to go down into the weeds. So thanks for stopping by. >> Thank you. >> Thank you. >> Alright, he's Jared, he's Kit, I'm Jeff. You're watching theCube. We're in the Palo Alto Studios. Thanks for watching, we'll see you next time.
(upbeat music) >> Hi, and welcome to a special CUBE Conversation. I'm Stu Miniman, and we're digging into the VMware vSphere 7 announcement. We've had conversations with some of the executives and some of the technical people, but we know that there's no better way to really understand the technology than to talk to some of the practitioners that are using it. So I'm really happy to have joining me on the program Philip Buckley-Mellor, who is an infrastructure designer with British Telecom, joining me digitally from across the pond. Phil, thanks so much for joining us. >> Nice to be here. >> Alright, so Phil, let's start. Of course, British Telecom, I think most people know what BT is, and it's a really sprawling company. Tell us a little bit about your group, your role, and what's your mandate. >> Okay, so my group is called Service Platforms. It's the bit of BT that services all of our multi-millions of customers. So we have broadband, we have TV, we have mobile, we have DNS and email systems. And it's all about our customers. It's not a B2B part of BT, you're with me? We specifically focus on those kind of multi-million-customer services that we've got. And in particular, my group does infrastructure. We really run from the data center all the way up to about boot time, or just past boot time, and the application developers look after that stage and above. >> Okay, great. We're definitely going to want to dig in and talk about that boundary between the infrastructure teams and the application teams. But let's talk a little bit first, we're talking about VMware. So, how long has your organization been doing VMware, and tell us what you see with the announcement that VMware is making for vSphere 7? >> Sure, well, I mean, we've had a really great relationship with VMware for about 12, 13 years, something like that. And it's an absolutely key part of our infrastructure.
It's woven throughout BT, really, in every part of our operations, design, development, and the whole ethos of the company is based around a lot of VMware products. And so one of the challenges that we've got right now is that application architectures are changing quite significantly at the moment, as you know, in particular with serverless, with containers and a whole bunch of other things like that. We're very comfortable with our ability to manage VMs and have been for a while. We currently use extensively vSphere, NSX-T, vROps, Log Insight, Network Insight and a whole bunch of other VMware constellation applications. And our operations teams know how to use those; they know how to optimize, they know how to capacity plan, and (murmurs). So that's great. And that's been like that for half a decade at least; we've been really, really confident in our ability to deal with VMware environments. And along came containers and, let's say, multi-cloud as well. And what we were struggling with was the inability to have a single pane of glass, really, on all of that, and to use the same people and the same processes to manage a different kind of technology. So we've been working pretty closely with VMware on a number of different containerization products. For several years now, I've worked really closely with the vSphere Integrated Containers guys in particular, and now with the Pacific guys, with really the ideal being that when we bring in version seven and the containerization aspects of version seven, we'll be in a position to have that single pane of glass to allow our operations team to barely differentiate between what's a VM and what's a container. That's really the Holy Grail. So we'll be able to allow our developers to develop, our operations team to deploy and to operate, and our designers to see the same infrastructure, whether that's on-premises, cloud or off-premises, and be able to manage the whole piece in that respect.
>> Okay, so Phil, really interesting things you walk through here. You've been using containers in a virtualized environment for a number of years. I want to understand the organizational piece just a little bit, because it sounds great, I manage all the environment, but containers are a little bit different than VMs. If I think back, from an application standpoint, it was, let's stick it in a VM, I don't need to change it. And once I spin up a VM, often that's going to sit there for months, if not years, as opposed to a containerization environment, where I really want a pool of resources, and I'm going to create and destroy things all the time. So, bring us inside that organizational piece. How much will there need to be more interaction or change in policies between your infrastructure team and your app dev team? >> Well, yes, you're absolutely right. The nature of the timescales that we're talking about between VMs and containers is wildly different. As you say, we almost certainly have VMs in place now that were in place in 2018, certainly, I imagine, and haven't really been touched. Whereas, as you say, with containers a lot of people talk about spinning them up and down all the time. There are parts of the architecture that require that; in particular, the very client-facing, bursty stuff does require spinning up and spinning down pretty quickly. But some of our other containers do sit around for weeks, if not months. It really does depend on the development cycle aspects of that. But the heartache that we've really had was just visualizing it. And there are a number of different products out there that allow you to see the behavior of your containers and understand the resource requirements that they have at any given moment, to allow us to troubleshoot and so on. But those are the new things that we will have to get used to.
And also, it seems that there's an awful lot of competing products, quite a Venn diagram in terms of functionality and users' abilities to do that. So again, coming back to being able to manage through vSphere, to be able to have a list of VMs alongside a list of containers, and to be able to use policies to define how they behave in terms of their networking, to be able to essentially put our deployments on rails by using, in particular, tag-based policies, means that we can take the onus of security, performance management and capacity management away from the developers, who don't really have a lot of time, and they can just get on with their job, which is to develop new functionality and help our customers. So that means we then have to be really responsible about defining those policies and making sure that they're adhered to. But again, we know how to do that with VMs through vSphere. So the fact that we can actually apply that straight away, just with a slightly different compute unit, which is really what we're talking about here, is ideal. And then to be able to extend that into multiple clouds as well, because we do use multiple clouds, (murmurs) and Azure among them, and moving between them is an opportunity that we can't do anything other than be excited about. >> Yeah, Phil, I really like how you described the changing roles that are happening there in your organization. You need to understand, right? There's things that developers care about, they want to move fast, they want to be able to build new things, and there's things that they shouldn't have to worry about. And, you know, we talked about some of the new world, and it's like, oh, can the platform underneath this take care of it? Well, there's some things platforms take care of, and there's some things that the software or your team is going to need to understand.
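The tag-based, deployments-on-rails approach Phil describes can be sketched abstractly: a workload carries tags, and the platform derives networking, security, and capacity settings from them, so developers never set those directly. The sketch below is a conceptual toy, not a vSphere or NSX API; every tag name and policy field in it is invented for illustration.

```python
# Toy illustration of tag-based policy: workloads inherit settings from
# the tags they carry. All tag names and policy fields are hypothetical.

POLICIES = {
    "tier:web": {"firewall": "allow-80-443", "encryption": False},
    "tier:db":  {"firewall": "deny-all-external", "encryption": True},
    "env:prod": {"capacity_reserved": True},
}

def effective_policy(tags):
    """Merge the policies attached to each tag; later tags win on conflict."""
    merged = {}
    for tag in tags:
        merged.update(POLICIES.get(tag, {}))
    return merged

# A workload (VM or container -- the "slightly different compute unit")
# only declares its tags; the platform derives the rest.
workload = {"name": "billing-db", "tags": ["tier:db", "env:prod"]}
print(effective_policy(workload["tags"]))
# {'firewall': 'deny-all-external', 'encryption': True, 'capacity_reserved': True}
```

The point of the pattern is the division of labor Phil calls out: the operations team is responsible for defining and auditing the policy table once, while developers only pick tags and get on with building functionality.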
So maybe if you could dig in a little bit on some of those. What are the drivers from your application portfolio? What is the business asking of your organization that's driving this change and being one of those tailwinds pushing you towards Kubernetes and the vSphere 7 technologies? >> Well, it all comes down to the customers, right? Our customers want new functionality. They want new integrations, they want new content, they want better stability and better performance, and our ability to extend or contract capacity as needed as well. So the ultimate challenge is that we want to give our customers the best possible experience of our products and services, and we have to address that. Really, from a development perspective, our developers have the responsibility to design and deploy those. So, in infrastructure, we have to act as a firm foundation, really, underneath all of that, one that allows them to know that what they spend their time developing and want to push out to our customers is something that can be trusted, is performant, that we understand where the capacity requirements are coming from in the short term and in the long term, and is secure as well, obviously; that's a big aspect to it. And so really, we're just providing our developers with the best possible chance of giving our customers what will hopefully make them delighted. >> Great. Phil, you've mentioned a couple of times that you're using public clouds as well as your VMware farm. Want to make sure you can explain a couple of things. Number one is, when it comes to your team, especially your infrastructure team, how much are they involved with setting up some of the basic pieces or managing things like performance in the public cloud? And secondly, when you look at your applications, are some of your applications hybrid, going between the data center and the public cloud?
And I haven't talked to too many customers that are doing applications that just live in any cloud and move things around. But maybe if you could clarify those pieces as to what cloud really means to your organization and your applications? >> Sure, well, I mean, for us, cloud allows us to accelerate development, which is nice because it means we don't have to do on-premises capacity lifts for new pieces of functionality; we can initially build in the cloud and test in the cloud. But very often, applications really make better sense on-premises, especially in the TV environment, where people watch TV all the time. I mean, yes, there are peak hours and lighter hours of TV watching. The same goes for broadband, really. But we generally run well more than an eight-hour application profile. So what that allows us to do is run applications inside our organization where it makes sense, or where we have to run them in our organization for data protection reasons or whatever; we can do that as well. But where, say for instance, we have a boxing match on, and we're going to be seeing an enormous spike in the number of customers that want to sign up into our order journey to allow them to view that and to gain access to it, well, why would you spend a lot of money on servers just for that level of additional capacity? So we do absolutely have hybrid applications, or sorry, hybrid blocks; we have blocks of sub-applications, dozens of them really, to support our platform.
And what you would see is that if you were to look at our full application structure for one of the platforms I mentioned, some of those application blocks have to run inside, some can run outside, and what we want to be able to do is to allow our operations team to define, again by policies, where they run, and to have a system that allows us to transparently see where they're running, how they're running, and the implications of those decisions, so that we can tune those maybe in the future as well. And that way, we best serve our customers. We've got to give our customers what they need. >> All right, great. Phil, final question I have for you. You've been through a few iterations of looking at VMs, containers, public cloud. What advice would you give your peers, with the announcement of vSphere 7, on how they can look at things today in 2020 versus what they might have looked at, say, a year or two ago? >> Well, I'll be honest, I was a little bit surprised by vSphere 7. We knew that VMware were working on trying to put containers on the same level as VMs, both from a management and a deployment perspective. I mean, they're called VMware, after all, right? And we knew that they were looking at that. But I was surprised by just quite how quickly they've managed to almost completely reinvent the application, really. If you look at the whole Tanzu stuff and the Mission Control stuff, I think a lot of people were blown away by just quite how happy VMware were to reinvent themselves from an application perspective, and to really leap forward. And this is between version six and seven. I've been following this since version three, at least. And it's an absolutely revolutionary change in terms of the overall architecture and the aims of what they want to achieve with the application.
And luckily, the nice thing is that if you're used to version six, it's not that big a deal to move forward at all; it's not such a big change to process and training and things like that. But my word, there's an awful lot of work underneath that, underneath the covers. And I'm really excited. And I think all the people in my position should really take it as an opportunity to revisit what they can achieve, in particular with vSphere in combination with NSX-T. It's quite hard to appreciate, unless you've seen the slides and unless you've seen the product, just how revolutionary version seven is compared to previous versions, which have kind of evolved over a couple of years. So yeah, I'm really excited about it. And I know a lot of my peers at the companies that I speak with quite often are very excited about seven as well. So yeah, I'm really excited about the whole thing. >> Well, Phil, thank you so much. Absolutely no doubt this is a huge move for VMware, the entire company and their ecosystem rallying around to help move to the next phase of where application developers and infrastructure need to go. Phil Buckley joining us from British Telecom. I'm Stu Miniman. Thank you so much for watching theCUBE. (upbeat music)
Jared Rosoff & Kit Colbert, VMware | CUBEConversation, March 2020
(upbeat music) >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We are having a very special CUBE Conversation and kind of the ongoing unveil, if you will, of the new VMware vSphere 7.0. We're going to get a little bit more of a technical deep-dive here today, and we're excited to have a longtime CUBE alumni. Kit Colbert here is the VP and CTO of Cloud Platform at VMware. Kit, great to see you. >> Yeah, happy to be here. >> And new to theCUBE, Jared Rosoff. He's a Senior Director of Product Management at VMware, and I'm guessing had a whole lot to do with this build. So Jared, first off, congratulations for birthing this new release, and great to have you on board. >> Thanks, feels pretty great, great to be here. >> All right, so let's just jump into it. From kind of a technical aspect, what is so different about vSphere 7? >> Yeah, great. So vSphere 7 bakes Kubernetes right into the virtualization platform. And so this means that as a developer, I can now use Kubernetes to actually provision and control workloads inside of my vSphere environment. And it means as an IT admin, I'm actually able to deliver Kubernetes and containers to my developers really easily, right on top of the platform I already run. >> So I think we had kind of a sneaking suspicion that that might be coming with the acquisition of the Heptio team. So really exciting news, and I think Kit, you teased it out quite a bit at VMworld last year about really enabling customers to deploy workloads across environments, regardless of whether that's on-prem, public cloud, this public cloud, that public cloud, so this really is the realization of that vision. >> It is, yeah. So we talked at VMworld about Project Pacific, right, this technology preview. And as Jared mentioned, what that was, was how do we take Kubernetes and really build it into vSphere? As you know, we've had a hybrid cloud vision for quite a while now. How do we proliferate vSphere to as many different locations as possible?
Now part of the broader VMware Cloud Foundation portfolio. And you know, as we've gotten more and more of these instances in the cloud, on premises, at the edge, with service providers, there's a secondary question of how do we actually evolve that platform so it can support not just the existing workloads, but also modern workloads as well. >> Right. All right, so I think you brought some pictures for us, a little demo. So why don't we, >> Yeah. Why don't we jump over >> Yeah, let's dive into it. to there and let's see what it looks like? You guys can cue up the demo. >> Jared: Yeah, so we're going to start off looking at a developer actually working with the new VMware Cloud Foundation 4 and vSphere 7. So what you're seeing here is the developer's actually using Kubernetes to deploy Kubernetes. The self-eating watermelon, right? So the developer uses this Kubernetes declarative syntax where they can describe a whole Kubernetes cluster. And the whole developer experience now is driven by Kubernetes. They can use the kubectl tool and all of the ecosystem of Kubernetes APIs and tool chains to provision workloads right into vSphere. And so, that's not just provisioning workloads, though; this is also key to the developer being able to explore the things they've already deployed. So go look at, hey, what's the IP address that got allocated to that? Or what's the CPU load on this workload I just deployed? On top of Kubernetes, we've integrated a container registry into vSphere. So here we see a developer pushing and pulling container images. And you know, one of the amazing things about this is, from an infrastructure-as-code standpoint, now the developer's infrastructure as well as their software is all unified in source control. I can check in not just my code, but also the description of the Kubernetes environment and storage and networking and all the things that are required to run that app.
So now we're looking at a sort of side-by-side view, where on the right-hand side is the developer continuing to deploy some pieces of their application, and on the left-hand side we see vCenter. And what's key here is that as the developer deploys new things through Kubernetes, those are showing up right inside of the vCenter console. And so the developer and IT are seeing exactly the same things with the same names. And so this means when a developer calls their IT department and says, hey, I got a problem with my database, we don't spend the next hour trying to figure out which VM they're talking about. They've got the same name, they see the same information. So what we're going to do is, you know, push the developer screen aside and start digging into the vSphere experience. And what you'll see here is that vCenter is the vCenter you've already known and loved, but what's different is that now it's much more application-focused. So here we see a new screen inside of vCenter, vSphere Namespaces. And so these vSphere namespaces represent whole logical applications, like the whole distributed system; now that's a single object inside of vCenter. And when I click into one of these apps, this is a managed object inside of vSphere. I can click on permissions, and I can decide which developers have the permission to deploy or read the configuration of one of these namespaces. I can hook this into my Active Directory infrastructure, so I can use the same corporate credentials to access the system. I tap into all my existing storage; this platform works with all of the existing vSphere storage providers. I can use storage policy based management to provide storage for Kubernetes. And it's hooked in with things like DRS, right? So I can define quotas and limits for CPU and memory, and all of that's going to be enforced by DRS inside the cluster. And again, as an admin, I'm just using vSphere.
But to the developer, they're getting a whole Kubernetes experience out of this platform. Now, vSphere also sucks in all this information from the Kubernetes environment. So besides seeing the VMs and things the developers have deployed, I can see all of the desired-state specifications, all the different Kubernetes objects that the developers have created: the compute, network and storage objects. They're all integrated right inside the vCenter console. And so once again, from a diagnostics and troubleshooting perspective, this data's invaluable. It often saves hours just in trying to figure out what we're even talking about when we're trying to resolve an issue. So as you can see, this is all baked right into vCenter. The vCenter experience isn't transformed a lot. We get a lot of VI admins who look at this and say, where's the Kubernetes? And they're surprised; they've been managing Kubernetes all this time, it just looks like the vSphere experience they've already got. But all those Kubernetes objects, the pods and containers, Kubernetes clusters, load balancers, storage, they're all represented right there natively in the vCenter UI. And so we're able to take all of that and make it work for your existing VI admins. >> Well, that's pretty wild, you know. It really builds off the vision that, again, I think you kind of outlined, Kit, teased out at VMworld, which was that IT still sees vSphere, which is what they want to see, what they're used to seeing, but devs see Kubernetes. And really bringing those together in a unified environment so that, depending on what your job is and what you're working on, that's what you're going to see, in that kind of unified environment.
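The declarative model underlying the demo — the developer checks a desired state into source control, and the platform continuously converges the running environment toward it — can be sketched with a minimal reconcile loop. This is a conceptual sketch of the general Kubernetes pattern, with invented object names; it is not vSphere's or Kubernetes' actual implementation.

```python
# Minimal sketch of declarative reconciliation: compare desired state
# (what the developer checked in) against actual state, and compute the
# create/update/delete actions needed to converge.

def reconcile(desired, actual):
    """Return the actions that move `actual` toward `desired`.
    Both arguments are dicts mapping object name -> spec."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# Desired state: a 3-replica web deployment and a database.
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
# Actual state: web is under-scaled, and an orphaned debug pod lingers.
actual = {"web": {"replicas": 2}, "debug": {"replicas": 1}}

for action in reconcile(desired, actual):
    print(action)
# ('update', 'web', {'replicas': 3})
# ('create', 'db', {'replicas': 1})
# ('delete', 'debug', None)
```

Because the same desired-state documents drive both provisioning and inspection, the developer and the VI admin in the demo are literally looking at renderings of the same objects, just through two different consoles.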
The Kubernetes-based one, which is of course great for developers and DevOps type folks, as well as the traditional vSphere interfaces and APIs, which are great for VI admins and IT operations. >> Right. And really, it was interesting too. You teased out a lot. That was a good little preview if people knew what they were watching, but you talked about really the cloud journey, and kind of this bifurcation of old-school apps that are running in their classic VMs and then the modern, you know, cloud-native applications built on Kubernetes. And you outlined a really interesting thing: people often talk about the two ends of the spectrum and getting from one to the other, but not really about kind of the messy middle, if you will. And this is really enabling people to pick where along that spectrum they can move their workloads or move their apps. >> Yeah, no. I think we think a lot about it like that. We talk to customers, and all of them have very clear visions on where they want to go, their future-state architecture. And that involves embracing cloud, it involves modernizing applications. And you know, as you mentioned, it's challenging for them, because I think what a lot of customers see is these two extremes. Either you're here where you are, with kind of the current world, and you've got the bright nirvana future on the far end there. And they believe that the only way to get there is to kind of make a leap from one side to the other, that you have to kind of change everything out from underneath you. And that's obviously very expensive, very time consuming and very error-prone as well. There's a lot of things that can go wrong there. And so I think what we're doing differently at VMware is really, to your point, as you call it the messy middle, I would say it's more like, how do we offer stepping stones along that journey? Rather than making this one giant leap that requires investing all this time and resources,
how can we enable people to make smaller incremental steps, each of which has a lot of business value but doesn't have a huge amount of cost? >> Right. And it's really enabling kind of this next-gen application, where there's a lot of things that are different about it, but one of the fundamental things is that now the application defines the resources that it needs to operate, versus the resources defining the capabilities of what the application can do. And that's where everybody is moving as quickly as makes sense. As you said, not all applications need to make that move, but most of them should, and most of them are, or are at least making that journey. Do you see that? >> Yeah, definitely. I mean, I think that certainly this is one of the big evolutions we're making in vSphere. From looking historically at how we managed infrastructure, one of the things we enable in vSphere 7 is how we manage applications, right? So a lot of the things you would do in infrastructure management, setting up security rules or encryption settings or, you know, your resource allocation, you would do in terms of your physical and virtual infrastructure. You'd talk about it in terms of, this VM is going to be encrypted, or this VM is going to have this firewall rule. And what we do in vSphere 7 is elevate all of that to application-centric management. So you actually look at an application and say, I want this application to be constrained to this much CPU. Or, I want this application to have these security rules on it. And so that shifts the focus of management really up to the application level. >> Jeff: Right. >> Yeah, and I would even zoom back a little bit there and say, you know, if you look back, one thing we did was something like vSAN. Before that, people had to put policies on a LUN, you know, an actual storage LUN in a storage array. And then by virtue of a workload being placed on that array, it inherited certain policies, right?
And so vSAN really turned that around and allows you to put the policy on the VM. But what Jared's talking about now is that a modern workload's not a single VM, it's a collection of different things. We've got some containers in there, some VMs, probably distributed, maybe even some on-prem, some in the cloud. And so how do you start managing that more holistically? And this notion of really having an application as a first-class entity that you can now manage inside of vSphere is a really powerful and very simplifying one. >> Right. And why this is important is because it's this application-centric point of view which enables the digital transformation that people are talking about all the time. That's a nice big word, but where the rubber hits the road is, how do you execute and deliver applications, and more importantly, how do you continue to evolve them and change them based on either customer demands or competitive demands or just changes in the marketplace? >> Yeah, well, you look at something like a modern app that maybe has a hundred VMs that are part of it, and you take something like compliance, right? So today, if I want to check if this app is compliant, I've got to go look at every individual VM and make sure it's locked down, and hardened, and secured the right way. But now instead, what I can do is just look at that one application object inside of vCenter, set the right security settings on that, and I can be assured that all the different objects inside of it are going to inherit that stuff. So it really simplifies that. It also makes it so that that admin can handle much larger applications. You know, if you think about vCenter today, you might log in and see a thousand VMs in your inventory. When you log in with vSphere 7, what you see is a few dozen applications. So a single admin can manage a much larger pool of infrastructure, many more applications than they could before, because we automate so much of that operation.
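Jared's compliance example — set a policy once on the application object and every member VM or container inherits it — is essentially hierarchical setting resolution. Here is a toy sketch of that idea; the class and setting names are invented for illustration and are not the vCenter object model:

```python
# Toy sketch of application-centric management: settings applied at the
# application level are inherited by every member object, so checking or
# changing compliance touches one object instead of a hundred VMs.

class ManagedObject:
    def __init__(self, name, settings=None, parent=None):
        self.name = name
        self.settings = settings or {}
        self.parent = parent

    def resolve(self, key):
        """Walk up the hierarchy until a setting is found."""
        node = self
        while node is not None:
            if key in node.settings:
                return node.settings[key]
            node = node.parent
        return None

app = ManagedObject("payments-app", settings={"encrypted": True, "fw": "strict"})
vms = [ManagedObject(f"vm-{i}", parent=app) for i in range(100)]

# One check at the app level answers for all hundred members:
assert all(vm.resolve("encrypted") for vm in vms)

# A member can still override locally if it genuinely needs to:
vms[0].settings["fw"] = "relaxed"
print(vms[0].resolve("fw"), vms[1].resolve("fw"))  # relaxed strict
```

This is the same inversion Kit describes for vSAN: the policy attaches to the logical object (here the application), and placement or membership determines what each piece inherits, rather than the other way around.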
>> And it's not just the scale part, which is obviously really important, but it's also the rate of change. And this notion of how do we enable developers to get what they want to get done, done, i.e., building applications, while at the same time enabling the IT operations teams to put the right sort of guardrails in place around compliance and security, performance concerns, these sorts of elements. And so by being able to have the IT operations team really manage that logical application at that more abstract level and then have the developer be able to push in new containers or new VMs or whatever they need inside of that abstraction, it actually allows those two teams to work actually together and work together better. They're not stepping over each other but in fact now, they can both get what they need to get done, done, and do so as quickly as possible but while also being safe and in compliance and so forth. >> Right. So there's a lot more to this. This is a very significant release, right? Again, lot of foreshadowing if you go out and read the tea leaves, it's a pretty significant, you know, kind of re-architecture of many parts of vSphere. So beyond the Kubernetes, you know, kind of what are some of the other things that are coming out in this very significant release? >> Yeah, that's a great question because we tend to talk a lot about Kubernetes, what was Project Pacific but is now just part of vSphere, and certainly that is a very large aspect of it but to your point, vSphere 7 is a massive release with all sorts of other features. And so instead of a demo here, let's pull up some slides and we'll take a look at >> Already? what's there. So outside of Kubernetes, there's kind of three main categories that we think about when we look at vSphere 7. So the first one is simplified lifecycle management. 
And then really focused on security is the second one, and then applications as well, but both including the cloud native apps that couldn't fit in the Kubernetes bucket as well as others. And so we go on the first one, the first column there, there's a ton of stuff that we're doing around simplifying lifecycle. So let's go to the next slide here where we can dive in a little bit more to the specifics. So we have this new technology, vSphere life cycle management, vLCM, and the idea here is how do we dramatically simplify upgrades, life cycle management of the ESX clusters and ESX hosts? How do we make them more declarative with a single image that you can now specify for an entire cluster. We find that a lot of our vSphere admins, especially at larger scales, have a really tough time doing this. There's a lot of in and outs today, it's somewhat tricky to do. And so we want to make it really really simple and really easy to automate as well. >> Right. So if you're doing Kubernetes on Kubernetes, I suppose you're going to have automation on automation, right? Because upgrading to the seven is probably not an inconsequential task. >> And yeah, and going forward and allowing, you know, as we start moving to deliver a lot of this great vSphere functionality at a more rapid clip, how do we enable our customers to take advantage of all those great things we're putting out there as well? >> Right. Next big thing you talk about is security. >> Yep. >> And we just got back from RSA, thank goodness we got that show in before all the madness started. >> Yep. >> But everyone always talked about security's got to be baked in from the bottom to the top. So talk about kind of the changes in the security. >> So, done a lot of things around security. Things around identity federation, things around simplifying certificate management, you know, dramatic simplifications there across the board. 
One I want to focus on here on the next slide is actually what we call vSphere trust authority. And so with that one what we're looking at here is how do we reduce the potential attack surfaces and really ensure there's a trusted computing base? When we talk to customers, what we find is that they're nervous about a lot of different threats including even internal ones, right? How do they know all the folks that work for them can be fully trusted? And obviously if you're hiring someone, you somewhat trust them but you know, how do you implement the concept of least privilege? Right? >> Right. >> Jeff: Or zero trust, right, is a very hot topic >> Yeah, exactly. in security. >> So the idea with trust authority is that we can specify a small number of physical ESX hosts that you can really lock down and ensure are fully secure. Those can be managed by a special vCenter server which is in turn very locked down, only a few people have access to it. And then those hosts and that vCenter can then manage other hosts that are untrusted and can use attestation to actually prove that okay, these untrusted hosts haven't been modified; we know they're okay, so they're okay to actually run workloads on, okay to put data on, and that sort of thing. So it's this kind of like building block approach to ensure that businesses can have a very small trust base off of which they can build to include their entire vSphere environment. >> Right. And then the third kind of leg of the stool is, you know, just better leveraging, you know, kind of a more complex asset ecosystem, if you will, with things like FPGAs and GPUs and you know, >> Yeah. kind of all of the various components that power these different applications, from which now the application can draw the appropriate resources as needed, so you've done a lot of work there as well. >> Yeah, there's a ton of innovation happening in the hardware space. As you mentioned, all sorts of accelerators coming out.
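The trust authority flow Kit describes, a small locked-down trusted base that attests untrusted hosts before they may run workloads or hold data, can be sketched at a conceptual level. To be clear, this is not the real protocol (the actual mechanism relies on hardware-rooted remote attestation); the hash comparison and every name below are invented purely to illustrate the "prove the host is unmodified first" idea.

```python
import hashlib

# Conceptual sketch of attestation: the trusted infrastructure holds a set
# of known-good software measurements, and an untrusted host must present a
# matching measurement before any workload is placed on it.
# Illustrative only -- not vSphere Trust Authority's actual protocol.

KNOWN_GOOD = {hashlib.sha256(b"esx-build-7.0-golden").hexdigest()}

def attest(host_measurement: bytes) -> bool:
    """Trusted base verifies an untrusted host's software measurement."""
    return hashlib.sha256(host_measurement).hexdigest() in KNOWN_GOOD

def place_workload(host_measurement: bytes) -> str:
    if attest(host_measurement):
        return "workload scheduled"   # host proven unmodified
    return "host quarantined"         # fails attestation: no workloads, no data

print(place_workload(b"esx-build-7.0-golden"))    # workload scheduled
print(place_workload(b"esx-build-7.0-tampered"))  # host quarantined
```

The building-block idea is visible even in the toy version: the trust decisions all flow from a very small base (here, one known-good measurement) rather than from trusting every host equally.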
We all know about GPUs, and obviously what they can do for machine learning and AI type use cases, not to mention 3-D rendering. But you know, FPGAs and all sorts of other things coming down the pike as well there. And so what we found is that as customers try to roll these out, they have a lot of the same problems that we saw in the very early days of virtualization. I.e., silos of specialized hardware that different teams were using. And you know, what you find is all the things we found before. You find very low utilization rates, inability to automate that, inability to manage that well or put in security and compliance, and so forth. And so this is really the reality that we see at most customers. And it's funny because you think, well wow, shouldn't we be past this? As an industry, shouldn't we have solved this already? You know, we did this with virtualization. But as it turns out, the virtualization we did was for compute, and then storage and network, but now we really need to virtualize all these accelerators. And so that's where this Bitfusion technology that we're including now with vSphere really comes to the forefront. So if you see in the current slide we're showing here, the challenge is just that: with these separate pools of infrastructure, how do you manage all that? And so if we go to the next slide, what we see is that with Bitfusion, you can do the same thing that we saw with compute virtualization. You can now pool all these different silos of infrastructure together so they become one big pool of GPU infrastructure that anyone in an organization can use. We can, you know, have multiple people sharing a GPU. We can do it very dynamically. And the great part of it is that it's really easy for these folks to use. They don't even need to think about it. In fact, it integrates seamlessly with their existing workflows.
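The pooling idea Kit describes, turning per-team GPU silos into one shared pool that jobs can borrow slices of dynamically, can be sketched as a toy allocator. All names here are invented for illustration; this is not Bitfusion's actual mechanism, just the resource-sharing pattern it applies to accelerators.

```python
# Toy accelerator pool: instead of each team owning idle GPUs, all GPUs form
# one shared pool, and jobs are granted fractions of a device on demand.
# Illustrative only -- not how Bitfusion is implemented.

class GpuPool:
    def __init__(self, num_gpus):
        # Track the free fraction of each GPU (1.0 = fully free).
        self.free = {f"gpu-{i}": 1.0 for i in range(num_gpus)}

    def allocate(self, fraction):
        """Grant `fraction` of the first GPU with capacity; return its id."""
        for gpu, avail in self.free.items():
            if avail >= fraction:
                self.free[gpu] = round(avail - fraction, 3)
                return gpu
        return None  # pool exhausted

    def release(self, gpu, fraction):
        self.free[gpu] = round(self.free[gpu] + fraction, 3)

    def utilization(self):
        return 1.0 - sum(self.free.values()) / len(self.free)

pool = GpuPool(4)
# Six half-GPU jobs share four physical GPUs -- multiple people per device.
jobs = [pool.allocate(0.5) for _ in range(6)]
print(pool.utilization())  # 0.75
```

Even this toy version shows the contrast with silos: six jobs fit on four devices because sharing is fractional and dynamic, which is exactly the low-utilization problem the pooling approach attacks.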
>> So it's pretty interesting 'cause the classifications of the assets now are much larger, much more varied, and much more workload-specific, right? That's really the opportunity / challenge that you guys are addressing. >> They are. >> A lot more diverse, yep. And so like, you know, a couple other things just, now, I don't have a slide on it, but just things we're doing to our base capabilities. Things around DRS and vMotion. Really massive evolutions there as well to support a lot of these bigger workloads, right? So you look at some of the massive SAP HANA or Oracle databases. And how do we ensure that vMotion can scale to handle those without impacting their performance or anything else there? Making DRS smarter about how it does load balancing and so forth. >> Jeff: Right. >> So a lot of the stuff is not just kind of brand new, cool new accelerator stuff, but it's also how do we ensure that, for the core apps people have already been running for many years, we continue to keep up with the innovation and scale there as well. >> Right. All right, so Jared, I give you the last word. You've been working on this for a while, there's a whole bunch of admins that have to sit and punch keys. What do you tell them, what should they be excited about, what are you excited for them in this new release? >> I think what I'm excited about is how, you know, IT can really be an enabler of the transformation of modern apps, right? I think today you look at a lot of these organizations and what ends up happening is the app team ends up sort of building their own infrastructure on top of IT's infrastructure, right? And so now I think we can shift that story around.
I think that there's, you know, there's an interesting conversation that a lot of IT departments and app dev teams are going to be having over the next couple years about how do we really offload some of these infrastructure tasks from the dev team, make you more productive, give you better performance, availability, disaster recovery, and these kinds of capabilities. >> Awesome. Well, Jared, congratulations again, both of you, on getting the release out. I'm sure it was a heavy lift, and it's always good to get it out in the world and let people play with it, and thanks for sharing a little bit more of a technical deep-dive. I'm sure there's a ton more resources for people that even want to go down into the weeds. So thanks for stopping by. >> Thank you. >> Thank you. >> All right, he's Jared, he's Kit, I'm Jeff. You're watching theCUBE. We're in the Palo Alto studios. Thanks for watching and we'll see you next time. (upbeat music)