Io-Tahoe Smart Data Lifecycle CrowdChat | Digital

>> Voiceover: From around the globe, it's theCUBE, with digital coverage of Data Automated, an event series brought to you by Io-Tahoe.

>> Welcome, everyone, to the second episode in our Data Automated series, made possible with support from Io-Tahoe. Today we're going to drill into the data lifecycle, meaning the sequence of stages that data travels through from creation to consumption to archive. The problem, as we discussed in our last episode, is that data pipelines are complicated, cumbersome and disjointed, and they involve highly manual processes. A smart data lifecycle uses automation and metadata to improve agility, performance, data quality and governance, and ultimately to reduce costs and time to outcomes. In today's session we'll define the data lifecycle in detail and provide perspectives on what makes a data lifecycle smart and, importantly, how to build smarts into your processes. In a moment we'll be back with Adam Worthington from Ethos to kick things off, then we'll go into an expert power panel to dig into the tech behind smart data lifecycles, and then we'll hop into the CrowdChat and give you a chance to ask questions. So stay right there. You're watching theCUBE.

>> Voiceover: Innovation. Impact. Influence. Welcome to theCUBE. Disruptors, developers and practitioners learn from the voices of leaders who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on theCUBE, your global leader in high-tech digital coverage.

>> Okay, we're back with Adam Worthington. Adam, good to see you. How are things across the pond?

>> Good, thank you. I'm sure our weather's a little bit worse than yours, but good.

>> Okay, so let's set it up. Tell us about yourself and what your role is as CTO.

>> Adam Worthington, as you said, CTO and co-founder of Ethos. We're a pretty young company; we're in our sixth year, and we specialize in emerging, disruptive technologies within the infrastructure, data center and cloud space. My role is the technical lead, so it's kind of my job to be an expert in all of the technologies that we work with, which can be a bit of a challenge if you have a huge portfolio, which is one of the reasons we deliberately focus, and also to handle the technical validation and evaluation of new technologies.

>> So you guys are really technology experts, data experts, and probably also experts in process and delivering customer outcomes, right?

>> That's a great word there, Dave: outcomes. That's a lot of what I like to speak to customers about.

>> Let's talk about smart data. When you throw out terms like this it can feel buzzwordy, but what are the critical aspects of so-called smart data?

>> It helps to step back a little bit and set the scene in terms of where I came from and the types of problems I've solved. I'm really an infrastructure and solution architect by trade, and fairly organically over time I built a personal framework focused on three core design principles: simplicity, flexibility and efficiency, whatever it was I was designing. Obviously they need different things depending on the technology area we're working with, but that's a pretty good start, and those are exactly the areas that a smart approach to data will directly address.
Reducing silos comes from simplifying: moving away from complexity of infrastructure, reducing the number of copies of data that we have across the infrastructure, and reducing the number of separate application environments that different areas need. So the smarter we get with data, in my eyes anyway, the further we move away from those traditional legacy approaches.

>> But how does it work? I mean, what's involved in injecting smarts into your data lifecycle?

>> I didn't actually have this quote ready, but genuinely one of my favorite quotes is from the French philosopher and mathematician Blaise Pascal. He said, if I get this right, "I would have written you a shorter letter, but I didn't have the time." I love that quote for lots of reasons.

>> Right.

>> It has direct application to what we're talking about: it's actually really complicated to develop technology capabilities that make things simple and that more directly meet the needs of the business. So you provide self-service capabilities, and I don't just mean self-driving; I mean making data and infrastructure make sense to the business users who are using them.

>> Your job, correct me if I'm wrong, is to kind of put that all together in a solution and then help the customer realize, as we talked about earlier, that business outcome.

>> Yeah, it's sitting on both sides and understanding both sides. The key to our ability to deliver on exactly what you just said is being experts in the capabilities and in new and better ways of doing things, but also having the business understanding to be able to ask the right questions. Another area that I really like is that with these platforms you can do more with less. That's not just about reducing data redundancy; it's about creating application environments and an infrastructure that can service different requirements, able to handle random I/O, without getting too low-level, as well as sequential. What that means is you don't necessarily have to move data from application environment A, do one thing with it, then move it to application environment B and then C in a left-to-right analytics workflow. You keep the data where it is, use it for the different requirements within the infrastructure, and again do more with less. And that's not just about simplicity and efficiency; it significantly reduces the time to value as well.

>> Do you have examples that you can share with us, even if they're anonymized, of customers you've worked with that are maybe a little further down the journey?

>> Well, you mentioned data protection earlier. Another organization, and this is a project that's just nearing completion at the moment, is a huge organization with literally petabytes of data servicing their backup and archive. What they had was not just reams of data; I think I'm right in saying they had five different backup applications, depending on what area of infrastructure they were backing up: virtualization was one thing, a database environment was backed up with something else, and the cloud with something else again. With the consolidated approach that we recommended and worked on with them, they were able to significantly reduce complexity and reduce the amount of time it took them. And this was, again, one of the key problems they had: they'd gone above the threshold of being able to back it all up.
>> Adam, give us the final thoughts. Bring us home in this segment.

>> Final thoughts: this is something we didn't particularly touch on, and I think it's slightly hidden; it isn't spoken about as much as it could be. Traditional approaches to infrastructure, as we've already touched on, can be complicated, and the lack of efficiency impacts users' ability to be agile. What you find with traditional approaches, and we've touched on some of the benefits of the new approaches, is that they're often very prescriptive: designed for a particular purpose, with the infrastructure environment served up to the users in a packaged way, which means they have to use it in whatever way has been dictated. That takes away the self-service aspect from a flexibility standpoint. But these platform approaches, which are the right way to address technology in my eyes, enable the infrastructure to be used flexibly. What we find is that if we put this capability into the hands of the business users and the data users, they start innovating in the way they use that data and the benefits they derive from it. If a platform is too prescriptive, they aren't able to do that. With these new approaches you get all of the metrics we touched on, which is fantastic from a cost standpoint and from an agility standpoint, but what it really means is that the innovators in the business, the ones who truly understand what they're looking to achieve, now have the tools to innovate. I've started to see that with projects we've completed: if you do it in the right way, if you articulate the capability and empower the business users properly, those businesses are in a significantly better position to take advantage and significantly beat their competition.

>> Super, Adam. A really exciting space. We've spent the last 10 years gathering all this data, trying to slog through it and figure it out, and now, with the tools that we have and the automation capabilities, it really is a new era of innovation and insights. So, Adam Worthington, thanks so much for coming on theCUBE and participating in this program.

>> Exciting times, and thank you very much, Dave, for inviting me. A big pleasure.

>> Now we're going to go into the power panel and go deeper into the technologies that enable smart data lifecycles. Stay right there. You're watching theCUBE.

>> Voiceover: Are you interested in test-driving the Io-Tahoe platform? Kickstart the benefits of data automation for your business through the IoLabs program: a flexible, scalable sandbox environment on the cloud of your choice, with setup, service and support provided by Io-Tahoe. Click on the link and connect with a data engineer to learn more and see Io-Tahoe in action.

>> Welcome back, everybody, to the power panel: driving business performance with smart data lifecycles. Lester Waters is here; he's the Chief Technology Officer of Io-Tahoe. He's joined by Patrick Smith, field CTO at Pure Storage, and Ezat Dayeh, Systems Engineering Manager at Cohesity. Gentlemen, good to see you. Thanks so much for coming on this panel.

>> Thank you, Dave.

>> Let's start with Lester. I wonder if each of you could give us a quick overview of your role and the number one problem you're focused on solving for your customers. Lester, please.
>> Yes, I'm Lester Waters, Chief Technology Officer for Io-Tahoe, and really the number one problem we're trying to solve for our customers is to help them understand what they have. Because if they don't understand what they have in terms of their data, they can't manage it, they can't control it, they can't monitor it, they can't ensure compliance. So really, it's finding out all you can about the data you have and building a catalog that can be readily consumed by the entire business.

>> Patrick, field CTO, your title says to me you're talking to customers all the time, so you've got a good perspective on it. Give us your take on things here.

>> Yeah, absolutely. I talk to customers and prospects in lots of different verticals across the region, and as they look at their environments and their data landscape, they're faced with massive growth in the data they're trying to analyze, and demands to get insight faster and to deliver business value faster than they've ever had to in the past.

>> Got it. And Ezat, Cohesity, you're like the new kid on the block. You guys are growing rapidly and created this whole notion of data management, backup and beyond. As Systems Engineering Manager, what are you seeing from customers, and what's the number one problem you're solving?

>> Yeah, sure. The number one problem I see time and again speaking with customers is data fragmentation. Due to things like organic growth and maybe budgetary limitations, infrastructure has grown over time very piecemeal, and it's highly distributed internally. And just to be clear, when I say internally, that could mean it's on multiple platforms or silos within an on-prem infrastructure, but it also extends to the cloud as well.

>> Right. Cloud is cool; everybody wants to be in the cloud, right? So you're right, it creates maybe unintended consequences. So let's start with the business outcome and try to work backwards. People want to get more insights from data; they want a more efficient data lifecycle. Lester, thinking about the North Star for creating data-driven cultures, what is the North Star for customers here?

>> I think the North Star, in a nutshell, is driving value from your data. Without question, we differentiate ourselves these days by even the nuances in our data. Now, underpinning that, there are a lot of things that have to happen to make that work out well. For example, making sure you adequately protect your data: do you have a good storage subsystem? Do you have good backup, with the right recovery point objectives and recovery time objectives? Are you fully compliant, and are you ticking all the boxes? There are a lot of regulations these days with respect to compliance, data retention and data privacy. And are you being efficient with your data? In other words, there's a statistic someone mentioned to me the other day that 53% of all businesses have between three and 15 copies of the same data. So finding and eliminating those is part of the problem you need to chase.
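That find-and-eliminate step usually starts with content fingerprinting. As a minimal sketch of the idea, here is a hash-based duplicate scan in Python; the directory paths and the report format are illustrative assumptions, not a description of any vendor's product.

    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Fingerprint a file by its content, reading in chunks to bound memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def find_duplicates(roots):
        """Group files by content hash; any group larger than one is a redundant copy set."""
        by_hash = defaultdict(list)
        for root in roots:
            for p in Path(root).rglob("*"):
                if p.is_file():
                    by_hash[sha256_of(p)].append(p)
        return {h: ps for h, ps in by_hash.items() if len(ps) > 1}

    if __name__ == "__main__":
        # Illustrative roots; point these at real mounts or shares in practice.
        for digest, copies in find_duplicates(["/data/finance", "/data/archive"]).items():
            print(f"{digest[:12]}  x{len(copies)}: {[str(p) for p in copies]}")

Hashing every byte is the safe baseline; production tools typically shortcut with size and sampled-block comparisons first, for exactly the scale reasons the panel goes on to discuss.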
>> I like to think, and you're right, no doubt, it's business value, and a lot of that comes from reducing the end-to-end cycle times. But is there anything you guys would add to that? Patrick, maybe start with you.

>> Yeah, I think getting value from your data hits on what everyone wants to achieve. But there are a couple of key steps in doing that. First of all is getting access to the data, and that really hits three big problems: firstly, working out what you've got; secondly, once you know what you've got, how to get access to it, because it's all very well knowing you've got some data, but if you can't get access to it, whether for privacy reasons or security reasons, that's a big challenge; and finally, once you've got access to the data, making sure you can process it in a timely manner.

>> For me, it would be that an organization has a really good global view of all of its data, understands the data flows and dependencies within its infrastructure, understands the precise legal and compliance requirements, and has the ability to action changes or initiatives within its environment, forgive the pun, with cloud-like agility. And that's no easy feat, right? That is hard work.

>> Okay, so we've talked about the challenges and some of the objectives, but there are a lot of blockers out there, and I want to understand how you guys are helping remove them. So, Lester, what do you see as some of the big blockers in terms of people really leaning in to this smart data lifecycle?

>> Yeah, silos is probably one of the biggest ones I see in businesses: it's my data, not your data, and lots of compartmentalization. Breaking that down is one of the challenges, and having the right tools to help you do that is only part of the solution; there are obviously a lot of cultural things that need to take place to break down those silos and get people working together. If you can identify where you have redundant data across your enterprise, you might be able to consolidate it.

>> So, Patrick, one of the blockers that I see is legacy infrastructure, technical debt sucking all the budget, too many people having to look after it.

>> As you look at the infrastructure that supports people's data landscapes today, for primarily legacy reasons the infrastructure itself is siloed. You have different technologies with different underlying hardware and different management methodologies, and they're there for good reason, because historically you had to have specific fitness for purpose for different data requirements. That's one of the challenges we tackled head-on at Pure with the FlashBlade technology and the concept of the data hub: a platform that can deliver different characteristics for different workloads, but from a consistent data platform.

>> Now, Ezat, I want to go to you, because your world, to me, goes beyond backup. One of the sayings is that backup is one thing, recovery is everything, but as well, the CFO doesn't want to pay for just protection. One of the things I like about what you guys have done is that you've broadened the perspective to get more value out of what was once seen as an insurance policy.
>> I do see one of the biggest blockers as the fact that the task at hand can be overwhelming for customers. But the key here is to remember that it's not an overnight change; it's not the flick of a switch. It's something that can be tackled in a very piecemeal manner. And absolutely, like you said, reduction in TCO and being able to leverage the data for other purposes are key drivers for this. So it can be resolved, it can be pretty straightforward, and it can be quite painless as well. The same goes for unstructured data, which is very complex to manage. We've all heard the stats from the analysts: data is obviously growing at an extremely rapid rate, but when you look at how it's actually growing, 80% of that growth is in unstructured data and only 20% is in structured data. So these are quick-win areas where customers can realize immediate TCO improvement and increased agility.

>> Let's paint a picture of this, guys, if I can bring up the lifecycle. What you can see here is this cycle, the data lifecycle, and what we want to do is inject intelligence, or smarts, into it. You start with ingestion or creation of data. You're storing it; you've got to put it somewhere, right? You've got to classify it, and you've got to protect it. Then, of course, you want to reduce the copies and make it efficient, and then you want to prepare it so the business can actually consume it. And then you've got compliance, governance and privacy issues. I wonder if we could start with you, Lester. This is the picture of the lifecycle; what role does automation play in injecting smarts into it?

>> Automation is key here, especially from the discover, catalog and classify perspective. I've seen companies that will go and dump all of their database schemas into a spreadsheet so that they can sit down and manually figure out what attribute 37 means for a column name, and that's only the tip of the iceberg. So being able to automatically detect what you have, automatically deduce what's consuming the data upstream and downstream, and being able to understand all the things related to the lifecycle of your data, backup, archive and deletion, is key, and having good tooling for it is very important.

>> So, Patrick, obviously you participate in the store piece of this picture, and I wonder if you could talk more specifically about that. But I'm also interested in how you affect the whole system view, the end-to-end cycle time.

>> Yeah, I think Lester hit the nail on the head in terms of the importance of automation, because the data volumes are just so massive now that you can't effectively manage, understand or catalog your data without it. Once you understand the data and the value of the data, that's where you can work out where the data needs to be at any point in time.
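The discover, catalog and classify automation Lester describes often comes down to scanning column names and sampled values against known patterns. Here is a minimal, illustrative sketch of the idea; the patterns, labels and threshold are assumptions made up for the example, not Io-Tahoe's actual classifier.

    import re

    # Illustrative patterns; a real catalog tool would use many more signals
    # (value sampling at scale, ML models, lineage) than simple regexes.
    PATTERNS = {
        "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
        "phone": re.compile(r"^\+?[\d\s().-]{7,15}$"),
        "credit_card": re.compile(r"^(?:\d[ -]?){13,16}$"),
    }

    def classify_column(name: str, sample_values, threshold: float = 0.8) -> str:
        """Tag a column with a semantic label when most sampled values match a pattern."""
        if "email" in name.lower():
            return "email"  # the column name itself is a strong hint
        values = [v for v in sample_values if v is not None]
        for label, pattern in PATTERNS.items():
            if values and sum(bool(pattern.match(str(v))) for v in values) / len(values) >= threshold:
                return label
        return "unclassified"

    # Example: the mysterious "attribute 37" from the panel discussion.
    print(classify_column("attr_37", ["alice@example.com", "bob@example.org", None]))
    # -> email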
>> Right. So Pure and Cohesity obviously partner to do that, and of course, Ezat, you guys are part of the protect, and certainly part of the retain, but you also provide data management capabilities and analytics. I wonder if you could add some color there.

>> Yeah, absolutely. Like you said, we focus pretty heavily on data protection as just one of our areas. With legacy infrastructure, that protection estate is just sitting there consuming power, space and cooling, and it's pretty inefficient. If I have a modern data platform such as the Cohesity data platform, I can actually do a lot of analytics on it through applications; we have a marketplace for apps.

>> I wonder if we could talk about metadata. It's increasingly important: metadata is data about the data. Lester, maybe explain why it's so important and what role it plays in creating a smart data lifecycle.

>> A lot of people think it's just about the data itself, but there are a lot of extended characteristics about your data. Imagine if, for my data lifecycle, I can communicate with the backup system from Cohesity and find out when that data was last backed up, or where it's backed up to. I can exchange data with Pure Storage and find out what tier it's on: is the data at the right tier, commensurate with its usage level? Being able to share that metadata across systems is the direction we're going in. Right now we're at the stage of just identifying the metadata and trying to bring it together and catalog it. The next stage will be: using the APIs we have between our systems, can we communicate, share that data and build good solutions for customers to use?
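Stitching that cross-system metadata together is, in practice, a few API calls and a merge. The sketch below shows the shape of the idea against two hypothetical endpoints; the URLs and field names are invented for illustration and are not the actual Cohesity or Pure Storage APIs.

    import requests

    # Hypothetical endpoints for illustration only; consult each vendor's
    # actual REST API documentation for a real integration.
    BACKUP_API = "https://backup.example.com/api/v1"
    STORAGE_API = "https://storage.example.com/api/v1"

    def enrich_catalog_entry(dataset_id: str, token: str) -> dict:
        """Merge backup and tiering metadata into a single catalog record."""
        headers = {"Authorization": f"Bearer {token}"}
        backup = requests.get(f"{BACKUP_API}/datasets/{dataset_id}/last-backup",
                              headers=headers, timeout=10).json()
        tier = requests.get(f"{STORAGE_API}/volumes/{dataset_id}/tier",
                            headers=headers, timeout=10).json()
        return {
            "dataset": dataset_id,
            "last_backup_at": backup.get("completed_at"),
            "backup_target": backup.get("target"),
            "storage_tier": tier.get("tier"),  # e.g. "flash" or "archive"
            "tier_matches_usage": tier.get("tier") == tier.get("recommended_tier"),
        }

Once records like this live in one catalog, a question such as "which datasets on the performance tier haven't been backed up this week?" becomes a simple query rather than a cross-team investigation.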
>> That's a huge point you just made. Ten years ago, automating classification was the big problem, and machine intelligence is obviously attacking that. But to your point, as machines start communicating with each other, and it's cloud to cloud, there's all kinds of new metadata being created. I often joke that someday there's going to be more metadata than data. So that brings us to cloud, and Ezat, I'd like to start with you.

>> You know, I do think having the cloud is a great thing. It has got its role to play, and you can have many different permutations and iterations of how you use it. As I may have mentioned previously, I've seen customers go into the cloud very, very quickly, and recently they're actually starting to remove workloads from the cloud. The reason this happens is that cloud has its role to play, but it's not right for absolutely everything, especially in its current form. A good analogy I like to use, and this may sound a little bit cliche, is that when you compare clouds versus on-premises data centers, you can use the analogy of houses and hotels. To give you an idea: when we look at hotels, that's the equivalent of a cloud, right? I can get everything I need from there: my food, my water, my outdoor facilities, and if I need to accommodate more people, I can rent more rooms. I don't have to maintain the hotel; it's all done for me. When you look at houses, the equivalent of on-premises infrastructure, I pretty much have to do everything myself: I have to purchase the house, I have to maintain it, I have to buy my own food and water, and I have to make improvements myself. But then why do we all live in houses and not in hotels? The simple answer I can think of is that it's cheaper, right? It's cheaper to do it myself. But that's not to say hotels haven't got their role to play. For example, if I've got loads of visitors coming over for the weekend, I'm not going to build an extension to my house just for them; I'll burst into my hotel, into the cloud, and use it for things like that. So what I'm really saying is that the cloud is great for many things, but it can work out costlier for certain applications, while for others it's a perfect fit.

>> It's an interesting analogy; I hadn't thought of that before, but you're right, because I was going to say that part of it is you want the cloud experience everywhere. But you don't always want the cloud experience; especially, you know, when you're with your family, you want a certain privacy. I've not heard that before, Ezat, so that's a new perspective, thank you. But Patrick, I do want to come back to that cloud experience, because in fact that's what's happening in a lot of cases: organizations are extending the cloud properties of automation on-prem.

>> Yeah, I thought Ezat brought up a really interesting point and a great analogy for the use of the public cloud, and it really reinforces the importance of the hybrid and multi-cloud environment, because it gives you the flexibility to choose the optimal environment for your business workloads. That's what it's all about: the flexibility to change which environment you're running in, whether from one month to the next or from one year to the next, because workloads change, and the characteristics available in the cloud change. The hybrid cloud is something we've lived with ourselves at Pure: our Pure1 management technology actually sits in a hybrid cloud environment. We started off entirely cloud native, but now we use the public cloud for compute, and we use our own technology at the end of a high-performance network link to support our data platform. So we get the best of both worlds, and I think that's where a lot of our customers are trying to get to.

>> All right, I want to come back to that in a moment, but before we do, Lester, I wonder if we could talk a little bit about compliance, governance and privacy. The Brits on this panel are still in the EU for now, but you're looking at new rules and regulations going beyond GDPR. Where do privacy, governance and compliance fit in the data lifecycle? And Ezat, I want your thoughts on this as well.

>> Yeah, this is a very important point, because the landscape for compliance around data privacy and data retention is changing very rapidly, and being able to keep up with those changing regulations in an automated fashion is the only way you're going to manage it. I think there's even some sort of ruling coming out today or tomorrow with a change to GDPR. So these are all very key points, and being able to codify those rules into software, whether that's Io-Tahoe or your storage system or Cohesity, to help you stay compliant, is crucial.

>> Yeah. Ezat, is there anything you can add there? I mean, this really is your wheelhouse.

>> Yeah, absolutely. I think anybody watching this has probably gotten the message that fewer silos is better, and that absolutely applies to data in the cloud as well. By aiming to consolidate onto fewer platforms, customers can realize much better control over their data.
And the natural effect of this is that it makes meeting compliance and governance a lot easier. When it's consolidated, you can start to confidently understand who's accessing your data and how frequently they're accessing it. You can also do things like detecting anomalous file access activity and quickly identifying potential threats.
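Anomalous-access detection of the kind Ezat mentions can start as simply as flagging users whose latest daily access count sits far outside their own baseline. A minimal sketch, with the z-score threshold and input shape as illustrative assumptions rather than how any product implements it:

    from statistics import mean, stdev

    def flag_anomalies(daily_counts, z_threshold=3.0):
        """Flag users whose latest daily access count is a z-score outlier
        against their own history. daily_counts maps user -> counts, oldest first."""
        flagged = []
        for user, counts in daily_counts.items():
            history, latest = counts[:-1], counts[-1]
            if len(history) < 7:  # not enough baseline to judge
                continue
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (latest - mu) / sigma > z_threshold:
                flagged.append((user, latest, round(mu, 1)))
        return flagged

    # Example: one account suddenly reads fifty times its usual number of files.
    access = {
        "alice": [20, 22, 19, 21, 20, 23, 20, 21],
        "mallory": [10, 12, 11, 9, 10, 11, 10, 500],
    }
    print(flag_anomalies(access))  # -> [('mallory', 500, 10.4)]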
>> Okay. Patrick, you talked earlier about storage optimization. We talked to Adam Worthington about the business case: the numerator, which is the business value, and the denominator, which is the cost. What's unique about Pure in this regard?

>> Yeah, and there are multiple dimensions to that. Firstly, if you look at the difference from legacy storage platforms that used to take up racks or aisles of space in the data center, the flash technology that underpins FlashBlade effectively swaps racks for rack units, and that has a big impact on data center footprint and the environmentals associated with the data center. If you look at extending out the storage efficiencies and the benefits they bring, the performance has a direct effect too, whether that's the simplicity of the platform, making it easy and efficient to manage, or the efficiency your data scientists get from using the outcomes of the platform. If you look at some of our customers in the financial space, their time to results has improved by 10 or 20x by switching to our technology from legacy technologies for their analytics platforms.

>> Guys, we've been running CUBE interviews in our studios remotely for the last 120 days, and this is probably the first interview I've done where I haven't started off talking about COVID. Lester, I wonder if you could talk about the smart data lifecycle and how it fits into this isolation economy, and hopefully what will soon be a post-isolation economy.

>> Yeah, COVID has dramatically accelerated the data economy. First and foremost, we've all learned to work at home. We've all had that experience where people would have been happy to work at home just a couple of days a week, and here we are working five days. That's had a knock-on impact on infrastructure, to be able to support it. But going further than that, the data economy is all about how a business can leverage its data to compete in this new world order we're now in. COVID has really been a forcing function; it's probably one of the few good things to come out of it, that we've been forced to adapt. It's been an interesting journey, and it continues to be so.

>> Like Lester said, we're seeing huge impact here. Working from home has pretty much become the norm now; companies have been forced into basically making it work. Online retail has accelerated dramatically as well, and so have unified communications and video conferencing. So really, the point here is that, yes, absolutely, we've compressed into the past four months or so what otherwise would have taken maybe five years, maybe ten.

>> We've got to wrap, but Lester, let me ask you to paint a picture of the journey, the maturity model that people have to take. If they want to get into it, where do they start, and where are they going? Give us that view.

>> I think first is knowing what you have. If you don't know what you have, you can't manage it, you can't control it, you can't secure it, and you can't ensure it's compliant. So that's first and foremost. The second is really ensuring that you're compliant: once you know what you have, are you securing it? Are you following the applicable regulations? Are you able to evidence that? How are you storing your data: are you archiving it, and are you storing it effectively and efficiently? Nirvana, from my perspective, is getting to the point where you've consolidated your data, you've broken down the silos, and you have a virtually self-service environment through which the business can consume and build upon its data. And really, at the end of the day, as we said at the beginning, it's all about driving value out of your data, and automation is key to this journey.

>> That's awesome, and what you just described is sort of a winning data culture. Lester, Patrick, Ezat, thanks so much for participating in this power panel.

>> Thank you, David.

>> All right, so that's a great overview of the steps in the data lifecycle and how to inject smarts into the processes, really to drive business outcomes. Now it's your turn: hop into the CrowdChat. Log in with Twitter or LinkedIn or Facebook, ask questions, answer questions, and engage with the community. Let's CrowdChat.

Published Date : Jul 31 2020

Io-Tahoe Smart Data Lifecycle CrowdChat | Digital


 

(upbeat music) >> Voiceover: From around the globe, it's theCUBE with digital coverage of Data Automated. An event series brought to you by Io-Tahoe. >> Welcome everyone to the second episode in our Data Automated series made possible with support from Io-Tahoe. Today, we're going to drill into the data lifecycle. Meaning the sequence of stages that data travels through from creation to consumption to archive. The problem as we discussed in our last episode is that data pipelines are complicated, they're cumbersome, they're disjointed and they involve highly manual processes. A smart data lifecycle uses automation and metadata to improve agility, performance, data quality and governance. And ultimately, reduce costs and time to outcomes. Now, in today's session we'll define the data lifecycle in detail and provide perspectives on what makes a data lifecycle smart? And importantly, how to build smarts into your processes. In a moment we'll be back with Adam Worthington from Ethos to kick things off. And then, we'll go into an expert power panel to dig into the tech behind smart data lifecyles. And, then we'll hop into the crowd chat and give you a chance to ask questions. So, stay right there, you're watching theCUBE. (upbeat music) >> Voiceover: Innovation. Impact. Influence. Welcome to theCUBE. Disruptors. Developers. And, practitioners. Learn from the voices of leaders, who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on theCUBE. Your global leader in high tech digital coverage. >> Okay, we're back with Adam Worthington. Adam, good to see you, how are things across the pond? >> Good thank you, I'm sure our weather's a little bit worse than yours is over the other side, but good. >> Hey, so let's set it up, tell us about yourself, what your role is as CTO and--- >> Yeah, Adam Worthington as you said, CTO and co-founder of Ethos. But, we're a pretty young company ourselves, so we're in our sixth year. And, we specialize in emerging disruptive technology. So, within the infrastructure data center kind of cloud space. And, my role is a technical lead, so I, it's kind of my job to be an expert in all of the technologies that we work with. Which can be a bit of a challenge if you have a huge portfolio. One of the reasons we got to deliberately focus on. And also, kind of pieces of technical validation and evaluation of new technologies. >> So, you guys are really technology experts, data experts, and probably also expert in process and delivering customer outcomes, right? >> That's a great word there Dave, outcomes. I mean, that's a lot of what I like to speak to customers about. >> Let's talk about smart data you know, when you throw out terms like this it kind of can feel buzz wordy but what are the critical aspects of so-called smart data? >> Cool, well typically I had to step back a little bit and set the scene a little bit more in terms of kind of where I came from. So, and the types of problems I've sorted out. So, I'm really an infrastructure or solution architect by trade. And, what I kind of, relatively organically, but over time my personal framework and approach. I focused on three core design principles. So, simplicity, flexibility and efficiency. So, whatever it was I was designing and obviously they need different things depending on what the technology area is that we're working with. So, that's for me a pretty good step. 
So, they're the kind of areas that a smart approach in data will directly address both reducing silos. So, that comes from simplifying. So, moving away from complexity of infrastructure. Reducing the amount of copies of data that we have across the infrastructure. And, reducing the amount of application environment for the need for different areas. So, the smarter we get with data it's in my eyes anyway, the further we move away from those traditional legacy. >> But, how does it work? I mean, how, in other words, what's involved in injecting smarts into your data lifecycle? >> I think one of my, well actually I didn't have this quote ready, but genuinely one of my favorite quotes is from the French philosopher and mathematician, Blaise Pascal and he says, if I get this right, "I'd have written you a shorter letter, but I didn't have the time." So, there's real, I love that quote for lots of reasons. >> Dave: Alright. >> That's direct applications in terms of what we're talking about. In terms of, it's actually really complicated to develop a technology capability to make things simple. Be more directly meeting the needs of the business through tech. So, you provide self-service capability. And, I don't just mean self-driving, I mean making data and infrastructure make sense to the business users that are using it. >> Your job, correct me if I'm wrong, is to kind of put that all together in a solution. And then, help the customer you know, realize what we talked about earlier that business out. >> Yeah, and that's, it's sitting at both sides and understanding both sides. So, kind of key to us in our abilities to be able to deliver on exactly what you've just said, is being experts in the capabilities and new and better ways of doing things. But also, having the kind of, better business understanding to be able to ask the right questions to identify how can you better approach this 'cause it helps solve these issues. But, another area that I really like is the, with the platforms you can do more with less. And, that's not just about reducing data redundancy, that's about creating application environments that can service, an infrastructure to service different requirements that are able to do the random IO thing without getting too kind of low level tech. As well as the sequential. So, what that means is, that you don't necessarily have to move data from application environment A, do one thing with it, collate it and then move it to the application environment B, to application environment C, in terms of an analytics kind of left to right workload, you keep your data where it is, use it for different requirements within the infrastructure and again, do more with less. And, what that does, it's not just about simplicity and efficiency, it significantly reduces the times of value that that faces, as well. >> Do you have examples that you can share with us, even if they're anonymized of customers that you've worked with, that are maybe a little further down on the journey. Or, maybe not and--- >> Looking at the, you mentioned data protection earlier. So, another organization this is a project which is just coming nearing completion at the moment. Huge organization, that literally petabytes of data that was servicing their backup and archive. And, what they had is not just this reams of data. They had, I think I'm right in saying, five different backup applications that they had depending on the, what area of infrastructure they were backing up. 
So, whether it was virtualization, that was different to if they were backing up, different if they were backing up another data base environment they were using something else in the cloud. So, a consolidated approach that we recommended to work with them on. They were able to significantly reduce complexity and reduce the amount of time that it took them. So, what they were able to achieve and this was again, one of the key departments they had. They'd gone above the threshold of being able to backup all of them. >> Adam, give us the final thoughts, bring us home in this segment. >> Well, the final thoughts, so this is something, yeah we didn't particularly touch on. But, I think it's kind of slightly hidden, it isn't spoken about as much as I think it could be. Is the traditional approaches to infrastructure. We've already touched on that they can be complicated and there's a lack of efficiency. It impacts a user's ability to be agile. But, what you find with traditional approaches and we've already touched on some of the kind of benefits to new approaches there, is that they're often very prescriptive. They're designed for a particular firm. The infrastructure environment, the way that it's served up to the users in a kind of a packaged kind of way, means that they need to use it in that, whatever way it's been dictated. So, that kind of self-service aspect, as it comes in from a flexibility standpoint. But, these platforms and these platform approaches is the right way to address technology in my eyes. Enables the infrastructure to be used flexibly. So, the business users and the data users, what we find is that if we put in this capability into their hands. They start innovating the way that they use that data. And, the way that they bring benefits. And, if a platform is too prescriptive and they aren't able to do that, then what you're doing with these new approaches is get all of the metrics that we've touched on. It's fantastic from a cost standpoint, from an agility standpoint. But, what it means is that the innovators in the business, the ones that really understand what they're looking to achieve, they now have the tools to innovate with that. And, I think, and I've started to see that with projects that we've completed, if you do it in the right way, if you articulate the capability and you empower the business users in the right way. Then, they're in a significantly better position, these businesses to take advantages and really sort of match and significantly beat off their competition environment spaces. >> Super Adam, I mean a really exciting space. I mean we spent the last 10 years gathering all this data. You know, trying to slog through it and figure it out and now, with the tools that we have and the automation capabilities, it really is a new era of innovation and insight. So, Adam Worthington, thanks so much for coming in theCUBE and participating in this program. >> Yeah, exciting times and thank you very much Dave for inviting me, and yeah big pleasure. >> Now, we're going to go into the power panel and go deeper into the technologies that enable smart data lifecyles. And, stay right there, you're watching theCUBE. (light music) >> Voiceover: Are you interested in test-driving the Io-Tahoe platform? Kickstart the benefits of Data Automation for your business through the IoLabs program. A flexible, scalable, sandbox environment on the cloud of your choice. With setup, service and support provided by Io-Tahoe. 
Click on the link and connect with a data engineer to learn more and see Io-Tahoe in action. >> Welcome back everybody to the power panel, driving business performance with smart data lifecyles. Lester Waters is here, he's the Chief Technology Officer from Io-Tahoe. He's joined by Patrick Smith, who is field CTO from Pure Storage. And, Ezat Dayeh who is Assistant Engineering Manager at Cohesity. Gentlemen, good to see you, thanks so much for coming on this panel. >> Thank you, Dave. >> Yes. >> Thank you, Dave. >> Let's start with Lester, I wonder if each of you could just give us a quick overview of your role and what's the number one problem that you're focused on solving for your customers? Let's start with Lester, please. >> Ah yes, I'm Lester Waters, Chief Technology Officer for Io-Tahoe. And really, the number one problem that we are trying to solve for our customers is to help them understand what they have. 'Cause if they don't understand what they have in terms of their data, they can't manage it, they can't control it, they can't monitor it, they can't ensure compliance. So, really that's finding all that you can about your data that you have and building a catalog that can be readily consumed by the entire business is what we do. >> Patrick, field CTO in your title, that says to me you're talking to customers all the time so you've got a good perspective on it. Give us you know, your take on things here. >> Yeah absolutely, so my patch is in the air and talk to customers and prospects in lots of different verticals across the region. And, as they look at their environments and their data landscape, they're faced with massive growth in the data that they're trying to analyze. And, demands to be able to get inside are faster. And, to deliver business value faster than they've ever had to do in the past, so. >> Got it and then Ezat at Cohesity, you're like the new kid on the block. You guys are really growing rapidly. You created this whole notion of data management, backup and beyond, but from Assistant Engineering Manager what are you seeing from customers, your role and the number one problem that you're solving? >> Yeah sure, so the number one problem I see you know, time and again speaking with customers it's all around data fragmentation. So, due to things like organic growth you know, even maybe budgetary limitations, infrastructure has grown you know, over time, very piecemeal. And, it's highly distributed internally. And, just to be clear you know, when I say internally you know, that could be that it's on multiple platforms or silos within an on-prem infrastructure. But, that it also does extend to the cloud, as well. >> Right hey, cloud is cool, everybody wants to be in the cloud, right? So, you're right it creates maybe unattended consequences. So, let's start with the business outcome and kind of try to work backwards. I mean people you know, they want to get more insights from data, they want to have a more efficient data lifecyle. But, so Lester let me start with you, in thinking about like, the North Star, creating data driven cultures you know, what is the North Star for customers here? >> I think the North Star in a nutshell is driving value from your data. Without question, I mean we differentiate ourselves these days by even the nuances in our data. Now, underpinning that there's a lot of things that have to happen to make that work out well. You know for example, making sure you adequately protect your data. You know, do you have a good storage system? 
Do you have a good backup and recovery point objectives, recovering time objectives? Do you, are you fully compliant? Are you ensuring that you're ticking all the boxes? There's a lot of regulations these days in terms, with respect to compliance, data retention, data privacy and so fourth. Are you ticking those boxes? Are you being efficient with your data? You know, in other words I think there's a statistic that someone mentioned to me the other day that 53% of all businesses have between three and 15 copies of the same data. So you know, finding and eliminating those is part of the problems you need to chase. >> I like to think of you know, you're right. Lester, no doubt, business value and a lot of that comes from reducing the end to end cycle times. But, anything that you guys would add to that, Patrick and Ezat, maybe start with Patrick. >> Yeah, I think getting value from data really hits on, it hits on what everyone wants to achieve. But, I think there are a couple of key steps in doing that. First of all is getting access to the data. And that's, that really hits three big problems. Firstly, working out what you've got. Secondly, after working out what you've got, how to get access to it. Because, it's all very well knowing that you've got some data but if you can't get access to it. Either, because of privacy reasons, security reasons. Then, that's a big challenge. And then finally, once you've got access to the data, making sure that you can process that data in a timely manner. >> For me you know, it would be that an organization has got a really good global view of all of its data. It understands the data flow and dependencies within their infrastructure. Understands the precise legal and compliance requirements. And, has the ability to action changes or initiatives within their environment. Forgive the pun, but with a cloud like agility. You know, and that's no easy feat, right? That is hard work. >> Okay, so we've talked about the challenges and some of the objectives, but there's a lot of blockers out there and I want to understand how you guys are helping remove them? So, Lester what do you see as some of the big blockers in terms of people really leaning in to this smart data lifecycle. >> Yeah silos, is probably one of the biggest one I see in businesses. Yes, it's my data not your data. Lots of compartmentalization. And, breaking that down is one of the challenges. And, having the right tools to help you do that is only part of the solution. There's obviously a lot of cultural things that need to take place to break down those silos and work together. If you can identify where you have redundant data across your enterprise, you might be able to consolidate those. >> Yeah so, over to Patrick, so you know, one of the blockers that I see is legacy infrastructure, technical debt sucking all the budget. You got you know, too many people having to look after. >> As you look at the infrastructure that supports peoples data landscapes today. For primarily legacy reasons, the infrastructure itself is siloed. So, you have different technologies with different underlying hardware, different management methodologies that are there for good reason. Because, historically you had to have specific fitness for purpose for different data requirements. >> Dave: Ah-hm. >> And, that's one of the challenges that we tackled head on at Pure. With the flash plate technology and the concept of the data hub. A platform that can deliver in different characteristics for the different workloads. 
But, from a consistent data platform. >> Now, Ezat I want to go to you because you know, in the world, in your world which to me goes beyond backup and one of the challenges is you know, they say backup is one thing, recovery is everything. But as well, the CFO doesn't want to pay for just protection. Now, one of the things that I like about what you guys have done is you've broadened the perspective to get more value out of your what was once seen as an insurance policy. >> I do see one of the biggest blockers as the fact that the task at hand can you know, be overwhelming for customers. But, the key here is to remember that it's not an overnight change, it's not you know, the flick of the switch. It's something that can be tackled in a very piecemeal manner. And, absolutely like you've said you know, reduction in TCO and being able to leverage the data for other purposes is a key driver for this. So you know, this can be resolved. It can be very you know, pretty straightforward. It can be quite painless, as well. Same goes for unstructured data, which is very complex to manage. And you know, we've all heard the stats from the analysts, you know data obviously is growing at an extremely rapid rate. But, actually when you look at that you know, how is it actually growing? 80% of that growth is actually in unstructured data and only 20% of that growth is in structured data. So you know, these are quick win areas that the customers can realize immediate TCO improvement and increased agility, as well. >> Let's paint a picture of this guys, if I can bring up the lifecyle. You know what you can see here is you've got this cycle, the data lifecycle and what we're wanting to do is inject intelligence or smarts into this lifecyle. So, you can see you start with ingestion or creation of data. You're storing it, you've got to put it somewhere, right? You've got to classify it, you've got to protect it. And then, of course you want to you know, reduce the copies, make it you know, efficient. And then, you want to prepare it so that businesses can actually consume it and then you've got compliance and governance and privacy issues. And, I wonder if we could start with you Lester, this is you know, the picture of the lifecycle. What role does automation play in terms of injecting smarts into the lifecycle? >> Automation is key here, you know. Especially from the discover, catalog and classify perspective. I've seen companies where they go and we'll take and dump all of their data base schemes into a spreadsheet. So, that they can sit down and manually figure out what attribute 37 means for a column name. And, that's only the tip of the iceberg. So, being able to automatically detect what you have, automatically deduce where, what's consuming the data, you know upstream and downstream, being able to understand all of the things related to the lifecycle of your data backup, archive, deletion, it is key. And so, having good toolage areas is very important. >> So Patrick, obviously you participate in the store piece of this picture. So, I wondered if you could just talk more specifically about that, but I'm also interested in how you affect the whole system view, the end-to-end cycle time. >> Yeah, I think Lester kind of hit the nail on the head in terms of the importance of automation. Because, the data volumes are just so massive now that you can't effectively manage or understand or catalog your data without automation. 
>> Right, so Pure and Cohesity obviously partner to do that. And of course, Ezat, you guys are part of the protect, and certainly part of the retain, but you also provide data management capabilities and analytics. I wonder if you could add some color there? >> Yeah, absolutely. Like you said, we focus pretty heavily on data protection as just one of our areas. And legacy infrastructure is just sitting there consuming power, space and cooling, and it's pretty inefficient. Automating that process is a key part of this. With a modern-day platform such as the Cohesity data platform, I can actually run a lot of analytics on that data through applications; we have a marketplace for apps. >> I wonder if we could talk about metadata. It's increasingly important; metadata is data about the data. Lester, maybe explain why it's so important and what role it plays in creating a smart data lifecycle. >> A lot of people think it's just about the data itself, but there are a lot of extended characteristics of your data. Imagine if, for my data lifecycle, I can communicate with the backup system from Cohesity and find out when that data was last backed up, or where it's backed up to. I can exchange metadata with Pure Storage and find out what tier it's on: is the data on a tier commensurate with its use level? If not, I can point that out. It's being able to share that metadata across systems; I think that's the direction we're going in. Right now we're at the stage of identifying the metadata and trying to bring it together and catalog it. The next stage will be: using the APIs we have between our systems, can we communicate, share that metadata, and build good solutions for customers on top of it?
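The cross-system metadata exchange Lester describes might look something like the sketch below: one catalog record enriched from two operational systems. The REST endpoints and fields are hypothetical stand-ins, not Cohesity's or Pure Storage's actual APIs.

    import requests  # third-party HTTP client: pip install requests

    # Hypothetical endpoints for a backup system and a storage array.
    BACKUP_API  = "https://backup.example.com/v1/datasets/{name}/last-backup"
    STORAGE_API = "https://storage.example.com/v1/volumes/{name}/tier"

    def enrich_catalog_entry(name: str, token: str) -> dict:
        """Pull operational metadata from two systems into one catalog record."""
        headers = {"Authorization": f"Bearer {token}"}
        backup = requests.get(BACKUP_API.format(name=name), headers=headers, timeout=10).json()
        tier = requests.get(STORAGE_API.format(name=name), headers=headers, timeout=10).json()
        return {
            "dataset": name,
            "last_backup_at": backup.get("completed_at"),
            "backup_target": backup.get("target"),
            "storage_tier": tier.get("tier"),  # is hot data on a hot tier?
        }

Once records like this live in one catalog, questions such as "is this dataset backed up, and is it on the right tier?" become simple queries rather than manual investigations.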
>> I think that's a huge point you just made. Ten years ago, automating classification was the big problem, and with machine intelligence we're obviously attacking that. But as machines start communicating with each other, cloud to cloud, there's all kinds of new metadata being created. I often joke that some day there's going to be more metadata than data. So that brings us to cloud, and Ezat, I'd like to start with you. >> I do think the cloud is a great thing. It has its role to play, and you can have many different permutations and iterations of how you use it. As I may have mentioned previously, I've seen customers go into the cloud very, very quickly, and recently some have started to remove workloads from the cloud. The reason this happens is that the cloud has its role to play, but it's not right for absolutely everything, especially in its current form. A good analogy I like to use, and this may sound a little bit cliched, is houses and hotels. A hotel is the equivalent of the cloud: I can get everything I need from there, my food, my water, my outdoor facilities, and if I need to accommodate more people, I can rent more rooms. I don't have to maintain the hotel; it's all done for me. A house is the equivalent of on-premises infrastructure: I pretty much have to do everything myself. I have to purchase the house, maintain it, buy my own food and water, and make improvements myself. But then why do we all live in houses, not in hotels? The simple answer I can only think of is that it's cheaper. It's cheaper to do it myself, but that's not to say hotels haven't got their role to play. For example, if I've got loads of visitors coming over for the weekend, I'm not going to build an extension to my house just for them; I'll burst into the hotel, into the cloud, and use it for things like that. So what I'm really saying is the cloud is great for many things, but it can work out costlier for certain applications, while others are a perfect fit. >> That's an interesting analogy; I hadn't thought of that before. But you're right, because I was going to say part of it is you want the cloud experience everywhere. But you don't always want the cloud experience; especially when you're with your family, you want certain privacy. I've not heard that before, Ezat, so that's a new perspective, thank you. But Patrick, I do want to come back to that cloud experience, because in fact that's what's happening in a lot of cases: organizations are extending the cloud properties of automation on-prem. >> Yeah, I thought Ezat brought up a really interesting point and a great analogy for the use of the public cloud, and it really reinforces the importance of hybrid and multicloud environments, because they give you the flexibility to choose the optimal environment for your business workloads. That's what it's all about, along with the flexibility to change which environment you're running in, from one month to the next or from one year to the next, because workloads change and the characteristics available in the cloud change. The hybrid cloud is something we've lived with ourselves at Pure: our Pure1 management technology actually sits in a hybrid cloud environment. We started off entirely cloud-native, but now we use the public cloud for compute and our own technology at the end of a high-performance network link to support our data platform, so we're getting the best of both worlds. I think that's where a lot of our customers are trying to get to. >> All right, I want to come back to that in a moment, but before we do, Lester, I wonder if we could talk a little bit about compliance, governance and privacy. The Brits on this panel are still in the EU for now, but the EU is looking at new rules and regulations going beyond GDPR. Where do privacy, governance and compliance fit into the data lifecycle? And Ezat, I want your thoughts on this as well. >> Yeah, this is a very important point, because the landscape for compliance around data privacy and data retention is changing very rapidly, and being able to keep up with those changing regulations in an automated fashion is the only way you're going to manage it. I think there's even some sort of ruling coming out today or tomorrow with a change to GDPR. These are all very key points, and being able to codify those rules into software, whether it's Io-Tahoe or your storage system or Cohesity, to help you be compliant is crucial.
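To ground the idea of codifying compliance rules into software, here is a minimal policy-as-code sketch: one retention rule evaluated against catalog metadata. The rule, threshold and record fields are invented for illustration, not drawn from GDPR text or any product.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class DatasetMeta:
        name: str
        contains_pii: bool
        last_accessed: date
        region: str

    # One codified rule: EU datasets holding PII and untouched for three
    # years get flagged for review. The threshold is illustrative.
    RETENTION = timedelta(days=3 * 365)

    def retention_violations(catalog: list[DatasetMeta], today: date) -> list[str]:
        return [d.name for d in catalog
                if d.contains_pii and d.region == "EU"
                and today - d.last_accessed > RETENTION]

    catalog = [
        DatasetMeta("crm_contacts_2014", True, date(2016, 5, 1), "EU"),
        DatasetMeta("web_logs", False, date(2020, 6, 1), "EU"),
    ]
    print(retention_violations(catalog, date(2020, 7, 29)))  # -> ['crm_contacts_2014']

Because the rule is code, a regulatory change becomes a small code change and a re-run, rather than a months-long manual audit.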
>> Yeah, Ezat, anything you can add there? I mean, this really is your wheelhouse. >> Yeah, absolutely. I think anybody who's watching this has probably gotten the message that fewer silos is better, and that absolutely applies to data in the cloud as well. By aiming to consolidate onto fewer platforms, customers can realize much better control over their data, and the natural effect is that meeting compliance and governance requirements becomes a lot easier. When it's consolidated, you can start to confidently understand who's accessing your data and how frequently they're accessing it. You can also do things like detecting anomalous file-access activity and quickly identifying potential threats. >> Okay, Patrick, we were talking earlier about storage optimization. We talked to Adam Worthington about the business case: you've got the numerator, which is the business value, and then a denominator, which is the cost. What's unique about Pure in this regard? >> Yeah, and there are multiple dimensions to that. Firstly, if you look at legacy storage platforms, they used to take up racks or aisles of space in a data center. With the flash technology that underpins FlashBlade, we effectively switch out racks for rack units, and that has a big impact on data-center footprint and the environmentals associated with a data center. If you look at extending out the storage efficiencies and the benefits they bring, the performance has a direct effect on staff: whether that's the simplicity of the platform, making it easy and efficient to manage, or the efficiency your data scientists get from using the outcomes of the platform. Some of our customers in the financial space have seen their time to results improve by 10 or 20x by switching to our technology from legacy technologies for their analytics platforms. >> So guys, we've been running CUBE interviews from our studios remotely for the last 120 days, and this is probably the first interview I've done where I haven't started off talking about COVID. Lester, I wonder if you could talk about the smart data lifecycle and how it fits into this isolation economy, and hopefully what will soon be a post-isolation economy? >> Yeah, COVID has dramatically accelerated the data economy. First and foremost, we've all learned to work at home. We've all had that experience where people would hem and haw about being able to work at home just a couple of days a week, and here we are working five days a week. That's had a knock-on impact on the infrastructure needed to support it. But going further than that, the data economy is all about how a business can leverage its data to compete in this new world order we're now in. COVID has really been a forcing function; it's probably one of the few good things to come out of it that we've been forced to adapt. It's been an interesting journey, and it continues to be so. >> Like Lester said, we're seeing huge impact here. Working from home has pretty much become the norm now.
Companies have been forced into making it work. Online retail has accelerated dramatically as well, along with unified communications and video conferencing. So really the point here is that, yes, absolutely, we've compressed into the past four months what would probably have taken five years, maybe ten years or so. >> We've got to wrap, but Lester, let me ask you to paint a picture of the journey, the maturity model that people have to take. If they want to get into it, where do they start, and where are they going? Give us that view. >> Yeah, I think first is knowing what you have. If you don't know what you have, you can't manage it, you can't control it, you can't secure it, and you can't ensure it's compliant. So that's first and foremost. The second is compliance: once you know what you have, are you securing it? Are you following the regulations, and are you able to evidence that? Then, how are you storing your data? Are you archiving it? Are you storing it effectively and efficiently? Nirvana, from my perspective, is really getting to a point where you've consolidated your data, you've broken down the silos, and you have a virtually self-service environment by which the business can consume and build upon the data. And really, at the end of the day, as we said at the beginning, it's all about driving value out of your data, and automation is key to this journey. >> That's awesome, and you've just described a winning data culture. Lester, Patrick, Ezat, thanks so much for participating in this power panel. >> Thank you, David. >> Thank you. >> All right, so that's a great overview of the steps in the data lifecycle and how to inject smarts into the processes, really to drive business outcomes. Now it's your turn: hop into the crowdchat, log in with Twitter or LinkedIn or Facebook, ask questions, answer questions, and engage with the community. Let's crowdchat! (bright music)

Published Date : Jul 29 2020


Enterprise Data Automation | Crowdchat


 

>>From around the globe, it's theCUBE, with digital coverage of Enterprise Data Automation, an event series brought to you by Io-Tahoe. Welcome, everybody, to Enterprise Data Automation, a co-created digital program on theCUBE with support from Io-Tahoe. My name is Dave Vellante, and today we're using the hashtag #DataAutomated. Organizations really struggle to get more value out of their data; time to data-driven insights that drive cost savings or new revenue opportunities simply takes too long. So today we're going to talk about how organizations can streamline their data operations through automation, machine intelligence, and really simplifying data migrations to the cloud. We'll be talking to technologists, visionaries, hands-on practitioners and experts who are not just talking about streamlining their data pipelines; they're actually doing it. So keep it right there. We'll be back shortly with Ajay Vohora, the CEO of Io-Tahoe, to kick off the program. You're watching theCUBE, the leader in digital global coverage. We're right back after this short break. >>Innovation, impact, influence. Welcome to theCUBE. Disruptors, developers and practitioners learn from the voices of leaders who share their personal insights from the hottest digital events around the globe. Enjoy the best this community has to offer on theCUBE, your global leader in high-tech digital coverage. >>Okay, we're back. Welcome back to Data Automated. Ajay Vohora is CEO of Io-Tahoe. Ajay, good to see you. How are things in London? >>Thanks, doing well. The customers that I speak to day in, day out, that we partner with, are busy adapting their businesses to serve their customers. It's very much a game of ensuring that we can serve our customers to help their customers. The adaptation that's happening here is trying to be more agile, having to be more flexible. There's a lot of pressure on data, a lot of demand on data, to deliver more value to the business and to those customers. >>As I said, we've been talking about DataOps a lot. The idea is DevOps applied to the data pipeline. But talk about enterprise data automation: what is it to you, and how is it different from DataOps? >>DevOps, you know, has been great for breaking down the silos between different roles and functions and bringing people together to collaborate, and we definitely see those tools, those methodologies, those processes, that kind of thinking, lending itself to data. What we look to do is build on top of that with data automation: the nuts and bolts of the algorithms, the models behind machine learning, the functions. That's where we invest our R&D, bringing that in to build on top of the methods and ways of thinking that break down silos, and injecting that automation into the business processes that are going to drive a business to serve its customers. It's a layer beyond DevOps and DataOps, the automation behind a new dimension. We've come a long way in the last few years; we started out by automating some of those simple tasks that could be codified but have a high impact on an organization, across the data estate, in a cost-effective way.
There are data-related tasks, classifying data, that a lot of our original patents and value were built up around, and a smart approach is very much centered on that. >>I'd love to get into the tech a little bit, in terms of how it works, and I think we have a graphic here that gets into that a little bit. So guys, if you bring that up. >>Sure. Right there in the middle, at the heart of what we do, is the intellectual property that we've built up over time, which takes from heterogeneous data sources, your Oracle relational database, your mainframe, your data lake, and increasingly APIs and devices that produce data, and creates the ability to automatically discover that data and classify it. After it's classified, we have the ability to form relationships across those different source systems, silos and lines of business. And once we've automated that, we can start to do some cool things, like putting context and meaning around that data. So it's moving organizations toward being data-driven. And increasingly, where we have really smart people in our customer organizations who want to do some of those advanced knowledge tasks, data scientists and, yeah, quants in some of the banks we work with, the onus is on putting everything we've done there with automation, classifying the data, understanding the relationships, the quality, the policies that you can apply to that data, into context. Once you've got the ability to empower a professional who's using data, to put that data in context and search across the entire enterprise estate, then they can start to do some exciting things and piece together the tapestry, that fabric, across different systems: it could be a CRM or ERP system such as SAP, and some of the newer cloud databases that we work with; Snowflake is a great example. If I look back maybe five years ago, we had a prevalence of data lake technologies at the cutting edge, and those are now converging onto some of the cloud platforms we work with, Google and AWS. And very much as you said, those manual attempts to grasp such a complex challenge at scale quickly run out of steam, because once you've got your fingers on the details of what's in your data estate, it's changed: you've onboarded a new customer, you've signed up a new partner, a customer has adopted a new product that you've just launched, and that slew of data keeps coming. So it's keeping pace with that; the only answer really is some form of automation.
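One way to picture the automated relationship discovery Ajay describes: infer candidate join keys by measuring value overlap between columns in different silos. This toy sketch uses a single containment signal; production systems combine many more, and the tables here are invented.

    def containment(a: set, b: set) -> float:
        """Share of the smaller column's values found in the larger column."""
        if not a or not b:
            return 0.0
        small, large = (a, b) if len(a) <= len(b) else (b, a)
        return len(small & large) / len(small)

    def candidate_joins(tables: dict[str, dict[str, set]], threshold: float = 0.9):
        """Compare every column pair across tables; high containment
        suggests a foreign-key-like relationship worth cataloging."""
        names = list(tables)
        for i, t1 in enumerate(names):
            for t2 in names[i + 1:]:
                for c1, v1 in tables[t1].items():
                    for c2, v2 in tables[t2].items():
                        score = containment(v1, v2)
                        if score >= threshold:
                            yield f"{t1}.{c1}", f"{t2}.{c2}", score

    # Invented sample: a CRM table and a billing table sharing customer ids.
    tables = {
        "crm.customers": {"cust_id": {"C1", "C2", "C3"}, "country": {"UK", "US"}},
        "billing.invoices": {"customer_ref": {"C1", "C2"}, "amount": {"9.99", "20.00"}},
    }
    for left, right, score in candidate_joins(tables):
        print(left, "<->", right, f"{score:.0%}")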
>>You're working with AWS, you're working with Google, you've got Red Hat and IBM as partners. What is attracting those folks to your ecosystem, and give us your thoughts on the importance of ecosystem. >>That's fundamental. When I came in as CEO, one of the trends I wanted us to be part of was being open, having an open architecture, and that allowed one thing that was close to my heart: as a CIO, you've got a budget and a vision, and you've already made investments into your organization, and some of those are pretty long-term bets. They could be going out five to ten years, sometimes, with a CRM system, training up your people, getting everybody working together around a common business platform. What I wanted to ensure is that we could openly plug in, using the APIs that are available, to leverage the investment and the cost that has already gone into managing an organization's IT and serving its business users. So part of the reason we've been able to be successful with partners like Google, AWS and, increasingly, a number of technology players, Red Hat, MongoDB is another one where we're doing a lot of good work, and Snowflake, is that those investments have been made by the organizations that are our customers, and we want to make sure we're adding to that, so they're leveraging the value they've already committed to. >>Yeah, and maybe you could give us some examples of the ROI and the business impact. >>Yeah, the ROI, David, is built upon the three things I mentioned; it's a combination. You're leveraging the existing investment in the existing estate, whether that's on Microsoft Azure or AWS or Google or IBM, and putting it to work, because the customers we work with have made those choices. On top of that, it's ensuring that we have the automation working right down to the level of the data, at the column level or the file level; we don't just deal with metadata, it's being very specific, at the most granular level. So as we run our processes and the automation, classification, tagging, applying policies from across the different compliance and regulatory needs an organization has to its data, everything that happens downstream from that is ready to serve a business outcome. With Io-Tahoe you can run those processes within hours of getting started, build that picture, visualize it, and bring it to life. The ROI right off the bat is finding data that should have been deleted and data that was copies, and being able to give the architects, whether we're working on GCP or a migration to any other cloud such as AWS or a multicloud landscape, a map to work from. >>Ajay, thanks so much for coming on theCUBE and sharing your insights and your experiences; great to have you. >>Thank you, David. Look forward to speaking again. >>Now we want to bring in the customer perspective. We have a great conversation with Paula D'Amico, senior vice president of data architecture at Webster Bank. So keep it right there. >>Io-Tahoe: data automated. Improve efficiency, drive down costs, and make your enterprise data work for you. We're on a mission to enable our customers to automate the management of data to realize maximum strategic and operational benefits. We envisage a world where data users consume accurate, up-to-date, unified data distilled from many silos to deliver transformational outcomes. Activate your data and avoid manual processing: accelerate data projects by enabling non-IT resources and data experts to consolidate, categorize and master data. Automate your data operations: power digital transformations by automating a significant portion of data management through human-guided machine learning. Get value from the start: increase the velocity of business outcomes with complete, accurate data curated automatically for data visualization tools and analytic insights. Improve the security and quality of your data: data automation improves security by reducing the number of individuals who have access to sensitive data, and it can improve quality; many companies report double-digit error reduction in data entry and other repetitive tasks.
Trust the way data works for you: data automation by Io-Tahoe learns as it works and can augment business-user behavior. It learns from exception handling and scales up or down as needed to prevent system or application overloads or crashes. It also allows innate knowledge to be socialized rather than individualized: no longer will your company struggle when the employee who knows how a report is done retires or takes another job; the work continues without the need for a detailed knowledge transfer. Continue supporting the digital shift: perhaps most importantly, data automation allows companies to begin making moves towards a broader, more aspirational transformation, on a small scale that is easy to implement and manage and delivers quick wins. Digital is the buzzword of the day, but many companies recognize that it is a complex strategy requiring time and investment. Once you get started with data automation, the digital transformation is initiated, and leaders and employees alike become more eager to invest time and effort in a broader digital transformation agenda. >>Everybody, we're back, and this is Dave Vellante. We're covering the whole notion of automating data in the enterprise, and I'm really excited to have Paula D'Amico here. She's senior vice president of enterprise data architecture at Webster Bank. Good to see you; thanks for coming on. >>Nice to see you too, yes. >>So let's start with Webster Bank. You guys are kind of a regional bank, New York, New England, I believe headquartered out of Connecticut, but tell us a little bit about the bank. >>Yeah, Webster Bank is a regional bank, Boston and again into New York, very focused on Westchester and Fairfield County. It's a really highly rated regional bank for this area; it holds quite a few awards for being supportive of the community, and it's really moving forward on technology. Currently we have a small group that is working toward moving into a more futuristic, more data-driven data warehouse. That's our first item. The other item is to drive new revenue by anticipating what customers do when they go to the bank, or when they log in, to be able to give them the best offer. The only way to do that is to have timely, accurate, complete data on the customer, and to know what's really of great value to offer them. >>At the top level, what are some of the key business drivers catalyzing your desire for change? >>The ability to give the customer what they need at the time when they need it. What I mean by that is that we have customer interactions in multiple ways, right? And I want the customer to be able to walk into a bank, or go online, and see the same format, have the same feel and the same look, and also be offered the next best offer for them. >>Part of it is really the cycle time, the end-to-end cycle time, that you're compressing, and then there are, if I understand it, residual benefits that are pretty substantial from a revenue opportunity. >>Exactly. It's driving new customers to new opportunities, it's enhancing risk management, and it's optimizing the banking process and then, obviously, creating new business. And the only way we're going to be able to do that is if we have the ability to look at the data right when the customer walks in the door or right when they open up their app.
>>Do you see the potential to increase the data sources, and hence the quality of the data, or is that sort of premature? >>Oh no, exactly right. Right now we ingest a lot of flat files from our mainframe-type legacy systems, which we've had for quite a few years. But now we're moving to the cloud, off-prem and on-prem, moving off-prem into, like, an S3 bucket where the data can land, so we can process that data and get at it faster, using real-time tools to move it into a place like Snowflake where we can utilize the data or give it out to our data marts. The data scientists are out in the lines of business right now, which is great, because I think that's where data science belongs. We should give them, and that's what we're working towards now, more self-service: the ability to access the data in a more robust way, and a single source of truth, so they're not pulling the data down into their own Tableau dashboards and then pushing it back out. I have eight engineers, data architects and database administrators, and then traditional data-warehousing people. And some customers that I have, business customers in the lines of business, want to just subscribe to a report; they don't want to go out and do any data-science work, and we still have to provide that. So we still want to provide them some kind of report regimen where they wake up in the morning, open up their email, and there's the report they need, which is great, and it works out really well. And one of the things, this is why we purchased Io-Tahoe: I wanted the ability to give the lines of business the ability to do search within the data. It reads the data flows and finds data redundancy and things like that, and helps me clean up the data and also give it to the data analysts. Someone would ask for a certain report, and it used to be: okay, well, in about four weeks we're going to go look at the data, and then we'll come back and tell you what we can do. But now, with Io-Tahoe, they're able to look at the data and, in one or two days, go back and say: yes, we have the data, this is where it is, and these are the data flows we've found. Also, what I call it is the birth of a column: where the column was created, where it went live as a teenager, and then where it went to die, in the archive.
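A bare-bones sketch of the flat-file-to-cloud flow Paula outlines: land a mainframe extract in S3 with boto3, then load it into Snowflake with a standard COPY statement. The bucket, stage and table names are made up; this is the generic pattern, not Webster Bank's actual pipeline.

    import boto3  # AWS SDK for Python

    def land_extract(local_path: str, bucket: str, key: str) -> None:
        """Upload a nightly flat-file extract into the S3 landing zone."""
        boto3.client("s3").upload_file(local_path, bucket, key)

    land_extract("/exports/accounts_20200723.csv",
                 "bank-landing-zone", "mainframe/accounts_20200723.csv")

    # With an external stage pointed at the bucket, Snowflake ingests the
    # file from any client session using standard COPY syntax:
    COPY_SQL = """
    COPY INTO raw.accounts
    FROM @landing_stage/mainframe/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    PATTERN = '.*accounts_.*[.]csv';
    """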
So seven different lines of business asked me that question in different ways once said Okay to contact the other one says, You know, just for one to pray all these, you know, um, and each project before I got there used to be siloed. So one customer would be 100 hours for them to do that and analytical work, and then another cut. Another of analysts would do another 100 hours on the other project. Well, now I can do that all at once, and I can do those type of searches and say yes we already have that documentation. Here it is. And this is where you can find where the customer has said, You know, you don't want I don't want to get access from you by email, or I've subscribed to get emails from you. I'm using Iot typos eight automation right now to bring in the data and to start analyzing the data close to make sure that I'm not missing anything and that I'm not bringing over redundant data. Um, the data warehouse that I'm working off is not, um a It's an on prem. It's an oracle database. Um, and it's 15 years old, so it has extra data in it. It has, um, things that we don't need anymore. And Iot. Tahoe's helping me shake out that, um, extra data that does not need to be moved into my S three. So it's saving me money when I'm moving from offering on Prem. >>What's your vision or your your data driven organization? >>Um, I want for the bankers to be able to walk around with on iPad in their hands and be able to access data for that customer really fast and be able to give them the best deal that they can get. I want Webster to be right there on top, with being able to add new customers and to be able to serve our existing customers who had bank accounts. Since you were 12 years old there and now our, you know, multi. Whatever. Um, I want them to be able to have the best experience with our our bankers. >>That's really what I want is a banking customer. I want my bank to know who I am, anticipate my needs and create a great experience for me. And then let me go on with my life. And so that's a great story. Love your experience, your background and your knowledge. Can't thank you enough for coming on the Cube. >>No, thank you very much. And you guys have a great day. >>Next, we'll talk with Lester Waters, who's the CTO of Iot Toe cluster takes us through the key considerations of moving to the cloud. >>Yeah, right. The entire platform Automated data Discovery data Discovery is the first step to knowing your data auto discover data across any application on any infrastructure and identify all unknown data relationships across the entire siloed data landscape. smart data catalog. Know how everything is connected? Understand everything in context, regained ownership and trust in your data and maintain a single source of truth across cloud platforms, SAS applications, reference data and legacy systems and power business users to quickly discover and understand the data that matters to them with a smart data catalog continuously updated ensuring business teams always have access to the most trusted data available. Automated data mapping and linking automate the identification of unknown relationships within and across data silos throughout the organization. Build your business glossary automatically using in house common business terms, vocabulary and definitions. Discovered relationships appears connections or dependencies between data entities such as customer account, address invoice and these data entities have many discovery properties. At a granular level, data signals dashboards. 
Get up to date feeds on the health of your data for faster improved data management. See trends, view for history. Compare versions and get accurate and timely visual insights from across the organization. Automated data flows automatically captured every data flow to locate all the dependencies across systems. Visualize how they work together collectively and know who within your organization has access to data. Understand the source and destination for all your business data with comprehensive data lineage constructed automatically during with data discovery phase and continuously load results into the smart Data catalog. Active, geeky automated data quality assessments Powered by active geek You ensure data is fit for consumption that meets the needs of enterprise data users. Keep information about the current data quality state readily available faster Improved decision making Data policy. Governor Automate data governance End to end over the entire data lifecycle with automation, instant transparency and control Automate data policy assessments with glossaries, metadata and policies for sensitive data discovery that automatically tag link and annotate with metadata to provide enterprise wide search for all lines of business self service knowledge graph Digitize and search your enterprise knowledge. Turn multiple siloed data sources into machine Understandable knowledge from a single data canvas searching Explore data content across systems including GRP CRM billing systems, social media to fuel data pipelines >>Yeah, yeah, focusing on enterprise data automation. We're gonna talk about the journey to the cloud Remember, the hashtag is data automate and we're here with Leicester Waters. Who's the CTO of Iot Tahoe? Give us a little background CTO, You've got a deep, deep expertise in a lot of different areas. But what do we need to know? >>Well, David, I started my career basically at Microsoft, uh, where I started the information Security Cryptography group. They're the very 1st 1 that the company had, and that led to a career in information, security. And and, of course, as easy as you go along with information security data is the key element to be protected. Eso I always had my hands and data not naturally progressed into a roll out Iot talk was their CTO. >>What's the prescription for that automation journey and simplifying that migration to the cloud? >>Well, I think the first thing is understanding what you've got. So discover and cataloging your data and your applications. You know, I don't know what I have. I can't move it. I can't. I can't improve it. I can't build upon it. And I have to understand there's dependence. And so building that data catalog is the very first step What I got. Okay, >>so So we've done the audit. We know we've got what's what's next? Where do we go >>next? So the next thing is remediating that data you know, where do I have duplicate data? I may have often times in an organization. Uh, data will get duplicated. So somebody will take a snapshot of the data, you know, and then end up building a new application, which suddenly becomes dependent on that data. So it's not uncommon for an organization of 20 master instances of a customer, and you can see where that will go. And trying to keep all that stuff in sync becomes a nightmare all by itself. So you want to sort of understand where all your redundant data is? So when you go to the cloud, maybe you have an opportunity here to do you consolidate that that data, >>then what? 
You figure out what to get rid of our actually get rid of it. What's what's next? >>Yes, yes, that would be the next step. So figure out what you need. What, you don't need you Often times I've found that there's obsolete columns of data in your databases that you just don't need. Or maybe it's been superseded by another. You've got tables have been superseded by other tables in your database, so you got to kind of understand what's being used and what's not. And then from that, you can decide. I'm gonna leave this stuff behind or I'm gonna I'm gonna archive this stuff because I might need it for data retention where I'm just gonna delete it. You don't need it. All were >>plowing through your steps here. What's next on the >>journey? The next one is is in a nutshell. Preserve your data format. Don't. Don't, Don't. Don't boil the ocean here at music Cliche. You know, you you want to do a certain degree of lift and shift because you've got application dependencies on that data and the data format, the tables in which they sent the columns and the way they're named. So some degree, you are gonna be doing a lift and ship, but it's an intelligent lift and ship. The >>data lives in silos. So how do you kind of deal with that? Problem? Is that is that part of the journey? >>That's that's great pointed because you're right that the data silos happen because, you know, this business unit is start chartered with this task. Another business unit has this task and that's how you get those in stance creations of the same data occurring in multiple places. So you really want to is part of your cloud migration. You really want a plan where there's an opportunity to consolidate your data because that means it will be less to manage. Would be less data to secure, and it will be. It will have a smaller footprint, which means reduce costs. >>But maybe you could address data quality. Where does that fit in on the >>journey? That's that's a very important point, you know. First of all, you don't want to bring your legacy issues with U. S. As the point I made earlier. If you've got data quality issues, this is a good time to find those and and identify and remediate them. But that could be a laborious task, and you could probably accomplish. It will take a lot of work. So the opportunity used tools you and automate that process is really will help you find those outliers that >>what's next? I think we're through. I think I've counted six. What's the What's the lucky seven >>Lucky seven involved your business users. Really, When you think about it, you're your data is in silos, part of part of this migration to cloud as an opportunity to break down the silos. These silence that naturally occurs are the business. You, uh, you've got to break these cultural barriers that sometimes exists between business and say so. For example, I always advise there's an opportunity year to consolidate your sensitive data. Your P I. I personally identifiable information and and three different business units have the same source of truth From that, there's an opportunity to consolidate that into one. >>Well, great advice, Lester. Thanks so much. I mean, it's clear that the Cap Ex investments on data centers they're generally not a good investment for most companies. Lester really appreciate Lester Water CTO of Iot Tahoe. Let's watch this short video and we'll come right back. >>Use cases. Data migration. 
>>Use cases. Data migration: accelerate the digitization of business by providing automated data migration workflows that save time in achieving project milestones; eradicate operational risk and minimize labor-intensive manual processes that demand costly overhead. Data quality: drain the data swamp and re-establish trust in the data to enable data science and data analytics. Data governance: ensure that business and technology understand critical data elements and have control over the enterprise data landscape. Data analytics enablement: data discovery to enable data scientists and data analytics teams to identify the right data set through self-service, for business demands or analytical reporting that ranges from advanced to complex. Regulatory compliance: government-mandated data privacy requirements such as GDPR, CCPA, ePR and HIPAA. Data lake management: identify lake contents, clean up, and manage ongoing activity. Data mapping and knowledge graph: create business knowledge graph (BKG) models of enterprise data, with automated mapping to a specific ontology, enabling semantic search across all sources in the data estate. DataOps: scale, as a foundation to automate data management processes.
That's labor intensity in the in the bottom, and you can see hi is that sort of brown and and you could see a number of data analysis data staging data prep, the trial, the implementation post implementation fixtures, the transition to be a Blu, which I think is business as usual. >>The key thing is, when you don't understand your data upfront, it's very difficult to scope to set up a project because you go to business stakeholders and decision makers, and you say Okay, we want to migrate these data stores. We want to put them in the cloud most often, but actually, you probably don't know how much data is there. You don't necessarily know how many applications that relates to, you know, the relationships between the data. You don't know the flow of the basis of the direction in which the data is going between different data stores and tables. So you start from a position where you have pretty high risk and probably the area that risk you could be. Stack your project team of lots and lots of people to do the next phase, which is analysis. And so you set up a project which has got a pretty high cost. The big projects, more people, the heavy of governance, obviously on then there, then in the phase where they're trying to do lots and lots of manual analysis, um, manual processes, as we all know, on the layer of trying to relate data that's in different grocery stores relating individual tables and columns, very time consuming, expensive. If you're hiring in resource from consultants or systems integrators externally, you might need to buy or to use party tools. Aziz said earlier the people who understand some of those systems may have left a while ago. CEO even higher risks quite cost situation from the off on the same things that have developed through the project. Um, what are you doing with Ayatollah? Who is that? We're able to automate a lot of this process from the very beginning because we can do the initial data. Discovery run, for example, automatically you very quickly have an automated validator. A data met on the data flow has been generated automatically, much less time and effort and much less cars stopped. >>Yeah. And now let's bring up the the the same chart. But with a set of an automation injection in here and now. So you now see the sort of Cisco said accelerated by Iot, Tom. Okay, great. And we're gonna talk about this, but look, what happens to the operational risk. A dramatic reduction in that, That that graph and then look at the bars, the bars, those blue bars. You know, data analysis went from 24 weeks down to four weeks and then look at the labor intensity. The it was all these were high data analysis, data staging data prep trialling post implementation fixtures in transition to be a you all those went from high labor intensity. So we've now attacked that and gone to low labor intensity. Explain how that magic happened. >>I think that the example off a data catalog. So every large enterprise wants to have some kind of repository where they put all their understanding about their data in its price States catalog. If you like, imagine trying to do that manually, you need to go into every individual data store. You need a DB, a business analyst, reach data store. They need to do an extract of the data. But it on the table was individually they need to cross reference that with other data school, it stores and schemers and tables you probably with the mother of all Lock Excel spreadsheets. It would be a very, very difficult exercise to do. 
I mean, in fact, one of our reflections as we automate lots of data lots of these things is, um it accelerates the ability to water may, But in some cases, it also makes it possible for enterprise customers with legacy systems take banks, for example. There quite often end up staying on mainframe systems that they've had in place for decades. I'm not migrating away from them because they're not able to actually do the work of understanding the data, duplicating the data, deleting data isn't relevant and then confidently going forward to migrate. So they stay where they are with all the attendant problems assistance systems that are out of support. You know, you know, the biggest frustration for lots of them and the thing that they spend far too much time doing is trying to work out what the right data is on cleaning data, which really you don't want a highly paid thanks to scientists doing with their time. But if you sort out your data in the first place, get rid of duplication that sounds migrate to cloud store where things are really accessible. It's easy to build connections and to use native machine learning tools. You well, on the way up to the maturity card, you can start to use some of the more advanced applications >>massive opportunities not only for technology companies, but for those organizations that can apply technology for business. Advantage yourself, count. Thanks so much for coming on the Cube. Much appreciated. Yeah, yeah, yeah, yeah

Published Date : Jun 23 2020

SUMMARY :

of enterprise data automation, an event Siri's brought to you by Iot. a lot of pressure on data, a lot of demand on data and to deliver more value What is it to you. into the business processes that are going to drive a business to love to get into the tech a little bit in terms of how it works. the ability to automatically discover that data. What is attracting those folks to your ecosystem and give us your thoughts on the So part of the reason why we've IBM, and I'm putting that to work because, yeah, the A. J. Thanks so much for coming on the Cube and sharing your insights and your experience is great to have Look who is smoking in We have a great conversation with Paul Increase the velocity of business outcomes with complete accurate data curated automatically And I'm really excited to have Paul Damico here. Nice to see you too. So let's let's start with Let's start with Webster Bank. complete data on the customer and what's really a great value the ability to give the customer what they need at the Part of it is really the cycle time, the end end cycle, time that you're pressing. It's enhanced the risk, and it's to optimize the banking process and to the cloud and off Prem and on France, you know, moving off Prem into, In researching Iot Tahoe, it seems like one of the strengths of their platform is the ability to visualize data the You know, just for one to pray all these, you know, um, and each project before data for that customer really fast and be able to give them the best deal that they Can't thank you enough for coming on the Cube. And you guys have a great day. Next, we'll talk with Lester Waters, who's the CTO of Iot Toe cluster takes Automated data Discovery data Discovery is the first step to knowing your We're gonna talk about the journey to the cloud Remember, the hashtag is data automate and we're here with Leicester Waters. data is the key element to be protected. And so building that data catalog is the very first step What I got. Where do we go So the next thing is remediating that data you know, You figure out what to get rid of our actually get rid of it. And then from that, you can decide. What's next on the You know, you you want to do a certain degree of lift and shift Is that is that part of the journey? So you really want to is part of your cloud migration. Where does that fit in on the So the opportunity used tools you and automate that process What's the What's the lucky seven there's an opportunity to consolidate that into one. I mean, it's clear that the Cap Ex investments You know the data swamp and re establish trust in the data to enable Top Click on the link and connect with the data for organizations to really get value out of data. Uh, and you can try and milestones That Blue Bar is the time to test so you can see the second step. have pretty high risk and probably the area that risk you could be. to be a you all those went from high labor intensity. But it on the table was individually they need to cross reference that with other data school, Thanks so much for coming on the Cube.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
Paul Damico | PERSON | 0.99+
Paul Damico | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Aziz | PERSON | 0.99+
Webster Bank | ORGANIZATION | 0.99+
Google | ORGANIZATION | 0.99+
Westchester | LOCATION | 0.99+
AWS | ORGANIZATION | 0.99+
24 weeks | QUANTITY | 0.99+
Seth | PERSON | 0.99+
London | LOCATION | 0.99+
one | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
Connecticut | LOCATION | 0.99+
New York | LOCATION | 0.99+
100 hours | QUANTITY | 0.99+
iPad | COMMERCIAL_ITEM | 0.99+
Cisco | ORGANIZATION | 0.99+
four weeks | QUANTITY | 0.99+
Siri | TITLE | 0.99+
thousands | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
six | QUANTITY | 0.99+
first item | QUANTITY | 0.99+
20 master instances | QUANTITY | 0.99+
today | DATE | 0.99+
second step | QUANTITY | 0.99+
S three | COMMERCIAL_ITEM | 0.99+
I o ta ho | ORGANIZATION | 0.99+
first step | QUANTITY | 0.99+
Fairfield County | LOCATION | 0.99+
five years ago | DATE | 0.99+
first | QUANTITY | 0.99+
each project | QUANTITY | 0.99+
France | LOCATION | 0.98+
two days | QUANTITY | 0.98+
Leicester Waters | ORGANIZATION | 0.98+
Iot Tahoe | ORGANIZATION | 0.98+
Cap Ex | ORGANIZATION | 0.98+
seven cause | QUANTITY | 0.98+
Lester Waters | PERSON | 0.98+
5 10 years | QUANTITY | 0.98+
Boston | LOCATION | 0.97+
Iot | ORGANIZATION | 0.97+
Tahoe | ORGANIZATION | 0.97+
Tom | PERSON | 0.97+
First | QUANTITY | 0.97+
15 years old | QUANTITY | 0.96+
seven different lines | QUANTITY | 0.96+
single source | QUANTITY | 0.96+
Utah | LOCATION | 0.96+
New England | LOCATION | 0.96+
Webster | ORGANIZATION | 0.95+
12 years old | QUANTITY | 0.95+
Iot Labs | ORGANIZATION | 0.95+
Iot. Tahoe | ORGANIZATION | 0.95+
1st 1 | QUANTITY | 0.95+
U. S. | LOCATION | 0.95+
J ahora | ORGANIZATION | 0.95+
Cube | COMMERCIAL_ITEM | 0.94+
Prem | ORGANIZATION | 0.94+
one customer | QUANTITY | 0.93+
Oracle | ORGANIZATION | 0.93+
I O ta ho | ORGANIZATION | 0.92+
Snowflake | TITLE | 0.92+
seven | QUANTITY | 0.92+
single | QUANTITY | 0.92+
Lester | ORGANIZATION | 0.91+
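Entity tables like the one above are typically produced by an automated named-entity-recognition pass over the transcript. As a hedged illustration of how such output can be generated — using spaCy; the specific model behind this page's tables is not known, and spaCy's stock pipeline does not expose the confidence column shown here:

```python
# Sketch: extract (entity, category) pairs from transcript text with spaCy.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = "Dave Vellante hosts a crowd chat with IBM on May 27th in Boston."
doc = nlp(text)

for ent in doc.ents:
    print(f"{ent.text} | {ent.label_}")   # e.g. "Dave Vellante | PERSON"
```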

Aliye 1 2 w dave crowdchat v2


 

>>Hey everybody, this is Dave Vellante. On May 27th we're hosting a crowd chat at crowdchat.net/dataops. Data ops is all about automating the data pipeline, infusing AI and operationalizing AI in the data pipeline in your organizations, which has been a real challenge for companies over the last several years, most of the decade. With me is Aliye Ozcan. What's changed, that companies can now succeed at automating and operationalizing the data pipeline? >>You're so right, David. As far back as I can remember myself in this industry, the data challenges and the bottlenecks have been the bottlenecks. So why now? I think we can answer that from three angles: people, process, technology. Let me start with the technology part. On the technology front, right now the compute power is everywhere, and the cloud, multi-cloud, artificial intelligence, social, mobile, all connected, are giving organizations the power to deal with these problems. I especially want to highlight the artificial intelligence part, and I will highlight it with how IBM is leveraging artificial intelligence to solve some of the dormant data problems. One of the major dormant problems is onboarding data. If you're unable to onboard your data fast — however beautiful the factory, all the factory lines shining, waiting for data — if you cannot onboard data fast, everything else is waiting. What IBM did is automate metadata generation capabilities, which means onboarding data leveraging artificial intelligence models, so that it is not only onboarding the data but onboarding the data in a way that everyone can understand it. When data scientists look at the data, they don't stare at the data; they understand what that data means, because it is interpreted into business taxonomy, into business language, in a fast fashion. That is one, the technology. The second part, the people and process parts, are so important. In the process part, the methodology: now we have the methodologies. The first methodology that I would call out is a change — sometimes we call it agile, I don't know whether you've heard of it — and these agile methodologies are now asking us to iterate: fail >>fast, try fast, fail fast, try fast >>and these agile methodologies are now being applied to data pipelines in weeks of iterations. We can look at the most important business challenge, with the KPIs that you're trying to achieve, then map those KPIs to the data sources needed to answer those KPIs, and then streamline everything in between, fast. That makes a change in a market like the one we are in. Then all those data flows are streamlined and optimized. And during the Cube program that we put together, you will see some of the organizations mention that the agile practice they put in place in every geography is now getting them even closer and closer, because now we all depend on and >>live on digital. So I'm very excited, because we're interviewing Standard Bank, Associated Bank, Harley Davidson, and IBM's chief data officer, Inderpal Bhandari, to talk about how IBM is sort of drinking its own champagne, eating its own dog food, whatever you prefer. This is not mumbo jumbo marketing.
This is practitioners who are going to talk about how they succeeded, how they funded these initiatives, how they made the business case, some of the challenges they faced, how they dealt with classification and metadata, and some of the outcomes they achieved. So join us on the crowd chat, crowdchat.net/dataops, on May 27th. Go there, add it to your calendar. We'll see you in the crowdchat.
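As a toy illustration of the automated metadata generation Aliye describes — classifying incoming columns into business terms so onboarded data is immediately understandable — here is a hedged sketch. It is not IBM's implementation; the rules and labels are invented for illustration, standing in for the ML models she mentions:

```python
# Sketch: tag incoming data columns with business-taxonomy labels using
# simple pattern rules. Toy stand-in for ML-driven metadata generation.
import re

RULES = {                                   # hypothetical business glossary
    "Customer Email": re.compile(r"[^@\s]+@[^@\s]+\.[a-z]{2,}$"),
    "US Phone":       re.compile(r"\d{3}-\d{3}-\d{4}$"),
    "Date":           re.compile(r"\d{4}-\d{2}-\d{2}$"),
}

def tag_column(values: list[str]) -> str:
    """Return the business term whose pattern most of the sample matches."""
    best, best_hits = "Unclassified", 0
    for term, pattern in RULES.items():
        hits = sum(bool(pattern.match(v)) for v in values)
        if hits > best_hits and hits >= len(values) // 2:
            best, best_hits = term, hits
    return best

print(tag_column(["ann@example.com", "bo@mail.org"]))  # Customer Email
```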

Published Date : May 6 2020

SUMMARY :

at automating and operationalizing the data pipeline. They don't stare at the data, but they understand what that data means; that is one, the technology; the second part, people and process. During the Cube program that we put together, you will see some of the organizations, some of the challenges that they face, how they dealt with classification and metadata and

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
May 27th | DATE | 0.99+
IBM | ORGANIZATION | 0.99+
aljaz cannoli | PERSON | 0.98+
second part | QUANTITY | 0.98+
One | QUANTITY | 0.97+
first methodology | QUANTITY | 0.96+
three angles | QUANTITY | 0.96+
Aliye | PERSON | 0.94+
Standard Bank Associated Bank | ORGANIZATION | 0.92+
agile | TITLE | 0.92+
Harley Davidson | ORGANIZATION | 0.91+
one | QUANTITY | 0.9+
crowdchat | ORGANIZATION | 0.86+
years | DATE | 0.76+
last | DATE | 0.64+
Cube | TITLE | 0.58+
Cube | ORGANIZATION | 0.44+

Aliye 1 1 w dave crowdchat v2


 

>> Hi everybody, this is Dave Vellante with the CUBE. When we talk to practitioners about data and AI, they have trouble infusing AI into their data pipeline and automating that data pipeline. So we're bringing together the community, brought to you by IBM, to really understand how successful organizations are operationalizing the data pipeline, and with me to talk about that is Aliye Ozcan. Aliye, hello, introduce yourself. Tell us about who you are. >> Hi Dave, how are you doing? Yes, my name is Aliye Ozcan. I'm the Data Operations (data ops) Global Marketing Leader at IBM. >> So I'm very excited about this project. Go to crowdchat.net/dataops, add it to your calendar and check it out. So we have practitioners, Aliye, from Harley Davidson, Standard Bank, Associated Bank. What are we going to learn from them? >> What we are going to learn from them is the data experiences. What are the data challenges that they are going through? What are the data bottlenecks that they had? And especially in these challenging times right now — the industry is going through this challenging time; we are all going through this — how is the foundation that they invested in now helping them pivot quickly to new market demands, fast? That is fascinating to see, and I'm very excited to have individual conversations with those experts and bring those stories to the audience here. >> Awesome, and we also have Inderpal Bhandari from the CDO office at IBM, so go to crowdchat.net/dataops, add it to your calendar, we'll see you in the crowd chat.

Published Date : May 6 2020

SUMMARY :

are operationalizing the data pipeline I'm the Data Operations Data ops What are we going to learn from them? What are the data challenges add it to your calendar, we'll

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave | PERSON | 0.99+
Dave Velante | PERSON | 0.99+
Standard Bank | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
Associated Bank | ORGANIZATION | 0.99+
Inderpal Bhandari | PERSON | 0.99+
Harley Davidson | ORGANIZATION | 0.99+
Aliye | PERSON | 0.99+
crowdchat.net/dataops | OTHER | 0.99+
Aliye Ozcan | PERSON | 0.99+
Aliye 1 | PERSON | 0.86+
CUBE | ORGANIZATION | 0.85+
crowdchat | TITLE | 0.67+
Data ops | ORGANIZATION | 0.61+
CDO | ORGANIZATION | 0.53+

Thad Crowdchat promo v1


 

>> Hi, I'm Thad Vorozilchak, Vice President of Information Architecture with IBM. We're facing some really challenging times, and the businesses that I'm talking to are looking for ways to get through these times together and prepare themselves for the future. One of the ways that businesses are preparing is through a new methodology called data ops, but they want to do that in a smart way, one that embraces the opportunities data ops presents while avoiding its challenges. If you're one of those businesses, I invite you to join IBM and other leaders in data ops as we discuss the road ahead. Join us on May 27th for a data ops web chat. Hope to see you there.

Published Date : May 6 2020

SUMMARY :

and the businesses that I'm talking to

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Thad Vorozilchak | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
May 27th | DATE | 0.99+
One | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Thad Crowdchat | PERSON | 0.97+
one | QUANTITY | 0.97+
Vice President | PERSON | 0.69+

IBM promo Aliye Dave Crowdchat take two v1


 

Hi everyone, my name is Aliye Ozcan. I'm the Data Operations (data ops) Global Marketing Leader at IBM. We are all going through challenging times, especially the industry. We cannot control what we cannot control, but there are still things that we can. There are two things that we can do, unless you are on the front lines. The first thing is social distancing; it's a personal accountability. The second thing is skills improvement: we can improve our skills for today and tomorrow, whether that is the new normal or the normal. IBM is helping you to improve your skills by bringing you IBM's biggest event, Think, for free, virtually. It is starting on May 5th and 6th. We will be covering areas such as IT resiliency, business continuity, risk management, the digital transformation journey to AI, multi-cloud environments, and how we are all connected with the digital platforms. Hope you can join us to sharpen your skills; for sure you will need them more and more today and tomorrow. Please, please stay safe and stay well. Thank you.

Published Date : May 4 2020

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
May 5th | DATE | 0.99+
two things | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
today | DATE | 0.99+
Aliye Dave Crowdchat | PERSON | 0.99+
alia özkan | PERSON | 0.98+
second thing | QUANTITY | 0.97+
6:00 | DATE | 0.95+
first thing | QUANTITY | 0.92+
iBM | ORGANIZATION | 0.88+

Day 2 Livestream | Enabling Real AI with Dell


 

>>From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation. >>Hey, welcome back everybody. Jeff Frick here with the Cube. We're doing a special presentation today, really talking about AI and making AI real, with two companies that are right in the heart of it: Dell EMC as well as Intel. So we're excited to have a couple of Cube alumni back on the program; haven't seen them in a little while. First off, from Intel, Lisa Spelman. She is the corporate VP and GM for the Xeon and Memory Group. Great to see you, Lisa. >>Good to see you again, too. >>And we've got Ravi Pendekanti. He is the SVP of server product management, also from Dell Technologies. Ravi, great to see you as well. >>Good to see you as well. Of course, >>yes. So let's jump into it. So, yesterday, Ravi, you guys announced a bunch of new AI-based solutions. Wonder if you can take us through that. >>Absolutely. So one of the things we did, Jeff, was we said it's not good enough for us to have a point product; we talked about a whole portfolio of products — more importantly, everything from our workstation side to the servers to the storage elements, and things that we're doing with VMware, for example. Beyond that, we're also obviously pleased with everything we're doing on bringing the right set of validated configurations and reference architectures and ready solutions, so that the customer really doesn't have to go ahead and do the due diligence of figuring out how the various integration points come together in making a solution possible. Obviously, all this is based on the great partnership we have with Intel, using not just their CPUs but FPGAs as well. >>That's great. So, Lisa, I wonder: I think a lot of people obviously know Intel for your CPUs, but I don't think they recognize all the other stuff that can wrap around the core CPU to add value around a particular solution set or problem. I wonder if you could tell us a little bit more about the Xeon family and what you guys are doing in the data center with this kind of new, interesting thing called AI and machine learning. >>Yeah. So thanks, Jeff and Ravi. It's amazing to see that artificial intelligence applications are just growing in their pervasiveness, and you see it taking off across all sorts of industries; it's actually being built into just about every application that is coming down the pipe. And so if you think about needing to have your hardware foundation able to support that, that's where we're seeing a lot of the customer interest come in — and not just on Xeon but, like Ravi said, on the whole portfolio and how the system and solution configurations come together. So we're approaching it from a total view of being able to move all that data, store all of that data and process all of that data, and providing options along that entire pipeline. And within that, on Xeon specifically, we've really set that as our cornerstone foundation for AI. It's the most deployed data center CPU around the world, and if every single application is going to have artificial intelligence in it, it makes sense that you would have artificial intelligence acceleration built into the actual hardware, so that customers get a better experience right out of the box, regardless of which industry they're in or which specialized function they might be focusing on.
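Lisa's point about acceleration built into the hardware surfaces at the software level too: mainstream frameworks pick up these CPU optimizations through libraries such as oneDNN. A minimal, hedged illustration — assuming TensorFlow 2.x is installed; the environment variable is TensorFlow's documented switch for oneDNN optimizations, and timings will vary by machine:

```python
# Sketch: check whether oneDNN-style CPU optimizations are active in TensorFlow.
# On recent x86 builds oneDNN is on by default and can be toggled with
# TF_ENABLE_ONEDNN_OPTS, which must be set before the import.
import os
import time

os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")

import tensorflow as tf

print("TensorFlow:", tf.__version__)

# Time a CPU matmul as a rough, machine-dependent sanity check.
x = tf.random.normal([2048, 2048])
start = time.perf_counter()
for _ in range(10):
    y = tf.matmul(x, x)
_ = y.numpy()  # force execution
print(f"10 matmuls took {time.perf_counter() - start:.3f}s")
```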
>>It's really wild, right? Because in processors, you always move to your next point of failure. So having all these kinds of accelerants, and the ways you can carve off parts of the workload, parts of the intelligence, that you can optimize better, is so important — as you said, Lisa, and also, Ravi, on the solution side. Nobody wants general AI just for AI's sake; it's a nice word, an interesting science experiment, but it's really in the applied world that we're starting to see the value in the application of this stuff. And I wonder if you have a customer you want to highlight — Epsilon. Tell us a little bit about their journey and what you guys did with them. >>Great, sure. If you start looking at Epsilon, they're in the marketing business, and one of the crucial things for them is to ensure that they're able to provide the right data, based on the analysis they run on what it is the customer is looking for. And they can't wait a long period of time; they need to be doing that on a near-real-time basis, and that's what Epsilon does. What really blew my mind was the fact that they actually send out close to 100 billion messages — again, that's 100 billion messages a year. So you can imagine the amount of data they're analyzing, which is petabytes of data, and they need to do it in real time. And that's all possible because of the kind of analytics we have driven into the PowerEdge servers, using the latest Intel Xeon processors coupled with some of the technologies on the storage side, which again allow them to go back in, analyze this data, and serve their customers very rapidly. >>You know, it's funny: I think martech is kind of an underappreciated world of AI — machine-to-machine execution, the amount of transactions that go through when you load a webpage on your site, that actually IDs who you are, puts a marketplace together, sells time or a spot on that ad, and then lets people in. It's really sophisticated, as you said: massive amounts of data going through. If it's done right, it's magic, and if it's done not right, then people get pissed off. You've got to use the right tools. >>You got it. This is where, as I talked about, it can be garbage in, garbage out if you don't really act on the right data. So that is where I think it becomes important. But also, if you don't do it in a timely fashion, if you don't serve up the right content at the right time, you miss the opportunity to go ahead and grab attention. >>Right, right. Lisa, kind of back to you. There's all kinds of open source stuff happening in the AI and machine learning world — we hear things about TensorFlow and all these different libraries. How are you guys embracing that world as you look at AI and its development? You've been at it for a while; you guys are involved in everything from autonomous vehicles to the martech we discussed. How are you making sure that these things are using all the available resources to optimize the solutions? >>Yeah, I think you and Ravi were just hitting on some of those examples of how many ways people have figured out how to apply AI now. So maybe at first it was really driven by just image recognition and image tagging. But now you see so much work being driven in recommendation engines and object detection for much more industrial use cases, not just consumer enjoyment, and also those things you mentioned where personalization is a really fine line you walk: how do you make an experience feel good-personalized versus creepy-personalized? That is a real challenge and opportunity across so many industries. And so open source, like you mentioned, is a great place for that foundation, because it gives people the tools to build upon. And I think our strategy is really a stack strategy that starts first with delivering the best hardware for artificial intelligence — again, Xeon is the foundation for that, but we also have Movidius-type processing for out at the edge, and then all the way through to very custom, specific accelerators for the data center. Then on top of that is the optimized software, which is going into each of those frameworks and doing the work so that the framework recognizes the specific acceleration we built into the CPU — whether that's DL Boost — or recognizes the capabilities that sit in that accelerator silicon. And then once we've done that software layer, this is where we have the opportunity for a lot of partnership: the ecosystem and the solutions work that Ravi started off by talking about. So AI isn't easy for everyone. It has a lot of value, but it takes work to extract that value. And so partnerships within the ecosystem — making sure that ISVs are taking those optimizations, building them in, and fundamentally can deliver a reliable solution to customers — are the last leg of that strategy, but really one of the most important, because without it you get a lot of really good benchmark results but not a lot of good, happy customers. >>Right. I'm just curious, Lisa, because you kind of sit in the catbird seat — you guys at the core, under all the layers, running data centers, running these workloads — how do you see the evolution of machine learning and AI from the early days, when it was science projects and really smart people on mahogany row, versus now, when people are talking about trying to get it to a citizen developer, really a citizen data scientist, exposing the power of AI to business leaders or business analysts so they can apply it to their day-to-day world? Because you were not only in it early, but you get to see some of the stuff coming down the road in design wins and reference architectures. How should people think about this evolution? >>It really is one of those things where, if you step back, the fundamentals of AI have actually been around for 50 or more years. It's just that the changes in the amount of computing capability that's available, the network capacity that's available, and the fundamental efficiency that IT and infrastructure managers can get out of their cloud architectures have allowed this pervasiveness to evolve. And I think that's been the big tipping point that pushed people past the fear. Of course, AI went through the same thing that cloud did, where you had maybe every business leader or CEO saying: hey, get me a cloud, and I'll figure out what for later; give me some AI, and we'll make it work. But we're through those initial use cases and starting to see business value derived from those deployments. And I think some of the most exciting areas are in the medical services field — especially if you think of the environment we're in right now: the amount of efficiency, and in some cases reduction in human contact, that you can bring to diagnostics, and just patient tracking and the ability to follow an entire patient history, is really powerful and represents the next wave in care and how we scale our limited resource of doctors, nurses and technicians. And the point we're making about what's coming next is where you start to see even more mass personalization and recommendations, in a way that feels not spooky to people but actually comforting, and they take value from it because it allows them to immediately act. Ravi referenced the speed at which you have to utilize the data: when people can immediately act more efficiently, they're generally happier with the service. So we see so much opportunity, and we're continuing to address it across, again, that hardware, software and solution stack, so we can stay a step ahead of our customers. >>Right. That's great. Ravi, I want to give you the final word, because you guys have to put the solutions together and actually deliver them to the customer — so not only the hardware and the software, but any other ecosystem components that you have to bring together. I wonder if you can talk about that approach, and how it's really the solution at the end of the day — not specs, not speeds and feeds; that's not really what people care about. It's really a good solution. >>Exactly right, Jeff, because at the end of the day, it's like this: most of us probably use the ATM to withdraw money, but we really don't know what sits behind the ATM. My point being that all you really care about at that particular point in time is being able to put your card into the machine and get your dollar bills out, for example. Likewise, when you start looking at what the customer really needs, what Lisa hit upon is actually right: what they're looking for, and you said this, is the whole solution. Our mantra is very simple: we want to make sure we use the right basic building blocks, ensuring that we bring the right solutions using three things — the right products, which essentially means we need the right partners to get the right processors and GPUs in. Then we get to the next level by providing ready solutions or validated reference architectures, where we have done the sausage-making process that the customer now doesn't need to go through. In a way, we have done the cooking and we provide a recipe book; you just go through the ingredient-pairing process, and off you go to get your solution done. And finally, the final stage is the help customers still need in terms of services; that's something else Dell Technologies provides. The whole idea is that if customers want help deploying the solutions, we can also do that with our services. So that's probably the way we approach it.
The way we approach it: providing the building blocks, using the right technologies from our partners; then making sure we have the right solutions our customers can look at; and finally, the deployment help they need, which we can do with our services. >>Well, Ravi, Lisa, thanks for taking a few minutes. That was a great tee-up, Ravi, because I think we're going to go to a couple of customer interviews, enjoying that nice meal you prepared with that combination of hardware, software, services and support. So thank you for your time, and it was great to catch up. All right, let's go and run the tape. >>Hi, Jeff. I wanted to talk about two examples of collaboration with our partners that have yielded real breakthroughs in HPC and AI activities. The first example I wanted to cover is with the Neuro team up in Canada. With that team, we collaborated with Intel on tuning algorithms and code in order to accelerate the mapping of the human brain. We have a cluster down here in Texas called Zenith, based on Xeon and Optane memory, and the three of us — our friends at the Neuro, Intel, and the Dell HPC and Data Innovation Engineering team — were able to go and accelerate the mapping of the human brain. So imagine patients playing video games or doing all sorts of activities that help us understand how the brain sends the signals that trigger a response of the nervous system. It's not only a good way to map the human brain; think about what you can get with that type of information in order to help cure Alzheimer's or dementia down the road. So this is really something I'm passionate about: using technology to help all of those that are suffering from those really tough diseases. >>I'm a project manager for the project, and the idea is actually to scan six participants really intensively in both the MRI scanner and the MEG scanner and see if we can use human brain data to get closer to something called generalized intelligence. What we have in the AI world are systems that are mathematically, computationally built; often they do one task really, really well, but they struggle with other tasks. A really good example of this is video games. Artificial neural nets can often outperform humans in video games, but they don't really play in a natural way. An artificial neural net playing Mario Brothers beats the system by kind of gliding its way through as quickly as possible, and it doesn't, say, collect pennies. If you played Mario Brothers as a child, you know that collecting those coins is part of your game. And so the idea is to get artificial neural nets to behave more like humans. Transfer of knowledge is something that humans do really, really well and very naturally. It doesn't take 50,000 examples for a child to know the difference between a dog and a hot dog — one you eat, one you play with — but an artificial neural net can often take massive computational power and many examples before it understands that. >>Video games are awesome for this, because when you play a video game, you're doing a vision task instantly, you're doing a lot of planning and strategic thinking, but you're also taking decisions several times a second, and we record that. We try to see: can we, from brain activity, predict what people were doing? We can reach almost 90% accuracy with this type of architecture. >>I was the lead postdoc on this collaboration with Dell and Intel. She's working on a model called graph convolutional neural nets. >>We have been using two computing systems to compare the performance. >>The lab relies on the servers we have internally here — I have a GPU server — but what we really rely on is Compute Canada, and Compute Canada is just not powerful enough to run the models she was trying to run, so it would take her days, weeks; it would crash; we'd have to wait in line. Dell was visiting, and I was invited into the meeting very kindly, and they told us that they had started working with a new type of hardware to train our neural nets. Dell is using traditional CPUs, pairing them with a new type of memory developed by Intel, and their new CPU architectures are really optimized to do deep learning. So all of that sounded great, because we had this problem: we ran out of memory. >>The innovation lab means having access to experts who help answer questions immediately; that's not something to take for granted. >>We were able to train the architecture within 20 minutes, but before, to do the same thing on the GPU, we needed to wait almost three hours for each iteration. We were able to train the full original neural net. Dell has been really great, because any time we need more memory, we send an email, and Dell says: yeah, sure, no problem, we'll extend it — how much memory do you need? It's been really simple from our end, and I think it's really great to be at the edge of science and technology. We're not just doing the same old; we're pushing the boundaries. Often we don't know where we're going to be in six months. In the big data world, computing power makes a big difference. >>The second example I'd like to cover is the one we call the Data Accelerator. That's a partnership we have with the University of Cambridge in England. There we partnered with Intel and Cambridge, and we built, at the time, the number one IO500 storage solution. And it's pretty amazing, because it was built on standard building blocks — PowerEdge servers, Intel Xeon processors, some NVMe drives from our partners and Intel — and what we did is couple this system with very smart and elaborate software code that gives ultra-fast performance for customers looking for a fast front-end scratch to their HPC storage solutions. We're also very mindful that this innovation is great for others to leverage, so the software code will soon be available on GitHub. And, as I said, this was number one on the IO500 when it was initially released. >>Within Cambridge, we've always had a focus on opening up our technologies to UK industry, where we can encourage UK companies to take advantage of advanced research computing technologies. We have many customers in the fields of automotive, gas and life sciences who find our systems really help them accelerate their product development process. I'm the director of research computing at Cambridge University. We are a research computing cloud provider, but the emphasis is on the consulting and the processes around how to exploit that technology rather than the bare results. Our value is in how we help businesses use advanced computing resources rather than the provision.
We see increasingly more and more data being produced across a wide range of verticals — life sciences, astronomy, manufacturing. So the Data Accelerator was created as a component within our data center compute environment. Data processing is becoming a more and more central element within research computing. We're getting very large data sets; traditional spinning-disk file systems can't keep up, and we find applications being slowed down due to a lack of data. So the Data Accelerator was born to take advantage of new solid-state storage devices. We tried to work out how we could have a staging mechanism: keeping your data on spinning disk when it's not required, and pre-staging it on fast NVMe storage devices so it can feed the applications at the rate required for maximum performance. So we have the highest AI capability available anywhere in the UK, where we match AI compute performance with very high storage performance, because for AI, high-performance storage is a key element to get the performance up. Currently, the Data Accelerator is the fastest HPC storage system in the world; we are able to obtain 500 gigabytes a second read/write, with IOPS up in the 20 million range. We provide advanced computing technologies that allow some of the brightest minds in the world to really push scientific and medical research; we enable some of the greatest academics in the world to make tomorrow's discoveries. >>All right, welcome back. Jeff Frick here, and we're excited for this next segment. We're joined by Jeremy Rader. He is the GM of digital transformation and scale solutions for Intel Corporation. Jeremy, great to see you. >>Hey, thanks for having me. >>I love the flowers in the backyard. I thought maybe you ran over to the Japanese garden or the Rose Garden — two very beautiful places to visit in Portland. >>Yeah, you know, you only get them for a couple of weeks here, so we got the timing just right. >>Excellent. All right, so let's jump into it. This conversation is all about making AI real, and you guys are working with Dell — and not only Dell, right? There's the hardware and software, but a lot of these smaller solution providers. So what are some of the key attributes needed to make AI real for your customers out there? >>Yeah, so it's a complex space. When you can bring the best of the Intel portfolio — which is expanding a lot; it's not just the CPU anymore, you're getting into memory technologies, network technologies, and, a little less known, how many resources we have focused on the software side of things, optimizing frameworks and the key ingredients and libraries that you can stitch into that portfolio to really get more performance and value out of your machine learning and deep learning work. And so what we've really done here with Dell is start to bring a bunch of that portfolio together with Dell's capabilities, and then bring in that ISV partner, the software vendor, where we can really stitch things together and bring the most value out of that broad portfolio, ultimately easing the complexity of what it takes to deploy an AI capability. So a lot going on there: bring together the three-legged stool of the software vendor, the hardware vendor and Dell, and you get a really strong outcome. >>Right.
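Looking back at the graph convolutional neural nets mentioned in the Neuro collaboration above: a single GCN layer has a compact forward pass that can be written in a few lines. This is a generic sketch of the published GCN formulation — roughly H' = ReLU(D^-1/2 (A + I) D^-1/2 · H · W) — not the team's actual model; all names and sizes are illustrative:

```python
# Minimal sketch of one graph-convolution layer (Kipf & Welling style).
import numpy as np

rng = np.random.default_rng(0)

n_nodes, in_dim, out_dim = 5, 8, 4
A = rng.integers(0, 2, size=(n_nodes, n_nodes))
A = np.triu(A, 1); A = A + A.T              # random symmetric adjacency
A_hat = A + np.eye(n_nodes)                 # add self-loops
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)           # symmetric normalization
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

H = rng.normal(size=(n_nodes, in_dim))      # node features
W = rng.normal(size=(in_dim, out_dim))      # learnable weights

H_next = np.maximum(0, A_norm @ H @ W)      # one layer forward pass (ReLU)
print(H_next.shape)                         # (5, 4)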
>>So before we get to the solutions piece, let's dig a little bit into the Intel world. I don't know if a lot of people are aware that, obviously, you guys make CPUs — you've been making great CPUs forever — but I don't think they recognize all the other stuff you've added around the core CPU, in terms of actual libraries and ways to really optimize the Xeon processors to operate in an AI world. I wonder if you can take us a little bit below the surface on how that works. What are some examples of things you can do to get more from your Xeon processors for AI-specific applications and workloads? >>Yeah, well, there's a ton of software optimization that goes into this. Having the great CPU is definitely step one, but ultimately you want to get down into the libraries, like TensorFlow; we have data analytics acceleration libraries. That really allows you to get, again, under the covers a little bit and look at how we get the most out of the kinds of capabilities that are ultimately used in machine learning and deep learning, then bring that forward and enable it with our software vendors, so they can take advantage of those acceleration components and ultimately move to less training time — or it could be a cost factor. Those are the kinds of capabilities we want to expose to software vendors through these kinds of partnerships. >>Okay, and that's terrific. And I do think that's a big part of the story that a lot of people are probably not as aware of: there are a lot of these optimization opportunities that you guys have been leveraging for a while. So, shifting gears a little bit: AI and machine learning are all about the data, and in doing a little research for this, I found you on stage talking about some company that had 315 petabytes of data, 140,000 sources of that data, and — probably not a great quote — six months' access time to actually work with it. And the company you were referencing was Intel. So you guys know a lot about data: managing data, everything from your manufacturing to obviously supporting a global organization for IT, with a lot of complexity and secrets and good stuff. So what have you guys leveraged, as Intel, in the way you work with data and getting a good data pipeline, that's enabling you to put that into these other solutions you're providing to customers? >>Right. Well, it is absolutely a journey, and it doesn't happen overnight. We've seen it at Intel, and we see it with many of our customers that are on the same journey we've been on. And so this idea of building that pipeline really starts with what kinds of problems you're trying to solve: what are the big issues that are holding you back as a company? Where is that competitive advantage you're trying to get to? And then, ultimately, how do you build the structure to enable the right kind of pipeline for that data? Because that's what machine learning and deep learning are: that data journey.
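Jeremy's mention of data analytics acceleration libraries can be made concrete. One hedged example — assuming the scikit-learn-intelex package is installed; patch_sklearn is its documented entry point, and speedups vary by workload and hardware:

```python
# Sketch: accelerate standard scikit-learn estimators with Intel's oneDAL-based
# extension. Requires: pip install scikit-learn-intelex scikit-learn numpy
from sklearnex import patch_sklearn
patch_sklearn()                      # re-routes supported estimators to oneDAL

from sklearn.cluster import KMeans   # import AFTER patching
import numpy as np

X = np.random.default_rng(0).normal(size=(100_000, 16))
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])
```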
So really a lot of focus around how we can understand those business challenges and bring forward those kinds of capabilities, along the way through to where we structure our entire company around those assets — and then, ultimately, some of the partnerships that we're going to be talking about: these companies that are out there to help us really squeeze the most out of that data as quickly as possible, because otherwise it goes stale real fast, sits on the shelf, and you're not getting the value out of it. So yeah, we've been on the journey; it's a long journey. But ultimately we can take a lot of those learnings and apply them to our silicon technology, the software optimizations we're doing, and ultimately how we talk to our enterprise customers about how they can overcome some of the same challenges that we did. >>Well, let's talk about some of those challenges specifically, because I think part of what knocked big data — Hadoop, if you will — off the rails a little bit was that there's a whole lot that goes into it besides just doing the analysis: a lot of data practice, data collection, data organization, a whole bunch of things that have to happen before you can actually start to do the sexy stuff of AI. So what are some of those challenges, and how are you helping people get over these baby steps before they can really get into the deep end of the pool? >>Yeah, well, one is you have to have the resources. Do you even have the resources? If you can acquire those resources, can you keep them interested in the kind of work you're doing? So that's a big challenge — and we'll talk about how that fits into some of the partnerships we've been establishing in the ecosystem. You also get stuck in this POC doom loop, right? You finally get those resources, and they start to get access to that data we talked about; they start to play out some scenarios, theorize a little bit; maybe they show you some really interesting value; but it never seems to make its way into full production mode. And I think that is a challenge that has faced so many enterprises that are stuck in that loop. And so that's where we look at who's out there in the ecosystem that can help more readily move through that whole process of the evaluation that proves the ROI, the POC, and ultimately move that capability into production mode as quickly as possible. That, to me, is one of the fundamental aspects: if you're stuck in the POC, nothing's happening; this is not helping your company. We want to move things more quickly. >>Right, right. And let's just talk about some of these companies that you guys are working with, that you've got some reference architectures with: DataRobot, Grid Dynamics, H2O just down the road. So a lot of the companies we've worked with at theCUBE. And I think another part that's interesting — again, we can learn from the old days of big data — is generalized AI versus solution-specific AI. I think where there's a real opportunity is not AI for AI's sake, but AI applied to a specific solution, a specific problem, so that you have better chatbots, a better customer service experience, better something. So when you were working with these folks and trying to design solutions, what were some of the opportunities you saw to work with them to now have an applied application or solution, versus just kind of AI for AI's sake? >>Yeah, that could be anything from fraud detection in financial services, or, taking a step back and looking more horizontally, back to that data challenge: if you're stuck having built a fantastic data lake but haven't been able to pull anything back out of it, who are the companies out there that can help overcome some of those big data challenges and ultimately get you to where you don't have a data scientist spending 60% of their time on data acquisition and pre-processing? That's not where we want them. We want them building out that next theory, looking at the next business challenge, selecting the right models — but ultimately they have to do that as quickly as possible, so they can move that capability forward into the next phase. So really it's about that connection: looking at those problems or challenges across the whole end-to-end pipeline. And these companies, like DataRobot and H2O and the others, are all addressing specific challenges in the end-to-end; that's why they've bubbled up as ones we want to continue to collaborate with, because they can help enterprises overcome those issues more readily. >>Great. Well, Jeremy, thanks for taking a few minutes and giving us the Intel side of the story. It's a great company; it's been around forever. I worked there many, many moons ago — but that's a story for another time. Really appreciate it. >>All right, thanks a lot. >>So that's Jeremy, I'm Jeff Frick. So now it's time to go ahead and jump into the crowd chat: it's crowdchat.net/makeaireal. We'll see you in the chat, and thanks for watching.
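A final aside on the Cambridge Data Accelerator segment above: its core pre-staging idea — copy a job's input files from slow bulk storage onto fast NVMe scratch before the job reads them — can be sketched in a few lines. This is a generic illustration, not the Data Accelerator's actual code; the mount points and manifest format are hypothetical:

```python
# Sketch: pre-stage a job's input files from bulk storage to NVMe scratch.
import shutil
from pathlib import Path

BULK = Path("/bulk/project/dataset")        # slow spinning-disk tier (assumed)
SCRATCH = Path("/nvme_scratch/job_42")      # fast NVMe tier (assumed)

def prestage(manifest: list[str]) -> None:
    """Copy each listed file to scratch so the job reads at NVMe speed."""
    SCRATCH.mkdir(parents=True, exist_ok=True)
    for name in manifest:
        src, dst = BULK / name, SCRATCH / name
        if not dst.exists():                # skip files already staged
            shutil.copy2(src, dst)

prestage(["frames_000.h5", "frames_001.h5"])
```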

Published Date : Jun 3 2020

SUMMARY :

Boston, connecting with thought leaders all around the world. She is the corporate VP and GM. Ravi, great to see you as well. Good to see you as well. solutions, wonder if you can take us through that reference architectures and ready solutions so that the customer really doesn't have to Xeon family and what you guys are doing in the data center with this kind of new interesting thing called AI and And so if you think about needing to have your hardware foundation part of the intelligence that you can optimize better is so important, as you said, Lisa, and also Ravi on the solution we have driven into the PowerEdge servers, you know, using the latest of the Intel Xeon of AI and, you know, in machine-to-machine execution, right. That's the amount of transactions I mean, this is where I talked about, you know, How are you guys, you know, kind of embracing that world as you look But we also have, you know, Movidius-type processing for out at the edge. you know, kind of under all the layers, running data centers, running these workloads. and, you know, exposing the power of AI to business leaders or business the speed at which you have to utilize the data. So I wonder if you can talk about that approach and how you know to withdraw money, but we really don't know what really sits behind the ATM, and my point being that you The way we approach, you know, providing the building blocks, using the right technologies the brain sends the signals in order to trigger a response of the nervous know the difference between a dog and a hot dog: one you eat, one you play with. that video games are awesome, because when you play a video game, you're doing a vision task instantly. that we try to see. We can reach almost 90% accuracy with this Talk on this collaboration with Dell and Intel. to be able to run the models that she was trying to run, so it would take her days. They also So all of that the innovation lab having access to experts to help answer questions immediately. do the same thing on the GPU, we needed to wait almost three hours for each one do you need? That's a partnership that we have with the University of Cambridge, England. devices so that it can feed the applications at the rate required for maximum performance. I thought maybe you ran over to the Japanese garden or the Rose Ah, couple weeks here, so we get the timing just right. Um, and you guys are working with Dell and you're working with not only Dell, right? the Intel portfolio, which is expanding a lot, you know, it's not just the CPU anymore What are some of the examples of things you can do to get more from You know, that really allows you to get kind of again under the covers a little bit and look at it. So you know what have you guys leveraged, as Intel, in the way you work with data and getting And then ultimately, how do you build the structure to enable the right kind of pipeline for that is that kind of knocked big data, if you will — Hadoop, if you will — kind of off the rails. Yeah, well, you know, one is you have to have the resources, so, you know, do you even have the So a lot of the companies we've worked with with theCUBE and I think you know another that can help overcome some of those big data challenges and ultimately get you to where you we'll see you in the chat.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jeff Frick | PERSON | 0.99+
Jeff | PERSON | 0.99+
Jeremy | PERSON | 0.99+
Lisa Spelman | PERSON | 0.99+
Canada | LOCATION | 0.99+
Texas | LOCATION | 0.99+
Robbie | PERSON | 0.99+
Lee | PERSON | 0.99+
Portland | LOCATION | 0.99+
Xeon Group | ORGANIZATION | 0.99+
Lisa | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
Ravi | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
UK | LOCATION | 0.99+
60% | QUANTITY | 0.99+
Jeremy Raider | PERSON | 0.99+
Ravi Pinter | PERSON | 0.99+
Intel | ORGANIZATION | 0.99+
20 million | QUANTITY | 0.99+
Mar Tech | ORGANIZATION | 0.99+
50,000 examples | QUANTITY | 0.99+
Rob | PERSON | 0.99+
Mario Brothers | TITLE | 0.99+
six months | QUANTITY | 0.99+
Antigua | LOCATION | 0.99+
University of Cambridge | ORGANIZATION | 0.99+
Jersey | LOCATION | 0.99+
140,000 sources | QUANTITY | 0.99+
six participants | QUANTITY | 0.99+
315 petabytes | QUANTITY | 0.99+
three | QUANTITY | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
two companies | QUANTITY | 0.99+
500 gigabytes | QUANTITY | 0.99+
AHMAD | ORGANIZATION | 0.99+
Dell EMC | ORGANIZATION | 0.99+
each | QUANTITY | 0.99+
Cube Studios | ORGANIZATION | 0.99+
first example | QUANTITY | 0.99+
Both | QUANTITY | 0.99+
Memory Group | ORGANIZATION | 0.99+
two examples | QUANTITY | 0.99+
Cambridge University | ORGANIZATION | 0.98+
Rose Garden | LOCATION | 0.98+
today | DATE | 0.98+
both servers | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Boston | LOCATION | 0.98+
Intel Corporation | ORGANIZATION | 0.98+
Khalidiya | PERSON | 0.98+
second example | QUANTITY | 0.98+
one task | QUANTITY | 0.98+
80 | QUANTITY | 0.98+
intel | ORGANIZATION | 0.97+
Epsilon | ORGANIZATION | 0.97+
Rocket | PERSON | 0.97+
both | QUANTITY | 0.97+
Cube | ORGANIZATION | 0.96+

Making Artificial Intelligence Real With Dell & VMware


 

>>artificial intelligence. The words are full of possibility. Yet to many it may seem complex, expensive and hard to know where to get started. How do you make AI really for your business? At Dell Technologies, we see AI enhancing business, enriching lives and improving the world. Dell Technologies is dedicated to making AI easy, so more people can use it to make a real difference. So you can adopt and run AI anywhere with your current skill. Sets with AI Solutions powered by power edge servers and made portable across hybrid multi clouds with VM ware. Plus solved I O bottlenecks with breakthrough performance delivered by Dell EMC Ready solutions for HPC storage and Data Accelerator. And enjoy automated, effortless management with open manage systems management so you can keep business insights flowing across a multi cloud environment. With an AI portfolio that spans from workstations to supercomputers, Dell Technologies can help you get started with AI easily and grow seamlessly. AI has the potential to profoundly change our lives with Dell Technologies. AI is easy to adopt, easy to manage and easy to scale. And there's nothing artificial about that. Yeah, yeah, from >>the Cube Studios in Palo Alto and Boston >>connecting with >>thought leaders all around the world. This is a cube conversation. Hi, I'm Stew Minimum. And welcome to this special launch with our friends at Dell Technologies. We're gonna be talking about AI and the reality of making artificial intelligence real happy to welcome to the program. Two of our Cube alumni Rob, depending 90. He's the senior vice president of server product management and very Pellegrino vice president, data centric workloads and solutions in high performance computing, both with Dell Technologies. Thank you both for joining thanks to you. So you know, is the industry we watch? You know, the AI has been this huge buzz word, but one of things I've actually liked about one of the differences about what I see when I listen to the vendor community talking about AI versus what I saw too much in the big data world is you know, it used to be, you know Oh, there was the opportunity. And data is so important. Yes, that's really But it was. It was a very wonky conversation. And the promise and the translation of what has been to the real world didn't necessarily always connect and We saw many of the big data solutions, you know, failed over time with AI on. And I've seen this in meetings from Dell talking about, you know, the business outcomes in general overall in i t. But you know how ai is helping make things real. So maybe we can start there for another product announcements and things we're gonna get into. But Robbie Interior talk to us a little bit about you know, the customers that you've been seeing in the impact that AI is having on their business. >>Sure, Teoh, I'll take us a job in it. A couple of things. For example, if you start looking at, uh, you know, the autonomous vehicles industry of the manufacturing industry where people are building better tools for anything they need to do on their manufacturing both. For example, uh, this is a good example of where that honors makers and stuff you've got Xeon ut It's actually a world war balcony. Now it is using our whole product suite right from the hardware and software to do multiple iterations off, ensuring that the software and the hardware come together pretty seamlessly and more importantly, ingesting, you know, probably tens of petabytes of data to ensure that we've got the right. 
They're training and gardens in place. So that's a great example of how we are helping some of our customers today in ensuring that we can really meet is really in terms of moving away from just a morning scenario in something that customers are able to use like today. >>Well, if I can have one more, Ah Yanai, one of our core and more partners than just customers in Italy in the energy sector have been been really, really driving innovation with us. We just deployed a pretty large 8000 accelerator cluster with them, which is the largest commercial cluster in the world. And where they're focusing on is the digital transformation and the development of energy sources. And it's really important not be an age. You know, the plan. It's not getting younger, and we have to be really careful about the type of energies that we utilize to do what we do every day on they put a lot of innovation. We've helped set up the right solution for them, and we'll talk some more about what they've done with that cluster. Later, during our chat, but it is one of the example that is tangible with the appointment that is being used to help there. >>Great. Well, we love starting with some of the customer stories. Really glad we're gonna be able to share some of those, you know, actual here from some of the customers a little bit later in this launch. But, Robbie, you know, maybe give us a little bit as to what you're hearing from customers. You know, the overall climate in AI. You know, obviously you know, so many challenges facing, you know, people today. But you know, specifically around ai, what are some of the hurdles that they might need to overcome Be able to make ai. Really? >>I think the two important pieces I can choose to number one as much as we talk about AI machine learning. One of the biggest challenges that customers have today is ensuring that they have the right amount and the right quality of data to go out and do the analytics percent. Because if you don't do it, it's giggle garbage in garbage out. So the one of the biggest challenges our customers have today is ensuring that they have the most pristine data to go back on, and that takes quite a bit of an effort. Number two. A lot of times, I think one of the challenges they also have is having the right skill set to go out and have the execution phase of the AI pod. You know, work done. And I think those are the two big challenges we hear off. And that doesn't seem to be changing in the very near term, given the very fact that nothing Forbes recently had an article that said that less than 15% off, our customers probably are using AI machine learning today so that talks to the challenges and the opportunities ahead for me. All right, >>So, Ravi, give us the news. Tell us the updates from Dell Technologies how you're helping customers with AI today, >>going back to one of the challenges, as I mentioned, which is not having the right skin set. One of the things we are doing at Dell Technologies is making sure that we provide them not just the product but also the ready solutions that we're working with. For example, Tier and his team. We're also working on validated and things are called reference architectures. The whole idea behind this is we want to take the guesswork out for our customers and actually go ahead and destroying things that we have already tested to ensure that the integration is right. 
There's rightsizing attributes, so they know exactly the kind of a product that would pick up our not worry about me in time and the resources needed you get to that particular location. So those are probably the two of the biggest things we're doing to help our customers make the right decision and execute seamlessly and on time. >>Excellent. So teary, maybe give us a little bit of a broader look as to, you know, Dell's part participation in the overall ecosystem when it comes to what's happening in AI on and you know why is this a unique time for what's happening in the in the industry? >>Yeah, I mean, I think we all live it. I mean, I'm right here in my home, and I'm trying to ensure that the business continues to operate, and it's important to make sure that we're also there for our customers, right? The fight against covered 19 is eyes changing what's happening around the quarantines, etcetera. So Dell, as a participant not only in the AI the world that we live in on enabling AI is also a participant in all of the community's s. So we've recently joined the covered 19 High Performance Computing Consortium on. We also made a lot of resources available to researchers and scientists leveraging AI in order to make progress towards you're and potentially the vaccine against Corbyn. 19 examples are we have our own supercomputers in the lab here in Austin, Texas, and we've given access to some of our partners. T. Gen. Is one example. The beginning of our chat I mentioned and I So not only did they have barely deport the cluster with us earlier this year that could 19 started hitting, so they've done what's the right thing to do for community and humanity is they made the resource available to scientists in Europe on tack just down the road here, which had the largest I can't make supercomputer that we deployed with them to. Ai's doing exactly the same thing. So this is one of the real examples that are very timely, and it's it's it's happening right now we hadn't planned for it. A booth there with our customers, the other pieces. This is probably going to be a trend, but healthcare is going through and version of data you mentioned in the beginning. You're talking about 2.3000 exabytes, about 3000 times the content of the Library of Congress. It's incredible, and that data is useless. I mean, it's great we can We can put that on our great ice on storage, but you can also see it as an opportunity to get business value out of it. That's going to be we're a lot more resource is with AI so a lot happening here. That's that's really if I can get into more of the science of it because it's healthcare, because it's the industry we see now that our family members at the M. Ware, part of the Dell Technologies Portfolio, are getting even more relevance in the discussion. The industry is based on virtualization, and the M ware is the number one virtualization solution for the industry. So now we're trying to weave in the reality in the I T environment with the new nodes of AI and data science and HPC. So you will see the VM Ware just added kubernetes control plane. This fear Andi were leveraging that to have a very flexible environment On one side, we can do some data science on the other side. We can go back to running some enterprise class hardware class software on top of it. So this is is great. And we're capitalizing on it with validates solutions, validated design on. And I think that's going to be adding a lot of ah power in the hands of our customers and always based on their feedback. 
>>Yeah, if I may, just to build on that interesting comment that you made: we're actually looking at, and very shortly will be talking about, how we're going to have the ability to, for example, pre-load vSphere on our servers. That essentially means that we're going to cut down the time our customers need to go ahead and deploy on their sites. >>Yeah, excellent. There's definitely been very strong feedback from the community; we did videos around some of the vSphere 7 launch. You know, Thierry, we actually did an interview with you a while back at your big lab, and Jeff Frick got to see the supercomputers behind what you were doing. Maybe bring us inside a little bit as to some of the new pieces that help enable AI. It often gets lost on the industry; it's like, oh yeah, well, we've got the best hardware to accelerate or enable these kinds of workloads. So bring us into the engineering solution sets that are helping to make this a reality today. >>Yeah, and truly, Stu, you've been there. You've seen the engineers in the lab, and that's more than AI being real; that is doubly real, because we spend a lot of time analyzing workloads and customer needs. We have a lot of PhD engineers in there, and what we're working on right now is kind of the next wave of HPC enablement. As we all know, the consumption model, the way that we want to have access to resources, is evolving from something that is directly in front of us, a one-to-one ratio, to a one-to-many ratio as virtualization became more prevalent. GPUs have historically been allocated on a per-user basis, or sometimes in a slightly modified view to have more than one user per GPU. But with the addition of Bitfusion to the VMware portfolio, and Bitfusion now being part of vSphere, we're building up a GPU-as-a-service solution through a VMware validated design that we are launching, and that's going to give customers flexibility. And the key here is flexibility. We have the ability, as you know, with the VMware environment, to bring in also some security and some flexibility through moving the workloads, and, let's be honest, some ties into cloud models; we have our own set of partners there, and we all know the big players in the industry too. But it's all about flexibility and giving our customers what they need and what they expect. >>Yeah, Ravi, I guess that brings us to one of the key pieces we need to look at here: how do we manage across all of these environments, and how does AI fit into this whole discussion between what Dell and VMware are doing with things like vSphere, pulling in new workloads? >>Stu, actually, a couple of things. So there's really nothing artificial about the real intelligence that comes through with all the artificial intelligence we're working on. And one of the crucial things I think we need to ensure we talk about is that it's not just about saying here's a problem and here's our story. I think the crucial thing is that we're looking at it from an end-to-end perspective: everything from ensuring that we have the right workstations, the right servers, the right storage, making sure that it's all well protected, all the way to working with an ecosystem of software vendors. So first and foremost, that's the whole integration piece, making sure it's a fully realized, complete system.
But more importantly, it's also ensuring that we help our customers by taking the guesswork out, again. I can't emphasize enough that there are customers who are looking at different avenues of entry; for example, somebody will be looking at an FPGA, somebody else will be looking at GPUs. FPGAs, as you know, are great because their price points and power needs are a lot lower than the GPUs'. But on the flip side, there's a need for them to have a set of folks who can actually program them, right? That is why they're called field-programmable gate arrays: they're programmable in the field. My point in all this is that it's important that we actually provide a true end-to-end perspective, making sure that we're able to show the integration, show the value, and also provide the options, because it's really not a cookie-cutter approach where you can take a particular solution and think that it will fit the needs of every single customer; that doesn't even happen within the same industry, for that matter. So the flexibility that we provide, all the way to the services, is truly our intent. At Dell Technologies, you get the entire gamut of solutions, available for the customer to go out and pick and choose what suits their needs the best. >>Alright, well, Ravi and Thierry, thank you so much for the update. So we're going to turn it over to actually hear from some of your customers, talking about the power of AI from their viewpoint and how real these solutions are becoming. Loved the play on words there about enabling real artificial intelligence. Thanks so much for joining; after the customers, we're looking forward to the VMware discussion. >>We want to put robots into the world's dullest, deadliest and dirtiest jobs. We think that if we can have machines doing the work that puts people at risk, then we can allow people to do better work. Dell Technologies is the foundation for a lot of the work that we've done here. Every single piece of software that we develop is simulated dozens or hundreds of thousands of times, and having reliable compute infrastructure is critical for this. >>A lot of technology has matured to actually do something really useful that can be used by non-experts. We try to predict when a system fails. We try to turn business problems into images. And at the end of the day, now we have machines that learn how to speak a language from zero. >>Everything we do at Epsilon really is centered around data and our ability to get the right message to the right person at the right time. We apply machine learning and artificial intelligence, so in real time you can adjust those campaigns to ensure that you're getting the most optimized message. >>It is a joint venture between Volvo Cars and Autoliv. Our focus is automated driving and advanced driver assistance systems, and it's really based on safety, on how we can actually make lives better for you. People typically get warned or distracted in cars; if you can take those kinds of situations away, it will bring accidents down by about 70 to 80%. What I appreciate with Dell Technologies is the overall solution that they deliver, being able to deliver the full package; that has been a major differentiator compared to their competitors. >>Alright, welcome back. To help us dig into this discussion, I'm happy to welcome to the program Krish Prasad.
He is the senior vice president and general manager of the vSphere business; also joining is Josh Simons, chief technologist for the high performance computing group, both of them with VMware. Gentlemen, thanks so much for joining. >>Thank you for having us. >>All right, Krish. When VMware made the Bitfusion acquisition, everybody was looking at what this would do for the GPU space, as we're talking about things like AI and ML. So bring us up to speed as to the news today and what VMware is doing with Bitfusion. >>Today we have a big announcement. I'm excited to announce that we're taking the next big step in the AI, ML and modern application strategy: with this launch, Bitfusion is now being fully integrated with vSphere, right in the platform, and we'll be releasing this very shortly to the market. As you said, when we acquired Bitfusion a year ago, we had showcased its capabilities as part of the VMworld event, and at that time we laid out a strategy that positioned Bitfusion as the cornerstone of our capabilities in the platform for the AI and ML space. Since then, we have had many customers take a look at the technology, and we have had feedback from them as well as from partners and analysts, and the feedback has been tremendous. >>Excellent. Well, Krish, what does this then mean for customers? What's the value proposition that Bitfusion brings to vSphere? >>If you look at our customers, they are in the midst of a big journey in digital transformation, and basically what that means is customers are building a ton of applications, and most of those applications have some kind of data analytics or machine learning embedded in them. What this is doing in the hardware and infrastructure industry is driving a lot of innovation. So you see the advent of a lot of specialized accelerators: there are custom ASICs, FPGAs, and of course GPUs being used to accelerate the special algorithms that these AI/ML-type applications need. Unfortunately, in customer environments most of these specialized accelerators sit in a bare-metal kind of setup, so they're not taking advantage of virtualization and everything that it brings. With Bitfusion launching today, we are essentially doing for the accelerator space what we did for compute several years ago, and that is bringing virtualization to the accelerators. But we take it one step further: we give customers the ability to pool these accelerators and essentially decouple them from the server, so you can have a pool of these accelerators sitting in the network. Customers are able to then target their workloads and share the accelerators, get better utilization and a lot of cost improvement, and, in essence, run a smaller pool that they can use for a whole bunch of different applications across the enterprise. That is a huge value for our customers, and that's the tremendously positive feedback that we are getting, from partners as well as customers. >>Excellent. Well, I'm glad we've got Josh here to dig into some of these pieces. But before we get to you, Josh: Krish, part of this announcement is the partnership of VMware and Dell, so tell us about what the partnership is and the solutions for this launch. >>We have been working with Dell in the AI and ML space for a long time; we have a good partnership there. This just takes the partnership to the next level, and we will have solution support on some of the key
AI/ML-targeted servers, like the C4140 and the R740. Those are the servers we will be partnering with them on and providing solutions for. >>Excellent. So, Josh, we've watched for a long time: with various technologies it's "oh, it's not a fit for a virtualized environment," and then VMware does what it does, makes sure the performance is there and makes sure all the options are there. Bring us inside a little bit as to what this solution means for leveraging GPUs. >>Yeah. So actually, before I answer that question, let me say that the Bitfusion acquisition and the Bitfusion technology fit into a larger strategy at VMware around AI and ML that I think matches pretty nicely with the overall Dell strategy as well, in the sense that we are really focused on delivering AI/ML capabilities, the ability for our customers to run their AI and ML workloads, from Edge to Core to Cloud. And that means running them on CPUs, or running them on hardware accelerators like GPUs, whatever is really required by the customer. In this specific case, we're quite excited about the Bitfusion technology, as it really allows us, as Krish was describing, to extend our capabilities, especially in the deep learning space, where GPU accelerators are critically important. And so what this technology really brings to the table is the ability, as Krish was outlining, to pool those hardware resources together and then allow organizations to drive up the utilization of those GPU resources through that pooling, and also to increase the degree of sharing that's supported for the customer. >>Okay, Josh, take us in a little bit further as to how the mechanisms of Bitfusion work. >>Sure, yeah, that's a great question. So think of it this way: there is a client component and a server component. The server component is running on a machine that actually has the physical GPUs installed in it. The client machine, which is running the Bitfusion client software, is where the user, the data scientist, is actually running their machine learning application, but there's no GPU actually in that host. What is happening with the Bitfusion technology is that it is essentially intercepting the CUDA calls that are being made by that machine learning application and remoting those calls over to the Bitfusion server, then injecting them into the local GPU on the server. So we call it interposition, the ability to remote these calls, but it's actually much more sophisticated than that: there are a lot of underlying capabilities being deployed in terms of optimization, to take maximum advantage of the networking link that sits between the client machine and the server machine. But given all of that, once we've done it with Bitfusion, it's now possible for the data scientist to either consume multiple GPUs, a single GPU, or even fractional GPUs across that interconnect using the Bitfusion technology. >>Okay, maybe it would help to illustrate some of these technologies if you've got a couple of customer examples. >>Sure. So one example would be a retail customer I'm thinking of; actually, it's a grocery chain
that is deploying a large number of video cameras into their stores in order to do things like watch for pilfering, identify when store shelves should be restocked, and even look for cases where, for example, maybe a customer has fallen down in an aisle and someone needs to go and help. Those multiple video streams, and the multiple applications being run that consume the data from those video streams and do analytics and ML on them, would be perfectly suited to this type of environment, where you would like to have these multiple independent applications running but have them be able to efficiently share the hardware resources of the GPUs. Another example would be retailers who are deploying ML-powered checkout registers to help reduce fraud, customers who are buying things with fake barcodes, for example. In that case, you would not necessarily want to deploy a single dedicated GPU for every single checkout line. Instead, what you would prefer to do is have a pool of resources that each inference operation occurring within each one of those checkout lines could then consume collectively. Those would be two examples of the use of this kind of pooling technology. >>Okay, great. So, Josh, the last question for you: is this technology only for GPUs? And can you give us a little bit of a look forward as to what we should be expecting from the Bitfusion technology? >>Yeah. So currently the target is specifically NVIDIA GPUs with CUDA. The team, actually even prior to the acquisition, had done some work on enablement of FPGAs and had also done some work on OpenCL, which is a more open standard for device access. So what you will see over time is an expansion of the Bitfusion capabilities to embrace devices like FPGAs and the domain-specific ASICs that Krish was referring to earlier; that will roll out over time. But we are starting with the NVIDIA GPU, which totally makes sense, since that is the primary hardware accelerator for deep learning currently. >>Excellent. Well, Josh and Krish, thank you so much for the updates. To the audience: if you're watching this live, please jump into the CrowdChat and ask your questions; if you're watching this on demand, you can also go to crowdchat.net/makeaireal to see the conversation that we had. Thanks so much for joining. >>Thank you very much. >>Thank you. >>Managing your data center requires around-the-clock attention. Dell EMC OpenManage Mobile enables IT administrators to monitor data center issues and respond rapidly to unexpected events, anytime, anywhere. OpenManage Mobile provides a wealth of features within a comprehensive user interface, including server configuration, push notifications, remote desktop, augmented reality and more. The latest release features an updated user interface, power and thermal policy review, emergency power reduction, and internal storage monitoring. Download OpenManage Mobile today.
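Stepping back from that segment: Josh's description of Bitfusion boils down to a classic interposition-and-remoting pattern, where the client intercepts accelerator API calls and ships them to a server that owns the physical GPUs. The toy sketch below illustrates only the shape of that idea; the function names and wire format are invented for the example, and the real Bitfusion product interposes the actual CUDA API with far more sophisticated optimization of the network link.

```python
# Toy sketch of API interposition and remoting -- NOT Bitfusion's code or protocol.
import json
import socket

def serve(host: str = "localhost", port: int = 9999) -> None:
    """GPU-side server: executes remoted 'accelerator calls' as they arrive."""
    handlers = {
        # Stand-in for real GPU work; a real server would call into CUDA here.
        "vector_add": lambda a, b: [x + y for x, y in zip(a, b)],
    }
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:  # one JSON-encoded call per line
                call = json.loads(line)
                result = handlers[call["fn"]](*call["args"])
                stream.write(json.dumps({"result": result}) + "\n")
                stream.flush()

class RemoteGPU:
    """Client-side proxy: the application 'sees' a GPU that is not local."""
    def __init__(self, host: str = "localhost", port: int = 9999) -> None:
        self._stream = socket.create_connection((host, port)).makefile("rw")

    def __getattr__(self, fn_name):
        def remote_call(*args):  # intercept, forward, wait for the result
            self._stream.write(json.dumps({"fn": fn_name, "args": list(args)}) + "\n")
            self._stream.flush()
            return json.loads(self._stream.readline())["result"]
        return remote_call

# With serve() running on the GPU host:
#   gpu = RemoteGPU(); gpu.vector_add([1, 2], [3, 4])   # -> [4, 6]
```

Because every call funnels through the server, the server can schedule calls from many clients onto one physical device, which is what makes the fractional-GPU sharing Josh mentions possible.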

Published Date : Jun 2 2020


VxRail: Taking HCI to Extremes


 

>> Announcer: From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi, I'm Stu Miniman, and welcome to this special presentation. We have a launch from Dell Technologies: updates from the VxRail family. We're going to do things a little bit different here. We actually have a launch video from Shannon Champion of Dell Technologies, and the way we do things a lot of times is, analysts get a little preview, and when you're watching things you might have questions on them. So, rather than me just watching it, or you watching it yourself, I actually brought in a couple of Dell Technologies experts, two of our Cube alumni; happy to welcome you back to the program: Jon Siegal, he is the Vice President of Product Marketing, and Chad Dunn, who's the Vice President of Product Management, both of them with Dell Technologies. Gentlemen, thanks so much for joining us. >> Good to see you, Stu. >> Great to be here. >> All right, and so what we're going to do is we're going to be rolling the video here. I've got a button I'm going to press, Andrew will stop it, and then we'll kind of dig in a little bit and go into some questions. When we're all done, we're actually holding a crowd chat, where you will be able to ask your questions, talk to the experts and everything. And so, a little bit different way to do a product announcement; hope you enjoy it. And with that, it's VxRail: taking HCI to the extremes is the theme. We'll see what that means and everything, but without any further ado, let's let Shannon take the video away. >> Hello, and welcome. My name is Shannon Champion, and I'm looking forward to taking you through what's new with VxRail. Let's get started. We have a lot to talk about. Our launch covers new announcements addressing use cases across the Core, Edge and Cloud, and spans both new hardware platforms and options, as well as the latest in software innovations. So let's jump right in. Before we talk about our announcements, let's talk about where customers are adopting VxRail today. First of all, on behalf of the entire Dell Technologies and VxRail teams, I want to thank each of our over 8,000 customers, big and small, in virtually every industry, who've chosen VxRail to address a broad range of workloads, deploying nearly 100,000 nodes today. Thank you. Our promise to you is that we will add new functionality, improve serviceability, and support new use cases, so that we deliver the most value to you, whether in the Core, at the Edge or for the Cloud. In the Core, VxRail from day one has been a catalyst to accelerate IT transformation. Many of our customers started here, and many will continue to leverage VxRail to simply extend and enhance their VMware environment. Now we can support even more demanding applications, such as in-memory databases like SAP HANA, and more AI and ML applications, with support for more and more powerful GPUs. At the Edge, video surveillance, which also uses GPUs, by the way, is an example of a popular use case leveraging VxRail alongside external storage. And right now we all know the enhanced role that IT is playing, and as it relates to VDI, VxRail has always been a great option for that. In the Cloud, it's all about Kubernetes, and how Dell Technologies Cloud platform, which is VCF on VxRail, can deliver consistent infrastructure for both traditional and Cloud native applications. And we're doing that together with VMware.
VxRail is the only jointly engineered HCI system built with VMware for VMware environments, designed to enhance the native VMware experience. This joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers. >> Alright, so Shannon talked a bit about the important role of IT right now, of course, with the global pandemic going on. It's really calling on essential things and putting platforms to the test. So, I'd really love to hear what both of you are hearing from customers. Also, VDI, of course: in the early days it was HCI-only-does-VDI; now we know there are many solutions, but remote work is putting that back front and center. So, Jon, why don't we start with you. (muffled speaking) >> Absolutely. So first of all, Stu, thank you. I want to do a shout out to our VxRail customers around the world. It's really been humbling, inspiring, and just amazing to see the impact our VxRail customers around the world are having on human progress here. Just for a few examples: there are genomics companies that we have running VxRail that have rolled out testing at scale. We also have research universities out in the Netherlands doing antibody detection. The US Navy has stood up a floating hospital to, of course, care for those in need. So "we are here to help" has been our message to our customers, but it's amazing to see how much they're helping society during this, so just a pleasure there. But as you mentioned, just to hit on the VDI comments: to your point, HCI, VxRail, VDI, that was an initial use case years ago, and it's been great to see how many of our existing VxRail customers have been able to pivot very quickly, leveraging VxRail to help bring their remote workforce online and support them with their existing VxRail, because VxRail is flexible and agile, able to support those multiple workloads. And in addition to that, we've also rolled out some new VDI bundles to make it simpler and more cost-effective for customers, catering to everything from knowledge workers to multimedia workers, you name it, from 250 desktops up to 1,000. But again, back to your point: VxRail and HCI are well beyond VDI; it crossed the chasm a couple of years ago, actually, and VDI now is less than a third of the typical workloads for our customers out there. It supports the range of workloads that you heard from Shannon, whether it's video surveillance, whether it's general purpose, all the way to mission-critical applications now with SAP HANA. So this has changed the game for sure, and the range of workloads and the flexibility of VxRail are really helping our existing customers during this pandemic. >> Yeah, I agree with you, Jon. We've seen customers really embrace HCI for a number of workloads in their environments, from the ones that we all knew and loved back in the initial days of HCI to the mission-critical things, and now to Cloud native workloads as well, and the sort of efficiencies that customers are able to get from HCI. And specifically, VxRail gives them that ability to pivot when these, shall we say, unexpected circumstances arise. And I think that that's informing their decisions and their opinions on what their IT strategies look like as they move forward. They want that same level of agility and ability to react quickly with their overall infrastructure. >> Excellent.
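As an aside, Jon's bundle range of 250 up to 1,000 desktops is the kind of thing you can sanity-check with back-of-envelope math. The per-node densities below are invented assumptions for illustration, not Dell sizing guidance; real sizing depends on user profile, vCPU and memory per desktop, and failover headroom.

```python
# Back-of-envelope VDI cluster sizing -- illustrative assumptions only.
import math

DESKTOPS_PER_NODE = {"knowledge": 150, "multimedia": 60}  # assumed densities

def nodes_needed(desktops: int, profile: str, n_plus_one: bool = True) -> int:
    """Estimate node count for a desktop count and user profile."""
    base = math.ceil(desktops / DESKTOPS_PER_NODE[profile])
    return base + 1 if n_plus_one else base  # keep one node of failover headroom

print(nodes_needed(250, "knowledge"))     # small knowledge-worker bundle -> 3
print(nodes_needed(1000, "multimedia"))   # heavier multimedia users -> 18
```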
Now I want to get into the announcements. One note first: your team gave me access to the CIO from the city of Amarillo, so maybe my team can dig up that footage, and we'll talk about how fast they pivoted, using VxRail to really spin things up fast. So let's hear the announcements first, and then we definitely want to share that customer story a little bit later. So let's get to the actual news that Shannon's going to share. >> Okay, now what's new? I am pleased to announce a number of exciting updates and new platforms to further enable IT modernization across Core, Edge and Cloud. I will cover each of these announcements in more detail, demonstrating how only VxRail can offer the breadth of platform configurations, automation, orchestration and Lifecycle Management across a fully integrated hardware and software full stack, with consistent, simplified operations, to address the broadest range of traditional and modern applications. I'll start with hybrid Cloud and recap what you may have seen in the Dell Technologies Cloud announcements just a few weeks ago related to VMware Cloud Foundation on VxRail. Then I'll cover two brand new VxRail hardware platforms and additional options, and finally circle back to talk about the latest enhancements to our VxRail HCI system software capabilities for Lifecycle Management. Let's get started with our new Cloud offerings based on VxRail. VxRail is the HCI foundation for Dell Technologies Cloud Platform, bringing automation and financial models similar to public Cloud to on-premises environments. VMware recently introduced Cloud Foundation 4, which is based on vSphere 7.0. As you likely know by now, vSphere 7.0 was definitely an exciting and highly anticipated release. In keeping with our synchronous release commitment, we introduced VxRail 7.0, based on vSphere 7.0, in late April, which was within 30 days of VMware's release. Two key areas that VMware focused on were embedding containers and Kubernetes into vSphere, unifying them with virtual machines, and, second, improving the work experience for vSphere administrators with vSphere Lifecycle Manager, or vLCM. I'll address the second point a bit, in terms of how VxRail fits in, in a moment. With VCF 4 with Tanzu, based on vSphere 7.0, customers now have access to a hybrid Cloud platform that supports native Kubernetes workloads and management, as well as your traditional VM-based workloads. So containers are now first-class citizens of your private Cloud alongside traditional VMs, and this is now available with VCF 4.0 on VxRail 7.0. VxRail's tight integration with VMware Cloud Foundation delivers a simple and direct path not only to the hybrid Cloud, but also to deliver Kubernetes at Cloud scale with one complete automated platform. The second Cloud announcement is also exciting. Recent VCF networking advancements have made it easier than ever to get started with hybrid Cloud, because we're now able to offer a more accessible consolidated architecture, and with that, Dell Technologies Cloud platform can now be deployed with a four-node configuration, lowering the cost of an entry-level hybrid Cloud. This enables customers to start smaller and grow their Cloud deployment over time. VCF on VxRail can now be deployed in two different ways. For small environments, customers can utilize a consolidated architecture, which starts with just four nodes.
Since the management and workload domains share resources in this architecture, it's ideal for getting started with an entry-level Cloud to run general-purpose virtualized workloads with a smaller entry point, both in terms of required infrastructure footprint as well as cost, but still with a consistent Cloud operating model. For larger environments, where dedicated resources and role-based access control to separate different sets of workloads are usually preferred, you can choose to deploy a standard architecture, which starts at eight nodes, for independent management and workload domains. A standard implementation is ideal for customers running applications that require dedicated workload domains; that includes Horizon VDI and vSphere with Kubernetes. >> Alright, Jon, there's definitely been a lot of interest in our community around everything that VMware is doing with vSphere 7.0. We understand that if you want to use the Kubernetes piece, it's VCF, as we've seen in the announcements, and Dell is partnering in there. Help us connect that story between the VMware strategy and how they talk about Cloud, and where does VxRail fit in that overall Dell Tech Cloud story? >> Absolutely. So first of all, Stu, VxRail of course is integral to the Dell Tech Cloud strategy: VCF on VxRail equals the Dell Tech Cloud platform, and this is our flagship on-prem Cloud offering, enabling operational consistency across any Cloud, whether it's on-prem, at the Edge or in the public Cloud. And we've seen the Dell Tech Cloud platform embraced by customers for a couple of key reasons. One is that it offers the fastest hybrid Cloud deployment in the market, and this is really thanks to a new subscription offer that we're now offering out there, where in less than 14 days it can be stood up and running. And really, the Dell Tech Cloud does bring a lot of flexibility in terms of consumption models overall when it comes to VxRail. Secondly, I would say, is fast and easy upgrades. This is what VxRail brings to the table for all workloads, and it's especially critical in the Cloud: the full automation of Lifecycle Management across the hardware and software stack, across the VMware software stack and the Dell software and hardware supporting it. Together, this enables essentially the third thing, which is that customers can just relax. They can rest assured that their infrastructure will be continuously validated, and always be in a continuously validated state. And those three value propositions together really fit well with any on-prem Cloud. Now, you take what Shannon just mentioned, and the fact that now you can build and run modern applications on the same VxRail infrastructure alongside traditional applications: this is a game changer. >> Yeah, I love it. I remember in the early days, talking with Chad about HCI, how does that fit in with the Cloud discussion? And the line I've used the last couple of years is: modernize the platform, then you can modernize the application. So as companies are doing their full modernization, this plays into what you're talking about. All right, let's let Shannon continue; we can get some more in before we dig into more analysis. >> That's good. >> Let's talk about new hardware platforms and updates that result in literally thousands of potential new configuration options, covering a wide breadth of modern and traditional application needs across a range of actual use cases.
First up, I am incredibly excited to announce a brand new Dell EMC VxRail series: the D Series. This is a ruggedized, durable platform that delivers the full power of VxRail for workloads at the Edge, in challenging environments, or for space-constrained areas. VxRail D Series offers the same compelling benefits as the rest of the VxRail portfolio, with simplicity, agility and lifecycle management, but in a lightweight, short-depth (only 20 inches), durable form factor that's extremely temperature-resilient, shock-resistant, and easily portable. It even meets milspec standards. That means you have the full power of lifecycle automation with VxRail HCI system software and 24-by-seven single point of support, enabling you to rapidly react to business needs no matter the location or how harsh the conditions. So whether you're deploying a data center at a mobile command base, running real-time GPS mapping on the go, or implementing video surveillance in remote areas, you can ensure availability, integrity and confidence for every workload with the new VxRail ruggedized D Series. >> All right, Chad, we would love for you to bring us in a little bit on what customer requirements drove bringing this to market. I remember seeing Dell servers ruggedized before; of course, Edge is a really important growth area to build on what Jon was talking about with Cloud. So, Chad, bring us inside: what was driving this piece of the offering? >> Sure, Stu. Yeah, having hardware platforms that can go out into some of these remote locations is really important, and that's being driven by the fact that customers are looking for compute performance and storage out at some of these Edges or some of the more exotic locations, whether that's manufacturing plants, oil rigs, submarine ships, military applications, places that we've never heard of. But it's also about extending the operational simplicity of the way you're managing the data center that has your VxRails: you're managing your Edges the same way, using the same set of tools, and you don't need to learn anything else. So operational simplicity is absolutely key here. But in those locations, you can't take a product that's designed for a data center, where you're definitely controlling power, cooling and space, and take it to some of these places where you get sand blowing or sub-zero temperatures; it could be Baghdad or it could be Ketchikan, Alaska. So we built this D Series to be able to go to those extreme locations, with extreme heat, extreme cold, extreme altitude, but still offer that operational simplicity. Now, military is one of those applications for the rugged platform. If you look at the resistance that it has to heat, it operates in a 45 degrees Celsius, or 113 degrees Fahrenheit, range, but it can do an excursion up to 55 C, or 131 degrees Fahrenheit, for up to eight hours. It's also resistant to sand, dust and vibration, and it's very lightweight and short-depth; in fact, it's only 20 inches deep. This is the smallest form factor, obviously, that we have in the VxRail family. And it's also built to be able to withstand sudden shocks: it's certified to withstand 40 Gs of shock, and operation at 15,000 feet of elevation. Pretty high. That's sort of where skydivers go when they want the real thrill of skydiving, where you actually need oxygen at that altitude. They're milspec-certified: MIL-STD-810G, which I keep right beside my bed and read every night.
And it comes with a VxRail STIG hardening package, with packaging scripts so that you can automatically lock down the VxRail environment. And we've got a few other certifications on the roadmap now, for naval shock requirements, EMI and radiation immunity. >> Yeah, it's funny, I remember when HCI first launched, it was like, "Oh, well, everything's going to white boxes, and it's going to be massive, no differentiation between everything out there." If you look at what you're offering, and if you look at how public Clouds build their things, as I've called it for a few years, there's pure optimization: you need the scale, you need similarities, but you need to fit some very specific requirements in lots of places. So, interesting stuff. Yeah, certifications always keep your teams busy. Alright, let's get back to Shannon for more of the announcements. >> We are also introducing three other hardware-based additions. First, a new VxRail E Series model based, for the first time, on AMD EPYC processors. These single-socket, 1U nodes offer dual-socket performance, with CPU options that scale from eight to 64 cores, up to a terabyte of memory, and multiple storage options, making it an ideal platform for desktop VDI, analytics and computer-aided design. Next, the addition of the latest Nvidia Quadro RTX GPUs brings the most significant advancement in computer graphics in over a decade to professional workflows. Designers and artists across industries can now expand the boundary of what's possible, working with the largest and most complex graphics rendering, deep learning and visual computing workloads. And Intel Optane DC persistent memory is here, and it offers high performance and significantly increased memory capacity with data persistence at an affordable price. Data persistence is a critical feature that maintains data integrity even when power is lost, enabling quicker recovery and less downtime. With support for Intel Optane DC persistent memory, customers can expand memory-intensive workloads and use cases like SAP HANA. Alright, let's finally dig into our HCI system software, which is the core differentiation for VxRail regardless of your workload or platform choice. Our joint engineering with VMware and investments in VxRail HCI system software innovation together deliver an optimized operational experience at reduced risk for our customers. Under the covers, VxRail offers best-in-class hardware married with VMware HCI software, either vSAN or VCF. But what makes us different stems from our investments to integrate the two. Dell Technologies has a dedicated VxRail team of about 400 people to build, market, sell and support a fully integrated hyperconverged system. That team has also developed our unique VxRail HCI system software, which is a suite of integrated software elements that extend VMware native capabilities to deliver a seamless, automated operational experience that customers cannot find elsewhere. The key components of VxRail HCI system software, shown around the arc here, include VxRail Manager, full-stack lifecycle management, ecosystem connectors, and support. I don't have time to get into all the details of these elements today, but if you're interested in learning more, I encourage you to meet our experts, and I will tell you how to do that in a moment. I touched on vLCM being a key feature of vSphere 7.0 earlier, and I'd like to take the opportunity to expand on that a bit in the context of VxRail Lifecycle Management.
vLCM adds valuable automation to the execution of updates for customers, but it doesn't eliminate the manual work still needed to define and package the updates and validate all of the components prior to applying them. With VxRail, customers have all of these areas addressed automatically on their behalf, freeing them to put their time into other important functions for their business. Customers tell us that lifecycle management continues to be a major source of the maintenance effort they put into their infrastructure, that it tends to overburden IT staff, that it can cause disruptions to the business if not managed effectively, and that it isn't the most efficient economically. Automation of Lifecycle Management in VxRail results in the utmost simplicity from a customer experience perspective, and offers operational freedom from maintaining infrastructure. But as shown here, our customers not only realize greater IT team efficiencies; they have also reduced downtime, with fewer unplanned outages, and reduced overall cost of operations. With VxRail HCI system software, intelligent Lifecycle Management upgrades of the fully integrated hardware and software stack are automated, keeping clusters in continuously validated states while minimizing risks and operational costs. How do we ensure continuously validated states for VxRail? The VxRail labs execute an extensive, automated, repeatable process on every firmware and software upgrade and patch to ensure clusters are in continuously validated states of the customer's choosing across their VxRail environment. The VxRail labs are constantly testing, analyzing, optimizing and sequencing all of the components in the upgrade to execute in a single package for the full stack. All the while, VxRail is backed by Dell EMC's world-class services and support, with a single point of contact for both hardware and software. IT productivity skyrockets with single-click, non-disruptive upgrades of the fully integrated hardware and software stack, without the need to do extensive research and testing, taking you to the next VxRail version of your choice while always staying in a continuously validated state. You can also confidently execute automated VxRail upgrades no matter what hardware generation or node types are in the cluster; they don't have to all be the same. And upgrades with VxRail are faster and more efficient with leapfrogging: simply choose any VxRail version you desire, and be assured you will get there in a validated state while seamlessly bypassing any release in between. Only VxRail can do that. >> All right, so Chad, the lifecycle management piece that Shannon was just talking about is not the sexiest; it's often underappreciated. There's not only the years of experience, but the continuous work you're doing. It reminds me of the early vSAN deployments versus VxRail: jointly developed, jointly tested between Dell and VMware. So bring us inside why, in 2020, Lifecycle Management is still a very important piece, especially in the VxRail family line. >> Yes, Stu, I think it's sexy, but I'm a pretty big nerd. (all laughing) Yeah, this has really always been our bread and butter, and in fact it gets even more important the larger the deployments become, when you start to look at data centers full of VxRails and all the different hardware, software and firmware combinations that could exist out there.
It's really the value that you get out of that VxRail HCI system software that Shannon was talking about, and how it's optimized around the VMware use case, very tightly integrated with each VMware component, of course, and the intelligence of being able to do all the firmware, all of the drivers, and all the software together, that delivers tremendous value to our customers. But to deliver that, we really need to make a fairly large investment. So, as Shannon mentioned, we run about 25,000 hours of testing across each major release, and for patches and express patches it's about 7,000 hours each. Obviously there's a lot of parallelism, and we're always developing new test scenarios for each release that we need to build in as we introduce new functionality. And one of the key things we're able to do, as Shannon mentioned, is to leapfrog releases and get you to that next validated state. We've got about 100 engineers just working on creating and executing those test cases on a continuous basis, and obviously a huge amount of automation. And we talked about the investment to execute those tests: that's worth about $60 million of investment in our lab. In fact, we've got just over 2,000 VxRail units in our testbed across the US, Shanghai in China, and Cork, Ireland. So a massive amount of testing of each of those components to make sure that they operate together in a validated state. >> Yeah, well, absolutely. It's super important, not only for the day one but the day two deployments. But I think this is actually a great place for us to bring in that customer that Dell gave me access to. So we've got the CIO of Amarillo, Texas; he was an existing VxRail customer, and he's going to explain how he needed to react really fast to support the work-from-home initiative, as well as we get to hear, in his words, the value of what Lifecycle Management means. So Andrew, if we could queue up that customer segment, please?
It really overburdened our staff to cause disruption to business. That wasn't cost efficient. And then simple things like, I've worked for multi billion dollar companies where we had massive QA environments that replicated production, simply can't afford that at local government. Having this sort of environment lets me do a scaled down QA environment and still get the benefit of rolling out non disruptive change. As I said earlier, it's allowed us to take all of those cycles that we were spending on Lifecycle Management because it's greatly simplified, and move those resources and rescale them in other areas where we can actually have more impact on the business. It's hard to be innovative when 100% of your cycles are just keeping the ship afloat. >> All right, well, nothing better than hearing it straight from the end user, public sector reacting very fast to the COVID-19. And, if you heard him he said, if this is his, before he had run this project, he would not have been able to respond. So I think everybody out there understands, if I didn't actually have access to the latest technology, it would be much harder. All right, I'm looking forward to doing the CrowdChat letting everybody else dig in with questions and get follow up but a little bit more, I believe one more announcement he can and got for us though. Let's roll the final video clip. >> In our latest software release VxRail 4.7.510, We continue to add new automation and self service features. New functionality enables you to schedule and run upgrade health checks in advance of upgrades, to ensure clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade windows, as they can be assured the clusters will seamlessly upgrade within that window. Of course, running health checks on a regular basis also helps ensure that your clusters are always ready for unscheduled patches and security updates. We are also offering more flexibility and getting all nodes or clusters to a common release level with the ability to reimage nodes or clusters to a specific VxRail version, or down rev one or more nodes that may be shipped at a higher rate than the existing cluster. This enables you to easily choose your validated state when adding new nodes or repurposing nodes in a cluster. To sum up all of our announcements, whether you are accelerating data sets modernization extending HCI to harsh Edge environments, deploying an on-premises Dell Technologies Cloud platform to create a developer ready Kubernetes infrastructure. VxRail is there delivering a turn-key experience that enables you to continuously innovate, realize operational freedom and predictably evolve. VxRail provides an extensive breadth of platform configurations, automation and Lifecycle Management across the integrated hardware and software full stack and consistent hybrid Cloud operations to address the broadest range of traditional and modern applications across Core, Edge and Cloud. I now invite you to engage with us. First, the virtual passport program is an opportunity to have some fun while learning about VxRail new features and functionality and sCore some sweet digital swag while you're at it. Delivered via an augmented reality app. All you need is your device. So go to vxrail.is/passport to get started. 
And secondly, if you have any questions about anything I talked about or want a deeper conversation, we encourage you to join one of our exclusive VxRail Meet The Experts sessions available for a limited time. First come first served, just go to vxrail.is/expertsession to learn more. >> All right, well, obviously, with everyone being remote, there's different ways we're looking to engage. So we've got the CrowdChat right after this. But Jon, give us a little bit more as to how Dell's making sure to stay in close contact with customers and what you've got for options for them. >> Yeah, absolutely. So as Shannon said, so in lieu of not having done Tech World this year in person, where we could have those great in-person interactions and answer questions, whether it's in the booth or in meeting rooms, we are going to have these Meet The Experts sessions over the next couple weeks, and we're going to put our best and brightest from our technical community and make them accessible to everyone out there. So again, definitely encourage you. We're trying new things here in this virtual environment to ensure that we can still stay in touch, answer questions, be responsive, and really looking forward to, having these conversations over the next couple of weeks. >> All right, well, Jon and Chad, thank you so much. We definitely look forward to the conversation here and continued. If you're here live, definitely go down below and do it if you're watching this on demand. You can see the full transcript of it at crowdchat.net/vxrailrocks. For myself, Shannon on the video, Jon, Chad, Andrew, man in the booth there, thank you so much for watching, and go ahead and join the CrowdChat.

Published Date : May 27 2020


VxRail: Taking HCI to Extremes


 

>> Announcer: From the Cube studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi, I'm Stu Miniman. And welcome to this special presentation. We have a launch from Dell Technologies updates from the VxRail family. We're going to do things a little bit different here. We actually have a launch video Shannon Champion, of Dell Technologies. And the way we do things a lot of times, is, analysts get a little preview or when you're watching things. You might have questions on it. So, rather than me just wanting it, or you wanting yourself I actually brought in a couple of Dell Technologies expertS two of our Cube alumni, happy to welcome you back to the program. Jon Siegal, he is the Vice President of Product Marketing, and Chad Dunn, who's the Vice President of Product Management, both of them with Dell Technologies. Gentlemen, thanks so much for joining us. >> Good to see you Stu. >> Great to be here. >> All right, and so what we're going to do is we're going to be rolling the video here. I've got a button I'm going to press, Andrew will stop it here and then we'll kind of dig in a little bit, go into some questions when we're all done. We're actually holding a crowd chat, where you will be able to ask your questions, talk to the experts and everything. And so a little bit different way to do a product announcement. Hope you enjoy it. And with that, it's VxRail. Taking HCI to the extremes is the theme. We'll see what that means and everything. But without any further ado, let's let Shannon take the video away. >> Hello, and welcome. My name is Shannon Champion, and I'm looking forward to taking you through what's new with VxRail. Let's get started. We have a lot to talk about. Our launch covers new announcements addressing use cases across the Core, Edge and Cloud and spans both new hardware platforms and options, as well as the latest in software innovations. So let's jump right in. Before we talk about our announcements, let's talk about where customers are adopting VxRail today. First of all, on behalf of the entire Dell Technologies and VxRail teams, I want to thank each of our over 8000 customers, big and small in virtually every industry, who've chosen VxRail to address a broad range of workloads, deploying nearly 100,000 nodes today. Thank you. Our promise to you is that we will add new functionality, improve serviceability, and support new use cases, so that we deliver the most value to you, whether in the Core, at the Edge or for the Cloud. In the Core, VxRail from day one has been a catalyst to accelerate IT transformation. Many of our customers started here and many will continue to leverage VxRail to simply extend and enhance your VMware environment. Now we can support even more demanding applications such as In-Memory databases, like SAP HANA, and more AI and ML applications, with support for more and more powerful GPUs. At the Edge, video surveillance, which also uses GPUs, by the way, is an example of a popular use case leveraging VxRail alongside external storage. And right now we all know the enhanced role that IT is playing. And as it relates to VDI, VxRail has always been a great option for that. In the Cloud, it's all about Kubernetes, and how Dell Technologies Cloud platform, which is VCF on VxRail can deliver consistent infrastructure for both traditional and Cloud native applications. And we're doing that together with VMware. 
VxRail is the only jointly engineered HCI system built with VMware for VMware environments, designed to enhance the native VMware experience. This joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers. >> Alright, so Shannon talked a bit about the important role of IT. Of course right now, with the global pandemic going on, it's really calling on essential things, putting platforms to the test. So, I really love to hear what both of you are hearing from customers. Also, VDI, of course, in the early days, it was, HCI-only-does-VDI. Now, we know there are many solutions, but remote work is putting that back front and center. So, Jon, why don't we start with you, as to what is (muffled speaking) >> Absolutely. So first of all, Stu, thank you, I want to do a shout out to our VxRail customers around the world. It's really been humbling, inspiring, and just amazing to see the impact of our VxRail customers around the world and what they're having on human progress here. Just for a few examples, there are genomics companies that we have running VxRail that have rolled out testing at scale. We also have research universities out in the Netherlands doing the antibody detection. The US Navy has stood up a floating hospital to, of course, care for those in need. So we are here to help, that's been our message to our customers, but it's amazing to see how much they're helping society during this. So just a pleasure there. But as you mentioned, just to hit on the VDI comments, so to your points too, HCI, VxRail, VDI, that was an initial use case years ago. And it's been great to see how many of our existing VxRail customers have been able to pivot very quickly leveraging VxRail to add and to help bring their remote workforce online and support them with their existing VxRail. Because VxRail is flexible, it is agile, to be able to support those multiple workloads. And in addition to that, we've also rolled out some new VDI bundles to make it simpler for customers, more cost effective, catering to everything from knowledge workers to multimedia workers. You name it, you know, from 250 desktops up to 1000. But again, back to your point, VxRail, HCI, is well beyond VDI, it crossed the chasm a couple years ago actually. And VDI now is less than a third of the typical workloads of any of our customers out there. It supports now a range of workloads, as you heard from Shannon, whether it's video surveillance, whether it's general purpose, all the way to mission critical applications now with SAP HANA. So, this has changed the game for sure. But the range of workloads and the flexibility of VxRail is really helping our existing customers during this pandemic. >> Yeah, I agree with you, Jon, we've seen customers really embrace HCI for a number of workloads in their environments, from the ones that we all knew and loved back in the initial days of HCI, now to mission critical things, now to Cloud native workloads as well, and the sort of efficiencies that customers are able to get from HCI. And specifically, VxRail gives them that ability to pivot when these, shall we say, unexpected circumstances arise. And I think that that's informing their decisions and their opinions on what their IT strategies look like as they move forward. They want that same level of agility, and ability to react quickly, with their overall infrastructure. >> Excellent. 
Now I want to get into the announcements. But first — actually, your team gave me access to the CIO from the city of Amarillo, so maybe my team can dig up that footage, talk about how fast they pivoted, using VxRail to really spin up things fast. So let's hear the announcement first, and then I definitely want to share that customer story a little bit later. So let's get to the actual news that Shannon's going to share. >> Okay, now what's new? I am pleased to announce a number of exciting updates and new platforms, to further enable IT modernization across Core, Edge and Cloud. I will cover each of these announcements in more detail, demonstrating how only VxRail can offer the breadth of platform configurations, automation, orchestration and Lifecycle Management, across a fully integrated hardware and software full stack with consistent, simplified operations, to address the broadest range of traditional and modern applications. I'll start with hybrid Cloud and recap what you may have seen in the Dell Technologies Cloud announcements just a few weeks ago, related to VMware Cloud Foundation on VxRail. Then I'll cover two brand new VxRail hardware platforms and additional options. And finally circle back to talk about the latest enhancements to our VxRail HCI system software capabilities for Lifecycle Management. Let's get started with our new Cloud offerings based on VxRail. VxRail is the HCI foundation for Dell Technologies Cloud Platform, bringing automation and financial models similar to public Cloud to on-premises environments. VMware recently introduced Cloud Foundation 4.0, which is based on vSphere 7.0. As you likely know by now, vSphere 7.0 was definitely an exciting and highly anticipated release. In keeping with our synchronous release commitment, we introduced VxRail 7.0 based on vSphere 7.0 in late April, which was within 30 days of VMware's release. Two key areas that VMware focused on were embedding containers and Kubernetes into vSphere, unifying them with virtual machines. And the second is improving the work experience for vSphere administrators with vSphere Lifecycle Manager or VLCM. I'll address the second point a bit in terms of how VxRail fits in in a moment. With VCF 4 with Tanzu, based on vSphere 7.0, customers now have access to a hybrid Cloud platform that supports native Kubernetes workloads and management, as well as your traditional VM-based workloads. So containers are now first class citizens of your private Cloud alongside traditional VMs, and this is now available with VCF 4.0 on VxRail 7.0. VxRail's tight integration with VMware Cloud Foundation delivers a simple and direct path not only to the hybrid Cloud, but also to deliver Kubernetes at Cloud scale with one complete automated platform. The second Cloud announcement is also exciting. Recent VCF 4 networking advancements have made it easier than ever to get started with hybrid Cloud, because we're now able to offer a more accessible consolidated architecture. And with that, Dell Technologies Cloud Platform can now be deployed with a four-node configuration, lowering the cost of an entry level hybrid Cloud. This enables customers to start smaller and grow their Cloud deployment over time. VCF and VxRail can now be deployed in two different ways. For small environments, customers can utilize a consolidated architecture which starts with just four nodes. 
Since the management and workload domains share resources in this architecture, it's ideal for getting started with an entry level Cloud to run general purpose virtualized workloads with a smaller entry point, both in terms of required infrastructure footprint as well as cost, but still with a consistent Cloud operating model. For larger environments, where dedicated resources and role-based access control to separate different sets of workloads is usually preferred, you can choose to deploy a standard architecture, which starts at eight nodes, for independent management and workload domains. A standard implementation is ideal for customers running applications that require dedicated workload domains; that includes Horizon VDI, and vSphere with Kubernetes. >> Alright, Jon, there's definitely been a lot of interest in our community around everything that VMware is doing with vSphere 7.0. We understand if you want to use the Kubernetes piece, it's VCF that's the path. So we've seen the announcements, Dell partnering in there — help us connect that story between, really, the VMware strategy and how they talk about Cloud, and where does VxRail fit in that overall Dell Tech Cloud story? >> Absolutely. So first of all Stu, VxRail of course is integral to the Dell Tech Cloud strategy. It's been VCF on VxRail equals the Dell Tech Cloud platform. And this is our flagship on-prem Cloud offering, with which we've been able to enable operational consistency across any Cloud, whether it's on-prem, in the Edge or in the public Cloud. And we've seen the Dell Tech Cloud Platform embraced by customers for a couple key reasons. One is it offers the fastest hybrid Cloud deployment in the market. And this is really thanks to a new subscription offer that we're now offering out there where, in less than 14 days, it can be stood up and running. And really, the Dell Tech Cloud does bring a lot of flexibility in terms of consumption models, overall, when it comes to VxRail. Secondly, I would say, is fast and easy upgrades. This is what VxRail brings to the table for all workloads, if you will, and it's especially critical in the Cloud. So the full automation of Lifecycle Management across the hardware and software stack, across the VMware software stack, and in the Dell software and hardware supporting that — together, this enables essentially the third thing, which is customers can just relax. They can rest assured that their infrastructure will be continuously validated, and always be in a continuously validated state. And this is the kind of thing where those three value propositions together really fit well with any on-prem Cloud. Now you take what Shannon just mentioned, and the fact that now you can build and run modern applications on the same VxRail infrastructure alongside traditional applications — this is a game changer. >> Yeah, I love it. I remember in the early days talking with Chad Dunn about HCI, how does that fit in with the Cloud discussion, and the line I've used the last couple years is, modernize the platform, then you can modernize the application. So as companies are doing their full modernization, then this plays into what you're talking about. All right, we can let Shannon continue, we can get some more before we dig into some more analysis. >> That's good. >> Let's talk about new hardware platforms and updates that result in literally thousands of potential new configuration options, covering a wide breadth of modern and traditional application needs across a range of actual use cases. 
First up, I am incredibly excited to announce a brand new Dell EMC VxRail series, the D series. This is a ruggedized, durable platform that delivers the full power of VxRail for workloads at the Edge, in challenging environments, or for space constrained areas. VxRail D series offers the same compelling benefits as the rest of the VxRail portfolio, with simplicity, agility and lifecycle management, but in a lightweight, short depth — at only 20 inches — durable form factor that's extremely temperature-resilient, shock resistant, and easily portable. It even meets milspec standards. That means you have the full power of lifecycle automation with VxRail HCI system software and 24 by seven single point of support, enabling you to rapidly react to business needs, no matter the location or how harsh the conditions. So whether you're deploying a data center at a mobile command base, running real-time GPS mapping on the go, or implementing video surveillance in remote areas, you can ensure availability, integrity and confidence for every workload with the new VxRail ruggedized D series. >> All right, Chad, we would love for you to bring us in a little bit on what customer requirements drove bringing this to market. I remember seeing Dell servers ruggedized, of course; Edge, really important growth to build on what Jon was talking about, Cloud. So, Chad, bring us inside, what was driving this piece of the offering? >> Sure Stu. Yeah, having hardware platforms that can go out into some of these remote locations is really important. And that's being driven by the fact that customers are looking for compute performance and storage out at some of these Edges or some of the more exotic locations, whether that's manufacturing plants, oil rigs, submarine ships, military applications, places that we've never heard of. But it's also about extending that operational simplicity of the way that you're managing your data center that has VxRails — you're managing your Edges the same way, using the same set of tools. You don't need to learn anything else. So operational simplicity is absolutely key here. But in those locations, you can't take a product that's designed for a data center, where you're definitely controlling power, cooling and space, and take it to some of these places where you get sand blowing or sub-zero temperatures — could be Baghdad or it could be Ketchikan, Alaska. So we built this D series that was able to go to those extreme locations with extreme heat, extreme cold, extreme altitude, but still offer that operational simplicity. Now military is one of those applications for the rugged platform. If you look at the resistance that it has to heat, it operates at a 45 degrees Celsius or 113 degrees Fahrenheit range, but it can do an excursion up to 55 C or 131 degrees Fahrenheit for up to eight hours. It's also resistant to heat, sand, dust, vibration; it's very lightweight, short depth — in fact, it's only 20 inches deep. This is the smallest form factor, obviously, that we have in the VxRail family. And it's also built to be able to withstand sudden shocks: certified to withstand 40 G's of shock, and operation at 15,000 feet of elevation. Pretty high. And this is sort of like where skydivers go when they want the real thrill of skydiving, where you actually need oxygen at that altitude. They're milspec-certified. So, MIL-STD-810G, which I keep right beside my bed and read every night. 
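A quick check of the arithmetic in those temperature figures — 45 and 55 degrees Celsius against 113 and 131 degrees Fahrenheit — as a minimal Python snippet. This is purely illustrative and not part of any Dell or VxRail tooling:

```python
# Sanity-check the Celsius/Fahrenheit pairs quoted for the D series.
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

assert c_to_f(45) == 113  # sustained operating range: 45 C = 113 F
assert c_to_f(55) == 131  # eight-hour excursion: 55 C = 131 F
print(c_to_f(45), c_to_f(55))  # 113.0 131.0
```

Both conversions come out exactly as quoted in the segment.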
And it comes with a VxRail STIG hardening package — packaged scripts so that you can auto lock down the VxRail environment. And we've got a few other certifications that are on the roadmap now, for naval shock requirements, EMI and radiation immunity. >> Yeah, it's funny, I remember when we first launched, it was like, "Oh, well everything's going to white boxes. And it's going to be massive, no differentiation between everything out there." If you look at what you're offering, and if you look at how public Clouds build their things, what I've called it for a few years is, it's pure optimization. So you need to scale, you need similarities, but you know you need to fit some very specific requirements, lots of places. So, interesting stuff. Yeah, certifications always keep your teams busy. Alright, let's get back to Shannon for more of the report. >> We are also introducing three other hardware-based additions. First, a new VxRail E Series model based on, for the first time, AMD EPYC processors. These single socket 1U nodes offer dual socket performance with CPU options that scale from eight to 64 Cores, up to a terabyte of memory, and multiple storage options, making it an ideal platform for desktop VDI, analytics and computer aided design. Next, the addition of the latest Nvidia Quadro RTX GPUs brings the most significant advancement in computer graphics in over a decade to professional workflows. Designers and artists across industries can now expand the boundary of what's possible, working with the largest and most complex graphics rendering, deep learning and visual computing workloads. And Intel Optane DC persistent memory is here, and it offers high performance and significantly increased memory capacity with data persistence at an affordable price. Data persistence is a critical feature that maintains data integrity, even when power is lost, enabling quicker recovery and less downtime. With support for Intel Optane DC persistent memory, customers can expand memory intensive workloads and use cases like SAP HANA. Alright, let's finally dig into our HCI system software, which is the Core differentiation for VxRail, regardless of your workload or platform choice. Our joint engineering with VMware and investments in VxRail HCI system software innovation together deliver an optimized operational experience at reduced risk for our customers. Under the covers, VxRail offers best in class hardware, married with VMware HCI software, either vSAN or VCF. But what makes us different stems from our investments to integrate the two. Dell Technologies has a dedicated VxRail team of about 400 people to build, market, sell and support a fully integrated hyper converged system. That team has also developed our unique VxRail HCI system software, which is a suite of integrated software elements that extend VMware native capabilities to deliver a seamless, automated operational experience that customers cannot find elsewhere. The key components of VxRail HCI system software, shown around the arc here, include VxRail Manager, full stack lifecycle management, ecosystem connectors, and support. I don't have time to get into all the details of these elements today, but if you're interested in learning more, I encourage you to meet our experts. And I will tell you how to do that in a moment. I touched on VLCM being a key feature of vSphere 7.0 earlier, and I'd like to take the opportunity to expand on that a bit in the context of VxRail Lifecycle Management. 
VLCM adds valuable automation to the execution of updates for customers, but it doesn't eliminate the manual work still needed to define and package the updates and validate all of the components prior to applying them. With VxRail, customers have all of these areas addressed automatically on their behalf, freeing them to put their time into other important functions for their business. Customers tell us that Lifecycle Management continues to be a major source of the maintenance effort they put into their infrastructure, that it tends to overburden IT staff, that it can cause disruptions to the business if not managed effectively, and that it isn't the most efficient economically. Automation of Lifecycle Management in VxRail results in the utmost simplicity from a customer experience perspective, and offers operational freedom from maintaining infrastructure. But as shown here, our customers not only realize greater IT team efficiencies, they have also reduced downtime with fewer unplanned outages, and reduced overall cost of operations. With VxRail HCI system software, intelligent Lifecycle Management upgrades of the fully integrated hardware and software stack are automated, keeping clusters in continuously validated states while minimizing risks and operational costs. How do we ensure continuously validated states for VxRail? The VxRail labs execute an extensive, automated, repeatable process on every firmware and software upgrade and patch to ensure clusters are in continuously validated states of the customer's choosing across their VxRail environment. The VxRail labs are constantly testing, analyzing, optimizing, and sequencing all of the components in the upgrade to execute in a single package for the full stack. All the while, VxRail is backed by Dell EMC's world class services and support, with a single point of contact for both hardware and software. IT productivity skyrockets with single click, non disruptive upgrades of the fully integrated hardware and software stack, without the need to do extensive research and testing, taking you to the next VxRail version of your choice while always in a continuously validated state. You can also confidently execute automated VxRail upgrades no matter what hardware generation or node types are in the cluster. They don't have to all be the same. And upgrades with VxRail are faster and more efficient with leapfrogging: simply choose any VxRail version you desire, and be assured you will get there in a validated state, while seamlessly bypassing any other release in between. Only VxRail can do that. >> All right, so Chad, the lifecycle management piece that Shannon was just talking about is not the sexiest; it's often underappreciated. There's not only the years of experience, but the continuous work you're doing. Reminds me back to the early vSAN deployments versus VxRail, jointly developed, jointly tested between Dell and VMware. So bring us inside why, in 2020, Lifecycle Management is still a very important piece, especially in the VxRail family line. >> Yes, Stu, I think it's sexy, but I'm a pretty big nerd. (all laughing) Yeah, this has really always been our bread and butter. And in fact, it gets even more important the larger the deployments become, when you start to look at data centers full of VxRails and all the different hardware, software, and firmware combinations that could exist out there. 
It's really the value that you get out of that VxRail HCI system software that Shannon was talking about, and how it's optimized around the VMware use case — very tightly integrated with each VMware component, of course, and the intelligence of being able to do all the firmware, all of the drivers, all the software, all together: tremendous value to our customers. But to deliver that, we really need to make a fairly large investment. So as Shannon mentioned, we run about 25,000 hours of testing across each major release, and for patches and express patches, that's about 7,000 hours for each of those. So, obviously, there's a lot of parallelism. And we're always developing new test scenarios for each release that we need to build in as we introduce new functionality. And one of the key things that we're able to do, as Shannon mentioned, is to be able to leapfrog releases and get you to that next validated state. We've got about 100 engineers just working on creating and executing those test cases on a continuous basis and, obviously, a huge amount of automation. And we've talked about the investment to execute those tests — that's worth $60 million of investment in our lab. In fact, we've got just over 2000 VxRail units in our testbed across the US, Shanghai, China and Cork, Ireland. So a massive amount of testing of each of those components to make sure that they operate together in a validated state. >> Yeah, well, absolutely, it's super important, not only for the day one, but the day two deployments. But I think this is actually a great place for us to bring in that customer that Dell gave me access to. So we've got the CIO of Amarillo, Texas. He was an existing VxRail customer, and he's going to explain what happened as to how he needed to react really fast to support the work-from-home initiative, as well as we get to hear, in his words, the value of what Lifecycle Management means. So Andrew, if we could queue up that customer segment, please? >> It's been massive, and it's been interesting to see the IT team absorb it. As we matured, I think they embraced the ability to be innovative and to work with our departments. But this instance really justified why I was driving progress so fervently, why it was so urgent today. Three years ago, the answer would have been no. We wouldn't have been in a place where we could adapt. With VxRail in place, in a week we spun up hundreds of instances. We spun up a 75-person call center in a day and a half for our public health. We rolled out multiple applications for public health so they could do remote clinics. It's given us the flexibility to be able to roll out new solutions very quickly and be very adaptive. And it's not only been apparent to my team, but it's really made an impact on the business. And now what I'm seeing is those of my customers that were a little lagging or a little conservative are understanding the impact of modernizing the way they do business, because it makes them adaptable as well. >> Alright, so great, Richard. You talked a bunch about the efficiencies that IT put in place; how about the overall management? You talked about how fast you spun up these new VDI instances, and the need to be able to do things much simpler. So how does the overall Lifecycle Management fit into this discussion? >> It makes it so much easier. In the old environment, one, it took a lot of man hours to make change, and it was very disruptive when we did make change. It overburdened — I guess that's the word I'm looking for. 
It really overburdened our staff and caused disruption to the business. That wasn't cost efficient. And then simple things like — I've worked for multi billion dollar companies where we had massive QA environments that replicated production; you simply can't afford that in local government. Having this sort of environment lets me do a scaled down QA environment and still get the benefit of rolling out non disruptive change. As I said earlier, it's allowed us to take all of those cycles that we were spending on Lifecycle Management, because it's greatly simplified, and move those resources and reskill them in other areas where we can actually have more impact on the business. It's hard to be innovative when 100% of your cycles are just keeping the ship afloat. >> All right, well, nothing better than hearing it straight from the end user: public sector reacting very fast to COVID-19. And if you heard him, he said that before he had run this project, he would not have been able to respond. So I think everybody out there understands: if I didn't actually have access to the latest technology, it would be much harder. All right, I'm looking forward to doing the CrowdChat, letting everybody else dig in with questions and follow up a little bit more, but I believe we've got one more announcement. Let's roll the final video clip. >> In our latest software release, VxRail 4.7.510, we continue to add new automation and self service features. New functionality enables you to schedule and run upgrade health checks in advance of upgrades, to ensure clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade windows, as they can be assured the clusters will seamlessly upgrade within that window. Of course, running health checks on a regular basis also helps ensure that your clusters are always ready for unscheduled patches and security updates. We are also offering more flexibility in getting all nodes or clusters to a common release level, with the ability to reimage nodes or clusters to a specific VxRail version, or down-rev one or more nodes that may have shipped at a higher rev than the existing cluster. This enables you to easily choose your validated state when adding new nodes or repurposing nodes in a cluster. To sum up all of our announcements: whether you are accelerating data center modernization, extending HCI to harsh Edge environments, or deploying an on-premises Dell Technologies Cloud platform to create a developer ready Kubernetes infrastructure, VxRail is there, delivering a turn-key experience that enables you to continuously innovate, realize operational freedom and predictably evolve. VxRail provides an extensive breadth of platform configurations, automation and Lifecycle Management across the integrated hardware and software full stack, and consistent hybrid Cloud operations, to address the broadest range of traditional and modern applications across Core, Edge and Cloud. I now invite you to engage with us. First, the virtual passport program is an opportunity to have some fun while learning about VxRail new features and functionality, and score some sweet digital swag while you're at it, delivered via an augmented reality app. All you need is your device. So go to vxrail.is/passport to get started. 
And secondly, if you have any questions about anything I talked about or want a deeper conversation, we encourage you to join one of our exclusive VxRail Meet The Experts sessions, available for a limited time, first come first served. Just go to vxrail.is/expertsession to learn more. >> All right, well, obviously, with everyone being remote, there's different ways we're looking to engage. So we've got the CrowdChat right after this. But Jon, give us a little bit more as to how Dell's making sure to stay in close contact with customers, and what you've got for options for them. >> Yeah, absolutely. So as Shannon said, in lieu of not having Dell Tech World this year in person, where we could have those great in-person interactions and answer questions, whether it's in the booth or in meeting rooms, we are going to have these Meet The Experts sessions over the next couple weeks, and we're going to put our best and brightest from our technical community out there and make them accessible to everyone. So again, definitely encourage you: we're trying new things here in this virtual environment to ensure that we can still stay in touch, answer questions, be responsive, and really looking forward to having these conversations over the next couple of weeks. >> All right, well, Jon and Chad, thank you so much. We definitely look forward to the conversation here and continued. If you're here live, definitely go down below and dig in; if you're watching this on demand, you can see the full transcript of it at crowdchat.net/vxrailrocks. For myself, Shannon on the video, Jon, Chad, and Andrew, man in the booth there, thank you so much for watching, and go ahead and join the CrowdChat.
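The upgrade flow described throughout this segment — run scheduled health checks ahead of an upgrade window, and apply a single pre-validated full-stack package only when every check passes — can be sketched roughly as below. This is a hypothetical illustration only, not VxRail's actual API; every name in it is invented for the sketch.

```python
# Hypothetical sketch of the "continuously validated state" upgrade flow
# described above. None of these names correspond to a real VxRail API.
from dataclasses import dataclass

@dataclass
class HealthCheckResult:
    check: str
    passed: bool

def run_pre_upgrade_health_checks(cluster) -> list[HealthCheckResult]:
    """Run each readiness check ahead of the upgrade window."""
    return [HealthCheckResult(c.name, c.run(cluster))
            for c in cluster.readiness_checks]  # hypothetical attributes

def upgrade_if_ready(cluster, target_version: str) -> bool:
    """Apply the single validated full-stack package only if all checks pass."""
    results = run_pre_upgrade_health_checks(cluster)
    if all(r.passed for r in results):
        cluster.apply_validated_package(target_version)  # hypothetical call
        return True
    for r in results:
        if not r.passed:
            print(f"Not ready: {r.check}")  # surface failures before the window
    return False
```

The point of the pattern is that validation happens continuously and ahead of the maintenance window, so the window itself is spent applying a package already known to be good.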

Published Date : May 22 2020


Krish Prasad & Josh Simons, VMware | Enabling Real Artificial Intelligence


 

>>from the Cube Studios in Palo Alto and Boston connecting with thought leaders all around the world. This is a cube conversation. Alright, welcome back. To help us dig into this discussion, I'm happy to welcome to the program Krish Prasad, senior vice president and general manager of the vSphere business, and Josh Simons, chief technologist for the high performance computing group, both of them with VMware. Gentlemen, thanks so much for joining. >>Thank you for having us. >>All right, Krish. When VMware made the Bitfusion acquisition, everybody was looking at, you know, what this will do for this space, for GPUs. We're talking about things like AI and ML. So bring us up to speed as to, you know, the news today and what VMware is doing with Bitfusion. >>Yeah. Today we have a big announcement. I'm excited to announce that, you know, we're taking the next big step in the AI, ML and modern application strategy, with the launch of Bitfusion now being fully integrated with the vSphere 7 platform, and we'll be releasing this very shortly to the market. As you said, when we acquired Bitfusion a year ago, we had showcased that capability as part of the VMworld event. And at that time we laid out a strategy that positioned Bitfusion as the cornerstone of our capabilities in the platform for the AI/ML space. Since then, we have had many customers take a look at the technology, and we have had feedback from them as well as from partners and analysts. And the feedback has been tremendous. >>Excellent. Well, Krish, what does this then mean for customers? You know, what's the value proposition that Bitfusion brings to vSphere? >>Yeah, if you look at our customers, they are in the midst of a big, ah, journey in digital transformation. And basically, what that means is customers are building a ton of applications, and most of those applications have some kind of data analytics or machine learning embedded in them. And what this is doing is that in the hardware and infrastructure industry, this is driving a lot of innovation. So you see the advent of a lot of specialized accelerators: custom ASICs, FPGAs. And of course, GPUs being used to accelerate the special algorithms that these AI/ML type applications need. And, um, unfortunately, in customer environments, most of these specialized accelerators are in a bare metal kind of setup. So they're not taking advantage of virtualization and everything that it brings. So with Bitfusion launched today, we are essentially doing for the accelerator space what we did for compute several years ago. And that is, um, essentially bringing virtualization to the accelerators. But we take it one step further, which is, you know, we give customers the ability to pool these accelerators and essentially decouple them from the server, so you can have a pool of these accelerators sitting in the network, and customers are able to then target their workloads and share the accelerators, get better utilization, drive a lot of cost improvements and, in essence, have a smaller pool that they can use for a whole bunch of different applications across the enterprise. That is a huge enabler for our customers. And that's the tremendous positive feedback that we've been getting, both from customers as well as partners. >>Excellent. Well, I'm glad we've got Josh here to dig into some of the pieces, but before we get to you, Josh — Krish, part of this announcement is the partnership of VMware and Dell. 
So tell us about what the partnership is and the solutions for this launch. >>Yeah. We have been working with Dell in the AI and ML space for a long time. We have, ah, a good partnership there. This just takes the partnership to the next level, and we will have, ah, validated solution support on some of the key AI/ML-targeted servers, like the C4140 and the R740. Those are the servers that we'd be partnering with them on and providing solutions. >>Okay, Josh. Take us in a little bit further as to how, you know, the mechanisms of Bitfusion work. >>Yeah, that's a great question. So think of it this way. There is a client component and a server component. The server component is running on a machine that actually has the physical GPUs installed in it. The client machine, which is running the Bitfusion client software, is where the user, the data scientist, is actually running their machine learning application. But there's no GPU actually in that host. And what is happening with the Bitfusion technology is that it is essentially intercepting the CUDA calls that are being made by that machine learning application and remoting those calls over to the Bitfusion server, and then injecting them into the local GPU on the server. So it's actually, you know, we call it interposition — the ability to remote these calls — but it's actually much more sophisticated than that. There are a lot of underlying capabilities that are being deployed in terms of optimization, to take maximum advantage of the, uh, the networking link that sits between the client machine and the server machine. But given all of that, once we've done that with Bitfusion, it's now possible for the data scientist to either consume multiple GPUs, or a single GPU, or even fractional GPUs across that interconnect using the Bitfusion technology. >>Okay, maybe it would help illustrate some of these technologies if you've got a couple of customer examples. >>Yeah, sure. So one example would be a retail customer I'm thinking of. Actually it's, ah, a grocery chain that is deploying, ah, a large number of video cameras into their stores in order to do things like, um, watch for pilfering, uh, identify when store shelves could be restocked, and even looking for cases where, for example, maybe a customer has fallen down in an aisle and someone needs to go and help them. Those multiple video streams, and the multiple applications that are being run that are consuming the data from those video streams and doing analytics and ML on them, would be perfectly suited for this type of environment, where you would like to be able to have these multiple independent applications running, but have them be able to efficiently share the hardware resources of the GPUs. Another example would be retailers who are deploying ML at checkout registers to help reduce fraud — customers who are buying things with, uh, fake barcodes, for example. So in that case, you would not necessarily want to deploy a single dedicated GPU for every single checkout line. Instead, what you would prefer to do is have a pool of resources that each inference operation occurring within each one of those checkout lines can consume collectively. Those would be two examples of the use of this Bitfusion technology. >>Okay, great. So, Josh, last question for you: is this technology only for NVIDIA GPUs, or anything else? 
And can you give us a little bit of a look forward as to what we should be expecting from the Bitfusion technology? >>Yeah. So currently, the target is specifically NVIDIA GPUs with CUDA. Ah, the team, actually, even prior to the acquisition, had done some work on enablement of FPGAs, and also done some work on OpenCL, which is a more open standard for device access. So what you will see over time is an expansion of the Bitfusion capabilities to embrace devices like FPGAs and the domain-specific ASICs that Krish was referring to earlier. That will roll out over time, but we are starting with the NVIDIA GPU, which totally makes sense, since that is the primary hardware acceleration for deep learning currently. >>Excellent. Well, Josh and Krish, thank you so much for the updates. To the audience: if you're watching this live, jump into the CrowdChat to ask your questions. And if you're watching this on demand, you can also go to crowdchat.net/makeaireal to be able to see the conversation that we had. Thanks so much for joining.
Published Date : May 20 2020


Jeremy Rader


 

>>from the Cube Studios in Palo Alto and Boston connecting with thought leaders all around the world. This is a cube conversation. >>Alright, welcome back. Jeff Frick here. And we're excited for this next segment. We're joined by Jeremy Rader. He is the GM of digital transformation and scale solutions for Intel Corporation. Jeremy, great to see you. Hey, thanks for having me. I love the flowers in the backyard. I thought maybe you ran over to the Japanese garden or the Rose Garden — two very beautiful places to visit in Portland. >>Yeah. You know, you only get them for a couple, ah, couple weeks here, so we got the timing just right. >>Excellent. All right, so let's jump into it. Really, this conversation is all about making AI real. Um, and you guys are working with Dell, and not only Dell, right? There's the hardware and software, but a lot of these smaller AI solution providers. So what are some of the key attributes that it needs to make AI real for your customers out there? >>Yeah. So you know, it's a complex space. So when you can bring the best of the Intel portfolio, which is expanding a lot — you know, it's not just the CPU anymore, you're getting into memory technologies, network technologies, and, kind of a little less known, is how many resources we have focused on the software side of things, optimizing frameworks and these key ingredients and libraries that you can stitch into that portfolio to really get more performance and value out of your machine learning and deep learning space. And so you know what we've really done here with Dell? It has started to bring a bunch of that portfolio together with Dell's capabilities, and then bring in that ISV partner, that software vendor, where we can really take and stitch and bring the most value out of a broad portfolio, ultimately easing the complexity of what it takes to deploy an AI capability. So a lot going on there: bringing kind of the three-legged stool of the software vendor, hardware vendor and Dell into the mix, and you get a really strong outcome. >>Right. So before we get to the solutions piece, let's stick a little bit into the Intel world, and I don't know if a lot of people are aware that obviously you guys make CPUs and you've been making great CPUs forever. But there's a whole lot more stuff that you've added, you know, kind of around the core CPU, if you will, in terms of actual libraries and ways to really optimize the Xeon processors to operate in an AI world. I wonder if you can kind of take us a little bit below the surface on how that works. What are some of the examples of things you can do to get more from your Xeon Intel processors for AI-specific applications and workloads? 
Those are the kind of capabilities we want to expose to software vendors do these kinds of partnerships >>on, and that's terrific. And I do think that's a big part of the story that a lot of people are probably not as aware of that. There are a lot of these optimization opportunities that you guys have been leveraging for a while. So shifting gears a little bit right AI and machine learning is all about the data. And in doing a little research for this, I found actually you on stage talking about some company that had, like, 350 of road off 315 petabytes of of data, 140,000 sources of those data, and I think probably not great quote of six months access time to get it right and actually work with it. And the company you're referencing was intel. So you guys know a lot about debt data, managing data, everything from your manufacturing and and obviously supporting a global organization for I, t and Brian and, ah, a lot of complexity and secrets and good stuff. So you know what have you guys leveraged as intel in the way you work with data and getting a good data pipeline that's enabling you to kind of put that into these other solutions that you're providing to the customers, >>right? Well is, you know, it's absolutely a journey, and it doesn't happen overnight, and that's what we've you know. We've seen it at Intel on We see it with many of our customers that are on the same journey that we've been on. And so you know, this idea of building that pipeline it really starts with what kind of problems that you're trying to solve. What are the big issues that are holding you back that company where you see that competitive advantage that you're trying to get to? And then ultimately, how do you build the structure to enable the right kind of pipeline of that data? Because that's that's what machine learning and deep learning is that data journey. So really a lot of focus around you know how we can understand those business challenges bring forward those kinds of capabilities along the way through to where we structure our entire company around those assets. And then ultimately, some of the partnerships that we're gonna be talking about these companies that are out there to help us really squeeze the most out of that data as quickly as possible because otherwise it goes stale real fast, sits on the shelf, and you're not getting that value out of right. So, yeah, we've been on the journey. It's ah, it's a long journey. But ultimately we could take a lot of those those kind of learnings and we can apply them to our silicon technology. The software optimization is that we're doing and ultimately, how we talk to our enterprise customers about how they can solve overcome some of the same challenges that we did. >>Well, let's talk about some of those challenges specifically because, you know, I think part of the the challenge is that kind of knocked big data, if you will in Hadoop, if you will kind of off the rails. Little bit was, there's a whole lot that goes into it. Besides just doing the analysis There's a lot of data practice data collection, data organization, a whole bunch of things that have to happen before You can actually start to do the sexy stuff of AI. So you know, what are some of those challenges? How are you helping people get over kind of these baby steps before they can really get into the deep end of the pool? 
>>Yeah, well, you know, one is you have to have the resources. So, you know, do you even have the resources? If you can acquire those resources, can you keep them interested in the kind of work that you're doing? So that's a big challenge, and actually we'll talk about how that fits into some of the partnerships that we've been establishing in the ecosystem. Also, you get stuck in this POC do-loop, right? You finally get those resources, and they start to get access to that data that we talked about. They start to play out some scenarios, theorize a little bit. Maybe they show you some really interesting value, but it never seems to make its way into a full production mode. And I think that is a challenge that is facing so many enterprises that are stuck in that loop. And so that's where we look at who's out there in the ecosystem that can help more readily move through that whole process of the evaluation, the proof, the ROI, the POC, and ultimately move that capability into production mode as quickly as possible. That, you know, to me is one of those fundamental aspects: if you're stuck in the POC, nothing's happening from this. This is not helping your company. We want to move things more quickly. >>Right. Right. And let's just talk about some of these companies that you guys are working with, that you've got some reference architectures with: DataRobot, Grid Dynamics, H2O just down the road in Antigua. So a lot of the companies we've worked with, with theCube. And I think, you know, another part that's interesting — again, we can learn from kind of the old days of big data — is generalized AI versus solution-specific AI. And I think, you know, where there's a real opportunity is not AI for AI's sake, but really it's got to be applied to a specific solution, a specific problem, so that you have, you know, better chatbots, better customer service experience, you know, better something. So when you were working with these folks and trying to design solutions, what were some of the opportunities that you saw to work with some of these folks to now have an applied application slash solution versus just kind of AI for AI's sake? >>Yeah. I mean, that could be anything from fraud detection in financial services, or even taking a step back and looking more horizontally, like back to that data challenge. If you're stuck at "we built a fantastic data lake, but I haven't been able to pull anything back out of it," who are some of the companies that are out there that can help overcome some of those big data challenges and ultimately get you to where, you know, you don't have a data scientist spending 60% of their time on data acquisition and pre-processing? That's not where we want them, right? We want them on building out that next theory. We want them on looking at the next business challenge. We want them on selecting the right models, but ultimately they have to do that as quickly as possible so that they can move that capability forward into the next phase. So, really, it's about that connection of looking at those problems or challenges in the whole end-to-end pipeline. And these companies, like DataRobot and H2O, you know, they're all addressing specific challenges in the end-to-end. That's why they've kind of bubbled up as ones that we want to continue to collaborate with, because they can help enterprises overcome those issues faster, more readily. >>Great. Well, Jeremy, thanks for taking a few minutes and giving us the Intel side of the story. 
Um, it's a great company. Has been around forever. I worked there many, many moons ago. That's, ah, that's a story for another time. But really appreciate it. >>I'll interview you. >>We'll go there. Alright, so, super. Thanks a lot. So he's Jeremy, I'm Jeff Frick. So now it's time to go ahead and jump into the crowd chat. It's crowdchat.net/makeaireal. Um, we'll see you in the chat. And thanks for watching.
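For readers who want to try the kind of framework-level optimization Jeremy describes, recent TensorFlow 2.x builds can dispatch many CPU ops to Intel's oneDNN primitives. In most builds this is toggled with the TF_ENABLE_ONEDNN_OPTS environment variable, though availability and default behavior vary by TensorFlow version and build, so treat this as a hedged example:

```python
# Hedged example: enabling Intel oneDNN optimizations in TensorFlow 2.x.
# The environment variable must be set before TensorFlow is imported;
# support varies by TensorFlow version and build.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf
print(tf.__version__)
# With oneDNN active, many CPU ops (convolutions, matmuls) dispatch to
# Intel-optimized primitives, which can reduce training and inference
# time on Xeon CPUs without any model-code changes.
```

This is exactly the pattern Jeremy points at: the model code stays the same, and the acceleration comes from optimized libraries underneath the framework.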

Published Date : May 20 2020


Jay ibm promo part one v2


 

>> Hi, I'm Jay Limburn, Director of Offering Management from IBM DataOps. As an organization, we've been focusing on simplifying the data and AI life cycle, allowing you to discover and prepare data, and then use that data to build, deploy, govern, and manage your models for the range of capabilities that take advantage of machine and human intelligence. DataOps is a critical and complementary discipline to AI. The methodology enables agile data collaboration, driving speed and scale of operation, (audio distorts) throughout the data and AI life cycle. Learn more on May 27th when IBM and client leaders come together during the DataOps CrowdChat event online. I hope to see you then.

Published Date : May 6 2020


Jay ibm promo part one v1


 

>>Hi. I'm Jay Limburn, director of offering management, IBM DataOps. As an organization, we've been focusing on simplifying the data and AI lifecycle, allowing you to discover and prepare data and then use that data to build, deploy, govern and manage your models with a range of capabilities that take advantage of machine and human intelligence. DataOps is a critical and complementary discipline to AI. The methodology enables agile data collaboration, driving speed and scale of operations throughout the data lifecycle. Learn more on May 27 when IBM and client leaders come together during the DataOps CrowdChat event online. I hope to see you then.

Published Date : May 4 2020


Breaking Analysis: Coronavirus - Pivoting From Physical to Digital Events


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's "theCUBE." (intro music) Now, here's your host, Dave Vellante. >> Hello, everyone and welcome to this week's episode of Wikibon's CUBE Insights, Powered by ETR. In this Breaking Analysis, we're going to take a break from our traditional spending assessment and share with you our advice on how to deal with this crisis, specifically shifting your physical to digital in the age of Coronavirus. So, we're not going to be digging into the spending data. I talked to ETR this week, and they are obviously surveying on the impact of COVID-19, but those results won't be ready for a little bit. So, theCUBE team has been in discussions with over 20 companies that have events planned in the near term and the inbound call volume has been increasing very rapidly. Now, we've been doing digital for a decade, and we have a lot of experience, and are really excited to share our learnings, tools, and best practices with you as you try to plan through this crisis. So look, this is uncharted territory. We haven't ever seen a country quarantine 35 million people before, so of course everyone is panicked by this uncertainty, but our message, like others, is don't panic but don't be complacent. You have to act and you have to make decisions. This will reduce uncertainty for your stakeholders, your employees, and of course, your community. Now as you well know, major physical events are dropping very fast as a risk mitigation measure. Mobile World Congress, HIMSS canceled, KubeCon was postponed, IBM Think has gone digital, and so it goes. Look, if you have an event in the next three weeks, you have little choice but to cancel the physical attendee portion of that event. You really have three choices here. One is to cancel the event completely and wait until next year. Now the problem with that is, that type of capitulation doesn't really preserve any of the value related to why you were originally holding the physical event in the first place. Now you can do what KubeCon did and postpone til the summer or kind of indefinitely. Okay, that's a near-term rescission of the event, but now you're in limbo. But if you can sort out a venue down the road, that might work. The third option is to pivot to digital. It requires more thought but what it does is allow you to create an ongoing content arc that has benefits. The number-one complaint brands tell us about physical events is that after the event, they don't create a post-event halo effect. A digital strategy that extends over time will enable that. This is important because when the market calms down, you're going to be able to better leverage digital for your physical events. The key question you want to ask is, what are the most important aspects of that physical event that you want to preserve? And then start thinking about building a digital twin of those areas. But it's much more than that. And I'll address this opportunity that we think is unfolding for you a little later. Your challenge right now is to act decisively and turn lemons into lemonade with digital. Experiences are built around content, community, and the interaction of people. This is our philosophy. It's a virtuous cycle where data and machine intelligence are going to drive insights, discovery by users is going to bring navigation which leads to engagement and ultimately outcomes. Now, very importantly, this is not about which event software package to use. Do not start there.
Start with the outcome that you want to achieve and work backwards. Identify the parts of that outcome that are achievable and then work from there. The technology decision will be easy and fall out of it if you take that path. So at a high level, you have two paths. One, which is the preferred path, is to pivot to digital, on the right-hand side, especially if your event is in March or early April. Two is hold your physical event, but your general counsel is going to be all over you about the risks and precautions that you need to take. There are others better than I to advise you on those precautions. I've listed some here on the left-hand side and I'm going to publish this on Wikibon, but you know what to do there. But we are suggesting and advising that for near-term events you optimize for digital. That's the right side. Send out a crisp and clear communication, Adobe has a good example, that asks your loyal community to opt in for updates and start the planning process. You want to identify the key objectives of your event and build a digital program that maximizes the value for your attendees and maps to those objectives. We're going to share some examples that theCUBE participated in this week of what the digital event might look like, and we'll share that with you. Event software should come last. Don't even worry about that until you've envisioned your outcome. And I'll talk about software tools a little bit later. So new thinking is required, we believe. The old way was a big venue, big bang event, you get thousands of people. You're spending tons of money on a band. There's exhibitor halls. You're not going to preserve that, obviously. Rather, think about resetting the physical and optimizing for digital, which really is about serving a community. Now let's talk about, again, what that might look like in the near term and then we're going to close on how we see this evolving to a new era. The pattern emerging with our sponsors and our clients is, they want to preserve five key content areas from physical. Not necessarily all of them but in some combination. First is the keynotes. You bring together a captive audience, and you have your customers there, they want to hear from executives. Your customers have made a bet on you, and they want to feel good about it. So one is keynotes. Two is the breakout sessions, the deeper dives from subject matter experts. Third are technical sessions. A big reason customers attend these events is to get technical training. Four is to actually share news in a press conference-like format. And the fifth area that we've seen is, of course, theCUBE. Many of our customers have said, "We not only want you to turnkey the digital event, we want to plug theCUBE into our digital production that we are running." Now these are not in stone, they're just examples of what some of the customers are doing, and they're blending keynotes into their press conference, and there's a lot of different use cases. I want to stress that, initially, everyone's mindset is to simply replicate physical to digital. It's fine to start there, but there's more to this story that we'll address later on. So let's have a look at what something like this might look like in the near term. Here's an example of a digital event we did this week with a company called "Aviatrix." Small company but very nice look for their brand, which is a priority for them. You can see the live audience vibe. This was live but it can be pre-recorded.
All the speakers were together in one place. You can see the very high production value. Now, some of our clients have said, "Look, soon we want to do this completely remote with 100 percent of the speakers distributed." And our feeling is that's much more challenging for high-value events. Our strong recommendation is plan to get the speakers into a physical venue. And ideally, get a small VIP/influencer audience to be there. Make the audience feel important with a vibe of a VIP event. Yeah, you can wait a few weeks to see how this thing shakes out, and if travel loosens up, then you can pull this off. But for your brand value, you really want to look as professional as possible. Same thing for keynotes. You can see how good this looks. Nice stage, lighting, the blue lights, and a live audience. This is a higher-end production with a venue, and food, and music for the intros and outros, very professional audio and visual. And this requires budget. You got to think about at least 200 to 300 thousand dollars and up for a full-blown event where you bring in influencers and the like. But you have options. You can scale it down. You can host the event at your facility. Or host it at our facility in Palo Alto. I'll talk about that a little later. Use your own people for the studio audience. Use your own production people and dial back the glam, which will lower the cost. Just depends on the brand that you want to convey, and of course, your budget. Now as well, you can run the event as a live or as a semi-live. You can pre-record some or all of the segments. You can have a portion, like the press conference and/or the keynotes, run live and then insert the breakouts into the stream as a semi-live, or as on-demand assets. You have options. Now before I talk about technical sessions, I want to share another best practice. theCUBE this week participated in a digital event at Stanford with the Women in Data Science organization, WiDS, and we plugged into their digital platform. WiDS is amazing. They created a hybrid physical/digital event, and again, had a small group of VIPs and speakers onsite at Stanford with keynotes and panels and breakouts, and then theCUBE interviews all were streaming. What was really cool is they connected to dozens and dozens of outposts around the globe, and these outposts hosted intimate meet-ups and participated in the live event. And, of course, all the content is hosted on-demand for a post-event halo effect. I want to talk a little bit about technical sessions. Whereas with press conferences and keynotes we're strongly recommending a higher scale and stronger brand production, with technical sessions, we see a different approach working. Technical people are fine with earbuds and laptop speakers. Here's an example of a technical talk that Dan Hushon, who is the Senior VP and CTO at DXC, has run for years using the CrowdChat platform. He used the free community edition, along with Google Hangouts, and has run dozens and dozens of these tech talks designed for learning and collaboration. Look, you can run these weekly as part of the pre-game, up to your digital event. You can run them day of the event, at the crescendo, and you can continue the cadence post-event for that halo effect that I've been talking about. Now let's spend a moment talking about software tooling. There are a lot of tools out there. Some, super functional. Some are monolithic and bloated. Some are just emerging. And you might have some of these, either licensed or you might be wed to one.
Webinar software, like ON24 and Brightcove, and there's other platforms, that's great, awesome. From our standpoint, we plug right into any platform and are really agnostic to that. But the key is not to allow your software to dictate the outcome of your digital event. Technology should serve the outcome, not the reverse. Let me share with you theCUBE's approach to software. Now the first thing I want to tell you is our software is free. We have community editions that are very robust; they're not neutered. And we're making these available to our community. We've taken a cloud-native, horizontally scalable angle, bringing to bear the right tools for the right job. We don't think of software as just a way to hold content. Rather, we think about members of the community, and our goal is to allow teams to form and be successful. We see digital events creating new or evolving roles in organizations where the event may end, but the social organization and community aspect lives on. Think of theCUBE as providing a membrane to the conference team and a template for organizing and executing on digital events. Whether it's engaging in CrowdChats, curating video, telling stories post-event, hosting content, amplifying content, visualize your community as a whole and serve them. That's really the goal. Presence here is critical in a digital event: "Oh hey, I see you're here. Great, let's talk." There are a number of use cases, and I encourage you to call us, contact us, and we'll focus on how to keep it simple. We have a really simple MVP use case that we're happy to share with you. All right, I got to wrap. The key point here is we see a permanent change. This is not a prediction about Coronavirus. Rather, we see a transformation created with new dynamics. Digital is about groups, which are essentially a proxy for communities. Successful online communities require new thinking and we see new roles emerging. Think about the protocol stack for an event today and how that's going to change. Today is very structured. You have a captive audience, you got a big physical venue. In the future, it may evolve to multiple venues and many runs of shows, remote pods, rules around who is speaking, self-forming schedules; it's not going to be the same as today. We think digital moves to a persistent commitment by the community where the group collectively catalyzes collaboration. Hosting an online event is cool, but a long-term digital strategy doesn't just move physical to digital. Rather, it reimagines events as an organic entity, not a mechanism or a piece of software. This is not about hosting content. Digital communities have an emotional impact that must be reflected through your brand. Now our mission at theCUBE has always been to serve communities with great content. And it's evolving to provide the tools, infrastructure, and data for communities, to both self-govern and succeed. Even though these times are uncertain and very difficult, we are really excited to serve you. We'll make the time to consult with you and are really thrilled to share what we've learned in the last 10 years and collaborate with you to create great outcomes for audiences. Okay, that's a wrap. As always, we really appreciate the comments that we get on our LinkedIn posts, and on Twitter, I'm @DVellante, so thanks for that. And thank you for watching, everyone. This is Dave Vellante for theCUBE Insights, Powered by ETR. And we'll see you next time. (outro music)

Published Date : Mar 6 2020

SUMMARY :

From the SiliconANGLE Media office in Boston: advice on pivoting from physical to digital events. We'll make the time to consult with you.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Justin Warren | PERSON | 0.99+
Sanjay Poonen | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Clarke | PERSON | 0.99+
David Floyer | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Dave Volante | PERSON | 0.99+
George | PERSON | 0.99+
Dave | PERSON | 0.99+
Diane Greene | PERSON | 0.99+
Michele Paluso | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
Sam Lightstone | PERSON | 0.99+
Dan Hushon | PERSON | 0.99+
Nutanix | ORGANIZATION | 0.99+
Teresa Carlson | PERSON | 0.99+
Kevin | PERSON | 0.99+
Andy Armstrong | PERSON | 0.99+
Michael Dell | PERSON | 0.99+
Pat Gelsinger | PERSON | 0.99+
John | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Lisa Martin | PERSON | 0.99+
Kevin Sheehan | PERSON | 0.99+
Leandro Nunez | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Alibaba | ORGANIZATION | 0.99+
NVIDIA | ORGANIZATION | 0.99+
EMC | ORGANIZATION | 0.99+
GE | ORGANIZATION | 0.99+
NetApp | ORGANIZATION | 0.99+
Keith | PERSON | 0.99+
Bob Metcalfe | PERSON | 0.99+
VMware | ORGANIZATION | 0.99+
90% | QUANTITY | 0.99+
Sam | PERSON | 0.99+
Larry Biagini | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Brendan | PERSON | 0.99+
Dell | ORGANIZATION | 0.99+
Peter | PERSON | 0.99+
Clarke Patterson | PERSON | 0.99+

Intelligent Data Platform


 

>> Hi. This is Dave Vellante with theCUBE, and we're running a series of events with various episodes. The first one is about the Intelligent Data Platform. I'm here with Terry Richardson of HPE. Terry, what's that all about? >> So the Intelligent Data Platform is really the rebranding of our complete storage offering, but it transcends into our compute infrastructure. So what you'll learn in this particular session is what makes HPE absolutely unique in this marketplace, leveraging our technology for the data center. >> So watch this CrowdChat. We'll be holding events. As we said, we'll have episodes flowing in, how-tos and white papers and other great content. We'll see you in the CrowdChat.

Published Date : Apr 15 2019

SUMMARY :

We're running a series of events with various episodes. The Intelligent Data Platform is really the rebranding of our complete storage offering. Watch this CrowdChat; we'll be holding events.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Terry Richardson | PERSON | 0.99+
Terry | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
HPE | ORGANIZATION | 0.99+
first one | QUANTITY | 0.98+
CrowdChat | TITLE | 0.98+
twos | QUANTITY | 0.88+
theCUBE | ORGANIZATION | 0.83+

Sarbjeet Johal, Cloud Influencer | CUBEConversation, November 2018


 

(lively orchestral music) >> Hello, everyone. Welcome to this special CUBE Conversation. We're here in Palo Alto, California, theCUBE headquarters. I'm John Furrier, the cofounder of SiliconANGLE Media, cohost of theCUBE. We're here with fellow cloud influencer, friend of theCUBE, Sarbjeet Johal, who's always on Twitter. If you check out my Twitter stream, you'll find out we've always got some threads. He's always jumping in the CrowdChat and I think was on the leaderboard for our last CrowdChat on multi-cloud Kubernetes. Thanks for coming in. >> Yeah, thank you for having me here. >> Thanks for coming in. So you're very prolific on Twitter. We love the conversations. We're gettin' a lot of energy around some of the narratives that have been flowing around, obviously helped this week by the big news of IBM acquiring Red Hat for, what was it, 30, what was the number, 34? >> 34, yeah. >> $34 billion, huge premium, essentially changing the game in open source, some think, some don't, but it begs the question, you know, cloud obviously is relevant. Ginni Rometty, the CEO of IBM, actually now saying cloud is where it's at, 20% have been on the cloud, 80% have not yet moved over there, trillion-dollar market which we called, actually, I called, a few years ago when I wrote my Forbes story about Amazon, the Trillion Dollar Baby I called it. This is real. >> Yeah. >> So apps are moving to the cloud, value for businesses on the cloud, people are seeing accelerated timelines for shipping. Software. >> Yeah. >> Software is eating the world. Cloud is eating software, and data's at the center of it. So I want to get your thoughts on this, because I know that you've been talking a lot about technical debt, you know, the role of developer, cloud migration. The reality is, this is not easy. If you're doin' cloud native, it's pretty easy. >> Still pretty easy, yeah. >> If that's all you got, right, so if you're a startup and/or built on the cloud, you really got the wind at your back, and it's lookin' really good. >> Yeah. >> If you're not born in the cloud, you're an IT shop, they've been consolidating for years, and now told to jump to a competitive advantage, you literally got to make a pivot overnight. >> Yeah, actually, at a high level, I think cloud consumption you can divide into two buckets, right? One is the greenfield which, as you said, it's not slam dunk, all these startups are born in cloud, and all these new projects, systems of innovation, what I usually refer to those, are born in cloud, and they are operated in cloud, and at some point they will sort of fade away or die in cloud, but the hard part is the legacy applications sitting in the enterprise, right? So those are the trillion dollar sort of what IBM folks are talking about. That's a messy problem to tackle. Within that, actually, there are some low-hanging fruits. Of course, we can move those workloads to the cloud. I usually don't refer to the workloads as applications because people are sort of religiously attached to the applications. They feel like it's their babies, right? >> Yeah. >> So I usually say workload, so some workloads are ripe for the cloud. It's data mining, BI, and also the AI part of it, right? So but some other workloads which are not ripe for the cloud right now or they're hard to move, like the ERP systems, systems of record and systems of engagement, or what we call CRMs and marketing sort of applications, which are legacy ones.
>> Yeah, hard-coded, operationalized software frameworks and packages and vendors like Oracle. >> Yes. >> They're entrenched. >> Oracle, SAP, and there's so many other software vendors that have provided tons of software to the data centers that they're sitting there, and the hard part is that nobody wants to pull the plug on the existing applications. I've seen that time and again. I have done, my team has done more than 100 data center audits from EMC and VMware days. We have seen that time and again. Nobody wants to pull the plug on the application. >> 'Cause they're runnin' in production! (laughs) >> They are running in production. And it's hard to measure the usage of those applications, also, that's a hard part of the sort of old stack, if you will. >> Yeah. So the reality is, this is kind of getting to the heart of what we wanted to talk about, which is, you know, vendor hype and market realities. >> Yeah. >> The market reality is, you can't unplug legacy apps overnight, but you got a nice thing called containers and Kubernetes emerging, that's nice. >> Yeah. >> Okay, so check, I love that, but still, the reality is, okay, then who does it? >> Yeah. >> Do I add more complexity? We just had Jerry Chen and hot startup Rockset on, they're trying to reduce the complexity by just having a more simple approach. This is a hard architectural challenge. >> It is. >> So that's one fundamental thing I want to discuss with you. And then there's the practical nature of saying, assuming you get the architecture right, migrating and operating. Let's take those as separate, let's talk architecture, then we'll talk operating and migrating. >> Okay. >> Architecturally, what do people do, what are people doing, what are you seeing, what do you think is the right architecture for cloud architects, because that's a booming position. >> Yeah. >> There's more and more cloud architects out there, and the openings for cloud architects are massive. >> Yeah, I think in architecture, the microservices are on the rise. There are enabling technologies behind it. It doesn't happen sort of magically overnight. We have had some open source sort of development in that area; the RESTful APIs actually gave a boost to the microservices. Now we can easily inter-operate between applications, right? So and our sort of, sorry, I'm blanking out, so our way to divide the compute into the sort of micro-chunks, from VM, virtual machine, to the container, to the next level, is the serverless, right? So that is giving a boost to the microservices, and the integration technologies are improving at the same time. The problem still lies in the data, which is the storage part and the data part and the network, and the network is closely associated with security. So security and network are two messy parts. They are in the architecture, even in the pure cloud architecture in the Kubernetes world, those are two sort of hard parts. And Cisco is trying to address the network part. I speak, I spoke to some folks there, and what they are doing in that space, they are addressing the network and security part, sort of deepening-- >> And it's a good time for them to do that. >> Yeah. >> Because, I mean, you go back, and you know, we covered DevNet Create, which is Susie Wee, she's a rising star at Cisco, and now she's running all of DevNet. So the developer network within Cisco has a renaissance because, you know, you go back 20 years ago, if you were a network guy, you ran the show, I mean, everything ran on the network. The network was everything.
The network dictated what would happen. Then it kind of went through a funk of, like, now cloud native's hot and serverless, but now the programmability's hitting the network because, remember, the holy trinity of transformation is compute, storage, and networking. (laughs) >> Yeah. >> Those aren't going away. >> Yeah, they aren't going away. >> Right, so networking now is seeing some, you know, revitalization because you can program it, you can automate it, you can throw DevOps at it. This is kind of changing the game a little bit. So I'm intrigued by the whole network piece of it because if you can automate some network with containers and Kubernetes and, say, service meshes, then it becomes programmable, then it's infrastructure as code. >> Yeah, exactly. >> Infrastructure as code. It has to cover all three of those things. >> That is true, and another aspect is that we talk about multi-cloud all the time, which Cisco is focusing on also, like IBM, like VMware, like many other players who talk about multi-cloud, but the problem with the multi-cloud right now is that you cannot take your security policies from one cloud provider to another and then just say, okay, just run there, right? So you can do the compute easy, containers, right, or Kubernetes are there, but you can't take the network as is, you cannot, you can still take the storage but not storage policies, so the policy-driven computing is still not there. >> Yeah. >> So we need, I think, more innovation in that area. >> Yeah, there's some technical issues. I talk to a lot of startups, and they're jumpin' around from Azure to Amazon, and everyone comes back to Amazon because they say, and I'm not going to name names, but I'll just categorically say what's going on is when they get to Microsoft and Oracle and IBM, the old kind of guard, is they come in and they find that they check the boxes on the literature, oh, they do this, that, and that, but it's really just a lot of reverse proxies, there's a lot of homegrown stuff in there-- >> Yeah. >> That are making it work and hang together but not purely built from the ground up. >> Exactly, yeah, so they're actually sort of re-bottling the old sort of champagne kind of stuff, like they re-label old stuff and put layers of abstraction on top of it, and that's why we're having those problems with the sort of legacy vendors. >> So let's get into some of the things that I know you're talking about a lot on Twitter, that we're engaging on with the community, which is migration, and so I want to kind of put a context to the questions so we can riff together on it. Let's just say that you and I were hired by the CIO of a huge enterprise, financial services, pick your vertical. >> Yeah. >> Hey, Sarbjeet and John, fix my problems, and they give us the keys to the kingdom, bag of money, whatever it takes, go make it happen. What do we do, what are the first things that we do? Because they got a legacy, we know what it looks like, you got the networks, your rack and stack, top-of-rack switches, you got perimeter-based security. We got to go in and kind of level the playing field. What's our strategy, what do we recommend? >> Yeah, the first thing first, right? So first, we need to know the drivers for the migration, right, what is it? Is it cost-cutting, is it agility, is it mergers and acquisitions? So what are the, what is the main driver? So knowing that actually will help us like divvy up the problem, actually divide it up.
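Johal's point above, that security policies cannot be taken from one cloud provider to another, is easy to make concrete. In the hedged Python sketch below, both provider formats are invented stand-ins rather than real cloud APIs; the translation step in the middle is exactly the portability gap he describes.

    # Illustrative only: one provider-neutral firewall rule rendered into two
    # hypothetical provider-specific shapes. Real clouds differ far more.

    neutral_rule = {"name": "allow-web", "port": 443, "cidr": "10.0.0.0/16"}

    def to_provider_a(rule):
        # Provider A (hypothetical) wants ingress objects keyed by protocol.
        return {"IpProtocol": "tcp", "FromPort": rule["port"],
                "ToPort": rule["port"], "CidrIp": rule["cidr"]}

    def to_provider_b(rule):
        # Provider B (hypothetical) wants a flat, named rule string.
        return f"{rule['name']}: permit tcp {rule['cidr']} any eq {rule['port']}"

    print(to_provider_a(neutral_rule))
    print(to_provider_b(neutral_rule))

Until that translation is standardized, every additional cloud multiplies the policy surface that has to be maintained by hand.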
The next thing, the next best practice is, I always suggest, I've done quite a few migrations, is that you do the application portfolio analysis first. You want to find that low-hanging fruit which can be moved to the cloud first. The main reason behind that is that your people and processes need to ease into using the cloud. I use the consumption term a lot, actually on Twitter you see that, so I'm a big fan of consumption economics. So your people and processes need to adapt, like your change control, change management, ITSM, the old stuff still is valid, actually. We're giving it a new name, but those problems don't go away, right? How you log a ticket, how the support will react and all that stuff, so it needs to map to the cloud. SLA is another less talked about topic in our circles on Twitter, and our industry partners don't talk about it, but that's another interesting part. Like what are the SLAs needed for which applications and so forth. So first do the application profiling, find the low-hanging fruit. Go slow in the beginning, create the phases, like phase one, phase two, phase three, phase four. And it also depends on the number of applications, right? IBM folks were talking about that thousand, the average number of applications per enterprise. I think it's more than a thousand, I've seen it. And that, just divvy up the problem. And then another best practice I've learned is migrate as is, do not transform and migrate, because then, if something is not working over there, or there's a performance problem or any latency problem, you will blame it on your newer architecture, if you will. Move as is, then transform over there. And if you want me to elaborate a little more on the transformation part, I usually divide transformation into three buckets, actually, this is what I tell the CIOs and CTOs and CEOs, that transformation is of three types. Well, after you move, transformation, first it is the infrastructure-led transformation. You can do the replatforming and go from Windows to Linux and Linux to AIX and all that stuff, like you can go from VM to container kind of stuff, right? And the second is a process-led transformation, which is that you change your change control, change management, policy-driven computing, if you will, so you create automation there. The third thing is the application, where you open the hood of the application and refactor the code and do the Web service enablement of your application so that you can weave in the systems of innovation and plug those into the existing application. So you want to open your application. That's the whole idea behind all this sort of transformation: your applications are open so you can bring in the data and take out the data as you weave. >> From your conversations and analysis, how does cloud, once migrations happen in cloud operations, how does that impact traditional network, network architecture, network security, and application performance? >> On the network side, actually, how does it, let me ask you a question, what do you mean by how does it-- >> In the old days, you provisioned a VLAN. >> The older stuff? >> So I got networks out there, I got a big enterprise, okay, we know how to run the networks, but now I'm movin' to the cloud. >> Yeah. >> I'm off premises, I'm on premise, now I'm in the cloud. >> Yeah. >> How do I think about the network differently? Who's provisioning the subnets, who's doing the VPNs?
You know, where's the policy, all these policy-based things that we're startin' to see at Kubernetes. >> Yeah. >> They were traditionally like network stuff-- >> You knew what it was. >> That's now happening at the microservices level. >> Yeah. >> So new paradigm. >> The new paradigm, actually, the whole idea is that your network folks, your storage folks, your server folks, what they used to do in-house, they need to be able to program, right? That's the number one thing. So you need to retrain your workforce, right? And if you don't have that, you cannot retrain people overnight, so then you bring in some folks who know how to program networks and bring those in. There's a big misconception from people that the service provider, the cloud service provider, is responsible for the security of your applications or for the sort of segmentation of your network. They are not, actually, they don't have any liability over security if you read the SLAs. It's your responsibility to have the sort of right firewalling, right checks and balances in place for the network, for storage, for compute, the right policies in place. It's your responsibility. >> So let's talk about some tweets you've been doin' 'cause I've been wanting to pull the ones that I like. You tweeted a couple days ago, we don't know how to recycle failed startups. >> Yeah. (chuckles) >> Okay, and I said open source. And you picked up and brought up another image, is open source a dumping ground for failed startups? And it was interesting because what I love about open source is, in the old days of proprietary software, if the company went under, the code went under with it, but at least now, with open source, at least something can survive. But you bring up this dumping concept, that also came up in an interview earlier today with another guest, which was, with all this contribution coming in from vendors, it's almost like there's a dumping going on into open source in general, and you can't miss a beat without five new announcements per day that, you know, someone's contributing their software from this project or a failed, even failed startup, you know, last hope, let's open source it. Is that good or bad, I mean, what's your take on that, what was your posture or thinking around this conversation? Is it good, is it bad? >> Yeah, I believe it's, it's an economic problem, an economics thing, right? So when somebody's like proprietary model doesn't work, they say, okay, let me see if this works, right? Actually, they always go first with like, okay, let me sell-- >> Make money. >> Let me make money, right? A higher margin, right, everybody loves that, right? But then, if they cannot penetrate the market, they say, okay, let me make it open source, right? And then I will get the money from the support, or my own distro, like, distros are a big like open source killer, I said that a few times. Like the vendor-specific distributions of open source, they kill open source like nothing else does. Because I was at Rackspace when we open-sourced OpenStack, and I saw what happened to OpenStack. It was like eye-opening, so everybody kind of hijacked OpenStack and started putting their own sort of flavors in place. >> Yeah, yeah, we saw the outcome of that. >> It niched into infrastructure as a service, kind of has a special purpose-built view. >> And when I-- >> And then cloud native comes along, that didn't help either. Cloud grew at that time, too, talking about the 2008 timeframe.
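Returning for a moment to the shared-responsibility point above: the segmentation that is "your responsibility" is literally customer-authored code or configuration. Here is a short sketch using boto3, the AWS SDK for Python, purely as one example; it assumes valid AWS credentials, and the VPC ID is a placeholder you would substitute.

    # The provider runs the network fabric; rules like these are on you.
    # Sketch assumes AWS credentials and an existing VPC ("vpc-123456" is
    # a placeholder).
    import boto3

    ec2 = boto3.client("ec2")
    sg = ec2.create_security_group(
        GroupName="app-tier",
        Description="Customer-managed segmentation for the app tier",
        VpcId="vpc-123456",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.0.0.0/16"}],  # only your own subnets
        }],
    )

The SLA point stands regardless of provider: the group, its rules, and any mistakes in them belong to the customer, not the cloud.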
Yeah, yeah, and exactly. And another, why I said that was, it was in a different context, actually. I invested some money into an incubator in Berkeley, The Batchery, so we have taken what, 70-plus startups through that program so far, and I've seen that pattern there. So I will interview the people who want to bring their startup to our incubator and all that, and then after, most of them fail, right? >> Yeah. >> They kind of fade away or they leave, they definitely leave our incubator after a certain number of weeks, but then you see like what happens to them, and now also living in the Valley, you can't avoid it. I worked with 500 Startups a little bit and used to go to their demo days from the Rackspace days because we used to have a deal with them, a marketing deal, so the pattern I saw there was, there's a lot of innovation, there was a lot of brain power in these startups, and we don't know what, these people just fade away. We don't have a mechanism to say, okay, hey, you are doing this, and we are also doing similar stuff, we are a little more successful, so let's merge these two things and make it work. So we don't know how to recycle the startups. So that's what was on it. >> It's almost a personal network of intellectual capital. >> Yeah. >> Kind of, there needs to be a new way to network in the IP that's in people's heads. Or in this case, if it's open source, that's easy there, too, so being inaccessible. >> So there's no startup, there's no Internet of startups, if you will. >> Yeah, so there's no-- >> Hey, you start a CUBE group. (Sarbjeet laughing) You'll do it, start a CrowdChat. All right, I want to ask you about this consumption economics. >> Yeah. >> I like this concept. Can you take a minute to explain what you mean by consumption economics? You said you're all over it. I know you talk a lot about it on Twitter. >> Yes. >> What is it about, why is it important? >> Actually, the pattern I've seen in the tech industry for the last 24, 25 years in Silicon Valley, so the pattern I've seen is that everybody focuses on the supply side, like we do this, we like, we're going to change the way you work and all that stuff, but people usually do not focus on the consumption side of things, like people are consuming things. I'm a great fan of a theory called the Jobs to Be Done theory. If you get time, take a look at that. So what jobs are people trying to do and how you can solve that problem. Actually, if you approach your products and services from that angle, that goes a long way. Another aspect I talk about, the consumption economics, is the age of micro-consumption, and again, there are reasons behind it. The main reason is there's so much thrown at us individually and also enterprise-wise, like so much technology is thrown at us. If we try to batch, like if we were to say, okay, we're not going to consume the technology now, and we're going to do it every six months, like we're going to release every six months, or new software or new packages, and also at the same time, we will consume every six months, that doesn't work. So the whole notion when I talk about the micro-consumption is that you keep bringing the change in micro-chunks. And I think AWS has mastered the game of micro-supply, as a micro-supplier of that micro-change. >> Yeah. >> If you will. So they release-- >> And by the way, they're very customer-centric, so listening to the demand side. >> Exactly.
So they kind of walk hand in hand with the customer in a way that the customer wants this, they're needing this, so let us release it. They don't wait for the old traditional model of, like, okay, every year there's a new big release and there are service packs and patches and all that stuff, even though other vendors have moved along the industry. But they still have longer cycles, they still release like 10 things at a time. I think that doesn't work. So you have to give, as a supplier, to the masses of the workers of the world in HPs and IBMs, give the change in smaller chunks, don't give them monolithic. When you're marketing your stuff, even the marketing message should be in micro-chunks, like, even if you created five sort of features, let's say in Watson, right, just give them one at a time. Be developer-friendly because developers are the people who will consume that stuff. >> Yeah, and then making it less about the supply side and more about micro-chunks, microservices, micro-supply. >> Yeah. >> Having a developer piece also plays well because they're also the ones who can help assemble the micro, it's in a LEGO model of composability. >> Yeah, exactly. >> And so I think that's definitely right. The other thing I wanted to get your thoughts on, validated by Jerry Chen at Greylock and his hot startups and a few others, is my notion of stack overhaul. The changes in the stack are significant. I tweeted, and you commented on it, on the Red Hat IBM deal 'cause they were talkin' about, oh, the IBM stack is going to be everywhere, and they're talking about the IBM stack and the old full-stack developer model, but if you look at the consumption economics, you look at horizontally scalable cloud, native serverless and all those things goin' on with Kubernetes, the trend is a complete radical shifting of the stack where now the standardization is the horizontally scalable, and then the differentiation's at the top of the stack, so the stack has tweaked and torqued a little bit. >> Yeah. >> And so this is going to change a lot. Your thoughts and reaction to that concept of stack, not a complete, you know, radical wholesale change, but a tweak. >> Actually our CTO at Rackspace, John Engates, gave a sort of speech at one of the conferences here in the Bay Area; the title of that was Stack, What Stack?, right? So the point he was trying to make was, we are not in the blue stack, red stack anymore, we're across stacks, actually. There are a lot of the sort of small LEGO pieces, and we're trying to put those together. And again, the reason behind that is because there's some enabling technology like Web services and RESTful APIs, so those have enabled us to-- >> And new kinds of glue layers, if you will. >> Yeah, yeah. >> Abstraction layers. >> Yeah, I call it digital glue. There's a new type of digital glue, and now we have, we are seeing the emergence of low code, no code sort of paradigms coming into the play, which is a long debate in itself. So they are changing the stack altogether. So everything is becoming kind of lightweight, if you will, again-- >> And the level of granularity is getting, you know, thinner and thinner, not macro. So you know, macroservices doesn't exist. That was my, I think, my tweet, you know, macroservices or microservices? >> Yeah. >> Which one do you think's better? And we know what's happening with microservices. That is the trend. >> That is the trend. >> So that is the antithesis of macro. >> Yeah. >> Or monolithic.
Yeah, so there's a saying in tech, actually I will rephrase it, I don't know exactly how it goes: we tend to overestimate the impact of a technology in the short run and underestimate it in the long term, right? So there's a famous saying, somebody said that, and that's, I think that's so true. What we actually wanted to do after the dot-com bust was the object-oriented, like the sort of black box services, it was, we called them Web services back then, right? >> Yeah. >> There were books written by IBM-- >> Service-oriented architecture-- >> Yeah, SOA. >> Web services, RSS came out of that. >> Yes. >> I mean, a lot of good things that are actually in part of what the vision is happening today. >> It's happening now, actually, it's just happening today. And mobile has changed everything, I believe, not only on the consumer side, even on the economic side. >> I mean, that's literally 16, 17 years later. >> Yes, exactly, it took that long. >> It's the gestation period. >> Yes. >> Bitcoin, 10 years ago yesterday, the white paper was published. >> Yeah. >> So the acceleration's certainly happening. I know you're a big fan of blockchain, you've been tweeting about it lately. Thoughts on blockchain, what's your view on blockchain? Real, going to have a big impact? >> I think it will have huge impact, actually. I've been studying on it, actually. I was light on it, now I'm a little bit, I'm reading on it, and I understand. I've talked to people who are doing this work. I think it will have a huge impact, actually. The problem right now with blockchain is the speed, right? >> It's slow, yeah. >> So yeah, it's very slow, dog slow, if you will. But I think that is a technical problem, we can solve that. There's no sort of functional problem with the blockchain. Actually, it's a beautiful thing. Another aspect which comes into play is the data sovereignty. So blockchains actually are replicated throughout the world if you want the worldwide money exchange and all that kind of stuff going around. We will need to address that because the data in Switzerland needs to sit there, and data in the U.S. needs to stay in the U.S. The blockchain actually, it doesn't do that. You have a copy of the same data everywhere. >> Yeah, I mean, you talk about digital, software-defined money, software-defined data center. I mean, it's all digital. I mean, someone once said whatever gets digitized grows exponentially. (Sarbjeet laughing) Oh, that was you! >> Actually I-- >> On October 30th. >> That was, that came from a book, actually. It's called Exponential Organizations. Actually, there are two great books I will recommend for everybody to read, actually there's a third one also. So (laughs) the two are, one is Exponential Organizations. It's a pretty thin book, you should pick it up. And it talks about like whatever gets digitized grows exponentially, but our organizations are not like geared towards handling that exponential growth. And the other one is Consumption Economics. The title of the book is Consumption Economics, actually. I saw that book after I started talking about consumption economics myself. I'm an economics major, actually, so that's why I talk about that kind of stuff and those kinds of comments, so.
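Both of the blockchain properties raised just above, the speed penalty and the everywhere-replicated data, fall out of the basic structure. The toy Python sketch below uses only the standard library's hashlib and json; it is a teaching sketch, not any production protocol. Every block commits to its predecessor's hash, so every participant must hold and re-verify the same chain, which is why the data cannot simply stay in Switzerland or the U.S. alone.

    # Toy blockchain: each block commits to its predecessor's hash, so the
    # full chain is replicated and re-verified by every participant.
    import hashlib, json

    def block_hash(block):
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    chain = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]
    for i, data in enumerate(["pay alice 5", "pay bob 2"], start=1):
        chain.append({"index": i, "prev": block_hash(chain[-1]), "data": data})

    # Verification: tampering with any block breaks every later link.
    ok = all(chain[i]["prev"] == block_hash(chain[i - 1])
             for i in range(1, len(chain)))
    print("chain valid:", ok)  # chain valid: True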
Well, and I think one of the things, I mean, we've talked about this privately when we've seen each other at some of theCUBE events, I think economics, the chief economic officer role will be a title that will be as powerful as a CSO, chief security officer, because consumption economics, token economics, which is the crypto kind of dynamic of gamification or network effects, you got economics in cloud, you got all kinds of new dynamics that are now instrumented, that are, they're throwin' off numbers. So there's math behind things, whether it's cryptocurrency, whether it's math behind reputation, or anything. >> Yeah. >> Math is driving everything, machine learning, heavy math-oriented algorithms. >> Yeah, actually at the end of the day, economics matters, right? That's what we are all trying to do, right? We're trying to do things faster and cheaper, right? That's what automation is all about. >> And simplifying, too. >> And simplifying service. >> You can't throw complexity in, more complexity. >> Yeah. >> That's exponential complexity. >> Sometimes while we are trying to simplify things, and I also said, like many times, the tech is like medicine, right? I've said that many times. (laughs) Tech is like medicine, every pill has a side effect. Sometimes when we are trying to simplify stuff, we add more complexity, so. >> Yeah. What's worse, the pain or the side effects? Pick your thing. >> Yeah, you pick your thing. And your goal is to sort of reduce the side effects. They will be there, they will be there. And what is digital transformation? It's all about business. It's less about technology, technology's a small piece of that. It's more about business models, right? So we're trying to, when we talk about micro-consumption and the sharing economy, they're kind of similar concepts, right? So Ubers of the world and Airbnbs all over the world, so those new business models have been enabled by technology, and we want to replicate that with medicine, with, I guess, education, autos, and you name it. >> So we obviously believe in microcontent at theCUBE. We've got the Clipper tool, the search engine. >> I love that. >> So the CUBEnomics. It's a book that we should be getting on right away. >> Yeah, we should do that! >> CUBEnomics. >> CUBEnomics, yeah. >> The economics behind theCUBE interviews. Sarbjeet, thank you for coming on. Great to see you, and thank you for your participation-- >> Thanks, John. >> And engagement online in our digital community. We love chatting with you and always great to see you, and let's talk more about economics and digital exponential growth. It's certainly happening. Thanks for coming in, appreciate it. >> It was great being here, actually. >> All right, the CUBE Conversation, here in Palo Alto Studios here for theCUBE headquarters. I'm John Furrier, thanks for watching. (lively orchestral music)

Published Date : Nov 1 2018

SUMMARY :

John Furrier sits down with cloud influencer Sarbjeet Johal at theCUBE headquarters in Palo Alto to unpack the week's big news, IBM's $34 billion acquisition of Red Hat, and what it signals about cloud. They divide cloud consumption into greenfield workloads and the messier trillion-dollar problem of legacy enterprise applications, and discuss why microservices, RESTful APIs, and serverless are reshaping architecture while network and security policies still don't port across clouds. On migration, Johal's best practices are: know your drivers, do the application portfolio analysis first, migrate as is, then transform in three buckets, infrastructure-led, process-led, and application-led. The conversation closes on consumption economics and the age of micro-consumption, recycling the intellectual capital of failed startups, blockchain's promise despite its speed and data sovereignty problems, and the idea that math and economics now drive everything digital.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
Ginni Rometty | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Sarbjeet | PERSON | 0.99+
John Furrier | PERSON | 0.99+
John Engates | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Susie Wee | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Bay Area | LOCATION | 0.99+
Jerry Chen | PERSON | 0.99+
Sarbjeet Johal | PERSON | 0.99+
October 30th | DATE | 0.99+
Switzerland | LOCATION | 0.99+
2008 | DATE | 0.99+
Silicon Valley | LOCATION | 0.99+
80% | QUANTITY | 0.99+
November 2018 | DATE | 0.99+
Amazon | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
SiliconANGLE Media | ORGANIZATION | 0.99+
IBMs | ORGANIZATION | 0.99+
U.S. | LOCATION | 0.99+
Windows | TITLE | 0.99+
Linux | TITLE | 0.99+
five new announcements | QUANTITY | 0.99+
Rackspace | ORGANIZATION | 0.99+
OpenStack | TITLE | 0.99+
20% | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
two buckets | QUANTITY | 0.99+
10 things | QUANTITY | 0.99+
10 years ago | DATE | 0.99+
Berkeley | LOCATION | 0.99+
EMC | ORGANIZATION | 0.98+
first | QUANTITY | 0.98+
Rockset | ORGANIZATION | 0.98+
three | QUANTITY | 0.98+
third one | QUANTITY | 0.98+
this week | DATE | 0.98+
34 | QUANTITY | 0.98+
LEGO | ORGANIZATION | 0.98+
Greylock | ORGANIZATION | 0.98+
two things | QUANTITY | 0.98+
Twitter | ORGANIZATION | 0.98+
more than thousand | QUANTITY | 0.98+
one | QUANTITY | 0.98+
20 years ago | DATE | 0.98+
second | QUANTITY | 0.97+
One | QUANTITY | 0.97+
$34 billion | QUANTITY | 0.96+
AIX | TITLE | 0.96+
HPs | ORGANIZATION | 0.96+
two great books | QUANTITY | 0.96+
DevNet | TITLE | 0.95+
two sort | QUANTITY | 0.95+
70-plus startups | QUANTITY | 0.95+
theCUBE | ORGANIZATION | 0.95+
30 | QUANTITY | 0.95+
CUBEnomics | ORGANIZATION | 0.95+
Palo Alto Studios | LOCATION | 0.94+
earlier today | DATE | 0.94+
DevNet Create | TITLE | 0.94+
Exponential Organizations | TITLE | 0.93+
500 Startups | ORGANIZATION | 0.93+

Dell EMC: Get Ready For AI


 

(bright orchestra music) >> Hi, I'm Peter Burris. Welcome to a special digital community event brought to you by Wikibon and theCUBE. Sponsored by Dell EMC. Today we're gonna spend quite some time talking about some of the trends in the relationship between hardware and AI. Specifically, we're seeing a number of companies doing some masterful work incorporating new technologies to simplify the infrastructure required to take full advantage of AI options and possibilities. Now at the end of this conversation, series of conversations, we're gonna run a CrowdChat, which will be your opportunity to engage your peers and engage thought leaders from Dell EMC and from Wikibon SiliconANGLE and have a broader conversation about what does it mean to be better at doing AI, more successful, improving time to value, et cetera. So wait 'til the very end for that. Alright, let's get it kicked off. Tom Burns is my first guest. And he is the Senior Vice President and General Manager of Networking Solutions at Dell EMC. Tom, it's great to have you back again. Welcome back to theCUBE. >> Thank you very much. It's great to be here. >> So Tom, this is gonna be a very, very exciting conversation we're gonna have. And it's gonna be about AI. So when you go out and talk to customers specifically, what are you hearing then as they describe their needs, their wants, their aspirations as they pertain to AI? >> Yeah, Pete, we've always been looking at this as this whole digital transformation. Some studies say that about 70% of enterprises today are looking at how to take advantage of the digital transformation that's occurring. In fact, you're probably familiar with the Dell 2030 Survey, where we went out and talked to about 400 different companies of very different sizes. And they're looking at all these connected devices and edge computing and all the various changes that are happening from a technology standpoint, and certainly AI is one of the hottest areas. There's a report I think that was co-sponsored by ServiceNow. Over 62% of the CIOs in the Fortune 500 are looking at AI as far as managing their business in the future. And it's really about user outcomes. It's about how do they improve their businesses, their operations, their processes, their decision-making using the capability of compute coming down from a cost perspective and the number of connected devices exploding, bringing more and more data to their companies that they can use, analyze, and put to use cases that really make a difference in their business. >> But they make a difference in their business, but they're also often, these use cases are a lot more complex. They're not, we have this little bromide that we use that the first 50 years of computing were about known process, unknown technology. We're now entering into an era where we know a little bit more about the technology. It's gonna be cloud-like, but we don't know what the processes are, because we're engaging directly with customers or partners in much more complex domains. That suggests a lot of things. How are customers dealing with that new level of complexity and where are they looking to simplify? >> You actually nailed it on the head. What's happening in our customers' environment is they're hiring these data scientists to really look at this data.
And instead of looking at analyzing the data that's being collected, they're spending more time worried about the infrastructure and building the components and looking at allocations of capacity in order to make these data scientists productive. And really, what we're trying to do is help them get through that particular hurdle. So you have the data scientists that are frustrated, because they're waiting for the IT Department to help them set up and scale the capacity that they need and infrastructure that they need in order to do their job. And then you got the IT Departments that are very frustrated, because they don't know how to manage all this infrastructure. So the question around do I go to the cloud? Do I remain on-prem? All of these are things that our companies, our customers, are continuing to be challenged with. >> Now, the ideal would be that you can have a cloud experience but have the data reside where it most naturally resides, given physics, given the cost, given bandwidth limitations, given regulatory regimes, et cetera. So how are you at Dell EMC helping to provide that sense of an experience based on what the workload is and where the data resides, as opposed to some other set of infrastructure choices? >> Well, that's the exciting part is that we're getting ready to announce a new solution called the Ready Solutions for AI. And what we've been doing is working with our customers over the last several years looking at these challenges around infrastructure, the data analytics, the connected devices, but giving them an experience that's real-time. Not letting them worry about how am I gonna set this up or management and so forth. So we're introducing the Ready Solutions for AI, which really focuses on three things. One is simplify the AI process. The second thing is to ensure that we give them deep and real-time analytics. And lastly, provide them the level of expertise that they need in a partner in order to make those tools useful and that information useful to their business. >> Now we want to not only provide AI to the business, but we also wanna start utilizing some of these advanced technologies directly into the infrastructure elements themselves to make it more simple. Is that a big feature of what the Ready Solutions for AI are? >> Absolutely, as I said, one of the key value propositions is around making AI simple. We are experts at building infrastructure. We have IP around compute, storage, networking, InfiniBand. The things that are capable of putting this infrastructure together. So we have tested that based upon customers' input, using traditional data analytics libraries and tool sets that the data scientists are gonna use, already pre-tested and certified. And then we're bringing this to them in a way which allows them, through a service provisioning portal, to basically set up and get to work much faster. The previous tools that were available out there, some from our competition, there were 15, 20, 25 different steps just to log on, just to get enough automation or enough capability in order to get the information that they need. The infrastructure allocated for this big data analytics through this service portal, we've actually gotten it down to around five clicks with a very user-friendly GUI, no CLI required. And basically, again, interacting with the tools that they're used to immediately, right out of the gate, like in stage three.
And then getting them to work in stage four and stage five so that they're not worried about the infrastructure, not worried about capacity, or whether it's gonna work. They basically are one, two, three, four clicks away, and they're up and working on the analytics that everyone wants them to work on. And heaven knows, these guys are not cheap. >> So you're talking about the data scientists. So presumably when you're saying they're not worried about all those things, they're also not worried about when the IT Department can get around to doing it. So this gives them the opportunity to self-provision. Have I got that right? >> That's correct. They don't need the IT to come in and set up the network, to do the CLI for the provisioning, to make sure that there are enough VMs or workloads that are properly scheduled in order to give them the capacity that they need. They basically are set with a preset platform. Again, let's think about what Dell EMC is really working towards, and that's becoming the infrastructure provider. We believe that the silos of servers, storage, and networking are being eliminated, that companies want a platform where they can enable those capabilities. So you're absolutely right. The part about the simplicity, or simplifying the AI process, is really giving the data scientists the tools they need to provision the infrastructure they need very quickly. >> And so that means that the AI, or rather the IT group, can actually start acting more like a DevOps organization as opposed to a specialist in one or another technology. >> Correct, but we've also given them the capability by giving them the usual automation and configuration tools that they're used to coming from some of our software partners, such as Cloudera. So in other words, you still want the IT Department involved, making sure that the infrastructure is meeting the requirements of the users. They're giving them what they want, but we're simplifying the tools and processes from the IT standpoint as well. >> Now we've done a lot of research into what happened in big data, and that's now likely to happen in the AI world. And a lot of the problems that companies had with big data was they conflated or they confused the objectives, the outcome of a big data project, with just getting the infrastructure to work. And they walked away often, because they failed to get the infrastructure to work. So it sounds like what you're doing is you're trying to take the infrastructure out of the equation while at the same time going back to the customer and saying, "Wherever you want this job to run or this workload to run, you're gonna get the same experience regardless." >> Correct, but we're gonna give them an improved experience as well. Because of the products that we've put together in this particular solution, combined with our compute, our scale-out NAS solution from a storage perspective, our partnership with Mellanox on InfiniBand or Ethernet switch capability, we're gonna give them deeper insights and faster insights. The performance and scalability of this particular platform is tremendous. We believe, in certain benchmark studies based upon the ResNet-50 benchmark, we've performed anywhere between two and a half to almost three times faster than the competition. In addition, from a storage standpoint, all of these workloads, all of the various characteristics that happen, you need a ton of IOPS. >> Yeah. >> And there's no one in the industry that has the IOPS performance that we have with our All-Flash Isilon product.
The capabilities that we have there we believe are somewhere around nine times the competition. Again, the scale-out performance while simplifying the overall architecture. >> Tom Burns, Senior Vice President of Networking and Solutions at Dell EMC. Thanks for being on theCUBE. >> Thank you very much. >> So there's some great points there about this new class of technology that dramatically simplifies how hardware can be deployed to improve the overall productivity and performance of AI solutions. But let's take a look at a product demo. >> Every week, more customers are telling us they know AI is possible for them, but they don't know where to start. Much of the recent progress in AI has been fueled by open source software. So it's tempting to think that do-it-yourself is the right way to go. Get some how-to references from the web and start building out your own distributed deep-learning platform. But it takes a lot of time and effort to create an enterprise-class AI platform with automation for deployment, management, and monitoring. There is no easy solution for that. Until now. Instead of putting the burden of do-it-yourself on your already limited staff, consider Dell EMC Ready Solutions for AI. Ready Solutions are complete software and hardware stacks pre-tested and validated with the most popular open source AI frameworks and libraries. Our professional services with proven AI expertise will have the solution up and running in days and ready for data scientists to start working in weeks. Data scientists will find the Dell EMC data science provisioning portal a welcome change for managing their own hardware and software environments. The portal lets data scientists acquire hardware resources from the cluster and customize their software environment with packages and libraries tested for compatibility with all dependencies. Data scientists choose between JupyterHub notebooks for interactive work, as well as terminal sessions for large-scale neural networks. These neural networks run across a high-performance cluster of PowerEdge servers with scalable Intel processors and scale-out Isilon storage that delivers up to 18 times the throughput of its closest all-flash competitor. IT pros will experience that AI is simplified as Bright Cluster Manager monitors your cluster for configuration drift down to the server BIOS using exclusive integration with Dell EMC's OpenManage APIs for PowerEdge. This solution provides comprehensive metrics along with automatic health checks that keep an eye on the cluster and will alert you when there's trouble. Ready Solutions for AI are the only platforms that keep both data center professionals and data scientists productive and getting along. IT operations are simplified, and that produces a more consistent experience for everyone. Data scientists get a customizable, high-performance, deep-learning service experience that can eliminate monthly charges spent on public cloud while keeping your data under your control. (upbeat guitar music) >> It's always great to see the product videos, but Tom Burns mentioned something earlier. He talked about the expansive expertise that Dell EMC has in bringing together advanced hardware and advanced software into more simple solutions that can liberate business value for customers, especially around AI. And so to really test that out, we sent Jeff Frick, who's the general manager and host of theCUBE, down to the bowels of Dell EMC's operations in Austin, Texas.
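The demo describes a point-and-click flow, but it is worth sketching what self-service provisioning looks like when a data scientist scripts it instead. Everything below is a hypothetical illustration: the endpoint, payload fields, and response shape are assumptions made for the example, not the actual Dell EMC provisioning portal API.

```python
# Hypothetical self-service provisioning request; the URL and fields
# are invented for illustration and are not Dell EMC's actual API.
import requests

PORTAL_URL = "https://provisioning.example.com/api/v1/environments"  # assumed

payload = {
    "owner": "data-scientist-01",
    "workload": "deep-learning",
    "nodes": 4,                    # compute nodes to allocate from the cluster
    "framework": "tensorflow",     # a pre-tested framework from the catalog
    "interface": "jupyterhub",     # or "terminal" for large batch training
    "storage_quota_tb": 10,        # share on the scale-out storage tier
}

resp = requests.post(PORTAL_URL, json=payload, timeout=30)
resp.raise_for_status()
env = resp.json()
print(f"Environment {env['id']} ready, notebook at {env['notebook_url']}")
```

The point of the portal, as Tom Burns described it, is that this whole request-and-wait loop collapses to a few clicks, with framework and dependency compatibility already validated.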
Jeff went and visited the Dell EMC HPC and AI Innovation Lab and met with Garima Kochhar, who's a tactical staff Senior Principal Engineer. Let's hear what Jeff learned. >> We're excited to have with us our next guest. She's Garima Kochhar. She's on the tactical staff and the Senior Principal Engineer at Dell EMC. Welcome. >> Thank you. >> From your perspective, what's kinda changing in the landscape from high-performance computing, which has been around for a long time, into more of the AI and machine learning and deep learning and stuff we hear about much more in a business context today? >> High-performance computing has applicability across a broad range of industries. So not just national labs and supercomputers, but the commercial space as well. And in our lab, we've done a lot of that work in the last several years. And then the deep learning algorithms, those have also been around for decades. But what we are finding right now is that the algorithms and the hardware, the technologies available, have hit that perfect point, along with industries' interest and the amount of data we have, to make it more, what we would call, mainstream. >> So you can build an optimum solution, but ultimately you wanna build industry solutions. And then even a subset of that, you invite customers in to optimize for their particular workflow or their particular business case, which may not match the perfect benchmark spec at all, right? >> That's exactly right. And so that's the reason this lab is set up for customer access, because we do the standard benchmarking. But you want to see what is my experience with this, how does my code work? And it allows us to learn from our customers, of course. And it allows them to get comfortable with the technologies, to work directly with the engineers and the experts so that we can be their true partners and trusted advisors and help them advance their research, their science, their business goals. >> Right. So you guys built the whole rack out, right? Not just the fun shiny new toys. >> Yeah, you're right. So typically, when something fails, it fails spectacularly. Right, so I'm sure you've heard horror stories where there was equipment on the dock and it wouldn't fit in the elevator, or things like that, right? So there are lots of other teams that handle, of course Dell's really good at this, the logistics piece of it, but even within the lab. When you walk around the lab, you'll see our racks are set up with power meters. So we do power measurements. Whatever best practices in tuning we come up with, we feed that into our factories. So if you buy a solution, say targeted for HPC, it will come with different BIOS tuning options than a regular, say Oracle, database workload. We have this integration into our software deployment methods. So when you have racks and racks of equipment, or one rack of equipment, or maybe even three servers, and you're doing an installation, all the pieces are baked-in already and everything is easy, seamless, easy to operate. So our idea is... The more that we can do in building integrated solutions that are simple to use and performant, the less time our customers and their technical computing and IT Departments have to spend worrying about the equipment, and they can focus on their unique and specific use case. >> Right, you guys have a services arm as well. >> Well, we're an engineering lab, which is why it's really messy, right? Like if you look at the racks, if you look at the work we do, we're a working lab. We're an engineering lab.
We're a product development lab. And of course, we have a support arm. We have a services arm. And sometimes we're working with new technologies. We conduct training in the lab for our services and support people, but we're an engineering organization. And so when customers come into the lab and work with us, they work with it from an engineering point of view, not from a pre-sales point of view or a services point of view. >> Right, kinda what's the benefit of having the experience in this broader set of applications as you can apply it to some of the newer, more exciting things around AI, machine learning, deep learning? >> Right, so the fact that we are a shared lab, right? Like the bulk of this lab is High Performance Computing and AI, but there's lots of other technologies and solutions we work on over here. And there's other labs in the building that we have colleagues in as well. The first thing is that the technology building blocks for several of these solutions are similar, right? So when you're looking at storage arrays, when you're looking at Linux kernels, when you're looking at network cards, or solid state drives, or NVMe, several of the building block technologies are similar. And so when we find interoperability issues, which you would think that there would never be any problems, you throw all these things together, they always work like-- >> (laughs) Of course (laughs). >> Right, so when you sometimes, rarely, find an interoperability issue, that issue can affect multiple solutions. And so we share those best practices, because we engineers sit next to each other and we discuss things with each other. We're part of the larger organization. Similarly, when you find tuning options and nuances and parameters for performance or for energy efficiency, those also apply across different domains. So while you might think of Oracle as something that's been done for years, with every iteration of technology there's new learning, and that applies broadly across anybody using enterprise infrastructure. >> Right, what gets you excited? What are some of the things that you see, like, "I'm so excited that we can now apply this horsepower to some of these problems out there?" >> Right, so that's a really good point, right? Because most of the time when you're trying to describe what you do, it's hard to make everybody understand. Well, not what you're doing, right? But sometimes with deep technology it's hard to explain what's the actual value of this. And so a lot of work we're doing in terms of exascale, it's to grow, like, the human body of knowledge forward, to grow the science happening in each country, moving that forward. And that's kind of, at the higher end, when you talk about national labs and defense, and everybody understands that needs to be done. But when you find that your social media is doing some face recognition, everybody experiences that and everybody sees that. And when you're trying to describe the, we're all talking about driverless cars, or we're all talking about, "Oh, it took me so long, because I had this insurance claim and then I had to get an appointment with the appraiser and they had to come in." I mean, those are actual real-world use cases where some of these technologies are going to apply. So even industries where you didn't think of them as being leading-edge on the technical forefront in terms of IT infrastructure and digital transformation, in every one of these places you're going to have an impact of what you do. >> Right.
>> Whether it's drug discovery, right? Or whether it's next-generation gene sequencing, or whether it's designing the next car, like pick your favorite car, or when you're flying in an aircraft, the engineers who were designing the engine and the blades and the rotors for that craft were using technologies that you worked with. And so now it's everywhere, everywhere you go. We talked about 5G and IoT and edge computing. >> Right. >> I mean, we all work on this collectively. >> Right. >> So it's our world. >> Right. Okay, so last question before I let you go. Just being able to bring the resources to bear, in terms of being in your position, to do the work when you've got the massive resources now behind you. You have Dell, the merger of EMC, all the subset brands, Isilon, so many brands. How does that help you do your job better? What does that let you do here in this lab that probably a lot of other people can't do? >> Yeah, exactly. So when you're building complex solutions, there's no one company that makes every single piece of it, but the tighter that things work together, the better that they work together. And that's directly through all the technologies that we have in the Dell Technologies umbrella and with Dell EMC. And that's because of our super close relationships with our partners that allows us to build these solutions that are painless for our customers and our users. And so that's the advantage we bring. >> Alright. >> This lab and our company. >> Alright, Garima. Well, thank you for taking a few minutes. Your passion shines through. (laughs) >> Thank you. >> I really liked hearing about what Dell EMC's doing in their innovation labs down at Austin, Texas, but it all comes together for the customer. And so the last segment that we wanna bring you here is a great segment. Nick Curcuru, who's the Vice President of Big Data Analytics at Mastercard, is here to talk about how some of these technologies are coming together to speed value and realize the potential of AI at Mastercard. Nick, welcome to theCUBE. >> Thank you for letting me be here. >> So Mastercard, tell us a little bit about what's going on at Mastercard. >> There's a lot that's going on with Mastercard, but I think the most exciting things that we're doing at Mastercard right now are with artificial intelligence and how we're bringing the ability for artificial intelligence to really allow a seamless transition when someone's actually doing a transaction, and also bringing a level of security to our customers and our banks and the people that use Mastercards. >> So AI to improve engagement, provide a better experience, but that's a pretty broad range of things. What specifically, when you think about how AI can be applied, what are you looking to? Especially early on. >> Well, let's actually take a look at our core business, which is being able to make sure that we can secure a payment, right? So at this particular point, we're applying AI to biometrics. But not just a fingerprint or facial recognition, but actually how you interact with your device. So you think of the Internet of Things, and you're sitting back saying, "I'm swiping my device, my mobile device, or how I interact with a keyboard." Those are all key signatures. And we, with NuData, a company we've just acquired, are taking that capability to create a profile and make that a part of your signature. So it's beyond just a fingerprint. It's beyond just a facial scan.
It's actually how you're interacting, so that we know it's you. >> So there's a lot of different potential sources of information that you can utilize, but AI is still a relatively young technology and practice. And one of the big issues for a lot of our clients is how do you get time to value? So take us through, if you would, a little bit about some of the challenges that Mastercard, and anybody, would face to try to get to that time to value. >> Well, what you're really seeing is looking for a good partner to be with when you're doing artificial intelligence, because again, at that particular point, you try to get to scale. For us, it's always about scale. How can we roll this across 220 countries? We're 165 million transactions per hour, right? So what we're looking for is a partner who also has that ability to scale. A partner who has the global presence, who's learning. So that's the first step. That's gonna help you with your time to value. The other part is actually sitting back and using those particular partners to bring the expertise that they're learning to combine with yours. It's no longer just silos. So when we talk about artificial intelligence, how can we be learning from each other? Those open source systems that are out there, how do we learn from that community? It's that community that allows you to get there. Again, those that are trying to do it on their own, trying to do it by themselves, they're not gonna get to the point where they need to be. In other words, instead of a six-month time to value, it's gonna take them years. We're trying to accelerate that. You say, "How can we get those algorithms operating for us the way we need them to, to provide the experiences that people want, quickly?" And that's with good partners. >> 165 million transactions per hour is only likely to go up over the course of the next few years. That creates an operational challenge. AI is associated with a probabilistic set of behaviors as opposed to categorical. A little bit more difficult to test, a little bit more difficult to verify. How is the introduction of some of these AI technologies impacting the way you think about operations at Mastercard? >> Well, for the operations, when you take a look, there's actually three components, right? There's right there on the edge, when someone's interacting and actually doing the transaction. And then we'll look at it as we have a core. So that core sits there, right? Basically, that's where you're learning, right? And then there's, what we call, the deep learning component of it. So for us, it's how can we move what we need to have in the core and what we need to have on the edge? So the question for us always is, we want that algorithm to be smart. So what three to four things do we need that algorithm to be looking for, so that the artificial intelligence knows when it then goes back into the core and retrieves something, whether that's your fingerprint, your biometrics, how you're interacting with that machine, to say, "Yes, that's you. Yes, we want that transaction to go through." Or, "No, stop it before it even begins." It's that interaction and operational basis that we always have a dynamic tension with, but it's how we get from the edge to the core. And it's understanding what we need it to do. So we're breaking apart where we have to have that intelligence to be able to create a decision for us.
So that's how we're trying to manage it, as well as, of course, the hardware that goes with it and the tools that we need in order to make that happen. >> Let's get onto the hardware just a little bit. Historically, different applications put pressure on different components within a stack. One of the observations that we've made is that the transition from spinning disk to flash allows companies like Mastercard to think about moving from just persisting data to actually delivering data. >> Yeah. >> Much more rapidly. What kinda new pressures do these AI technologies put on storage? >> Well, they put tremendous pressure, because that's actually, again, the next tension or dynamic that you have to play with. So what do you wanna have on disk? What do you need flash to do? Again, if you look at some people, everyone's like, "Oh, flash will take over everything." It's like no, there's a reason for flash to exist, and understanding what that reason is and understanding, "Hey, I need that to be able to do this in sub-seconds, nanoseconds," I've heard the term before. That's what you're asking flash to do. When you want deep learning, that, I want it on disk. I want to be taking all those millions and billions of transactions that we're gonna see and learn from them. All the ways that people will be trying to attack me, right? The bad guys, how am I learning from everything that I have sitting there on disk, letting it continue to run? That's the deep learning. The flash is when I wanna create a seamless transaction with a customer, or a consumer, or from a business to business. I need to have that decision now. I need to know it is you who is trying to swipe or purchase something with my mobile device, or basically through the Internet. Or how am I actually even swiping or inserting, dipping my card in that particular machine at a merchant. That's where we're looking at how we use flash. >> So you're looking at perhaps using older technologies or different classes of technologies for some of the training elements, but really moving to flash for the interfacing piece where you gotta deliver the real-time effort right now. >> And that's the experience. And that's what you're looking for. And you wanna be able to make sure you're making those distinctions. 'Cause again, there's no longer one or the other. It's how they interact. And again, when you look at your partners, the question now is how are they interacting? Has this been done at scale somewhere else? Can you help me understand how I need to deploy this so that I can reduce my time to value, which is very, very important to create that seamless, frictionless transaction we want our consumers to have. >> So Nick, you talked about how you wanna work with companies that demonstrate that they have expertise, because you can't do it on your own. Companies that are capable of providing the scale that you need to provide. So just as we talk about how AI is placing pressure on different parts of the technology stack, it's also got to be putting pressure on the traditional relationships you have with technology suppliers. What are you looking for in suppliers as you think about these new classes of applications? >> Well, the part is, for us it's do you have that scale that we're looking at? Have you done this before, at that global scale?
Again, in many cases you can have five guys in a garage that can do great things, but where has it been tested? When we say tested, it's not just, "Hey, we did this in a pilot." We're talking it's gotta be robust. So that's one thing that you're looking for. You're also looking for a partner who can bring, for us, additional information that we don't have ourselves, right? In many cases, when you look at that partner, they're gonna bring something; they're almost like an adjunct part of your team. They are your bench strength. That's what we're looking for when we look at it. What expertise do you have that we may not? What are you seeing, especially on the technology front, that we're not privy to? What are those different chips that are coming out, the new ways we should be handling the storage, the new ways the applications are interacting with that? We want to know that from you, because again, there's a competition for talent, and we're looking for a partner who has that talent and will bring it to us so that we don't have to search for it. >> At scale. >> Yeah, especially at scale. >> Nick Curcuru, Mastercard. Thanks for being on theCUBE. >> Thank you for having me. >> So there you have a great example of what a leading company is doing to try to take full advantage of the possibilities of AI by utilizing infrastructure that gets the job done simpler, faster, and better. So let's imagine for a second how it might affect your life. Well, here's your opportunity. We're now gonna move into the CrowdChat part of the event, and this is your chance to ask peers questions, provide your insights, tell your war stories. Ultimately, to interact with thought leaders about what it means to get ready for AI. Once again, I'm Peter Burris, thank you for watching. Now let's jump into the CrowdChat.
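Before leaving this segment, it helps to make Nick's edge-versus-core split concrete. The toy sketch below scores a transaction with a cheap model at the edge and only pays the round trip to the core when the edge is unsure. The features, weights, and thresholds are invented for illustration and are in no way Mastercard's actual system.

```python
# Illustrative edge/core inference split; all numbers are invented.
from dataclasses import dataclass

@dataclass
class Transaction:
    swipe_speed: float       # behavioral biometrics visible at the edge
    device_tilt: float
    typing_cadence: float

def edge_score(tx: Transaction) -> float:
    """Cheap linear score over the handful of features the edge watches."""
    return 0.4 * tx.swipe_speed + 0.3 * tx.device_tilt + 0.3 * tx.typing_cadence

def core_score(tx: Transaction) -> float:
    """Stand-in for the core: profile lookup plus a deep model."""
    return 0.9  # placeholder; a real core would consult the stored profile

def decide(tx: Transaction, approve_at: float = 0.8, deny_at: float = 0.3) -> str:
    s = edge_score(tx)
    if s >= approve_at:
        return "approve"     # confident match: no round trip to the core
    if s <= deny_at:
        return "deny"        # confident mismatch: stop it before it begins
    # Ambiguous: pay the latency of a core lookup only when it is needed.
    return "approve" if core_score(tx) >= approve_at else "deny"

print(decide(Transaction(0.9, 0.8, 0.95)))  # -> approve, edge only
```

The design point matches what Nick describes: keep three or four cheap signals at the edge for the common case, and reserve the core, and its deep models on bulk storage, for the ambiguous minority of transactions.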

Published Date : Aug 14 2018

BMC Digital Launch


 

(dynamic music) >> Hi, I'm Peter Burris, and welcome to another CUBEConversation. This is another very special CUBEConversation in that it's part of a product launch. Today, BMC has come on to theCUBE to launch Helix, a new approach to thinking about cognitive services management. And we're, over the course of the next 20 minutes or so, gonna present some of the salient features of Helix and how it solves critical business problems. And at the end of this video segment, we're gonna then go into a CrowdChat and give you, the community, an opportunity to express your thoughts, ask your questions, and get the information that you need from us analysts, from BMC, and also from your peers about what you need to do to exploit cognitive systems management in your business. Now this is a very real problem, this is not something that's being made up. The reality is we're looking at a lot of data-first technologies that are transforming the way business works. Technologies like AI, and machine learning, and deep learning, technologies like big data, having an enormous impact on how businesses behave. These technologies invoke much greater complexity at the application and the systems level, and Wikibon strongly believes that we do not understand how businesses can pursue these technologies and these richer applications without finding ways to apply elements of them directly into the IT service management stack. And the reason why is if you don't have high-quality, lower-cost, speedy automation inside how you run your overall service management platform, then it's going to create uncertainty up higher in the stack, and that's awful for digital business. So to better understand and take us through this launch today, we've got some great guests. And it starts, obviously, with the esteemed Nayaki Nayyar, who is the President of the Digital Services Management business unit at BMC, CUBE alum. Nayaki, thanks very much for being here. >> Thank you, Peter, really excited to be here and look forward to our conversation. We are very excited about the launch of BMC Helix and happy to share the details with you. >> So let's start with the why. Obviously, there's a... You know, I've articulated kind of a generalization of some of the challenges that businesses face, but it goes deeper than that. Take us through some of the key issues that your customers are facing as they think about this transition to a new way of running their business. >> So, let's put ourselves in the customers' shoes and look at what their journey looks like. Customers are evolving from the online world into the digital world and into what we call the cognitive world. And the way their journey looks, especially as customers are entering into the digital world, there's a proliferation of clouds. They don't have just one cloud, they have private clouds, hybrid clouds, managed clouds; we call it multi-cloud. So they're entering into a multi-cloud world. In addition, there's also a proliferation of devices. It's not just phones that we have to worry about now. As IoT's getting more and more relevant and prevalent, how do you help customers manage all the devices, and how do you provide the service through not just one channel but the channel of our customers' or consumers' preference? It could be Slack as a channel, SMS as a channel, Skype as a channel.
So across this multi-cloud, multi-device, and multi-channel, this explosion of technology that is happening in every customer's landscape, and to address this explosion, is where AIML, chatbots, and virtual agents really play a role for them to handle the complexities. So the automation that AIML, chatbots, and virtual agents bring to help customers address this multi-cloud, multi-channel, multi-device world is how we have them evolve from ITSM to cognitive services management. >> Let's talk about that a little bit. We'll get into exactly what you're announcing in a second, but historically, when we thought about service management, we thought about devices. What you're really describing, this transition, is again that notion of how all of these different elements come together in, sometimes, very unique ways, and that's what's driving the need for the cognitive. It's not just that you can do multiple clouds, multi-devices, multiple channels, it's that your business can put them together in ways that serve your business's needs the best. And now we need a service management capability that can attend to those resources. >> Absolutely. So if you go 10, 15 years back, BMC had a great portfolio. We had the Remedy Service Management Suite. We also had Discovery to help customers discover the on-prem assets and provide its service to Remedy Service Management. That's what we had, and we were very successful. ITSM, as a category, was created for that whole space. But in this new world of multi-cloud, right, where customers have private clouds, managed clouds, hybrid clouds, multi-devices where IoT is becoming more and more relevant, and multi-channel, customers now have to discover these assets. We call it Discovery as-a-Service, but now they can discover the assets across AWS, Azure, OpenStack, and Cloud Foundry and evolve into providing service from reactive to proactive service, and that's what we call Remedy as-a-Service, and then extend that service beyond IT to also lines of business. Now you wanna also provide that service to HR, and procurement, and also various lines of business. And the most important thing is how you provide that experience to your end-users and your end-customers, which is what we call Digital Workplace-as-a-Service, where now customers can consume that service in the channel of their preference. They can consume that service through a mobile device, of course through web, but also Slack, SMS, chatbots, and virtual agents. So we are combining all of that, that entire suite, and we are containerizing that suite using Docker and Kubernetes so that now customers can run it in their choice of cloud. They can run it in AWS cloud, Azure cloud, or in BMC cloud. This whole suite is what we call BMC Helix, and it helps our customers evolve from ITSM to what we call cognitive services management. >> So that's what BMC's announcing today. >> Yes. >> It's this notion of BMC Helix. >> Yes. >> And it's predicated on the idea, if I can, also of, not only are you going to use these technologies to manage new stuff, we have to bring the old stuff forward. Additionally, we're gonna see a mix of labor, or people, and automation as companies find the right mix for them. >> Right. >> And so we wanna bring and sustain these practices and these approaches forward. Nobody likes a forced migration, especially not in an IT organization. >> Right. >> So that's how we see Helix, if I got this right. >> Yes.
>> Helix is gonna help customers bring their existing assets, existing practices, modernize them using some of the new technologies, and that's how we get to this new cognitive vision. >> Absolutely. The investments customers have already made in their on-prem assets, in managing their IT assets, those same concepts come into this new multi-cloud, multi-device, and multi-channel world, but now it extends beyond that. It extends beyond just IT to also lines of business, and also all these, what we call, omni-channel experiences that you can provide. And this whole suite is what we call the 3 C's; Helix stands for 3 C's. Everything as a service in the cloud, Remedy as-a-Service, Discovery as-a-Service, Digital Workplace as-a-Service; containerized, so that customers can run it in the cloud of their choice, whether AWS cloud, Azure cloud, or our cloud; and cognitive, with AIML and chatbots. And that's how we help them evolve from their existing implementations to this whole new world as they enter into the cognitive world. >> Exciting stuff. >> Absolutely. We are very excited about it. We've been working with a lot of customers already, and we have gotten really, really good traction. >> So let's do this, Nayaki, let's take a look at a product video that kinda describes how this all comes together in a relatively simple, straightforward way. >> Absolutely. (upbeat music) >> Hi, Peter Burris again, welcome back. We're talking more about BMC's Helix announcement. Great product video. Once again, we're here with Nayaki Nayyar, but we're also being joined by Vidhya Srinivasan, who's in Marketing within the Digital Services Management unit at BMC. Thank you very much for joining us on theCUBE. >> Great to be here, thank you. >> So we've heard a lot about the problems, we've heard a lot about BMC Helix as a solution, but obviously it's more than just the technology. There are things that customers have to think about, about how these technologies, how cognitive service management, is going to be impacting the business. As businesses become more digital, technology and related services get dragged more deeply into functions. So, Nayaki, tell us a little bit more about how the outcomes within business, the capabilities of businesses, are gonna change as a consequence of applying these technologies. >> Absolutely, Peter. So if you look at it, traditionally, IT service management was a very reactive process. Every ticket that came in was manually created, assigned, and routed. That was a very reactive process. But as we enter into this cognitive world and you apply intelligence, AIML, you evolve into what we call proactive and predictive. Before an issue actually happens, you want to resolve that issue. And that's what we call cognitive services management. And for the real business outcomes, you put yourself in the shoes of a customer who's providing this service and evolving into this proactive, predictive, and cognitive world: they wanna provide that service at the highest accuracy, at the highest speed, and the lowest cost. That's what is gonna become competitive advantage for every company, independent of the industry. They could be in telco, in high-tech, or in pharmaceuticals. It doesn't matter which industry they are in; how they provide this service at the highest accuracy, highest speed, and lowest cost is gonna be fundamentally a competitive advantage for these customers.
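As a thought experiment on what "every ticket manually created, assigned, and routed" looks like once a little intelligence is applied, here is a toy ticket router: a tiny text classifier that assigns a queue automatically. The training examples and queue names are invented, and this is not BMC Helix code, just a sketch of the general pattern.

```python
# Toy "reactive -> cognitive" ticket routing with scikit-learn;
# the tickets and queues are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    ("VPN drops every hour", "network"),
    ("Cannot reset my payroll password", "hr"),
    ("Laptop fan making a loud noise", "hardware"),
    ("Invoice approval stuck in workflow", "procurement"),
    ("Wifi unreachable on floor 3", "network"),
    ("New hire needs building access", "hr"),
]
texts, queues = zip(*tickets)

# TF-IDF features plus a linear classifier: enough to route common requests.
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(texts, queues)

print(router.predict(["password reset for the benefits portal"]))  # likely 'hr'
```

A production system would add confidence thresholds so that low-confidence tickets still fall back to a human agent, which is consistent with the 50-50 or 70-30 human-to-bot split Nayaki describes later in the conversation.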
>> And when we talk about accuracy, again, we're not just talking about accuracy in a technology context. We're talking about accuracy in terms of a brand promise, perhaps. >> Absolutely. >> Or a service promise, or a product promise. >> Yes. >> That's the context. We wanna make sure that the customer is getting what they expect fast, with accuracy, and at low cost. >> Right, every time you tweet or you're SMS-ing your service provider, you expect that response to be at the highest accuracy, at the highest speed, and at the lowest cost. >> So when we start talking about multi-channel, Vidhya, what we're really saying is that this is not just service management for the traditional technology service desk. We're talking about service management for other personas, other individuals, other consumers as well. Take us through that a little bit. >> Yeah, that's right. So we actually take a very holistic approach, right, across the enterprise. So we have end-users who are, at the end of the day, the key subscribers or consumers of our service, and we wanna make sure they're very happy with what we provide. We have the agents, which kinda maps to the IT persona that people know about in the service desk. But then, as Nayaki said earlier, it's also about extending to lines of business, so you have HR agents, right, people who support HR requests, people who support facilities or procurement requests. So making sure that the agent persona is able to do everything that they need to do at the most efficient level that they can, so that they can meet their SLAs to their end consumers, is a big part of what BMC Helix and cognitive service management can provide. And ultimately, when you think about this transformation and where they wanna go, there's a lot of custom applications and custom needs that businesses have. So really thinking about the developer persona and how you actually embed and build intelligent applications through the cognitive microservices that BMC Helix provides is a big part of that value proposition we provide. So as you navigate through this journey and become a cognitive enterprise, how do you make sure that all of these personas throughout your enterprise are able to deliver and get value out of this is what BMC Helix provides for the whole enterprise. >> So the whole concept of incorporating these cognitive capabilities into a service management stack allows us to not only envision, in a traditional way, more complex applications, but actually extend this out to new classes of users, because we are masking a lot of the complexity and a lot of the uncertainty associated with how this stuff works from that customer. >> That's correct. >> For end-users, for agents, and for developers, and consumers, and customers too. >> Great. >> That's good. >> So you know what... Great conversation. But let's hear what a customer has to say about it, shall we? >> Absolutely, okay. >> My name is Marco Jongen. I work for a company called DSM. And I'm the Director for Service Management within the Global Business Services department. Royal DSM is a global science-based company active in health, nutrition, and materials. And by connecting our unique competencies in life sciences and in material sciences, DSM is driving economic prosperity, environmental progress, and societal advances to create sustainable value for all stakeholders simultaneously. The Global Business Services department is serving the 20,000 employees of DSM spread over 200 locations globally.
We are handling, annually, about 600,000 tickets, and we are supporting four business functions: finance, HR, procurement, and IT. We started together with BMC on a shared services transformation across IT, HR, finance, and procurement. And we created a unified ticketing system and a self-service portal using the Remedy system and the Digital Workplace environment. And with this, we are now able to handle all functions in one unified ticketing tool, giving visibility to all our employees with questions related to finance, HR, purchasing, and IT. We are still involved with BMC in bringing this product to the next level, and we are very excited about the work we have done with BMC so far. >> That was great to hear Royal DSM is transforming its shared services organization with cognitive services management. But, Nayaki, there's no such thing as an easy transformation, especially one of this magnitude. We're talking about digital business, which is, we're using data assets differently, and it's affecting virtually every feature of business today. And now we've got a technology set that's gonna have potentially an enormous impact on IT, or everywhere that IT is being employed. That kind of a transformation is not something that people do lightly. They expect their suppliers to help them out. So what is BMC gonna do to ensure that customers are successful as they go through this transformation to cognitive services management? >> Absolutely, Peter. I always say these transformations are not one-month, two-month transformations. These are multi-year transformations, and it's a journey that customers go through. We partner very closely with customers in this journey, assessing their requirements, understanding what their future looks like, and helping them every step of the way. Especially in service management, this change, this transformation that is happening, is gonna be very disruptive to their end-to-end processes. Today, all service desks are manned by individuals. Every ticket that comes in gets manually created, assigned, and routed. But if you fast forward into the future world, in the next two to three years, that service desk function, especially level zero, level one, and level two, will completely get replaced by bots or virtual agents. It could be 50-50, 70-30, you can pick the percentage-- >> Whatever the business needs. >> Right? But it is coming. And it is very important for customers to see that change and that transformation that is happening and to be ready for it. And that's where we are working very closely with them in making sure it's not just a system transformation. It's also the people side and the processes that have to change. And companies who can do that, what we call cognitive service management using bots and virtual agents at the highest accuracy, highest speed, and the lowest cost, I keep coming back to that because that is what is gonna give them the highest competitive advantage. >> Lot to think about. >> Absolutely. >> Exciting future, crucial for IT if it's gonna succeed moving forward, but even if the business chooses to use cloud, you're going to need to be able to discover and sustain service management at a very, very high level. >> Absolutely. How we help them discover, how we help them provide that service proactively, predictively, and provide that experience through omni-channel experiences, that's what this whole thing brings together for our customers.
>> Excellent, this has been a great conversation. Nayaki Nayyar, President of BMC's Digital Services Management business unit. Thank you very much for being here on theCUBE and working with us to help announce Helix. Now don't forget, folks, that immediately after this, we'll be running the CrowdChat. And in that CrowdChat, your peers, BMC experts, and us analysts will be participating to help answer your questions, share experiences, and identify simpler ways of doing more complex things. So join us in the CrowdChat. Once again, Nayaki, thank you very much. >> Thank you, Peter, and thank you everyone. Thank you all.

Published Date : Jun 4 2018


INFINIDAT Portfolio Launch 2018


 

>> Announcer: From the SiliconANGLE Media office, in Boston, Massachusetts, it's The Cube! Now, here's your host, Dave Vellante. >> Hi everybody! My name is Dave Vellante. Welcome to this special presentation on The Cube. Infinidat is a company that we've been following since its early days. A hot storage company, growing like crazy, doing things differently than most storage companies. They've basically been doubling revenues every year for quite some time now. And Brian Carmody is here to help me kick off this announcement and the presentation today. Brian, thanks for coming back on. >> Hey Dave, thanks for having me. >> So, you may have noticed we have a crowd chat going on live. It's crowdchat.net/Infinichat. You can ask any question you want, it's an ask me anything chat about this announcement. This is a bi-coastal program that we're running today between here and our offices in Palo Alto. So, Brian, let's get into it. Give us the update on Infinidat. >> Things are going very well at Infinidat. We're just coming out of our 17th consecutive quarter of revenue growth, so we have a healthy, sustainable, profitable business. We have happy, loyal customers. 71% of our revenue in 2017 came from existing customers that were increasing their investment in our technologies. We're delighted by that. And we have surpassed three exabytes of customer deployments. So, things are wonderful. >> And you've done this essentially as a one product company. Is that correct? >> Yes, so going back to our first sale in the summer of 2013, that growth has been on the back of a single product, InfiniBox, targeted at primary storage. >> Okay, so what's inside of InfiniBox? Tell me about some of the innovations. In speaking to some of your customers, and I've spoken to a number of them, they tell me that one of the things they like is that from early on, I think serial number 0001, they can take advantage of any innovations that you've produced within that product, is that right? >> Yeah, exactly, so InfiniBox is a software product. It has dumb hardware, dumb commodity hardware, and it has very smart, intelligent software. This allows us to kind of break from this forklift upgrade model, and move to a model where the product gets better over time. So if you look at the history of InfiniBox going back to the beginning, with each successive release of our software, latency goes down, new features are added, and capacity increases become available. And this is the difference between a software versus a hardware based innovation model. >> One of the interesting things I'll note about Infinidat is you're doing software defined; you don't really use that terminology, it's the buzzword in the industry. The other buzzword is artificial intelligence, machine learning. You're actually using machine intelligence, you and I have talked about this before, to optimize the placement of data, which allows you to use much less expensive media than some of the other guys, and deliver more value to customers. Can you talk about that a little bit? >> Yeah, absolutely, and by the way, the reason why that is is because we're an engineering company, not a marketing company, so we prefer just doing things rather than talking about them. So InfiniBox is the first expression of a set of fundamental technologies of our technology platform, and the first piece of that is what you're talking about. It's called NeuroCache.
And it's our ML and AI infrastructure for learning customer workloads and using that insight in real time to optimize data placement. And the end result of this is driving cost out of storage infrastructure and driving up performance. That's the first piece. That's NeuroCache. The second piece of our technology foundations is INFINISNAP. So this is our snapshot mechanism that allows infinite, lock-free, copy data management with absolutely no performance impact. So that's the second. And then the third is INFINIRAID and our RAS platform. So this is our distributed RAID architecture that allows us to have multi-petabyte scale, extremely high durability, but also extremely high availability of the services, and that's what enables our seven nines reliability guarantee. Those things together are the basis of our products. >> Okay, so we're here today, and now what's exciting is that you're expanding beyond just the one product company into a portfolio of products, so sort of take us through what you're announcing today. >> Yeah, so this is a really exciting day, and it's a milestone for Infinidat, because InfiniBox now has some brothers and sisters in the family. The first thing that we are announcing is a new F Series InfiniBox model, which we call F6212. So this is the same feature set, it's the same software, it's the same everything as its smaller InfiniBox models, but it is extremely high capacity. It's our largest InfiniBox. It's 8.3 petabytes of capacity in that same F6000 form factor. So that's number one. Number two, we're announcing a product called InfiniGuard. InfiniGuard is petabyte-scale data protection with lightning-fast restores. The third thing that we're announcing is a new product called InfiniSync. InfiniSync is a revolutionary business continuity appliance that allows synchronous RPO zero replication over infinite distances. It's the first ever in this category. And then the fourth and final thing that we're announcing is a product called Neutrix Cloud. Neutrix Cloud is sovereign storage that enables real-time competition between public cloud providers. The ultimate in agility, which is the ability to go polycloud. And that's the content of the portfolio announcement. >> Excellent, okay, great! Thanks, Brian, for helping us set that up. The program today, as you say, there's a cloud chat going on. Crowdchat.net/infinichat. Ask any question that you want. We're going to cover all these announcements today. InfiniSync is the next segment that's up. Dr. Ricco is here. We're going to do a quick switch and I'll be interviewing Doc, and then we're going to kick it over to our studio in Palo Alto to talk about InfiniGuard, which is essentially, what was happening, Infinidat customers were using InfiniBox as a back-up target, and then asked Infinidat, "Hey, can you actually make this a product and start partnering with software companies, back-up software companies, and making it a robust back-up and recovery solution?" And then multi-cloud is one of the hottest topics going, really interested to hear more about that. And then we're going to bring on Eric Burgener from IDC to get the analyst perspective; that's also going to be on the West coast. And then Brian and I will come back and wrap up, and then we're going to dive into the crowd chat. So, keep it right there everybody, we'll be back with Dr. Ricco, right after this short break.
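Infinidat has not published the internals of NeuroCache, so the sketch below is only a toy of the general idea Brian describes: score data by recency and frequency of access, and let the score choose the media tier. The half-life and thresholds are invented for illustration; this is not Infinidat's actual algorithm.

```python
# Toy workload-aware tiering in the spirit of "learn the workload,
# place the data"; not Infinidat's actual NeuroCache algorithm.
import time
from collections import defaultdict

class TieringSketch:
    def __init__(self, half_life_s: float = 300.0):
        self.half_life = half_life_s
        self.score = defaultdict(float)  # block_id -> decayed hit count
        self.last = {}                   # block_id -> last access time

    def record_access(self, block_id: str) -> None:
        now = time.monotonic()
        dt = now - self.last.get(block_id, now)
        decay = 0.5 ** (dt / self.half_life)  # old hits fade exponentially
        self.score[block_id] = self.score[block_id] * decay + 1.0
        self.last[block_id] = now

    def tier(self, block_id: str) -> str:
        s = self.score[block_id]
        if s > 8.0:
            return "dram"    # hottest working set in the fastest tier
        if s > 2.0:
            return "flash"
        return "disk"        # cold bulk data stays on the cheapest media
```

The economics follow directly: if the hot set is predicted well, most reads are served from DRAM or flash while the bulk of capacity sits on inexpensive disk, which is the cost advantage Dave alludes to.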
>> Narrator: InfiniBox was created to help solve one of the biggest data challenges in existence, the mapping of the human genome. Today InfiniBox is enabling the competitive business processes of some of the most dynamic companies in the world. It is the apex product of generations of technology and lifetimes of engineering innovation. It's a system with seven nines of reliability, making it the most available storage solution in the market. InfiniBox is both powerful and simple to use. InfiniBox will transform how you experience your data. It is so intuitive, it will inform you about potential problems and take corrective action before they happen. This is InfiniBox. This is confidence. >> We're back with Dr. Ricco, who's the CMO of Infinidat. Doc, welcome! >> Thank you, Dave. >> I've got to ask you, we've known each other for a long time. >> We have. >> Chief Marketing Officer, you're an engineer. >> I am. >> Explain that please. >> Yeah, I have a PhD in engineering and I have 14 patents in the storage industry from my prior job. Infinidat is an unconventional company, and we're using technology to solve problems in an unconventional way. >> Well, congratulations. >> Dr. Ricco: Thank you. >> It's great to have you back on The Cube. Okay, InfiniSync. I'm very excited about this solution, want to understand it better. What is InfiniSync? >> Well, Dave, before we talk about InfiniSync directly, let's expand on what Brian talked about as the foundation technologies of Infinidat and the InfiniBox. In the InfiniBox we provide InfiniSnap, which has near zero performance impact to the application with near zero overhead, just, of course, the incremental data that you write to it. We also provide asynchronous and synchronous replication. Our async replication provides all of that zero overhead that we talked about with InfiniSnap, with a four-second interval. We can replicate data four seconds apart, nearly a four-second RPO, recovery point objective. And our sync technology is built on all of that as well. We provide the lowest overhead, the lowest latency in the industry, at only 400 microseconds, which provides an RPO of zero with near zero performance impact to the application as well, which is exciting. And by the way, all of the technology I just talked about is, just as Brian said, zero additional cost to the customer with Infinidat. There are some exciting business cases for why you'd use any of those technologies, but if you're in a disaster-recovery mode and you do need an RPO of zero, you need to recognize that disasters happen not just locally, not just within your facility; they happen at a larger scale, regionally. So you need to locate your disaster recovery centers somewhere else, and when you do that, you're adding more and more performance overhead just replicating the data over distance. You're adding cost and you're adding complexity. So what we're providing is InfiniSync, and InfiniSync extends the customer's ability to provide business continuity over long distances at an RPO of zero. >> Okay, so talk more about this. So, you're essentially putting in a hardened box on site and you're copying data synchronously to that, and then you're asynchronously going to distance. Is that correct? >> Yes, and in a traditional sense, what a normal solution would do is implement a multi-site or multi-hop type of topology.
You'd build out a bunker site, you'd put another box there, another storage unit there, you'd replicate synchronously to that, and you would either replicate asynchronously from there to a disaster recovery site, or you'd replicate from your initial primary source storage device to your disaster recovery site, which would be a long distance away. The problem with that, of course, is complexity and management, the additional cost and overhead, the additional communications requirements. And you're not necessarily guaranteeing an RPO of zero, depending upon the type of outage. So what we're doing is providing, in essence, that bunker, with the InfiniSync black box, which you can put right next to your InfiniBox. The synchronous replication happens behind the scenes, right there, and the asynchronous replication will happen automatically to your remote disaster recovery site. The performance that we provide is exceptional. In fact, the performance overhead of a write to an InfiniSync black box is less than the write latency of your average all-flash array. And then, we have that protected from any man-made or natural disaster: fire, explosion, earthquake, power outages, which of course you can protect against with generators, but you can't protect from a communications outage, and we'll protect from a communications outage as well. So the asynchronous communication would use your wide area communications, it can use any other type of wifi communications, or if you lose all of that, it will communicate cellularly.
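To see why that 400-microsecond figure matters, here is a back-of-the-envelope sketch of the propagation delay that synchronous replication adds as distance grows. It assumes light in fiber covers roughly 200,000 km per second and ignores switching, serialization, and protocol overhead, so real round trips are strictly worse.

```python
# Why synchronous replication over distance hurts: propagation delay alone.
# Assumes ~200,000 km/s for light in fiber; ignores switching, serialization
# and protocol overhead, so real round trips are strictly worse than this.
FIBER_KM_PER_SEC = 200_000

def added_write_latency_ms(distance_km):
    return 2 * distance_km / FIBER_KM_PER_SEC * 1000  # round trip, in ms

for km in (1, 50, 400):  # adjacent appliance, metro bunker, out-of-region DR
    print(f"{km:>4} km: >= {added_write_latency_ms(km):.2f} ms per write")
# An adjacent appliance adds ~0.01 ms, comfortably under the quoted
# 400-microsecond overhead, while a 400 km out-of-region site adds at
# least 4 ms to every acknowledged write.
```

That gap is the trade-off the local bunker appliance is meant to eliminate: synchronous protection without paying long-haul round-trip latency on every write.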
Infinidat customers were actually using InfiniBox as a back-up target, so they went to Infinidat and said, "Hey, can you make this a back-up and recovery solution and partner with back-up software companies?" We're going to talk about MultiCloud, it's one of the hottest topics in the business, want to learn more about that, and then Eric Burgener from IDC is coming in to give us the analyst perspective, and then back here to wrap up with Brian Carmody. Over to you, Peter. >> Thanks, Dave. I'm Peter Burris, and I'm here in our Palo Alto Cube studios, being joined by Bob Cancilla, who's the Executive Vice President of Business Development and Relationships, and Neville Yates, who's a Business Continuity Consultant. Gentlemen, thank you very much for being here on The Cube with us. >> Thanks, Peter, thanks for having us here. >> So, there is a lot of conversation about digital business and the role that data plays in it. From our perspective, we have a relatively simple way of thinking about these things, and we think that the difference between a business and a digital business is the role that data plays in the digital business. A business gets more digital as it uses its data differently, specifically its data assets, which means that the thinking inside business has to change from data protection, or asset or server protection, or network protection, to truly digital business protection. What do you guys say? >> Sure, we're seeing the same thing as you're saying there, Peter. In fact, our customers have asked us to extend our influence into their data protection. We have been evaluating ways to expand our business, to expand our influence in the industry, and they came back and told us that if we wanted to help them, the best way that we could help them is to go on and take on the high-end back-up and recovery solutions, where there really is one major player in the market today. Effectively, a monopoly. Our customers' words, not our own. At the same time, our product management team was looking into ways of expanding our influence as well, and they strongly believed and convinced me, convinced us, our leadership team inside of Infinidat, to enter into the secondary storage market. And it was very clear that we could build upon the foundation, the pillars, of what we've done on the primary storage side and the innovations that we brought to the market there: things like our multi-petabyte scale with incredible density, faster-than-flash performance, extreme ease of use, and lowering the total cost of operation for the enterprise client.
What we do with InfiniGuard is ensure that those recovery time objectives are met in support of that business application, and it is the leveraging of the pillars that Bob talked about in terms of performance, the way we are unbelievable custodians of data, and then we're able to deliver that data back faster than what people expect. They're used today to mediocrity. It takes too long. I was with a customer two weeks ago. We were backing up a three-terabyte database. This is not a big amount of data. It takes about half an hour. We said, "Let's do a restore," and the gentleman looked at me and said, "We don't have time." I said, "No, it's a 30-minute process." This person expected it to take five or six hours. Add that up in terms of dollars per hour, what it means to that revenue-generating application, and that's where those numbers come from. >> Yeah, and especially where it fails because of, as you said, Bob, the lack of ease of use and the lack of simplicity. So, we're here to talk about something. What is it that we're talking about and how does it work? >> Let me tell ya, I'll cover the what it is. I'll let Neville get into a little bit of how it works. So, the what it is: we built it off the building block of our InfiniBox technology. We started with our model F4260, a one-petabyte usable configuration; we integrated in stateless deduplication engines, what we call DDEs, and a high-availability topology that effectively protects up to 20 petabytes of data. We combined that with a vast certification ecosystem and openness toward independent software vendors in the data protection space. We want to encourage openness and an open ecosystem. We don't want to lock any customer out of their preferred software solution in that space. And you can see that with the recent announcements that we've made about expanding our partnerships in this space, specifically Commvault and Veeam. >> Well, very importantly, with the idea of partnership and simplicity in these kinds of use cases, you want your box, the InfiniGuard, to be as high quality and productive as possible, but you don't want to force a dramatic change on how an organization works. So let's dig into some of that, Neville. How does this work in practice? >> It's very simple. We have these deduplication engines that front-end the InfiniBox storage. But what is unique, because there are other ways of packaging this sort of thing, what is unique is that when the InfiniGuard gets the data, it builds knowledge of the relationships of that data. Deduplication is a challenge for second-tier storage systems because it creates a random IO profile that has to be gathered in a fashion that sequentially feeds the data back. Our knowledge-building engine, which we call NeuroCache, in the InfiniBox is the means by which we understand how to gather this data in a timely fashion. >> So, NeuroCache helps essentially sustain some degree of organization of the data within the box. >> Absolutely. And there's a by-product of that organization: the ability to go and get it ahead of the ask allows us to respond to meet recovery time objectives. >> And that's where you go from five or six hours for a relatively small restore to >> To 30 minutes. >> Exactly. >> Yeah, exactly. >> By feeding the data back out to the system in a pre-organized way, the system's taking care of a lot of the randomness, and therefore the time necessary to perform a restore.
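For a rough sense of what those restore windows imply, here is the sustained-throughput arithmetic behind the anecdote. It assumes a decimal 3 TB database and ignores metadata and rehydration overhead; the figures come from the story above, not from a benchmark.

```python
# Sustained throughput implied by the restore anecdote above. Uses decimal
# units (3 TB = 3,000 GB) and ignores metadata and rehydration overhead.
size_gb = 3_000

for label, minutes in (("30-minute restore", 30), ("6-hour restore", 360)):
    mb_per_sec = size_gb * 1000 / (minutes * 60)
    print(f"{label}: ~{mb_per_sec:,.0f} MB/s sustained")
# A 30-minute restore implies roughly 1,700 MB/s sustained delivery;
# a six-hour restore implies only about 140 MB/s.
```

In other words, the anecdote is a claim of roughly an order-of-magnitude difference in sustained restore throughput.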
>> So we're talking about a difference between 30 minutes and six hours. And very quickly, Bob, I also want to ask you a question in the last couple of minutes here. You mentioned partnerships. We also want to make sure that we have a time-to-value equation that works for your average business. Because the box can work with a lot of different software, which really is where the operational activities are defined, presumably it comes in pretty quickly and it delivers value pretty quickly. Have I got that right? >> Absolutely. So we have done a vast amount of testing, certification, demos, POCs, you name it, with all the major players out there that are in this market on the back-up software side, the data protection side of the business. All of them have commented on the better business continuity solution that we put together in conjunction with their product as well. And the number one feedback that comes back is, "Wow, the restore times that you guys deliver to the market are unlike anything we've seen before." >> So, to summarize, it goes in faster, it works faster, and it scales better, so the business truly can think of itself as being protected, not just sets of data. >> Absolutely. >> Agreed. >> Alright, hey, Bob Cancilla, EVP of Business Development and Relationships, Neville Yates, Business Continuity Consultant, thanks very much for being on The Cube, and we'll be right back to talk Multicloud after this short break. >> With our previous storage provider, we faced many challenges. We were growing so fast that our storage solution wasn't able to keep up. We were having large amounts of downtime, problems with the infrastructure, problems with getting support. We needed a system that was scalable, that was cost effective, and that allowed our business to grow as our customers' demands were growing. We needed a product that enabled us to manage and provision customer workloads quickly and efficiently, and to be able to report on the amount of data that the customer was using. The solution better enabled us to replicate our customers' data between different geos. >> We're back. Joining me now are Gregory Touretsky and Erik Kaulberg, both senior directors at Infinidat, overseeing much of the company's portfolio. Gregory, let's talk Multicloud. It's become a default part of almost all IT strategies, but done wrong, it can generate a lot of data-related costs and risks. What's Infinidat's perspective? >> So yeah, before we go there, I will mention this phenomenon of data gravity. As many of our customers report, as the amount of data grows in the organization, it becomes much harder for them to move applications and services to a different data center or to a different public cloud. So, the more data they accumulate, the harder it becomes to move it, and they get locked in to this. So we believe that any organization deserves a way to move freely between different public clouds or data centers, and that's the reason we are thinking about the multicloud solution and how we can provide an easy way for companies to move between data centers. >> So, clearly there's a need to be able to optimize your costs relative to the benefits associated with data. Erik, as we think about this, what are some of the key considerations most enterprises have to worry about? >> The biggest one overall is the strategic nature of cloud choices. At one point, cloud was a back room, shadow IT kind of thing.
You saw some IT staff member go sign up for Gmail or Dropbox or things like that, but now CIOs are thinking, "Well, I've got to get all these cloud services under control, and I'm spending a whole lot of money with one of the big two cloud providers." And so that's really the strategic rationale of why we're saying organizations, especially large enterprises, require this kind of sovereign storage that disaggregates the data from the public clouds, to truly enable the possibility of cloud competition as well as to truly deliver on the promise of the agility of public clouds. >> So, great conversation, but we're here to actually talk about something specific: Neutrix. Gregory, what is it? >> Sure, so Neutrix is a completely new offering that we're coming out with. We are not selling here any box or appliance for the customers to deploy in their data center. We're talking about a cloud service that is provided by Infinidat. We are building our infrastructure in major colos, partnering with Equinix and others. We are finding data centers that are adjacent to public clouds, such as AWS or Azure, to ensure very low latency and high-bandwidth connectivity. And then we build our infrastructure there, with InfiniBox storage and networking gear, and that allows our customers to really use this for two main reasons. One use case is disaster recovery. If a customer has our storage on prem in their data center, they may use our efficient replication mechanism to copy data and get a second copy outside of the data center without building a second data center. So, in case of disaster, they can recover. The other use case we see as very interesting for customers is the ability to consume storage directly from our cloud while running the application in the public cloud. So they can do an NFS mount or iSCSI mount to storage available from our cloud, and then run the application. We are also providing the capability to consume the same file system from multiple clouds at the same time. So you may run your application both in the Amazon and Microsoft clouds and still access and share the data. >> Sounds like it's also an opportunity to simplify ramping into a cloud as well. Is that one of the use cases? >> Absolutely. So it's basically a combination of those two use cases that I described. The customers may replicate data from their on-prem environment into the Neutrix Cloud, and then consume it from the public cloud. >> Erik, this concept has been around for a while, even if it hasn't actually been realized. What makes this in particular different? >> I think there are a couple of elements to it. So number one is, we don't really see that there's a true enterprise-grade public cloud storage offering today for active data. And so we're basically bringing in that rich heritage of InfiniBox capabilities and those technologies we've developed over a number of years, to deliver enterprise-grade storage, except without the box, as a service. So that's a big differentiator for us versus the native public cloud storage offerings. And then when you look at the universe of other companies who are trying to develop, let's say, cloud-adjacent type offerings, we believe we have the right combination of that scalable technology with the correct business model, aligned with the way that people are buying cloud today. So that's kind of the differentiation in a nutshell. >> But it's not just the box; there are also some managed services associated with it, right? >> Well, actually, it's not a box, that's the whole idea.
So, the entire thing is a consumable service. You're paying by the drink; it's simple, flat pricing of nine cents per gigabyte per month, and it's essentially as easy to consume as the native public cloud storage offerings. >> So as you look forward and imagine the role that this is going to play in conjunction with some of the other offerings, what should customers be looking to get out of Neutrix, in conjunction with the rest of the portfolio? >> So basically they can get, as Erik mentioned, what they like about InfiniBox, without dealing with the box. They get a fully managed service, they get freedom of choice, they can move applications easily between different public clouds and to or from their on-prem environment without thinking about the egress costs, and they can get great capabilities, great features like writable snapshots, without overpaying the public cloud providers. >> So, better economics, greater flexibility, better protection and de-risking of the data overall. >> Absolutely. >> At scale. >> Yes. >> Alright, great. So I want to thank you very much, Gregory and Erik, for being here on The Cube. We'll be right back to get the analyst perspective from Eric Burgener from IDC. >> And one of the challenges of our industry as a whole is that it operates to four nines as a level of excellence, for example. And what that means is, well, it could be down for 30 seconds a month. I can't think of anything worse than having to turn around to my customers and say, "Oh, I am sorry. We weren't available for 30 seconds." And yet most people that work in our IT industry seem to think that's acceptable, but it's not when it comes to data centers, clouds, and the sort of stuff that we're doing. So the fundamental question is: can we run storage that is always available? >> Welcome back. Now we're sitting here with Eric Burgener, who is a research vice president covering storage at IDC. Eric, you've listened to Infinidat's portfolio announcement. What do you think? >> Yeah, Peter, thanks for having me on the show. So, I've got a couple of reactions to that. I think that what they've announced is playing into a couple of major trends that we've seen in the enterprise. Number one is, as companies undergo digital transformation, efficiency of the IT operations is really a critical issue. And so I'm seeing a couple of things in this announcement that will really play into that area. They've got a much larger, much denser platform at this point that will allow a lot more consolidation of workloads, and that's sort of an area that Infinidat has focused on in the past, consolidating a lot of different workloads under one platform, so I think the efficiency of those kinds of operations will increase going forward with this announcement. Another area that sort of plays into this is that every organization needs multiple storage platforms to be able to meet their business requirements. And what we've seen with this announcement is that they're basically providing multiple platforms, but ones that are all built around the same architecture, so that has management ease-of-use advantages associated with it; that's a benefit that will potentially allow CIOs to move to a smaller number of vendors and fewer administrative skill sets, yet still meet their requirements. And I think the other area that's sort of a big issue here is what they're announcing in the hybrid cloud arena. So, clearly, enterprises are operating as hybrid clouds today; well over 70% of all organizations actually have hybrid cloud operations in place.
What we've seen with this announcement is an ability for people to leverage the full storage management capabilities of an Infinidat platform while they leverage multiple clouds on the back end. And if they need to move between clouds, they have an ability to do that with this new offering, the Neutrix Cloud. And so that really breaks the lock-in that you see from a lot of cloud operations out there today, which in certain cases can really limit the flexibility that a CIO has to meet their business requirements. >> Let me build on that a second. So, really what you're saying is that by not binding the data to the cloud, the business gets greater flexibility in how they're going to use the data, how they're going to apply the data, both from an applications standpoint as well as a resource and cost standpoint. >> Yeah, absolutely. I mean, moving to the cloud is actually sort of a fluid decision, and sometimes you need to move things back. We've actually seen a lot of repatriation going on: people that started in the cloud, and then as things changed they needed to move things back, or maybe they want to move to another cloud operation. They might want to move from Amazon to Google or Microsoft. What we're seeing with Neutrix Cloud is an ability basically to do that. It breaks that lock-in. >> Great. >> They can still take advantage of those back-end platforms. >> Fantastic. Eric Burgener, IDC Research Vice President, Storage. Back to you, Dave. >> Thanks, Peter. We're back with Brian Carmody. We're going to summarize now. So we're seeing the evolution of Infinidat going from a single-product company to a portfolio company. Brian, I want to ask you to summarize. I want to start with InfiniBox, and I'm also going to ask you: is this the same software, and does it enable new use cases, or is this just bigger, better, faster? >> Yeah, it's the same software that runs on all of our InfiniBox systems; it has the same feature set, and it's completely compatible for replication and everything like that. It's just more capacity: 8.4 petabytes of effective capacity. And the use cases that are pulling this into the field are deep learning, analytics, and IoT. >> Alright, let's go into the portfolio. I'm going to ask you: do you have a favorite child in the portfolio? Let's start with InfiniSync. >> Sure, so I love them all equally. InfiniSync is a revolutionary appliance for banking and other highly regulated industries that have a requirement for zero RPO, but also need protection against rolling disasters and regional disasters. Traditionally, the way that gets solved is you have a data center, say, in lower Manhattan where you do your primary computing, you do synchronous to a data bunker, say, in northern New Jersey, and then you do asynchronous out of region, say, out to California. So, under our model with InfiniSync, it's a 450-pound, ballistically protected data bunker appliance. InfiniSync guarantees that with no data loss and no reduction in performance, all transactions are guaranteed for delivery to the remote out-of-region site. So what this allows customers to do is to erase data centers out of their topology. Northern New Jersey, the bunker, goes away, and customers, again in highly regulated industries like banking that have these requirements, are going to save tens of millions of dollars a year in cost avoidance by closing down unnecessary data centers. >> Dramatically sort of simplify their infrastructure and operations.
Alright, InfiniGuard. I stumbled into it at another event, you guys hadn't announced it yet, and I was like, "Hmm, what's this?" But tell us about InfiniGuard. >> Yeah, so InfiniGuard is a multi-petabyte appliance that delivers 20 petabytes of data protection in a single rack, in a single system, and it has 10 times the restore performance of Data Domain at a fraction of the cost. >> Okay, and then the Neutrix Cloud; this is to me maybe the most interesting of all the announcements. What's your take on that? >> So, like I said, I love them all equally, but Neutrix Cloud for sure is the most disruptive of all the technologies that we're announcing this week. The idea of Neutrix Cloud is that it is neutral storage for consumption in the public cloud. So think about it like this. Don't you think it's weird that EBS and EFS are only compatible with Amazon computing, and Google Cloud storage is only compatible with Google? Think about it for a second: if IBM storage only worked with IBM servers, that's bringing us back to the 1950s and '60s. Or if EMC storage was only compatible with Dell servers. Customers would never accept that, but in the Silicon Valley oligarchic, walled-garden model, they can't help themselves. They just have to get your data. "Just give us your data, it'll be great. We'll send a Snowball or a truck to go pick it up." Because they know once they have your data, they have you locked in. They cannot help themselves from creating this walled-garden proprietary model. Or, like we call it, a walled prison yard. So the idea is, with Neutrix Cloud, rather than your storage being weaponized against you as a customer to lock you in, what if they didn't get your data, and what if instead you stored your data with a trusted, neutral third party that practices data neutrality? Because we guarantee contractually to every customer that we will never take money from, and will never shake down, any of the cloud providers for access to our Neutrix Cloud network, and we will never do side deals and partnerships with any of them to favor one cloud over the other. So the end result: you end up having, for example, a couple of petabyte-scale file systems where you can have thousands of guests that have that file system mounted simultaneously from your VNets in Azure, from your VPCs in AWS, and they all have simultaneous, screaming high-performance access to one common set of your data. So by pulling and ripping your data from the arms of those public cloud providers, and instead only giving them shared, common, neutral access, we can now get them to start competing against each other for business. So rather than your storage being weaponized against you, it's a tool that you can use to force the cloud providers to compete against each other for your business. >> So, I'm sure you guys may have a lot of questions there. Hop into the crowd chat, it's crowdchat.net/infinichat, an Ask Me Anything CrowdChat. Brian will be in there in a moment. I've got to ask you a couple more questions before I let you go. >> Sure. >> What was your motivation for this portfolio expansion? >> So the motivation was that at the end of the day, customers are very clear to us that they do not want to focus on their infrastructure. They want to focus on their businesses. And as their infrastructure scales, it becomes exponentially more complex to deal with issues of reliability, economics and performance.
And so we realized that if we're going to fulfill our company's mission, we have to expand our mission and help customers solve problems throughout more of the data lifecycle, and focus on some of the pain points that extend beyond primary storage. We have to start bringing solutions to market that help customers get to the cloud faster and, when they get there, to be more agile, and to focus on data protection, which again is a huge pain point. So the motivation at the end of the day is about helping customers do more with less. >> And the mission again, can you just summarize that? Multi-petabyte, and...? >> Yeah, the corporate mission of Infinidat is to store humanity's knowledge and to make new forms of computing possible. >> Big mission. >> Our humble mission. >> Humble, right. The reason I ask that question about your motivation is that people might say, "Oh, obviously, to make more money." But there have been a lot of single-product companies, feature companies, that have done quite well, so in order to fulfill that mission, you really need a portfolio. What should we be watching as barometers of success? How are you guys measuring yourselves? How should we be measuring you? >> Oh, I think the fairest way to do that is to measure us on successful execution of that mission, and at the end of the day, it's about helping customers compute harder and deeper on larger data sets, and to do so at lower cost than the competitor down the road, because at the end of the day, that's the only source of competitive advantage that companies get out of their infrastructure. The better we help customers do that, the more we consider ourselves to be succeeding in our mission. >> Alright, Brian, thank you. No kids, but new products are kind of like giving birth. >> It's really cool. >> So hop into the crowd chat, it's an Ask Me Anything session. Brian will be in there, we've got analysts in there, a bunch of experts as well. Brian, thanks very much. It was awesome having you on. >> Thanks, Dave. >> Thanks for watching everybody. We'll see you in the crowd chat. (upbeat digital music)
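As a rough illustration of the Neutrix Cloud economics discussed above, the sketch below compares the quoted nine-cents-per-gigabyte-per-month price against a hypothetical public cloud egress rate. The egress figure is an assumption for illustration only; it is not a number quoted in the interview and varies widely by provider and tier.

```python
# Illustrative economics only. The $0.09/GB/month storage price is the figure
# quoted in the interview; the egress rate below is a hypothetical, typical
# public-cloud internet egress price, not a quoted number.
NEUTRIX_PER_GB_MONTH = 0.09
ASSUMED_EGRESS_PER_GB = 0.09   # assumption for illustration

capacity_gb = 100 * 1024       # a hypothetical 100 TiB shared working set

monthly_storage = capacity_gb * NEUTRIX_PER_GB_MONTH
one_time_move = capacity_gb * ASSUMED_EGRESS_PER_GB

print(f"Neutrix storage:        ${monthly_storage:>9,.0f} per month")
print(f"One cross-cloud egress: ${one_time_move:>9,.0f} per move")
# Under these assumptions, every repatriation or cloud switch costs about a
# month of storage in egress fees alone. That toll is the "data gravity"
# Gregory describes, and avoiding it per move is the pitch for neutral,
# cloud-adjacent storage.
```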

Published Date : Mar 21 2018




Wikibon Predictions Webinar with Slides


 

(upbeat music) >> Hi, welcome to this year's annual Wikibon Predictions. This is our 2018 version. Last year, we had a very successful webinar describing what we thought was going to happen in 2017 and beyond, and we've assembled a team to do the same thing again this year. I'm very excited to be joined by the folks listed here on the screen. My name is Peter Burris. But with me is David Floyer. Jim Kobielus is remote. George Gilbert's here in our Palo Alto studio with me. Neil Raden is remote. David Vellante is here in the studio with me. And Stuart Miniman is back in our Marlborough office. So thank you, analysts, for attending, and we look forward to a great teleconference today. Now what we're going to do over the course of the next 45 minutes or so is hit about 13 of the 22 predictions that we have for the coming year. So, and I want to reinforce this, if you have additional questions or things that don't get answered, and you're a client, give us a call. Reach out to us. We'll leave you with the contact information at the end of the session. But to start things off, we just want to make sure that everybody understands where we're coming from and let you know who Wikibon is. Wikibon is a company that starts with the idea that what's important is to research communities. Communities are where the action is. Community is where the change is happening. And community is where the trends are being established. And so we use digital technologies like theCUBE, CrowdChat and others to really ensure that we are surfacing the best ideas that are in a community and making them available to our clients, so that they can be more successful in their endeavors. When we do that, our focus has always been on a very simple premise, and that is that we're moving to an era of digital business. For many people, digital business can mean virtually anything. For us it means something very specific. To us, the difference between business and digital business is data. A digital business uses data to differentially create and keep a customer. So, borrowing from what Peter Drucker said, if the goal of business is to create and keep customers, the goal of digital business is to use data to do that. And that's going to inform an enormous number of conversations and an enormous number of decisions and strategies over the next few years. We specifically believe that all businesses are going to have to establish what we regard as the five core digital business capabilities. First, they're going to have to put in place concrete approaches to turning more data into work. It's not enough to just accrete data, to capture data or to move data around. You have to be very purposeful and planful in how you establish the means by which you turn that data into work, so that you can create and keep more customers. Secondly, it's absolutely essential that businesses build the core technology capabilities for doing a better job of capturing data, and the internet of things and people, with mobile computing for example, is going to be a crucial feature of that. Then, once you capture that data, you have to turn it into value. And we think this is the essence of what big data, and in many respects AI, is going to be all about.
And then once you have the possibility, kind of the potential energy, of that data in place, then you have to turn it into kinetic energy and generate work in your business through what we call systems of agency. Now, all of this is made possible by a significant transformation that happens to be conterminous with this transition to digital business, and that is the emergence of the cloud. The technology industry has always been defined by the problems it was able to solve, catalyzed by the characteristics of the technology that made it possible to solve them. And cloud is crucial to almost all of the new types of problems that we're going to solve. So these are the five digital business capabilities where we're going to have our predictions. Let's start first and foremost with this notion of turning more data into work. So our first prediction relates to how data governance is likely to change on a global basis. If we believe that we need to turn more data into work, well, businesses haven't generally adopted many of the principles associated with those practices. They haven't optimized to do that better. They haven't elevated those concepts within the business as broadly and successfully as they could or as they should. We think that's going to change, in part through the emergence of GDPR, the General Data Protection Regulation. It's going to go into full effect in May 2018. A lot has been written about it. A lot has been talked about. But our core observation ultimately is that the dictates associated with GDPR are going to elevate the conversation on a global basis. And it mandates something that's now called the data protection officer. We're going to talk about that in a second, David Vellante. But it is going to have real teeth. So we were talking with one chief privacy officer not too long ago who suggested that had the Equifax breach occurred under the rules of GDPR, the actual fines that would have been levied would have been in excess of 160 billion dollars, which is a little bit more than the zero dollars that have been levied thus far. Now, we've seen new bills introduced in Congress, but ultimately our observation, from our conversations with a lot of chief privacy officers or data protection officers, is that in the B2B world, GDPR is going to strongly influence businesses' behavior regarding data not just in Europe but on a global basis. Now that has an enormous implication, David Vellante, because it certainly suggests, with this notion of a data protection officer, that now we've got another potential chief here. How do we think that's going to organize itself over the course of the next few years? >> Well, thank you, Peter. There are a lot of chiefs (laughs) in the house, and sometimes it gets confusing: there's the CIO, there's the CDO, and that's either chief digital officer or chief data officer. There's the CSO; that could be strategy, sometimes that could be security. There's the CPO; is that privacy or product? As I say, it gets confusing sometimes. On theCUBE we talk to all of these roles, so we wanted to try to add some clarity to that. The first thing we want to say is that the CIO, the chief information officer, that role is not going away. A lot of people predict that; we think that's nonsense. They will continue to have a critical role. Digital transformations are the priority in organizations. And so the chief digital officer is evolving from just a strategy role to much more of an operational role.
Generally speaking, these chiefs tend to report, in our observation, to the chief operating officer or president/COO. And we see the chief digital officer taking on increasing operational responsibility, aligning with the COO and getting incremental responsibility that's more operational in nature. So the prediction really is that the chief digital officer is going to emerge as a charismatic leader amongst these chiefs, and by 2022, nearly 50% of organizations will position the chief digital officer in a more prominent role than the CIO, the CISO, the CDO and the CPO. Those will still be critical roles. The CIO will be an enabler. The chief information security officer has a huge role to play, obviously, especially in terms of making security a team sport and not just something that falls on IT's shoulders or the security team's shoulders. The chief data officer, who really emerged from a records and data management role in many cases, particularly within regulated industries, will still be responsible for that data architecture and data access, working very closely with the emerging chief privacy officer, and maybe even the chief data protection officer. Those roles will be pretty closely aligned. So again, these roles remain critical, but the chief digital officer we see as increasing in prominence. >> Great, thank you very much, David. So when we think about these two activities, what we're really describing is that over the course of the next few years, we strongly believe data will be regarded more as an asset within business, and we'll see resources devoted to it, and we'll certainly see management devoted to it. Now, that leads to the next set of questions: as data becomes an asset, the pressure to acquire data becomes that much more acute. We believe strongly that IoT has an enormous implication longer term as a basis for thinking about how data gets acquired. Now, operational technology has been in place for a long time. We're not limiting ourselves to just operational technology when we talk about this. We're really talking about the full range of devices that are going to provide and extend information and digital services out to consumers, out to the Edge, out to a number of other places. So let's start here. Over the course of the next few years, Edge analytics is going to be an increasingly important feature of how technology decisions get made, how technology or digital business gets conceived, and even, ultimately, how business gets defined. Now, David Floyer's done a significant amount of work in this domain, and we've provided that key finding on the right-hand side. What it shows is that if you take a look at a stylized Edge-based application and you presume that all the data moves back to a centralized cloud, you're going to increase your costs dramatically over a three-year period. Now that motivates the need, ultimately, for an approach that brings greater autonomy, greater intelligence, down to the Edge itself, and we think that ultimately IoT and Edge analytics become increasingly synonymous. The challenge, though, is that while this puts pressure on keeping more of the data at the Edge, ultimately a lot of the data exhaust can someday come to be regarded as valuable data. And so as a consequence, there's still a countervailing pressure to move all data, not at the moment of automation but for modeling and integration purposes, back to some other location.
The thing that's going to determine that is the rate at which the costs of moving the data around go down. And our expectation over the next few years, when we think about the implications of some of the big cloud suppliers, Amazon, Google and others that are building out significant networks to facilitate their business services, is that they may in fact have as great an impact on the common carriers as they have had on server and other infrastructure companies. So our prediction over the next few years is: watch what Amazon and Google do as they try to drive costs down inside their networks, because that will have an impact on how much data moves from the Edge back to the cloud. It won't necessarily have an impact on the need for automation at the Edge, because latency doesn't change, but it will have a cost impact. Now that leads to a second consideration, and the second consideration is that when we talk about greater autonomy at the Edge, we need to think about how that's going to play out. Jim Kobielus. >> Jim: Hey thanks a lot Peter. Yeah, so what we're seeing at Wikibon is that more and more application development involves AI, and more and more of the AI involves deployment of those models, deep learning, machine learning and so forth, to the Edges of the internet of things and people. And much of that AI will be operating autonomously with little or no round-tripping back to the cloud. In fact, we're seeing about a quarter of AI development projects (static interference with web-conference) as Edge deployments. What that involves is that more and more of those AI applications will be bespoke. They'll be one-of-a-kind, unique or unprecedented applications, and what that means is that there are a lot of different deployment scenarios within which organizations will need to use new forms of learning to ready those AI applications to do their jobs effectively, be it real-time prediction, guiding an autonomous vehicle and so forth. Reinforcement learning is at the core of many of these kinds of projects, especially those that involve robotics. So really, software is eating the world, and the biggest bites are being taken at the Edge. Much of that is AI, much of it autonomous, where there is little or no tolerance for round-trip latency, and a real need for adaptive, AI-infused components that can learn by doing. From environmental variables, they can adapt their own algorithms to take the right actions. So, they'll have far reaching impacts on application development in 2018. For the developer, the new developer really is a data scientist at heart. They're going to have to tap into a new range of sources of data, especially Edge-sourced data from the sensors on those devices. They're going to need to do model training and testing, especially reinforcement learning, which doesn't involve training data so much as it involves building an algorithm that can learn to maximize what's called a cumulative reward function, and to do that training adaptively, in real time, at the Edge, and so forth and so on. So really, much of this will be bespoke in the sense that every Edge device increasingly will have its own set of parameters and its own set of objective functions which will need to be optimized. 
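Jim's "cumulative reward function" has a precise meaning in reinforcement learning. The sketch below is a toy epsilon-greedy agent in Python, a deliberately minimal stand-in for the far richer reinforcement learning used in robotics; the action names and the reward function are invented for illustration.

    import random

    # Minimal learn-by-doing loop: no labeled training data. The agent
    # estimates each action's value from observed rewards and acts to
    # maximize cumulative reward. The environment here is a stand-in.

    ACTIONS = ["steer_left", "hold", "steer_right"]    # hypothetical actions

    def reward(action):
        base = {"steer_left": 0.2, "hold": 1.0, "steer_right": 0.4}[action]
        return base + random.gauss(0, 0.1)             # noisy feedback

    values = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    cumulative = 0.0
    epsilon = 0.1                                      # exploration rate

    for _ in range(10_000):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                 # explore
        else:
            a = max(values, key=values.get)            # exploit best estimate
        r = reward(a)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]       # incremental mean
        cumulative += r

    print(values, f"cumulative reward: {cumulative:,.1f}")

Swap the stand-in reward for sensor feedback from a physical environment and you have the shape of the in-the-field, adaptive training Jim is describing.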
So that's one of the leading edge forces, trends, in development that we see in the coming year. Back to you Peter. >> Excellent Jim, thank you very much. The next question here: how are you going to create value from data? So, we've gone through a couple of trends, and we have multiple others, about what's going to happen at the Edge. But as we think about how we're going to create value from data, Neil Raden. >> Neil: You know, the problem is that data science emerged rapidly out of sort of a perfect storm of big data and cloud computing and so forth. And people who had been involved in quantitative methods rapidly glommed onto the title because, let's face it, it was very glamorous and paid very well. But there weren't really good best practices. So what we have in data science is a pretty wide field of things that are called data science. My opinion is that the true data scientists are people who are scientists and are involved in developing new or improving algorithms, as opposed to prepping data and applying models. So the whole field really generated very quickly, in just a few years. I called it generation zero, which is more like data prep and model management all done manually. And it wasn't really sustainable in most organizations, for obvious reasons. In generation one, some vendors stepped up with tool kits or benchmarks or whatever for data scientists and made it a little better. And generation two is what we're going to see in 2018: the need for data scientists to no longer prep data, or at least not spend very much time on it, and not to do model management, because the software will not only manage the progression of the models but even recommend them and generate them and select the data and so forth. So the field is in for a very big change, and I think what you're going to see is that the ranks of data scientists are going to sort of bifurcate into the old style (let me sit down and write some spaghetti code in R or Java or something) and those who use these advanced tool kits to really get the work done. >> That's great Neil, and of course, when we start talking about getting the work done, we are becoming increasingly dependent upon tools, aren't we George? But the tool marketplace for data science, for big data, has been somewhat fragmented and fractured. And it hasn't necessarily focused on solving the problems of the data scientists, but in many respects on the problems that the tools themselves have. So what's going to happen to the tools in the coming year, given Neil's prescription that the tools must improve? >> Okay so, the big thing we see supporting what Neil was talking about: it's partly a symptom of a product issue and a go-to-market issue, where the product issue was that we had a lot of best-of-breed products that were never designed to fit together. In the broader big data space, that's the same issue we faced more narrowly with on-prem Hadoop, where we were trying to fit together a bunch of open source packages that carried an admin and developer burden. More broadly, what Neil is talking about is a set of richer, end-to-end tools that handle everything from ingest all the way to the operationalization and feedback of the models. 
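The generation-two workflow Neil describes, and the end-to-end tooling George points to, can already be sketched with today's libraries. In the fragment below, scikit-learn searches over both the prep and the model choices instead of a data scientist hand-coding each one; the data is synthetic and the candidate grid is just an example, not a recommended configuration.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # The software, not the analyst, evaluates each prep + model combination
    # and selects the winner by cross-validation.
    pipe = Pipeline([("scale", StandardScaler()),
                     ("model", LogisticRegression(max_iter=1000))])
    search = GridSearchCV(
        pipe,
        param_grid=[
            {"model": [LogisticRegression(max_iter=1000)],
             "model__C": [0.1, 1.0, 10.0]},
            {"model": [RandomForestClassifier(random_state=0)],
             "model__n_estimators": [100, 300]},
        ],
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))

Commercial generation-two platforms go further, recommending features and retraining models automatically, but the division of labor is the same: the human frames the problem, the software grinds through the candidates.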
But part of what has to go on here is that with these open source tools, the price points and the functional footprints that many of the vendors are supporting right now can't feed an enterprise sales force. Everyone talks, with their open source business models, about land and expand and inside sales. But the problem is, once you want to go to wide deployment in an enterprise, you still need someone negotiating commercial terms at a senior level. You still need the technical people fitting the tools into a broader architecture. And most of the vendors we have who are open source vendors today don't have either the product breadth or the deal size to support traditional enterprise software sales. An account team would typically carry a million and a half to two million dollars of quota every year, so we see consolidation, and the consolidation again is driven by the need for simplicity for the admins and the developers, and for business model reasons, to support an enterprise sales force. >> All right, so what we're going to see happen in the course of the coming year is a lot of specialization and recognition of what data science is, what the practices are, how it's going to work, supported by an increasing quality of tools, and a lot of tool vendors are going to be left behind. Now the third notion here, for those core technology capabilities, is that we still have to act based on data. The good news is that big data is starting to show some returns, in part because of some of the things that AI and other technologies are capable of doing. But we have to move beyond just creating the potential; we have to turn that potential into work, and that's what we mean ultimately by this notion of systems of agency: the idea that data driven applications will increasingly act on behalf of a brand, on behalf of a company, and building those systems out is going to be crucial. It's going to require a whole new set of disciplines and expertise. So when we think about what's going to be required, it always starts with this notion of AI. A lot of folks are presuming, however, that AI is going to be relatively easy to build or relatively easy to put together. We have a different opinion George. What do we think is going to happen as these next few years unfold related to AI adoption in large enterprises? >> Okay so, let's go back to the lessons we learned from the big data era, the, you know, let's-put-a-data-lake-in-place approach, which was sort of the top of everyone's agenda for several years. The expectation was it was going to cure cancer, taste like chocolate and cost a dollar. And uh. (laughing) It didn't quite work out that way. Partly because we had a burden on the administrator, again, of so many tools that weren't all designed to fit together, even though they were distributed together. And then the data scientists, the guys who had to take all this data that wasn't carefully curated yet, had to turn that into advanced analytics and machine learning models. We have many of the same problems now with tool sets that are becoming more integrated, but at lower levels. This is partly what Neil Raden was just talking about. What we have to recognize is something that we've seen all along, I mean since the beginning of (laughs) corporate computing. We have different levels of abstraction, and at the very bottom, when you're dealing with things like TensorFlow or MXNet, that's not for mainstream enterprises. That's for the big sophisticated tech companies who are building new algorithms on those frameworks. 
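To make the bottom of that stack concrete: working at the TensorFlow or MXNet level means expressing the math of a model directly. The fragment below does gradient-descent training of a logistic regression in plain NumPy, which is only a gesture at that level of abstraction; the real frameworks add automatic differentiation, GPUs and distribution, and all the data here is synthetic.

    import numpy as np

    # Gradient descent for logistic regression, written at the tensor
    # level. This is the kind of math framework-level work deals in;
    # the higher layers of the stack hide it entirely.

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))                  # a mini-batch of features
    y = (X[:, 0] + X[:, 1] > 0).astype(float)      # synthetic labels
    w = np.zeros(8)
    lr = 0.1                                       # learning rate

    for _ in range(100):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
        grad = X.T @ (p - y) / len(y)              # gradient of the log-loss
        w -= lr * grad                             # descend

    print("trained weights:", np.round(w, 2))

Most enterprises never need to touch this layer; the value for them sits higher up the stack.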
There's a level above that where you're using, say, a Spark cluster and the machine learning built into that. That's slightly more accessible, but when we talk about mainstream enterprises taking advantage of AI, the low hanging fruit is for them to use the pre-trained models that the public cloud vendors have created with all the consumer data on speech, image recognition, natural language processing. And then some of those capabilities can be further combined into applications like managing a contact center, and we'll see more from the likes of Amazon: recommendation engines, fulfillment optimization, pricing optimization. >> So our expectation ultimately, George, is that we're going to see a lot of AI adoption happen through existing applications, because the vendors that are capable of acquiring the talent, experimenting and creating value, the software vendors, are going to be where a lot of the talent ends up. So Neil, we have an example of that. Give us an example of what we think is going to happen in 2018 when we start thinking about exploiting AI in applications. >> Neil: I think it's fairly clear that the application of what's called advanced analytics and data science and even machine learning is rapidly becoming commonplace in organizations, not just at the bottom of the triangle here. But I like the example of Salesforce.com. What they've done with Einstein is they've made machine learning and, I guess you can say, AI applications available to their customer base, and why is that a good thing? Because their customer base already has a giant database of clean data that they can use. So you're going to see a huge number of applications being built with Einstein against Salesforce.com data. But there's another thing to consider, and that is that a long time ago Salesforce.com built connectors to a zillion types of external data. So, if you're a Salesforce.com customer using Einstein, you're going to be able to use those advanced tools without knowing anything about how to train a machine learning model, and start to build those things. And I think they're going to lead the industry in that sense. That's going to push their revenue next year to, I don't know, 11 billion dollars or 12 billion dollars. >> Great, thanks Neil. All right, so when we think about further evidence of this and further impacts, we ultimately have to consider some of the challenges associated with how we're going to create application value continually from these tools. And that leads to the idea that one of the cobbler's children that's going to benefit from AI will in fact be the developer organization. Jim, what's our prediction for how auto-programming impacts development? >> Jim: Thank you very much Peter. Yeah, automation, wow. Auto-programming, like I said, is the epitome of enterprise application development for us going forward. People know it as code generation, but that really understates the scope of auto-programming as it's evolving. In 2018, what we're going to see is machine learning-driven code generation approaches coming to the forefront of innovation. We're seeing a lot of activity in the industry in which applications use ML to drive the productivity of developers for all kinds of applications. We're also seeing a fair amount of what's called RPA, robotic process automation. 
And really, how they differ is that ML will drive code generation from what I call the inside out, meaning creating reams of code that are geared to optimize a particular application scenario. RPA, by contrast, takes the outside-in approach, which is essentially the evolution of screen scraping: it's able to infer the underlying code needed for applications of various sorts from the external artifacts, the screens, and from the flow of interactions and clicks and so forth for a given application. We're going to see that ML and RPA will complement each other in the next generation of auto-programming capabilities. And so, you know, application development tedium is really one of the enemies of productivity (static interference with web-conference). This is a lot of work, very detailed, painstaking work. And what developers need are better, more nuanced and more adaptive auto-programming tools to be able to build code at the pace that's absolutely necessary for this new environment of cloud computing. So really, AI-related technologies can be applied, and are being applied, to application development productivity challenges of all sorts. AI is fundamental to RPA as well. We're seeing a fair number of the vendors in that space incorporate ML-driven OCR and natural language processing and screen scraping and so forth into their core tools, to be able to quickly build up the logic to drive outside-in automation of fairly complex orchestration scenarios. In 2018, we'll see more of these technologies come together. But you know, they're not a silver bullet. Because fundamentally, organizations that are considering going deep into auto-programming are going to have to factor AI into their overall plans. They need to get knowledgeable about AI. They're going to need to bring more AI specialists into their core development teams to be able to select from the growing range of tools that are out there, RPA and ML-driven auto-programming. Overall, what we're seeing is that the data scientists, who have been the fundamental developers of AI, are coming into the core of development tools and skills in organizations. And they're going to be fundamental to this whole trend in 2018 and beyond. If AI gets proven out in auto-programming, these developers will then be able to evangelize the core utility of this technology, AI, in a variety of other backend but critically important investments that organizations will be making in 2018 and beyond, especially in IT operations and management; AI is big in that area as well. Back to you there, Peter. >> Yeah, we'll come to that a little bit later in the presentation Jim, that's a crucial point. But the other thing we want to note here, regarding ultimately how folks will create value out of these technologies, is to consider the simple question of: okay, how much will developers need to know about infrastructure? And one of the big things we see happening is this notion of serverless. And here we've called it "serverless, develop more." Jim, why don't you take us through why we think serverless is going to have a significant impact on the industry, certainly from a developer perspective and a developer productivity perspective. >> Jim: Yeah, thanks. Serverless is really having an impact already, and has for the last several years now. 
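At the code level, the unit of serverless that Jim goes on to describe is just a stateless handler. Here's a minimal sketch in Python following the AWS Lambda handler convention; the event fields and the threshold are invented for illustration.

    import json

    # A stateless serverless function: no containers or VMs to manage.
    # The platform invokes the handler once per event and scales it out.
    # The "device_id"/"reading" event fields are hypothetical.

    def handler(event, context):
        reading = float(event["reading"])
        alert = reading > 75.0              # illustrative threshold
        return {
            "statusCode": 200,
            "body": json.dumps({"device": event["device_id"],
                                "alert": alert}),
        }

Note what is absent: no server setup, no container lifecycle, no state. That absence is the simplification strategy Jim describes next.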
Many in the developer world are familiar with AWS Lambda, which is really the groundbreaking public cloud service that incorporates serverless capabilities. Essentially it's an abstraction layer that enables developers to build stateless code that executes in a cloud environment, and to build microservices, without having to worry about the underlying management of containers and virtual machines and so forth. So in many ways, serverless is a simplification strategy for developers. They don't have to worry about the underlying plumbing. They need to worry about the code, of course: what are called Lambda functions, or functional methods, and so forth. Now, functional programming has been around for quite a while, but it's coming to the fore in this new era of serverless environments. What we're predicting for 2018 is that more than 50% of lean microservices deployments in the public cloud will be in serverless environments. There's AWS, and Microsoft has Azure Functions. IBM has their own. Google has their own. And there's a variety of code bases for private deployment of serverless environments that we see evolving and beginning to be deployed in 2018. They all involve functional programming, which, when coupled with serverless clouds, enables greater scale and speed in terms of development. And it's very agile-friendly in the sense that you can get a functionally programmed serverless microservice up in a hurry without having to manage state and so forth. It's very DevOps-friendly. In a very real sense it's a lot faster than having to build and manage and tune, you know, containers and VMs and so forth. So it can enable a more real time, rapid and iterative development pipeline going forward in cloud computing. And fundamentally, what serverless is doing is pushing more of these Lambda functions to the Edge, to the Edges. If you were at AWS re:Invent last week or the week before, you noticed AWS is putting a big push on putting Lambda functions at the Edge, in devices, for the IoT, as we're going to see in 2018. Pretty much the entire cloud arena, everybody, will push more of the serverless, functional programming to the Edge devices. It's just a simplification strategy. And that actually is a powerful tool for speeding up some of the development metabolism. >> All right, so Jim, let me jump in here and say that we've now introduced some of these benefits and really highlighted the role that the cloud is going to play. So, let's turn our attention to this question of cloud optimization. And Stu, I'm going to ask you to start us off by talking about what we mean by true private cloud and ultimately our prediction for private cloud. Why don't you take us through what we think is going to happen in this world of true private cloud? >> Stuart: Sure Peter, thanks a lot. So when Wikibon launched the true private cloud terminology, which was two years ago next week, it was in some ways the coming together of a lot of trends similar to things that George, Neil and James have been talking about. So, it is nothing new to say that we needed to simplify the IT stack. We all know the tried and true discussion of way too much of the budget being spent kind of keeping the lights on, or, as we'd like to say, just running the business. 
If you squint through this beautiful chart that we have on here, a big piece of this is operational staffing, which is where we need to be able to make a significant change. And what we've been really excited about, what led us to this initial market segment, and what we're continuing to see good growth on, is the move from traditional, really siloed infrastructure to infrastructure that is software based. You want IT to really be able to focus on the application services that they're running. And our focus for 2018 is of course the central point: it's the data that matters here. The whole reason we have infrastructure is to be able to run applications, and a key determiner as to where and what I use is the data: how can I not only store that data but actually gain value from it? Something we've talked about time and again, and that is a major determining factor as to whether I'm building this in a public cloud, doing it in my core, or whether it's something that is going to live on the Edge. So what we were saying here with the true private cloud is that not only are we going to simplify our environment; it's really the operational model that we talked about. We often say the line: cloud is not a destination, it's an operational model. So a true private cloud gives me some of the feel and the management type of capability that I had in the public cloud. It's, as I said, not just virtualization. It's much more than that. It's how I can start getting services, and one of the extensions is that true private cloud does not live in isolation. When we have kind of a core, public cloud and Edge deployments, I need to think about the operational models: where data lives, what processing happens in which environments, and what data we'll need to move between them; and of course there are fundamental laws of physics that we need to consider in that. So, the prediction of course is that we know how much gear and focus has been on the traditional data center, and true private cloud helps that transformation to modernization, and the big focus is that many of these applications we've been talking about, and uses of data sets, are starting to come into these true private cloud environments. So, we've had discussions: there's Spark, there are modern databases. There are going to be many reasons why these might live in the private cloud environment, and therefore that's where we're going to see tremendous growth and a lot of focus. And we're seeing a new wave of companies focusing on this, to deliver solutions that will do more than just a step function for infrastructure or getting us outside of our silos, but really help us deliver on those cloud native applications, where we pull in things like what Jim was talking about with serverless and the like. >> All right, so Stu, what that suggests ultimately is that data is going to dictate that everything's not going to end up in centralized public clouds, because of latency costs, data governance and IP protection reasons, and there will be some others. At bare minimum, that means most large enterprises are going to have at least a couple of clouds. Talk to us about what this impact of multi cloud is going to look like over the course of the next few years. >> Stuart: Yeah, critical point there Peter. Because, right, unfortunately, we don't have one solution. 
There's nobody we run into that says, oh, you know, I just do a single environment. It would be great if we only had one application to worry about. But as you've shown in this lovely diagram here, we all use lots of SaaS, and increasingly Oracle, Microsoft, Salesforce are all pushing everybody to multiple SaaS environments, and that has major impacts on my security and where my data lives. Public cloud, no doubt, is growing by leaps and bounds. And many customers are choosing applications to live in different places. So just as in data centers, I would look at it from an application standpoint and build up what I need. Often Amazon is doing phenomenally, but maybe there are things that I'm doing with Azure, maybe there are things I'm doing with Google or others, as well as with my service providers, for locality, for specialized services; there are reasons why people are doing it. And what customers would love is an operational model that can actually span between those. So we are very early in trying to attack this multi cloud environment. There's everything from licensing to security to, you know, just operationally, how do I manage those? And a piece of this that we're touching on in this prediction is that Kubernetes actually can be a key enabler for that cloud native environment. As Jim talked about with serverless, what we'd really like is for our developer to be able to focus on building their application and not think as much about the underlying infrastructure, whether that be racks of servers that I built myself or public cloud infrastructure. So we really want to think more at the data and application level. SaaS and PaaS is the model, and Kubernetes holds the promise to solve a piece of this puzzle. Now, Kubernetes is by no means a silver bullet for everything that we need, but it absolutely is doing very well. Our team was at the Linux Foundation's CNCF show, KubeCon, last week, and there is broad adoption from over 40 of the leading providers; Amazon is now a piece of it, and even Salesforce signed up to the CNCF. So Kubernetes is allowing me to manage multi cloud workflows, and therefore the prediction we have here, Peter, is that 50% of development teams will be building and sustaining multi cloud environments with Kubernetes as a foundational component. >> That's excellent Stu. But when we think about it, especially because of the opportunities associated with true private cloud, the hardware technologies are also going to evolve. There will be enough money here to sustain that investment. David Floyer, we do see another architecture on the horizon where, for certain classes of workloads, we will be able to collapse and replicate many of these things in an economical, practical way on premise. We call that UniGrid, and NVMe over fabric is a crucial feature of UniGrid. >> Absolutely. So, NVMe over fabric, or NVMe-oF, takes NVMe, which is out there as storage, and turns it into a system framework. It's a major change in system architecture. We call this UniGrid, and it's going to be a focus of our research in 2018. Vendors are already out there. This is the fastest movement from early standards into products themselves. You can see on the chart that IBM has come out with NVMe over fabrics, with the 900 storage connected to Power9 systems. NetApp have the EF750. A lot of other companies are there. 
Mellanox is out there with the networks, the high speed networks. Excelero has a major part of the storage software. And it's going to be used in particular with things like AI. So what are the drivers and benefits of this architecture? The key is that data is the bottleneck for applications. We've talked about data; the amount of data is key to making applications more effective and higher value. So NVMe and NVMe over fabrics allow data to be accessed in microseconds as opposed to milliseconds, and they allow gigabytes of data per second as opposed to megabytes of data per second. They also allow thousands of processes to access all of the data at very, very low latencies, and that gives us amazing parallelism. So what this is about is disaggregation of storage and network and processors. There are some huge benefits from that, not the least of which is that you get back about 50% of the processor, because you don't have to do storage and networking on it. And you save from stranded storage; you save from stranded processor and networking capabilities. So overall, it's going to be cheaper. But more importantly, it's a basis for delivering systems of intelligence. And systems of intelligence are bringing together systems of record, the traditional systems, not rewriting them but attaching them to real time analytics, real time AI, and being able to blend those two systems together, because you've got all of that additional data you can bring to bear on a particular problem. So systems themselves have reached pretty well the limit of human management. One of the great benefits of UniGrid is to have a single metadata layer across all of that data, all of those processes. >> Peter: All those infrastructure elements. >> All those infrastructure elements. >> Peter: And applications. >> And applications themselves. So what that leads to is a huge potential to improve automation of the data center and the application of AI to operations, operational AI. >> So George, it sounds like it's going to be one of the key potential areas where we'll see AI be practically adopted within business. What do we think is going to happen here as we think about the role that AI is going to play in IT operations management? >> Well, if we go back to the analogy with big data, which we thought was going to cure cancer, taste like chocolate and cost a dollar, it turned out that the most widespread application of big data was to offload ETL from expensive data warehouses. And what we expect is that the first widespread application of AI embedded in applications will be for horizontal use, where Neil mentioned Salesforce and the ability to use Einstein with Salesforce data and connected data. Now, the applications we're building are so complex that, as Stu mentioned with this operational model of the true private cloud, it's actually not just the legacy stuff that's sucking up all the admin overhead. It's the complexity of the new applications, and the stringency of the SLAs, that means we would have to turn millions of people into admins, like the old prediction, when the telephone networks started, that everyone was going to have to become an operator. The only way we can get past this is if we apply machine learning to IT Ops and application performance management. The key here is that the models can learn how the infrastructure is laid out and how it operates. 
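A modest sketch of what that learning can feed: fit a model to normal machine telemetry, then score fresh samples. The metrics and all the numbers below are invented, and real AIOps products pair this kind of detector with a learned topology; this fragment does detection only.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Learn "normal" from machine data (cpu %, request latency in ms),
    # then flag the outliers that feed root-cause analysis downstream.
    rng = np.random.default_rng(0)
    normal = np.column_stack([rng.normal(40, 5, 5000),    # cpu %
                              rng.normal(20, 3, 5000)])   # latency ms

    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal)

    fresh = np.array([[42.0, 21.0],     # healthy sample
                      [95.0, 180.0]])   # saturated node, slow responses
    print(detector.predict(fresh))      # 1 = normal, -1 = anomaly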
And they can also learn how all the application services and middleware work, behaving independently and with each other, and how they tie to the infrastructure. The reason that's important is because all of a sudden you can get very high fidelity root cause analysis. With the old management technology, if you had an underlying problem, you'd have a whole storm of alerts, because there was no reliable way to really triangulate on, or triage, the root cause. Now, what's critical is that if you have high fidelity root cause analysis, you can have really precise recommendations for remediation, or automated remediation, which is something that people will get comfortable with over time; that's not going to happen right away. But this is critical. And this is also the first large scale application of not just machine learning but machine data, and so this topology of collecting widely disparate machine data, then applying models and then reconfiguring the software, is training wheels for IoT apps, where you're going to have it far more distributed, and actuating devices instead of software. >> That's great, George. So let me sum up and then we'll take some questions. So very quickly, the action items that we have out of this overall session, and again, we have another 15 or so predictions that we didn't get to today. One is, as we said, digital business is the use of data assets to compete. And so ultimately, this notion is starting to diffuse rapidly. We're seeing it on theCUBE. We're seeing it in the CrowdChats. We're seeing it in the increase of our customers. Ultimately, we believe that users need to start preparing for even more business scrutiny over their technology management. For example, something very simple, and David Floyer, you and I have talked about this extensively in our weekly Action Item research meeting: the idea of backing up and restoring in a digital business world is no longer just backing up and restoring a system or an application; we're talking about restoring the entire business. That's going to require greater business scrutiny over technology management. It's going to lead to new organizational structures, new challenges of adopting systems, et cetera. But ultimately, our observation is that data is going to indicate technology directions across the board, whether we talk about how businesses evolve, or the roles that technology takes in business, or the key digital business capabilities of capturing data, turning it into value and then turning it into work, or whether we talk about how we think about cloud architecture and which organizations of cloud resources we're going to utilize. It all comes back to the role that data's going to play in helping us drive decisions. The last action item we want to put here before we get to the questions is: clients, if we don't get to your question right now, contact us. Send us an inquiry: Support@silicongangle.freshdesk.com. And we'll respond to you as fast as we can over the course of the next day or two, to try to answer your question. All right, David Vellante, you've been collecting some questions here. Why don't we see if we can take a couple of them before we close out. >> Yeah, we've got about five or six minutes. In the chat room, Jim Kobielus has been awesome helping out, and so there are a lot of detailed answers there. The first, there are some questions and comments. The first one was, are there too many chiefs? And I guess, yeah. 
There's some title inflation. I guess my comment there would be: titles are cheap, results aren't. So if you're creating chief X officers just to check a box, you're probably wasting money. So you've got to give them clear roles. But I think each of these chiefs has clear roles, to the extent that they are empowered. Another comment came up, which is: we don't want Hadoop spaghetti soup all over again. Well, true that. Are we at risk of having Hadoop spaghetti soup as the centricity of big data moves from Hadoop to AI and ML and deep learning? >> Well, my answer is we are at risk of that, but there's customer pressure and vendor economic pressure to start consolidating. And we'll also see something we didn't see in the on-prem big data era: the cloud vendors are just going to start making it easier to use some of the key services together. That's just natural. >> And I'll speak for Neil on this one too, very quickly. The idea ultimately is that as the discipline starts to mature, we won't have people who probably aren't really capable of doing some of this data science stuff running around and buying a tool to try to supplement their knowledge and their experience. So, that's going to be another factor that I think ultimately leads to clarity in how we utilize these tools as we move into an AI oriented world. >> Okay, Jim is on mute, so if you wouldn't mind unmuting him. There was a question: is ML a more informative way of describing AI? Jim, when you and I were in our Boston studio, I sort of asked a similar question. AI is sort of the uber category. Machine learning is math. Deep learning is more sophisticated math. You have a detailed answer in the chat, but maybe you can give a brief summary. >> Jim: Sure, sure. I don't want to be too pedantic here, but deep learning is essentially more hierarchical, deeper stacks of neural network layers, able to infer high level abstractions from data, you know, face recognition, sentiment analysis and so forth. Machine learning is the broader phenomenon. That simply spans various approaches for distilling patterns, correlations and algorithms from the data itself. What we've seen in the last five, six years, let's say, is that the neural network approaches to AI have come to the forefront; in fact, they form the core of the marketplace and the state of the art. AI is an ancient paradigm, older than probably you or me, that began as, and for the longest time was, rules based systems, expert systems. Those haven't gone away. The new era of AI we see as a combination of statistical approaches as well as rules based approaches, and possibly even orchestration based approaches like graph models, for building broader context for AI in a variety of applications, especially distributed Edge applications. >> Okay, thank you. And then another question slash comment: AI, like graphics in 1985, will move from a separate category to a core part of all apps. AI infused apps. Again, Jim, you have a very detailed answer in the chat room, but maybe you can give the summary version. >> Jim: Well, quickly now, the most disruptive applications we see across the world, enterprise, consumer and so forth, involve AI. You know, at the heart of it is machine learning, that is, neural networking. I wouldn't say that every single application is doing AI. But the ones that are really blazing the trail in terms of changing the fabric of our lives, most of them have AI at their heart. 
That will continue as the state of the art of AI continues to advance. So really, one of the things we've been saying in our research at Wikibon is that the data scientists, those skills and tools, are the nucleus of the next generation application developer, really in every sphere of our lives. >> Great. A quick comment: we will be sending out these slides to all participants. We'll be posting these slides. So thank you, Kip, for that question. >> And very importantly, Dave, over the course of the next few days, most of our predictions docs will be posted up on Wikibon, and we'll do a summary of everything that we've talked about here. >> So now the questions are coming through fast and furious. But let me just try to rapid fire here, 'cause we only got about a minute left. True private cloud definition. Just say this: we have a detailed definition that we can share, but essentially it's substantially mimicking the public cloud experience on-prem. The way we like to say it is: bringing the cloud operating model to your data, versus trying to force fit your business into the cloud. So we've got detailed definitions there that frankly are evolving. There's a question about PaaS. I think we have a prediction on that in one of our appendices, but maybe a quick word on PaaS. >> Yeah, a very quick word on PaaS is that there's been an enormous amount of effort put on the idea of the PaaS marketplace. Cloud Foundry and others suggested that a PaaS market would evolve because you want to be able to effectively have mobility and migration and portability for these large cloud applications. We're not seeing that happen, necessarily, but what we are seeing is that developers are increasingly becoming a force in dictating and driving cloud decision making, and developers will start biasing their choices to the platforms that demonstrate they have the best developer experience. So whether we call it PaaS or whether we call it something else, providing the best developer experience is going to be really important to the future of the cloud marketplace. >> Okay, great. And then George, George O, George Gilbert, you'll follow up with George O on that other question we need some clarification on. There's a question, really David, I think it's for you: will persistent DIMMs emerge first on public clouds? >> Almost certainly. Public clouds are where everything is going first, and when we talk about UniGrid, that's where it's going first. The NVMe over fabrics architecture is going to be in public clouds, and it has the same sort of benefits there. And NVDIMMs will again develop pretty rapidly as a part of the NVMe over fabrics. >> Okay, we're out of time. We'll look through the chat and follow up with any other questions. Peter, back to you. >> Great, thanks very much Dave. So once again, we want to thank everybody here who has participated in the webinar today. I apologize; I feel like Han Solo saying it wasn't my fault. But having said that, nonetheless, I apologize to Neil Raden and everybody who had to deal with us finding and unmuting people, but we hope you got a lot out of today's conversation. Look for those additional pieces of research on Wikibon that pertain to the specific predictions on each of these different things that we're talking about. 
And by all means, Support@silicongangle.freshdesk.com if you have an additional question, and we will follow up with as many as we can from the significant list that's starting to queue up. So thank you very much. This closes out our webinar. We appreciate your time. We look forward to working with you more in 2018. (upbeat music)

Published Date : Dec 16 2017

Jon Siegal, Dell EMC | HCI: A Foundation For IT Transformation


 

>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hi, Dave Vellante here, with Jon Siegal, Vice President of product marketing at Dell EMC. Jon, what does it mean to be a leader in hyperconverged infrastructure? >> First of all, thanks for asking. It's been quite a year, 2017, for us. Just this past quarter, we became the leader, Dell EMC did, the number one leader in hyperconverged infrastructure, and we want to thank, certainly, our customers out there. We think it was also due to the fact that we have a full portfolio of HCI, and really strong partnerships with folks like VMware. >> OK, so, how about workload progression? VDI was really sort of the initial sweet spot, it's true, of hyperconverged. Has it evolved, and how has it evolved? >> It has evolved quite a bit, really. I think over the past couple of years we've seen it evolve from HCI really addressing, like you said, VDI workloads, small consolidation-type projects, test and dev, really, to a majority of virtualized workloads in the data center. In fact, with the announcement this week, with the support now of 14th generation PowerEdge servers, we think we've taken it to another level, where, because of 14th generation PowerEdge servers, we now have the ability to provide the power, if you will: the performance, and the predictable performance in particular, that mission-critical workloads require in the data center. >> OK, so we've ticked the performance box. What about the economics piece? How is hyperconverged infrastructure helping IT operations lower cost? >> Ya know, I think that's one of the main reasons that HCI crossed the chasm in the past year: it's become a no-brainer from an economics perspective. As customers look to transform IT and move away from traditional IT, the TCO advantage relative to traditional IT is 30-40%. I think Wikibon's done a number of studies this year, as well. I mean, you name it, across the board. So, it's really become a no-brainer there. And it's also become very compelling relative to public cloud, as well: the on-prem model. So, whether you look at traditional IT or you look at public cloud, I think what we're finding now is that true private cloud, built on, if you will, an HCI portfolio, is becoming a compelling way for customers to transform their data center, and to build on top of that cloud-operating model. >> OK, so speaking of public cloud, what's Dell EMC's point of view on cloud generally? >> So, our view is that the cloud is an operating model. It's not a place. So, really, what it's all about is providing that turn-key, self service-type experience, regardless of where your data is, if you will. Whether it's off-prem, whether it's on-prem, I mean, clearly, we don't have a strong opinion on that, other than that we want to make the on-prem experience as cloud-like as possible, and we think that starts with a critical foundation of HCI. >> OK, Jon. You mentioned PowerEdge servers before; a lot of people say it's just servers, it's a commodity. What say you? >> I'll tell you what. So, first of all, HCI is defined by software, right? And I think we've talked about this in the past, but it's really the combination of software with hardware that delivers that turn-key outcome that customers expect when it comes to hyperconverged infrastructure. 
And this announcement is really about that combination of software and hardware, and the hardware, in particular, is the star of the show. It's 14th generation PowerEdge servers. What this brings to the table is powerful, predictable performance, first and foremost: the ability, now, to support mission-critical workloads. This is something that we haven't really had the ability to do before. It can now support mission-critical workloads in the data center, first and foremost. So, it's powerful from that perspective. It's purposeful, in that it can now support any configuration. We actually can support up to 20 million different configurations, I'm not kidding here, when it comes to PowerEdge configurations with VxRail, as an example. And PowerEdge 14th generation servers are actually purpose-built for HCI. They're addressing over 150 different customer requirements out there, from performance, to reliability, to manageability, to deployment, because, typically, a commodity server's really built as a compute engine. Instead, what PowerEdge servers are about, the 14th generation ones, is that they're really, literally, custom-built for HCI, and that's why we think this is going to help take HCI to a whole new level, and allow customers to now start to deploy HCI across their data center to build that foundation for the cloud. >> Excellent. I think you nailed it. To give you the last word, just maybe summarize the announcement, final thoughts, HCI, wherever you want to go. >> I'll tell you what. I mean, we're just so excited. I think HCI has, as I said, become the foundation for the cloud. And we've got a full portfolio. We give customers choice. You know, regardless of the type of use case they have, regardless of the type of workload they have, we have an HCI answer for our customers. Some customers, for example, want to start small and grow with appliances; others want to actually transform their network, as well. So, we have VxRack, as an example, there, for customers that want to transform more of the stack. We're excited to have that as an option for customers, too. So really, across the board, we're providing anything from Ready Nodes, where customers can do a little more of the work themselves, to appliances like VxRail and the XC Series, where it's a turnkey experience across the server, the compute and storage, all the way up to VxRack, where we're making the entire data center, if you will, turnkey, as a foundation for that cloud-operating model. >> OK, awesome. Let's see, I lied. Last word is mine. CrowdChat on December 1, where it's kind of an ask-me-anything on the announcement. >> Ask me, ask Chad, ask whoever anything. >> Great, and then where do people go to get more information? >> Dellemc.com/HCI. We keep it simple, my friend. >> That's great. Jon, thanks very much. Appreciate ya comin'. All right, thanks for watching, everybody. We'll see ya next time. (upbeat music)

Published Date : Nov 9 2017

John Shirley, Dell EMC | HCI: A Foundation For IT Transformation


 

>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Prior to the historic merger between Dell and EMC, Dell had a relationship with a company called Nutanix. Nutanix was a pioneer in so-called hyperconverged infrastructure, and a lot of people questioned whether that relationship would continue after the merger. Hi, everybody, I'm Dave Vellante, and I'm here with John Shirley, who's the Director of Product Management at Dell EMC, and we're going to talk about that. Welcome, John. >> Thank you, thanks for having me. >> So, the XC Series: you're continuing the innovation there. Tell us about what you're announcing today. >> Yeah, so this is our third generation, the third generation of the XC Series, and what we're announcing is that our most popular models are available now. The most popular models are the XC640, which is more of a compute intensive node that will be targeted at VDI, compute intensive remote offices, things like that. And we're also announcing the XC740XD, which is more for storage intensive and performance applications. Think big data, SharePoint, Exchange, those kinds of things. >> Okay, so we're seeing the evolution of the workloads that can be supported by hyperconverged infrastructure. And this is more evidence, right? >> Absolutely, and to that point, where we started off, we saw a lot of VDI deployments, but now very quickly, once those companies adopt the technology, they're growing it to more mainstream workloads. >> Okay, so I see this term, marketing gurus at Dell EMC throw around this term, purposeful. Okay, let's put some meat on the bone. What does that mean? >> I love the term because it really helps describe what we do, right. This isn't just taking SDS offerings, in this case Nutanix, throwing them on some PowerEdge and validating. Those are really core, important steps, but we go above and beyond that, so purposeful really is kind of an end-to-end view of what the solution is. So it's things all the way from configuration to manufacturing and supportability. Things like processor choices, SSD selection, memory types, you can kind of go down the list, and we've really designed this purposefully for the HCI market. >> Okay, so Dell, of course, was the first to do an OEM relationship with Nutanix; there are others. Can you talk about your differentiation? What's special about Dell EMC and Nutanix? >> Yes, so you know, I'll go back to the three points that I had before. You have a server, you have SDS solutions, and you do some validation around it. Very important steps. We really feel that we have the strongest server in the world, and so that's point number one for us. Nutanix, great partnership there. And then the validation steps, which we have a very strong engineering team to go after now. If we take that a step further, Dell has created some software, some IP, that really helps glue everything together. We call it the Power Tools SDK. And that's really years' worth of experience working with SDS solutions, so that we know how to integrate into the server and really load that software on top of it, so we can do things like life cycle management, we can have recovery options, and there's a whole list of options that are available with the Power Tools SDK. So that's one of them. And the final one is we're Dell EMC, and the great part about being this new company is that we have this great, great portfolio of technologies. 
So it's things like integration with data protection, right. Now that we have Avamar and Data Domain, we have the ability to create new products. In fact, that's one of the new things that we have as well. We are announcing a new data protection solution that takes the Avamar software and Data Domain, and we're integrating that right into the Prism interface. So if you listen to Nutanix, they say one-click simplicity; well, we're introducing one-click back-up, one-click back-up automation, into the portfolio. >> I love that, because a lot of times back-up is an afterthought. You know, oh, I got this new infrastructure, how am I going to back up the data? Okay, let's bolt this on. So let me ask you a follow up to that. Sometimes when you're two companies it's hard to do that type of engineering; can you talk about Dell EMC as one company, and how the engineering culture and results, the outcomes, have improved or changed? >> Yeah, absolutely. So, I'm not just going to focus on engineering, because I really want to take a look at the entire organization. It goes all the way from engineering, marketing, product management, sales, it's that whole ecosystem. You can even talk about the support organization, the quality, and we really have a tight relationship between Nutanix and the Dell EMC counterparts. To give you a good example, I talk with my product management counterparts and I talk with the sales leaders on a nearly daily basis, and we want to make sure that relationship is really strong and that we evolve the relationship over time. >> Can we talk a little bit about scalability? We talked earlier at the top about workloads. VDI was very popular, remote office was kind of a sweet spot of hyperconverged in the early days. It's evolved, but scalability has always been a question. Where are we at with regards to scalability of hyperconverged infrastructure? >> That's a great question. So, HCI came from the big cloud providers, and that technology was really meant to bring the tenets of what we saw with the scale of cloud providers into mainstream data centers. And so to that end, scalability is a core attribute. I'll give you a good example here: when the 14th generation of the XC Series comes out, we'll be able to plug that into customers' existing ecosystems. So let's say a customer has a 12th generation or a 13th generation PowerEdge XC Series; we can now plug that technology right into the same cluster. And if you talk about reusing technology, integrating technology into the data center, really providing great value, and making sure customers don't have to throw away, say, older or medium term technology like the 13th generation, now they can just use the new technology right in place with the existing. >> John, can you talk about the portfolio a little bit? I mean, you guys got one of everything. If I want it, you probably have it. But a lot of times that gets confusing for customers and partners, probably sales reps. Where do the XC Series and these new announcements fit in the portfolio relative to some of the other things you are announcing? >> We get this question all the time. In my mind, it's really clear. For customers who have standardized on VMware, we have VxRail. For customers who want, say, a choice of hypervisor, or for customers who have already standardized on Nutanix software, we have the XC Series. So there's absolutely room for both. 
We know the market is really big and it's growing fast, and we have options for customers now, whether they want to run on VMware or they want to run on, say, Hyper-V as a good example.

>> Let's see, when can I get this stuff? Can I buy it today or soon?

>> It's available now, it's available now. And we have customers who are anxiously waiting because they want the new technologies on these platforms. So it's available and shipping now.

>> Excellent. All right, we've got to break, but I'll give you the last word. Key takeaways, you know, what should we be thinking about with this announcement, with the partnership?

>> Absolutely. I think the key thing here is that the partnership is still going strong, and we really feel the best way to consume Nutanix software is on the XC Series, in combination with Dell EMC, really getting the best of both worlds: out of the Nutanix relationship and out of the Dell relationship.

>> Excellent, right, we've got to go, but there's a CrowdChat coming up, #NextGenHCI, at CrowdChat.net/NextGenHCI on December 1st. Where can I get more information about these products?

>> Go to DellEMC.com/HCI.

>> Simple. All right, John, thanks very much for coming to theCUBE. Appreciate it. Thanks for watching, everybody. This is Dave Vellante, we'll see you next time. (upbeat music)

Published Date: Nov 9, 2017

