
Dell EMC: Get Ready For AI



(bright orchestra music) >> Hi, I'm Peter Burris. Welcome to a special digital community event brought to you by Wikibon and theCUBE. Sponsored by Dell EMC. Today we're gonna spend quite some time talking about some of the trends in the relationship between hardware and AI. Specifically, we're seeing a number of companies doing some masterful work incorporating new technologies to simplify the infrastructure required to take full advantage of AI options and possibilities. Now at the end of this conversation, series of conversations, we're gonna run a CrowdChat, which will be your opportunity to engage your peers and engage thought leaders from Dell EMC and from Wikibon SiliconANGLE and have a broader conversation about what does it mean to be better at doing AI, more successful, improving time to value, et cetera. So wait 'til the very end for that. Alright, let's get it kicked off. Tom Burns is my first guest. And he is the Senior Vice President and General Manager of Networking Solutions at Dell EMC. Tom, it's great to have you back again. Welcome back to theCUBE. >> Thank you very much. It's great to be here. >> So Tom, this is gonna be a very, very exciting conversation we're gonna have. And it's gonna be about AI. So when you go out and talk to customers specifically, what are you hearing then as they describe their needs, their wants, their aspirations as they pertain to AI? >> Yeah, Pete, we've always been looking at this as this whole digital transformation. Some studies say that about 70% of enterprises today are looking how to take advantage of the digital transformation that's occurring. In fact, you're probably familiar with the Dell 2030 Survey, where we went out and talked to about 400 different companies of very different sizes. And they're looking at all these connected devices and edge computing and all the various changes that are happening from a technology standpoint, and certainly AI is one of the hottest areas. 
There's a report I think that was co-sponsored by ServiceNow. Over 62% of the CIOs in the Fortune 500 are looking at AI as far as managing their business in the future. And it's really about user outcomes. It's about how do they improve their businesses, their operations, their processes, their decision-making using the capability of compute coming down from a cost perspective and the number of connected devices exploding, bringing more and more data to their companies that they can use, analyze, and put to use cases that really make a difference in their business. >> They make a difference in their business, but these use cases are also often a lot more complex. We have this little bromide that we use: the first 50 years of computing were about known process, unknown technology. We're now entering into an era where we know a little bit more about the technology. It's gonna be cloud-like, but we don't know what the processes are, because we're engaging directly with customers or partners in much more complex domains. That suggests a lot of things. How are customers dealing with that new level of complexity and where are they looking to simplify?
And then you've got the IT Departments that are very frustrated, because they don't know how to manage all this infrastructure. So the questions around do I go to the cloud? Do I remain on-prem? All of these are things that our customers are continuing to be challenged with. >> Now, the ideal would be that you can have a cloud experience but have the data reside where it most naturally resides, given physics, given the cost, given bandwidth limitations, given regulatory regimes, et cetera. So how are you at Dell EMC helping to provide that sense of an experience based on what the workload is and where the data resides, as opposed to some other set of infrastructure choices? >> Well, that's the exciting part: we're getting ready to announce a new solution called the Ready Solutions for AI. And what we've been doing is working with our customers over the last several years looking at these challenges around infrastructure, the data analytics, the connected devices, but giving them an experience that's real-time. Not letting them worry about how am I gonna set this up or management and so forth. So we're introducing the Ready Solutions for AI, which really focuses on three things. One is to simplify the AI process. The second thing is to ensure that we give them deep and real-time analytics. And lastly, provide them the level of expertise that they need in a partner in order to make those tools useful and that information useful to their business. >> Now we want to not only provide AI to the business, but we also wanna start utilizing some of these advanced technologies directly in the infrastructure elements themselves to make it more simple. Is that a big feature of what the Ready Solutions for AI is? >> Absolutely, as I said, one of the key value propositions is around making AI simple. We are experts at building infrastructure. We have IP around compute, storage, networking, InfiniBand.
The things that are capable of putting this infrastructure together. So we have tested that based upon customers' input, using traditional data analytics libraries and tool sets that the data scientists are gonna use, already pre-tested and certified. And then we're bringing this to them in a way which allows them, through a service provisioning portal, to basically set up and get to work much faster. With the previous tools that were available out there, some from our competition, there were 15, 20, 25 different steps just to log on, just to get enough automation or enough capability in order to get the information that they need. Getting the infrastructure allocated for this big data analytics through this service portal, we've actually gotten it down to around five clicks with a very user-friendly GUI, no CLI required. And basically, again, they're interacting with the tools that they're used to immediately, right out of the gate, like in stage three. And then getting them to work in stage four and stage five so that they're not worried about the infrastructure, not worried about capacity, or is it gonna work. They basically are one, two, three, four clicks away, and they're up and working on the analytics that everyone wants them to work on. And heaven knows, these guys are not cheap.
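Burns doesn't spell out what sits behind the portal, but the self-service flow he describes — pick a pre-validated stack, request capacity, get a working environment in a few clicks — can be sketched in outline. Everything below (names, GPU limits, the returned endpoint) is invented for illustration; the actual portal API is not shown in this conversation.

```python
from dataclasses import dataclass

# Hypothetical sketch of a self-service provisioning request, loosely
# modeled on the portal flow described above. All names and limits are
# invented; this is not the Dell EMC portal's real interface.

@dataclass
class ProvisionRequest:
    user: str
    gpus: int
    framework: str  # must be one of the pre-tested stacks

VALIDATED_STACKS = {"tensorflow", "pytorch", "caffe2"}  # illustrative
CLUSTER_GPUS = 16                                       # illustrative capacity

def provision(req: ProvisionRequest, gpus_in_use: int) -> dict:
    """Validate a request against pre-tested stacks and free capacity."""
    if req.framework not in VALIDATED_STACKS:
        return {"status": "rejected", "reason": "untested framework"}
    if gpus_in_use + req.gpus > CLUSTER_GPUS:
        return {"status": "queued", "reason": "insufficient free GPUs"}
    return {"status": "ready", "endpoint": f"jupyter://cluster/{req.user}"}

print(provision(ProvisionRequest("alice", 4, "tensorflow"), gpus_in_use=10))
```

The point of the sketch is the shape of the workflow: the data scientist states what they need, and validation against a known-good catalog replaces the "15, 20, 25 different steps" of manual setup.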
We believe that the silos of servers, storage, and networking are being eliminated, that companies want a platform on which they can enable those capabilities. So you're absolutely right. The part about the simplicity, or simplifying the AI process, is really giving the data scientists the tools they need to provision the infrastructure they need very quickly. >> And so that means that the AI, or rather the IT group, can actually start acting more like a DevOps organization as opposed to a specialist in one or another technology. >> Correct, but we've also given them the capability by giving them the usual automation and configuration tools that they're used to, coming from some of our software partners, such as Cloudera. So in other words, you still want the IT Department involved, making sure that the infrastructure is meeting the requirements of the users. They're giving them what they want, but we're simplifying the tools and processes from the IT standpoint as well. >> Now we've done a lot of research into what happened in the big data world and what's likely to happen in the AI world. And a lot of the problems that companies had with big data was they conflated or they confused the objectives, the outcome of a big data project, with just getting the infrastructure to work. And they walked away often, because they failed to get the infrastructure to work. So it sounds like what you're doing is you're trying to take the infrastructure out of the equation while at the same time going back to the customer and saying, "Wherever you want this job to run or this workload to run, you're gonna get the same experience regardless." >> Correct, but we're gonna get an improved experience as well. Because of the products that we've put together in this particular solution, combined with our compute, our scale-out NAS solution from a storage perspective, our partnership with Mellanox InfiniBand or Ethernet switch capability.
We're gonna give them deeper insights and faster insights. The performance and scalability of this particular platform is tremendous. We believe, in certain benchmark studies based upon the ResNet-50 benchmark, we've performed anywhere from two and a half to almost three times faster than the competition. In addition, from a storage standpoint, all of these workloads, all of the various characteristics that happen, you need a ton of IOPS. >> Yeah. >> And there's no one in the industry that has the IOPS performance that we have with our All-Flash Isilon product. The capabilities that we have there we believe are somewhere around nine times that of the competition. Again, the scale-out performance while simplifying the overall architecture. >> Tom Burns, Senior Vice President of Networking and Solutions at Dell EMC. Thanks for being on theCUBE. >> Thank you very much. >> So there's some great points there about this new class of technology that dramatically simplifies how hardware can be deployed to improve the overall productivity and performance of AI solutions. But let's take a look at a product demo. >> Every week, more customers are telling us they know AI is possible for them, but they don't know where to start. Much of the recent progress in AI has been fueled by open source software. So it's tempting to think that do-it-yourself is the right way to go. Get some how-to references from the web and start building out your own distributed deep-learning platform. But it takes a lot of time and effort to create an enterprise-class AI platform with automation for deployment, management, and monitoring. There is no easy solution for that. Until now. Instead of putting the burden of do-it-yourself on your already limited staff, consider Dell EMC Ready Solutions for AI. Ready Solutions are complete software and hardware stacks pre-tested and validated with the most popular open source AI frameworks and libraries.
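For readers who want to see what a ResNet-50 claim like the one Burns cites actually measures: deep-learning training benchmarks of this kind boil down to images processed per second, and a speedup ratio against a baseline. A minimal sketch, with made-up numbers rather than Dell EMC's measurements:

```python
# How ResNet-50 style benchmark results are typically reported:
# throughput in images/second, and speedup relative to a baseline.
# All numbers below are illustrative, not measured results.

def throughput(images: int, seconds: float) -> float:
    """Images processed per second over a timed run."""
    return images / seconds

def speedup(candidate_ips: float, baseline_ips: float) -> float:
    """How many times faster the candidate is than the baseline."""
    return candidate_ips / baseline_ips

baseline = throughput(32_000, 40.0)   # hypothetical competitor run
candidate = throughput(32_000, 16.0)  # hypothetical faster run
print(f"{speedup(candidate, baseline):.1f}x")  # prints "2.5x"
```

A "two and a half to three times faster" claim is a statement about this ratio at a fixed model, dataset, and batch size; comparisons are only meaningful when those are held constant.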
Our professional services with proven AI expertise will have the solution up and running in days and ready for data scientists to start working in weeks. Data scientists will find the Dell EMC data science provisioning portal a welcome change for managing their own hardware and software environments. The portal lets data scientists acquire hardware resources from the cluster and customize their software environment with packages and libraries tested for compatibility with all dependencies. Data scientists choose between JupyterHub notebooks for interactive work and terminal sessions for large-scale neural networks. These neural networks run across a high-performance cluster of PowerEdge servers with Intel Xeon Scalable processors and scale-out Isilon storage that delivers up to 18 times the throughput of its closest all-flash competitor. IT pros will experience that AI is simplified as Bright Cluster Manager monitors your cluster for configuration drift down to the server BIOS, using exclusive integration with Dell EMC's OpenManage APIs for PowerEdge. This solution provides comprehensive metrics along with automatic health checks that keep an eye on the cluster and will alert you when there's trouble. Ready Solutions for AI are the only platforms that keep both data center professionals and data scientists productive and getting along. IT operations are simplified, and that produces a more consistent experience for everyone. Data scientists get a customizable, high-performance, deep-learning service experience that can eliminate monthly charges spent on public cloud while keeping your data under your control. (upbeat guitar music) >> It's always great to see the product videos, but Tom Burns mentioned something earlier. He talked about the expansive expertise that Dell EMC has in bringing together advanced hardware and advanced software into more simple solutions that can liberate business value for customers, especially around AI.
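The configuration-drift monitoring the demo mentions comes down to a simple idea: compare each node's reported settings against a golden baseline and flag the differences. Bright Cluster Manager and OpenManage do this through their own APIs; the sketch below only illustrates the concept, with invented setting names.

```python
# Illustrative sketch of configuration-drift detection: diff a node's
# reported BIOS settings against a golden baseline. Setting names are
# invented; real tools query these values via management APIs.

BASELINE = {"SysProfile": "PerfOptimized", "LogicalProc": "Enabled"}

def drift(node_settings: dict) -> dict:
    """Return {setting: (expected, actual)} for every drifted key."""
    return {
        key: (expected, node_settings.get(key))
        for key, expected in BASELINE.items()
        if node_settings.get(key) != expected
    }

node = {"SysProfile": "PerfOptimized", "LogicalProc": "Disabled"}
print(drift(node))  # flags the one setting that drifted
```

Run per node on a schedule, a report like this is what feeds the "alert you when there's trouble" behavior described above.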
And so to really test that out, we sent Jeff Frick, who's the general manager and host of theCUBE, down to the bowels of Dell EMC's operations in Austin, Texas. Jeff went and visited the Dell EMC HPC and AI Innovation Lab and met with Garima Kochhar, who's a technical staff Senior Principal Engineer. Let's hear what Jeff learned. >> We're excited to have with us our next guest. She's Garima Kochhar. She's on the technical staff and a Senior Principal Engineer at Dell EMC. Welcome. >> Thank you. >> From your perspective, what kinda changes are you seeing in the landscape from high-performance computing, which has been around for a long time, into more of the AI and machine learning and deep learning and stuff we hear about much more in a business context today? >> High-performance computing has applicability across a broad range of industries. So not just national labs and supercomputers, but the commercial space as well. And our lab, we've done a lot of that work in the last several years. And then the deep learning algorithms, those have also been around for decades. But what we are finding right now is that the algorithms and the hardware, the technologies available, have hit that perfect point, along with industries' interest and the amount of data we have, to make it more, what we would call, mainstream. >> So you can build an optimum solution, but ultimately you wanna build industry solutions. And then even a subset of that, you invite customers in to optimize for what their particular workflow or their particular business case, which may not match the perfect benchmark spec at all, right? >> That's exactly right. And so that's the reason this lab is set up for customer access, because we do the standard benchmarking. But you want to see what is my experience with this, how does my code work? And it allows us to learn from our customers, of course.
And it allows them to get comfortable with these technologies, to work directly with the engineers and the experts so that we can be their true partners and trusted advisors and help them advance their research, their science, their business goals. >> Right. So you guys built the whole rack out, right? Not just the fun shiny new toys. >> Yeah, you're right. So typically, when something fails, it fails spectacularly. Right, so I'm sure you've heard horror stories where there was equipment on the dock and it wouldn't fit in the elevator or things like that, right? So there are lots of other teams that handle, of course Dell's really good at this, the logistics piece of it, but even within the lab. When you walk around the lab, you'll see our racks are set up with power meters. So we do power measurements. Whatever best practices in tuning we come up with, we feed that into our factories. So if you buy a solution, say targeted for HPC, it will come with different BIOS tuning options than a regular, say Oracle, database workload. We have this integration into our software deployment methods. So when you have racks and racks of equipment or one rack of equipment or maybe even three servers, and you're doing an installation, all the pieces are baked-in already and everything is easy, seamless, easy to operate. So our idea is... The more that we can do in building integrated solutions that are simple to use and performant, the less time our customers and their technical computing and IT Departments have to spend worrying about the equipment and they can focus on their unique and specific use case. >> Right, you guys have a services arm as well. >> Well, we're an engineering lab, which is why it's really messy, right? Like if you look at the racks, if you look at the work we do, we're a working lab. We're an engineering lab. We're a product development lab. And of course, we have a support arm. We have a services arm. And sometimes we're working with new technologies.
We conduct training in the lab for our services and support people, but we're an engineering organization. And so when customers come into the lab and work with us, they work with it from an engineering point of view, not from a pre-sales point of view or a services point of view. >> Right, kinda what's the benefit of having the experience in this broader set of applications that you can apply to some of the newer, more exciting things around AI, machine learning, deep learning? >> Right, so the fact that we are a shared lab, right? Like the bulk of this lab is High Performance Computing and AI, but there's lots of other technologies and solutions we work on over here. And there's other labs in the building that we have colleagues in as well. The first thing is that the technology building blocks for several of these solutions are similar, right? So when you're looking at storage arrays, when you're looking at Linux kernels, when you're looking at network cards, or solid state drives, or NVMe, several of the building block technologies are similar. And so when we find interoperability issues, which you would think that there would never be any problems, you throw all these things together, they always work like-- >> (laughs) Of course (laughs). >> Right, so when you sometimes, rarely, find an interoperability issue, that issue can affect multiple solutions. And so we share those best practices, because we engineers sit next to each other and we discuss things with each other. We're part of the larger organization. Similarly, when you find tuning options and nuances and parameters for performance or for energy efficiency, those also apply across different domains. So while you might think of Oracle as something that's been done for years, with every iteration of technology there's new learning, and that applies broadly across anybody using enterprise infrastructure.
What are some of the things that you see, like, "I'm so excited that we can now apply "this horsepower to some of these problems out there?" >> Right, so that's a really good point, right? Because most of the time when you're trying to describe what you do, it's hard to make everybody understand. Well, not what you're doing, right? But sometimes with deep technology it's hard to explain what's the actual value of this. And so a lot of work we're doing in terms of excess scale, it's to grow like the... Human body of knowledge forward, to grow the science happening in each country moving that forward. And that's kind of, at the higher end when you talk about national labs and defense and everybody understands that needs to be done. But when you find that your social media is doing some face recognition, everybody experiences that and everybody sees that. And when you're trying to describe the, we're all talking about driverless cars or we're all talking about, "Oh, it took me so long, "because I had this insurance claim and then I had "to get an appointment with the appraisor "and they had to come in." I mean, those are actual real-world use cases where some of these technologies are going to apply. So even industries where you didn't think of them as being leading-edge on the technical forefront in terms of IT infrastructure and digital transformation, in every one of these places you're going to have an impact of what you do. >> Right. >> Whether it's drug discovery, right? Or whether it's next-generation gene sequencing or whether it's designing the next car, like pick your favorite car, or when you're flying in an aircraft the engineers who were designing the engine and the blades and the rotors for that craft were using technologies that you worked with. And so now it's everywhere, everywhere you go. We talked about 5G and IoT and edge computing. >> Right. >> I mean, we all work on this collectively. >> Right. >> So it's our world. >> Right. 
Okay, so last question before I let you go. Just about having the resources to bear, in terms of being in your position, to do the work, now that you've got massive resources behind you. You have Dell, the merger with EMC, all the subset brands, Isilon, so many brands. How does that help you do your job better? What does that let you do here in this lab that probably a lot of other people can't do? >> Yeah, exactly. So when you're building complex solutions, there's no one company that makes every single piece of it, but the tighter that things work together, the better that they work together. And that's directly through all the technologies that we have under the Dell Technologies umbrella and with Dell EMC. And that's because of our super close relationships with our partners, which allows us to build these solutions that are painless for our customers and our users. And so that's the advantage we bring. >> Alright. >> This lab and our company. >> Alright, Garima. Well, thank you for taking a few minutes. Your passion shines through. (laughs) >> Thank you. >> I really liked hearing about what Dell EMC's doing in their innovation labs down at Austin, Texas, but it all comes together for the customer. And so the last segment that we wanna bring you here is a great segment. Nick Curcuru, who's the Vice President of Big Data Analytics at Mastercard, is here to talk about how some of these technologies are coming together to speed value and realize the potential of AI at Mastercard. Nick, welcome to theCUBE.
>> There's a lot that's going on with Mastercard, but I think the most exciting things that we're doing out of Mastercard right now are with artificial intelligence and how we're bringing the ability for artificial intelligence to really allow a seamless transaction when someone's actually doing a transaction, and also bringing a level of security to our customers and our banks and the people that use Mastercards. >> So AI to improve engagement, provide a better experience, but that's a pretty broad range of things. What specifically, when you think about how AI can be applied, what are you looking to? Especially early on. >> Well, let's actually take a look at our core business, which is being able to make sure that we can secure a payment, right? So at this particular point, people are used to, we're applying AI to biometrics. But not just a fingerprint or a facial recognition, but actually how you interact with your device. So you think of like the Internet of Things, and you're sitting back saying, "I'm swiping my device, my mobile device, or how I interact with a keyboard." Those are all key signatures. And we, with NuData, a company that we've just acquired, are taking that capability to create a profile and make that a part of your signature. So it's not just beyond a fingerprint. It's not just beyond a facial. It's actually how you're interacting, so that we know it's you. >> So there's a lot of different potential sources of information that you can utilize, but AI is still a relatively young technology and practice. And one of the big issues for a lot of our clients is how do you get time to value? So take us through, if you would, a little bit about some of the challenges that Mastercard, and anybody, would face to try to get to that time to value.
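The behavioral-biometric signature described above — profiling how a user types or swipes rather than what they type — can be illustrated with a toy keystroke-dynamics model. Real systems are far more sophisticated; the features, timings, and tolerance below are invented for illustration.

```python
# Toy sketch of keystroke dynamics: build a profile from key hold
# (dwell) and gap (flight) times, then score a new session against it.
# Features and the tolerance are illustrative, not a real product's.

from statistics import mean

def features(events):
    """events: [(key, press_time, release_time), ...] -> (dwell, flight)."""
    dwells = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return mean(dwells), mean(flights)

def matches(profile, session, tolerance=0.05):
    """True if the session's timing features fall near the stored profile."""
    return all(abs(p - s) <= tolerance for p, s in zip(profile, session))

profile = features([("m", 0.00, 0.08), ("c", 0.15, 0.24), ("x", 0.30, 0.39)])
session = features([("m", 0.00, 0.09), ("c", 0.16, 0.24), ("x", 0.31, 0.39)])
print(matches(profile, session))  # timings are close -> True
```

The key property is that the signal is continuous and passive: the user authenticates by behaving normally, with no extra friction in the transaction.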
Well, what you're really looking for is a good partner to be with when you're doing artificial intelligence, because again, at that particular point, you try to get to scale. For us, it's always about scale. How can we roll this across 220 countries? We run 165 million transactions per hour, right? So what we're looking for is a partner who also has that ability to scale. A partner who has the global presence, who's learning. So that's the first step. That's gonna help you with your time to value. The other part is actually sitting back and using those particular partners to bring the expertise that they're learning to combine with yours. It's no longer just silos. So when we talk about artificial intelligence, how can we be learning from each other? Those open source systems that are out there, how do we learn from that community? It's that community that allows you to get there. Again, those that are trying to do it on their own, trying to do it by themselves, they're not gonna get to the point where they need to be. In other words, instead of a six-month time to value, it's gonna take them years. We're trying to accelerate that, so you say, "How can we get all of those algorithms operating for us the way we need them, to provide the experiences that people want, quickly?" And that's with good partners. >> 165 million transactions per hour is only likely to go up over the course of the next few years. That creates an operational challenge. AI is associated with a probabilistic set of behaviors as opposed to categorical. A little bit more difficult to test, a little bit more difficult to verify. How is the introduction of some of these AI technologies impacting the way you think about operations at Mastercard? >> Well, for the operations, when you actually take a look, there's three components, right? There's right there on the edge, so when someone's interacting and actually doing the transaction, and then we'll look at it as we have a core.
So that core sits there, right? Basically, that's where you're learning, right? And then there's actually, what we call, the deep learning component of it. So for us, it's how can we move what we need to have in the core and what we need to have on the edge? So the question for us always is, we want that algorithm to be smart. So what three to four things do we need that algorithm to be looking for, within what that artificial intelligence needs to know, before it then goes back into the core and retrieves something, whether that's your fingerprint, your biometrics, how you're interacting with that machine, to say, "Yes, that's you. Yes, we want that transaction to go through." Or, "No, stop it before it even begins." It's that interaction and operational basis that we always have a dynamic tension with, but it's how we get from the edge to the core. And it's understanding what we need it to do. So we're breaking apart where we have to have that intelligence to be able to create a decision for us. So that's how we're trying to manage it, as well as, of course, the hardware that goes with it and the tools that we need in order to make that happen. >> Let's get on the hardware just a little bit. Historically, different applications put pressure on different components within a stack. One of the observations that we've made is that the transition from spinning disk to flash allows companies like Mastercard to think about moving from just persisting data to actually delivering data. >> Yeah. >> Much more rapidly. How do these AI technologies, what kinda new pressures do they put on storage?
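The edge/core split Curcuru describes — a handful of cheap features scored at the edge, with ambiguous cases escalated to deeper models in the core — might be sketched like this. The feature names, weights, and thresholds are hypothetical, not Mastercard's actual decisioning logic.

```python
# Hedged sketch of edge/core decisioning: a tiny, fast model at the
# edge approves or declines clear-cut transactions and escalates the
# ambiguous middle to the core. All weights/thresholds are invented.

def edge_score(txn: dict) -> str:
    """Fast path: decide approve/decline locally, else escalate."""
    risk = 0.0
    if txn["amount"] > 1000:   # feature 1: unusually large amount
        risk += 0.4
    if txn["new_device"]:      # feature 2: unrecognized device
        risk += 0.3
    if txn["geo_mismatch"]:    # feature 3: location vs. history
        risk += 0.3
    if risk < 0.3:
        return "approve"
    if risk >= 0.7:
        return "decline"
    return "escalate_to_core"  # the deep model in the core decides

txn = {"amount": 1500, "new_device": False, "geo_mismatch": False}
print(edge_score(txn))  # one risky feature -> "escalate_to_core"
```

The design choice is latency: the edge must answer in milliseconds for the common case, while the core can afford heavier models, retraining, and retrieval of stored profiles.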
It's like no, flash has, there's a reason for it to exist, and understanding what that reason is and understanding, "Hey, I need that to be able to do this in sub-seconds, nanoseconds," I've heard the term before. That's what you're asking flash to do. When you want deep learning, that, I want it on disk. I want to be taking all those millions and billions of transactions that we're gonna see and learn from them. All the ways that people will be trying to attack me, right? The bad guys, how am I learning from everything that I'm having, that can sit there on disk and let it continue to run? That's the deep learning. The flash is when I wanna create a seamless transaction with a customer, or a consumer, or from a business to a business. I need to have that decision now. I need to know it is you who is trying to swipe or purchase something with my mobile device or through, basically, the Internet. Or how am I actually even swiping or inserting, dipping my card in that particular machine at a merchant. That's where we're looking at how we use flash. >> So you're looking at perhaps using older technologies or different classes of technologies for some of the training elements, but really moving to flash for the interfacing piece, where you gotta deliver the real-time effort right now. >> And that's the experience. And that's what you're looking for. And that's, you're looking, you wanna be able to make sure you're making those distinctions. 'Cause again, there's no longer one or the other. It's how they interact. And again, when you look at your partners, the question now is how are they interacting? Has this been done at scale somewhere else? Can you help me understand how I need to deploy this so that I can reduce my time to value, which is very, very important to create that seamless, frictionless transaction we want our consumers to have.
>> So Nick, you talked about how you wanna work with companies that demonstrate that they have expertise, because you can't do it on your own. Companies that are capable of providing the scale that you need to provide. So just as we talk about how AI is placing pressure on different parts of the technology stack, it's also got to be putting pressure on the traditional relationships you have with technology suppliers. What are you looking for in suppliers as you think about these new classes of applications? >> Well, the part is you're looking at, for us, it's do you have that scale that we're looking at? Have you done this before, at that global scale? Again, in many cases you can have five guys in a garage that can do great things, but where has it been tested? When we say tested, it's not just, "Hey, we did this in a pilot." We're talking it's gotta be robust. So that's one thing that you're looking for. You're also looking for a partner who can bring, for us, additional information that we don't have ourselves, right? In many cases, when you look at that partner, they're gonna bring something where they're almost like an adjunct part of your team. They are your bench strength. That's what we're looking for when we look at it. What expertise do you have that we may not? What are you seeing, especially on the technology front, that we're not privy to? What are those different chips that are coming out, the new ways we should be handling the storage, the new ways the applications are interacting with that? We want to know from you, because again, there's a competition for talent, and we're looking for a partner who has that talent and will bring it to us so that we don't have to search for it. >> At scale. >> Yeah, especially at scale. >> Nick Curcuru, Mastercard. Thanks for being on theCUBE. >> Thank you for having me.
>> So there you have a great example of what a leading company is doing to take full advantage of the possibilities of AI by utilizing infrastructure that gets the job done simpler, faster, and better. So let's imagine for a second how it might affect your life. Well, here's your opportunity. We're now gonna move into the CrowdChat part of the event, and this is your chance to ask peers questions, provide your insights, and tell your war stories. Ultimately, to interact with thought leaders about what it means to get ready for AI. Once again, I'm Peter Burris. Thank you for watching. Now let's jump into the CrowdChat.

Published Date : Aug 14 2018
