Dell EMC: Get Ready For AI
(bright orchestra music) >> Hi, I'm Peter Burris. Welcome to a special digital community event brought to you by Wikibon and theCUBE. Sponsored by Dell EMC. Today we're gonna spend quite some time talking about some of the trends in the relationship between hardware and AI. Specifically, we're seeing a number of companies doing some masterful work incorporating new technologies to simplify the infrastructure required to take full advantage of AI options and possibilities. Now at the end of this series of conversations, we're gonna run a CrowdChat, which will be your opportunity to engage your peers and engage thought leaders from Dell EMC and from Wikibon and SiliconANGLE, and have a broader conversation about what it means to be better at doing AI, more successful, improving time to value, et cetera. So wait 'til the very end for that. Alright, let's get it kicked off. Tom Burns is my first guest. And he is the Senior Vice President and General Manager of Networking Solutions at Dell EMC. Tom, it's great to have you back again. Welcome back to theCUBE. >> Thank you very much. It's great to be here. >> So Tom, this is gonna be a very, very exciting conversation we're gonna have. And it's gonna be about AI. So when you go out and talk to customers specifically, what are you hearing as they describe their needs, their wants, their aspirations as they pertain to AI? >> Yeah, Pete, we've always been looking at this as this whole digital transformation. Some studies say that about 70% of enterprises today are looking at how to take advantage of the digital transformation that's occurring. In fact, you're probably familiar with the Dell 2030 Survey, where we went out and talked to about 400 different companies of very different sizes. And they're looking at all these connected devices and edge computing and all the various changes that are happening from a technology standpoint, and certainly AI is one of the hottest areas. 
There's a report, I think co-sponsored by ServiceNow, that says over 62% of the CIOs in the Fortune 500 are looking at AI as far as managing their business in the future. And it's really about user outcomes. It's about how do they improve their businesses, their operations, their processes, their decision-making, using the capability of compute coming down from a cost perspective and the number of connected devices exploding, bringing more and more data to their companies that they can use, analyze, and put to use cases that really make a difference in their business. >> They make a difference in their business, but often these use cases are also a lot more complex. We have this little bromide that we use: the first 50 years of computing were about known process, unknown technology. We're now entering into an era where we know a little bit more about the technology. It's gonna be cloud-like, but we don't know what the processes are, because we're engaging directly with customers or partners in much more complex domains. That suggests a lot of things. How are customers dealing with that new level of complexity, and where are they looking to simplify? >> You actually nailed it on the head. What's happening in our customers' environment is they're hiring these data scientists to really look at this data. And instead of analyzing the data that's being collected, they're spending more time worried about the infrastructure, building the components, and looking at allocations of capacity in order to make these data scientists productive. And really, what we're trying to do is help them get through that particular hurdle. So you have the data scientists that are frustrated, because they're waiting for the IT Department to help them set up and scale the capacity and infrastructure that they need in order to do their job. 
And then you've got the IT Departments that are very frustrated, because they don't know how to manage all this infrastructure. So the question around do I go to the cloud? Do I remain on-prem? All of these are things that our customers are continuing to be challenged with. >> Now, the ideal would be that you can have a cloud experience but have the data reside where it most naturally resides, given physics, given the cost, given bandwidth limitations, given regulatory regimes, et cetera. So how are you at Dell EMC helping to provide that sense of an experience based on what the workload is and where the data resides, as opposed to some other set of infrastructure choices? >> Well, that's the exciting part is that we're getting ready to announce a new solution called the Ready Solutions for AI. And what we've been doing is working with our customers over the last several years looking at these challenges around infrastructure, the data analytics, the connected devices, but giving them an experience that's real-time. Not letting them worry about how am I gonna set this up, or management and so forth. So we're introducing the Ready Solutions for AI, which really focuses on three things. One is simplify the AI process. The second thing is to ensure that we give them deep and real-time analytics. And lastly, provide them the level of expertise that they need in a partner in order to make those tools useful and that information useful to their business. >> Now we want to not only provide AI to the business, but we also wanna start utilizing some of these advanced technologies directly in the infrastructure elements themselves to make it more simple. Is that a big feature of the Ready Solutions for AI? >> Absolutely, as I said, one of the key value propositions is around making AI simple. We are experts at building infrastructure. We have IP around compute, storage, networking, InfiniBand. 
The things that are capable of putting this infrastructure together. So we have tested that based upon customers' input, using traditional data analytics libraries and tool sets that the data scientists are gonna use, already pre-tested and certified. And then we're bringing this to them in a way which allows them, through a service provisioning portal, to basically set up and get to work much faster. With the previous tools that were available out there, some from our competition, there were 15, 20, 25 different steps just to log on, just to get enough automation or enough capability in order to get the information that they need. With the infrastructure allocated for this big data analytics through this service portal, we've actually gotten it down to around five clicks, with a very user-friendly GUI, no CLI required. And basically, again, interacting with the tools that they're used to immediately, right out of the gate, like in stage three. And then getting them to work in stage four and stage five, so that they're not worried about the infrastructure, not worried about capacity, or is it gonna work. They basically are one, two, three, four clicks away, and they're up and working on the analytics that everyone wants them to work on. And heaven knows, these guys are not cheap. >> So you're talking about the data scientists. So presumably when you're saying they're not worried about all those things, they're also not worried about when the IT Department can get around to doing it. So this gives them the opportunity to self-provision. Have I got that right? >> That's correct. They don't need IT to come in and set up the network, to do the CLI for the provisioning, to make sure that there are enough VMs or workloads that are properly scheduled in order to give them the capacity that they need. They basically are set with a preset platform. Again, let's think about what Dell EMC is really working towards, and that's becoming the infrastructure provider. 
We believe that the silos of server, storage, and networking are being eliminated, that companies want a platform where they can enable those capabilities. So you're absolutely right. The part about simplifying the AI process is really giving the data scientists the tools they need to provision the infrastructure they need very quickly. >> And so that means that the IT group can actually start acting more like a DevOps organization, as opposed to a specialist in one or another technology. >> Correct, but we've also given them the capability by giving them the usual automation and configuration tools that they're used to, coming from some of our software partners, such as Cloudera. So in other words, you still want the IT Department involved, making sure that the infrastructure is meeting the requirements of the users. They're giving them what they want, but we're simplifying the tools and processes from the IT standpoint as well. >> Now we've done a lot of research into what happened in big data, and a lot of that is likely to happen in the AI world. And a lot of the problems that companies had with big data were that they conflated, or they confused, the objectives, the outcome of a big data project, with just getting the infrastructure to work. And they walked away often, because they failed to get the infrastructure to work. So it sounds like what you're doing is you're trying to take the infrastructure out of the equation, while at the same time going back to the customer and saying, "Wherever you want this job to run or this workload to run, you're gonna get the same experience regardless." >> Correct, and we're gonna give them an improved experience as well. Because of the products that we've put together in this particular solution, combined with our compute, our scale-out NAS solution from a storage perspective, our partnership with Mellanox for InfiniBand or Ethernet switch capability. 
We're gonna give them deeper insights and faster insights. The performance and scalability of this particular platform is tremendous. We believe, in certain benchmark studies based upon the ResNet-50 benchmark, we've performed anywhere between two and a half to almost three times faster than the competition. In addition, from a storage standpoint, all of these workloads, all of the various characteristics that happen, you need a ton of IOPS. >> Yeah. >> And there's no one in the industry that has the IOPS performance that we have with our All-Flash Isilon product. The capabilities that we have there we believe are somewhere around nine times the competition. Again, the scale-out performance while simplifying the overall architecture. >> Tom Burns, Senior Vice President of Networking Solutions at Dell EMC. Thanks for being on theCUBE. >> Thank you very much. >> So there's some great points there about this new class of technology that dramatically simplifies how hardware can be deployed to improve the overall productivity and performance of AI solutions. But let's take a look at a product demo. >> Every week, more customers are telling us they know AI is possible for them, but they don't know where to start. Much of the recent progress in AI has been fueled by open source software. So it's tempting to think that do-it-yourself is the right way to go. Get some how-to references from the web and start building out your own distributed deep-learning platform. But it takes a lot of time and effort to create an enterprise-class AI platform with automation for deployment, management, and monitoring. There is no easy solution for that. Until now. Instead of putting the burden of do-it-yourself on your already limited staff, consider Dell EMC Ready Solutions for AI. Ready Solutions are complete software and hardware stacks, pre-tested and validated with the most popular open source AI frameworks and libraries. 
Our professional services with proven AI expertise will have the solution up and running in days and ready for data scientists to start working in weeks. Data scientists will find the Dell EMC Data Science Provisioning Portal a welcome change for managing their own hardware and software environments. The portal lets data scientists acquire hardware resources from the cluster and customize their software environment with packages and libraries tested for compatibility with all dependencies. Data scientists choose between JupyterHub notebooks for interactive work, as well as terminal sessions for large-scale neural networks. These neural networks run across a high-performance cluster of PowerEdge servers with scalable Intel processors and scale-out Isilon storage that delivers up to 18 times the throughput of its closest all-flash competitor. IT pros will experience that AI is simplified as Bright Cluster Manager monitors your cluster for configuration drift down to the server BIOS, using exclusive integration with Dell EMC's OpenManage APIs for PowerEdge. This solution provides comprehensive metrics along with automatic health checks that keep an eye on the cluster and will alert you when there's trouble. Ready Solutions for AI are the only platforms that keep both data center professionals and data scientists productive and getting along. IT operations are simplified, and that produces a more consistent experience for everyone. Data scientists get a customizable, high-performance, deep-learning service experience that can eliminate monthly charges spent on public cloud while keeping your data under your control. (upbeat guitar music) >> It's always great to see the product videos, but Tom Burns mentioned something earlier. He talked about the expansive expertise that Dell EMC has in bringing together advanced hardware and advanced software into more simple solutions that can liberate business value for customers, especially around AI. 
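The self-service flow the demo narrates, where a data scientist acquires cluster resources and a pre-tested environment in a few clicks and releases them when done, can be sketched as a toy simulation. Everything below (the class, method names, the portal URL) is a hypothetical illustration, not the actual Data Science Provisioning Portal API:

```python
# Toy sketch of a self-service provisioning portal: request GPUs plus a
# pre-validated framework stack in one call, no CLI and no IT ticket.
# All names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ProvisioningPortal:
    free_gpus: int = 16
    sessions: list = field(default_factory=list)

    def request_environment(self, user, gpus, framework, mode="notebook"):
        """Allocate GPUs and a tested software environment in one step."""
        if gpus > self.free_gpus:
            raise RuntimeError("not enough free GPUs in the cluster")
        self.free_gpus -= gpus
        session = {
            "user": user,
            "gpus": gpus,
            "framework": framework,  # e.g. a pre-tested TensorFlow build
            "mode": mode,            # "notebook" (JupyterHub) or "terminal"
            "url": f"https://portal.example/sessions/{user}",
        }
        self.sessions.append(session)
        return session

    def release(self, session):
        """Return the session's GPUs to the shared pool."""
        self.sessions.remove(session)
        self.free_gpus += session["gpus"]

# A data scientist self-provisions without waiting on the IT Department:
portal = ProvisioningPortal()
s = portal.request_environment("garima", gpus=4, framework="tensorflow")
print(s["url"], portal.free_gpus)  # -> https://portal.example/sessions/garima 12
portal.release(s)
```

The point of the sketch is the shape of the workflow, not the API: allocation, environment setup, and teardown collapse into single self-service calls, which is what lets IT act like a service provider instead of a ticket queue.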
And so to really test that out, we sent Jeff Frick, who's the general manager and host of theCUBE, down to the bowels of Dell EMC's operations in Austin, Texas. Jeff went and visited the Dell EMC HPC and AI Innovation Lab and met with Garima Kochhar, who's a Senior Principal Engineer on the technical staff. Let's hear what Jeff learned. >> We're excited to have with us our next guest. She's Garima Kochhar. She's on the technical staff and a Senior Principal Engineer at Dell EMC. Welcome. >> Thank you. >> From your perspective, what's kinda changing in the landscape from high-performance computing, which has been around for a long time, into more of the AI and machine learning and deep learning and stuff we hear about much more in business contexts today? >> High-performance computing has applicability across a broad range of industries. So not just national labs and supercomputers, but the commercial space as well. And in our lab, we've done a lot of that work in the last several years. And then the deep learning algorithms, those have also been around for decades. But what we are finding right now is that the algorithms and the hardware, the technologies available, have hit that perfect point, along with industries' interest and the amount of data we have, to make it more, what we would call, mainstream. >> So you can build an optimum solution, but ultimately you wanna build industry solutions. And then even a subset of that, you invite customers in to optimize for their particular workflow or their particular business case, which may not match the perfect benchmark spec at all, right? >> That's exactly right. And so that's the reason this lab is set up for customer access, because we do the standard benchmarking. But you want to see, what is my experience with this, how does my code work? And it allows us to learn from our customers, of course. 
And it allows them to get comfortable with the technologies, to work directly with the engineers and the experts, so that we can be their true partners and trusted advisors and help them advance their research, their science, their business goals. >> Right. So you guys built the whole rack out, right? Not just the fun shiny new toys. >> Yeah, you're right. So typically, when something fails, it fails spectacularly. Right, so I'm sure you've heard horror stories where there was equipment on the dock and it wouldn't fit in the elevator, or things like that, right? So there are lots of other teams that handle, of course Dell's really good at this, the logistics piece of it, but even within the lab. When you walk around the lab, you'll see our racks are set up with power meters. So we do power measurements. Whatever best practices in tuning we come up with, we feed that into our factories. So if you buy a solution, say targeted for HPC, it will come with different BIOS tuning options than a regular, say Oracle, database workload. We have this integration into our software deployment methods. So when you have racks and racks of equipment, or one rack of equipment, or maybe even three servers, and you're doing an installation, all the pieces are baked in already and everything is seamless and easy to operate. So our idea is: the more that we can do in building integrated solutions that are simple to use and performant, the less time our customers and their technical computing and IT Departments have to spend worrying about the equipment, and they can focus on their unique and specific use case. >> Right, you guys have a services arm as well. >> Well, we're an engineering lab, which is why it's really messy, right? Like if you look at the racks, if you look at the work we do, we're a working lab. We're an engineering lab. We're a product development lab. And of course, we have a support arm. We have a services arm. And sometimes we're working with new technologies. 
We conduct training in the lab for our services and support people, but we're an engineering organization. And so when customers come into the lab and work with us, they work with it from an engineering point of view, not from a pre-sales point of view or a services point of view. >> Right, kinda what's the benefit of having the experience in this broader set of applications as you can apply it to some of the newer, more exciting things around AI, machine learning, deep learning? >> Right, so the fact that we are a shared lab, right? Like the bulk of this lab is high-performance computing and AI, but there's lots of other technologies and solutions we work on over here. And there's other labs in the building that we have colleagues in as well. The first thing is that the technology building blocks for several of these solutions are similar, right? So when you're looking at storage arrays, when you're looking at Linux kernels, when you're looking at network cards, or solid state drives, or NVMe, several of the building block technologies are similar. And so when we find interoperability issues, which you would think there would never be any problems, you throw all these things together, they always work like-- >> (laughs) Of course (laughs). >> Right, so when you sometimes, rarely, find an interoperability issue, that issue can affect multiple solutions. And so we share those best practices, because we engineers sit next to each other and we discuss things with each other. We're part of the larger organization. Similarly, when you find tuning options and nuances and parameters for performance or for energy efficiency, those also apply across different domains. So while you might think of Oracle as something that's been done for years, with every iteration of technology there's new learning, and that applies broadly across anybody using enterprise infrastructure. >> Right, what gets you excited? 
What are some of the things that you see, like, "I'm so excited that we can now apply this horsepower to some of these problems out there?" >> Right, so that's a really good point, right? Because most of the time when you're trying to describe what you do, it's hard to make everybody understand. Well, not what you're doing, right? But sometimes with deep technology it's hard to explain what's the actual value of this. And so a lot of the work we're doing in terms of exascale, it's to grow, like, the human body of knowledge forward, to grow the science happening in each country, moving that forward. And that's kind of, at the higher end, when you talk about national labs and defense, and everybody understands that needs to be done. But when you find that your social media is doing some face recognition, everybody experiences that and everybody sees that. And when you're trying to describe it, we're all talking about driverless cars, or we're all talking about, "Oh, it took me so long, because I had this insurance claim and then I had to get an appointment with the appraiser and they had to come in." I mean, those are actual real-world use cases where some of these technologies are going to apply. So even industries where you didn't think of them as being leading-edge on the technical forefront in terms of IT infrastructure and digital transformation, in every one of these places you're going to have an impact of what you do. >> Right. >> Whether it's drug discovery, right? Or whether it's next-generation gene sequencing, or whether it's designing the next car, like pick your favorite car, or when you're flying in an aircraft, the engineers who were designing the engine and the blades and the rotors for that craft were using technologies that you worked with. And so now it's everywhere, everywhere you go. We talked about 5G and IoT and edge computing. >> Right. >> I mean, we all work on this collectively. >> Right. >> So it's our world. >> Right. 
Okay, so last question before I let you go. Just having the resources to bear, in terms of being in your position, to do the work when you've got the massive resources now behind you. You have Dell, the merger with EMC, all the subset brands, Isilon, so many brands. How does that help you do your job better? What does that let you do here in this lab that probably a lot of other people can't do? >> Yeah, exactly. So when you're building complex solutions, there's no one company that makes every single piece of it, but the tighter that things work together, the better that they work together. And that's directly through all the technologies that we have under the Dell Technologies umbrella and with Dell EMC. And that's because of our super close relationships with our partners that allows us to build these solutions that are painless for our customers and our users. And so that's the advantage we bring. >> Alright. >> This lab and our company. >> Alright, Garima. Well, thank you for taking a few minutes. Your passion shines through. (laughs) >> Thank you. >> I really liked hearing about what Dell EMC's doing in their innovation labs down at Austin, Texas, but it all comes together for the customer. And so the last segment that we wanna bring you here is a great segment. Nick Curcuru, who's the Vice President of Big Data Analytics at Mastercard, is here to talk about how some of these technologies are coming together to speed value and realize the potential of AI at Mastercard. Nick, welcome to theCUBE. >> Thank you for letting me be here. >> So Mastercard, tell us a little bit about what's going on at Mastercard. 
>> There's a lot that's going on with Mastercard, but I think the most exciting things that we're doing out of Mastercard right now are with artificial intelligence, and how we're bringing the ability for artificial intelligence to really allow a seamless transition when someone's actually doing a transaction, and also bringing a level of security to our customers and our banks and the people that use Mastercard. >> So AI to improve engagement, provide a better experience, but that's a pretty broad range of things. Specifically, when you think about how AI can be applied, what are you looking to, especially early on? >> Well, let's actually take a look at our core business, which is being able to make sure that we can secure a payment, right? So at this particular point, people are used to, we're applying AI to biometrics. But not just a fingerprint or facial recognition, but actually how you interact with your device. So you think of, like, the Internet of Things, and you're sitting back saying, "I'm swiping my device, my mobile device, or how I interact with a keyboard." Those are all key signatures. And we, with NuData, a company we've just acquired, are taking that capability to create a profile and make that a part of your signature. So it's not just beyond a fingerprint. It's not just beyond a facial. It's actually how you're interacting, so that we know it's you. >> So there's a lot of different potential sources of information that you can utilize, but AI is still a relatively young technology and practice. And one of the big issues for a lot of our clients is how do you get time to value? So take us through, if you would, a little bit about some of the challenges that Mastercard, and anybody, would face to try to get to that time to value. 
>> Well, what you're really doing is looking for a good partner to be with when you're doing artificial intelligence, because again, at that particular point, you're trying to get to scale. For us, it's always about scale. How can we roll this across 220 countries? We run 165 million transactions per hour, right? So what we're looking for is a partner who also has that ability to scale. A partner who has the global presence, who's learning. So that's the first step. That's gonna help you with your time to value. The other part is actually sitting back and using those particular partners to bring the expertise that they're learning to combine with yours. It's no longer just silos. So when we talk about artificial intelligence, how can we be learning from each other? Those open source systems that are out there, how do we learn from that community? It's that community that allows you to get there. Again, those that are trying to do it on their own, trying to do it by themselves, they're not gonna get to the point where they need to be. In other words, what should be a six-month time to value is gonna take them years. We're trying to accelerate that, and say, "How can we get those algorithms operating for us the way we need them to, to provide the experiences that people want, quickly?" And that's with good partners. >> 165 million transactions per hour is only likely to go up over the course of the next few years. That creates an operational challenge. AI is associated with a probabilistic set of behaviors as opposed to categorical. Little bit more difficult to test, little bit more difficult to verify. How is the introduction of some of these AI technologies impacting the way you think about operations at Mastercard? >> Well, for operations, when you actually take a look, there are three components, right? There's right there on the edge, so when someone's interacting and actually doing the transaction. And then we look at it as we have a core. 
So that core sits there, right? Basically, that's where you're learning, right? And then there's actually, what we call, the deep learning component of it. So for us, it's how can we move what we need to have in the core and what we need to have on the edge? So the question for us always is, we want that algorithm to be smart. What three to four things do we need that algorithm to be looking for within that artificial intelligence, and when does it need to go back into the core and retrieve something, whether that's your fingerprint, your biometrics, how you're interacting with that machine, to say, "Yes, that's you. Yes, we want that transaction to go through." Or, "No, stop it before it even begins." It's that interaction and operational basis that we always have a dynamic tension with, but it's how we get from the edge to the core. And it's understanding what we need it to do. So we're breaking apart what we have to have that intelligence to be able to create a decision for us. So that's how we're trying to manage it, as well as, of course, the hardware that goes with it and the tools that we need in order to make that happen. >> Let's get on the hardware just a little bit. Historically, different applications put pressure on different components within a stack. One of the observations that we've made is that the transition from spinning disk to flash allows companies like Mastercard to move from just persisting data to actually delivering data. >> Yeah. >> Much more rapidly. How do these AI technologies, what kinda new pressures do they put on storage? >> Well, they put a tremendous pressure on it, because that's actually, again, the next tension or dynamic that you have to play with. So what do you wanna have on disk? What do you need flash to do? Again, if you look at some people, everyone's like, "Oh, flash will take over everything." 
It's like no, flash has, there's a reason for it to exist, and understanding what that reason is, and understanding, "Hey, I need that to be able to do this in sub-seconds, nanoseconds," I've heard the term before. That's what you're asking flash to do. When you want deep learning, that, I want on disk. I want to be taking all those millions and billions of transactions that we're gonna see and learn from them. All the ways that people will be trying to attack me, right? The bad guys, how am I learning from everything that I'm having that can sit there on disk and let it continue to run? That's the deep learning. The flash is when I wanna create a seamless transaction with a customer, or a consumer, or from a business to a business. I need to have that decision now. I need to know it is you who is trying to swipe or purchase something with my mobile device or through, basically, the Internet. Or how am I actually even swiping or inserting, dipping my card in that particular machine at a merchant. That's where we're looking at how we use flash. >> So you're looking at perhaps using older technologies or different classes of technologies for some of the training elements, but really moving to flash for the interfacing piece, where you gotta deliver the real-time effort right now. >> And that's the experience. And that's what you're looking for. You wanna be able to make sure you're making those distinctions. 'Cause again, it's no longer one or the other. It's how they interact. And again, when you look at your partners, the question now is, how are they interacting? Has this been done at scale somewhere else? Can you help me understand how I need to deploy this so that I can reduce my time to value, which is very, very important to create that seamless, frictionless transaction we want our consumers to have. 
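The edge-versus-core split Nick describes, where a fast check against a cached behavioral profile decides clear-cut cases at the edge and defers ambiguous ones to deeper scoring in the core, can be sketched as a toy decision function. The features, thresholds, and names below are invented for illustration and are not Mastercard's actual model:

```python
# Toy sketch of an edge-vs-core decision split: a cheap comparison against
# a cached behavioral profile handles the confident cases immediately,
# and only ambiguous transactions are deferred to deep scoring in the core.
# Features and thresholds are hypothetical.

def edge_score(txn, profile):
    """Cheap similarity check on a few cached behavioral features."""
    features = ("swipe_speed", "key_dwell_ms", "device_id_match")
    hits = sum(1 for f in features if txn.get(f) == profile.get(f))
    return hits / len(features)

def decide(txn, profile, approve_at=0.99, decline_at=0.34):
    score = edge_score(txn, profile)
    if score >= approve_at:
        return "approve"        # confident match: decide at the edge
    if score <= decline_at:
        return "decline"        # confident mismatch: stop it before it begins
    return "defer-to-core"      # ambiguous: send to deep scoring in the core

profile = {"swipe_speed": "fast", "key_dwell_ms": 90, "device_id_match": True}
good = {"swipe_speed": "fast", "key_dwell_ms": 90, "device_id_match": True}
odd  = {"swipe_speed": "slow", "key_dwell_ms": 90, "device_id_match": True}
bad  = {"swipe_speed": "slow", "key_dwell_ms": 40, "device_id_match": False}
print(decide(good, profile), decide(odd, profile), decide(bad, profile))
# -> approve defer-to-core decline
```

The design point is the latency budget: the edge path touches only a tiny cached profile (the flash tier in Nick's framing), while the deferred path can afford to consult the full transaction history sitting on disk.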
>> So Nick, you talked about how you wanna work with companies that demonstrate that they have expertise, because you can't do it on your own. Companies that are capable of providing the scale that you need. So just as we talk about how AI is placing pressure on different parts of the technology stack, it's also got to be putting pressure on the traditional relationships you have with technology suppliers. What are you looking for in suppliers as you think about these new classes of applications? >> Well, for us it's, do you have that scale that we're looking at? Have you done this before, at that global scale? Again, in many cases you can have five guys in a garage that can do great things, but where has it been tested? And when we say tested, it's not just, "Hey, we did this in a pilot." We're talking it's gotta be robust. So that's one thing that you're looking for. You're also looking for a partner who can bring, for us, additional information that we don't have ourselves, right? In many cases, when you look at that partner, they're gonna bring something where they're almost like an adjunct part of your team. They are your bench strength. That's what we're looking for when we look at it. What expertise do you have that we may not? What are you seeing, especially on the technology front, that we're not privy to? What are those different chips that are coming out, the new ways we should be handling the storage, the new ways the applications are interacting with that? We want to know from you, because again, there's a competition for talent, and we're looking for a partner who has that talent and will bring it to us so that we don't have to search for it. >> At scale. >> Yeah, especially at scale. >> Nick Curcuru, Mastercard. Thanks for being on theCUBE. >> Thank you for having me. 
>> So there you have a great example of what leading companies or what a leading company is doing to try to take full advantage of the possibilities of AI by utilizing infrastructure that gets the job done simpler, faster, and better. So let's imagine for a second how it might affect your life. Well, here's your opportunity. We're now gonna move into the CrowdChat part of the event, and this is your chance to ask peers questions, provide your insights, tell your war stories. Ultimately, to interact with thought leaders about what it means to get ready for AI. Once again, I'm Peter Burris, thank you for watching. Now let's jump into the CrowdChat.
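The flash-versus-disk split Nick describes maps onto a simple tiering policy: keep the real-time scoring path on flash with a hard latency budget, and leave the long-horizon learning workloads on disk. A minimal sketch of that routing logic follows; the workload names and the budget are illustrative assumptions, not Mastercard's actual system.

```python
# Toy tiering policy for the two workload classes discussed above.
# HOT_BUDGET_MS and the workload names are hypothetical.

HOT_BUDGET_MS = 50  # assumed decision budget for a swipe-time check

def route_workload(kind):
    """Pick the storage tier whose access pattern fits the workload."""
    if kind == "score_transaction":
        return "flash"  # sub-second decision while the card is in the reader
    if kind in ("train_model", "replay_history"):
        return "disk"   # scan millions and billions of past transactions
    raise ValueError(f"unknown workload: {kind}")

def within_budget(latency_ms):
    """True if a scoring decision lands inside the real-time budget."""
    return latency_ms <= HOT_BUDGET_MS
```

The point of the split is that the hot path is judged against a latency budget, while the learning path is judged on how much history it can afford to keep and scan.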
Bala Chandrasekaran, Dell EMC | Dell EMC: Get Ready For AI
(techno music) >> Hey welcome back everybody, Jeff Frick here with theCUBE. We're in Austin, Texas at the Dell EMC HPC and AI Innovation Lab. As you can see behind me, there's racks and racks and racks of gear, where they build all types of configurations around specific applications, whether it's Oracle or SAP. And more recently a lot more around artificial intelligence, whether it's machine learning, deep learning, so it's a really cool place to be. We're excited to be here. And our next guest is Bala Chandrasekaran. He is on the technical staff as a systems engineer. Bala, welcome! >> Thank you. >> So how do you like playing with all these toys all day long? >> Oh I love it! >> I mean you guys have literally everything in there. A lot more than just Dell EMC gear, but you've got switches and networking gear-- >> Right. >> Everything. >> And not just the gear, it's also all the software components, the deep learning libraries, deep learning models, so a whole bunch of things that we get to play around with. >> Now that's interesting 'cause it's harder to see the software, right? >> Exactly right. >> The software's pumping through all these machines, but you guys do all types of optimization and configuration, correct? >> Yes, we try to make it easy for the end customer. And in the project that I'm working on, machine learning for Hadoop, we try to make things easy for the data scientists. >> Right, so we go to all the Hadoop shows, Hadoop World, Hadoop Summit, Strata, Big Data NYC, Silicon Valley, and the knock on Hadoop is always that it's too hard, there aren't enough engineers, I can't get enough people to do it myself. It's a cool open source project, but it's not that easy to do. You guys are really helping people solve that problem. >> Yes, and what you're saying is true for the infrastructure guys. Now imagine a data scientist, right? Setting up a Hadoop cluster, accessing it, securing it, is going to be really tough for them. 
And they shouldn't be worried about it, right? They should be focused on data science. So those are some of the things that we try to do for them. >> So what are some of the tips and tricks, as you build these systems, that throw people off all the time, that are relatively simple things to fix? And then what are some of the hard stuff where you guys have really applied your expertise to get over those challenges? >> Let me give you a small example. So this is a new project, A.I., and we hired data scientists. So I walked a data scientist through the lab. He looked at all the clusters, and he pulled me aside and said, hey, you're not going to ask me to work on these things, right? I have no idea how to do these things. So that kind of gives you a sense of what a data scientist should focus on and what they shouldn't focus on. So some of the things that we do, and some of the things that are probably difficult for them, is all the libraries that are needed to run their project, the conflicts between libraries, the dependencies between them. So one of the things that we do is deliver this pre-configured engine that you can readily download into our product and run. So data scientists don't have to worry about what library they should use. >> Right. >> They have to worry about the models and accuracy and whatever data science needs to be done, rather than focusing on the infrastructure. >> So you not only package the hardware and the systems, but you've packaged the software distribution and all the kind of surrounding components of that as well. >> Exactly right. >> So when you have the data scientists here, talking about the Hadoop cluster, if they didn't want to talk about the hardware and the software, what were you helping them with? How did you engage with the customers here at the lab? >> So the example that I gave is for the data scientists that we newly hired for our team, so we had to set up environments for them. 
So that was the example, but the same thing applies for a customer as well. So again, to help them in solving the problem, we try to package some of the things as part of our product and deliver it to them, so it's easy for them to deploy and get started on things. >> Now the other piece that's included, and again is not in this room, is the services-- >> Right. >> And the support. So you guys have a full team of professional services. Once you configure and figure out what the optimum solution is for them, then you've got a team that can actually go deploy it at their actual site. >> So we have packaged things even for our services. The services team would go to the customer site. They would apply the solution, download and deploy our packages, and be able to demonstrate how easy it is. Think of them as tutorials if you like. So here are the tutorials, here's how you run various models, here's how easy it is for you to get started. That's what they would train the customer on. So there's not just the deployment piece of it, but also packaging things for them so they can show customers how to get started quickly, how everything works, and kind of give a green check mark if you will. >> So what are some of your favorite applications that people are using these things for? Do you get involved in the application stack on the customer side? What are some of the fun use cases that people use your technology to solve? >> So for the application, my project is about machine learning on Hadoop, via packaging Cloudera's CDSW, that's Cloudera Data Science Workbench, as part of the product. That allows data scientists access to the Hadoop cluster while abstracting the complexities of the cluster. So they can access the cluster, they can access the data, they can have security, without worrying about all the intricacies of the cluster. In addition to that, they can create different projects, have different libraries in different projects. 
So they don't have to conflict with each other, and also they can add users to it. They can work collaboratively. So basically it's to help data scientists and software developers do their job and not worry about the infrastructure. >> Right. >> They should not be. >> Right, great. Well Bala, it's a pretty exciting place to work. I'm sure you're having a ball. >> Yes I am, thank you. >> All right. Well thanks for taking a few minutes with us, and I really enjoyed the conversation. >> I appreciate it, thank you. >> All right, he's Bala, I'm Jeff. You're watching theCUBE from Austin, Texas at the Dell EMC High Performance Computing and Artificial Intelligence Labs. Thanks for watching. (techno music)
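The project isolation Bala describes, each data-science project carrying its own pinned libraries and its own collaborators, can be sketched in a few lines. This is an illustrative model only, not the Cloudera Data Science Workbench API; all names here are hypothetical.

```python
# Hypothetical model of per-project isolation: two projects can pin
# conflicting versions of the same library without colliding.

class Project:
    def __init__(self, name):
        self.name = name
        self.libraries = {}  # library name -> pinned version
        self.users = set()   # collaborators sharing the workspace

    def add_library(self, lib, version):
        self.libraries[lib] = version

    def add_user(self, user):
        self.users.add(user)

fraud = Project("fraud-model")
fraud.add_library("tensorflow", "1.8")

etl = Project("nightly-etl")
etl.add_library("tensorflow", "1.4")  # different pin, no conflict
```

The design point is that version conflicts, the thing Bala says is "probably difficult" for data scientists, disappear when each project resolves its own dependencies rather than sharing one cluster-wide environment.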
Michael Bennett, Dell EMC | Dell EMC: Get Ready For AI
(energetic electronic music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're in a very special place. We're in Austin, Texas at the Dell EMC HPC and AI Innovation Lab. High performance computing, artificial intelligence. This is really where it all happens. Where the engineers at Dell EMC are putting together these ready-made solutions for the customers. They got every type of application stack in here, and we're really excited to have our next guest. He's right in the middle of it, he's Michael Bennett, Senior Principal Engineer for Dell EMC. Mike, great to see you. >> Great to see you too. >> So you're working on one particular flavor of the AI solutions, and that's really machine learning with Hadoop. So tell us a little bit about that. >> Sure yeah, the product that I work on is called the Ready Solution for AI Machine Learning with Hadoop, and that product is a Cloudera Hadoop distribution on top of our Dell PowerEdge servers. And we've partnered with Intel, who has released a deep learning library, called BigDL, to bring both the traditional machine learning capabilities as well as deep learning capabilities to the product. The product also adds a data science workbench that's released by Cloudera. And this tool allows the customer's data scientists to collaborate together, provides them secure access to the Hadoop cluster, and we think all-around makes a great product to allow customers to gain the power of machine learning and deep learning in their environment, while also reducing some of those overhead complexities that IT often faces with managing multiple environments, providing secure access, things like that. >> Right, 'cause the big knock always on Hadoop is that it's just hard. It's hard to put in, there aren't enough people, there aren't enough experts. So you guys are really offering a pre-bundled solution that's ready to go?
We've built seven or eight different environments going in the lab at any time to validate different hardware permutations that we may offer of the product as well as, we've been doing this since 2009, so there's a lot of institutional knowledge here at Dell to draw on when building and validating these Hadoop products. Our Dell services team has also been going out installing and setting these up, and our consulting services has been helping customers fit the Hadoop infrastructure into their IT model. >> Right, so is there one basic configuration that you guys have? Or have you found there's two or three different standard-use cases that call for two or three different kinds of standardized solutions? >> We find that most customers are preferring the R7-40XC series. This platform can hold 12 3 1/2" form-factor drives in the front, along with four in the mid-plane, while still providing four SSDs in the back. So customers get a lot of versatility with this. It's also won several Hadoop benchmarking awards. >> And do you find, when you're talking to customers or you're putting this together, that they've tried themselves and they've tried to kind of stitch together and cobble together the open-source proprietary stuff all the way down to network cards and all this other stuff to actually make the solution come together? And it's just really hard, right? >> Yeah, right exactly. What we hear over and over from our product management team is that their interactions with customers, come back with customers saying it's just too hard. They get something that's stable and they come back and they don't know why it's no longer working. They have customized environments that each developer wants for their big data analytics jobs. Things like that. So yeah, overall we're hearing that customers are finding it very complex. >> Right, so we hear time and time again that same thing. And even though we've been going to Hadoop Summit and Hadoop World and Stratus, since 2010. 
The momentum seems to be a little slower in terms of the hype, but now we're really moving into heavy-duty, real-time production, and that's what you guys are enabling with this ready-made solution. >> So with this product, yeah, we focused on enabling Apache Spark on the Hadoop environment. And that Apache Spark distributed computing has really changed the game as far as what it allows customers to do with their analytics jobs. No longer are we writing things to disk, but multiple transformations are being performed in memory, and that's also a big part of what enables the BigDL library that Intel released for the platform to train these deep-learning models. >> Right, 'cause Spark enables the real-time analytics, right? Now you've got streaming data coming into this thing, versus the batch, which was kind of the classic play of Hadoop. >> Right, and not only do you have streaming data coming in, but Spark also enables you to load your data in memory and perform multiple operations on it. And draw insights that maybe you couldn't before with traditional map-reduce jobs. >> Right, right. So what gets you excited to come to work every day? You've been playing with these big machines. You're in the middle of nerd nirvana I think-- >> Yeah exactly. >> With all of the servers and spin-disks. What gets you up in the morning? What are you excited about, as you see AI get more pervasive within the customers and the solutions that you guys are enabling? >> You know, for me, what's always exciting is trying new things. We've got this huge lab environment with all kinds of lab equipment. So if you want to test a new iteration, let's say tiered HDFS storage with SSDs and traditional hard drives, throw it together in a couple of hours and see what the results are. If we wanted to add new PCIe devices like FPGAs for the inference portion of the deep-learning development, we can put those in our servers and try them out. 
So I enjoy that, on top of the validated, thoroughly-worked-through solutions that we offer customers, we can also experiment, play around, and work towards that next generation of technology. >> Right, 'cause any combination of hardware that you basically have at your disposal to try together and test and see what happens? >> Right, exactly. And this is my first time actually working at an OEM, and so I was surprised, not only do we have access to anything that you can see out in the market, but we often receive test and development equipment from partners and vendors, that we can work with and collaborate with to ensure that once the product reaches market it has the features that customers need. >> Right, what's the one thing that trips people up the most? Just some simple little switch configuration that you think is like a minor piece of something, that always seems to get in the way? >> Right, or switches in general. I think that people focus on the application, because the switch is so abstracted from what the developer, or even somebody troubleshooting the system, sees, that oftentimes it's some misconfiguration or some typo that was entered during the switch configuration process that throws customers off, or has somebody scratching their head, wondering why they're not getting the kind of performance that they thought. >> Right, well that's why we need more automation, right? That's what you guys are working on. >> Right, yeah exactly. >> Keep the fat-finger typos out of the config settings. >> Right, consistent, reproducible. None of that, I did it yesterday and it worked, I don't know what changed. >> Right, alright Mike. Well thanks for taking a few minutes out of your day, and don't have too much fun playing with all this gear. >> Awesome, thanks for having me. >> Alright, he's Mike Bennett and I'm Jeff Frick. You're watching theCUBE, from Austin, Texas at the Dell EMC High Performance Computing and AI Labs. Thanks for watching. (energetic electronic music)
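Michael's point about Spark versus classic MapReduce, chained in-memory transformations instead of writing each stage to disk, is easy to see in a toy lazy pipeline. The sketch below is plain Python, not Spark's actual API:

```python
# Toy lazy dataset: transformations queue up and nothing runs until
# collect(), mirroring how Spark chains stages in memory.

class LazyDataset:
    def __init__(self, data, ops=None):
        self.data = data
        self.ops = ops or []  # pending (kind, fn) transformations

    def map(self, fn):
        return LazyDataset(self.data, self.ops + [("map", fn)])

    def filter(self, fn):
        return LazyDataset(self.data, self.ops + [("filter", fn)])

    def collect(self):
        out = list(self.data)
        for kind, fn in self.ops:  # one in-memory pass, no disk writes
            if kind == "map":
                out = [fn(x) for x in out]
            else:
                out = [x for x in out if fn(x)]
        return out

ds = LazyDataset(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
# nothing has executed yet; ds.collect() runs the whole chain in one pass
```

In a real MapReduce job each of those two stages would materialize its output to HDFS before the next could start; the in-memory chaining is what makes the iterative training loops of deep learning practical on the same cluster.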
Thierry Pellegrino, Dell EMC | Dell EMC: Get Ready For AI
(techno music) >> And welcome back everybody, Jeff Rick here with theCUBE. We're in Austin, Texas at the Dell EMC High Performance Computing and Artificial Intelligence Labs. It's been here for a long time, and as you can see behind us, and probably hear, there are racks and racks and racks of some of the biggest, baddest computers on the planet. In fact I think number 256, we were told earlier, is just behind us. We're excited to be here, really, as Dell EMC puts together pre-configured solutions for artificial intelligence, machine learning, and deep learning applications, because that's a growing concern and of growing importance to all the business people out there. So we're excited to have the guy running the show. He's Thierry Pellegrino, the VP of HPC and Business Strategy, and a whole bunch of other stuff. You're a pretty busy guy. >> I'm busy, but you can see all those servers, they're very busy too. They're humming. >> So just your perspective: the HPC part of this has been around for a while. The rise of machine learning and artificial intelligence as a business priority is relatively recent, but you guys are jumping in with both feet. >> Oh absolutely. I mean, HPC is not new to us. AI, machine learning, deep learning is happening, that's the buzzword, but we've been working on HPC clusters since back in the 90s. And it's great to see this technology, or this best practice, getting into the enterprise space, where data scientists need help, and instead of looking for one processor that will solve it all, they look for the knowledge of HPC and what we've been able to put together and apply it to their field. >> Right, so how do you kind of delineate between HPC and, say, the AI portion of the lab, or is it just kind of on a continuum? How do you slice and dice? >> Absolutely, it's all in one place, and you see it all behind us. In this area in front of us, we try to get all those servers put together and add the value for all the different workloads. So you get HPC, AI, ML, DL, all in one lab. >> Right, and they're all here. >> They're all here, the old, what would be called legacy applications, all the way to the newest and greatest. >> Exactly, the old stuff, the new stuff. And actually, you know, something you don't see is we're also looking at where the technology is going to take all those workloads. AI, ML, DL is the buzzword today, but down the road you're gonna see more applications, and we're already starting to test those technologies in this lab. So it's past, present, and future. >> Right. So one of the specific solutions you guys have put together is the DL, using the new Nvidia technology. We hear about Nvidia all the time. Obviously they're really well positioned in autonomous vehicles, and their GPUs are taking data centers by storm. How's that going? Where do you see some of the applications outside of autonomous vehicles for the Nvidia base? >> Oh, there are many applications. I think the technology itself is proving to solve a lot of customer problems, and you can apply it in many different verticals, many workloads. Again, you can see it in autonomous vehicles, you can see it in healthcare, life science, in financial services, risk management. It's really everywhere you need to solve a problem and you need dense compute solutions, and Nvidia has one of the technologies that a lot of our customers leverage to solve their problems. >> Right, and you're also launching a machine learning solution based on Hadoop, which, we've been going to Hadoop Summit, Hadoop World, and Strata for eight, nine years, I guess since 2010. And it's kind of funny, because the knock on Hadoop is always there aren't enough people, it's too hard, it's just a really difficult technology. So you guys are really taking, again, a solutions approach with Hadoop for machine learning, to basically deliver either a whole rack full of stuff, or a spec that you can build at your own place. >> Oh, absolutely. That's one of the three major tenets that we have for those solutions that we're launching. We really want it to be a solution that's faster, so performance is key. When you're trying to extract data and insights from your data set, you really need to be fast. You don't want it to take months; it has to be within accountable measures. So that's one of them. We want to make it simple. A data scientist is never going to be a PhD in HPC or any kind of computer technologies, so making it simple is critical. And the last one is we want to have this proven, trusted-advisor feel for our customers. You see it around you: this HPC lab was not built yesterday. It's been here showcasing our capabilities in the HPC world, our ability to combine the Hadoop environment with other environments to solve enterprise-class problems and bring business value to our customers. And that's really where we think our differentiation comes from. >> Right, and it's really a lab. I mean, you and I are both wearing sport coats right now, but there's gear stacked at varying heights, of every shape and size. And I think what's interesting is, while we talk about the sexy stuff, the GPUs and the CPUs and the Hadoop, there's a lot of details that make one of these racks actually work, and it's probably integrating some of those things as lower-tier things, and making sure they all work seamlessly together, so you don't get some nasty bottleneck on an inexpensive part that's holding back all that capacity. >> Oh, absolutely. You know, it's funny you mentioned that. We're talking to customers about the technologies we're assembling, and contrary to some web-tech-type companies that just look for any compute at all costs, and they'll just stack up a lot of technologies because they want the compute, in HPC-type environments, or when you try to solve problems with deep learning and machine learning, you're only as strong as your weakest link. And if you have a server, or a storage unit, or an interconnect between all those that is really weak, you're gonna see your performance go way down, and we watch out for that. And you know, the one thing that you alluded to, which I just wanted to point out: what you see behind us is the hardware. The secret sauce is really in the aggregation of all the components and all the software stacks. Because AI, ML, DL, great, easy acronyms, but when you start peeling the layers, you realize it's layers and layers of software, which are moving very fast, where you don't want to be spending your life understanding the interop requirements between those layers and worrying about whether your compute and your storage solution is gonna work. You want to solve problems as a scientist, and that's what we're trying to do: give you a solution, which is an infrastructure plus a stack, that's been validated and proven, and you can really get to work. >> Right, and even within that validated design for a particular workload, customers have an opportunity, maybe one needs a little bit more I/O at a relative scale, one needs a little bit more storage, one needs a little bit more compute. So even within a basic structured system that you guys have spec'd and certified, customers can still come in and make little mods based on their specific workload. >> You've got it. We're not in the phase of the acceptance of AI, ML, DL where things are cookie-cutter. It's still going to be a collaboration. That's why we have a really strong team working with our customers directly and trying to solution for their problem. If you need a little bit more storage, if you need faster storage for your scratch, if you need a little bit more I/O bandwidth because you're in a remote environment, all those characteristics are gonna be critical. And the solutions we're launching are not rigid; they're a perfect starting point for customers who want to get something to run directly if they feel like it, but if you have a solution that's more pointed, we can definitely iterate. And that's what our team in the field, and all the engineers that you have seen today walking through the lab, that's what their role is. We want to be a consultant, a partner, designing the right solution for the customer. >> Right. So Thierry, before I let you go, just one question, from your perspective of customers. You're out talking to customers; how has the conversation around artificial intelligence and machine learning evolved over the last several years? From kind of a cool science experiment, or it's all the HPC stuff with the government, or weather, or heavy lifting, really moving from that into a boardroom conversation, as a priority and a strategic imperative going forward. How's that conversation evolving when you're out talking to customers? >> Well, you know, it has changed, you're right. Back in the 60s, the science was there, but the technology wasn't there. Today we have the science, we have the technology, and we're seeing all the C-class decision makers really want to find value out of the data that we've collected, and that's where the discussion takes place. This is not a CIO discussion most of the time. And what's really fantastic is, contrary to a lot of the technologies AI has grown on, like big data, cloud, and all those buzzwords, here we're looking at something that's tangible. We have real-life examples of companies that are using deep learning and machine learning to solve problems, save lives, and get our technology in the hands of the right folks so they can impact the community. It's really, really fantastic, and that growth is set for success, and we want to be part of that. >> Right, it's just the continuation of this democratization trend. Give more people more data, give more people more tools, give more people more power, and you're gonna get innovation, you're gonna solve more problems. And it's so exciting. >> Absolutely, totally agree with you. >> Alright Thierry, well thanks for taking a few minutes out of your busy day, and congrats on the Innovation Lab here. >> Thank you so much. >> Alright, he's Thierry, I'm Jeff Rick. We're at the Dell EMC HPC and AI Innovation Labs in Austin, Texas. Thanks for watching. (techno music)
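Thierry's "weakest link" point has a simple back-of-the-envelope form: sustained pipeline throughput is capped by the slowest stage, so one under-specced interconnect drags down an otherwise balanced rack. The numbers below are illustrative only, not measurements from this lab.

```python
# End-to-end throughput of a staged pipeline is the minimum stage rate.

def pipeline_throughput(stage_rates_gbps):
    """Sustained rate in Gb/s: the slowest stage sets the ceiling."""
    return min(stage_rates_gbps.values())

balanced  = {"storage": 10, "interconnect": 10, "compute": 10}
weak_link = {"storage": 10, "interconnect": 1,  "compute": 10}
# the 1 Gb/s interconnect idles 90% of the storage and compute capacity
```

This is why the validated designs balance server, storage, and interconnect together rather than maximizing any one component in isolation.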