
Kurt Kuckein, DDN Storage, and Darrin Johnson, NVIDIA | CUBEConversation, Sept 2018


 

[Music] [Applause]

>> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from our fantastic studios in beautiful Palo Alto, California. Today we're going to be talking about what infrastructure can do to accelerate AI, and specifically we're going to use a burgeoning relationship between DDN and NVIDIA to describe what we can do to accelerate AI workloads by using higher-performance, smarter, and more focused infrastructure for computing. Now, to have this conversation we've got two great guests here: Kurt Kuckein, who is Senior Director of Marketing at DDN, and Darrin Johnson, who is Global Director of Technical Marketing for Enterprise at NVIDIA. Kurt, Darrin, welcome to theCUBE.

>> Thank you very much.

>> So let's get going on this, because this is a very, very important topic, and I think it all starts with this notion that there is a relationship that you guys have put forward. Kurt, why don't you describe it?

>> Sure. What we're announcing today is DDN's A3I architecture, powered by NVIDIA. It is a full rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply and very completely.

>> So if we think about why this is important: AI workloads clearly put special stress on the underlying technology. Darrin, talk to us a little bit about the nature of these workloads and why, in particular, things like GPUs and other technologies are so important to make them go fast.

>> Absolutely. As you probably know, AI is all about the data. Whether you're doing medical imaging, whether you're doing natural language processing, whatever it is, it's all driven by the data. The more data you have, the better results you get, but to drive that data into the GPUs you need great I/O, and that's why we're here today: to talk about DDN and the partnership, and how to bring that I/O to the GPUs on our DGX platforms.

>> So if we think about what you described, it's a lot of small files, often accessed randomly, with nonetheless very high-profile jobs that just can't stop midstream and start over.

>> Absolutely. And if you think about the history of high-performance computing, which is very similar to AI, really it is just I/O: lots of files that you have to get there with low latency and high throughput. That's why DDN's nearly twenty years of experience working in that exact same domain is perfect, because you get the parallel file system, which gives you that throughput, gives you that low latency, and just helps drive the GPUs.

>> You mentioned HPC and twenty years of experience. It used to be that in HPC you'd have scientists with a bunch of graduate students setting up some of these big honking machines, but now we're moving into the commercial domain. You don't have graduate students running around; you have a lot of administrators who are good people, but who have a lot to learn. So how does this relationship actually start bringing AI within reach of the commercial world?

>> That's exactly where this reference architecture comes in. A customer doesn't need to start from scratch. They have a design now that allows them to quickly implement AI, something that's really easily deployable. We've fully integrated this solution: DDN has made changes to our parallel file system appliance to integrate directly within the DGX-1 environment, which makes it even easier to deploy, and to extract the maximum performance out of it without having to run around, tune a bunch of knobs, and change a bunch of settings. It's really going to work out of the box.

>> And NVIDIA has done more than just the DGX-1. It's more than hardware; you've done a lot of optimization of different AI toolkits. Talk about that, Darrin.

>> Yeah. Going back to the example I used, the researchers of the past were HPC experts. What we have today are data scientists. Data scientists understand PyTorch, they understand TensorFlow, they understand the frameworks. They don't want to understand the underlying file system, networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results, and keep turning that, whether it's a single GPU or ninety DGXs or as many DGXs as you want. This solution helps bring that to customers much more easily, so those data scientists don't have to be system administrators.
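To make Darrin's point concrete, here is a minimal sketch of what that data-scientist view can look like in practice: ordinary framework-level code reading training files from a POSIX mount, with no awareness of RDMA or InfiniBand underneath. The mount path, directory layout, and loader settings are hypothetical, and the snippet assumes PyTorch is installed; it is an illustration of the idea, not DDN's or NVIDIA's reference code.

```python
# A minimal, hypothetical sketch of the data scientist's view: plain
# PyTorch code reading training files from a shared file system mount.
# The path /mnt/a3i/train and all settings here are illustrative only.
import os
from torch.utils.data import Dataset, DataLoader

class RawFileDataset(Dataset):
    """Lists every file under a directory tree and serves raw bytes."""

    def __init__(self, root):
        self.paths = sorted(
            os.path.join(dirpath, name)
            for dirpath, _, names in os.walk(root)
            for name in names
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # An ordinary POSIX read; the parallel file system underneath
        # is what keeps many small, random reads fast.
        with open(self.paths[idx], "rb") as f:
            return f.read()

def as_list(batch):
    # Keep raw bytes as a Python list; decoding would happen later.
    return batch

# Several worker processes keep file reads in flight so the GPU is
# never waiting on I/O.
loader = DataLoader(RawFileDataset("/mnt/a3i/train"),
                    batch_size=64, num_workers=8, collate_fn=as_list)

for batch in loader:
    pass  # decode, augment, and feed the GPU here
```

The point of the sketch is that nothing in it mentions the storage layer: the tuning Kurt describes happens below the mount point.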
>> So it's a reference architecture that makes things easier, but it's about more than just these commercial deployments. It's also the overall ecosystem: new application providers, application developers. How is this going to impact the aggregate ecosystem that's growing up around the need to deliver AI-related outcomes?

>> I think that's one point Darrin was getting at, and one of the big effects comes as these ecosystems reach a point where they're going to need to scale, and that's somewhere DDN has tons of experience. Many customers are starting off with smaller data sets. They still need the performance, and a parallel file system in that case is going to deliver that performance. But then as they grow, going from one GPU to ninety DGXs, they're going to need an incredible amount of performance scalability from their I/O, as well as capacity scalability. That's another thing we've made easy with A3I: being able to scale that environment seamlessly within a single namespace, so that people don't have to deal with a lot of tuning and turning of knobs to make this stuff work really well and drive the outcomes they need as they're successful.

>> So in the end it is the application that's most important to both of us, right? It's not the infrastructure. It's making the discoveries faster, it's processing information out in the field faster, it's doing analysis of the MRI faster. It's helping the doctors, helping anybody who's using this to make faster decisions, better decisions.

>> Exactly. And just to add to that: in the automotive industry you have data sets that run from 50 to 500 petabytes, and you need access to all that data all the time, because you're constantly training and retraining to create better models and better autonomous vehicles. You need the performance to do that. DDN helps bring that to bear, and this reference architecture simplifies it, so you get the value-add of NVIDIA GPUs, plus its ecosystem of software, plus DDN. It's a match made in heaven.

>> Darrin Johnson, NVIDIA; Kurt Kuckein, DDN: thanks very much for being on theCUBE.

>> Thank you very much.

>> And I'm Peter Burris. Once again, I'd like to thank you for watching this CUBE Conversation. Until next time. [Music]

Published Date : Oct 4 2018


Suresh Menon, Informatica | Informatica World 2018


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering Informatica World 2018. Brought to you by Informatica.

>> Welcome back everyone. This is theCUBE's exclusive coverage of Informatica World 2018, live here in Las Vegas at the Venetian Hotel. I'm John Furrier, co-host with Peter Burris, here for the next two days of wall-to-wall coverage. Our next guest is Suresh Menon, Senior Vice President and General Manager of the Master Data Management group within Informatica. He's got the keys to the kingdom, literally. Welcome back, good to see you.

>> Thank you for having me.

>> The key to all this, pun intended, is the data. And the cataloging's looking good; there's a lot of buzz around cataloging, which you guys have as a core product, and your customers love the product. The world's changing. Where are we, what's the update?

>> Catalog is extremely important, not just to enterprise data but to the entire landscape by itself. But it's equally very exciting for MDM, because what it has the potential to do is transform how quickly people can get value out of MDM. A combination of metadata and artificial intelligence through machine learning is what can create self-configuring, self-operating, even self-maintaining Master Data Management. And that's extremely important, because in today's world, the digital world that we live in, the explosion of data, the explosion of data sources, and the new kinds of data that MDM is being asked to master, correlate, and link are becoming so huge that it's not humanly going to be possible to manage and curate this data. You need AI and ML, and the underlying metadata awareness that the catalog brings, in order to solve these new problems.

>> So Suresh, after you came onto theCUBE last year, you left and I said, there's a question I should've asked him. I'm going to put you on the spot. If you could create a new term for this Master Data Management, and where it's going, what would you call it?

>> You know, Master Data Management has been around not for very long, about eight or nine years, and it doesn't begin to describe the kind of problem that we're trying to solve here today. The only one that I can think of is 360s. It's more about getting the complete, holistic view of all the business-critical entities that you as an organization need to know. And 360 has traditionally been used around the customer, but it's not only about the customer. You need to understand what products the customer owns: a 360 around their product. You need to understand how those customers interact with employees: you need an employee 360. You need an asset 360. How can you even begin to do householding if you don't do a location 360?

>> I want to build on that. In many respects it's the ability to sustain the context of data for different personas, for different applications, for different utilizations. So in many respects, Master Data Management really is the contextual framework by which an organization consumes data. Have I got that right?

>> Absolutely. Another way to describe that would be: it is what delivers the consistent, authoritative description where you have the semantics being described completely differently in all of these cloud applications. We've gone very far away from the days, maybe ten years ago, where you had a handful of CRM and ERP applications across which you needed to disambiguate this information. Today, I was reading this morning that an organization on average has 1,050 different cloud applications, and three-quarters of them are not connected to anything. In describing, creating, and authoring information around all these business-critical entities, MDM is becoming the center of this ultra-connected universe. That's another way that I would look at it.
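To ground the 360 idea Suresh describes, here is a deliberately tiny sketch of the core consolidation step: fragments of one customer arrive from several systems, and a survivorship rule picks the attribute values that make up the single golden record. The source names, fields, and most-recent-wins rule are hypothetical illustrations, not Informatica's implementation.

```python
# Illustrative sketch only: merging fragmented records into a single
# "customer 360" golden record. Sources, fields, and the survivorship
# rule (most recent non-null value wins) are all hypothetical.
from datetime import date

records = [
    {"source": "crm",     "as_of": date(2018, 4, 2),
     "name": "Ann Chu", "email": "ann@example.com", "phone": None},
    {"source": "billing", "as_of": date(2018, 5, 1),
     "name": "Ann Chu", "email": None, "phone": "+1-555-0100"},
    {"source": "support", "as_of": date(2018, 1, 9),
     "name": "A. Chu", "email": "ann@example.com", "phone": None},
]

def golden_record(recs, fields):
    """For each attribute, keep the most recent non-null value."""
    merged = {}
    for field in fields:
        candidates = [r for r in recs if r.get(field) is not None]
        if candidates:
            merged[field] = max(candidates, key=lambda r: r["as_of"])[field]
    return merged

print(golden_record(records, ["name", "email", "phone"]))
# {'name': 'Ann Chu', 'email': 'ann@example.com', 'phone': '+1-555-0100'}
```

Most-recent-wins is only one possible trust policy; real MDM systems let you weight sources, data quality, and recency per attribute.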
>> It's also a key part of making data addressable, and we talked about this last year. But something that I have observed happening since last year: the storage vendors have been radically changing their view. They still have storage, but their data layer is sitting in all the clouds. That's interesting. That means they're seeing that there's a data abstraction layer sitting underneath Informatica, if you will. If that happens, then you have to be working across all the clouds. Are customers seeing that? Are they coming to you saying that? Or are you guys getting out front? How do you view that dynamic?

>> Customers are seeing that, and have been seeing that for the last two to three years, as they take these monolithic, very comprehensive, on-premise applications to a fragmented set of applications in the cloud. Where do they keep a layer where they have all this business-critical data in one place? They're beginning to realize that as these applications move to the cloud, they're going from one to a couple of hundred. Master data is being seen as the layer that connects all these pieces of information together. And very importantly for a lot of these organizations, it's data that's proprietary to them, that they don't necessarily want locked up in an application that may or may not be there a couple of years down the road.
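One way to picture that connecting layer, as described below, is a cross-reference keymap: each source application keeps its own ID for the same real-world customer, and the master layer links them all to one golden identifier. The system names and IDs in this sketch are invented for illustration; it shows the shape of the idea, not Informatica's data model.

```python
# Hypothetical sketch of a master data cross-reference: each source
# application has its own local ID for the same customer, and the
# master layer maps them to a single golden identifier. System names
# and IDs are invented for illustration.
xref = {
    ("salesforce_crm",  "0013000001"): "MDM-CUST-0001",
    ("netsuite_erp",    "CUST-88321"): "MDM-CUST-0001",
    ("zendesk_support", "usr_5512"):   "MDM-CUST-0001",
    ("salesforce_crm",  "0013000002"): "MDM-CUST-0002",
}

def master_id(system, local_id):
    """Resolve a source-system record to its master identifier."""
    return xref.get((system, local_id))

def source_records(mid):
    """All source-system records linked to one master identifier."""
    return [key for key, value in xref.items() if value == mid]

print(master_id("netsuite_erp", "CUST-88321"))  # MDM-CUST-0001
print(source_records("MDM-CUST-0001"))
# [('salesforce_crm', '0013000001'), ('netsuite_erp', 'CUST-88321'),
#  ('zendesk_support', 'usr_5512')]
```

Because the cross-reference lives outside any single application, the master data stays with the organization even when one of the source applications is retired.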
And where we've begun our investment in AINML is one of the most core capabilities around MDM, which is being able to recognize potential duplicates. Or detect non-obvious relationships across this vast set of master data that's coming in. We've applied AINML, and we'll see a demo of that tomorrow, and we'll here in Vegas, is using machine learning on top of the world's best matching algorithms, in order to infer what are the most appropriate strategies in order to link and discover these entities? And build a relationship graph, without a human having to introspect the data. >> One of our predictions is that over the course of the next few years companies are actually going to start thinking about networks of data. That data is going to get the network formation treatment. That devices, and pages, and identities and services that we've gotten in the past. It does seem as though MDM could play a very, very important role in as you said identifying patterns in the data, utilization of the data. What constitutes a data node? What constitutes an edge? Number of different ways of thinking about it. Is that the direction that you see? First of all, do you agree with that notion of networks of data? And is that the direction you see MDM playing in the future? >> Absolutely. Because up until now MDM was used to solve the problem of creating a distinct node of data. Where we absolutely had to ensure that whatever it is then node was describing is actually the entire, complete, comprehensive entity. Now the next step, the new frontier for MDM is now about trying to understand the relationships across those nodes. And absolutely. MDM is both about that curation that governs, which is very important for GDPR and all of the other initiatives out there. But equally importantly now being able to understand how these entities are related across those, the graph of all of those nodes now. >> Weave in the role that security's going to play. Because MDM can... Well we'll step back. Everybody has historically figured that either data is secure or it's not. Largely because it was focused on a device. And if you have a device, and secure the device, all the data on that device got equally secured. Nowadays data is much more in flight. It's all over the place. It's a lot of different sources. The role that security plays in crafting the node, in privatizing data and turning it into an asset, is really important. But it could really use the information that's MDM to ensure that we are applying the appropriate levels of security, and types of security. Do you see an evolving role between MDM and data security? >> I would actually describe it differently. I would say that security is now the core design principal for MDM. It has to be baked into everything that we do around designing MDM for the future. Because like you said, we've again gone away from some handful of sources, bringing data into MDM in a highly protected, on premise environment with a very limited number of consumers. Now we have thousands of applications delivering that data to MDM. And you've got thousands of business users. Tens of thousands of them. Applications all leveraging that master data in the context of those interfaces. Security has never bee more important for MDM. This is again another way of security. And I want to bring catalog back again. Catalog is going to automatically tell the MDM configuration developer that these are pieces of data that should be protected. This is PII data. The the health data. This is credit data. 
>> One of our predictions is that over the course of the next few years companies are actually going to start thinking about networks of data: that data is going to get the network-formation treatment that devices, pages, identities, and services have gotten in the past. It does seem as though MDM could play a very, very important role in, as you said, identifying patterns in the data and utilization of the data. What constitutes a data node? What constitutes an edge? There are a number of different ways of thinking about it. First of all, do you agree with that notion of networks of data? And is that the direction you see MDM playing in the future?

>> Absolutely, because up until now MDM was used to solve the problem of creating a distinct node of data, where we absolutely had to ensure that whatever that node was describing is actually the entire, complete, comprehensive entity. Now the next step, the new frontier for MDM, is about trying to understand the relationships across those nodes. So absolutely: MDM is both about that curation and governance, which is very important for GDPR and all of the other initiatives out there, but equally importantly it's now about being able to understand how these entities are related across the graph of all of those nodes.

>> Weave in the role that security's going to play. Well, we'll step back. Everybody has historically figured that either data is secure or it's not, largely because security was focused on a device: if you have a device and secure the device, all the data on that device got equally secured. Nowadays data is much more in flight. It's all over the place, from a lot of different sources. The role that security plays in crafting the node, in privatizing data and turning it into an asset, is really important, but it could really use the information that's in MDM to ensure that we are applying the appropriate levels of security, and types of security. Do you see an evolving role between MDM and data security?

>> I would actually describe it differently. I would say that security is now the core design principle for MDM. It has to be baked into everything that we do around designing MDM for the future. Because, like you said, we've gone away from a handful of sources bringing data into MDM in a highly protected, on-premise environment with a very limited number of consumers. Now we have thousands of applications delivering that data to MDM, and you've got thousands, tens of thousands, of business users and applications all leveraging that master data in the context of those interfaces. Security has never been more important for MDM. And I want to bring the catalog back in again: the catalog is going to automatically tell the MDM configuration developer that these are pieces of data that should be protected. This is PII data, this is health data, this is credit data. That way security is implicit in the design of those MDM initiatives.

>> I think that's huge; with cloud and the connected edge in the network, that is critical. I've got to ask you, and I know we're tight on time, I want to get one more question in. Define intelligent MDM. I've heard that term. What does that mean to you? You mentioned security by design in the beginning; I get what that is. But I heard the term intelligent MDM. What is the definition of that? What does it mean?

>> It really means MDM that is built for three new imperatives. One is being able to scale to what I would call digital scale; it's no longer enterprise scale. It is about being able to make sense of interactions and relationships, and being able to use the power of the catalog, and AI and ML, to connect all of these dots, because connecting these dots is what's going to deliver immense business value to those organizations. The second is facilitating the rise of the business user and their requirements: intuitive interfaces that allow them to perform their day-to-day interaction with MDM. And finally, time to value. Intelligent MDM should be up and running not in months or years, but in weeks, if not days. And this is where the power of the catalog and the power of machine learning can make this a reality.

>> That's a great clip; I'm going to clip that. That's awesome. And putting it into action, that's the key to success. Suresh, thanks for coming on. Great to see you.

>> Thank you very much.

>> As always, you've got the keys to the kingdom, literally. MDM is at the center of it all, the things going on with data, from cloud to edge computing, all connected. I'm John Furrier with Peter Burris, bringing you all the action here at Informatica World 2018. We'll be back with more after this short break.

Published Date : May 22 2018
