
Santosh Rao, NetApp | Accelerate Your Journey to AI


 

>> From Sunnyvale, California, in the heart of Silicon Valley, it's theCUBE, covering Accelerate Your Journey to AI. Brought to you by NetApp.

>> Hi, I'm Peter Burris. Welcome to another conversation here from the Data Visionary Center at NetApp's headquarters in beautiful Sunnyvale, California. I'm being joined today by Santosh Rao. Santosh is a Senior Technical Director at NetApp. Specifically, Santosh, we're going to talk about some of the challenges and opportunities associated with AI, and how NetApp is making that possible. Welcome to theCUBE.

>> Thank you, Peter, I'm excited to be here. Thank you for that.

>> So, Santosh, what is your role at NetApp? Why don't we start there.

>> Wonderful, glad to be here. My name is Santosh Rao, I'm a Senior Technical Director at NetApp, part of the Product Operations group, and I've been here 10 years. My role is to drive new lines of opportunity for NetApp and build up new product businesses. The most recent one has been AI, so I've been focused on bootstrapping and incubating the AI effort at NetApp for the last nine months. I'm excited to be part of this effort.

>> So, nine months of talking, both internally and spending time with customers too. What are customers telling you about NetApp's opportunities, and what does NetApp have to do to respond to those opportunities?

>> That's a great question. We are seeing a lot of focus around expanding the digital transformation to really get value out of the data, and customers are starting to look at AI, and deep learning in particular, as a way to prove the ROI on the opportunities that they've had. AI and deep learning require a tremendous amount of data. We're actually fascinated to see the size of the data sets that customers are starting to look at; a petabyte of data is sort of the minimum size of data set. So when you think about petabyte-scale data lakes, the first thing you want to think about is how you optimize the TCO for the solution. NetApp is seen as a leader in that, just because of our rich heritage of storage efficiency. A lot of these are video, image, and audio files, so you're seeing a lot of unstructured data in general, and we're a leader in NFS as well. So a lot of that starts to come together from a NetApp perspective. That's where customers see us as the leader in NFS, the leader in files, and the leader in storage efficiency, all coming together.

>> And you want to join that together with some leadership, especially in GPUs, so that leads to NVIDIA. You've announced an interesting partnership between NetApp and NVIDIA. How did that factor into your products, and where do you think that goes?

>> It's kind of interesting how that came about, because when you look at the industry, it's a small place. Some of the folks driving the NVIDIA leadership have been working with us in the past, when we bootstrapped converged infrastructures with other vendors; we're known to have been a vendor in the converged infrastructure space for 10 years. The way this came about was that NVIDIA is clearly a leader in GPUs and AI acceleration from a compute perspective, but they also have a long history of GPU virtualization and GPU graphics acceleration. When they look at NetApp, what NetApp brings to NVIDIA is the converged infrastructure, the maturity of that solution, the depth that we have in the enterprise, and the rich partner ecosystem. All of that starts to come together, and some of the players in this particular case have aligned in the past, working on virtualization-based converged infrastructures. It's an exciting time; we're really looking forward to working closely with NVIDIA.
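As a rough illustration of the storage-efficiency and TCO point a few exchanges above, here is a minimal sketch. Every ratio and dollar figure in it is a hypothetical placeholder (not a NetApp-published number); it only shows how an efficiency ratio changes the effective cost per usable terabyte of a petabyte-scale data lake.

```python
# Back-of-the-envelope TCO illustration for a petabyte-scale data lake.
# All ratios and costs are hypothetical placeholders, chosen only to show
# how storage efficiency changes the effective cost per usable terabyte.

logical_data_tb = 1000       # a 1 PB data lake, as discussed above
efficiency_ratio = 2.0       # assumed dedupe + compression ratio (hypothetical)
cost_per_raw_tb = 300.0      # assumed $ per TB of raw media (hypothetical)

raw_tb_needed = logical_data_tb / efficiency_ratio
total_cost = raw_tb_needed * cost_per_raw_tb
effective_cost_per_tb = total_cost / logical_data_tb

print(f"Raw capacity needed: {raw_tb_needed:.0f} TB")
print(f"Total media cost:    ${total_cost:,.0f}")
print(f"Effective $/TB:      ${effective_cost_per_tb:.0f}")
```

Doubling the efficiency ratio halves both the raw capacity purchased and the effective cost per usable terabyte, which is why storage efficiency dominates TCO conversations at this scale.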
>> So NVIDIA brings these lightning-fast machines, optimized for some of the new data types, data forms, and data structures associated with AI. But they've got to be fed; you've got to get the data to them. What is NetApp doing from a standpoint of the underlying hardware to improve the overall performance, and ensure that these solutions really scream for customers?

>> Yeah, it's kind of interesting, because when you look at how customers are designing this, they're thinking about digital transformation as, "What is the flow of that data? What am I doing to create new sensors and endpoints that create data? How do I flow the data in? How do I forecast how much data I'm going to create quarter over quarter, year over year? How many endpoints? What is the resolution of the data?" And then, as that starts to come into the data center, they've got to think about where the bottlenecks are. So you start looking at a wide range of bottlenecks. You look at the edge data aggregation, then you start looking at network bandwidth to push data into the core data centers. You've got to think smart about some of these things. For example, no matter how much network bandwidth you throw at it, you want to reduce the amount of data you're moving. Smart data movement technologies like SnapMirror, which NetApp brings to the table, are things that we uniquely enable compared to others. The fact of the matter is, when you take a common operating system like ONTAP, and you can layer it across the Edge, Core, and Cloud, that gives us some unnatural advantages. We can do things that you can't do in a silo, where you've got a commodity server trying to push data and having to do full raw copies of data into the data center. So we think smart data movement is a huge opportunity. When you look at the core, obviously it's a workhorse, and you've got the random sampling of data into this hardware. We think the A800 is a workhorse built for AI. It is a beast of a system in terms of performance: it does about 25 gigabytes per second just on a dual controller pair. You'll recall that we spent several years building out the foundation of Clustered ONTAP to allow us to scale to gigantic sizes, so a 24-node, or 12-controller-pair, A800 gets us to over 300 gigabytes per second and over 11 million IOPS, if you think about that. That's about four to six times greater than anybody else in the industry. So when you think about NVIDIA's investment in DGX, and the performance investments they've made there, we think only NetApp can keep up with that, in terms of performance.
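A quick back-of-the-envelope check of the A800 figures just quoted. The per-pair throughput and the cluster totals come from the conversation; the near-linear scale-out across HA controller pairs is our simplifying assumption.

```python
# Sanity-check the quoted A800 cluster numbers, assuming near-linear
# scale-out across HA (dual-controller) pairs.

gbps_per_ha_pair = 25        # ~25 GB/s per dual controller pair, as quoted
nodes = 24                   # maximum cluster size discussed
ha_pairs = nodes // 2        # two nodes per HA pair -> 12 pairs

aggregate_gbps = gbps_per_ha_pair * ha_pairs
print(f"{ha_pairs} HA pairs x {gbps_per_ha_pair} GB/s = {aggregate_gbps} GB/s")
# -> 12 x 25 = 300 GB/s, matching the "over 300 gigabytes per second" figure.

cluster_iops = 11_000_000    # "over 11 million IOPS", as quoted
print(f"Implied per-pair IOPS: ~{cluster_iops / ha_pairs:,.0f}")
# -> roughly 917,000 IOPS per HA pair under the same linear-scaling assumption.
```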
"And in the past you worked with a "variety of data source vendors. "So we think we can work with NetApp because, "you're not affiliated to any one of them, "and yet you're giving us that full range of solutions." So we think that performance is going to be key. Acceleration of compute workloads is going to demand orders of magnitude performance improvement. We think data set efficiencies and storage efficiencies is absolutely key. And we think you got to really look at PCO, because customers want to build these great solutions for the business, but they can't afford it unless vendors give them viable options. So it's really up to partners like NVIDIA and NetApp to work together to give customers the best of breed solutions that reduce the TCO, accelerate compute, accelerate the data pipeline, and yet, bring the cost of the overall solution down, and make it simple to deploy and pre integrated. These are the things customers are looking for and we think we have the best bet at getting there. >> So that leads to... Great summary, but that leads to some interesting observations on what customers should be basing their decisions on. What would you say are the two or three most crucial things that customers need to think about right now as a conceptualized, where to go with their AI application, or AI workloads, their AI projects and initiatives? >> So when customers are designing and building these solutions, they're thinking the entire data lifecycle. "How am I getting this new type of "data for digital transformation? "What is the ingestion architecture? "What are my data aggregation endpoints for ingestion? "How am I going to build out my AI data sources? "What are the types of data? "Am I collecting sensor data? Is it a variety of images? "Am I going to add in audio transcription? "Is there video feeds that come in over time?" So customers are having to think about the entire digital experience, the types of data, because that leads to the selection of data sources. For example, if you're going to be learning sensor data, you want to be looking at maybe graph databases. If you want to be learning log data, you're going to be looking at log analytics over time, as well as AI. You're going to look at video image and audio accordingly. Architecting these solutions requires an understanding of, what is your digital experience? How does that evolve over time? What is the right and optimal data source to learn that data, so that you get the best experience from a search, from an indexing, from a tiering, from analytics and AI? And then, what is the flow of that data? And how do you architect it for a global experience? How do you build out these data centers where you're not having to copy all data maybe, into your global headquarters. If you're a global company with presence across multiple Geo's, how do you architect for regional data centers to be self contained? Because we're looking at exabyte scale opportunities in some of these. I think that's pretty much the two or three things that I'd say, across the entire gamut of space here. >> Excellent, turning that then into some simple observations about the fact that data still is physical. There's latency issues, there's the cost of bandwidth issues. There's other types of issues. This notion of Edge, Core, Cloud. How do you see the ONTAP operating system, the ONTAP product set, facilitating being able to put data where it needs to be, while at the same time creating the options that a customer needs to use data as they need to use it? 
>> Excellent. Turning that, then, into some simple observations about the fact that data still is physical: there are latency issues, there's the cost of bandwidth, there are other types of issues. Given this notion of Edge, Core, Cloud, how do you see the ONTAP operating system, the ONTAP product set, facilitating being able to put data where it needs to be, while at the same time creating the options that a customer needs to use data as they need to use it?

>> The fact of the matter is, these things cannot be achieved overnight. It takes a certain amount of foundational work that, frankly, takes several years. The fact that ONTAP can run on small form factor hardware at the edge is a journey that we started several years ago. The fact that ONTAP can run on commodity white box hardware has been a journey that we have run over the last three, four years. Same thing in the cloud: we have virtualized ONTAP to the point that it can run on all hyperscalers, and now we are in the process of making ONTAP consumable as a service, where you don't even know that it is, or has been, an infrastructure product. So the process of building an Edge, Core, and Cloud data pipeline leverages the investments that we've made over time. When you think about the scale of compute, data, and performance needed, that's a five to six year journey in Clustered ONTAP, if you look at NetApp's past. These are all elements that are coming together from a product and solution perspective, but the reality is that we're leveraging years and years of investment that NetApp engineering has made, in a way that the industry really did not invest in the same areas. So when we compare and contrast what NetApp has done versus the rest of the industry: at a time when people were building monolithic engineered systems, we were building software-defined architectures. At a time when they were building tightly coupled systems for the traditional enterprise, we were building flexible, scale-out systems that assumed you would want to scale in modular increments. Now, as the world has shifted from enterprise into third platform and webscale, we're finding that all those investments NetApp made over the years are really starting to pay off for us.

>> Including some of the investments in how AI can be used to handle how ONTAP operates at each of those different levels of scale.

>> Absolutely, yes.

>> Santosh Rao, Technical Director at NetApp, talking about AI and some of the new changes in the relationship between AI and storage. Thanks very much for being on theCUBE.

>> Thank you, appreciate it.

Published Date: Aug 1, 2018

SUMMARY:

Peter Burris interviews Santosh Rao, Senior Technical Director at NetApp, at the Data Visionary Center in Sunnyvale. Rao, who has spent the past nine months incubating NetApp's AI effort, describes petabyte-scale data lakes dominated by unstructured video, image, and audio data; the new NetApp and NVIDIA partnership; smart data movement with SnapMirror and ONTAP across Edge, Core, and Cloud; and the A800's performance (about 25 GB/s per controller pair, scaling to over 300 GB/s and 11 million IOPS in a 24-node cluster). He closes with advice on architecting the full data lifecycle and explains how years of ONTAP investment underpin an Edge-to-Core-to-Cloud data pipeline.


Jim McHugh, NVIDIA and Octavian Tanase, NetApp | Accelerate Your Journey to AI


 

>> From Sunnyvale, California, in the heart of Silicon Valley, it's theCUBE, covering Accelerate Your Journey to AI. Brought to you by NetApp.

>> Hi, I'm Peter Burris, with theCUBE and Wikibon, and we're here at the NetApp Data Visionary Center today to talk about NetApp, NVIDIA, AI, and data. We're being joined by two great guests. Jim McHugh is the Vice President and General Manager of Deep Learning Systems at NVIDIA, and Octavian Tanase is the Senior Vice President of ONTAP at NetApp. Gentlemen, welcome to theCUBE.

>> Thanks for having me.

>> So Jim, I want to start with you. NVIDIA's been all over the place regarding AI right now. You've had a lot of conversations with customers. What is the state of those conversations today?

>> Well, I mean, it really depends on the industry that the customer's in. AI, at its core, is really a horizontal technology, right? It's when we engage with a customer, and their data and their vertical domain knowledge, that it becomes very specialized from there. So you're seeing a lot of acceleration where there's been a lot of data, right? It's no secret that you're seeing a lot around autonomous driving vehicles and the activity going on there. Health care, right? Because when you can marry the technology of AI with the years, and years, and years of medical research that's going on out there, incredible things come out. We've seen some things around looking at cancer cells; we're looking at your retina being sort of the gateway to so many health indications. We can tell you whether you have everything from dengue fever to malaria, or whether you're susceptible to hypertension. In all of these kinds of things, we're finding that data is actually letting us be superhuman in our knowledge about what we're trying to accomplish. Now the exciting thing, if you grew up like we did in the IT industry, is that you're seeing it go into mainstream companies. You're seeing it in financial services, where for years the quants were very specialized and wrote their own apps, and now they've figured out, hey, look, I can broaden this out. You're seeing it in cybersecurity, right? For years, if you wanted to check malware, what did we do? We looked up the definition in a database and said, okay, yeah, that's malware, stop it, right? But now they're learning the characteristics of malware. They're studying the patterns of it, and that's kind of what it is. Go industry by industry, and tell me if there's enough data to show a pattern, and AI will come in and change it.
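Jim's cybersecurity example contrasts signature lookup with learning the patterns of malware. Below is a minimal, self-contained sketch of that contrast; the "signature database", the behavioral features, and the labeled samples are all invented for illustration, and a real detector would use far richer features and models than this toy perceptron.

```python
# Toy contrast: signature-based vs. pattern-based malware detection.
# Everything here (hashes, features, samples) is made up for illustration.

KNOWN_SIGNATURES = {"deadbeef", "cafebabe"}  # hypothetical hash database

def signature_check(file_hash: str) -> bool:
    # Old approach: exact lookup; misses anything not already catalogued.
    return file_hash in KNOWN_SIGNATURES

# New approach: learn a weight per behavioral feature from labeled examples.
# Features: [writes_to_system_dir, spawns_many_processes, calls_crypto_api]
samples = [
    ([1, 1, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 1),  # malicious
    ([0, 0, 0], 0), ([1, 0, 0], 0), ([0, 0, 1], 0),  # benign
]

weights, bias = [0.0, 0.0, 0.0], 0.0
for _ in range(20):  # a few perceptron training passes
    for x, y in samples:
        pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
        err = y - pred
        weights = [w + 0.1 * err * xi for w, xi in zip(weights, x)]
        bias += 0.1 * err

def pattern_check(features) -> bool:
    return sum(w * xi for w, xi in zip(weights, features)) + bias > 0

# A brand-new file with no known signature but malware-like behavior:
print(signature_check("0badf00d"))  # False: signature lookup misses it
print(pattern_check([1, 1, 1]))     # True: the learned pattern flags it
```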
>> Enough data to show a pattern? Well, that kind of introduces NetApp to the equation: a company that's been, especially more recently, very focused on the relationship between data and business value. Octavian, what has NetApp seen from customers?

>> Well, we know a little bit about data. We've been the stewards of that data in the enterprise for more than 25 years, and AI comes up in every single customer conversation. Customers are looking to leverage AI in their digital transformation, so we see this desire to extract more value out of the data, and make better decisions, faster decisions, in every sector of the industry. So it's ubiquitous, and we are uniquely positioned to enable customers to do their data management wherever data is being created: at the edge, in the traditional data center (what we call the core), or in the cloud. We enable this seamless data management via the data fabric architecture and vision.

>> So: data fabric, data management, the ability to extract that and turn it into patterns. Sounds like a good partnership, Jim?

>> Yeah, we say data's the new source code. Really, with AI, we're changing the way software's written. Instead of having humans go in and do the feature engineering and feature sets that would be required, you're letting data dictate and guide you on what the features of the software are going to be.

>> So right now we've got the GPU (graphics processing unit) revolution, with you guys driving that, and we've got some real advances in how the data fabric works. You have come together and created a partnership. Talk a little bit about that partnership.

>> Well, when we started down this journey, and it began, really, in 2012 in AI, right? So when Alex Krizhevsky discovered how to create AlexNet, NVIDIA focused on how to meet the needs of the data scientists every step of the way. In the beginning, that started around making sure they had enough compute power to solve things that they couldn't solve before. Then we started focusing on the software that was required, right? How do we get them the frameworks they need? How do we integrate that? How do we get things more tuned, so they can get more and more performance? Our goal has always been: if we can make the data scientists more productive, we can actually help democratize AI. As AI is starting to take hold and get more deployments, obviously we need the data. We need to help them with the data ingest, and then deployments are starting to scale out to the point where we need to make this easy, right? We need to take away the headaches of trying to figure out what all the configurations are between our product lines, and the networking product lines as well. We have to bring that whole, holistic picture, and do it from there. So our goal, and what we're seeing, is that not only have we made the data scientists more productive, but if we can make the people who stand up the equipment for them more productive as well, the data scientists, she and he, can get back to doing their real core work. They can add value, and really change a lot of the things that are going on in our lives.

>> So: fast, flexible, simpler to use. Does that kind of capture, or summarize, some of the strategies that NetApp has for artificial intelligence workloads?

>> Absolutely. I think simplicity is one of the key attributes, because the audience for some of the infrastructure that we're deploying together is the data scientist, who wants to adopt that solution with confidence, and it has to be simple to deploy; he shouldn't have to think about the infrastructure. It's also important to have an integrated approach, because, again, in the future more of the data will be created at the edge than in the core, and more in the cloud than in the traditional data center. So that seamless data management across the edge, to the core, to the cloud is also important. And scalability is also important, because customers will look to start simple, perhaps with a small deployment, and want the ability to seamlessly scale. Currently, the solution that we just announced beats the competition by 4x in terms of performance and capability.

>> So as we think about where we're going, this is a crucial partnership for both companies, and it's part of a broader ecosystem that NVIDIA's building out. How does the NetApp partnership fit into that broader ecosystem?
>> Well, starting with our relationship and the announcement we made: it should be no secret that we engaged our channel partners, right? 'Cause they are that last mile. They are, a lot of times, those trusted advisors of our customers, and we want them to add this to their portfolio and take it out to customers. I think we've had resounding feedback so far that this is something they can definitely take and drive out. On top of that, NVIDIA is focused, again, on this new way of writing software, right? The software that leverages the data to do the things. And so we have an ecosystem that's built around our Inception program, which is thousands of startups. If you add to that the thousands of startups coming through Sand Hill and the investment community that are based around NVIDIA compute as well, all of these guys are standardizing, saying, hey, we need to leverage this new model, we need to go as quickly as possible. And what we've pulled together, together, is the ability for them to do that, whether they want to do it in the data center, or whether they want to go with one of our joint cloud providers and do it through their service as well.

>> So a great partnership that's capable of creating a great horizontal platform. It's that last mile that does the specialization. Have I got that right?

>> It's the last mile helping reach the customers who do the specialization. The customers, with their data, their vertical domain expertise, and what their data scientists bring to it: look, they're creating the magic. We're giving them the tools to make sure they can create that magic as easily as possible.

>> That's great. So one of the things, Octavian, that Jim mentioned was that industries able to generate significant value out of data are moving first. One of the more important areas is IT operations, because we have a lot of devices generating a lot of data. How is NetApp going to use AI in your product set to drive further levels of productivity, from a simplicity standpoint, so customers can, in fact, spend more time on creating value?

>> So, interestingly enough, we've been users, or practitioners, of AI for quite a while. I don't know if a lot of people in the audience know this, but we have a predictive analytics system called Active IQ, which is an implementation of AI in the enterprise. We take data from more than 300 thousand assets that we have deployed in the field, more than 70 billion data points every day, and we correlate that together. We put the data in a data lake, we train a cluster, and we enable our customers to derive value and best practices from the data that we collect across the broader set of deployments we have in the field. So this is something we are sharing with our customers as a blueprint, and we're looking to drive ubiquity in the type of solutions that we enable customers to build on top of our joint infrastructure.

>> Excellent. Jim McHugh of NVIDIA and Octavian Tanase of NetApp: a great partnership, represented right here on theCUBE. Thanks very much for being on theCUBE tonight.

>> All right.

>> Thank you.

>> Thank you for having us.

(electronic music)
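For a rough sense of scale behind the Active IQ figures quoted above, a back-of-the-envelope division. Only the two quoted totals come from the conversation; the per-asset rates are simple arithmetic on top of them.

```python
# Rough per-asset telemetry rate implied by the Active IQ figures above.

assets = 300_000                      # "more than 300 thousand assets"
data_points_per_day = 70_000_000_000  # "more than 70 billion data points every day"

per_asset_daily = data_points_per_day / assets
per_asset_per_second = per_asset_daily / (24 * 60 * 60)

print(f"~{per_asset_daily:,.0f} data points per asset per day")
print(f"~{per_asset_per_second:.1f} data points per asset per second")
# -> roughly 233,000 per day, about 2.7 per second per asset,
#    before any fleet-wide correlation.
```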

Published Date: Aug 1, 2018

SUMMARY:

Peter Burris talks with Jim McHugh, Vice President and General Manager of Deep Learning Systems at NVIDIA, and Octavian Tanase, Senior Vice President of ONTAP at NetApp, about the two companies' AI partnership. McHugh traces AI adoption from autonomous vehicles and health care to financial services and cybersecurity, arguing that data is the new source code. Tanase describes NetApp's data fabric for seamless data management across edge, core, and cloud, the simplicity and scalability of the joint solution, and NetApp's own use of AI in Active IQ, which correlates more than 70 billion data points a day from over 300 thousand deployed assets.
