
Arti Garg & Sorin Cheran, HPE | HPE Discover 2020


 

>> Male Voice: From around the globe, it's theCUBE covering HPE Discover Virtual Experience, brought to you by HPE.

>> Hi everybody, you're watching theCUBE, and this is Dave Vellante in our continuous coverage of the Discover 2020 Virtual Experience, HPE's virtual event. theCUBE is here, theCUBE virtual. We're really excited, we've got a great session here. We're going to dig deep into machine intelligence and artificial intelligence. Dr. Arti Garg is here. She's the Head of Advanced AI Solutions and Technologies at Hewlett Packard Enterprise. And she's joined by Dr. Sorin Cheran, who is the Vice President of the AI Strategy and Solutions Group at HPE. Folks, great to see you. Welcome to theCUBE.

>> Hi.

>> Hi, nice to meet you, hello!

>> Dr. Cheran, let's start with you. Maybe talk a little bit about your role. You've had a variety of roles, and maybe what's your current situation at HPE?

>> Hello! Hi, so currently at HPE I'm driving the Artificial Intelligence Strategy and Solutions group, which is looking at how we bring solutions across the HPE portfolio, looking at every business unit but also at the various geos. At the same time, the team is responsible for building the AI strategy for the entire company. We're working closely with the field, we're working closely with the teams that are facing the customers every day, and we're also working very closely with the various groups to make sure that whatever we build holds water for the entire company.

>> Dr. Garg, maybe you could share with us your focus these days?

>> Yeah, sure. So I'm also part of the AI Strategy and Solutions team, under Sorin as our new vice president in that role. What I'm focused on is really trying to understand what some of the emerging technologies are, whether those be new processor architectures or advanced software technologies, that could really enhance what we can offer to our customers in terms of AI, and exploring what makes sense, how we bring them to our customers, and what the right ways are to package them into solutions.

>> So everybody's talking about how digital transformation has been accelerated. If you're not digital, you can't transact business. AI is being infused into every application. And now people are realizing, "Hey, we can't solve all the world's problems with labor." What are you seeing in terms of AI being accelerated throughout the portfolio and with your customers?

>> That's a very good question, because we've been talking about digital transformation for some time now. And I believe most of our customers initially thought that the one thing they had was time, thinking, "Oh yes, I'm going to somehow at one point apply AI, and somehow at one point I'm going to figure out how to build the data strategy, or how to use AI in my different lines of business." What happened with COVID-19 is that we lost that one thing: time. So I think what we see in our customers is the idea of accelerating their data strategy, moving on from, let's say, their existing data center models, and trying to understand how they capture data and how they accelerate the adoption of AI within the various business units. Why? Because they understand that the way they do business has changed completely; they need to understand how to adopt new business models, and they need to understand how to look for value pools where there were none before.
So while most of our customers initially spent a lot of time in never-ending POCs, trying to investigate where they wanted to go, today they do want to accelerate the application of AI models and the building of data strategies: how do they use all of this data? How do they capture the data to make sure they can pursue new business models, new value pools, new customer experiences, and so on? So I think what we've seen in the past, let's say, three to six months is that we lost time, but the shift toward adoption of analytics, AI, and data strategy has accelerated a lot, simply because customers realize they need to get ahead of the game.

>> So Dr. Garg, maybe you could talk about how HPE is utilizing machine intelligence during this pandemic, maybe helping some of your customers get ahead of it, or at least trying to track it. How are you applying AI in this context?

>> So I think Sorin sort of spoke to this: one of the things about adopting AI is that it's very transformational for a business, it changes how you do things, and you need to adopt new processes to take advantage of it. So what I would say is, right now we're hearing from customers who recognize that the context in which they're doing their work is completely different, and they're exploring how AI can help them meet the challenges of that context. One example might be how AI and computer vision can be coupled together in a way that makes it easier to reopen stores, or ensures that people are distancing appropriately in factories. So I would say it's the beginning of these conversations, as customers and businesses try to figure out how to operate in the new reality that we have. And I think it's a pretty exciting time. And to the point that Sorin just made, there's a lot of openness to new technologies that there wasn't before, because there's a willingness to change business processes to really take advantage of those technologies.

>> So Dr. Cheran, I probably should have started here, but help us understand HPE's overall strategy with regard to AI. I certainly know that you're using AI to improve IT, the InfoSight product and capability via the Nimble acquisition, et cetera, and bringing that across the portfolio. But what's the strategy for HPE?

>> So, yeah, thank you. That's (laughs) a good question. You started with a couple of our acquisitions from the past, because obviously there's Nimble, and we've talked a lot about our efforts to bring InfoSight across the portfolio. But in the past couple of months, let's say close to a year, we've been announcing a lot of other acquisitions: we've been talking about Tuteybens, we've been talking about Scytale, we've been talking about Cray, and so on. What we're doing at HPE now is bringing all of this IP together into one place and trying to help our customers wherever they are in their journey. If you look, for example, at what we actually got with the Cray play, it was not only the hardware; there's also a lot of software and a lot of IP around optimization and so on. Also within our own labs, we've been investigating AI, for example swarm learning, or accelerators, and a lot of other activity. So right now what we're trying to help our customers with is understanding how they get from the POC stage to the production stage.
So what we are trying to do is accelerate their adoption of AI, starting from an optimized platform and infrastructure, up to the solution they are actually going to use to solve their business problems, and wrapping all of that with services, consumed either on-prem or as a service. So practically, what we want to do is help our customers optimize, orchestrate, and operationalize AI. Because the problem for our customers is not starting a POC; the problem is, how do I take everything I've been developing or working on and put it in production at the edge, and then keep it maintained in production in order to get insights and take actions that actually help the enterprise? So basically, we want to be data-driven, edge-centric, and cloud-enabled, and we want to help our customers move from POC into production.

>> Now, you work with obviously a lot of data folks, data-driven companies, data scientists, hands-on practitioners in this regard. One of the challenges I hear a lot from customers is that they're trying to operationalize AI, put AI into production; they have data in silos; they spend all their time munging data. You guys have made a number of acquisitions, not the least of which is Cray, and obviously MapR, the data specialist, and my friend Kumar's company BlueData. So what do you see as HPE's role in terms of helping companies operationalize AI?

>> So I think that a big part of operationalizing AI is moving away from the POC to really integrate AI into the business processes you have, and also into the pre-existing IT infrastructure; you talked about how you might already have siloed data. That's something we know very well at HPE: we understand a lot of the IT that enterprises already have, the incumbent IT and those systems. We also understand how to put together integrated systems that include a lot of different types of computing infrastructure, whether that be different types of servers or different types of storage; we have the ability to bring all of that together. And then we also have the software that allows you to talk to all of these different components and build applications that can be deployed in the real world in a way that's easy to maintain, scale, and grow, as your AI applications almost invariably get more complex and involve more outputs and more inputs. So one of the important things as customers try to operationalize AI, I think, is knowing that it's not just about solving the problem you're currently solving. It's not just operationalizing the solution you have today; it's ensuring that you can continue to operationalize new things or additional capabilities in the future.

>> I want to talk a little bit about AI for good. We talk about AI taking away jobs, but the reality is, when you look at the productivity data, for instance, in the United States and in Europe, it's been declining for the last several decades. And so I guess my point is that we're not going to be able to solve some of the world's problems in the coming decades without machine intelligence. I mean, you think about health care, you think about feeding populations, you think about facing things like pandemics, climate change, energy alternatives, et cetera. Productivity is coming down; machines are a potential opportunity. So there's an automation imperative. Do you feel, Dr. Cheran, that people are sort of beyond that machines-replacing-humans issue?
Is that still an issue, or has the pandemic sort of changed that?

>> So I believe it is; it used to be a very big issue, you're right. Every time we were speaking at a conference, and every time you were looking at the future of AI, two scenarios came into play, right? The first one, where machines are here to actually take our work, and then the second one, an even darker version, where the Terminator is coming, and so forth. So basically those are the two, the lesser evil and the greater evil, and we still see that narrative coming up over and over again. And I believe that 2019 was a year of reckoning, where people started to realize that not only can we practice responsible AI, but we can actually create AI that is trustworthy, AI that is fair, and so on. We also understood in 2019, and it was highly debated everywhere, which parts of our jobs are going to be replaced, like the parts that are mundane or that can easily be automated. With COVID-19, what happened is that people started to look at AI differently. Why? Because people started to look at data differently: how do I create this core of data which is trusted, secure, and so on, and they are trying to understand that if the data is trusted and secure, somehow the AI will be trusted and secure as well. Now, if I shift forward, as you said, and try to understand, for example on the manufacturing floor, how do I add more machines, or how do I replace humans with machines simply because I need to make sure that I am able to stay in production, from that perspective I don't believe the way people look at AI from the job-market perspective has changed a lot. What has changed is the view of how AI is helping us battle certain crises, how AI is helping us, for example, in health care. But the idea of AI taking over parts of our jobs, or automating parts of our jobs, we are not actually past yet, even if 2018, and even more so 2019, was the year when AI through automation replaced a number of jobs but at the same time, as I was saying, also the first year when AI created more jobs, because once you displace work in one place, you actually create more work and more opportunities in other places as well. So I still don't believe the feeling has changed. But we did realize that AI is a lot more valuable than we thought, and that it can help us through some of our darkest hours and also allow us to get better and faster insights.

>> Well, machines have always replaced humans, and now for the first time in history they're doing so in really cognitive functions, in a big way. But I want to ask you guys, and I'll start with Dr. Arti, a series of questions that I think underscore the impact of AI and the central role that it plays in companies' digital transformations; we talk about that a lot. The questions I'm going to ask, I think, will hit home in terms of some hardcore examples, and if you have others I'd love to hear them. So I'm going to start with Arti: when do you think machines will be able to make better diagnoses than doctors? Are we actually there today already?

>> So I think it depends a little bit on how you define that. And I'm just going to preface this by saying both of my parents are physicians.
So I have a little bit of bias in this space. But I think that humans bring creativity and a certain type of intelligence that it's not clear to me we even know how to model with a computer. And diagnosis has two components. One is recognizing patterns and being able to say, "I'm going to diagnose this disease that I've seen before." I think we are getting to the place, and there are certain examples where it's just starting to happen, where if the data you need to make a diagnosis is well understood, a machine may be able to recognize those subtle patterns better. But there's another component of doing diagnosis, when it's not obvious what you're looking for. You're trying to figure out what the actual set of diseases is that you might be looking at. And I think that's where we don't really know how to model the type of inspiration and creativity that humans still bring to the things they do, including medical diagnoses.

>> So Dr. Cheran, my next question is, when do you think that owning and driving your own vehicle will become largely obsolete?

>> (laughs) Well, my son is six years old now, and I'm working with a lot of companies to make sure that he will not get his driving license, right? So it depends who you're asking and the level of autonomy you're looking at; you just mentioned level five, most likely. There are a lot of dates out there, so some people say 2030. I believe that for my son, in most of the cities in the US but also most of the cities in Europe, by the time he's 18, let's say in 2035, I'll try to make sure that I'm working with the right companies so that he never needs to get a driving license.

>> My next question, maybe both of you can answer: do you think traditional banks will lose control of the payment system?

>> So that's an interesting question, because I think it's broader than an AI question, right? It goes into some other emerging technologies, including distributed ledgers and the more secure forms of blockchain. I think that's a challenging question to my mind, because it's bigger than the technology. It's got economic and policy implications that I'm not sure I can answer.

>> Well, that's a great answer, 'cause I agree with you, Arti. I think that governments and banks have a partnership, an important partnership for social stability. But similarly, we've seen now, Dr. Cheran, in retail, obviously COVID-19 has affected retail in a major way, especially physical retail. Do you think that large retail stores are going to go away? I mean, we've seen many in Chapter 11 at this point. How much of that is machine intelligence versus just social change versus digital transformation? It's an interesting question, isn't it?

>> So I think most of the... Right now the retailers are here to stay, I guess, for the next couple of years. But moving forward, I think it comes down to their capacity for adapting to things like walk-in stores, stores where basically you just go in and there are no shop assistants, and you don't even need a credit card to pay; you're able to pay either with your face, or with your phone, or with a small chip, and so on. So I believe that for the next couple of years, obviously, they are here to stay. Moving forward, we'll see artificial intelligence and robotics applied everywhere in the store.
Most likely it's their capacity for adapting to the new normal, which means placing AI everywhere and optimizing the walk-in experience by predicting when and how to guide customers through the shop, and so on, that will allow them to survive. I don't believe that everything is going to be done online, especially from the retailer perspective. We've seen a big shift with COVID-19, but as I was reading the other day, especially in France now that the country has opened again, we've seen a very quick pickup among retailers, with people actually visiting the stores as well. So it's going to be a very interesting five to 10 years, and most of the companies that have adapted to digital transformation and to the new normal, I think, are here to stay. Some of them obviously are going to take some time.

>> I mean, I think it's an interesting question too that you're really triggering in my mind: when you think about the framework for how companies are going to come back and come out of this, it's not just digital, though that's a big piece of it, like how digital is the business; can they physically distance? I mean, I don't know how sports arenas are going to be able to physically distance; that's going to be interesting to see. How essential is the business? And if you think about the different industries, it really is quite different across those industries. And obviously digital plays a big factor there. But maybe we could end on that: your final thoughts, and maybe any other things you'd like to share with our audience?

>> So I think one of the things that's interesting anytime you talk about adopting a new technology: right now we happen to be seeing this huge uptick in AI adoption happening right at the same time as this massive shift in how we live our lives, and an acceptance, I think, that we can't just go back to the way things were. As you mentioned, there will probably be a continued desire to maintain social distancing. I think it's going to force us to rethink, a lot, why we do things the way we do now. The retail environments that we have, the transportation solutions that we have, they were adopted in many cases in a very different context, in terms of what people needed to do on a day-to-day basis in their lives and what the state of available technologies was. We're being thrust into, and forced to reckon with, what is it I really need to do to live my life, and what are the technologies I have available to answer that? And I think it's really difficult to predict right now what people will think is important about a retail experience. I wouldn't be surprised if you start to find in-person retail actually becoming much less technologically aided, and much more about having the ability to talk to a human being and get their opinion, and maybe the tactile sense of being able to touch new clothes, or whatever it is. So it's really difficult right now to predict what things are going to look like even a year or two from now from that perspective. What I feel fairly confident about is that people are really starting to understand and engage with new technologies, and they're going to be really open to thinking about what those new technologies enable them to do in this new way of living that we're probably going to be entering pretty soon.

>> Excellent! All right, Sorin, bring us home. We'll give you the last word on this topic.
>> So I wanted to... I agree with Arti, because what these three months of staying at home and of business shutting down allowed us to do was to have a very big reset, let's say a great reset. Basically, we realized that all the things we've taken for granted, like our freedom of movement, our technology, our interactions with each other, suddenly everything needed to change. And the one thing that we kept doing was interacting with each other remotely, and interacting with our peers in the house, and so on. But the other thing that stayed was generating data, and data is here to stay, because we leave traces of data everywhere we go: we leave traces of data when we put our watch on, when we're playing with our phone, or when we consume digital content. So what these three months reinforced for me personally, but also for some of our customers, was that the data is here to stay. And even if the world shut down for three months, we did not generate less data. Data was there; on the contrary, in some cases there was more data. So data is the main enabler for the new normal, which is going to pick up, and data will allow us to understand how to improve the customer experience in the new normal, most likely using AI. As I was saying at the beginning: how do I operate a new business model? How do I find who to partner with? How do we go to market together? How do I make collaborations more secure, and so on? And finally, where do I find new value pools? For example, how do I still enjoy having a beer in a pub, right? Because suddenly, during COVID-19, that wasn't possible. I have a very nice place around the corner, but it's actually been shut. And I'm not talking just about beer; in general, the finances are different, and the pools of data, the pools from which you're actually getting value, are different as well. So data is here to stay, and AI is definitely going to be accelerated, because it needs to use data to allow us to adapt to the new normal in the digital transformation.

>> A lot of unknowns, but certainly machines and data are going to play a big role in the coming decade. I want to thank Dr. Arti Garg and Dr. Sorin Cheran for coming on theCUBE. It's great to have you. Thank you for a wonderful conversation. Really appreciate it.

>> Thank you very much.

>> Thanks so much.

>> All right. And thank you for watching everybody. This is Dave Vellante for theCUBE and the HPE Discover 2020 Virtual Experience. We'll be right back right after this short break. (upbeat music)

Published Date: June 23, 2020


Making AI Real – A practitioner’s view | Exascale Day


 

>> Narrator: From around the globe, it's theCUBE, with digital coverage of Exascale Day, made possible by Hewlett Packard Enterprise.

>> Hey, welcome back. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios for our ongoing coverage and celebration of Exascale Day: 10 to the 18th, on October 18th, 10 with 18 zeros. It's all about big, powerful, giant computing, and computing resources and computing power. And we're excited to invite back our next guest; she's been on before. She's Dr. Arti Garg, Head of Advanced AI Solutions and Technologies for HPE. Arti, great to see you again.

>> Great to see you.

>> Absolutely. So before we jump into Exascale Day, I was just looking at your LinkedIn profile; it's such an interesting career. You've done time at Lawrence Livermore, you've done time in the federal government, you've done time at GE and in industry. I'd just love it if you could share a little bit of your perspective, going from hardcore academia to some government positions, then into industry as a data scientist, and now, with originally Cray and now HPE, looking at it really from more of the vendor side.

>> Yeah. So I think in some ways I'm like a lot of people who've had the title of data scientist somewhere in their history, in that there's no single path to working in this industry. I come from a scientific background; I have a PhD in physics, so that's where I started working with large data sets. I think of myself as a data scientist from before the term data scientist was a term. And I think it's an advantage to have been able to see this explosion of interest in leveraging data to gain insights, whether that be into the structure of the galaxy, which is what I used to look at, or into maybe new types of materials that could advance our ability to build lightweight cars or safety gear. It allows you to take a perspective where you not only understand what the technical challenges are, but also what the implementation challenges are, and why it can be hard to use data to solve problems.

>> Well, I'd just love to get your perspective, because you are into data, you chose that as your profession, and you probably run with a whole lot of people that are also like-minded in terms of data. As an industry and as a society, we're trying to get people to do a better job of making data-based decisions, getting away from their gut and actually using data. I wonder if you can talk about the challenges of working with people who don't come from such an intense data background, getting them to understand the value of a more data-driven decision-making process, or that it's even worth the effort, because it's not easy to get the data, cleanse the data, trust the data, and get the right context. Working with people that don't come from that background and aren't so entrenched in that point of view, what surprises you? How do you help them? What can you share in terms of helping everybody become a more data-centric decision maker?

>> So I would actually rephrase the question a little bit, Jeff, and say that I think people have always made data-driven decisions. It's just that in the past we had less data available to us, or the quality of it was not as good.
And so as a result, most organizations have organized themselves to make decisions and to run their processes based on a much smaller and more refined set of information than is currently available, given our ability to generate lots of data through software and sensors, our ability to store that data, and then our ability to run a lot of computing cycles and a lot of advanced math against that data, to learn things that maybe in the past took hundreds of years of experiments and scientists to understand. And so before I jump into how you overcome that barrier, I'll use an example, because you mentioned I used to work in industry; I used to work at GE. And one of the things that I often joked about is the number of times I discovered Bernoulli's principle in data coming off of GE jet engines. You could do that overnight, processing these large data sets, but of course historically it took hundreds of years to really understand these physical principles. And so I think when it comes to how we bridge the gap between people who are adept at processing large amounts of data and running algorithms to pull insights out, I think it's both sides. I think it's those of us who come from a technical background really understanding the way decisions are currently made, the way processes and operations currently work at an organization, and understanding why those things are the way they are; maybe there are security or compliance or accountability concerns that a new algorithm can't just replace. And so I think it's on our end to really try to understand, and make sure that whatever new approaches we're bringing address those concerns. And I think for folks who aren't necessarily coming from a large data set and analytical background, and when I say analytical I mean in the data science sense, not in the sense of thinking about things in an abstract way, it's to really recognize that these are just tools that can enhance what they're doing, and they don't necessarily need to be frightening. Because the people who have been, say, operating electric grids for a long time, or fixing aircraft engines, they have a lot of expertise and a lot of understanding, and that's really important to making any kind of AI-driven solution work.

>> That's great insight, but I do think one thing that's changed: you come from a world where you had big data sets, so you kind of have a big data set point of view, where I think a lot of decision makers didn't have that data before. So we won't go through all the up-and-to-the-right explosions of data, and obviously we're talking about Exascale Day, but I think for a lot of processes now, the amount of data that they can bring to bear so dwarfs what they had in the past, that before they even consider how to use it they still have to contextualize it, and they have to manage it, and they have to organize it, and there are data silos. So there's all this kind of nasty process stuff that's in the way; some would argue that's been kind of a real problem with the promise of BI and decision support tools. So as you look at this new stuff and these new data sets, what are some of the people and process challenges, beyond the obvious things that we can think about, which are the technical challenges?
>> So I think you've really hit on something I talk about sometimes, the kind of data deluge that we experience these days, and the notion of feeling like you're drowning in information but really lacking any kind of insight. And one of the things that I like to think about is to actually step back from the data questions, the infrastructure questions, all of these technical questions that can seem very challenging to navigate, and first ask ourselves: what problem am I trying to solve? It's really no different than any other type of decision you might make in an organization, to say: what are my biggest pain points? What keeps me up at night? Or what would just transform the way my business works? And those are the problems worth solving. And then the next question becomes: if I had more data, if I had a better understanding of something about my business, or about my customers, or about the world in which we all operate, would that really move the needle for me? And if the answer is yes, then that starts to give you a picture of what you might be able to do with AI, and it starts to tell you which of those data management challenges, whether it be cleaning the data, organizing the data, or building models on the data, are worth solving. Because you're right, those are going to be time-intensive, labor-intensive, highly iterative efforts. But if you know why you're doing it, then you will have a better understanding of why it's worth the effort, and also which shortcuts you can take and which ones you can't, because often, in order to see the end state, you might want to do a really quick experiment or prototype. So you want to know what matters and what doesn't, at least to test whether this is going to work at all.

>> So you're not buying the age-old adage that you just throw a bunch of data in a data lake and the answers will just spring up, just come right back out of the wall. I mean, you bring up such a good point: it's all about asking the right questions, and thinking about asking questions. So again, when you talk to people about helping them think about the questions, because then you've got to shape the data to the question, and then you've got to start to build the algorithm to kind of answer that question: how should people think when they're actually building and training algorithms? What are some of the typical pitfalls that a lot of people fall into when they haven't really thought about it before, and how should people frame this process? Because it's not simple and it's not easy, and you really don't know that you have the answer until you run multiple iterations and compare it against some other type of reference.

>> Well, one of the things that I like to think about, just so that you're thinking about all the challenges you're going to face up front (you don't necessarily need to solve all of these problems at the outset, but I think it's important to identify them), is that I like to think about AI solutions, as they get deployed, as being part of a kind of workflow, and the workflow has multiple stages associated with it. The first stage is generating your data, and then starting to prepare and explore your data, and then building models for your data. But sometimes, and I think this is where we don't always think about it, there are the next two phases, which are deploying whatever model or AI solution you've developed, and asking what that will really take, especially in the ecosystem where it's going to live.
Is it going to live in a secure and compliant ecosystem? Is it actually going to live in an outdoor ecosystem? We're seeing more applications on the edge. And then finally, who's going to use it, and how are they going to drive value from it? Because it could be that your AI solution doesn't work because you don't have the right dashboard that highlights and visualizes the data for the decision maker who will benefit from it. So I think it's important to think through all of these stages up front, and think through what some of the biggest challenges are that you might encounter, so that you're prepared when you meet them, and you can refine and iterate along the way, and even tweak the question you're asking up front.

>> That's great. So I want to get your take on Exascale Day, which we're celebrating, something very specific, on 10/18. Share your thoughts on Exascale Day specifically, but more generally, just in terms of being a data scientist and suddenly having all this massive compute power at your disposal. You've been around for a while, so you've seen the development of the cloud, these huge data sets, and really the ability to put so much compute horsepower against the problems, as networking and storage and compute just asymptotically approach zero in cost. I mean, as a data scientist you've got to be pretty excited about new mysteries, new adventures, new places to go, things you just couldn't do 10 years ago, five years ago, 15 years ago.

>> Yeah, I think only time will tell exactly what we'll be able to unlock from these new massive computing capabilities that we're going to have. But a couple of things that I'm very excited about: in addition to these very large investments in large supercomputers, Exascale supercomputers, we're also seeing investment in other types of scientific instruments. And when I say scientific, it's not just academic research; it's driving pharmaceutical drug discovery, because we're talking about what they call light sources, which shoot X-rays at molecules and allow you to really understand the structure of the molecules. What Exascale allows you to do: historically, you would take your molecule to one of these light sources, shoot your X-rays at it, and generate just masses and masses of data, terabytes of data with each shot. And being able to then understand what you were looking at was a long process of getting computing time and analyzing the data. We're on the precipice of being able to do that, if not in real time, then much closer to real time. And I don't really know what happens if, instead of coming up with a few molecules, taking them, studying them, and then saying maybe I need to do something different, I can do it while I'm still running my instrument. And I think that's very exciting, from the perspective of someone who's got a scientific background and who likes using large data sets.
There's just a lot of possibility in what Exascale computing allows us to do, from the standpoint that I don't have to wait to get results. I can either simulate much bigger, say, galaxies, and really compare that to my data on galaxies or universes, if you're an astrophysicist, or I can simulate much smaller, finer details of a hypothetical molecule and use that to predict what might be possible from a materials or drug perspective, just to name two applications that I think Exascale could really drive.

>> That's really great feedback, just to shorten that compute loop. We had an interview earlier where someone was talking about when the biggest workload you had to worry about was the end of the month, when you were running your financials, and I was like, wouldn't it be nice for that to be the biggest job we have to worry about? But now, I think we saw some of this in animation, in the movie business: the rendering, whether it's a full animation movie or just something with heavy-duty 3D effects. When you can get those dailies back to the artist, as you said, while you're still working, or closer to when you're working, versus having this huge kind of compute delay, it just changes the workflow dramatically, and the pace of change and the pace of output, because you're not context switching as much and you can really get back into it. That's a super point. I want to shift gears a little bit and talk about explainable AI. So this is a concept that a lot of people hopefully are familiar with. With AI, you build the algorithm, it's in a box, it runs, and it kicks out an answer. And one of the things that people talk about is that we should be able to go in and pull that algorithm apart to know why it came out with the answer that it did. To me this just sounds really, really hard, because it's smart people like you that are writing the algorithms, the inputs and the data that feed that thing are super complex, and the math behind it is very complex. And we know that the AI trains and can change over time; as you train the algorithm it gets more data and it adjusts itself. So is explainable AI even possible? Is it possible to some degree? Because I do think it's important, and my next question is going to be about ethics, to know why something came out. And the other piece that becomes so much more important is that we use that output not only to drive human-based decisions that need some more information, but increasingly we're moving it over to automation. So now you really want to know why it did what it did. Explainable AI: share your thoughts.

>> It's a great question, and it's obviously a question that's on a lot of people's minds these days. I'm actually going to revert back to what I said earlier, when I talked about Bernoulli's principle, and the fact that sometimes when you throw an algorithm at data, the first thing it will find is probably some known law of physics. And so I think that really thinking about what we mean by explainable AI also requires us to think about what we mean by AI. These days AI is often used synonymously with deep learning, which is a particular type of algorithm that is not very analytical at its core. What I mean by that is that other types of statistical machine learning models have some underlying theory of the population of data that you're studying, whereas deep learning doesn't; it kind of just learns whatever pattern is sitting in front of it.
And so there is a sense in which, if you look at other types of algorithms, they are inherently explainable, because you're choosing your algorithm based on what you think the ground truth is about the population you're studying. Whether we're going to get to explainable deep learning, I think, is kind of challenging, because deep learning is designed to just be as flexible as possible, to sort of throw more math at the problem, because there may be things that your simpler model doesn't account for. However, deep learning could be part of an explainable AI solution if, for example, it helps you identify what the important so-called features are to look at, what the important aspects of your data are. So I don't know; it depends on what you mean by AI. But are you ever going to get to the point where you don't need humans interpreting outputs and making some set of judgments about what a set of computer algorithms processing data think? I don't want to say I know what's going to happen 50 years from now, but I think it'll take a little while to get to the point where you don't have to apply some subject matter understanding and some human judgment to what an algorithm is putting out.

>> It's really interesting. We had Dr. Robert Gates on a few years ago at another show, and he talked about how the only guns in the U.S. military, if I'm getting this right, that are automatic, that will go based on what the computer tells them to do and start shooting, are on the Korean border. But short of that, there's always a person involved before anybody hits a button. Which begs a question, 'cause we've seen this on the big data kind of curve, I think Gartner has talked about it, as we move up from descriptive analytics to diagnostic analytics, predictive, then prescriptive, and then hopefully autonomous. So I wonder, you're saying we're still a little ways out, and that last little bump is going to be tough to overcome to get to true autonomy?

>> I think so, and it's going to be very application dependent as well. It's an interesting example to use the DMZ, because that is obviously also a very mission-critical example, I would say. But in general, I think you'll see autonomy, and you already do see autonomy, in certain places where I would say the stakes are lower. So if I have some kind of recommendation engine that suggests, "if you looked at this sweater, maybe you'd like that one," the risk of getting that wrong, and so of fully automating it, is a little bit lower, because the risk is that you don't buy the sweater: I lose a little bit of income, a little bit of revenue, as a retailer. But the risk of "do I make that turn?" in an autonomous vehicle is much higher. So I think you will see the progression up that curve being highly dependent on what's at stake with different degrees of automation. That being said, in certain places where it's either really expensive or humans aren't doing a great job, you may actually start to see some mission-critical automation, but those will be the places where you see it. And actually, I think that's one of the reasons why you see a lot more autonomy in the agriculture space than you do in the passenger vehicle space: there's a lot at stake, and it's very difficult for human beings to drive large combines.

>> Plus they have a controlled environment. I've interviewed Caterpillar; they're doing a ton of stuff with autonomy, because they control the field where those things are operating, and whether it's a field or a mine, it's actually fascinating how far they've come with autonomy. But let me switch to a different industry that I know is closer to your heart, from looking at some other interviews, and let's talk about diagnosing disease. If we take something specific like reviewing X-rays, where the computer, bringing in computer vision algorithms, can probably see things faster or do a lot more comparisons than a human doctor can, and hopefully, in this whole signal-to-noise conversation, elevate the signal for the doctor to review and suppress the noise that's really not worth their time. It can also review a lot of literature, and hopefully bring a broader perspective of potential diagnoses within a set of symptoms. You said before that both your folks are physicians, and there's a certain kind of magic, a nuance, almost a more childlike exploration, that you'd like to get out of the algorithm, if you will, to think outside the box. I wonder if you can share that synergy between using computers and AI and machine learning to do really arduous, nasty things, like going through lots and lots and lots of X-rays, and how that helps the doctor, who's got a whole different kind of experience, a whole different kind of empathy, a whole different type of relationship with that patient, than just a bunch of pictures of their heart or their lungs.

>> I think one of the things, and this goes back to the question of AI for decision support versus automation, is that what AI can do, and what we're pretty good at these days with computer vision, is picking up on subtle patterns, especially if you have a very large data set. So if I can train on lots of pictures of lungs, it's a lot easier for me to identify the pictures that somehow are not like the other ones. And that can be helpful. But then to really interpret what you're seeing and understand it: is it actually a bad-quality image? Is it some kind of medical issue? And what is the medical issue? I think that's where you have to bring in a lot of different types of knowledge and a lot of different pieces of information, and right now I think humans are a little bit better at doing that. Some of that's because I don't think we have great ways to train on sparse data sets, I guess. And the second part is that human beings might have 40 or 50 years of training their model, as opposed to six months or something with sparse information. That's another thing human beings have: their lived experience. The data they bring to bear on any type of prediction or classification is actually more than just what they saw in their medical training; it might be the people they've met, the places they've lived, what have you. And I think it's that part, that broader set of learning, and how things that might not seem related might actually be related to your understanding of what you're looking at, where we've still got a ways to go from an artificial intelligence perspective.

>> But it is Exascale Day, and we all know about the compound exponential curves on the computing side. So let's shift gears a little bit. I know you're interested in emerging technology to support this effort, and there's so much going on in terms of the atomization of compute, storage, and networking, to be able to break it down into smaller and smaller pieces so that you can really scale the amount of horsepower you need to apply to a problem, whether very big or very small. Obviously the stuff that you work on is more big than small. GPUs, a lot of activity there. So I wonder if you could share some of the emerging technologies that you're excited about, to bring, again, more tools to the task.

>> I mean, one of the areas I personally spend a lot of my time exploring is, I guess this phrase gets used a lot, the Cambrian explosion of new AI accelerators, new types of chips that are really designed for different types of AI workloads. And as you talked about going down in scale, it's almost as if we're going back and looking at these large systems, but then exploring each little component on them and trying to really optimize it, or understand how that component contributes to the overall performance of the whole. There are probably close to a hundred active vendors in the space of developing new processors and new types of computer chips, and I think one of the things that points to is that we're moving in the direction of infrastructure heterogeneity generally. It used to be that when you built a system, you probably had one type of processor and a pretty uniform fabric across the system; with storage, I think, we started to get tiering a little bit earlier. But now, and we're already starting to see it with Exascale systems where you've got GPUs and CPUs on the same blades, the workloads that are running at large scale are becoming more complicated. Maybe I'm doing some simulation, then I'm training some kind of AI model, and then I'm running inference on some other output of the simulation. I need the ability to do a lot of different things and do them at a very advanced level, which means I need very specialized technology to do it. And I think it's an exciting time. We're going to test, and we're going to break, a lot of things. I probably shouldn't say that in this interview, but I'm hopeful that we're going to break some stuff. We're going to push all these systems to the limit and find out where we actually need to push a little harder. And some of the areas where I think we're going to see that: we're going to want to move data, move data off of scientific instruments, into computing, into memory, into a lot of different places. And I'm really excited to see how it plays out, what you can do, and where the limits are of what you can do with the new systems.

>> Arti, I could talk to you all day. I love the experience and the perspective, 'cause you've been doing this for a long time. So I'm going to give you the final word before we sign off, and really bring it back to a more human thing, which is ethics. One of the conversations we hear all the time is that if you are going to do something, you put together a project, you justify that project, and then you go and collect the data, run that algorithm, and do that project. That's great, but there's an inherent problem with data collection: it may be used for something else down the road that maybe you don't even anticipate. So I just wonder if you can share a top-level ethical take on how data scientists specifically, and then ultimately business practitioners and other people that don't carry that title, need to be thinking about ethics and not just kind of forget about it. I had a great interview with Paul Doherty: everybody's data is not just their data, it represents a person, it's a representation of what they do and how they live. So when you think about entering into a project and getting started, what do you think about in terms of the ethical considerations, and how should people be cautious that they don't go places they probably shouldn't go?

>> I think that's a great question without a short answer. I honestly don't know that we have great solutions right now, but I think the best we can do is take a very multifaceted, and also vigilant, approach to it. So when you're collecting data, and often we should remember that a lot of the data that gets used isn't necessarily collected for the purpose it's being used for, because we might be looking at old medical records, or old transactional records of any kind, whether from a government or a business, as you start to collect data or build solutions, try to think through who all the people are who might use it, and what the possible ways are in which it could be misused. And also, I encourage people to think backwards: what were the biases in place when the data were collected? You see this a lot in the criminal justice space, where the historical records reflect historical biases in our systems. There are limits to how much you can correct for previous biases, but there are some ways to do it, and you can't do it if you're not thinking about it. So I think that matters at the outset of developing solutions, but I think equally important is putting in the systems to maintain vigilance around it. So one, don't move to autonomy before you know what potential new errors or new biases you might introduce into the world. And also, have systems in place to constantly ask these questions: am I perpetuating things I don't want to perpetuate? Or how can I correct for them? And be willing to scrap your system and start from scratch if you need to.

>> Well, Arti, thank you. Thank you so much for your time. Like I said, I could talk to you for days and days and days. I love the perspective and the insight and the thoughtfulness. So thank you for sharing your thoughts as we celebrate Exascale Day.

>> Thank you for having me.

>> My pleasure, thank you. All right, she's Arti, I'm Jeff, it's Exascale Day. We're covering it on theCUBE. Thanks for watching. We'll see you next time. (bright upbeat music)

Published Date: October 16, 2020
