Sriram Raghavan, IBM Research AI | IBM Think 2020
(upbeat music)

>> Announcer: From theCUBE Studios in Palo Alto and Boston, it's theCUBE! Covering IBM Think. Brought to you by IBM.

>> Hi everybody, this is Dave Vellante of theCUBE, and you're watching our coverage of the IBM digital event experience. A multi-day program, tons of content, and it's our pleasure to be able to bring in experts, practitioners, customers, and partners. Sriram Raghavan is here. He's the Vice President of IBM Research in AI. Sriram, thanks so much for coming on theCUBE.

>> Thank you, pleasure to be here.

>> I love this title, I love the role. It's great work if you're qualified for it. (laughs) So, tell us a little bit about your role and your background. You came out of Stanford, you had the pleasure, I'm sure, of hanging out in South San Jose at the Almaden labs. Beautiful place to create. But give us a little background.

>> Absolutely, yeah. So, let me start, maybe go backwards in time. What do I do now? I'm responsible for AI strategy, planning, and execution in IBM Research across our global footprint, all our labs worldwide and their working areas. I also work closely with the commercial parts of IBM, our Software and Services businesses that take the AI innovation from IBM Research to market. That's the second part of what I do. And where did I begin life in IBM? As you said, I began life at our Almaden Research Center up in San Jose, up in the hills. Beautiful; I still think it's the best view I've had. I spent many years there doing work at the intersection of AI, large-scale data management, and NLP. Went back to India, where I ran the India lab for a few years, and now I'm back here in New York running AI strategy.

>> That's awesome. Let's talk a little bit about AI, the landscape of AI. IBM has always made it clear that you're not doing consumer AI. You're really trying to help businesses. But how do you look at the landscape?

>> So, it's a great question.
It's one of those things that, you know, we constantly measure ourselves on, and our partners tell us. You've probably heard us talk about the cloud journey. But look, barely 20% of workloads are in the cloud; 80% are still waiting. For AI, that number is even less. Of course, it varies: depending on who you ask, AI adoption is anywhere from 4% to 30%. But I think it's more important to ask where this is headed, directionally. And that's very, very clear: adoption is rising, and the value is getting better appreciated. More important, though, there is broader recognition, awareness, and investment, knowing that to get value out of AI, you start with where AI begins, which is data. So the story around having a solid enterprise information architecture as the base on which to drive AI is starting to happen. As the investment in data platforms, in making your data ready for AI, starts to come through, we're definitely seeing that adoption. And the second imperative that businesses look for, obviously, is the skills: the tools and the skills to scale AI. It can't take me months and months to build an AI model; I have to accelerate it, and then comes operationalizing. But this is happening, and the upward trajectory is very, very clear.

>> We've been talking a lot on theCUBE over the last couple of years about how the innovation engine of our industry is no longer Moore's Law; it's a combination of data, which you just talked about, applying machine intelligence to that data, and being able to scale it across clouds, on-prem, wherever the data lives.

>> Right.

>> Having said that, you know, you've had a journey. You started out kind of playing "Jeopardy!", if you will. It was a very narrow use case, and you're expanding that use case. I wonder if you could talk about that journey, specifically in the context of your vision.

>> Yeah.
So, let me step back. For IBM Research AI, when I think about our strategy and vision, we think of it in two parts. One part is the evolution of the science and techniques behind AI. And you said it, right? Narrow, bespoke AI can do only the one thing it's really trained for, and it takes a large amount of data and a lot of computing power. So, how do you build the techniques and the innovation for AI to learn from one use case to the next? To be less data hungry, less resource hungry, more trustworthy and explainable? We call that the journey from narrow to broad AI, and one part of our strategy, as scientists and technologists, is the innovation to make that happen. But, as you said, as people involved in making AI work in the enterprise, the IBM Research AI vision would be incomplete without the second part, which is: what are the challenges in scaling and operationalizing AI? It isn't sufficient that I can tell you AI can do this; how do I make AI do this so that you get the right ROI, the investment relative to the return makes sense, and you can scale and operationalize? So, we took both of these imperatives, the narrow-to-broad journey and the need to scale and operationalize, together with the things that make scaling and operationalizing hard: data challenges, which we talked about, skills challenges, and the fact that in enterprises you have to govern and manage AI. And we think of our AI agenda in three pieces: advancing, trusting, and scaling AI. Advancing is the piece of pushing the boundary, taking AI from narrow to broad. Trusting is building AI that is trustworthy and explainable, whose behavior you can control, understand, and make sense of, and all of the technology that goes with it. And scaling AI is where we address the problem of: how do I reduce the time and cost of data prep?
How do I reduce the time for model tweaking and engineering? How do I make sure that when something changes in the data, a model you built today can quickly be improved by closing the loop? Think of day-two operations of AI; all of that is part of our scaling AI strategy. So advancing, trusting, scaling are the three big mantras around which we think about our AI.

>> Yeah, so I've been doing a little work around this notion of DataOps. Essentially, DevOps applied to the data and the data pipeline. I had a great conversation recently with Inderpal Bhandari, IBM's Global Chief Data Officer, and he explained to me how, first of all, customers will tell you it's very hard to operationalize AI. He and his team took that challenge on themselves and have had some great success. And we all know the problem: AI has to wait for the data. It has to wait for the data to be cleansed and wrangled. Can AI actually help with that part of the problem, compressing that?

>> 100%. In fact, the way we think of the automation and scaling story is what we call the "AI for AI" story. AI in service of helping you build the AI, so you can do this with speed, right? I think of it really in three parts. The first is AI for data automation, our DataOps: AI used for better discovery, better cleansing, better configuration, faster linking, quality assessment, all of that. Using AI to do all of those data tasks you had to do; I call that AI for data automation. The second part is using AI to automatically figure out the best model, and that's AI for data science automation: feature engineering, hyperparameter optimization, having AI do that work. Why should a data scientist take weeks and months experimenting if AI can accelerate that to a matter of hours? That's data science automation.
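The data science automation Sriram describes, letting software sweep hyperparameter choices instead of a data scientist hand-tuning them, can be sketched in miniature. This is an illustrative toy using only the Python standard library, not IBM's AutoAI internals: the dataset, the tiny k-nearest-neighbor model, and the hyperparameter grid are all invented for the example.

```python
import random

random.seed(0)

def make_data(n):
    # Synthetic 1-D classification data: class 1 tends to have larger x.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(2.0 if label else 0.0, 1.0)
        data.append((x, label))
    return data

train = make_data(200)
valid = make_data(100)

def knn_predict(train, x, k):
    # Vote among the k training points nearest to x.
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in neighbors)
    return 1 if votes * 2 >= k else 0

def accuracy(train, valid, k):
    hits = sum(knn_predict(train, x, k) == y for x, y in valid)
    return hits / len(valid)

# "AI for data science automation" in miniature: sweep the hyperparameter
# grid automatically instead of hand-tuning k, keep the best validation score.
grid = [1, 3, 5, 9, 15, 25]
scores = {k: accuracy(train, valid, k) for k in grid}
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```

Real AutoML systems extend this same search loop to feature transformations, model families, and neural architectures, and use smarter strategies than exhaustive sweeps (Bayesian optimization, successive halving), but the idea of machine-driven experimentation replacing manual trial and error is the same.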
And then comes the important part, which is operations automation. Okay, I've put a model into an application. How do I monitor its behavior? If the data it's seeing is different from the data it was trained on, how do I quickly detect it? A lot of the work from Research that became part of the Watson OpenScale offering is really addressing that operational side. So AI for data automation, AI for data science automation, and AI to help automate the production side of AI: that's the way we break that problem up.

>> So, I always like to ask folks who are deep into R&D how they ultimately translate that into commercial products and offerings. Because ultimately, you've got to make money to fund more R&D. Can you talk a little bit about how you do that, what your focus is there?

>> Yeah, that's a great question, and I'm going to use a few examples as well. But let me say at the outset, this is a very, very close partnership. Between the Research part of AI and our portfolio, we're constantly both drawing problems from and building technology that goes into the offerings. So, much of our work in the AI automation we were just talking about is part of Watson Studio, Watson Machine Learning, and Watson OpenScale. In fact, OpenScale came out of Research work in Trusted AI and is now a centerpiece of our Watson portfolio. Let me give a very different example. We have a very, very strong portfolio and focus in NLP, natural language processing. That goes directly into capabilities in Watson Assistant, which is our system for conversational customer support, and Watson Discovery, which is about helping enterprises understand unstructured data. And a great example of that is the work in Project Debater that you might have heard of, a grand challenge in Research to build a machine that can debate. Now, look, we weren't looking to sell you a debating machine.
But what did we build as part of doing that? Advances in NLP that are all making their way into Assistant and Discovery. Earlier this year, we announced a set of capabilities around better clustering, advanced summarization, and deeper sentiment analysis. These made their way into Assistant and Discovery, but they were born out of research innovation in solving a grand problem like building a debating machine. That's just an example of how that journey from research to product happens.

>> Yeah, the Debater documentary, I've seen some of that. It's actually quite astounding. I don't know what you're doing there. It sounds like you're taking natural language and turning it into complex queries with data science and AI, but it's quite amazing.

>> Yes. You can see that documentary, by the way, on Channel 7 in the Think event. The documentary around how Debater happened, featuring behind-the-scenes interviews with the scientists who created it, was actually featured at the Copenhagen International Documentary Festival. I'll invite viewers to go to Channel 7 and Data and AI Tech On-Demand to take a look at that documentary.

>> Yeah, you should take a look at it. It's actually quite astounding and amazing. Sriram, what are you working on these days? What kind of exciting projects, what's your focus area today?

>> Look, there are three imperatives that we're really focused on, and one is just the area we were talking about: NLP. NLP in the enterprise. Look, text is the language of business, right? Text is the way businesses communicate, with each other, with their partners, with the entire world. So, helping machines understand language, but in an enterprise context, recognizing that data in the enterprise lives in complex documents, unstructured documents, in e-mail, in conversations with customers.
So, really pushing the boundary on how all our customers and clients can make sense of this vast volume of unstructured data by advancing NLP, that's one focus area. The second focus area is trust; we talked about how important that is. We've done amazing work in monitoring and explainability, and we're really focused now on the emerging area of causality: using causality to explain why a model made the prediction it did, in terms of what the model believes drives the outcome. It's a beautiful approach. And the third big focus continues to be on automation. So NLP, trust, automation: those are the three big focus areas for us.

>> Sriram, how far do you think we can take AI? I know it's a topic of conversation, but from your perspective, deep into the research, how far can it go? And maybe how far should it go?

>> Look, let me answer it this way. I think the arc of the possible is enormous, but we are at an inflection point. The next wave of AI is the AI that's going to take us on this narrow-to-broad journey we talked about. And look, the narrow-to-broad journey is not a one-week or one-year thing; we're talking about a decade of innovation. We're at a point where we're going to see a wave of AI that we like to call "neuro-symbolic AI," which brings together two fundamentally different approaches to building intelligent systems. One approach is what we call knowledge driven: understand data, understand concepts, reason logically, the way we human beings do. That was really the way AI was born. The more recent last couple of decades of AI were data driven, machine learning: give me vast volumes of data, and I'll use neural techniques, deep learning, to get value. We're at a point where we're going to bring both of them together, because you can't build trustworthy, explainable systems using only one, and you can't get away from using all of the data that you have.
So, neuro-symbolic AI is, I think, going to be the linchpin of how we advance AI and make it more powerful and trustworthy.

>> So, are you, like, living your childhood dream here or what?

>> Look, I'm fascinated; I've always been fascinated. You can't find a technology person who hasn't dreamt of building an intelligent machine. To have a job where I can work across our worldwide set of 3,000-plus researchers, think and brainstorm on AI strategy, and then, most importantly, not to forget, move it into our portfolio so it actually makes a difference for our clients? I think it's a dream job and a whole lot of fun.

>> Well, Sriram, it was great having you on theCUBE. It's a lot of fun interviewing folks like you; I feel a little bit smarter just talking to you. So thanks so much for coming on.

>> Fantastic. It's been a pleasure to be here.

>> And thank you for watching, everybody. You're watching theCUBE's coverage of IBM Think 2020. This is Dave Vellante. We'll be right back right after this short break. (upbeat music)