Krishna Gade, Fiddler AI | CUBE Conversation May 2021
(upbeat pop music) >> Well, hi everyone, John Walls here on "theCUBE" as we continue our CUBE Conversations as part of the "AWS Startup Showcase". And we welcome in today Krishna Gade, who is the founder and CEO of Fiddler AI. Krishna, good to see you today. Thanks for joining us here on "theCUBE". >> Hey John, thanks so much for inviting us. I'm glad to be here and looking forward to our conversation. >> Yeah, me too. And first off, I want to say congratulations as I look at your company's tremendous roster, this list of awards that just keeps coming your way. Most recently recognized by "Forbes" as one of the Top 50 AI Companies To Watch here in 2021. I know Gartner called you one of their Cool Companies not too long ago. The World Economic Forum also gave you a shout out. So whatever it is you're doing, you're doing it very well, but it's got to feel good, I would think, to get some validation from all this recognition. >> Absolutely. We've been very fortunate to get all the recognition. Part of it is also because of the space we are playing in, right? A lot of companies are operationalizing AI, and therefore this whole topic of explainability, monitoring, and governance of AI is at the forefront, and it's in the news for various different reasons. So there's a lot of good discussion going on in the press around how one should build responsible AI. And we are very fortunate to be in this space and pioneering some of the technologies here. >> Right. And talking about machine learning monitoring, obviously, in the AI space, you mentioned explainability. So let's just talk about that concept broadly first off, and explain to our viewers what you mean by explainability in this particular context. >> Yeah, that's a good question. So if you think about an AI system, one of the main differences between it and a traditional software system is that it's a black box, in the sense that you cannot open it up and read its code like a traditional software system. The reason is that AI systems are built by training models on data, and those models are represented in a non-human-readable format. You cannot really understand how a model is actually making a prediction at any given point in time. So when you are deploying these AI systems at scale for a variety of use cases, let's say credit underwriting, screening resumes, or clinical diagnosis, which are extremely consequential for people, there is a need to understand how the AI system is working. Why did it approve one person's loan and reject another's? Why did it reject someone's resume in a job screening pipeline? How is it working overall? This is where explainability becomes important, because you need a way to probe the AI system, to interrogate it, to understand how it is making predictions and how it is being influenced by the various inputs you're supplying to it. And this gamut of technologies and algorithms that have emerged over the last few years have really matured to a point where products like Fiddler are developing and productizing them for the general enterprise to put into their machine learning and AI workflows.
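To make that kind of probing concrete, here is a minimal, model-agnostic sketch using permutation importance from scikit-learn: shuffle one input at a time and watch how much the model's accuracy degrades. This is one simple illustration of the general technique, not Fiddler's implementation; the credit-underwriting feature names and synthetic data are assumptions for the example.

```python
# A minimal probe of a "black box" model: permutation importance measures
# how much each input influences predictions. Synthetic credit-underwriting
# data; all feature names are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = ["income", "fico_score", "debt_ratio", "age", "tenure"]

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data: the bigger the drop in
# score, the more the model leans on that input.
result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                random_state=0)
for name, drop in sorted(zip(feature_names, result.importances_mean),
                         key=lambda pair: -pair[1]):
    print(f"{name:>12}: {drop:.3f}")
```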
This is kind of where the input's coming from, this is where the problem is, this is where the bottleneck might be, whatever it is, and doing that in real time. Very efficient operation here. Well, let's talk about the ML world right now in terms of how it relates to artificial intelligence, this interaction that we're seeing, and, I guess, the problem that you are trying to fix, if you will, in terms of machine learning monitoring. So let's just deal with that first off. When you look at somebody's architecture and somebody's setup, what do you see? What are you looking for? And what kind of problems are you trying to solve for your clients? >> Yeah. So just following up on what I said, there are two main problems with operationalizing AI. One is the black-box nature of AI, which I already talked about. The other problem is that the AI system is fundamentally a stochastic system, a probabilistic system. By that, I mean that its performance, its predictions, can change over time based on the data it is receiving. It's not a deterministic system like traditional software, where you expect the same output all the time, right? So when you have a system that is stochastic in nature, where its performance can vary based on the data it is receiving, then you are in a situation where you have uncertainty. Let's say you have an AI system deployed to serve a credit underwriting model or a fraud detection use case, and you see that sometimes accuracy is up and sometimes accuracy is down. When do you trust your predictions, and when do you not? How do you know if the model is actually performing in the same manner as when you trained it? All of these issues open up the need for continuous monitoring of these AI systems, because without it you may have AI systems making bad predictions for your users, hurting your business metrics, and potentially making biased decisions that can put your company into a compliance or brand reputation risk scenario. To avoid all of these things, you can monitor these AI systems continuously, so that you know exactly whether they're performing the way you expect them to. Do you need to retrain them right now? Or do you need to shut them down because they are not predicting the way you expect? This is very important, and that's what Fiddler tries to solve for our customers, helping them operationalize AI with full visibility and explainability. You can essentially install Fiddler in your workflow to continuously monitor your AI systems, and to analyze and explain them when you have questions about how they're working.
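As an illustration of that kind of continuous monitoring, here is a minimal sketch that compares live accuracy over a rolling window against the accuracy measured at training time and raises an alert when it degrades. The class name, window size, and tolerance are assumptions for the example, not Fiddler's API.

```python
# A minimal sketch of continuous model monitoring: track live accuracy over
# a rolling window and alert when it falls past a tolerance below the
# accuracy observed at training time. Thresholds are illustrative.
from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline_accuracy: float, window_size: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window_size)  # rolling hit/miss record
        self.tolerance = tolerance

    def record(self, prediction, label) -> None:
        # Called whenever delayed ground truth arrives for a prediction.
        self.window.append(prediction == label)

    def check(self) -> str:
        if len(self.window) < self.window.maxlen:
            return "warming up"
        live = sum(self.window) / len(self.window)
        if live < self.baseline - self.tolerance:
            return (f"ALERT: live accuracy {live:.2f} "
                    f"vs baseline {self.baseline:.2f}")
        return f"ok: live accuracy {live:.2f}"

# Usage: feed each (prediction, label) pair as ground truth arrives.
monitor = AccuracyMonitor(baseline_accuracy=0.90)
```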
>> You talked about governance earlier a little bit, and compliance, obviously a critical issue and a big concern, fraud detection, security in general. As we know, almost every day, it seems, we're hearing about some kind of security intrusion. So in terms of identifying vulnerabilities or anomalies, whatever it might be, what kind of work are you doing in that space to give your client base the kind of comfort and peace of mind that everybody's searching for these days? >> Right. If you step back a little bit, John, we are truly living in the age of algorithms. Everything that we interact with on a day-to-day basis, the movies we watch, when we request an Uber driver, or when we go to a financial institution and apply for a loan or a mortgage, there are algorithms behind the scenes processing our requests and delivering the experiences that we have. Now, increasingly these algorithms are becoming AI-based algorithms, and they're trained on the data that's available, data that an institution may collect from its users or may buy from third parties. When you develop AI systems based on this data, if the data is not equally distributed amongst people of different ethnic backgrounds, people coming from different cultures, different religions, different races, different genders, you may actually build systems that make very different decisions for different individuals, based on the bias that has crept into them. This means that at the end of the day, you can create a dystopian world where some people get really great decisions from your systems while some people are left out, right? So this aspect of governing your AI systems matters: validating what you're building up front, validating the data that you're using to train the systems, continuously monitoring the systems so that they're actually producing the right outcomes for your users, and then being able to explain, if a customer or a regulator or a third party asks you, how your system is working. It's very, very important. This is an emerging area in industry. Certain sectors already have it; for example, in financial services, companies like banks are mandated to have model governance, so that every model they deploy needs to be validated and monitored. And we are seeing AI governance emerge in other sectors as well. So this is a broader topic that covers explainability, covers monitoring, covers detecting bias in your AI systems, and ensuring that you're building safe and responsible AI for your customers and your organization.
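One simple, widely used bias check that such governance can include is demographic parity: comparing favorable-outcome rates across groups. The sketch below is illustrative; the group labels, toy data, and the 80% rule-of-thumb threshold are assumptions for the example, not Fiddler's implementation or a legal standard.

```python
# A minimal sketch of a demographic-parity bias check: compare approval
# rates across groups and flag large gaps. All data is illustrative.
def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    # Ratio of the protected group's approval rate to the reference group's.
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

decisions = [("female", True), ("female", False), ("female", False),
             ("male", True), ("male", True), ("male", False)]
ratio = disparate_impact(decisions, protected="female", reference="male")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.80:  # the common "80% rule" heuristic
    print("large gap in approval rates: review model and training data")
```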
>> Yeah, I find the bias point really interesting, actually, because I hadn't really thought about these prejudices or subjectivities it might bring to our work, in terms of what we look at, what we ignore, what we process and how. It's a really interesting point you just raised, so thank you for that. And then there's also the issue of data drift too, a little bit, right? It's like, where did it go (laughing)? >> Right. >> What are we doing here? What happened to it? So maybe you could talk about that a little bit, in terms of all this data that's coming in and corralling it, right? Making sure that it stays organized, stays in a way that you can analyze and process it, and then glean insight from. >> Yeah, data drift is one of the main reasons why AI systems deteriorate in performance. So for example, let's say I'm trying to build a recommendation system that predicts the items you want to buy when you go to an e-commerce website. Now, if I have trained on pre-COVID data, the user behavior was very different, right? The kind of items people were buying before February 2020 was probably much different from the kind of items people were buying after it. So what happens is, when you train your AI systems on older datasets, but that data has changed ever since, because an event like COVID-19 has happened or some other seasonality has kicked in, your AI systems are seeing a different distribution of data. For example, people shopping in, say, March or April last year were buying all kinds of toilet paper and all kinds of things to stock up, to be ready for lockdown, and maybe they were not buying similar amounts previously. So if you have an inventory management system based on AI, or an e-commerce recommendation system based on AI, they would see data drift, because the buying patterns are different. The amount of toilet paper that people are buying has completely shifted, and so the model may not be predicting as accurately as it would otherwise. Therefore, identifying this data drift and alerting your AI engineers so that they can be prepared for it is very important. Otherwise, what you would see, and this has actually happened, is models going wrong in production. Instacart, a grocery delivery company, and another company, www.etsy.com, blogged about seeing their models go down in accuracy from 90% to 65% when this data shift happened during COVID-19. So you need the ability to continuously monitor for drift, so that you can catch these things earlier and save your business from losing out on business metrics, such as the number of sales you may be making, or from the bad recommendations your systems are making to your users.
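As a concrete illustration, a basic drift check compares the distribution of a feature at training time against a live window, for example with a two-sample Kolmogorov-Smirnov test. The feature, the synthetic numbers, and the significance threshold below are assumptions for the sketch, not the method any particular company used.

```python
# A minimal sketch of data-drift detection: compare the training-time
# distribution of a feature against a live window using the two-sample
# Kolmogorov-Smirnov test. Numbers and threshold are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Pre-COVID basket sizes vs. a lockdown-era stockpiling shift (synthetic).
train_basket_size = rng.normal(loc=20.0, scale=5.0, size=10_000)
live_basket_size = rng.normal(loc=35.0, scale=9.0, size=2_000)

statistic, p_value = stats.ks_2samp(train_basket_size, live_basket_size)
if p_value < 0.01:
    print(f"drift detected (KS={statistic:.3f}, p={p_value:.1e}): "
          "alert the ML engineers and consider retraining")
else:
    print("no significant drift in this feature")
```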
>> So we've talked a lot about these various components of monitoring, all of which you do extremely well. And I was reading earlier, just a little bit about the company, and we talked about accountability. We've already talked about that. We talked about fraud detection, we talked about reliability. There was also a point about ethical considerations, and so I was interested in hearing from you about that: why that's a pillar of your service, and what exactly it's pointed toward in terms of monitoring and what you can do. >> Right. So I guess I'll just go back to a famous quote from Marc Andreessen. He mentioned a few years ago that software is eating the world, right? Now what's happening is AI is eating software. All the software that we are consuming is becoming AI-based software, because at the end of the day some intelligence is being baked into the software to make it predict more interesting things for you, to make decisions that are AI-based instead of rule-based. And so it is very important that when we are building this software, we use ethical practices. We need to know where we're collecting the data from. It can be very dangerous if you don't, and you can land in trouble. We have seen these incidents many times, right? For example, in 2019, when Apple and Goldman Sachs came out with a credit card, a lot of customers complained about gender bias with respect to the credit limits that the algorithm was setting. In the same household, there could be a 10 times difference between the credit limit set for the husband and for the wife, even though they probably had similar salary ranges and similar FICO scores, right? So you have to make sure that you're collecting data from the right sources, that your datasets are not imbalanced, that your models and algorithms are tested for bias before you deploy them, and that you're continuously monitoring them afterward. These are all ethical practices, the responsible ways of building your AI. If you don't follow them, you can land in trouble: your customers will complain about it, you would lose your brand reputation, and at the end of the day, instead of adding value to your customers, you may actually be hurting them. And so this is why it's so important, and it becomes more important the higher the stakes are. For example, when AI is being used in criminal justice scenarios, or in clinical diagnosis scenarios, being able to ensure that the system is making unbiased decisions is very, very important. >> Well, before I let you go, I'd like you to touch base on your AWS relationship: what was the genesis of that, and what is it that you're currently working on together to provide this great value to your customers? >> Absolutely. So following up on this ethical AI discussion, Amazon as a company is interested in pursuing responsible AI, and they have a lot of AI products. They are looking to foster a community and ecosystem of AI technologies, and with that hypothesis they invested in Fiddler last year, enabling us to develop this explainable AI and ethical AI technology. So we are working with the Alexa Fund and also with the AWS ecosystem, partnering on how Fiddler can be delivered effectively to other AWS customers, through their marketplace and the other channels where we can distribute the software. It's a great partnership. We are very, very excited about the opportunity to work with the Alexa Fund as well as the AWS ecosystem. It gives us another opportunity to enable a lot more customers than we could otherwise. So this is a great win-win situation for both Amazon and Fiddler. >> Well, it sure is, and congratulations on developing that partnership. I know it's working well for your clients, and it's working well for Fiddler AI, obviously, by the number of recognitions that have been coming your way. So Krishna, we wish you continued success, and thanks for the time here today on "theCUBE". >> Yep. Thank you so much, John. It was a pleasure talking to you today. >> I enjoyed it. Thank you. John Walls here, wrapping up our conversation with Fiddler AI's Krishna Gade, talking today about machine learning monitoring on the "AWS Startup Showcase". (upbeat pop music)