
Search Results for Krishna Gade:

Krishna Gade, Fiddler.ai | Amazon re:MARS 2022


 

(upbeat music)

>> Welcome back. Day two of theCUBE's coverage of re:MARS in Las Vegas. Amazon re:MARS, it's part of the "re" series, as they call it at Amazon. re:Invent is their big show, re:Inforce is a security show, re:MARS is the new emerging machine learning, automation, robotics, and space show. The confluence of machine learning powering a new industrial age and inflection point. I'm John Furrier, host of theCUBE. We're here to break it down for another wall-to-wall coverage. We've got a great guest here, a CUBE alumni from our AWS Startup Showcase, Krishna Gade, founder and CEO of Fiddler.ai. Welcome back to theCUBE. Good to see you.

>> Great to see you, John.

>> In person. We did the remote one before.

>> Absolutely, great to be here, and I always love to be part of these interviews and love to talk more about what we're doing.

>> Well, you guys have a lot of good street cred, a lot of good word of mouth around the quality of your product and the work you're doing. I know a lot of folks that I admire and trust in the AI and machine learning area say great things about you. A lot going on, you guys are a growing company. So you're kind of like a startup on a rocket ship, getting ready to go, pun intended here at the space event. What's going on with you guys? You're here. Machine learning is the centerpiece of it. Swami gave the keynote here at day two, and it really is an inflection point. Machine learning is now ready, it's scaling, and some of the examples that they were showing with the workloads and the data sets that they're tapping into: you've got CodeWhisperer, which they announced, you've got trust and bias now being addressed. We're hitting a new level in ML, ML operations, ML modeling, ML workloads for developers.

>> Yep, yep, absolutely. You know, I think machine learning has now become operational software, right? A lot of companies are investing millions and billions of dollars and creating teams to operationalize machine learning based products. And that's the exciting part. The thing that is very exciting for us is that we are helping those teams observe how those machine learning applications are working, so that they can build trust into them. Because I believe, as Swami was alluding to today, without actually building trust into AI, it's really hard to have your business users use it in their business workflows. And that's where we are excited about bringing that trust and visibility factor into machine learning.

>> You know, a lot of us know what you guys are doing here in the ecosystem of AWS. And now extending here, take a minute to explain what Fiddler is doing for the folks that are in the space, that are in discovery mode, trying to understand who's got what, because like Swami said on stage, it's a full-time job to keep up on all the machine learning activities and tool sets and platforms. Take a minute to explain what Fiddler's doing, then we can get into some good questions.

>> Absolutely. As the enterprise takes on operationalization of machine learning models, one of the key problems that they run into is lack of visibility into how those models perform. For example, let's say I'm a bank trying to introduce credit risk scoring models using machine learning. How do I know when my model is rejecting someone's loan? When is my model accepting someone's loan? And why is it doing it?
And I think this is basically what makes machine learning a complex thing to implement and operationalize. Without this visibility, you cannot build trust and actually use it in your business. With Fiddler, what we provide is we actually open up this black box and help our customers really understand how those models work. For example, how is my model doing? Is it working accurately or not? Why is it actually rejecting someone's loan application? We provide both fine-grained as well as coarse-grained insights, so our customers can actually deploy machine learning in a safe and trustworthy manner.
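To make the "opening up the black box" idea concrete, here is a minimal sketch of the kind of per-prediction attribution Krishna describes, using the open-source shap library on a toy credit model. This is illustrative only, not Fiddler's product code; the feature names, synthetic data, and label rule are made-up assumptions.

```python
# Minimal sketch: which inputs drove one applicant's risk score?
# All data and feature names here are hypothetical illustrations.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "salary": rng.normal(70_000, 20_000, 1_000),
    "existing_debt": rng.normal(15_000, 8_000, 1_000),
    "fico_score": rng.normal(690, 50, 1_000),
})
# Toy label: "risky" when debt is large relative to salary.
y = (X["existing_debt"] / X["salary"] > 0.25).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Explain a single (hypothetical) rejected applicant.
applicant = X.iloc[[0]]
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(applicant)
for feature, value in zip(X.columns, shap_values[0]):
    print(f"{feature}: {value:+.3f} contribution to the risk score")
```

The signed contributions are what a loan officer could read as "salary pushed the risk down, existing debt pushed it up," which is the fine-grained insight being described.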
>> Who is your customer? Who are you targeting? What persona is it, the data engineer, is it data science, is it the CSO, is it all of the above?

>> Yeah, our customer is the data scientist and the machine learning engineer, right? And we usually talk to teams that have a few models running in production, that's basically our sweet spot, where they're trying to look for a single pane of glass to see what models are running in their production, how they're performing, how they're affecting their business metrics. So we typically engage with a head of data science or head of machine learning that has a few machine learning engineers and data scientists.

>> Okay, so those people that are watching, if you're into this, you can go check it out. It's good to learn. I want to get your thoughts on some trends that I see emerging, and I want to get your reaction to those. Number one, we're seeing the cloud scale now, and integration is a big part of things. So time to value was brought up on stage today. Swami mentioned time to value and showed some benchmark where they got four hours while some other teams were doing eight weeks. Where are we on the progression of value, time to value, and on the scale side? Can you scope that for me?

>> I mean, it depends, right? You know, depending upon the company. So for example, when we work with banks, the time to operationalize a model can take months, actually, because of all the regulatory procedures that they have to go through. They have to get the models reviewed by model validators and model risk management teams, and then they audit those models, they have to then ship those models and constantly monitor them. So it's a very long process for them. And even for non-regulated sectors, if you do not have the right tools and processes in place, operationalizing machine learning models can take a long time. With tools like Fiddler, what we are enabling is we are basically compressing that life cycle. We are helping them automate model monitoring and explainability so that they can actually ship models faster. You get velocity in terms of shipping models. For example, one of the growing fintech companies that started with us last year started with six models in production; now they're running about 36 models in production. So within a year, they were able to grow like 10x. So that is basically what we are trying to do.

>> Another thing, we're at re:MARS, so first of all, you've got a great product and a lot of markets to grow into, but here you've got space. I mean, anyone who's coming out of a college or university PhD program, if they're into aero, they're going to be here, right? This is where they are. Now you have a new core competency with machine learning, not just the engineering that you see in the space or aerospace area; you have a new engineering. Now I go back to the old days where, for my parents, there was Fortran. Fortran was the lingua franca to manage the equipment. A little throwback to the old school. But now machine learning is the companion, a first-class citizen, to the hardware. And in fact, some will say more important.

>> Yep. I mean, the machine learning model is the new software artifact. It is going into production in a big way. And I think it has two differences compared to traditional software. Number one, unlike traditional software, it's a black box. You cannot read a machine learning model's score and see why it's making those predictions. Number two, it's a stochastic entity. What that means is its predictive power can wane over time. So it needs to be constantly monitored and constantly refreshed so that it's actually working intact. So those are the two main things you need to take care of. And if you can do that, then machine learning can give you a huge amount of ROI.
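Krishna's point that a model's "predictive power can wane" is typically caught by comparing the distribution the model was trained on against what it sees live. Below is a minimal population stability index (PSI) sketch, one common drift check; the synthetic data and the 0.2 threshold are conventional rules of thumb, not anything Fiddler-specific.

```python
# Minimal PSI drift check between a training-time and a live distribution.
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index: how far 'actual' drifted from 'expected'."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_counts, _ = np.histogram(expected, cuts)
    a_counts, _ = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)
    e_frac = e_counts / max(e_counts.sum(), 1) + 1e-6  # avoid log(0)
    a_frac = a_counts / max(a_counts.sum(), 1) + 1e-6
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.30, 0.10, 10_000)  # scores seen at training time
live_scores = rng.normal(0.45, 0.12, 2_000)    # scores observed this week

value = psi(train_scores, live_scores)
if value > 0.2:  # common rule of thumb: > 0.2 means a significant shift
    print(f"PSI={value:.2f}: inputs have drifted, consider a refresh/retrain")
```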
>> There is some practitioner kind of craft to it.

>> Correct.

>> As you said, you've got to know when to refresh, what data sets to bring in, which to stay away from, certainly when you get to the bias, but I'll get to that in a second. My next question is really along the lines of software. So if you believe that open source will dominate the software business, which I do, I mean, most people won't argue. I think you would agree with that, right? Open source is driving everything. If everything's open source, where's the differentiation coming from? So if I'm a startup entrepreneur or I'm a project manager working on the next Artemis mission, I've got to open source. Okay, there's definitely security issues here. I don't want to talk about shift left right now, but like, okay, open source is everything. Where's the differentiation, where do I have the proprietary edge?

>> It's a great question, right? So I used to work in tech companies before Fiddler. When I used to work at Facebook, we would build everything in house. We would not even use a lot of open source software. So there are companies like that that build everything in house. And then I also worked at companies like Twitter and Pinterest, which actually used a lot of open source, right? So now, the thing is, it depends on the maturity of the organization. If you're a Facebook or a Google, you can build a lot of things in house. If you're a modern tech company, you would probably leverage open source, but there are lots of other companies in the world that still don't have the talent pool to take things from open source and productionize them. And that's where the opportunity for startups comes in, so that we can commercialize these things, create a great enterprise experience, and actually operationalize things for them so that they don't have to do it in house. And that's the advantage of working with startups.

>> I don't want to get all operating systems with you on theory here on the stage, but I will have to ask you the next question, and I totally agree with you, by the way, that's the way to go. There's not a lot of people out there that are peaked. And that's just statistical, and it'll get better. Data engineering is really narrow. That is like the SRE of data. That's a new role emerging. Okay, all these things are happening. So if open source is there, integration is a huge deal. And you start to see the rise of a lot of MSPs, managed service providers. I run Kubernetes clusters, I do this, that, and the other thing. So what's your reaction to the growth of the integration side of the business and this role of new services coming from third parties?

>> Yeah, absolutely. I think one of the big challenges for a chief data officer or someone like a CTO is how they devise this infrastructure architecture with components, either homegrown components or open source components or some vendor components, and how they integrate. When I used to run data engineering at Pinterest, we had to devise a data architecture combining all of these things and create something that actually flows very nicely, right?

>> If you didn't do it right, it would break.

>> Absolutely. And this is why it's important for us at Fiddler to really make sure that Fiddler can integrate with all varieties of ML platforms. Today, a lot of our customers build machine learning models on SageMaker. So Fiddler integrates nicely with SageMaker so that they get a seamless experience to monitor their models.
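As a rough illustration of the integration pattern being described: assuming model inferences have already been captured as JSON lines in S3 (for example, via SageMaker's data capture feature), a small job could replay them into an observability system. `MonitoringClient`, the bucket, key, and record field names below are hypothetical stand-ins, not a real Fiddler or AWS API.

```python
# Hedged sketch: forward captured inference records to a monitoring system.
# The capture location and record layout are assumptions for illustration.
import json
import boto3

class MonitoringClient:  # hypothetical stand-in for a real monitoring SDK
    def log_event(self, model_id, inputs, output):
        print(f"[{model_id}] inputs={inputs} output={output}")

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-capture-bucket",
                    Key="captures/2022/06/part-0.jsonl")

monitor = MonitoringClient()
for line in obj["Body"].read().decode("utf-8").splitlines():
    record = json.loads(line)
    # "input"/"output" are assumed field names in the captured record.
    monitor.log_event(model_id="credit-risk-v3",
                      inputs=record.get("input"),
                      output=record.get("output"))
```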
>> Yeah, I mean, this might not be the right words for it, but I think data engineering as a service is really what I see you guys doing, as well as other things; you're providing all that.

>> And ML engineering as a service.

>> ML engineering as a... Well, it's hard. I mean, it's like the hard stuff.

>> Yeah, yeah.

>> Hear, hear. But that has to enable. So you, as a business entrepreneur, have to create a multiple of value proposition for your customers. What's your vision on that? What is that value? It has to be a multiple, at least 5 to 10.

>> I mean, the value is simple, right? If you have to operationalize machine learning, you need visibility into how these things work. If your CTO or chief data officer is asking how my model is working and how it's affecting my business, you need to be able to show them a dashboard of how it's working, right? A data scientist today struggles to do this. They have to manually generate a report, manually do this analysis. What Fiddler does for them is basically reduce that work, so that these things are automated and they can still focus on the core aspects of model building and data preparation, while the boring aspect of monitoring the models and creating reports around the models is automated for them.

>> Yeah, you guys have a great business. I think there's a lot of great future there, and it's only going to get bigger. Again, the TAM's going to expand as the rising tide comes in. I want to ask you, while we're on that topic of rising tides: Dave Vellante and I, since re:Invent last year, have been kicking around this term that we made up called supercloud. And supercloud was a word that came out of these clouds that were not Amazon hyperscalers. So Snowflake, Goldman Sachs, Capital One, you name it, they're building massive proprietary value on top of the CapEx of Amazon. Jerry Chen at Greylock calls it castles in the cloud. You can create these moats.

>> Yeah, right.

>> So this is a phenomenon, right? And you land on one, and then you go to the others. So the strategy is, everyone goes to Amazon first, and then hits Azure and GCP. That then creates this kind of multicloud. So, okay, supercloud's kind of happening, it's a thing. Charles Fitzgerald will disagree; he's a platformer, he says he's against the term. I get why, but he's off base a little. We can't wait to debate him on that. So superclouds are happening, but now what do I do about multicloud? Because now I understand multicloud, I have this on that cloud, and integrating across clouds is a very difficult thing.

>> Krishna: Right, right, right.

>> If I'm Snowflake or whatever, hey, I'll go to Azure: more TAM expansion, more market. But are people actually working together? Are we there yet? Where it's like, okay, I'm going to re-operationalize this code base over here.

>> I mean, the reality of it is, the enterprise wants optionality, right? I think they don't want to be locked into one particular cloud vendor or one particular software. And therefore you actually have a situation where you have a multicloud scenario, where they want to have some workloads in Amazon and some workloads in Azure. And this is an opportunity for startups like us, because we are cloud agnostic. We can monitor models wherever you have them. A lot of our customers have some of their models running in their data centers and some of their models running in Amazon. And so we can provide a universal single pane of glass, right? We can basically connect all of that data and actually showcase it. I think this is an opportunity for startups to combine the data streams that come from various different clouds and give them a single pane of experience. That way, the question of where is your data, where are my models running, which cloud are they in, is all abstracted out from the customer. Because at the end of the day, enterprises will want optionality. And we are in this multicloud world.

>> Yeah, I mean, this reminds me of the interoperability days back when I was growing up in the business. Everything was interoperability and OSI, and the standards came out. But what's your opinion on openness, okay? There's a knee-jerk reaction right now in the market to go silo on your data for governance or whatever reasons, but yet machine learning gurus and experts will say, "Hey, if you want horizontal scalability and the best machine learning models, you've got to have access to data, fast, in real time or near real time." And the antithesis is siloing.

>> Krishna: Right, right, right.

>> So what's the solution? Customers control the data plane and have a control plane that's... What do customers do? It's a big challenge.

>> Yeah, absolutely. I think there are multiple different architectures of ML, right? We've seen vendors like us deploy completely on-prem, and we still do it for some customers. Then you have this managed cloud experience where you just abstract out the entire operations from the customer. And now you have this hybrid experience where you split the control plane and data plane. So you preserve the privacy of the customer from the data perspective, but you still control the infrastructure. I don't think there's a right answer; it depends on the product that you're trying to build. Databricks is able to solve this control plane, data plane split really well. I've seen some other tools that have not done this really well. So I think it all depends upon-

>> What about Snowflake? I think they a-

>> Sorry, correct. They have a managed cloud service, right? So predominantly that's their business. So I think it all depends on what your go to market is, which customers you're talking to, what your product architecture looks like.
You know, from Fiddler's perspective today, we have chosen to either go completely on-prem or provide a managed cloud service, and that's actually simpler for us instead of splitting-

>> John: So it's customer choice.

>> Exactly.

>> That's your position.

>> Exactly.

>> Wherever you want to use Fiddler: go on-prem, no problem, or cloud.

>> Correct, or cloud, yeah.

>> You'll deploy, and you'll work across whatever observability space you want to.

>> That's right, that's right.

>> Okay, yeah. So that's the big challenge, all right. What's the big observation from your standpoint? You've been on the hyperscaler side on your journey, Facebook, Pinterest, so back then you built everything, because no one else had software for you. But now everybody wants to be a hyperscaler, and there's a huge CapEx advantage. What should someone do? If you're a big enterprise, obviously I could be a big insurance company, I could be financial services, oil and gas, whatever vertical, and I want a supercloud. What do I do?

>> I think the biggest advantage enterprises today have is that they have a plethora of tools. When I used to work on machine learning way back at Microsoft on Bing Search, we had to build everything: training platforms, deployment platforms, experimentation platforms. How do we monitor those models? Everything had to be homegrown, right? A lot of open source also did not exist at the time. Today, the enterprise has this advantage; they're sitting on a gold mine of tools. Obviously, there's probably a little bit of tool fatigue as well. You know, which tools to select?

>> There's plenty of tools available.

>> Exactly, right? And then there are services available for you. So now you need to make smarter choices to cobble these together, to create a workflow for your engineers. And you can really get started quite fast and actually get on par with some of these modern tech companies. And that is the advantage that a lot of enterprises see.

>> If you were going to be the CTO or CEO of a big transformation, knowing what you know, because you just brought up the killer point about why it's such a great time right now: you've got platform as a service, and the tooling essentially reset everything. So if you're going to throw everything out and start fresh, you're basically redoing the system architecture. It's a complete reset. That's doable. How fast do you think you could do that for, say, a large enterprise?

>> See, I think if you set aside the organizational processes and whatever friction kind of comes in, from a technology perspective, it's pretty fast, right? You can devise a data architecture today with tools like Kafka, Snowflake, and Redshift. You can actually devise a data architecture very clearly right from day one and actually implement it at scale. And then once you have accumulated enough data and you can extract more value from it, you can go and implement your MLOps workflow on top of it as well. And I think this is where tools like Fiddler can help as well. So I would start with looking at data: do we have centralization of data? Do we have governance around data? Do we have analytics around data? And then kind of get into machine learning operations.
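A small sketch of the first leg of the data architecture Krishna outlines: application events flowing into Kafka, from which a downstream connector would land them in a warehouse like Snowflake or Redshift. The topic name and event fields are illustrative assumptions; this uses the kafka-python client.

```python
# Minimal sketch: emit application events into Kafka. A connector (e.g.,
# Kafka Connect) downstream would batch these into the warehouse.
import json
from kafka import KafkaProducer  # kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed local broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Hypothetical loan-application event; field names are made up.
producer.send("loan_applications", {
    "applicant_id": "a-123",
    "salary": 82_000,
    "requested_amount": 10_000,
})
producer.flush()
```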
>> Krishna, always great to have you on theCUBE. You're a great masterclass guest. Obviously great success in your company. Been there, done that, and doing it again. I've got to ask you, since you just brought that up about the whole reset: what is the superhero persona right now? Because it used to be the full stack developer, you know? And then I called them, and it didn't go over very well on theCUBE, the half stack developer, because nobody wants to be a half stack anything; a half sounds bad, worse than full. But cloud is essentially half a stack. I mean, you've got infrastructure, you've got tools. Now you're talking about a persona that's going to reset: look at tools, make selections, build an architecture, build a distributed computing operating environment. Who is that person? What does that persona look like?

>> I mean, I think the superhero persona today is ML engineering. I'm usually surprised how much is put on an ML engineer these days. When I entered the industry as a software engineer, I had three or four things in my job to do: I write code, I test it, I deploy it, I'm done. Today, as an ML engineer, I need to worry about my data. How do I collect it? I need to clean the data, I need to train my models, I need to experiment with them, I need to deploy them, and I need to make sure that they're working once they're deployed.

>> Now you've got to do all the DevOps behind it.

>> And all the DevOps behind it. And so I'm working halftime as a data scientist, halftime as a software engineer, halftime as a DevOps-

>> Cloud architect.

>> It's like a heroic job. And I think this is why these jobs are now really hard jobs, and people want to be more and more in machine learning

>> And they get paid.

>> engineering.

>> Commensurate with the-

>> And they're paid commensurately as well. And this is where I think an opportunity for tools like Fiddler exists as well, because we can help those ML engineers do their jobs better.

>> Thanks for coming on theCUBE. Great to see you. We're here at re:MARS. And great to see you again. And congratulations on being on the AWS Startup Showcase that we're in, year two, episode four, coming up. We'll have to have you back on. Krishna, great to see you. Thanks for coming on. Okay, this is theCUBE's coverage here at re:MARS. I'm John Furrier, bringing all the signal from all the noise here. Not a lot of noise at this event; it's very small, very intimate, a little bit different, but all on point with space, machine learning, robotics, the future of industrial. We'll be back with more coverage after the short break.

>> Man: Thank you, John. (upbeat music)

Published: Jun 23, 2022



Krishna Gade and Amit Paka, Fiddler.ai | AWS Startup Showcase 2021


 

(upbeat music)

>> Hello and welcome to theCUBE as we present the AWS Startup Showcase: The Next Big Thing in AI, Security & Life Sciences, featuring the hottest startups. And today's session is really about the next big thing in AI; the AI track is a big one, most important. And we have a featured company, Fiddler.ai. I'm your host, John Furrier with theCUBE. And we're joined by the founders, Krishna Gade, founder and CEO, and Amit Paka, founder and Chief Product Officer. Great to have the founders on. Gentlemen, thank you for coming on this CUBE segment for the AWS Startup Showcase.

>> Thanks, John...

>> Good to be here.

>> So the topic of this session is staying compliant and accelerating AI adoption and model performance monitoring. Basically, the bottom line is how to be innovative with AI and stay (John laughs) within the rules of the road, if you will. So, super important topic. Everyone knows the benefits of what AI can do. Everyone sees machine learning being embedded in every single application, but the business drivers of compliance and all kinds of new regulations are popping up. So the question is, how do you stay compliant? Which is essentially, how do you not foreclose the future opportunities? That's really the question on everyone's mind these days. So let's get into it. But before we start, let's take a minute to explain what you guys do. Krishna, we'll start with you first. What does Fiddler.ai do?

>> Absolutely, yeah. Fiddler is a model performance management platform company. We help enterprises and mid-market companies build responsible AI by continuously monitoring their AI, analyzing it, and explaining it, so that they know what's going on with their AI solutions at any given point of time, and they can ensure that their businesses are intact and they're compliant with all the regulations that they have in their industry.

>> Everyone thinks AI is a secret sauce. It's magic beans that will automatically just change the company. (John laughs) So it's kind of like, it's almost a hope. But the reality is there is some value there, but there's something that has to be done first. So let's get into what this model performance management is, because it's a concept that needs to be understood well, but you've also got to implement it properly. There are some foundational things: you've got to, you know, crawl before you walk and walk before you run kind of thing. So let's get into it. What is model performance management?

>> Yeah, that's a great question. So the core software artifact of an AI system is called an AI model. It essentially represents the patterns inside data in a succinct manner so that it can actually predict the future. Now, for example, let's say I'm trying to build an AI based credit underwriting system. What I would do is look at the historical loans data, you know, good loans and bad loans. And then I would build a model that can capture those patterns, so that when a new customer comes in, I can actually predict how likely they are to default on the loan much more accurately. And this helps me, as a bank or a lending company, to produce more good loans for my company and ensure that my customer is getting the right customer service. Now, the problem, though, is that this AI model is a black box. Unlike regular software code, you cannot really open it up, read its code, and see its patterns and how it is working.
And so that's where the risks around the AI models come along. And so you need ways to actually explain it. You need to understand it and you need to monitor it. And this is where a model performance management system like Fiddler can help you look into that black box: understand how it's doing it, monitor its predictions continuously, so that you know what these models are doing at any given point of time.

>> I mean, I'd love to get your thoughts on this, because, first of all, it's a totally awesome concept; no one debates that. But now you've got more and more companies integrating with each other, and more data's being shared. And so, you know, everyone knows what an app sec review is, right? But now they're thinking about this concept of how do you do a review of models, right? So understanding what's inside the black box is a huge thing. How do you do this? What does it mean?

>> Yeah, so typically what you would do is, it's just like software, where you would validate software code through QA and analysis. In the case of models, you would try to probe the model at different granularities to really understand how the model is behaving. This could be at a model prediction level: in the case of the loans example Krishna just gave, why is my model saying high risk for a particular loan? Or it might be explaining groups of loans: for example, why is my model making high-risk predictions for loans made in California, or loans made to all men versus loans made to all women? And it could also be at the global level: what are the key data factors important to my model? So it's the ability to probe the model deeper, really opening up the black box, and then using that knowledge to explain how the model is working to non-technical folks in compliance, or to regulators, who just want to ensure that they know how the model works, to make sure that it's keeping up with lending regulations, to ensure that it's not biased, and so on. So that's typically the way you would do it with a machine learning model.
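A minimal sketch of the group-level probing Amit describes: slice the model's decisions by a segment and compare rates across slices. The data and the "80% rule" threshold are illustrative; this is one simple fairness screen, not Fiddler's implementation.

```python
# Toy slice analysis: compare approval rates across a segment of interest.
# Column names and values are hypothetical.
import pandas as pd

preds = pd.DataFrame({
    "state":    ["CA", "CA", "NY", "NY", "CA", "NY"],
    "approved": [0,     0,    1,    1,    1,    1],
})

rates = preds.groupby("state")["approved"].mean()
print(rates)

# Common screen: the ratio between the lowest and highest group rate
# (the "80% rule" from disparate-impact analysis).
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"approval-rate ratio {ratio:.2f} < 0.8: investigate this slice")
```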
>> Krishna, talk about the potential embarrassments that could happen. You just mentioned some of the use cases you heard from Amit, saying, you know, female, male. I mean, machines aren't that smart. (John laughs)

>> Yeah.

>> If they don't have the data.

>> Yeah.

>> And data is fragmented; you've got silos with all kinds of challenges just on the data problem, right?

>> Yeah.

>> So never mind the machine learning problems. So, this is huge. I mean, the embarrassment opportunities.

>> Yeah.

>> And the risk management, whether it's a hack or something else. So you've got public embarrassment when something really goes wrong. And then you've got the real business impact that could be damaging.

>> Absolutely. You know, AI has come forward a lot, right? I mean, you have lots of data these days, you have a lot of computing power and amazing algorithms, so you can actually build really sophisticated models. Some of these models were known to beat humans in image recognition and whatnot. However, the problem is there are risks in using AI without properly testing it, without properly monitoring it. For example, a couple of years ago, Apple and Goldman Sachs launched a credit card for their users, where they were using algorithms, presumably AI or machine learning algorithms, to set credit limits. What happened was, within the same household, a husband and wife got a 10 times difference in the credit limits being set for them. And some of these people had similar FICO scores and similar salary ranges. And some of them went online and complained about it, and that included the likes of Steve Wozniak as well.

>> Yeah.

>> So these kinds of stories are hugely embarrassing; you could lose customer trust overnight, right? And you have to do a lot of PR damage control. Eventually, there was a regulatory probe into Goldman Sachs. So there are these problems if you're not properly monitoring AI systems, properly validating and testing them before you launch to the users. And that is why tools like Fiddler are coming forward, so that enterprises can do this, so that they can ensure responsible AI for both their organization as well as their customers.

>> That's a great point. I want to get into this, what it kind of means and the industry side of it, and then how that impacts customers. If you guys don't mind: machine learning ops, a term MLOps, has been coined in the industry, as you know; basically, operations around machine learning, which gets into the workflows and development life cycles. But ultimately, as you mentioned, with this black box and this model being made, there's a heavy reliance on data. So Amit, what does this mean? Because now it becomes operational with MLOps. There are now internal workflows and activities and roles and responsibilities. How is this changing organizations? You know, separate from the embarrassment, which is totally true; now I've got an internal operational aspect, and there's dev involved. What's the issue?

>> Yeah, so typically, if you look at the whole life cycle of machine learning ops, in some ways it mirrors the traditional life cycle of DevOps, but in some ways it introduces new complexities. Specifically, because the models can be a black box; that's one thing to watch out for. And secondly, because these models are probabilistic artifacts, which means they are trained on data to capture relationships, so that they can potentially make high-accuracy predictions. But the data that they see in real life might actually differ, and that might hurt their performance, especially because machine learning is applied to these high-ROI use cases. So this process of MLOps needs to change to incorporate the fact that machine learning models can be black boxes and machine learning models can decay. And the second part, I think, that's also relevant is, because machine learning models can decay, you don't just create one model, you create multiple versions of these models. And so you have to constantly stay on top of how your model is deviating from actual reality, and kind of bring it back to that representation of reality.
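One way to "stay on top of how your model is deviating": once delayed ground-truth labels arrive, track a rolling quality metric per deployed version and flag when it slips below the validation baseline. The data, baseline, and tolerance below are made-up illustrations (the synthetic random data will trip the alert; the point is just the wiring).

```python
# Toy decay monitor: weekly AUC against a validation-time baseline.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
log = pd.DataFrame({
    "week":  np.repeat(np.arange(8), 200),
    "score": rng.random(1600),                # model's predicted risk
    "label": rng.integers(0, 2, 1600),        # actual (delayed) outcome
})

baseline_auc = 0.78                            # AUC measured at validation
for week, grp in log.groupby("week"):
    auc = roc_auc_score(grp["label"], grp["score"])
    if auc < baseline_auc - 0.05:              # tolerate a small wobble
        print(f"week {week}: AUC {auc:.2f}, model decayed, refresh/retrain")
```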
>> So this is interesting, I like this. So now there's a model for the model. So this is interesting. You guys have innovated on this model performance management idea. Can you explain the framework and how you guys solve that regulatory compliance piece? Because if you can be a model of the model, if you will...

>> Then.

>> Then you can have some stability around maintaining the code base or the integrity of the model.

>> Okay.

>> How does that work? What do you guys offer? Take us through the framework and how it works, and then how it ties to that regulatory piece.

>> So the MPM system, or the model performance management system, really sits at the heart of the machine learning workflow: keeping track of the data that is flowing through your ML life cycle, keeping track of the models that are getting created and deployed and how they're performing, keeping track of all parts of the models. So it gives you a centralized way of managing all of this information in one place, right? It gives you oversight, from a compliance standpoint and from an operational standpoint, of what's going on with your models in production. Imagine you're a bank. You're probably creating hundreds of these models for a variety of use cases: credit risk, fraud, anti-money laundering. How are you going to know which models are actually working very well, which models are stale, which models are expired? How do you know which models are underperforming? Are you getting alerts? So this kind of governance, this performance management, is what the system offers. It's a visual interface with lots of dashboards that developers, operations folks, and compliance folks can go and look into, and they get alerts when things go wrong with respect to their models. In terms of how it can help in meeting compliance regulations: for example, let's say I'm starting to create a new credit risk model in a bank. Now, I'm innovating on different AI algorithms here, but immediately, before I even deploy that model, I have to validate it. I have to explain it and create a report, so that I can submit it to my internal risk management team, which can then review it and understand all kinds of risks around it, and then potentially share it with the audit team, and then keep a log of these reports, so that when a regulator comes and visits, they can share these reports that show how the model was created. Fiddler helps them create these reports and keep all of them in one place. And then once the model is deployed, it can help them monitor these models continuously. So they don't just have one ad hoc report created upfront; they have continuous monitoring, a continuous dashboard of what the model was doing over the last however many months it was running.
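A hedged sketch of the kind of pre-deployment validation artifact Krishna describes: the report a model risk management team would review and archive for regulators. The structure and numbers below are invented for illustration and are not Fiddler's actual report format.

```python
# Toy model-validation report, written out so it is retrievable at audit time.
# Every field name and value here is a hypothetical illustration.
import json
from datetime import date

report = {
    "model_id": "credit-risk-v3",
    "created": str(date.today()),
    "validation_metrics": {"auc": 0.78, "ks_statistic": 0.41},
    "fairness_checks": {"approval_ratio_by_sex": 0.91},  # 80% rule screen
    "global_explanations": {"salary": 0.34, "existing_debt": 0.27,
                            "fico_score": 0.22},          # importances
    "reviewer": "model-risk-team",
    "status": "approved",
}

with open("credit-risk-v3-validation.json", "w") as f:
    json.dump(report, f, indent=2)
```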
>> Historically, if you look at how AI applications are regulated in the U.S., the legacy regulations are the ones applied today, such as the Equal Credit Opportunity Act, or the Fed guidelines like SR 11-7, which are applicable to all banks. So there is no purpose-built AI regulation, but the EU released a proposed regulation just about three weeks back that classifies risk within applications, and specifically for high-risk applications, it proposes new oversight: mandating explainability, helping teams understand how the models are working, and monitoring to ensure that when a model is trained for high accuracy, it maintains that. So those two mandatory needs of high-risk applications are the ones that are solved by Fiddler.

>> Yeah, you mentioned explainable AI. Could you just quickly define that for the audience? Because this is a trend we're seeing a lot more of. Take a minute to explain: what is explainable AI?

>> Yeah, as I said in the beginning, the AI model is a new software artifact that is being created. It is the core of an AI system. It's what represents all the patterns in the data, encodes them, and then uses that knowledge to predict the future. Now, how it encodes all of these patterns is black magic, right? You really don't know how the model is working. And so explainable AI is a set of technologies that can help you unlock that black box: quote-unquote debug that model, let the model be introspected, inspected, probed, whatever you want to call it, to understand how it works. For example, let's say I created an AI model that, again, predicts loan risk. Now let's say a person comes to my bank and applies for a $10,000 loan, and the model rejects the loan. Now, why did it do it, right? That's a question that explainability can answer. It can answer: hey, you know, the person's salary range is contributing to 20% of the loan risk, or this person's previous debt is contributing to 30% of the loan risk. So you get a detailed set of dashboards showing the attribution of the composite loan risk across all the inputs that the model is observing. And so you now know how the model is treating each of these inputs, and you have an idea of how the person is getting affected by this loan risk model. So now as a human, as an underwriter or a lending officer, I have knowledge about how the model is working. I can then layer my human intuition on top of it. I can approve the model's decision sometimes, I can disapprove it sometimes, and I can deliver this feedback to the data science team, the AI team, so they can actually make the model better over time. So unlocking the black box has several benefits throughout the life cycle.

>> That's awesome. Great definition. Great call. I wanted to get that on the record for the audience. Also, we'll make a clip out of that too. One of the things that, Amit, you brought up that I love and want to get into is this MLOps impact. So as we were just talking earlier: debugging models in production, totally cool, relevant, unpacking the black box. But model decay, that's an interesting concept. Can you explain more? Because this, to me, is potentially a big blind spot for the industry, because, you know, I talked to Swami at Amazon, who runs their AI group, and they want to make AI easier and ML easier with SageMaker and other tools. But you can fall into a trap of thinking everything's done once and done. It's iterative; you've got leverage here. You've got to keep track of the performance of the models, not just debug them. Are they actually working? Is there new data? This is a whole other practice. Could you explain this concept of model decay?

>> Yeah, so let's look at the lending example Krishna was just talking about. Let's say you expect your customers to be regular citizens, right? So you will have examples in your training set of historical loans made to people between the ages of 40 and, let's say, 70. And so you will train your model, and your model will be trained to its highest accuracy in making loans to these types of applicants. But now let's say you introduce a new loan product that you're targeting at, let's say, younger, college-going folks. That model is not trained to work well in those kinds of scenarios. Or it could also happen that you get a lot more older people coming in to apply for these loans.
So the data that the model sees in live production might not represent the data that you trained the model with. The model has recognized relationships in the training data, and it might not recognize relationships in this new data. So this is a constant, I would say an ongoing, challenge that you face when you have a live model: ensuring that the reality meets your representation of the reality from when you trained the model. And so this is something that's unique to machine learning models. It has not been a problem historically in the world of DevOps, but it is a very key problem in MLOps.
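Amit's decay scenario, a new product pulling in younger applicants than the training set ever saw, can be caught with a simple training-versus-serving distribution test on the affected feature. The sketch below applies a two-sample Kolmogorov-Smirnov test to synthetic ages; the data and cutoff are illustrative.

```python
# Toy training-vs-serving check on one input feature (applicant age).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_age = rng.normal(52, 10, 10_000)  # ages seen at training time
live_age = rng.normal(24, 4, 1_000)     # ages after the new product launch

stat, p_value = ks_2samp(train_age, live_age)
if p_value < 0.01:
    print(f"KS={stat:.2f}: live inputs no longer match the training data")
```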
>> This is a really great topic. And most people who are watching might know of some of these problems when they see the mainstream press talk about fairness in black versus white skin and bias in algorithms. I mean, that's kind of what the press talks about, those big, high-level topics. But what it really means is that the data (John laughs) practices, fairness and bias and skewing, and all kinds of new things come up that the machines just can't handle. This is a big deal. So this is happening to every part of data in an organization. So, great problem statement. I guess the next segue would be: why Fiddler, why now? What are you guys doing? How are you solving these problems? Take us through some use cases. How do people engage with you guys? How do you solve the problem, and how do you see this evolving?

>> Great, so Fiddler is a purpose-built platform to solve for model explainability, model monitoring, and model bias detection. This is the only thing that we do, right? So we are super focused on building this tool to be useful across a variety of AI problems: from financial services to retail, to advertising, to human resources, healthcare, and so on and so forth. And we have found a lot of commonalities around how data scientists are solving these problems across these industries, and we've created a system that can be plugged into their workflows. For example, I could be a bank creating anti-money laundering models on a modern AI platform like TensorFlow. Or I could be a retail company that is building recommendation models in a library like PyTorch. You can bring all of those models under one sort of umbrella using Fiddler. We can support a variety of heterogeneous types of models, and that is a very, very hard technical problem to solve: to be able to ingest and digest all these different model types and then provide a single pane of glass for how the model is performing, explaining the model, and tracking the model life cycle throughout its existence. And so that is the value prop that Fiddler offers the MLOps team, so they can get this oversight. And this plugs in nicely with their MLOps, so they don't have to change anything, and it gives them the additional benefit...

>> So, you're basically creating faster outcomes, because the teams can work on real problems.

>> Right.

>> And not have to deal with the maintenance of model management.

>> Right.

>> Whether it's debugging or decay evaluations, right?

>> Right, we take care of all of their model operations from a monitoring standpoint, analysis standpoint, debuggability, alerting, so that they can just build the right kind of models for their customers. And we give them all the insights and intelligence to know the problems behind those models and behind their datasets, so that they can actually build more accurate models, more responsible models, for their customers.

>> Okay, Amit, give us the secret sauce. What's going on in the product? How does it all work? What's the secret sauce?

>> So there are three key pillars to the Fiddler product. One is, of course, we leverage the latest research, and we actually productize it in amazing ways, where when you explain models, you get the explanation within a second. So this activates new use cases like, let's say, counterfactual analysis: you can not only get explanations for your loan, you can also see, hypothetically, what if this loan applicant had a higher income? What would the model do? So, that's one part: productizing the latest research.
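A minimal sketch of the counterfactual probe Amit mentions ("what if this loan applicant had a higher income?"): re-score a perturbed copy of the input and compare. It assumes a fitted sklearn-style `model` and a one-row DataFrame `applicant` with a `salary` column; the names and the bump size are hypothetical.

```python
# Toy counterfactual: how does the risk score move if income were higher?
# Assumes `model` exposes predict_proba and `applicant` is a one-row
# DataFrame with a "salary" column (illustrative assumptions).
def counterfactual_income(model, applicant, bump=10_000):
    base = model.predict_proba(applicant)[0, 1]       # current risk score
    altered = applicant.copy()
    altered["salary"] = altered["salary"] + bump      # the "what if"
    new = model.predict_proba(altered)[0, 1]          # risk after the bump
    print(f"risk {base:.2f} -> {new:.2f} if income were ${bump:,} higher")
    return new - base
```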
The second part is infrastructure at scale. We are not just building something that works for SMBs; we are building something that works at enterprise scale: billions and billions of predictions flowing through the system. We want to make sure that we can handle as large a scale as seamlessly as possible. So we are trying to activate that and make sure we are the best enterprise-grade product on the market. And thirdly, user experience: what you'll see when you use Fiddler. When we do demos for customers, what they really see is the product. They don't see the scale right then and there. They don't see the deep research. What they see are these beautiful experiences that are very intuitive to them, where we've merged explainability and monitoring and bias detection in a seamless way. So you get the most intuitive experiences, designed not just for the technical user but also for the non-technical users, who are also stakeholders within AI.

>> So the scale thing is a huge point, by the way. I think that's something you see in successful companies. That's a differentiator, and frankly, it's the new sustainability. So it's new lock-in, if you will; not in a bad way, but in a good way. You do a good job, you get scale, you get leverage. I want to just point out and get your guys' thoughts on your approach on the framework, where you guys are centralized. So as decentralization continues to be a wave, you guys are taking a much more centralized approach. Why is that? Take us through the decision on that.

>> Yeah. I mean, in terms of decentralization: running models on different containers and scoring them on any number of nodes absolutely makes sense, right, from a deployment standpoint, from an inference standpoint. But when it comes to actually understanding how the models are working, visualizing them, monitoring them, knowing what's going on with the models, you need a centralized dashboard that an ops user can actually use, or that a head of AI governance inside a bank can use: what are all the models that my team is shipping? Which models carry risk? How were these models performing last week? For this, you need a centralized repository. Otherwise, it will be very, very hard to track these models, right? Because the number of models is going to grow really, really fast. There are so many open source libraries and open source model architectures being produced, and so many data scientists coming out of grad schools and whatnot, that the number of models in the enterprise is just going to grow manyfold in the coming years. Now, how are you going to track all of these things without having a centralized platform? And that's what we envisaged a few years ago: that every team will need an oversight tool like Fiddler, which can keep track of all of their models in one place. And that's what we are finding from our customers.

>> As long as you don't get in the way of them creating value, which is the goal, right?

>> Right.

>> And be frictionless, take away the friction.

>> Yeah.

>> And enable it. Love the concept. I think you guys are onto something big there: great products, great vision. The question I have for you, to kind of wrap things up here, is that this is all new, right? And new, it's all goodness, right? If you've got scale in the Cloud, all these new benefits; again, more techies coming out of grad school in Computer Science and Engineering, and data analysis in general is changing, and there are more people contributing, being democratized. How do you operationalize it? How do companies get this going? Because you've got a new thing happening. It's a new wave.

>> Okay.

>> But it's still the same game: make business run better.

>> Right.

>> So you've got to deploy something new. What's the operational playbook for companies to get started?

>> Absolutely. The first step, if a company is trying to incorporate AI into their workflow: you know, most companies, I would say, are still in early stages, right? A lot of enterprises are still developing these models. Some of them may have been in labs. ML operationalization is starting to happen, and it probably started a year or two ago, right? So when it comes to putting AI into practice: so far, you could have AI models in labs; they're not going to hurt anyone, they're not going to hurt your business, they're not going to hurt your users. But once you operationalize them, you have to do it in a proper manner, in a responsible manner, in a trustworthy manner. And so we actually have a playbook for how you would do this. How are you going to test these models? How are you going to analyze and validate them before they are actually deployed? How are you going to analyze, you know, look into data bias, training set bias, or test set bias? And once they are deployed to production, are you tracking model performance over time? Are you tracking drift in models, you know, the decay part that we talked about? Do you have alerts in place for when model performance goes all over the place? Now, all of a sudden you get a lot of false positives in your fraud models; are you able to track them? Do you have the personnel in place, the data scientists, the ML engineers, the MLOps engineers, the governance teams if it's in a regulated industry, to use these tools? And then tools like Fiddler will add value, will help them do their jobs, and institutionalize this process of responsible AI, so that they're not only reaping the benefits of this great technology; there's no doubt about AI, right, it's going to be game changing; but they can also do it in a responsible and trustworthy manner.
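A toy version of the alerting step in Krishna's playbook: watch a fraud model's rolling false-positive rate against a threshold derived from its historical baseline, and page someone when it breaks out. The data, window, threshold, and `notify` function are illustrative stand-ins.

```python
# Toy alert rule on a fraud model's rolling false-positive rate.
import pandas as pd

def notify(msg):                     # stand-in for email/Slack/pager
    print("ALERT:", msg)

# Hypothetical recent decisions joined with confirmed outcomes.
events = pd.DataFrame({
    "flagged": [1, 0, 1, 1, 0, 1, 1, 1],  # model said "fraud"
    "fraud":   [1, 0, 0, 0, 0, 0, 0, 0],  # confirmed outcome
})
events["false_positive"] = ((events["flagged"] == 1) &
                            (events["fraud"] == 0)).astype(int)

fp_rate = events["false_positive"].rolling(window=4).mean()
latest = fp_rate.iloc[-1]
if latest > 0.5:                     # threshold set from historical baseline
    notify(f"fraud model false-positive rate at {latest:.0%}")
```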
>> Yeah, it's really: get some wins, get some momentum, see it. This is the Cloud way. It gets them some value immediately, and they grow from there. I was talking to a friend the other day, Amit, about IT, the legacy. They don't worry about IT and all the Cloud. I go, there's no longer IT; IT is dead. It's an AI department now. (Amit laughs) And this is kind of what you guys are getting at. Now it's data, now it's AI. It's kind of like what IT used to be: enabling organizations to be successful. You guys are looking at it from the perspective of the same way it enabled success. You put it out there, and you provision (John laughs) algorithms instead of servers; they're algorithms now. This is the new model.

>> Yeah, we believe that all companies in the future, as happened with this wave of data, are going to be AI companies, right? So it's really just a matter of time. And the companies that are first movers in this are going to have a significant advantage. We're seeing that in banking already, where the banks that have made the leap into AI early are reaping the benefits, enabling a lot more models at the same risk profile using deep learning models, as long as they're able to validate these to ensure that they're meeting the regulations. But it's going to give significant advantages to a lot of companies as they move faster with respect to others in the same industry.

>> Yeah, quicker too. I see a trend too on the compliance side. You mentioned trust and transparency with the whole EU thing. Some are saying that, you know, to be a public company, you're going to have to have AI disclosure soon. You're going to have to have the disclosure in your public statements around how you're explaining your AI. Again, fantasy today, but pretty plausible.

>> Right, absolutely. I mean, the reality today is, you know, less than 10% of CEOs care about ethical AI, right? And that has to change. And I think that has to change for the better, because at the end of the day, if you are using AI, and you're not using it in a responsible and trustworthy manner, then there is regulation and compliance risk, there's operational business risk, and there's customer trust: losing customers' trust can be huge. So we want to provide that insurance, or, you know, a preventative mechanism, so that if you have these tools in place, you're less likely to get into those situations.

>> Awesome. Great, great conversation, Krishna, Amit. Thank you for sharing, both the founders of Fiddler.ai. Great company, on the right side of history in my opinion, the next big thing in AI: AI departments, AI compliance, AI reporting. (John laughs) Explainable AI, ethical AI, all part of this next revolution. Gentlemen, thank you for joining us on theCUBE's Amazon Startup Showcase.

>> Thanks for having us, John.

>> Okay, it's theCUBE coverage. Thank you for watching. (upbeat music)

Published Date: May 28, 2021


Krishna Gade, Fiddler AI | CUBE Conversation May 2021


 

(upbeat pop music) >> Well, hi everyone, John Walls here on "theCUBE" as we continue our CUBE Conversations as part of the "AWS Startup Showcase". We welcome in today Krishna Gade, who is the founder and CEO of Fiddler AI. Krishna, good to see you today. Thanks for joining us here on "theCUBE". >> Hey John, thanks so much for inviting us. I'm glad to be here and looking forward to our conversation. >> Yeah, me too. And first off, I want to say congratulations as I look at your company's tremendous roster, this list of awards that just keeps coming your way. Most recently recognized by "Forbes" as one of the Top 50 AI Companies To Watch here in 2021. I know Gartner called you one of their Cool Companies not too long ago. The World Economic Forum also gave you a shout out. So whatever it is you're doing, you're doing it very well, and it's got to feel good, I would think, some validation to get all this kind of recognition. >> Absolutely, we've been very fortunate to get all the recognition. You know, part of it is also because of the space we are playing in, right? A lot of companies are operationalizing AI, and therefore this whole question of explainability, monitoring, and governance of AI is at the forefront, and it's in the news for various reasons. So there's a lot of good talk going on in the press around how one should build responsible AI, and we are very fortunate to be in the space, pioneering some of the technologies here. >> Right. And talking about machine learning monitoring, obviously, in the AI space, you mentioned explainability. So let's talk about that concept broadly first off, and explain to our viewers what you mean by explainability in this particular context. >> Yeah, that's a good question. So if you think about an AI system, one of the main differences between it and a traditional software system is that it's a black box, in the sense that you cannot open it up and read its code like a traditional software system. The reason is that AI systems are built by training models on data, and those models are represented in a non-human-readable format. You cannot really understand how a model is actually making a prediction at any given point in time. So what happens is, when you are deploying these AI systems at scale for a variety of use cases, let's say credit underwriting, screening resumes, or clinical diagnosis, which matter enormously to the human beings involved, there is a need to understand how the AI system is working. Why did it approve this person's loan and reject someone else's? Why did it reject someone's resume in a job screening pipeline? How is it working overall, right? And so this is where explainability becomes important, because you need to understand the AI system. You need a way to probe it, to interrogate it, to understand how the system is making predictions and how it is being influenced by the various inputs you're supplying to it. And this gamut of technologies and algorithms that have come along in the last few years have really matured to a point where products like Fiddler are developing them and productizing them for the general enterprise to put into their machine learning and AI workflows.
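Fiddler's own explanations rest on more sophisticated attribution methods than this conversation goes into, so purely as a hedged illustration of what "probing" and "interrogating" a black box can mean, here is a sketch of permutation importance, one of the simplest model-agnostic techniques: shuffle one input feature at a time and measure how much the model's score degrades. The model, metric, and feature names in the usage comment are assumed stand-ins.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Probe a black-box model: shuffle one feature at a time and measure
    how much the score drops. A bigger drop means a more influential feature."""
    rng = np.random.default_rng(seed)
    base_score = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])   # break feature j's link to the target
            drops.append(base_score - metric(y, model.predict(X_perm)))
        importances[j] = float(np.mean(drops))
    return importances

# Hypothetical usage with any fitted classifier exposing .predict(),
# e.g. a credit-underwriting model:
#   accuracy = lambda y, p: float(np.mean(y == p))
#   scores = permutation_importance(model, X_test, y_test, accuracy)
#   for name, s in sorted(zip(feature_names, scores), key=lambda t: -t[1]):
#       print(f"{name}: {s:+.3f}")
```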
>> So you're talking about context, basically, right? Trying to give everybody an idea: this is where the input's coming from, this is where the problem is, this is where the bottleneck might be, whatever it is, and doing that in real time. Very efficient operation there. Well, let's talk about the ML world right now, how it relates to artificial intelligence, and the problem you are trying to fix in terms of machine learning monitoring. When you look at somebody's architecture and somebody's setup, what do you see? What are you looking for? And what kind of problems are you trying to solve for your clients? >> Yeah, so following up on what I said, the two main problems with operationalizing AI are, one, the black box nature of AI, which I already talked about. The other problem is that an AI system is fundamentally a stochastic, or probabilistic, system. By that, I mean that its performance, its predictions, can change over time based on the data it is receiving. It's not a deterministic system like traditional software, where you expect the same output all the time, right? So when you have a system that is stochastic in nature, whose performance can vary based on the data it is receiving, you are in a situation where you have uncertainty. Let's say you have an AI system deployed for a credit underwriting model or a fraud detection use case, and you see that sometimes accuracy is up, sometimes accuracy is down. When do you trust your predictions, and when do you not? How do you know the model is actually performing the way you trained it? All of these issues open up the need for continuous monitoring of these AI systems, because without it you may have AI systems making bad predictions for your users, hurting your business metrics, potentially making biased decisions that put your company into a compliance or brand reputation risk scenario. To avoid all of that, you can monitor these AI systems continuously, so that you know exactly whether they're performing the way you expect, whether you need to retrain them right now, or whether you need to shut them down because they are not predicting the way you expect. So this is very important, and that's what Fiddler solves for our customers: helping them operationalize AI with full visibility and explainability. You can essentially install Fiddler in your workflow to continuously monitor your AI systems, and analyze and explain them when you have questions about how they're working.
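One practical wrinkle in that continuous monitoring: ground truth often arrives days or weeks after a prediction (did the loan default, was the transaction fraudulent), so accuracy cannot always be watched in real time. A common complement, sketched below with invented numbers, is to watch the model's output distribution itself over a sliding window; the class name, baseline rate, and tolerance are assumptions for illustration.

```python
import numpy as np
from collections import deque

class PredictionMonitor:
    """Watch the share of positive predictions over a sliding window.
    Labels arrive late, so a shift in the output distribution is often
    the earliest sign that a live model has gone off the rails."""

    def __init__(self, window_size=500, baseline_rate=0.08, tolerance=0.04):
        self.window = deque(maxlen=window_size)
        self.baseline_rate = baseline_rate   # positive rate seen in validation
        self.tolerance = tolerance

    def observe(self, prediction):
        """Record one prediction; return True once the window looks anomalous."""
        self.window.append(prediction)
        if len(self.window) < self.window.maxlen:
            return False                     # not enough traffic yet
        live_rate = float(np.mean(self.window))
        return abs(live_rate - self.baseline_rate) > self.tolerance

# Toy example: a fraud model whose flag rate jumps from ~8% to ~25%.
monitor = PredictionMonitor()
rng = np.random.default_rng(1)
flagged = False
for p in rng.binomial(1, 0.25, size=500):
    flagged = monitor.observe(int(p))
print("anomalous window:", flagged)   # True
```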
>> I mean, you talked about governance earlier a little bit, and compliance, obviously a critical issue and a big concern, along with fraud detection and security in general. Almost every day, it seems, we're hearing about some kind of security intrusion. So in terms of identifying vulnerabilities or anomalies, whatever it might be, what kind of work are you doing in that space to give your client base the comfort and peace of mind that everybody's searching for these days? >> Right. I mean, if you step back a little bit, John, we are truly living in the age of algorithms. Everything we interact with on a day-to-day basis, the movies we watch, when we request an Uber driver, when we go to a financial institution and apply for a loan or a mortgage, there are algorithms behind the scenes processing our requests and delivering the experiences we have. Increasingly, these are AI-based algorithms, and they're trained on data that an institution may collect from its users or buy from third parties. When you develop AI systems on that data, if the data is not equally distributed among people of different ethnic backgrounds, different cultures, different religions, different races, different genders, you may build systems that make very different decisions for different individuals because of the bias that can creep into them. This means that at the end of the day you can create a dystopian world where some people get really great decisions from your systems and some people are left out, right? So this aspect of governing your AI systems matters: you validate what you're building up front, you validate the data you're using to train the systems, you continuously monitor the systems so they're producing the right outcomes for your users, and you can explain how your system is working when a customer, a regulator, or a third party asks. It's very, very important. This is an emerging area in industry. Certain sectors already have it, for example financial services, where banks are mandated to have model governance, so that every model they deploy must be validated and monitored. And we are seeing AI governance creeping into other sectors as well. So this is a broader topic that covers explainability, covers monitoring, covers detecting bias in your AI systems, and ensures you're building safe and responsible AI for your customers and your organization.
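A simple, widely used check for the group-level bias Krishna describes is demographic parity: compare decision rates across groups and compute the disparate-impact ratio between the least- and most-favored group. The decisions, group labels, and the 0.8 review threshold (the informal "four-fifths rule" from US employment practice) are illustrative assumptions, not Fiddler's method.

```python
import numpy as np

def demographic_parity(decisions, groups):
    """Approval rate per group, plus the disparate-impact ratio
    (lowest group rate / highest group rate). A common rule of thumb,
    the 'four-fifths rule', flags ratios below 0.8 for review."""
    rates = {g: float(np.mean(decisions[groups == g]))
             for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())
    return {"rates": rates, "disparate_impact_ratio": ratio}

# Hypothetical credit decisions (1 = approved) split by a protected attribute.
decisions = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
groups    = np.array(["a"] * 8 + ["b"] * 8)
report = demographic_parity(decisions, groups)
print(report)   # group a approves 75%, group b 25% -> ratio ~0.33, flag it
```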
>> Yeah, I find the bias point really interesting, actually, because I hadn't really thought about the prejudices or subjectivities we might bring to the work, in terms of what we look at, what we ignore, what we process and what we don't. It's a really interesting point you just raised, so thank you for that. And then there's also the issue of data drift, right? It's like, where did it go? (laughing) >> Right. >> What are we doing here? What happened to it? So maybe you could talk about that a little bit, in terms of all this data coming in and corralling it, making sure it stays organized, in a form you can analyze and process and then glean insight from. >> Yeah, data drift is one of the main reasons why AI systems deteriorate in performance. For example, let's say I'm building a recommendation system that predicts the items you want to buy when you go to an e-commerce website. If I trained it on pre-COVID data, the user behavior was very different: the kind of items people were buying before February 2020 was probably much different from what they were buying after. So when you train your AI systems on older datasets, and the data has changed since, because an event like COVID-19 happened or some other seasonality kicked in, your AI systems are seeing a different distribution of data. For example, people shopping in March or April of last year were buying toilet paper and all kinds of things to stock up and be ready for lockdown, in amounts they weren't buying previously. So an AI-based inventory management system or e-commerce recommendation system would see data drift, because the buying patterns are different; the amount of toilet paper people are buying has completely shifted. The model may no longer predict as accurately as it did, right? So identifying this data drift and alerting your AI engineers, so they can be prepared, is very important. Otherwise, and this has actually happened, Instacart, the grocery delivery company, and another company, www.etsy.com, blogged about seeing their models go down in accuracy from 90% to 65% when this data shift happened during COVID-19. So you need the ability to continuously monitor for drift, so you catch these things early and save your business from losing on metrics such as the number of sales you're making, or the number of bad recommendations your systems are serving to your users.
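A standard way to catch this kind of shift before accuracy numbers are even available is to compare each input feature's live distribution against the distribution the model was trained on, for instance with a two-sample Kolmogorov-Smirnov test. The sketch below fakes a pre- and post-lockdown "units purchased" feature with Poisson draws; all distributions and thresholds are invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(train_col, live_col, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test: has this feature's live
    distribution shifted from what the model was trained on?"""
    stat, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha   # reject "same distribution" -> drift

# Hypothetical 'units purchased' feature, pre- and post-lockdown.
rng = np.random.default_rng(42)
train = rng.poisson(lam=2.0, size=5000)   # typical basket sizes at training time
live  = rng.poisson(lam=6.0, size=5000)   # stockpiling behavior in production
print("drift detected:", feature_drift(train, live))   # True
```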
>> So we've talked a lot about these various components of monitoring, all of which you do extremely well. I was reading a little bit about the company earlier, and we've talked about accountability, we've talked about fraud detection, we've talked about reliability. There was also a point about ethical considerations, and I was interested in hearing from you about that: why is it a pillar of your service, and what exactly does it point toward in terms of monitoring and what you can do? >> Right. I'll just go back to a famous quote from Marc Andreessen: he said a few years ago that software is eating the world. Now what's happening is AI is eating software. All the software we consume is becoming AI-based software, because at the end of the day some intelligence is being baked into it to predict more interesting things for you: instead of rule-based decisions, AI-based decisions. So it is very important that when we build this software, we use ethical practices. You need to know where you're collecting the data from; it can be very dangerous if you don't, and you can land in trouble. We have seen these incidents many times. For example, in 2019, when Apple and Goldman Sachs came out with a credit card, a lot of customers complained about gender bias in the credit limits the algorithm was setting. In the same household there could be a 10-times difference between the husband's credit limit and the wife's, even though they probably had similar salary ranges and similar FICO scores, right? So if you do not make sure that you're collecting data from the right sources, that your datasets are not imbalanced, that your algorithms are tested for bias before you deploy them, and that you're continuously monitoring them, and these are all the ethical practices, the responsible ways of building your AI, you can land in trouble. Your customers will complain, you'll lose your brand reputation, and at the end of the day, instead of adding value to your customers, you may actually be hurting them. And it matters more the higher the stakes are: when AI is used in criminal justice scenarios or for clinical diagnosis, ensuring the system makes unbiased decisions is very, very important. >> Well, before I let you go, I'd like you to touch on your AWS relationship: what was the genesis of it, and what are you working on together to provide this value to your customers? >> Absolutely. Following up on this ethical AI topic, Amazon as a company is interested in pursuing responsible AI, and they have a lot of AI products, so they are looking to foster a community and ecosystem of AI technologies. With that hypothesis, they invested in Fiddler last year, enabling us to develop this explainable AI and ethical AI technology. So we are working with the Alexa Fund and the AWS ecosystem on how effectively Fiddler can be delivered to other AWS customers, through their marketplace and the other channels where we can distribute the software. It's a great partnership. We are very excited about the opportunity to work with the Alexa Fund as well as the AWS ecosystem. It lets us enable many more customers than we could otherwise, so it's a win-win for both Amazon and Fiddler. >> Well, it sure is. And congratulations on that and on developing that partnership. I know it's working well for your clients, and it's working well for Fiddler AI, obviously, given the number of recognitions that have been coming your way. So Krishna, we wish you continued success, and thanks for the time here today on "theCUBE". >> Yep. Thank you so much, John. It was a pleasure talking to you today. >> I enjoyed it. Thank you. John Walls here, wrapping up our conversation with Fiddler AI's Krishna Gade, talking today about machine learning monitoring on the "AWS Startup Showcase". (upbeat pop music)

Published Date: May 18, 2021
