
Krishna Gade and Amit Paka, Fiddler.ai | AWS Startup Showcase 2021

(upbeat music)

>> Hello and welcome to theCUBE as we present the AWS Startup Showcase: The Next Big Thing in AI, Security & Life Sciences, featuring the hottest startups. Today's session is on the AI track, which is really the big one, the most important. And we have a featured company, Fiddler.ai. I'm your host, John Furrier with theCUBE, and we're joined by the founders: Krishna Gade, founder and CEO, and Amit Paka, founder and Chief Product Officer. Great to have the founders on. Gentlemen, thank you for coming on this CUBE segment for the AWS Startup Showcase.

>> Thanks, John...

>> Good to be here.

>> So the topic of this session is staying compliant and accelerating AI adoption with model performance monitoring. Bottom line, it's how to be innovative with AI and stay (John laughs) within the rules of the road, if you will. Super important topic. Everyone knows the benefits of what AI can do. Everyone sees machine learning being embedded in every single application. But the business drivers of compliance, and all kinds of new regulations, are popping up. So the question is, how do you stay compliant? Which is essentially, how do you not foreclose future opportunities? That's really the question on everyone's mind these days. So let's get into it. But before we start, let's take a minute to explain what you guys do. Krishna, we'll start with you first. What does Fiddler.ai do?

>> Absolutely, yeah. Fiddler is a model performance management platform company. We help enterprises and mid-market companies build responsible AI by helping them continuously monitor their AI, analyze it, and explain it, so that they know what's going on with their AI solutions at any given point in time, and they can ensure their businesses are intact and compliant with all the regulations in their industry.

>> Everyone thinks AI is a secret sauce, magic beans that will automatically just change the company. (John laughs) So it's almost like a hope. But the reality is there is some value there, but there's something that has to be done first. So let's get into what this model performance management is, because it's a concept that needs to be understood well, but you've also got to implement it properly. There are some foundational things: you've got to, you know, crawl before you walk and walk before you run kind of thing. So let's get into it. What is model performance management?

>> Yeah, that's a great question. So the core software artifact of an AI system is called an AI model. It essentially represents the patterns inside data in such a manner that it can actually predict the future. For example, let's say I'm trying to build an AI-based credit underwriting system. I would look at the historical loans data, you know, the good loans and the bad loans, and then I would build a model that captures those patterns, so that when a new customer comes in, I can predict how likely they are to default on the loan much more accurately. This helps me, as a bank or a lender, produce more good loans for my company and ensure my customers are getting the right customer service. The problem, though, is that this AI model is a black box. Unlike regular software, you cannot really open it up, read its code and its patterns, and see how it is doing.
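To make Krishna's underwriting example concrete, here is a minimal sketch of that kind of model: learn default patterns from historical loans, then score a new applicant. Everything here is synthetic and illustrative, so the feature names, numbers, and choice of scikit-learn model are assumptions, not Fiddler's stack.

```python
# A toy credit-underwriting model in the spirit of Krishna's example.
# All data is synthetic; feature names are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(650, 60, n),         # FICO-like score
    rng.normal(60_000, 20_000, n),  # annual income
    rng.normal(0.3, 0.1, n),        # debt-to-income ratio
])
# Synthetic ground truth: lower score/income and higher debt -> more defaults.
logit = -0.01 * (X[:, 0] - 650) - 0.00002 * (X[:, 1] - 60_000) + 8 * (X[:, 2] - 0.3)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = default

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

applicant = np.array([[700, 80_000, 0.25]])  # a new customer to score
print("predicted default risk:", model.predict_proba(applicant)[0, 1])
```

The trained `model` object is exactly the "black box" Krishna is describing: it scores applicants, but nothing in it tells you why.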
>> And so that's where the risks around AI models come along, and that's why you need a way to actually explain them. You need to understand them and you need to monitor them. And this is where a model performance management system like Fiddler can help you look into that black box, understand how it's doing, and monitor its predictions continuously, so that you know what these models are doing at any given point in time.

>> I'd love to get your thoughts on this from the product side, because, first of all, totally awesome concept. No one debates that. But now you've got more and more companies integrating with each other, more data being shared. Everyone knows what an app-sec review is, right? But now they're thinking about this concept of how you do a review of models. So understanding what's inside the black box is a huge thing. How do you do this? What does it mean?

>> Yeah, so typically it's just like software, where you would validate code by putting it through QA and analysis. In the case of models, you would probe the model at different granularities to really understand how it's behaving. This could be at the individual-prediction level, in the loans example Krishna just gave: why is my model saying high-risk for this particular loan? Or it might be explaining groups of loans: for example, why is my model making high-risk predictions for loans made in California, or for loans made to men versus loans made to women? And it could also be at the global level: what are the key data factors important to my model? So it's the ability to probe the model deeply, really opening up the black box, and then using that knowledge to explain how the model works to non-technical folks in compliance, or to regulators, who just want to ensure they know how the model works, to make sure it's keeping up with lending regulations, to ensure it's not biased, and so on. That's typically the way you would do it with a machine learning model.

>> Krishna, talk about the potential embarrassments that could happen. You just mentioned some of the use cases; you heard Amit say, you know, female, male. I mean, machines aren't that smart (John laughs)

>> Yeah.

>> if they don't have the data.

>> Yeah.

>> And data is fragmented. You've got silos with all kinds of challenges just on the data problem, right?

>> Yeah.

>> So never mind the machine learning problems. This is huge. I mean, the embarrassment opportunities.

>> Yeah.

>> And the risk management, whether it's a hack or something else. So you've got the public embarrassment when something really goes wrong, and then you've got the real business impact that could be damaging.

>> Absolutely. You know, AI has come forward a lot, right? You have lots of data these days, a lot of computing power, and amazing algorithms, so you can actually build really sophisticated models. Some of these models are known to beat humans at image recognition and whatnot. However, the problem is there are risks in using AI without properly testing it, without properly monitoring it. For example, a couple of years ago, Apple and Goldman Sachs launched a credit card, right? And for their users, they were using algorithms, presumably AI or machine learning algorithms, to set credit limits.
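Amit's three granularities, local, cohort, and global, can each be approximated with generic tooling. The sketch below shows a global probe (permutation importance) and a cohort probe (comparing mean predicted risk across a segment) on a toy model; it is a stand-in for the idea, not Fiddler's method.

```python
# Global and cohort probes on a toy risk model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))  # columns: [score, income, debt], standardized
y = (X[:, 0] - X[:, 2] + rng.normal(size=n) < 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# Global probe: which features matter most to the model overall?
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["score", "income", "debt"], imp.importances_mean):
    print(f"{name}: {score:.3f}")

# Cohort probe: is one segment scored as systematically riskier?
segment = X[:, 1] > 0  # e.g., higher-income applicants
risk = model.predict_proba(X)[:, 1]
print("mean risk, high-income cohort:", round(risk[segment].mean(), 3))
print("mean risk, low-income cohort :", round(risk[~segment].mean(), 3))
```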
>> What happened was that within the same household, husband and wife got a 10-times difference in the credit limits set for them. And some of these people had similar FICO scores and similar salary ranges. Some of them went online and complained about it, and that included the likes of Steve Wozniak as well.

>> Yeah.

>> So these kinds of stories are hugely embarrassing. You could lose customer trust overnight, right? And you have to do a lot of PR damage control. Eventually, there was a regulatory probe into Goldman Sachs. So there are these problems if you're not properly monitoring your AI systems and properly validating and testing them before you launch to users. And that is why tools like Fiddler are coming forward, so that enterprises can do this, so that they can ensure responsible AI for both their organization and their customers.

>> That's a great point. I want to get into what this means on the industry side, and then how it impacts customers, if you guys don't mind. For machine learning operations, a term, MLOps, has been coined in the industry, as you know: basically, operations around machine learning, which gets into the workflows and development life cycles. But ultimately, as you mentioned, there's this black box, this model being made, and a heavy reliance on data. So Amit, what does this mean? Because now it becomes operational with MLOps. There are now internal workflows and activities and roles and responsibilities. How is this changing organizations? Set aside the embarrassment, which is totally true. Now I've got an internal operational aspect, and there's dev involved. What's the issue?

>> Yeah, so if you look at the whole life cycle, machine learning ops in some ways mirrors the traditional life cycle of DevOps, but in some ways it introduces new complexities. Specifically, because the models can be black boxes, that's one thing to watch out for. And secondly, because these models are probabilistic artifacts, which means they are trained on data to capture relationships so that they can potentially make high-accuracy predictions. But the data that they see live might actually differ, and that might hurt their performance, especially because machine learning is applied to these high-ROI use cases. So the process of MLOps needs to change to incorporate the fact that machine learning models can be black boxes and machine learning models can decay. And the second part that's also relevant: because machine learning models can decay, you don't just create one model, you create multiple versions of these models. And so you have to constantly stay on top of how your model is deviating from actual reality, and keep bringing it back to that representation of reality.

>> So this is interesting, I like this. So now there's a model for the model. You guys have innovated on this model performance management idea. Can you explain the framework and how you guys solve that regulatory compliance piece? Because if you can be a model of the model, if you will, then you can have some stability around maintaining the code base, or the integrity, of the model. What do you guys offer? Take us through the framework, how it works, and then how it ties to that regulatory piece.
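A first-pass check for the kind of disparity in the Apple Card story can be as simple as comparing outcome rates across groups. The sketch below uses the common "four-fifths" heuristic as a flag threshold; that threshold, the group labels, and the numbers are illustrative assumptions, not anything Fiddler or a regulator prescribed in this case.

```python
# Back-of-the-envelope disparate-impact check across two groups.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical credit limits for two groups with otherwise similar profiles.
limits_a = rng.normal(20_000, 3_000, 500)
limits_b = rng.normal(12_000, 3_000, 500)

rate_a = (limits_a > 15_000).mean()  # share granted a "high" limit
rate_b = (limits_b > 15_000).mean()

ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}")
print(f"disparate impact ratio: {ratio:.2f} (four-fifths rule flags < 0.80)")
```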
>> So the MPM system, the model performance management system, really sits at the heart of the machine learning workflow: keeping track of the data flowing through your ML life cycle, keeping track of the models that are getting created and deployed and how they're performing, keeping track of all the parts of the models. It gives you a centralized way of managing all of this information in one place, right? It gives you oversight, from a compliance standpoint and from an operational standpoint, of what's going on with your models in production. Imagine you're a bank. You're probably creating hundreds of these models for a variety of use cases: credit risk, fraud, anti-money laundering. How are you going to know which models are actually working well? Which models are stale? Which models have expired? How do you know which models are underperforming? Are you getting alerts? This kind of governance, this performance management, is what the system offers. It's a visual interface, lots of dashboards, that developers, operations folks, and compliance folks can go and look into, and they get alerts when things go wrong with their models. In terms of how it can help meet compliance regulations: for example, let's say I'm starting to create a new credit risk model in a bank. Now, I'm innovating on different AI algorithms here, but before I even deploy that model, I have to validate it. I have to explain it and create a report that I can submit to my internal risk management team, which can then review it, understand all kinds of risks around it, and potentially share it with the audit team, and then keep a log of these reports so that when a regulator comes and visits, they can share them: these are the model reports, this is how the model was created. Fiddler helps them create these reports and keep all of them in one place. And then once the model is deployed, it can help them monitor the models continuously. So they don't just have one ad hoc report from when the model was created; they have continuous monitoring, a continuous dashboard of what the model was doing over the last however many months it's been running.

>> Historically, if you look at how AI applications are regulated in the U.S., the legacy regulations are the ones applied today, like the Equal Credit Opportunity Act, or the Fed guidelines like SR 11-7 that are applicable to all banks. So there is no purpose-built AI regulation. But the EU released a proposed regulation just about three weeks back that classifies risk within applications, and specifically, for high-risk applications, it proposes new oversight and adds mandates for explainability, helping teams understand how the models are working, and for monitoring, to ensure that when a model is trained for high accuracy, it maintains that. So those two mandatory needs for high-risk applications are the ones that are solved by Fiddler.

>> Yeah. You mentioned explainable AI. Could you just quickly define that for the audience? Because this is a trend we're seeing a lot more of. Take a minute to explain: what is explainable AI?

>> Yeah, as I said in the beginning, the AI model is the new software artifact being created. It is the core of an AI system.
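The oversight loop Krishna describes, many models, each with a baseline, and alerts when live performance slips, can be sketched in a few lines. The registry structure, metric, and tolerance below are made up for illustration; Fiddler's actual system is of course far richer.

```python
# A toy model registry with a performance-drop alert.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    baseline_auc: float  # validation AUC at deployment time
    live_auc: float      # AUC measured on recent production traffic

registry = [
    ModelRecord("credit_risk_v3", baseline_auc=0.86, live_auc=0.85),
    ModelRecord("fraud_v7",       baseline_auc=0.91, live_auc=0.74),
    ModelRecord("aml_v2",         baseline_auc=0.88, live_auc=0.87),
]

TOLERANCE = 0.05  # alert if live AUC falls this far below baseline
for m in registry:
    if m.baseline_auc - m.live_auc > TOLERANCE:
        print(f"ALERT: {m.name} dropped {m.baseline_auc} -> {m.live_auc}")
    else:
        print(f"ok:    {m.name}")
```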
It's what represents all the patterns in the data, encodes them, and then uses that knowledge to predict the future. Now, how it encodes all of these patterns is black magic, right?

>> Yeah.

>> You really don't know how the model is working. And so explainable AI is a set of technologies that can help you unlock that black box, quote-unquote debug that model, so that the model can be introspected, inspected, probed, whatever you want to call it, to understand how it works. For example, let's say I created an AI model that, again, predicts loan risk. Now let's say a person comes to my bank and applies for a $10,000 loan, and the bank, or the model, rejects the loan. Now, why did it do that, right? That's a question explainable AI can answer. It can answer: hey, this person's salary range is contributing 20% of the loan risk, or this person's previous debt is contributing 30% of the loan risk. So you get a detailed set of dashboards that take the composite loan risk and attribute it to all the inputs the model is observing. And so you now know how the model is treating each of these inputs, and you have an idea of how the person is being affected by this loan risk score. So now, as a human, as an underwriter or a lending officer, I have knowledge about how the model is working. I can then layer my human intuition on top of it. I can approve the model's decision sometimes, I can override it sometimes, and I can deliver that feedback to the data science team, the AI team, so they can actually make the model better over time. So unlocking the black box has several benefits throughout the life cycle.

>> That's awesome. Great definition, and great to get that on the record for the audience; we'll make a clip out of that too. One of the things, Amit, that you brought up, that I love and want to get into, is this MLOps impact. As we were just talking about earlier, debugging models in production: totally cool, relevant, unpacks the black box. But model decay, that's an interesting concept. Can you explain more? Because this, to me, is potentially a big blind spot for the industry. You know, I talked to Swami at Amazon, who runs their AI group, and they want to make AI easier and ML easier with SageMaker and other tools. But you can fall into a trap of thinking everything's one and done. It's iterative; you've got leverage here. You've got to keep track of the performance of the models, not just debug them. Are they actually working? Is there new data? This is a whole other practice. Could you explain this concept of model decay?

>> Yeah, so let's look at the lending example Krishna was just talking about. Say you expect your customers to be older applicants, so you have examples in your training set of historical loans made to people between the ages of, let's say, 40 and 70. So you train your model, and the model is trained to its highest accuracy at making loans to those types of applicants. But now let's say you introduce a new loan product targeting, say, younger, college-going folks. That model is not trained to work well in those kinds of scenarios. Or it could also happen that you get a lot more older people coming in to apply for these loans.
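Krishna's "salary contributes 20% of the loan risk" is a feature attribution. Production explainers typically compute Shapley-value attributions; the sketch below uses a much cruder occlusion-style stand-in, swapping each feature for its population mean and measuring the change in predicted risk, just to show the shape of the output. The model and feature names are invented for the sketch.

```python
# Occlusion-style per-prediction attribution on a toy risk model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))  # columns: [salary, debt, tenure], standardized
y = (0.5 * X[:, 0] - X[:, 1] + rng.normal(size=2000) < 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

applicant = np.array([[-1.0, 1.5, 0.2]])  # low salary, high debt
base_risk = model.predict_proba(applicant)[0, 1]
print(f"predicted risk: {base_risk:.3f}")

for i, name in enumerate(["salary", "debt", "tenure"]):
    perturbed = applicant.copy()
    perturbed[0, i] = X[:, i].mean()  # crudely "remove" one feature
    delta = base_risk - model.predict_proba(perturbed)[0, 1]
    print(f"{name}: {delta:+.3f} of the risk (occlusion estimate)")
```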
>> So the data the model sees live might not represent the data you trained it with. The model has recognized relationships in the training data, and it might not recognize relationships in this new data. So this is a constant, I would say an ongoing, challenge you face when you have a live model: ensuring that reality matches the representation of reality the model was trained on. And this is something unique to machine learning models. It has not historically been a problem in the world of DevOps, but it is a very key problem in MLOps.

>> This is a really great topic. And most people watching might know of some of these problems when they see the mainstream press talk about fairness, black versus white skin, and bias in algorithms. That's how the press tends to frame it (John laughs), those kinds of big, high-level topics. But what it really means in practice is the data: fairness and bias and skewing, and all kinds of new things that come up that the machines just can't handle. This is a big deal, and it's happening to every part of data in an organization. So, great problem statement. I guess the next segue would be: why Fiddler, why now? What are you guys doing? How are you solving these problems? Take us through some use cases. How do people engage with you guys? How do you solve the problem, and how do you see this evolving?

>> Great. So Fiddler is a purpose-built platform to solve for model explainability, model monitoring, and model bias detection. This is the only thing that we do, right? So we are super focused on building this tool to be useful across a variety of AI problems, from financial services to retail, to advertising, to human resources, healthcare, and so on and so forth. And we have found a lot of commonalities in how data scientists are solving these problems across these industries, and we've created a system that can be plugged into their workflows. For example, I could be a bank creating anti-money laundering models on a modern AI platform like TensorFlow. Or I could be a retail company building recommendation models in a library like PyTorch. You can bring all of those models under one umbrella using Fiddler. We can support a variety of heterogeneous types of models, and that is a very, very hard technical problem to solve: to be able to ingest and digest all these different model types and then provide a single pane of glass for how the model is performing, explaining the model, tracking the model life cycle throughout its existence, right? And so that is the value prop that Fiddler offers the MLOps team, so they can get this oversight. And it plugs in nicely with their MLOps, so they don't have to change anything, and it gives the additional benefit...

>> So you're basically creating faster outcomes, because the teams can work on real problems.

>> Right.

>> And not have to deal with the maintenance of model management.

>> Right.

>> Whether it's debugging or decay evaluations, right?

>> Right. We take care of all of their model operations from a monitoring standpoint: analysis, debuggability, alerting. So that they can just build the right kind of models for their customers. And we give them all the insights and intelligence to know the problems behind those models, behind their datasets.
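Amit's decay scenario, a model trained on one age distribution that then sees a different one live, is exactly what a data-drift metric catches. Below is a minimal population stability index (PSI) check; the 0.2 alert threshold is a common rule of thumb, not a Fiddler-specific setting, and the age distributions are invented.

```python
# Detecting train/live distribution drift with a population stability index.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a training-time sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    e, _ = np.histogram(expected, bins=edges)
    a, _ = np.histogram(actual, bins=edges)
    e = np.clip(e / e.sum(), 1e-6, None)   # avoid log(0)
    a = np.clip(a / a.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(4)
train_ages = rng.normal(55, 8, 10_000)  # trained mostly on ages ~40-70
live_ages = rng.normal(28, 5, 10_000)   # younger applicants arrive in production

score = psi(train_ages, live_ages)
print(f"PSI = {score:.2f} -> {'significant drift, review model' if score > 0.2 else 'stable'}")
```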
So that they can actually build more accurate models, more responsible models, for their customers.

>> Okay, Amit, give us the secret sauce. What's going on in the product? How does it all work? What's the secret sauce?

>> So there are three key pillars to the Fiddler product. One is, of course, that we leverage the latest research and productize it in amazing ways. When you explain models, you get the explanation within a second, and this activates new use cases like counterfactual analysis: you can not only get explanations for your loan, you can also see, hypothetically, what if the loan applicant had a higher income? What would the model do? So that's one part, productizing the latest research. The second part is infrastructure at scale. We are not just building something that works for SMBs; we are building something that works at enterprise scale, with billions and billions of predictions flowing through the system. We want to make sure we can handle as large a scale as seamlessly as possible. So we are activating that and making sure we are the best enterprise-grade product on the market. And thirdly, user experience, what you see when you use Fiddler. When we do demos for customers, what they really see is the product. They don't see the scale right then and there. They don't see the deep research. What they see are these beautiful experiences that are very intuitive, where we've merged explainability and monitoring and bias detection in a seamless way. So you get the most intuitive experiences, designed not just for the technical user but also for the non-technical users who are also stakeholders in AI.

>> So the scale thing is a huge point, by the way. I think that's something you see in successful companies. That's a differentiator, and frankly, it's the new sustainability. So it's new lock-in, if you will, not in a bad way but in a good way: you do a good job, you get scale, you get leverage. I want to point out and get your guys' thoughts on your approach on the framework, where you guys are centralized. As decentralization continues to be a wave, you guys are taking a much more centralized approach. Why is that? Take us through the decision on that.

>> Yeah. So, in terms of decentralization, running models on different containers and scoring them on any number of nodes absolutely makes sense from a deployment standpoint, from an inference standpoint. But when it comes to actually understanding how the models are working, visualizing them, monitoring them, knowing what's going on with the models, you need a centralized dashboard that a business user can actually use, or that a head of AI governance inside a bank can use: what are all the models that my team is shipping? Which models carry risk? How were these models performing last week? For this, you need a centralized repository. Otherwise, it will be very, very hard to track these models, right? Because the number of models is going to grow really, really fast. There are so many open-source libraries and open-source model architectures being produced, and so many data scientists coming out of grad schools, that the number of models in the enterprise is just going to grow many, many fold in the coming years.
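The counterfactual query Amit mentions, "what if this applicant had a higher income?", amounts to re-scoring a perturbed copy of the same input. The sketch below shows that loop on a toy model; the features and step sizes are illustrative, and this is not Fiddler's API.

```python
# Counterfactual what-if analysis: re-score an applicant under hypothetical changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 2))  # columns: [income, debt], standardized
y = (-X[:, 0] + X[:, 1] + rng.normal(size=2000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.5, 1.0]])  # below-average income, high debt
for bump in [0.0, 0.5, 1.0, 1.5]:    # hypothetical income increases (in std devs)
    what_if = applicant.copy()
    what_if[0, 0] += bump
    risk = model.predict_proba(what_if)[0, 1]
    print(f"income +{bump:.1f} sd -> default risk {risk:.2f}")
```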
Now, how are you going to track all of these things without a centralized platform? That's what we envisaged a few years ago: that every team will need an oversight tool like Fiddler, which can keep track of all of their models in one place. And that's what we are finding with our customers.

>> As long as you don't get in the way of them creating value, which is the goal, right?

>> Right.

>> Be frictionless, take away the friction.

>> Yeah.

>> And enable it. Love the concept. I think you guys are onto something big there: great product, great vision. The question I have for you, to kind of wrap things up here, is that this is all new, right? And new is all goodness, right? You've got scale in the cloud, all these new benefits. Again, more techies coming out of grad school in computer science and engineering, and data analysis in general is changing. And there are more people being empowered to contribute. How do you operationalize it? How do companies get this going? Because you've got a new thing happening. It's a new wave. But it's still the same game: make the business run better. So you've got to deploy something new. What's the operational playbook for companies to get started?

>> Absolutely. The first step is, if a company is trying to incorporate AI into their workflow... you know, most companies, I would say, are still in the early stages. A lot of enterprises are still developing these models; some of them may still be in labs. ML operationalization is starting to happen, and it probably started a year or two ago, right? So far, you could have AI models in labs and they're not going to hurt anyone. They're not going to hurt your business. They're not going to hurt your users. But once you operationalize them, you have to do it in a proper manner, in a responsible manner, in a trustworthy manner. And so we actually have a playbook for how to do this. How are you going to test these models? How are you going to analyze and validate them before they're deployed? How are you going to look into data bias, training set bias, or test set bias? And once they're deployed to production, are you tracking model performance over time? Are you tracking drift in your models, the decay part that we talked about? Do you have alerts in place for when model performance goes all over the place? If, all of a sudden, you get a lot of false positives in your fraud models, are you able to track that? Do you have the personnel in place, the data scientists, the ML engineers, the MLOps engineers, the governance teams if it's a regulated industry, to use these tools? And then tools like Fiddler will add value, will help them do their jobs, and will institutionalize this process of responsible AI, so that they're not only reaping the benefits of this great technology, there's no doubt about AI, right? It's going to be game changing, but they can also do it in a responsible and trustworthy manner.

>> Yeah, really get some wins, get some momentum, see it. This is the cloud way: get some value immediately and grow from there. I was talking to a friend the other day, Amit, about IT. The line was: I don't worry about IT anymore, it's all the cloud. I go, there's no longer IT. IT is dead. It's an AI department now.
(Amit laughs)

>> And this is kind of what you guys are getting at. Now it's data, now it's AI. It's like what IT used to be, enabling organizations to be successful. You guys are looking at it from the perspective of enabling success the same way. You provision (John laughs) algorithms instead of servers; they're algorithms now. This is the new model.

>> Yeah, we believe that all companies in the future, just as it happened with the wave of data, are going to be AI companies, right? So it's really just a matter of time. And the companies that are first movers in this are going to have a significant advantage. We're seeing that in banking already, where the banks that have made the leap into AI models are reaping the benefits, enabling a lot more loans at the same risk profile using deep learning models, as long as they're able to validate these models to ensure they're meeting the regulations. It's going to give significant advantages to a lot of companies as they move faster relative to others in the same industry.

>> Yeah, and quicker too. A final thought on the compliance side: you mentioned trust and transparency with the whole EU thing. Some are saying that to be a public company, you're going to have to have AI disclosure soon. You're going to have to have disclosure in your public statements around how you're explaining your AI. Again, fantasy today, but pretty plausible.

>> Right, absolutely. I mean, the reality today is that less than 10% of CEOs care about ethical AI, right? And that has to change, and I think it has to change for the better, because at the end of the day, if you're not using AI in a responsible and trustworthy manner, then there is regulation risk, compliance risk, operational business risk, and customer trust: losing customers' trust can be huge. So we want to provide that insurance, or, you know, a preventative mechanism, so that if you have these tools in place, you're less likely to get into those situations.

>> Awesome. Great, great conversation, Krishna, Amit. Thank you for sharing, both the founders of Fiddler.ai. Great company, on the right side of history in my opinion: the next big thing in AI. AI departments, AI compliance, AI reporting, (John laughs) explainable AI, ethical AI, all part of this next revolution. Gentlemen, thank you for joining us on theCUBE's AWS Startup Showcase.

>> Thanks for having us, John.

>> Okay, it's theCUBE coverage. Thank you for watching. (upbeat music)

Published: May 28, 2021

