
Ryan Welsh, Kyndi | CUBEConversation, October 2018


 

(dramatic music)
>> Welcome back, everyone, to theCUBE's headquarters in Palo Alto. I'm John Furrier, the host of theCUBE, founder of SiliconANGLE Media. We're here for a Cube Conversation with Ryan Welsh, who's the founder and CEO of Kyndi. It's a hot startup, it's a growing startup, doing really well in a hot area. It's in AI, it's where cloud computing, AI, and data all intersect around IoT. RPA's been a hot trend everyone's on, and they're in that as well, but it's really an interesting startup we want to profile here. Ryan, thanks for spending the time to come in and talk about the startup.
>> Yeah, thanks for having me.
>> So I love getting the startups in, because we get the real scoop, you know, what's real, what's not real, and practitioners also tell us the truth too, so we love to have founders in especially. So first, before we get started, tell 'em about the company. How old is your company, what's the core value proposition, what do you guys do?
>> Yeah, we're four years old, we were founded in June 2014. The first two, three years were really fundamental research and developing some new AI algorithms. What we focus on is building explainable AI products for government customers, pharmaceutical customers, and financial services customers. So our--
>> Let's explain the AI, what does that mean, like how do you explain AI? AI works, especially machine learning; well, AI doesn't really exist, 'cause it's really machine learning, and what is AI? So what is explainable AI?
>> Yeah, for us, it's the ability of a machine to communicate with the user in natural language. So there's kind of two aspects to explainability. Some of the deep learning folks are grabbing onto it, and really what they're talking about with explainability is algorithmic transparency, where they tell you how the algorithm works and tell you the parameters that are being used. So I explain the algorithm to you, and you can actually interrogate the system. For us, if our system's going to make a recommendation to you, you would want to know why it's making the recommendation, right? So for us, we're able to communicate with users in natural language, like it's another person, about why we make a recommendation, why we bring back a search result, why we do whatever it is as part of the business process.
>> And you mentioned deep learning. AI is obviously the buzzword everybody's talking about. I mean, I'm a big fan of AI in the sense that hyping it up means my kids know what it is, and everybody says, hey Dad, love machine learning. They love AI 'cause it's got a futuristic sound to it, but deep learning is real. Deep learning is about systems that learn, which means they need to know what's going on, right? So this learning loop, how does that work? Is that kind of where explainable AI needs to go? Is that where it's going, where if you can explain it and it's explainable, you can interrogate it, does it have a learning mechanism to it?
>> I think there are two major aspects of intelligence. There's the learning aspect, then there's the reasoning aspect. So if you look back through the history of AI, current machine learning is phenomenal at learning from data, like you're saying, learning the patterns in the data, but its reasoning is actually pretty weak. It can do statistical inferencing, but at the kind of advanced reasoning you see in the field of symbolic AI, inductive, deductive, abductive, analogical reasoning, it's terrible.
Whereas the symbolic approaches are phenomenal at reasoning but can't learn from data. So what is AI? One sub-group of that is machine learning, which can learn from data. Another sub-group is knowledge-based approaches, which can't learn from data but are phenomenal at reasoning. And really the trend that we're seeing at the cutting edge of AI is actually fusing those two paradigms together, which is effectively what we've done. You've seen DeepMind and Google Brain publish a paper on that earlier this year, you've seen Gary Marcus start to talk about that. So for us, explainability is kind of bringing together these two paradigms of AI, so that it can both learn from data, reason about data, and answer questions like, why are you giving me this recommendation?
>> Great explanation. And I want to just ask you, what's the impact of that? Because we've always talked, in the old search world, about meta-reasoning: you type in a misspelling on Google, and it says, there's the misspelling. Okay, I get that, but what if I misspell the word all the time, can't Google figure out that I really want that word? So reasoning has been a hard nut to crack, big time.
>> Well, you have to acquire the knowledge first to combine bits of knowledge to then reason, right? But the challenge is acquiring the knowledge. So you have all these systems, or knowledge-based approaches, and you have human beings on-site, professional services, building and managing your knowledge base. So that's been one of the hurdles for knowledge-based approaches. Now you have machine learning that can learn from data, and one of the problems with that is that you need a bunch of labeled data. So you're kind of trading off between handcrafted knowledge systems and handcrafted labeled data sets that you can then learn from. The benefit of fusing the two together is you can use machine learning approaches to acquire the knowledge, as opposed to hand-engineering it, and then you can put that in a form or a data model that you can then reason about. So really the benefit all comes down to the customer.
>> Awesome, great area, great concepts, we can go for an hour on this. I love this topic, I think it's super relevant, especially as cloud and automation become the key accelerant to a lot of new value. But let's get back to the company. So four years old, you've done some R&D, give me the stats. Where are you guys on the product side, product shipping, what's the value proposition, how do people engage with you? Just go down the list.
>> Yeah, yeah, shipping product to customers in pharmaceutical and government use cases. How people engage with us--
>> It's a software product?
>> It's a software product. Yeah, yeah. So we can deliver it; surprisingly, a lot of customers still want it on-prem. (both laugh) But we can deploy in the cloud as well. Typically, how we work with customers is we'll have close engagements for specific use cases within pharma or government or financial services, because it's a very broad platform and can be applied to any text-based use case. So we work with them closely, develop a use case, and they're able to sell that internally to champions.
>> And what problems are they solving, what specifically is the answer?
>> So for pharmaceutical companies, there's a lot of internal, historical clinical trial data; they'll develop memos, emails, notes as they bring a drug to market. How do you leverage that data now?
Instead of just storing it, how do I find new and innovative ways to use existing drugs that someone in another part of the organization could have developed? How do I manage the risks within that historical clinical trial data? Are there people that are doing research incorrectly? Are they reporting things incorrectly? You know, this entire process of both getting drugs through the pipeline and managing drugs as they move through the pipeline is a very manual process that revolves around text-based data sources. So how do you develop systems that amplify the productivity of the people that are developing the drugs, and then also the people that are managing the process?
>> And so what are you guys actually delivering as value? What's the value proposition for them?
>> Yeah, so--
>> Is it time?
>> It's saving time, but ultimately it's increasing their productivity in getting that work done. It's not replacing individuals, because there's so much work to do.
>> So all the... the loose stuff, like the papers, they can discover it faster, so they have more access to the data.
>> That's right.
>> Using your tools--
>> That's right.
>> And your software.
>> You can classify things in certain ways, saying there are data integrity issues, you need to look at this closer, but ultimately it's managing that data.
>> And that's where machine learning and some of these AI techniques matter, because you want to essentially throw software at that problem, accelerate that process of getting the data, bringing it in, assessing it.
>> Yeah, I mean, we spend most of our time looking for the information to then analyze. I mean, we spend 80% of our time doing it, right? It's like, are there ways to automate that process, so we can spend 80% of our time actually doing our job?
>> So Ryan, who's the customer out there? Someone's watching this video: what's their pain point, when do they call you, why do they call you? What are some of the signals that might tell someone, hey, I want to give these guys a call, I need this solution?
>> Yeah, a lot of it comes down to the amount of manual labor that you're doing. So we see a lot of big expenses around people, because you haven't traditionally been able to automate that process, or to use software in that process. So if you actually look at your income statement and you say, where am I spending most of my money, on tons of people, and I'm just throwing people at the problem, that's typically where people engage with us and say, how do I amplify the productivity of these people so I can get more out of them, hopefully make them more efficient?
>> And it's not so much to reduce the head count, it's more about increasing the automation to drive value and top-line revenue, because if you have to reproduce people all the time, why not replicate that in software? I think that's what I'm seeing, did I get that right?
>> That's exactly right. And the job consistently changes too, so it's not like this robotic process that you can just automate away. They're looking for certain things one day, then they're looking for other things the next day, but you need a capability that kind of matches their expertise.
>> You know, I was talking to a CIO the other day and we were talking about some of the things around reproducing things, replicating, and the notion of how things get scaled or moved along, growth, and the expression was "Throw a body at that." That's been IT. Outsource it.
So throwing a body, or throwing bodies at it, you know, throw that problem at me, that doesn't really end well. With software automation you can say, you don't just throw a body at it, you can say, if it can be automated, automate it.
>> Yeah, here's what I think most people miss: we are the bottleneck in the modern production process, because we can't read and understand information any faster than our parents or grandparents. And there aren't enough people on the planet to increase our capacity to push things through. So it's interesting to compare the modern knowledge economy to the manufacturing process: you have raw materials, manufacturing, and an end product. All these technologies that we have effectively stack information and raw materials at the front of it. We haven't actually automated that process.
>> You nailed it, and in fact one of the things I would say that would support that is, I interviewed Dave Redskin, who's a site reliability engineer at Google, and we were talking about the history of how Google scaled, and they have this whole new program around how to operate large data centers. He said years and years ago at Google, they looked at the growth and said, we're going to need a thousand people per data center, at least, if not more, so that means we need 15,000 people just to manage the servers. 'Cause what they did was they just looked at the operating cycle on provisioning servers, and essentially they automated it all away, and they created a lot of the tools that have now become Google Cloud. His point was that they now have one person, a site reliability engineer, who oversees the entire automation piece. This is where the action is. That concept of scaling down the people focus and scaling up the machine-based model. Is that kind of the trend that you guys are riding?
>> Absolutely. And I think that's why AI is hot right now. I mean, AI's been around since the late 40s, early 50s, but why I think it's different this time is, one, that it's starting to work, given the computational resources and the data that we have, but then also the economic need for it. Businesses are looking and saying, how I historically addressed these problems, I can no longer address them that way. I can't hire 15,000 people to run my data center. I need to now automate--
>> You got to get out front on it.
>> Yeah, I got to augment those people with better technologies to make them do the work better.
>> All right, so how much does the product cost, how do people engage with you guys, what's the engagement cost? Is it consulting where you come in, a POC where you ship 'em software, appliances, in the cloud, you mentioned on-premise.
>> Yeah, yeah.
>> So what does the product look like, how much does it cost?
>> Yeah, it costs a good chunk for folks, typically north of 500K. We do provide a lot of ROI around that, hence the ability to charge such a high price. Typically, how we push people through the cycle and how we actually engage with folks is, we do what we call a demonstration of value. There's a lot of different use cases, typically about 15, that any given Fortune 500 customer wants to address. We find the ones with the highest ROI, the ones with accessible data--
>> And they point at it--
>> The ones with budget.
>> They think, that's my problem, they point to it, right?
>> Yeah.
>> It's not hard to find.
>> We have to walk 'em through it a little bit.
Hopefully they've engaged with other vendors in the market that have been pushing AI solutions for the last few years, and have had some problems. So they're coached up on that, but we engage with a demonstration of value, we typically demonstrate that ROI, and then we transition that into a full operational deployment for them. If they have a private cloud, we can deploy on a private cloud. Typically we provide an appliance to government customers and other folks.
>> So is that a pre-sales activity, and you throw bodies at it, on your team? What's the engagement required, kind of like a... during that workshop, if you will, call it a workshop, you come in and you show some value. Kind of throw some people at it, right?
>> Yeah, you got--
>> You have SEs, and sales, all that.
>> Exactly right. Exactly right. So we'll have our salesperson managing the relationship, and an SE interacting with the data, working with the system, working closely with a contact on the customer's side.
>> And they typically go, this is amazing, let's get started. Do they break it up, or--
>> They break it up. It's an iterative process, 'cause a lot of times people don't fully grasp the power of these capabilities, so they'll come through and say, hey, can you just help us with this small aspect of it. And once you show 'em that I can manage all of your unstructured text data, I can turn it into this giant knowledge graph, on top of which I can build apps, then the light kind of goes off and they go, all right, I can see this being used in HR, marketing, legal, everywhere.
>> Yeah, I mean, you open up a whole new insight engine basically for 'em.
>> That's exactly right.
>> So, okay, competition. Who are you competing with? I mean, we've been covering UiPath, they just had an event in Miami. This is the hot area, who's competing with you, who are you up against, and how are you guys winning, why are you winning?
>> Yeah, we don't compete with the RPA folks. You know, there are interesting aspects there, and I think we'll chat about that. Mainly there are incumbents like IBM Watson that are out there; we think IBM has done phenomenal research over the last 60 years in the field of AI. But we do run into the IBMs, big consulting companies, a lot of the AI deployments that we see, candidly, are from all the big consulting shops.
>> And they're weak, or... they're weaker than yours.
>> Yeah, I would argue yes. (both laugh)
>> It's okay, get that out of your system.
>> I think one of the big challenges--
>> Is it because they just don't have the chops, or they're just recycling old tech into a--
>> We do have new, novel algorithms. I mean, what's interesting, and this has actually been quite hard for us, is coming out saying, we've taken a step beyond deep learning. We've taken a step beyond existing approaches. And really it's fusing those two paradigms of AI together, 'cause what I want to do is to be able to acquire the knowledge from the data, build a giant knowledge graph, and use that knowledge graph for different applications. So yeah, we deploy our systems way faster than everyone else out there, and our system's fully explainable.
>> Well, I mean, it's a good position to be in. At least from a marketing standpoint, you can have a leadership strategy, you don't need to differentiate in any way 'cause you're different, right, so...
>> Yeah, yeah.
>> Looks like you're in good shape. So easy marketing playbook there, just got to pound the pavement.
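To make concrete the pattern Ryan keeps coming back to (use a learning step to acquire knowledge from text, put it into a graph you can reason over, and keep enough provenance to answer "why?"), here is a minimal Python sketch. Kyndi's actual algorithms are not public, so everything below (the extraction stub, the single inference rule, the drug and disease names) is a hypothetical illustration of the general idea, not their implementation.

```python
# Minimal sketch of the hybrid pattern: a learning step acquires knowledge as
# (subject, relation, object) triples, a symbolic step reasons over them with
# an explicit rule, and every inferred fact keeps the justification needed to
# answer "why?" in plain language. All names here are hypothetical.

def extract_triples(documents):
    """Stand-in for an ML extraction model that turns text into triples."""
    # A real system would run a trained model over the documents; here we
    # pretend it already produced these two facts.
    return [
        ("drug_x", "inhibits", "protein_a"),
        ("protein_a", "implicated_in", "disease_y"),
    ]

class KnowledgeGraph:
    def __init__(self):
        self.facts = set()   # {(subject, relation, object), ...}
        self.why = {}        # fact -> human-readable justification

    def add(self, fact, because):
        self.facts.add(fact)
        self.why[fact] = because

    def infer(self):
        # One illustrative rule: inhibits(X, P) and implicated_in(P, D)
        # together imply candidate_for(X, D).
        for (x, r1, p) in list(self.facts):
            for (p2, r2, d) in list(self.facts):
                if r1 == "inhibits" and r2 == "implicated_in" and p == p2:
                    derived = (x, "candidate_for", d)
                    if derived not in self.facts:
                        self.add(derived, f"{x} inhibits {p}, and {p} is implicated in {d}")

    def explain(self, fact):
        return self.why.get(fact, "stated directly in the source documents")

kg = KnowledgeGraph()
for triple in extract_triples(["...clinical memos, emails, trial notes..."]):
    kg.add(triple, "extracted from an internal memo")
kg.infer()

recommendation = ("drug_x", "candidate_for", "disease_y")
print("Recommendation:", recommendation)
print("Why:", kg.explain(recommendation))
```

The explainability lives in the justification map: the statistical step fills the graph, the symbolic step derives new facts, and each derivation records the chain of facts behind it, which is what a system of this kind can render back to the user as a natural-language answer to "why are you giving me this recommendation?"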
RPA, you brought that up, and I think that's certainly been an area. You mentioned you guys kind of dip into that. I mean, that's not an area you would... well, you would fit well in there. So I want to get at this: you're not positioning yourself as an RPA solution, but you can solve RPA challenges, or those kinds of... Explain why you're not an RPA company but you will play in that space.
>> Here's what's so fascinating about this market: a lot of people in AI will knock the RPA guys as not being sophisticated approaches. But those guys are solving real business problems, providing real value to enterprises, and they are automating processes. Then you have sophisticated AI companies like ours that are solving really, really high-level white-collar worker tasks, and it's interesting, I feel like the AI community needs to kind of come down a step in sophistication, and the RPA companies are starting to come up a level in sophistication, and that's where you're starting to see that overlap. RPA companies are moving from RPA to intelligent process automation, where AI companies can actually add value in the analysis of unstructured text data, so around natural language processing and natural language understanding. RPA companies no longer need to look at specific structured aspects and forms, but can actually move into more sophisticated extraction of things from text data and other--
>> Well, I think it's not a mutually exclusive scenario anymore. As you mentioned earlier, there's a blending of the two, machine learning and symbolics, coming together in this new reasoning model. If you look at RPA, my view is it's kind of a dogmatic view of certain things. They're there to replace people, right? (laughs)
>> Yeah, totally.
>> We got robotics, we don't need people on the manufacturing line, we just put some robotics on, as an example. And AI's always been about getting the best out of the software and the data, so if you look at the new RPA, what we see that's relevant is, to your point, let's use machines to augment humans. That's a cultural thing. So I think you're right, I think it's coming together on new ground where most people who are succeeding in data, if you will, data-driven or AI, really have the philosophy that humans have to be getting the value. Like that SRE example at Google, so that's a fundamental thing.
>> Absolutely.
>> And okay, so what's next for you guys? Business is good?
>> Business is good.
>> Hiring, I'm imagining, with your kind of community--
>> Always hiring phenomenal AI and ML expertise, if you have it.
>> Good luck competing with Google.
>> Shoot us an email.
>> And Google will think that you're hiring 'em all. How do you handle that, I mean...
>> Yeah, I mean, they actually get to work on novel algorithms. What's fascinating is, a lot of the AI out there, I mean, you can date it all the way back to Rumelhart and Hinton's paper from 1986. So we've had backprop for a while. If you want to come work on new, novel algorithms that are really pushing the limit of what's possible--
>> Yeah, if you're bored at Google or Facebook, check these guys out.
>> Check us out.
>> Okay, so funding, you got plenty of money in the bank, strategic partners, what's the vision, what's your goal for the next 12 months or so, what's your objective?
>> Yeah, focusing big on the customers that we have now.
I'm always big on having customers, getting a viral factor within the B2B enterprise software space, getting customers that are screaming from the mountaintop that this is the best stuff ever, and then you can kind of take care of it.
>> How about biz dev, partnerships, are you guys looking at an ecosystem? Obviously a rising tide floats all boats. I mean, I can almost imagine we might salivate for some of the software you're talking about. Like, we have all this data here inside theCUBE, we have all kinds of processes that we're trying to streamline, I mean, we need more software. I mean, can I buy your stuff? We don't have half a million bucks, can I get a discount? I mean, how do I--
>> We'll see. We'll see how we end up.
>> I mean, is there like a biz dev partner program?
>> No, not...
>> Forgetting about theCUBE, we'd love it if that were so, but do you guys partner?
>> So, not yet in exposing APIs to third parties. I mean, I would love it if I had the balance sheet to go to market horizontally, but I don't. So it's go to market vertically, focus on specific solutions.
>> Industries.
>> Industries: pharma--
>> So you're sort of, you're industry-focused.
>> Government, financial services.
>> Those are the ones you've got right now.
>> They're the three.
>> For now.
>> Yep.
>> Okay, so once you nail an industry, you move on to the next one.
>> Yeah, then I would love to expose APIs for partners to work on this stuff. I mean, we see every day that someone wants to use certain engines that we have, or to embed them within applications.
>> Well, I mean, you've got a nice vertical strategy. You've knocked down maybe one or two verticals. Then you kind of lay down a foundational...
>> Yeah.
>> Yeah, development platform.
>> Yeah, that's right.
>> That's your strategy.
>> And we can be, I mean, at Kyndi I think we can be embedded in every application out there that's looking at unstructured data.
>> Which is also the mark of maturity; you've got to go where the customers are. And you know, the vision of having this global platform could be a great vision, but you've got to meet the customers where they are, and where they are now is, solve my vertical problem. (laughs)
>> Yeah, and for us, with new technologies, well, show me that they're better than other approaches. I can't go to market horizontally and just say, I have better AI than Google. Who's going to believe the Kyndi person?
>> Well, IBM's been trying to do it with Watson, and that's hard.
>> It's very hard.
>> And they end up specializing in industries. Well, Ryan, thanks for coming on theCUBE, appreciate it. Kyndi, great company, check 'em out, they're hiring. We're going to keep an eye on these guys, 'cause they're really hitting a part of the market that we think, here at theCUBE, is going to be super-powerful. It's really the intersection of a lot of major markets: cloud, AI, soon to be blockchain, supply chain, data center of course, storage, networking, this is IoT, security, and data at the center of all the action. New models can emerge with you guys in the center, so thanks for coming in and sharing your story, appreciate it.
>> Thank you very much.
>> I'm John Furrier, here in theCUBE studios in Palo Alto. Thanks for watching. (dramatic music)
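As a postscript on the intelligent process automation overlap discussed in the interview (the move from reading fixed form fields to pulling structured facts out of free text), here is a hedged sketch of what that extraction step can look like. The field names and regex patterns are stand-ins invented for illustration; a deployment of the kind Ryan describes would rely on trained natural language understanding models rather than hand-written patterns.

```python
# Hedged sketch of the RPA/NLU overlap: pull structured fields out of free
# text so a downstream automation can act on them. Field names and patterns
# are illustrative assumptions, not any vendor's actual feature set.
import re
from datetime import datetime

def extract_fields(text):
    fields = {}
    amount = re.search(r"\$\s?([\d,]+(?:\.\d{2})?)", text)
    if amount:
        fields["amount_usd"] = float(amount.group(1).replace(",", ""))
    due = re.search(r"due (?:by|on) (\w+ \d{1,2}, \d{4})", text, re.IGNORECASE)
    if due:
        fields["due_date"] = datetime.strptime(due.group(1), "%B %d, %Y").date().isoformat()
    vendor = re.search(r"from ([A-Z][\w&\- ]+?)(?:,|\.|$)", text)
    if vendor:
        fields["vendor"] = vendor.group(1).strip()
    return fields

email = "Please process the attached invoice from Acme Robotics, $12,450.00 due by January 15, 2019."
print(extract_fields(email))
# {'amount_usd': 12450.0, 'due_date': '2019-01-15', 'vendor': 'Acme Robotics'}
```

The point is only the shape of the step: free text goes in, a small structured record comes out, and an RPA-style workflow can route or act on that record without a human re-keying it.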

Published Date: Oct 17, 2018
