Closing Panel | Generative AI: Riding the Wave | AWS Startup Showcase S3 E1
(mellow music) >> Hello everyone, welcome to theCUBE's coverage of AWS Startup Showcase. This is the closing panel session on AI machine learning, the top startups generating generative AI on AWS. It's a great panel. This is going to be the experts talking about riding the wave in generative AI. We got Ankur Mehrotra, who's the director and general manager of AI and machine learning at AWS, and Clem Delangue, co-founder and CEO of Hugging Face, and Ori Goshen, who's the co-founder and CEO of AI21 Labs. Ori from Tel Aviv dialing in, and the rest coming in here on theCUBE. Appreciate you coming on for this closing session for the Startup Showcase. >> Thanks for having us. >> Thank you for having us. >> Thank you. >> I'm super excited to have you all on. Hugging Face was recently in the news with the AWS relationship, so congratulations. Open source, open science, really driving the machine learning. And we got the AI21 Labs access to the LLMs, generating huge scale live applications, commercial applications, coming to the market, all powered by AWS. So everyone, congratulations on all your success, and thank you for headlining this panel. Let's get right into it. AWS is powering this wave here. We're seeing a lot of push here from applications. Ankur, set the table for us on the AI machine learning. It's not new, it's been goin' on for a while. The past three years have seen significant advancements, but there's been a lot of work done in AI machine learning. Now it's released to the public. Everybody's super excited and now says, "Oh, the future's here!" It's kind of been going on for a while and baking. Now it's kind of coming out. What's your view here? Let's get it started. >> Yes, thank you. So, yeah, as you may be aware, Amazon has been investing in machine learning research and development for quite some time now. And we've used machine learning to innovate and improve user experiences across different Amazon products, whether it's Alexa or Amazon.com. But we've also brought in our expertise to extend what we are doing in the space and add more generative AI technology to our AWS products and services, starting with CodeWhisperer, which is an AWS service that we announced a few months ago, which is, you can think of it as a coding companion as a service, which uses generative AI models underneath. And so this is a service that customers who have no machine learning expertise can just use. And we also are talking to customers, and we see a lot of excitement about generative AI, and customers who want to build these models themselves, who have the talent and the expertise and resources. For them, AWS has a number of different options and capabilities they can leverage, such as our custom silicon, such as Trainium and Inferentia, as well as distributed machine learning capabilities that we offer as part of SageMaker, which is an end-to-end machine learning development service. At the same time, many of our customers tell us that they're interested in not training and building these generative AI models from scratch, given they can be expensive and can require specialized talent and skills to build. And so for those customers, we are also making it super easy to bring in existing generative AI models into their machine learning development environment within SageMaker for them to use. So we recently announced our partnership with Hugging Face, where we are making it super easy for customers to bring in those models into their SageMaker development environment for fine tuning and deployment. 
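For readers who want a sense of what that Hugging Face on SageMaker workflow can look like, here is a minimal sketch using the SageMaker Python SDK. The model ID, framework versions, and instance type below are illustrative assumptions, not an official AWS example; check the current SageMaker documentation for supported container versions.

```python
# Minimal sketch: deploying a Hugging Face Hub model to a SageMaker endpoint.
# Assumes this runs inside a SageMaker environment with an execution role.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

hub_config = {
    "HF_MODEL_ID": "google/flan-t5-base",   # assumed model choice for illustration
    "HF_TASK": "text2text-generation",
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",  # assumed; must match an available container
    pytorch_version="1.13",
    py_version="py39",
)

# Stand up a real-time inference endpoint (instance type is an assumption).
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.xlarge")

print(predictor.predict({"inputs": "Summarize: SageMaker lets teams fine-tune and deploy models."}))

predictor.delete_endpoint()  # clean up the endpoint when finished
```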
And then we are also partnering with other proprietary model providers such as AI21 and others, where we are making these generative AI models available within SageMaker for our customers to use. So our approach here is to really provide customers options and choices and help them accelerate their generative AI journey. >> Ankur, thank you for setting the table there. Clem and Ori, I want to get your take, because riding the wave is the theme of this session, and to me being in California, I imagine the big surf, the big waves, the big talent out there. This is like alpha geeks, alpha coders, developers are really leaning into this. You're seeing massive uptake from the smartest people. Whether they're young or around, they're coming in with their kind of surfboards, (chuckles) if you will. These early adopters, they've been on this for a while; now the waves are hitting. This is a big wave, everyone sees it. What are some of those early adopter devs doing? What are some of the use cases you're seeing right out of the gate? And what does this mean for the folks that are going to come in and get on this wave? Can you guys share your perspective on this? Because you're seeing the best talent now leaning into this. >> Yeah, absolutely. I mean, from Hugging Face's vantage point, it's not even a wave, it's a tidal wave, or maybe even the tide itself. Because actually what we are seeing is that AI and machine learning is not something that you add to your products. It's very much a new paradigm to do all technology. It's this idea that we had in the past 15, 20 years, one way to build software and to build technology, which was writing a million lines of code, very rule-based, and then you get your product. Now what we are seeing is that every single product, every single feature, every single company is starting to adopt AI to build the next generation of technology. And that works both to make the existing use cases better, if you think of search, if you think of social network, if you think of SaaS, but also it's creating completely new capabilities that weren't possible with the previous paradigm. Now AI can generate text, it can generate image, it can describe your image, it can do so many new things that weren't possible before. >> It's going to really make the developers really productive, right? I mean, you're seeing the developer uptake strong, right? >> Yes, we have over 15,000 companies using Hugging Face now, and it keeps accelerating. I really think that maybe in like three, five years, there's not going to be any company not using AI. It's going to be really kind of the default to build all technology. >> Ori, weigh in on this. APIs, the cloud. Now I'm a developer, I want to have live applications, I want the commercial applications on this. What's your take? Weigh in here. >> Yeah, first, I absolutely agree. I mean, we're in the midst of a technology shift here. I think not a lot of people realize how big this is going to be. Just the number of possibilities is endless, and I think hard to imagine. And I don't think it's just the use cases. I think we can think of it as two separate categories. We'll see companies and products enhancing their offerings with these new AI capabilities, but we'll also see new companies that are AI first, that kind of reimagine certain experiences. They build something that wasn't possible before. And that's why I think it's actually extremely exciting times. 
And maybe more philosophically, I think now these large language models and large transformer based models are helping us as people to express our thoughts and kind of making the bridge from our thinking to a creative digital asset at a speed we've never imagined before. I can write something down and get a piece of text, or an image, or a code. So I'll start by saying it's hard to imagine all the possibilities right now, but it's certainly big. And if I had to bet, I would say it's probably at least as big as the mobile revolution we've seen in the last 20 years. >> Yeah, this is the biggest. I mean, it's been compared to the Enlightenment Age. I saw the Wall Street Journal had a recent story on this. We've been saying that this is probably going to be bigger than all inflection points combined in the tech industry, given what transformation is coming. I guess I want to ask you guys, on the early adopters, we've been hearing on these interviews and throughout the industry that there's already a set of big companies, a set of companies out there that have a lot of data and they're already there, they're kind of tinkering. Kind of reminds me of the old hyperscaler days where they were building their own scale, and they're eatin' glass, spittin' nails out, you know, they're hardcore. Then you got everybody else kind of saying at the board level, "Hey team, how do I leverage this?" How do you see those two things coming together? You got the fast followers coming in behind the early adopters. What's it like for the second wave coming in? What are those conversations for those developers like? >> I mean, I think for me, the important switch for companies is to change their mindset from being kind of like a traditional software company to being an AI or machine learning company. And that means investing, hiring machine learning engineers, machine learning scientists, infrastructure team members who are working on how to put these models in production, team members who are able to optimize models, specialized models, customized models for the company's specific use cases. So it's really changing this mindset of how you build technology and optimize your company building around that. Things are moving so fast that I think now it's kind of like too late for low hanging fruits or small, small adjustments. I think it's important to realize that if you want to be good at that, and if you really want to surf this wave, you need massive investments. If there are like some surfers listening with this analogy of the wave, right, when there are waves, it's not enough just to stand and make a little bit of adjustments. You need to position yourself aggressively, paddle like crazy, and that's how you get into the waves. So that's what companies, in my opinion, need to do right now. >> Ori, what's your take on the generative models out there? We hear a lot about foundation models. What's your experience running end-to-end applications for large foundation models? Any insights you can share with the app developers out there who are looking to get in? >> Yeah, I think first of all, it starts to create an economy, where it probably doesn't make sense for every company to create their own foundation models. You can basically start by using an existing foundation model, either open source or a proprietary one, and start deploying it for your needs. And then comes the second round when you are starting the optimization process. 
You bootstrap, whether it's a demo, or a small feature, or introducing new capability within your product, and then start collecting data. That data, and particularly the human feedback data, helps you to constantly improve the model, so you create this data flywheel. And I think we're now entering an era where customers have a lot of different choice of how they want to start their generative AI endeavor. And it's a good thing that there's a variety of choices. And the really amazing thing here is that every industry, any company you speak with, it could be something very traditional like industrial or financial, medical, really any company. I think people now start to imagine what are the possibilities, and seriously think what's their strategy for adopting this generative AI technology. And I think in that sense, the foundation model actually enabled this to become scalable. So the barrier to entry became lower; now the adoption could actually accelerate. >> There's a lot of integration aspects here in this new wave that's a little bit different. Before it was like very monolithic, hardcore, very brittle. A lot more integration, you see a lot more data coming together. I have to ask you guys, as developers come in and grow, I mean, when I went to college and you were a software engineer, I mean, I got a degree in computer science, and software engineering, that's all you did was code, (chuckles) you coded. Now, isn't it like everyone's a machine learning engineer at this point? Because that will be ultimately the science. So, (chuckles) you got open source, you got open software, you got the communities. Swami called you guys the GitHub of machine learning, Hugging Face is the GitHub of machine learning, mainly because that's where people are going to code. So this is essentially, machine learning is computer science. What's your reaction to that? >> Yes, my co-founder Julien at Hugging Face has been saying this for quite a while now, for over three years, that actually software engineering as we know it today is a subset of machine learning, instead of the other way around. People would call us crazy a few years ago when we were saying that. But now we are realizing that you can actually code with machine learning. So machine learning is generating code. And we are starting to see that every software engineer can leverage machine learning through open models, through APIs, through different technology stacks. So yeah, it's not crazy anymore to think that maybe in a few years, there's going to be more people doing AI and machine learning. However you call it, right? Maybe you'll still call them software engineers, maybe you'll call them machine learning engineers. But there might be more of these people in a couple of years than there are software engineers today. >> I bring this up as more tongue in cheek as well, because Ankur, infrastructure as code is what made the cloud great, right? That's kind of the DevOps movement. But here the shift is so massive, there will be a game-changing philosophy around coding. Machine learning as code, you're starting to see CodeWhisperer, you guys have had coding companions for a while on AWS. So this is a paradigm shift. How is the cloud playing into this for you guys? Because to me, I've been riffing on some interviews where it's like, okay, you got the cloud going next level. This is an example of that, where there is a DevOps-like moment happening with machine learning, whether you call it coding or whatever. 
It's writing code on its own. Can you guys comment on what this means on top of the cloud? What comes out of the scale? What comes out of the benefit here? >> Absolutely, so- >> Well first- >> Oh, go ahead. >> Yeah, so I think as far as scale is concerned, I think customers are really relying on cloud to make sure that the applications that they build can scale along with the needs of their business. But there's another aspect to it, which is that until a few years ago, John, what we saw was that machine learning was a data scientist heavy activity. It was data scientists who were taking the data and training models. And then as machine learning found its way more and more into production and actual usage, we saw MLOps become a thing, and MLOps engineers become more involved in the process. And then we now are seeing, as machine learning is being used to solve more business critical problems, we're seeing even legal and compliance teams get involved. We are seeing business stakeholders more engaged. So, more and more machine learning is becoming an activity that's not just performed by data scientists, but is performed by a team and a group of people with different skills. And for them, we as AWS are focused on providing the best tools and services for these different personas to be able to do their job and really complete that end-to-end machine learning story. So that's where, whether it's tools related to MLOps or even for folks who cannot code or don't know any machine learning, for example, we launched SageMaker Canvas as a tool last year, which is a UI-based tool which data analysts and business analysts can use to build machine learning models. So overall, the spectrum in terms of persona and who can get involved in the machine learning process is expanding, and the cloud is playing a big role in that process. >> Ori, Clem, can you guys weigh in too? 'Cause this is just another abstraction layer of scale. What's it mean for you guys as you look forward to your customers and the use cases that you're enabling? >> Yes, I think what's important is that the AI companies and providers and the cloud kind of work together. That's how you make a seamless experience and you actually reduce the barrier to entry for this technology. So that's what we've been super happy to do with AWS for the past few years. We actually announced not too long ago that we are doubling down on our partnership with AWS. We're excited to have many, many customers on our shared product, the Hugging Face deep learning container on SageMaker. And we are working really closely with the Inferentia team and the Trainium team to release some more exciting stuff in the coming weeks and coming months. So I think when you have an ecosystem and a system where AWS and the AI providers, AI startups can work hand in hand, it's to the benefit of the customers and the companies, because it makes it orders of magnitude easier for them to adopt this new paradigm to build technology with AI. >> Ori, this is about scale for reasoning too. The data's out there and making sense out of it, making it reason, getting comprehension, having it make decisions is next, isn't it? And you need scale for that. >> Yes. Just a comment about the infrastructure side. So I think really the purpose is to streamline and make these technologies much more accessible. And I think we'll see, I predict that we'll see in the next few years more and more tooling that makes this technology much simpler to consume. 
And I think it plays a very important role. There's so many aspects, like monitoring the models and the kind of outputs they produce, and kind of containing and running them in a production environment. There's so much there to build, and the infrastructure side will play a very significant role. >> All right, that's awesome stuff. I'd love to change gears a little bit and get a little philosophy here around AI and how it's going to transform, if you guys don't mind. There's been a lot of conversations around, on theCUBE here as well as in some industry areas, where it's like, okay, all the heavy lifting is automated away with machine learning and AI, the complexity, there's some efficiencies, it's horizontal and scalable across all industries. Ankur, good point there. Everyone's going to use it for something. And a lot of stuff gets brought to the table with large language models and other things. But the key ingredient will be proprietary data or human input, or some sort of AI whisperer kind of role, or prompt engineering, people are saying. So with that being said, some are saying it's automating intelligence. And that creativity will be unleashed from this. If the heavy lifting goes away and AI can fill the void, that shifts the value to the intellect or the input. And so that means data's got to come together, interact, fuse, and understand each other. This is kind of new. I mean, old school AI was, okay, got a big model, I provisioned it for a long time, very expensive. Now it's all free flowing. Can you guys comment on where you see this going with this freeform, data flowing everywhere, heavy lifting, and then specialization? >> Yeah, I think- >> Go ahead. >> Yeah, I think, so what we are seeing with these large language models or generative models is that they're really good at creating stuff. But I think it's also important to recognize their limitations. They're not as good at reasoning and logic. And I think now we're seeing great enthusiasm, I think, which is justified. And the next phase would be how to make these systems more reliable. How to inject more reasoning capabilities into these models, or augment with other mechanisms that actually perform more reasoning so we can achieve more reliable results. And we can count on these models to perform for critical tasks, whether it's medical tasks, legal tasks. We really want to kind of offload a lot of the intelligence to these systems. And then we'll have to get back, we'll have to make sure these are reliable, we'll have to make sure we get some sort of explainability that we can understand the process behind the generated results that we received. So I think this is kind of the next phase of systems that are based on these generative models. >> Clem, what's your view on this? Obviously you're an open community, open source has been around, it's been a great track record, proven model. I'm assuming creativity's going to come out of the woodwork, and if we can automate open source contribution, and relationships, and onboarding more developers, there's going to be unleashing of creativity. >> Yes, it's been so exciting on the open source front. We all know BERT, BLOOM, GPT-J, T5, Stable Diffusion, the previous and current generation of open source models that are on Hugging Face. It has been accelerating in the past few months. So I'm super excited about ControlNet right now that is really having a lot of impact, which is kind of like a way to control the generation of images. 
Super excited about Flan-UL2, which is like a new model that has been recently released and is open source. So yeah, it's really fun to see the ecosystem coming together. Open source has been the basis for traditional software, with like open source programming languages, of course, but also all the great open source that we've gotten over the years. So we're happy to see that the same thing is happening for machine learning and AI, and hopefully can help a lot of companies reduce a little bit the barrier to entry. So yeah, it's going to be exciting to see how it evolves in the next few years in that respect. >> I think the developer productivity angle that's been talked about a lot in the industry will be accelerated significantly. I think security will be enhanced by this. I think in general, applications are going to transform at a radical rate, accelerated, incredible rate. So I think it's not a big wave, it's the water, right? I mean, (chuckles) it's the new thing. My final question for you guys, if you don't mind, I'd love to get each of you to answer the question I'm going to ask you, which is, a lot of conversations around data. Data infrastructure's obviously involved in this. And the common thread that I'm hearing is that every company that looks at this is asking themselves, if we don't rebuild our company, start thinking about rebuilding our business model around AI, we might be dinosaurs, we might be extinct. And it reminds me of that scene in Moneyball when, at the end, it's like, if we're not building the model around your model, every company will be out of business. What's your advice to companies out there that are having those kind of moments where it's like, okay, this is real, this is next gen, this is happening. I better start thinking and putting into motion plans to refactor my business, 'cause it's happening, business transformation is happening on the cloud. This kind of puts an exclamation point on, with the AI, as a next step function. Big increase in value. So it's an opportunity for leaders. Ankur, we'll start with you. What's your advice for folks out there thinking about this? Do they put their toe in the water? Do they jump right into the deep end? What's your advice? >> Yeah, John, so we talk to a lot of customers, and customers are excited about what's happening in the space, but they often ask us like, "Hey, where do we start?" So we always advise our customers to do a lot of proof of concepts, understand where they can drive the biggest ROI. And then also leverage existing tools and services to move fast and scale, and try and not reinvent the wheel where it doesn't need to be. That's basically our advice to customers. >> Get it. Ori, what's your advice to folks who are scratching their head going, "I better jump in here. "How do I get started?" What's your advice? >> So I actually think that you need to think about it really economically. Both on the opportunity side and the challenges. So there's a lot of opportunities for many companies to actually gain revenue upside by building these new generative features and capabilities. On the other hand, of course, incorporating these capabilities could probably affect the COGS. So I think we really need to think carefully about both of these sides, and also understand clearly if this is a project or an effort towards cost reduction, where the ROI is pretty clear, or a revenue amplifier, where there's, again, a lot of different opportunities. 
So I think once you think about this in a structured way and map the different initiatives, that's probably a good way to start, and a good way to start thinking about these endeavors. >> Awesome. Clem, what's your take on this? What's your advice for folks out there? >> Yes, all of this is very good advice already. Something that you said before, John, that I disagree with a little bit: a lot of people are talking about the data moat and proprietary data. Actually, when you look at some of the organizations that have been building the best models, they don't have specialized or unique access to data. So I'm not sure that's so important today. I think what's important for companies, and it's been the same for the previous generation of technology, is their ability to build better technology faster than others. And in this new paradigm, that means being able to build machine learning faster than others, and better. So that's how, in my opinion, you should approach this. And kind of like how can you evolve your company, your teams, your products, so that you are able in the long run to build machine learning better and faster than your competitors. And if you manage to put yourself in that situation, then that's when you'll be able to differentiate yourself to really kind of be impactful and get results. That's really hard to do. It's something really different, because machine learning and AI is a different paradigm than traditional software. So this is going to be challenging, but I think if you manage to nail that, then the future is going to be very interesting for your company. >> That's a great point. Thanks for calling that out. I think this all reminds me of the cloud days early on. If you went to the cloud early, you took advantage of it when the pandemic hit. If you weren't native in the cloud, you got hamstrung by that, you were flatfooted. So just get in there. (laughs) Get in the cloud, get into AI, you're going to be good. Thanks again for calling that out. Final parting comments, what's your most exciting thing going on right now for you guys? Ori, Clem, what's the most exciting thing on your plate right now that you'd like to share with folks? >> I mean, for me it's just the diversity of use cases and really creative ways of companies leveraging this technology. Every day I speak with about two, three customers, and I'm continuously being surprised by the creative ideas. And the future of what can be achieved here is really exciting. And also I'm amazed by the pace that things move in this industry. It's just, there's not a dull moment. So, definitely exciting times. >> Clem, what are you most excited about right now? >> For me, it's all the new open source models that have been released in the past few weeks, and that they'll keep being released in the next few weeks. I'm also super excited about more and more companies getting into this capability of chaining different models and different APIs. I think that's a very, very interesting development, because it creates new capabilities, new possibilities, new functionalities that weren't possible before. You can plug an API with an open source embedding model, with, like, a transcription model. So that's also very exciting. This capability of having more interoperable machine learning will also, I think, open a lot of interesting things in the future. >> Clem, congratulations on your success at Hugging Face. Please pass that on to your team. Ori, congratulations on your success, and continue on; it's just day one. 
I mean, it's just the beginning. It's not even scratching the surface. Ankur, I'll give you the last word. What are you excited for at AWS? More cloud goodness coming here with AI. Give you the final word. >> Yeah, so as both Clem and Ori said, I think the research in the space is moving really, really fast, so we are excited about that. But we are also excited to see the speed at which enterprises and other AWS customers are applying machine learning to solve real business problems, and the kind of results they're seeing. So when they come back to us and tell us the kind of improvement in their business metrics and overall customer experience that they're driving and they're seeing real business results, that's what keeps us going and inspires us to continue inventing on their behalf. >> Gentlemen, thank you so much for this awesome high impact panel. Ankur, Clem, Ori, congratulations on all your success. We'll see you around. Thanks for coming on. Generative AI, riding the wave, it's a tidal wave, it's the water, it's all happening. All great stuff. This is season three, episode one of AWS Startup Showcase closing panel. This is the AI ML episode, the top startups building generative AI on AWS. I'm John Furrier, your host. Thanks for watching. (mellow music)
Adam Wenchel, Arthur.ai | CUBE Conversation
(bright upbeat music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host. I'm excited to have Adam Wenchel who's the Co-Founder and CEO. Thanks for joining us today, appreciate it. >> Yeah, thanks for having me on, John, looking forward to the conversation. >> I got to say, it's been an exciting world in AI or artificial intelligence. Just an explosion of interest kind of in the mainstream with the language models, which people don't really get, but they're seeing the benefits of some of the hype around OpenAI. Which kind of wakes everyone up to, "Oh, I get it now." And then of course the pessimism comes in, all the skeptics are out there. But this breakthrough in the generative AI field is just awesome, it's really a shift, it's a wave. We've been calling it probably the biggest inflection point, bigger than the others combined, in terms of what this can do from a surge standpoint in applications. I mean, all aspects of what we used to know as the computing industry, software industry, hardware, are completely going to get turbocharged. So we're totally obviously bullish on this thing. So, this is really interesting. So my first question is, I got to ask you, what's your take? 'Cause you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. What's behind the explosion of interest there? What are you seeing right now? >> Yeah, I mean, it's amazing, so for starters, I've been in AI for over 20 years and just seeing this amount of excitement and the growth, and like you said, the inflection point we've hit in the last six months has just been amazing. And, you know, what we're seeing is like people are getting applications into production using LLMs. I mean, really all this excitement just started a few months ago, with ChatGPT and other breakthroughs, and the amount of activity and the amount of new systems that we're seeing hitting production already so soon after that is just unlike anything we've ever seen. So it's pretty awesome. And, you know, these language models are just, they could be applied in so many different business contexts, and the amount of value that's being created is, again, unprecedented compared to anything. >> Adam, you know, you've been in this for a while, so it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, former New York Times journalist, and he was talking about, there's been a lot of work done on ethics. So it's not like it's new. There's a lot of stuff that's been baking over many, many years and, you know, decades. So now everyone wakes up and sees it, so I think that is a key point I want to get into some of your observations. But before we get into it, I want you to explain for the folks watching, just so we can kind of get a definition on the record. What's an LLM, what's a foundation model and what's generative AI? Can you just quickly explain the three things there? >> Yeah, absolutely. So an LLM, or large language model, is, as the name would imply, a large language model that's been trained on a huge amount of data typically pulled from the internet. And it's a general purpose language model that can be built on top of for all sorts of different things, that includes traditional NLP tasks like document classification and sentiment understanding. 
But the thing that's gotten people really excited is it's used for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. And these aren't new techniques, they've been around for a while, but what's changed is just this new class of models that's based on new architectures. They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there's a number of companies that are making them accessible to everyone so that you can build on top of them. So that's the other big thing is, this kind of access to these models that can power generative tasks has been democratized in the last few months and it's just opening up all these new possibilities. And then the third one you mentioned, foundation models, is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. So we've actually seen this for a while in the computer vision world. So people have been building on top of computer vision models, pre-trained computer vision models, for a while for image classification, object detection, that's something we've had customers doing for three or four years already. And so, you know, like you said, there are antecedents to like, everything that's happened, it's not entirely new, but it does feel like a step change. >> Yeah, I did ask ChatGPT to give me a riveting introduction to you and it gave me an interesting read. If we have time, I'll read it. It's kind of, it's fun, you get a kick out of it. "Ladies and gentlemen, today we're privileged to have Adam Wenchel, Founder of Arthur, who's going to talk about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that, it's kind of funny. It was good. >> Okay. >> So anyway, this is what people see and this is why I think it's exciting 'cause I think people are going to start refactoring what they do. And I've been saying this on theCUBE now for about a couple of months: there's a scene in "Moneyball" where Billy Beane sits down with the Red Sox owner and the Red Sox owner says, "If people aren't rebuilding their teams on your model, they're going to be dinosaurs." And it reminds me of what's happening right now. And I think everyone that I talk to in the business sphere is looking at this and they're connecting the dots and just saying, if they don't rebuild their business with this new wave, they're going to be out of business, because there's so much efficiency, there's so much automation, not like DevOps automation, but like the generative tasks that will free up the intellect of people. Like just the simple things like do an intro or do this for me, write some code, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision, again, another huge field where 5G things are coming on, it's going to accelerate. What do you say to people when they kind of are leaning towards that, I need to rethink my business?
And the pace at which they're rolling out LLMs across their suite of products is just phenomenal to watch. And it's not just the big, the large tech companies as well, I mean, we're seeing the number of startups, like we get, every week a couple of new startups get in touch with us for help with their LLMs and you know, there's just a huge amount of venture capital flowing into it right now because everyone realizes the opportunities for transforming like legal and healthcare and content creation in all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. >> And the cloud scale, obviously horizontal scalability of the cloud brings us to another level. We've been seeing data infrastructure since the Hadoop days where big data was coined. Now you're seeing this kind of take fruit, now you have vertical specialization where data shines, large language models all of a set up perfectly for kind of this piece. And you know, as you mentioned, you've been doing it for a long time. Let's take a step back and I want to get into how you started the company, what drove you to start it? Because you know, as an entrepreneur you're probably saw this opportunity before other people like, "Hey, this is finally it, it's here." Can you share the origination story of what you guys came up with, how you started it, what was the motivation and take us through that origination story. >> Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA, but it wasn't really until 2015, 2016, my previous company was acquired by Capital One. Then I started working there and shortly after I joined, I was asked to start their AI team and scale it up. And for the first time I was actually doing it, had production models that we were working with, that was at scale, right? And so there was hundreds of millions of dollars of business revenue and certainly a big group of customers who were impacted by the way these models acted. And so it got me hyper-aware of these issues of when you get models into production, it, you know. So I think people who are earlier in the AI maturity look at that as a finish line, but it's really just the beginning and there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, if they're impacting people, making sure they're not biased. And so at that time, there really weren't any tools to exist to do this, there wasn't open source, there wasn't anything. And so after a few years there, I really started talking to other people in the industry and there was a really clear theme that this needed to be addressed. And so, I joined with my Co-Founder John Dickerson, who was on the faculty in University of Maryland and he'd been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. >> Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using you for now? Where's the action? What's the customers look like? What do prospects look like? Obviously you mentioned production, this has been the theme. It's not like people woke up one day and said, "Hey, I'm going to put stuff into production." This has kind of been happening. There's been companies that have been doing this at scale and then yet there's a whole follower model coming on mainstream enterprise and businesses. So there's kind of the early adopters are there now in production. 
What do you guys do? I mean, 'cause I think about it like just driving the car off the lot is not the end, you've got to manage operations. I mean, that's a big thing. So what do you guys do? Talk about the value proposition and how you guys make money? >> Yeah, so what we do is, listen, it starts when you go to validate ahead of deploying these models in production, right? So you want to make sure that if you're going to be upgrading a model, if you're going to be replacing one that's currently in production, that you've proven that it's going to perform well, that it's going to perform ethically and that you can explain what it's doing. And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model day-to-day, babysitting as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, that a programmer hasn't made a change in an upstream data system. You know, there's all sorts of reasons why the world changes and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say infrastructure monitoring, application monitoring, we bring that to your AI systems. And that way if there ever is an issue, it's not weeks or months till you find it; you find it before it has an effect on your P&L and your balance sheet, which, too often before they had tools like Arthur, was the way they were detected. >> You know, I was talking to Swami at Amazon, who I've known for a long time, 13 years, and been on theCUBE multiple times, and you know, I watched Amazon try to pick up on that with SageMaker about six years ago and so much has happened since then. And he and I were talking about this wave, and I kind of brought up this analogy to how when cloud started, it was, Hey, I don't need a data center. 'Cause when I did one of my startups at that time, my choice was to put a box in the colo and get all the configuration done before I could write one line of code. So the cloud became the benefit for that and you can stand up stuff quickly and then it grew from there. Here it's kind of the same dynamic, you don't want to have to provision a large language model or do all this heavy lifting. So you're seeing companies coming out there saying, you can get started faster, there's like a new way to get it going. So it's kind of like the same vibe of limiting that heavy lifting. 
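That "no heavy lifting" path can be as small as a single HTTP call to a hosted model rather than provisioning any serving infrastructure. Here is a minimal sketch following the public Hugging Face Inference API pattern; the model choice and the token environment variable are assumptions for illustration, not a recommendation of any particular provider.

```python
# Minimal sketch: calling a hosted model over an API instead of standing up
# your own inference stack. Assumes an API token is available in HF_API_TOKEN.
import os
import requests

API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-base"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

def generate(prompt: str) -> str:
    # Send the prompt to the hosted model and return the generated text.
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt})
    response.raise_for_status()
    return response.json()[0]["generated_text"]

if __name__ == "__main__":
    print(generate("Explain in one sentence why hosted model APIs lower the barrier to entry."))
```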
And so that's where we help people, we say go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. >> Let's frame the challenge for the companies now. Obviously there's the people who are doing large scale production and then you have companies maybe like as small as us who have large linguistic databases or transcripts for example, right? So what are customers doing and why are they deploying AI right now? And is it a speed game, is it a cost game? Why have some companies been able to deploy AI at such faster rates than others? And what's a best practice to onboard new customers? >> Yeah, absolutely. So I mean, we're seeing across a bunch of different verticals, there are leaders who have really kind of started to solve this puzzle about getting AI models into production quickly and being able to iterate on them quickly. And I think those are the ones that realize that imperative that you mentioned earlier about how transformational this technology is. And you know, a lot of times, even like the CEOs or the boards are very personally kind of driving this sense of urgency around it. And so, you know, that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that data scientists aren't encumbered by having to like hunt down data, get access to it. They're not encumbered by having to stand up new platforms every time they want to deploy an AI system, but that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into. Compared to five or six years ago when I was building at a top 10 US bank, at that point you really had to build almost everything yourself and that's not the case now. And so it's really nice to have things like, you know, you mentioned AWS SageMaker and a whole host of other tools that can really accelerate things. >> What's your profile customer? Is it someone who already has a team or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will, how do you align with that customer value proposition? Do people have to be built out with a team and in play or is it pre-production or can you start with people who are just getting going? >> Yeah, people do start using it pre-production for validation, but I think a lot of our customers do have a team going and they're either close to putting something into production or about to. It's everything from large enterprises that have really sort of complicated, they have dozens of models running all over doing all sorts of use cases, to tech startups that are very focused on a single problem, but that's like the lifeblood of the company and so they need to guarantee that it works well. And you know, we make it really easy to get started, especially if you're using one of the common model development platforms, you can just kind of turnkey get going and make sure that you have a nice feedback loop. So then when your models are out there, it's pointing out areas where it's performing well, areas where it's performing less well, giving you that feedback so that you can make improvements, whether it's in training data or featurization work or algorithm selection. 
You know, depending on the symptoms, there's a number of things you can do to increase performance over time, and we help guide people on that journey. 
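As a rough illustration of the kind of automated check that replaces the manual "babysitting" Adam describes, here is a generic data-drift test on a single feature. This is an illustrative stand-in, not Arthur's actual product or API; the choice of statistical test, the threshold, and the simulated data are all assumptions.

```python
# Generic sketch: compare a production feature window against its training
# baseline and flag drift. Thresholds and the KS test are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(baseline: np.ndarray, live: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Return True if the live data looks significantly different from the baseline."""
    statistic, p_value = ks_2samp(baseline, live)
    drifted = p_value < p_threshold
    if drifted:
        # In a real system this would fire an alert into an on-call workflow.
        print(f"Drift alert: KS statistic={statistic:.3f}, p={p_value:.4f}")
    return drifted

# Example: baseline drawn from training data, live window from production traffic.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_window = rng.normal(loc=0.4, scale=1.0, size=1_000)  # simulated shifted traffic
check_feature_drift(baseline, live_window)
```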
And so when you look at the scale, what advice would you give folks out there right now who want to tee it up and what's your secret sauce that will help them get there? >> Yeah, I think that the secret to teeing it up is just dive in and start like the, I think these are, there's not really a secret. I think it's amazing how accessible these are. I mean, there's all sorts of ways to access LLMs either via either API access or downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way that we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things. Here's something you want to take a look at, here's some potential remedies for it. We can help guide you through that. And that way when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see and B, you're able to really kind of make it better in a much faster way with that tight feedback loop. >> It's interesting, we've been kind of riffing on this supercloud idea because it was just different name than multicloud and you see apps like Snowflake built on top of AWS without even spending any CapEx, you just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call AIOps because I think there's a different distinction. If you, MLOps and AIOps seem a little bit old, almost a few years back, how do you view that because everyone's is like, "Is this AIOps?" And like, "No, not kind of, but not really." How would you, you know, when someone says, just shoots off the hip, "Hey Adam, aren't you doing AIOps?" Do you say, yes we are, do you say, yes, but we do differently because it's doesn't seem like it's the same old AIOps. What's your- >> Yeah, it's a good question. AIOps has been a term that was co-opted for other things and MLOps also has people have used it for different meanings. So I like the term just AI infrastructure, I think it kind of like describes it really well and succinctly. >> But you guys are doing the ops. I mean that's the kind of ironic thing, it's like the next level, it's like NextGen ops, but it's not, you don't want to be put in that bucket. >> Yeah, no, it's very operationally focused platform that we have, I mean, it fires alerts, people can action off them. If you're familiar with like the way people run security operations centers or network operations centers, we do that for data science, right? So think of it as a DSOC, a Data Science Operations Center where all your models, you might have hundreds of models running across your organization, you may have five, but as problems are detected, alerts can be fired and you can actually work the case, make sure they're resolved, escalate them as necessary. And so there is a very strong operational aspect to it, you're right. >> You know, one of the things I think is interesting is, is that, if you don't mind commenting on it, is that the aspect of scale is huge and it feels like that was made up and now you have scale and production. What's your reaction to that when people say, how does scale impact this? >> Yeah, scale is huge for some of, you know, I think, I think look, the highest leverage business areas to apply these to, are generally going to be the ones at the biggest scale, right? And I think that's one of the advantages we have. 
Several of us come from enterprise backgrounds and we're used to doing things enterprise grade at scale and so, you know, we're seeing more and more companies, I think they started out deploying AI and sort of, you know, important but not necessarily like the crown jewel area of their business, but now they're deploying AI right in the heart of things and yeah, the scale that some of our companies are operating at is pretty impressive. >> John: Well, super exciting, great to have you on and congratulations. I got a final question for you, just random. What are you most excited about right now? Because I mean, you got to be pretty pumped right now with the way the world is going and again, I think this is just the beginning. What's your personal view? How do you feel right now? >> Yeah, the thing I'm really excited about for the next couple years now, you touched on it a little bit earlier, but is a sort of convergence of AI and AI systems with sort of turning into AI native businesses. And so, as you sort of do more, get good further along this transformation curve with AI, it turns out that like the better the performance of your AI systems, the better the performance of your business. Because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things that we work a lot with our customers is to do is just understand, you know, take these really esoteric data science notions and performance and tie them to all their business KPIs so that way you really are, it's kind of like the operating system for running your AI native business. And we're starting to see more and more companies get farther along that maturity curve and starting to think that way, which is really exciting. >> I love the AI native. I haven't heard any startup yet say AI first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks, we're an AI first company, it's going to be great run. Adam, congratulations on your success to you and the team. Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and ask you, have an interview directly. >> That sounds good, I'm going to go hang out on the beach, right? So, sounds good. >> Thanks for coming on, really appreciate the conversation. Super exciting, really important area and you guys doing great work. Thanks for coming on. >> Adam: Yeah, thanks John. >> Again, this is Cube Conversation. I'm John Furrier here in Palo Alto, AI going next gen. This is legit, this is going to a whole nother level that's going to open up huge opportunities for startups, that's going to use opportunities for investors and the value to the users and the experience will come in, in ways I think no one will ever see. So keep an eye out for more coverage on siliconangle.com and theCUBE.net, thanks for watching. (bright upbeat music)
Casimir Wierzynski, Intel | RSAC USA 2020
>> Live from San Francisco, it's theCUBE, covering RSA Conference 2020 San Francisco, brought to you by SiliconANGLE Media.
>> Hello and welcome back to theCUBE's coverage here in San Francisco at the Moscone Center for RSA Conference 2020, all the coverage for three days. I'm John, host of theCUBE. You know, as cybersecurity goes to the next level, and as cloud computing continues to go more enterprise, large-scale AI and machine learning have become critical to managing the data. We've got a great guest here from Intel, Casimir Wierzynski, senior director in the AI products group at Intel. Thanks for joining us. >> Oh, thanks. >> So data is a huge, huge problem when it comes down to cybersecurity, and generally across the enterprise. Now it's well known, well documented, but you're here giving a talk about machine learning privacy, because everyone wants to know who the bad guys are. So do the bad guys deserve privacy? Okay, we'll get to that later. But first, tell us about the talk you're giving here at RSA. >> We'll get into the other stuff later. So, thanks for having me. I gave a talk on a whole suite of exciting new techniques known as privacy-preserving machine learning. This is a set of machine learning techniques that help people realize the promise of AI and machine learning. But we know that machine learning systems rely on underlying data to train, so how can you also respect the privacy and the security of the underlying data while still being able to train and use AI systems?
>> And just to take a step back, where are you within the Intel sphere? Because Intel obviously does chips and powers all the enterprises at large scale. You're on the software side, the AI group? Explain where you are. >> So I'm in the AI group at Intel, but I have the most fun job at Intel, I think, because I work in the CTO office of the AI group, which means I get to think about the more futuristic questions: where is AI going, and what are some of the major inflection points? One of the ones we've been looking at for the last couple of years is this kind of collision course between the need for data to train machine learning systems and unlock all the power of AI, and the need to keep data private. >> Yeah, and I think that's generally consistent with our editorial and our research, which is the confluence of cloud native, large-scale cloud computing, multi-cloud, and AI or machine learning all coming together. Those are multigenerational technologies. So this wave is big. >> That's right. And I think one thing that's maybe underappreciated about machine learning, especially in production, is that it's almost always a multi-party interaction. You'll have, let's say, one party that owns data, another party may own a model, and they're running a system on somebody else's hardware. So because of the nature of digital data, if you want to share things, you have to worry about what other parties may be doing with those data.
>> You bring up a great point I want to get your reaction and thoughts on: it's multidisciplinary. Now, as people are breaking into the field, people are really excited about AI. You talk to someone who's 12 years old, they see a Tesla, they see software, they see all this cool stuff. So machine learning, which powers AI, is very enticing to anyone with a kind of technical or nerdy background, and so it's attracting a lot of young people. It's not just getting a computer science degree; there's so much more to AI. So talk about what someone needs to be successful and to engage in the AI wave. You don't need to just be a coder; could you be outside that scope, because it's an integrated model, or is it? >> Very much so. My group at Intel is very heterogeneous. So we've got, you know, mathematicians, but I also have coders. I have an attorney who's a public policy expert. I have cryptographers. I think there are a number of ways to get involved; I mean, my background is actually in neuroscience. >> So it makes sense: stitch it all together. Well, societal change has to be part of it; the algorithms need training, they need to learn. So having the most diverse input seems to me to be a posture the industry is taking. Is that right? Is that the right way to think about it? How should we be thinking about how to make AI highly effective versus super scary? >> Right. Well, one of the efforts we're making, and part of my message here, is that to make these systems better, generally more data helps, right? If you can expand the availability of data, that's always going to help machine learning systems. And so we're trying to unlock data silos that may exist across countries, across organizations. So for example, in healthcare you could have multiple hospitals that have patient data. If somehow they could pool all their data together, you would get much more effective models and much better patient outcomes, but for very good privacy reasons, they're not allowed to do that. So there are these interesting ideas like federated learning, where you could somehow decentralize the machine learning process so that you can still respect privacy but get the statistical power.
>> Let's double down on that for a second, because I want to explore it. I think this is the most important story that's not being talked about; it's nuanced a little bit. You know, in healthcare you had HIPAA, which was built for all the right reasons back then, but now, when you start to get into much more cross-pollination of data, you need to manage the benefit of why it existed together with privacy. So encryption, homomorphic encryption for instance, data in use: when data is being used, not just in flight or at rest, you now have the three states of data, and this is causing a new formula for encryption and privacy. What is some of the state-of-the-art thinking around how to make data open and usable, but still secure, encrypted, or protected? >> That's right. So it's kind of this paradox of how do I use the data but not actually get the data. You mentioned homomorphic encryption. This is one of the most leading-edge techniques in this area, where there are ways of doing math on the data while it stays encrypted, and the answer that comes out is still encrypted, and it's only the actual owner of the data who can reveal the answer. So it seems like magic, but with this capability you enable all kinds of new use cases that wouldn't be possible before, where third parties can act on your sensitive data without ever being exposed to it in any way. >> So discovery and leverage of the data is what you're getting at in terms of the benefits, I mean use cases. So stay on that, the use cases of this new idea: discovery and usage. How would that work? >> Well, when we talked about federated learning and pooling across hospitals, that's one set of techniques. Homomorphic encryption would be, for example: suppose that some AI system has already been trained, but I'd like to use it on sensitive data. How do I do that in such a way that the third-party service isn't exposed to it? You know, this is what makes machine learning different from other types of data security problems: with machine learning, you have to operate on the data. You're not just storing it, you're not just moving it around. So how do you do that? Yeah, and this is a key thing.
>> So I've got to ask you the question, because one of the real interesting trade-offs these days is that AI and machine learning can create great benefits, but people also have the knee-jerk reaction of, oh my God, it's scary, my privacy. You saw it on the front line with Amazon and facial recognition: oh my God, it's evil. So there are a lot of scared people who might not be informed. How should companies invest in machine learning and AI, in your opinion? How should they think about the next 10-year trajectory, starting today: how to invest, the right way to think about it, how to build a team? What are your thoughts on that? Because this is the number one challenge right now. >> Yeah. Well, I think some of the scary issues you mentioned are legitimately scary. They're going to have to be resolved not by companies, but probably by society and our delegates, so lawmakers and regulators. Part of what we're trying to do at the technical level is give society and regulators a more flexible set of tools around which you can slice and dice data privacy and so on, so that it's not just all or none. I think that's kind of my main goal as an organization. And again, there's this idea of having a heterogeneous set of talents: you're going to need policy experts and applied mathematicians and linguists and, you know, neuroscientists. >> So diversity is a huge opportunity? >> Very much so. Not just diversity of people, but diverse data, diverse mindsets, diverse approaches to problems that are hard but very promising. >> Okay, let's flip to the other side of the spectrum: what should people not do? What's a failure formula, one-dimensional thinking? What's an identification of something that may not go the right way? >> Well, you know, one distinguishing feature of the machine learning field, and it's kind of a cultural thing, but it's given the field a lot of traction, is that it's fundamentally been a very open culture. There's a lot of sharing of methods; it's a very collaborative academic field. So I think within a company, you want to be part of that culture too. Every company is going to have its secret sauce, things it needs to keep proprietary, but it's very important for companies to engage this broader community of researchers.
>> So you're saying, and maybe I'm putting words in your mouth, but I'll just say it and you can agree or disagree: to be successful, you've got to be open. If you're data-driven, you've got to be open. >> That's right. >> More data equals better models: more data, more approaches to the data, more eyes on the problem. >> But you know, you can still definitely keep your proprietary pieces. It kind of forces organizations to think about which core strengths they really want to keep proprietary, and then open up the other things. >> All right, so what's the coolest thing you're working on right now? What are some of the fun projects you guys are digging into? You've got a great job; it sounds like you're excited about it. I mean, AI, I think, is the most exciting thing. I wish I could be 20 again in computer science or whatever field, because I think AI is a multigenerational thing. Super exciting as a technical person. But what are you working on that you're excited about? >> So I'm very excited about taking some of these things like homomorphic encryption and making them much more available to developers, to data scientists, because it's asking too much for a data scientist to also be a kind of post-quantum crypto expert. So we've written an open source package called HE Transformer, HE for homomorphic encryption. It allows data scientists to do their normal data science in Python or whatever they're used to, but then they flick a switch and suddenly their model is able to run on encrypted data. >> Can you take a minute to explain why the homomorphic encryption trend is really important right now? Give a peek into the why, because this is something that is now becoming much more real. The data-in-use kind of philosophy: why now? Why is it so important right now? >> Well, I think because of the power of cloud, and the fact that data are collected in one place and possibly processed in another place, your data are moving around and they're being operated on. So if you can know that, as long as my data are moving around and people are operating on them, they're staying encrypted the whole time, not just in transit, that gives a much higher level of comfort, and the applications are probably going to be onboarded. >> I mean, you can almost imagine new applications will emerge from this: application discovery, cataloging, and API integration points. You can almost imagine the trust will go up. >> And you can also end up with these different business models, where you have entities that compete in some spheres but may decide to collaborate in other ways. So for example, banks could compete on lending and so on, their normal activities, but in terms of fraud detection, they may decide, hey, maybe we can form some alliance where we cross-check with each other's models on certain transactions, but I'm not actually giving you any transaction data. So that's maybe okay, right?
>> So that's very powerful; it's really interesting. I mean, I think the compute power has helped, because the overhead used to be prohibitive. People were working on this in the eighties and nineties, I remember. >> Yes. >> But the overhead was just so expensive. >> That's right. Yeah, so you bring up a great point here, and this is one of the areas where Intel is really pushing, my team is pushing. These techniques have been around for 20 years. Initially they were maybe 10 million times slower than real time, so people thought, okay, this is interesting mathematically, but not practical. There have been massive improvements just in the last two years, where now things are running maybe a hundred times slower than unencrypted math. But still, that means something that would take 50 milliseconds now takes five seconds, which is still not unreasonable for many use cases. >> You're my new friend now, my best friend on AI. And I've got a business to run, and I'm going to ask you, what should I do? I really want to leverage machine learning and AI in my business. Okay, I'm investing in more tech, I've got cloud, and I'm building my own software. How should I be investing? How do I build out a great machine learning and AI team, and then ultimately capabilities? How should I do that? >> Okay, well, I would start with a team that has a kind of research mindset, not because you want them to come in and write research papers, but because the path from research into production is so incredibly short in AI. You have things that are papers one year and they're going into production at Google Search within a year. So you kind of need that research mindset. I think another thing is that you're going to require a very close collaboration between this data science team and your CIO and systems people, because a lot of the challenges around AI are not just coming up with the model, but how you actually scale it up and go to production with it. >> It's interesting about the research; I totally agree with you. I think you can almost call that product management, kind of newfangled product management, because if it's applied research, you kind of have your eye on a market generally, but you're not making hardcore product decisions. You're researching it, you're writing it, so you've got to do the homework: dream it before you can build it. >> Well, I'm just saying that the field is moving so fast that you're going to need people on your team who can consume the latest papers. >> Oh, you're saying consume the research as well. >> Yeah, and if they can contribute, that's great too. I think this is this kind of open culture where people consume, they find some improvement, and they can then publish it at next year's conference. It's just been this incredibly healthy ecosystem. >> Software acceleration's a big part of the cloud. Awesome. Well, I really appreciate your insight. This is a great topic; I could go for an hour. It's one of my favorite things. I love the homomorphic encryption; I think that's going to be a game changer, and I think we're going to start to see some interesting discoveries there. Give a quick plug for Intel. What are you working on now? What are you looking to do? What are your plans: hiring, doing more research, what's going on? >> Well, we think that this intersection of privacy and AI is at the core of Intel's data-centric mission. So we're trying to figure out whatever it takes to enable the community, whether it's optimized software libraries, custom silicon, or even services, where we really want to listen to customers and figure out what they need. >> Moore's law is always going to be around; the next wave is going to have more compute. It's never going away. More storage, more data; it just gets better and better. Thanks for coming on, Casimir. >> Thanks for having me. >> We've got Intel inside theCUBE, breaking down the future of AI. Really exciting stuff on the technology and security front. That's all going to happen at large scale. Of course, it's theCUBE bringing you all the data here at RSA. I'm John Furrier. Thanks for watching.
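The federated learning idea Casimir describes, hospitals that gain statistical power without pooling patient records, can be made concrete with a small sketch of federated averaging. The hospital datasets, the linear model, and the round counts below are hypothetical simplifications for illustration in Python with NumPy; this is not Intel's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each hospital refines the shared model on its own data; only weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

# Hypothetical local datasets that never leave each hospital.
true_w = np.array([2.0, -1.0])
hospitals = []
for n in (200, 350, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    hospitals.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                          # a few federated rounds
    local_ws, sizes = [], []
    for X, y in hospitals:
        local_ws.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # Server aggregates: weighted average of the locally updated models (FedAvg).
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("federated estimate:", global_w)       # approaches [2.0, -1.0]
```

Only the locally updated weights travel to the aggregator; the raw records never leave each site, which is the property that makes the hospital scenario workable.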
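And the "math on the data while it stays encrypted" idea can be demystified with a toy additively homomorphic scheme in the style of Paillier: anyone can combine ciphertexts, but only the key holder can reveal the result. This is a deliberately tiny, insecure textbook sketch in plain Python, with parameters chosen for readability; it is not Intel's HE Transformer and not production cryptography.

```python
import math
import secrets

# Toy Paillier keypair: tiny primes for readability; real schemes use thousands of bits.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # modular inverse; valid because g = n + 1

def encrypt(m):
    """Ciphertext of integer m (0 <= m < n); randomized, so equal plaintexts differ."""
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(c1, c2):
    """Addition on plaintexts is multiplication on ciphertexts: no decryption needed."""
    return (c1 * c2) % n2

c1, c2 = encrypt(20), encrypt(22)
c_sum = add_encrypted(c1, c2)     # a third party could do this without the secret key
assert decrypt(c_sum) == 42
print("decrypted sum:", decrypt(c_sum))
```

A third party could compute the encrypted sum of, say, two banks' risk scores without ever seeing either input, which is the shape of the fraud-detection collaboration discussed above.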