
Search Results for first stable release:

Closing Panel | Generative AI: Riding the Wave | AWS Startup Showcase S3 E1



(mellow music) >> Hello everyone, welcome to theCUBE's coverage of AWS Startup Showcase. This is the closing panel session on AI machine learning, the top startups building generative AI on AWS. It's a great panel. This is going to be the experts talking about riding the wave in generative AI. We got Ankur Mehrotra, who's the director and general manager of AI and machine learning at AWS, and Clem Delangue, co-founder and CEO of Hugging Face, and Ori Goshen, who's the co-founder and CEO of AI21 Labs. Ori from Tel Aviv dialing in, and the rest coming in here on theCUBE. Appreciate you coming on for this closing session for the Startup Showcase. >> Thanks for having us. >> Thank you for having us. >> Thank you. >> I'm super excited to have you all on. Hugging Face was recently in the news with the AWS relationship, so congratulations. Open source, open science, really driving the machine learning. And we got the AI21 Labs access to the LLMs, generating huge scale live applications, commercial applications, coming to the market, all powered by AWS. So everyone, congratulations on all your success, and thank you for headlining this panel. Let's get right into it. AWS is powering this wave here. We're seeing a lot of push here from applications. Ankur, set the table for us on the AI machine learning. It's not new, it's been goin' on for a while. The past three years have seen significant advancements, but there's been a lot of work done in AI machine learning. Now it's released to the public. Everybody's super excited and now says, "Oh, the future's here!" It's kind of been going on for a while and baking. Now it's kind of coming out. What's your view here? Let's get it started. >> Yes, thank you. So, yeah, as you may be aware, Amazon has been investing in machine learning research and development for quite some time now. And we've used machine learning to innovate and improve user experiences across different Amazon products, whether it's Alexa or Amazon.com. But we've also brought in our expertise to extend what we are doing in the space and add more generative AI technology to our AWS products and services, starting with CodeWhisperer, which is an AWS service that we announced a few months ago, which is, you can think of it as a coding companion as a service, which uses generative AI models underneath. And so this is a service that customers who have no machine learning expertise can just use. And we also are talking to customers, and we see a lot of excitement about generative AI, and customers who want to build these models themselves, who have the talent and the expertise and resources. For them, AWS has a number of different options and capabilities they can leverage, such as our custom silicon, such as Trainium and Inferentia, as well as distributed machine learning capabilities that we offer as part of SageMaker, which is an end-to-end machine learning development service. At the same time, many of our customers tell us that they're interested in not training and building these generative AI models from scratch, given they can be expensive and can require specialized talent and skills to build. And so for those customers, we are also making it super easy to bring in existing generative AI models into their machine learning development environment within SageMaker for them to use. So we recently announced our partnership with Hugging Face, where we are making it super easy for customers to bring in those models into their SageMaker development environment for fine tuning and deployment.
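[Editor's aside: as a concrete illustration of the SageMaker integration Ankur describes, deploying a Hugging Face Hub model to a SageMaker endpoint goes through the Hugging Face support in the SageMaker Python SDK. The sketch below is a minimal, assumed example; the model ID, task, instance type, and library versions are illustrative stand-ins, not details from the panel.]

```python
# Minimal sketch: hosting a Hugging Face Hub model on a SageMaker endpoint.
# Assumes the SageMaker Python SDK and an execution role with SageMaker
# permissions; the model ID, versions, and instance type are illustrative.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role the endpoint will run under

model = HuggingFaceModel(
    role=role,
    transformers_version="4.26",  # assumed versions; check the SDK docs for
    pytorch_version="1.13",       # currently supported combinations
    py_version="py39",
    env={
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    },
)

# Create a managed real-time endpoint and send one test request.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "Generative AI is riding a tidal wave."}))

predictor.delete_endpoint()  # tear down the endpoint to stop incurring cost
```

In the same SDK, fine-tuning typically uses the HuggingFace estimator class pointed at a training script, with the resulting model deployed the same way.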
And then we are also partnering with other proprietary model providers such as AI21 and others, where we are making these generative AI models available within SageMaker for our customers to use. So our approach here is to really provide customers options and choices and help them accelerate their generative AI journey. >> Ankur, thank you for setting the table there. Clem and Ori, I want to get your take, because riding the wave is the theme of this session, and to me being in California, I imagine the big surf, the big waves, the big talent out there. This is like alpha geeks, alpha coders, developers are really leaning into this. You're seeing massive uptake from the smartest people. Whether they're young or around, they're coming in with their kind of surfboards, (chuckles) if you will. These early adopters, they've been on this for a while; now the waves are hitting. This is a big wave, everyone sees it. What are some of those early adopter devs doing? What are some of the use cases you're seeing right out of the gate? And what does this mean for the folks that are going to come in and get on this wave? Can you guys share your perspective on this? Because you're seeing the best talent now leaning into this. >> Yeah, absolutely. I mean, from Hugging Face's vantage point, it's not even a wave, it's a tidal wave, or maybe even the tide itself. Because actually what we are seeing is that AI and machine learning is not something that you add to your products. It's very much a new paradigm to do all technology. It's this idea that we had in the past 15, 20 years, one way to build software and to build technology, which was writing a million lines of code, very rule-based, and then you get your product. Now what we are seeing is that every single product, every single feature, every single company is starting to adopt AI to build the next generation of technology. And that works both to make the existing use cases better, if you think of search, if you think of social network, if you think of SaaS, but also it's creating completely new capabilities that weren't possible with the previous paradigm. Now AI can generate text, it can generate image, it can describe your image, it can do so many new things that weren't possible before. >> It's going to really make the developers really productive, right? I mean, you're seeing the developer uptake strong, right? >> Yes, we have over 15,000 companies using Hugging Face now, and it keeps accelerating. I really think that maybe in like three, five years, there's not going to be any company not using AI. It's going to be really kind of the default to build all technology. >> Ori, weigh in on this. APIs, the cloud. Now I'm a developer, I want to have live applications, I want the commercial applications on this. What's your take? Weigh in here. >> Yeah, first, I absolutely agree. I mean, we're in the midst of a technology shift here. I think not a lot of people realize how big this is going to be. Just the number of possibilities is endless, and, I think, hard to imagine. And I don't think it's just the use cases. I think we can think of it as two separate categories. We'll see companies and products enhancing their offerings with these new AI capabilities, but we'll also see new companies that are AI first, that kind of reimagine certain experiences. They build something that wasn't possible before. And that's why I think it's actually extremely exciting times.
And maybe more philosophically, I think now these large language models and large transformer-based models are helping us as people to express our thoughts and kind of making the bridge from our thinking to a creative digital asset at a speed we've never imagined before. I can write something down and get a piece of text, or an image, or a code. So I'll start by saying it's hard to imagine all the possibilities right now, but it's certainly big. And if I had to bet, I would say it's probably at least as big as the mobile revolution we've seen in the last 20 years. >> Yeah, this is the biggest. I mean, it's been compared to the Enlightenment Age. I saw the Wall Street Journal had a recent story on this. We've been saying that this is probably going to be bigger than all inflection points combined in the tech industry, given what transformation is coming. I guess I want to ask you guys, on the early adopters, we've been hearing on these interviews and throughout the industry that there's already a set of big companies, a set of companies out there that have a lot of data and they're already there, they're kind of tinkering. Kind of reminds me of the old hyperscaler days where they were building their own scale, and they're eatin' glass, spittin' nails out, you know, they're hardcore. Then you got everybody else kind of saying board level, "Hey team, how do I leverage this?" How do you see those two things coming together? You got the fast followers coming in behind the early adopters. What's it like for the second wave coming in? What are those conversations for those developers like? >> I mean, I think for me, the important switch for companies is to change their mindset from being kind of like a traditional software company to being an AI or machine learning company. And that means investing, hiring machine learning engineers, machine learning scientists, infrastructure team members who are working on how to put these models in production, team members who are able to optimize models, specialized models, customized models for the company's specific use cases. So it's really changing this mindset of how you build technology and optimize your company building around that. Things are moving so fast that I think now it's kind of like too late for low hanging fruits or small, small adjustments. I think it's important to realize that if you want to be good at that, and if you really want to surf this wave, you need massive investments. If there are like some surfers listening with this analogy of the wave, right, when there are waves, it's not enough just to stand and make a little bit of adjustments. You need to position yourself aggressively, paddle like crazy, and that's how you get into the waves. So that's what companies, in my opinion, need to do right now. >> Ori, what's your take on the generative models out there? We hear a lot about foundation models. What's your experience running end-to-end applications for large foundation models? Any insights you can share with the app developers out there who are looking to get in? >> Yeah, I think first of all, it starts to create an economy where it probably doesn't make sense for every company to create their own foundation models. You can basically start by using an existing foundation model, either open source or a proprietary one, and start deploying it for your needs. And then comes the second round when you are starting the optimization process.
You bootstrap, whether it's a demo, or a small feature, or introducing new capability within your product, and then start collecting data. That data, and particularly the human feedback data, helps you to constantly improve the model, so you create this data flywheel. And I think we're now entering an era where customers have a lot of different choices of how they want to start their generative AI endeavor. And it's a good thing that there's a variety of choices. And the really amazing thing here is that every industry, any company you speak with, it could be something very traditional like industrial or financial, medical, really any company. I think people now start to imagine what are the possibilities, and seriously think what's their strategy for adopting this generative AI technology. And I think in that sense, the foundation model actually enabled this to become scalable. So the barrier to entry became lower; now the adoption could actually accelerate. >> There's a lot of integration aspects here in this new wave that's a little bit different. Before it was like very monolithic, hardcore, very brittle. A lot more integration, you see a lot more data coming together. I have to ask you guys, as developers come in and grow, I mean, when I went to college and you were a software engineer, I mean, I got a degree in computer science, and software engineering, that's all you did was code, (chuckles) you coded. Now, isn't it like everyone's a machine learning engineer at this point? Because that will be ultimately the science. So, (chuckles) you got open source, you got open software, you got the communities. Swami called you guys the GitHub of machine learning, Hugging Face is the GitHub of machine learning, mainly because that's where people are going to code. So this is essentially, machine learning is computer science. What's your reaction to that? >> Yes, my co-founder Julien at Hugging Face has been having this thing for quite a while now, for over three years, which was saying that actually software engineering as we know it today is a subset of machine learning, instead of the other way around. People would call us crazy a few years ago when we were saying that. But now we are realizing that you can actually code with machine learning. So machine learning is generating code. And we are starting to see that every software engineer can leverage machine learning through open models, through APIs, through different technology stacks. So yeah, it's not crazy anymore to think that maybe in a few years, there's going to be more people doing AI and machine learning. However you call it, right? Maybe you'll still call them software engineers, maybe you'll call them machine learning engineers. But there might be more of these people in a couple of years than there are software engineers today. >> I bring this up as more tongue in cheek as well, because Ankur, infrastructure as code is what made cloud great, right? That's kind of the DevOps movement. But here the shift is so massive, there will be a game-changing philosophy around coding. Machine learning as code, you're starting to see CodeWhisperer, you guys have had coding companions for a while on AWS. So this is a paradigm shift. How is the cloud playing into this for you guys? Because to me, I've been riffing on some interviews where it's like, okay, you got the cloud going next level. This is an example of that, where there is a DevOps-like moment happening with machine learning, whether you call it coding or whatever.
It's writing code on its own. Can you guys comment on what this means on top of the cloud? What comes out of the scale? What comes out of the benefit here? >> Absolutely, so- >> Well first- >> Oh, go ahead. >> Yeah, so I think as far as scale is concerned, I think customers are really relying on cloud to make sure that the applications that they build can scale along with the needs of their business. But there's another aspect to it, which is that until a few years ago, John, what we saw was that machine learning was a data scientist heavy activity. There were data scientists who were taking the data and training models. And then as machine learning found its way more and more into production and actual usage, we saw MLOps become a thing, and MLOps engineers become more involved in the process. And then we now are seeing, as machine learning is being used to solve more business critical problems, we're seeing even legal and compliance teams get involved. We are seeing business stakeholders more engaged. So, more and more machine learning is becoming an activity that's not just performed by data scientists, but is performed by a team and a group of people with different skills. And for them, we as AWS are focused on providing the best tools and services for these different personas to be able to do their job and really complete that end-to-end machine learning story. So that's where, whether it's tools related to MLOps or even for folks who cannot code or don't know any machine learning. For example, we launched SageMaker Canvas as a tool last year, which is a UI-based tool which data analysts and business analysts can use to build machine learning models. So overall, the spectrum in terms of persona and who can get involved in the machine learning process is expanding, and the cloud is playing a big role in that process. >> Ori, Clem, can you guys weigh in too? 'Cause this is just another abstraction layer of scale. What's it mean for you guys as you look forward to your customers and the use cases that you're enabling? >> Yes, I think what's important is that the AI companies and providers and the cloud kind of work together. That's how you make a seamless experience and you actually reduce the barrier to entry for this technology. So that's what we've been super happy to do with AWS for the past few years. We actually announced not too long ago that we are doubling down on our partnership with AWS. We're excited to have many, many customers on our shared product, the Hugging Face deep learning container on SageMaker. And we are working really closely with the Inferentia team and the Trainium team to release some more exciting stuff in the coming weeks and coming months. So I think when you have an ecosystem and a system where AWS and the AI providers, AI startups can work hand in hand, it's to the benefit of the customers and the companies, because it makes it orders of magnitude easier for them to adopt this new paradigm to build technology with AI. >> Ori, this is a scale on reasoning too. The data's out there and making sense out of it, making it reason, getting comprehension, having it make decisions is next, isn't it? And you need scale for that. >> Yes. Just a comment about the infrastructure side. So I think really the purpose is to streamline and make these technologies much more accessible. And I think we'll see, I predict that we'll see in the next few years more and more tooling that makes this technology much simpler to consume.
And I think it plays a very important role. There's so many aspects, like monitoring the models and the kind of outputs they produce, and kind of containing and running them in a production environment. There's so much there to build on, and the infrastructure side will play a very significant role. >> All right, that's awesome stuff. I'd love to change gears a little bit and get a little philosophy here around AI and how it's going to transform, if you guys don't mind. There's been a lot of conversations around, on theCUBE here as well as in some industry areas, where it's like, okay, all the heavy lifting is automated away with machine learning and AI, the complexity, there's some efficiencies, it's horizontal and scalable across all industries. Ankur, good point there. Everyone's going to use it for something. And a lot of stuff gets brought to the table with large language models and other things. But the key ingredient will be proprietary data or human input, or some sort of AI whisperer kind of role, or prompt engineering, people are saying. So with that being said, some are saying it's automating intelligence. And that creativity will be unleashed from this. If the heavy lifting goes away and AI can fill the void, that shifts the value to the intellect or the input. And so that means data's got to come together, interact, fuse, and understand each other. This is kind of new. I mean, old school AI was, okay, got a big model, I provisioned it a long time, very expensive. Now it's all free flowing. Can you guys comment on where you see this going with this freeform, data flowing everywhere, heavy lifting, and then specialization? >> Yeah, I think- >> Go ahead. >> Yeah, I think, so what we are seeing with these large language models or generative models is that they're really good at creating stuff. But I think it's also important to recognize their limitations. They're not as good at reasoning and logic. And I think now we're seeing great enthusiasm, I think, which is justified. And the next phase would be how to make these systems more reliable. How to inject more reasoning capabilities into these models, or augment with other mechanisms that actually perform more reasoning so we can achieve more reliable results. And we can count on these models to perform for critical tasks, whether it's medical tasks, legal tasks. We really want to kind of offload a lot of the intelligence to these systems. And then we'll have to get back, we'll have to make sure these are reliable, we'll have to make sure we get some sort of explainability that we can understand the process behind the generated results that we received. So I think this is kind of the next phase of systems that are based on these generative models. >> Clem, what's your view on this? Obviously you're at an open community, open source has been around, it's been a great track record, proven model. I'm assuming creativity's going to come out of the woodwork, and if we can automate open source contribution, and relationships, and onboarding more developers, there's going to be unleashing of creativity. >> Yes, it's been so exciting on the open source front. We all know BERT, BLOOM, GPT-J, T5, Stable Diffusion, all that work, the previous and the current generation of open source models that are on Hugging Face. It has been accelerating in the past few months. So I'm super excited about ControlNet right now that is really having a lot of impact, which is kind of like a way to control the generation of images.
Super excited about Flan UL2, which is like a new model that has been recently released and is open source. So yeah, it's really fun to see the ecosystem coming together. Open source has been the basis for traditional software, with like open source programming languages, of course, but also all the great open source that we've gotten over the years. So we're happy to see that the same thing is happening for machine learning and AI, and hopefully can help a lot of companies reduce a little bit the barrier to entry. So yeah, it's going to be exciting to see how it evolves in the next few years in that respect. >> I think the developer productivity angle that's been talked about a lot in the industry will be accelerated significantly. I think security will be enhanced by this. I think in general, applications are going to transform at a radical rate, accelerated, incredible rate. So I think it's not a big wave, it's the water, right? I mean, (chuckles) it's the new thing. My final question for you guys, if you don't mind, I'd love to get each of you to answer the question I'm going to ask you, which is, a lot of conversations around data. Data infrastructure's obviously involved in this. And the common thread that I'm hearing is that every company that looks at this is asking themselves, if we don't rebuild our company, start thinking about rebuilding our business model around AI, we might be dinosaurs, we might be extinct. And it reminds me of that scene in Moneyball when, at the end, it's like, if we're not building the model around your model, every company will be out of business. What's your advice to companies out there that are having those kind of moments where it's like, okay, this is real, this is next gen, this is happening. I better start thinking and putting into motion plans to refactor my business, 'cause it's happening, business transformation is happening on the cloud. This kind of puts an exclamation point on, with the AI, as a next step function. Big increase in value. So it's an opportunity for leaders. Ankur, we'll start with you. What's your advice for folks out there thinking about this? Do they put their toe in the water? Do they jump right into the deep end? What's your advice? >> Yeah, John, so we talk to a lot of customers, and customers are excited about what's happening in the space, but they often ask us like, "Hey, where do we start?" So we always advise our customers to do a lot of proof of concepts, understand where they can drive the biggest ROI. And then also leverage existing tools and services to move fast and scale, and try and not reinvent the wheel where it doesn't need to be. That's basically our advice to customers. >> Get it. Ori, what's your advice to folks who are scratching their head going, "I better jump in here. "How do I get started?" What's your advice? >> So I actually think that you need to think about it really economically. Both on the opportunity side and the challenges. So there's a lot of opportunities for many companies to actually gain revenue upside by building these new generative features and capabilities. On the other hand, of course, this would probably affect the COGS, and incorporating these capabilities could probably affect the COGS. So I think we really need to think carefully about both of these sides, and also understand clearly if this is a project or an effort towards cost reduction, then the ROI is pretty clear, or a revenue amplifier, where there's, again, a lot of different opportunities.
So I think once you think about this in a structured way, I think, and map the different initiatives, then it's probably a good way to start and a good way to start thinking about these endeavors. >> Awesome. Clem, what's your take on this? What's your advice, folks out there? >> Yes, all of this is very good advice already. Something that you said before, John, that I disagreed with a little bit, a lot of people are talking about the data moat and proprietary data. Actually, when you look at some of the organizations that have been building the best models, they don't have specialized or unique access to data. So I'm not sure that's so important today. I think what's important for companies, and it's been the same for the previous generation of technology, is their ability to build better technology faster than others. And in this new paradigm, that means being able to build machine learning faster than others, and better. So that's how, in my opinion, you should approach this. And kind of like how can you evolve your company, your teams, your products, so that you are able in the long run to build machine learning better and faster than your competitors. And if you manage to put yourself in that situation, then that's when you'll be able to differentiate yourself to really kind of be impactful and get results. That's really hard to do. It's something really different, because machine learning and AI is a different paradigm than traditional software. So this is going to be challenging, but I think if you manage to nail that, then the future is going to be very interesting for your company. >> That's a great point. Thanks for calling that out. I think this all reminds me of the cloud days early on. If you went to the cloud early, you took advantage of it when the pandemic hit. If you weren't native in the cloud, you got hamstrung by that, you were flatfooted. So just get in there. (laughs) Get in the cloud, get into AI, you're going to be good. Thanks for calling that out. Final parting comments, what's your most exciting thing going on right now for you guys? Ori, Clem, what's the most exciting thing on your plate right now that you'd like to share with folks? >> I mean, for me it's just the diversity of use cases and really creative ways of companies leveraging this technology. Every day I speak with about two, three customers, and I'm continuously being surprised by the creative ideas. And the future is really exciting of what can be achieved here. And also I'm amazed by the pace that things move in this industry. It's just, there's not a dull moment. So, definitely exciting times. >> Clem, what are you most excited about right now? >> For me, it's all the new open source models that have been released in the past few weeks, and that they'll keep being released in the next few weeks. I'm also super excited about more and more companies getting into this capability of chaining different models and different APIs. I think that's a very, very interesting development, because it creates new capabilities, new possibilities, new functionalities that weren't possible before. You can plug an API with an open source embedding model, with, like, an audio transcription model. So that's also very exciting. This capability of having more interoperable machine learning will also, I think, open a lot of interesting things in the future. >> Clem, congratulations on your success at Hugging Face. Please pass that on to your team. Ori, congratulations on your success, and continue to, just day one.
I mean, it's just the beginning. It's not even scratching the surface. Ankur, I'll give you the last word. What are you excited for at AWS? More cloud goodness coming here with AI. Give you the final word. >> Yeah, so as both Clem and Ori said, I think the research in the space is moving really, really fast, so we are excited about that. But we are also excited to see the speed at which enterprises and other AWS customers are applying machine learning to solve real business problems, and the kind of results they're seeing. So when they come back to us and tell us the kind of improvement in their business metrics and overall customer experience that they're driving and they're seeing real business results, that's what keeps us going and inspires us to continue inventing on their behalf. >> Gentlemen, thank you so much for this awesome high impact panel. Ankur, Clem, Ori, congratulations on all your success. We'll see you around. Thanks for coming on. Generative AI, riding the wave, it's a tidal wave, it's the water, it's all happening. All great stuff. This is season three, episode one of AWS Startup Showcase closing panel. This is the AI ML episode, the top startups building generative AI on AWS. I'm John Furrier, your host. Thanks for watching. (mellow music)
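[Editor's aside: on Clem's earlier point about chaining models and APIs, with open models the chain can be as simple as feeding one pipeline's output into another. A rough sketch using the open source transformers library follows; the specific checkpoints and file name are illustrative stand-ins, not models named in the panel.]

```python
# Rough sketch of chaining two open models: a speech transcription model
# feeding an embedding model. Checkpoints are illustrative stand-ins; any
# compatible ones would work.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
embedder = pipeline("feature-extraction",
                    model="sentence-transformers/all-MiniLM-L6-v2")

transcript = asr("panel_recording.wav")["text"]  # audio -> text
token_vectors = embedder(transcript)[0]          # text -> per-token embeddings

print(transcript)
print(len(token_vectors), "tokens embedded,",
      len(token_vectors[0]), "dimensions each")
```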

Published Date : Mar 9 2023


Nayaki Nayyar, Ivanti and Stephanie Hallford, Intel | CUBE Conversation, July 2020



(calm music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Welcome to this CUBE Conversation. I'm Lisa Martin, and today, I'm talking to Ivanti again and Intel, some breaking news. So please welcome two guests, the EVP and Chief Product Officer of Ivanti, Nayaki Nayyar. She's back, and we've also got the VP and GM of Business Client Solutions Platforms for Intel, Stephanie Hallford. Nayaki and Stephanie, it's great to have you on the program. >> It's great to be back here with you Lisa, and Stephanie glad to have you here with us, thank you. >> Thank you, we're excited. >> Yeah, you guys are going to break some news for us, so let's go ahead and start. Nayaki, hot off the presses is Ivanti's announcement of its new hyper-automation platform, Ivanti Neurons, helping organizations now in this new next normal of so much remote work. Now, just on the heels of that, you're announcing a new strategic partnership with Intel. Tell me about that. >> So Lisa, like we announced, our Ivanti Neurons platform that is helping our customers and all the IT organizations around the world to deal with this explosive growth of remote workers, the devices that the workforce uses, the data that it's getting from those devices, and also the security challenges, and Neurons really help address what we call discover all the devices, manage those devices, self-heal those devices, self-secure the devices, and with this partnership with Intel, we are extremely excited about the potential for our customers and the benefits that customers can get. Intel is offering what they call Device as a Service, which includes both the hardware and software, and with this partnership, we are announcing the integration between Intel's vPro platform and Ivanti's Neurons platform, which is what we are so excited about. Our joint customers, joint enterprises that are using both the products can now benefit from this out of the box integration to take advantage of this Device as a Service combined offer. >> So Stephanie, talk to us from Intel's perspective. This is an integration of Intel's Endpoint Management Assistant with Ivanti Neurons. How does this drive up the value for the EMA solution for your customers who are already using it? >> Right, well, so vPro is just to step everyone back, vPro is the number one enterprise platform trusted now for over 14 years. We are in a vast majority of enterprises around the world, and that's because vPro is essentially our best performing CPUs, our highest level of security, our highest level manageability, which is our EMA or "Emma" manageability solution, which Ivanti is integrating, and also stability, so that is the promise to IT managers for a stable, the Intel Stable Image platform, and what that allows is IT managers to know that we will keep as much stability and fast forward and push through any fixes as quickly as possible on those vPro devices because we understand that IT networks usually qual, you know, not all at one time, but it's sequential. So vPro is our number one enterprise built for business, validated, enabled, and we're super excited today because we're taking that remote manageability solution that comes with vPro, and we are marrying it with Ivanti's top-class endpoint management solution, and Ivanti is a world leader in managing and protecting endpoints, and today more than ever, because IT's remote. And Intel,
for instance, our IT over one weekend had to figure out how to support a hundred thousand remote workers, so the ability for Ivanti to now have our remote manageability in band, out of band, on-prem, in the cloud, it really rounds out Ivanti's already fantastic, world-class solution, so it's a fantastic start to what I foresee is going to be a great partnership. >> And probably a big target install base. Now, can you talk to me a little bit about COVID as a catalyst for this partnership? So many companies, the stuff they talked about, a great example of Intel pivoting over a weekend for a hundred thousand people. We're hearing so many different numbers of an explosion of devices, but also experts and even C-suite from tech companies projecting maybe 30 to 40% of the workforce only will go back, so talk to me about COVID as really driving the necessity for organizations to benefit from this type of technology. >> Yeah, so Lisa, like Stephanie said, right, as Intel had to take a hundred thousand employees remote over a weekend, that is true for pretty much every company, every organization, every enterprise independent of industry vertical that they had to take all their workforce and move them to be primarily remote workers, and the stats of WFH is what used to be, I would say, three to four percent before COVID of remote working. Post-COVID or during COVID, as we say, it's going to be around 30, 40, 50%, and this is a conversation and a challenge. Every IT organization, every C-level exec, and, in most cases, I'm also seeing this become a board conversation that they're trying to figure out not just how to support remote workers for a short time, but for a longer time as this becomes the new normal or the next normal, whatever you call that, Lisa, and really helping employees through this transition and providing what we call a seamless experience as employees are working from home or on the move or location agnostic, being able to provide an experience, a service experience that understands what employees' preferences are, what their needs are, and providing that consumer-like experience is what this joint offering between Intel and Ivanti really brings together for our joint customers. >> So you talked about this being elevated to the board level conversation, you know, and this is something that we're hearing a lot of that suddenly there's so much more visibility and focus on certain parts of businesses, and survival is, so many businesses are at risk. Stephanie, I'd like to get your perspective on how this joint solution with Intel and Ivanti, do you see this as an opportunity to give your customers not just a competitive advantage, but for maybe some of those businesses who might be in jeopardy like a survival strategy?
>> Absolutely, I mean, the, you know, while we both Ivanti and Intel have our own IT challenges and we support our workers directly, we are broadly experienced in supporting many, many companies that frankly, perhaps, weren't planning for these types of instances, remote manageability overnight, security and cyber threats getting more and more sophisticated, but, you know, tech companies like Ivanti, like Intel, we have been thinking about this and experiencing and planning for these things and bringing them out in our products for some time, and so I think it is a great opportunity when we come together and we bring that, you know, IP expertise and IT expertise, both IP technical and that IT insight, and we bring it to customers who are of all industries, whether it be healthcare or financial or medium businesses who are increasingly being managed by service providers who can utilize this type of device as a service and endpoint manageability. Most companies and certainly all IT managers will tell you they're overwhelmed. They are traditionally squeezed on budget, and they have the massive requirement to take their companies entirely cloud and cloud oriented or maybe a hybrid of cloud and on-prem, and they really would prefer to leave network security and network management to experts, and that's where we can come in with our platform, with our intelligence, we work hard to continue to build that product roadmap to stay ahead of cyber threats. Our vPro platform, for instance, has what we call Intel Hardware Shield, a set of technologies that actually protects against cyber attack, even under the OS, so if the OS is down or there's a cyber attack around the OS, we actually can lock down the BIOS and the firmware and alert the OS and have that communication, which allows the system to protect those areas that need to be protected or lock down or encrypt those areas, so this is the type of thing we bring to the party, and then Ivanti has that absolute endpoint management credibility, that there's just, I think, ease. So if IT managers are worried about moving to the cloud and getting workers remote and, you know, managing cyber threats, they really would prefer to leave this management and security of their network to experts like Ivanti, and so we're thrilled to kind of combine that expertise and give IT managers a little bit of peace of mind. >> I think it's even more than giving IT managers peace of mind, but so talk to me, Nayaki, about how these technologies work together. So for example, when we talked about the Neurons and the hyper-automation platform that you just announced, you were talking about the discovery, the self-healing, self-securing of all these devices within an organization that they may not even know they have: edge devices, on-prem, cloud. Talk to me about how these two technologies work together. Is it discovering all these devices first, self-security, self-healing? How does then EMA come into play? >> So let me give an analogy in our consumer world, Lisa. We all are used to or getting used to cars where they automatically heal themselves. I have a car sitting in my garage that I haven't taken to a workshop for the last four years since I bought it, so it's almost a similar experience that combined offering brings to our customers where all these endpoints, like Stephanie said, we are, I would say, one of the leading providers in the endpoint management where we support today.
Ivanti supports over 40 million endpoints for our customers, and combining that with a strong vPro platform from Intel, that combined offering, which is what we call Device as a Service, so that the IT departments or the enterprises don't have to really worry about how we are discovering all of those devices, managing those devices. Self-healing, like if there's any performance issues, configuration drift issues, if there are any security vulnerabilities, anomalies on those devices, it automatically heals them. I mean, that is the beauty of it where IT doesn't have to worry about trying to do it reactively. These neurons detect and self-heal those devices automatically in the background, and almost augmenting IT with what I call these automation bots that are constantly running in the background on these devices and self-healing and self-securing those devices. So that's a benefit every organization, every company, every enterprise, every IT department gets from this joint offering, and if I were on their side, on the other side, I can really sleep at night knowing those devices are now not just being managed, but are secure because now we are able to auto-heal or auto-secure those devices in the background continuously. >> Let's talk about speed 'cause that's one of the things, speed and scale, we talk about with every different technology, but right now there's so much uncertainty across the globe, so for joint customers, Stephanie talked about the, you know, the large install base of customers on the vPro platform, how quickly would they be able to leverage this joint solution to really get those endpoints under management and start dialing down some of the risks like device sprawl and security threats? >> So the joint offering is available today, and the integration between both the platforms is being released with this announcement, so companies that have both of our platforms and solutions can start implementing it and really getting the benefit out of it. They don't have to wait for another three months or six months. Right after this release, they should be able to integrate the two platforms, discover everything that they have across their entire network, manage those, secure those devices and use these neurons to automatically heal and service those endpoints. >> So this is something that could get up and running pretty quickly? >> It's an out-of-the-box connection and integration that we worked very closely, Stephanie's team and my team had been working for months now, and, yeah, this is an exciting announcement not just from the product perspective, but also the benefit it gives our customers, the speed, the accuracy, and the service experience that they can provide to their end user, employees, customers, and consumers, I think, that's super beneficial for everyone. >> Absolutely, and then that 360 degree view. Stephanie, we'll wrap it up with you. Talk to us about how this new strategic partnership is a facilitator or an accelerant of Intel's device as a service vision. >> Well, you know, first off, I wanted to commend Nayaki's team because our engineers were so impressed.
They, you know, felt like they were working with the PhD advanced version of so many other engineering partners they'd ever come across, so I think we have a very strong engineering culture between our two companies and the speed at which we were able to integrate our solutions, and at the same time start thinking about what we may be able to do in the future, should we put our heads together and start doing a joint product roadmap on opportunities in the future, network connectivity, wifi connectivity, all sorts of ideas, so huge congratulations to the engineering teams because the speed at which we were able to integrate and get a product offering out was impressive, but, you know, secondarily, on to your question on device as a service, this is going to be by far where the future moves. We know that companies will tend to continue to look for ways to have sustainability in their environments, and so when you have Device as a Service, you're able to do things like end-to-end supporting that device, from its start in a network to when you end-of-life a device, and how you end-of-life that device has some severe sustainability and cost, you know, complexities, and if we're able to manage that device from end to end and provide servicing to alert IT managers and self-heal before problems happen, that helps obviously not only with business models and, you know, protecting data, but it also helps in keeping systems running and being alert to when systems begin to degrade or if there are issues or if it's time to refresh because the hardware is not new enough to take advantage of the new software capabilities, then you're able to end of life that device in a sustainable way, in a safe way, and, even to some degree, provide some opportunity for remediation of data and, you know, remote erase and continue to provide that security all the way into the end, so when we look at device as a service, it's more than just one aspect. It's really taking a device and being responsible for the security, the manageability, the self-healing from beginning to end, and I know that all IT managers need that, appreciate that, and frankly don't have the time or skillsets to be able to provide that in their own house. So I think there's the beginnings today, and I think we have a huge upside to what we can do in the future. I look at Intel's strengths in enterprise and how long we have been, you know, operating in enterprises around the world. Ivanti's, you know, in the vast majority of Fortune 100s, and when you've got kind of engineering powerhouses that are coming together and brainstorming it's, I think, it's a great partnership for relief for customer pain points in the future, which unfortunately there's going to be more probably. >> And this is just the beginning. >> I think that's one thing we can guarantee. It's what, sorry? >> Yeah, and it's just the beginning. This partnership is just the beginning. You will see a lot more happening between both the companies as we define the roadmap into the future, so we are super excited about all the work, the joint teams, and, Stephanie, I want to take this opportunity to thank you, your leadership, and your entire organization for helping us with this partnership. >> We're excited by it, we are, we know it's just the beginning of great things to come. >> Well, just the beginning means we have to have more conversations. The cultural fit really sounds like it's really there, and there's tight alignment with Ivanti and Intel. Ladies, thank you so much for joining me.
Nayaki, great to have you back on the program. >> Thank you, thank you, Lisa. Thank you for hosting us, and, Stephanie, it's always a pleasure talking to you, thank you. >> Likewise, looking forward to the launch and all the customer reactions. >> Absolutely. >> Yes, all right, thanks Nayaki, thanks Stephanie. For my guests, I'm Lisa Martin. You're watching this CUBE Conversation. (calm music)

Published Date : Jul 23 2020


Kenneth Knowles, Google - Flink Forward - #FFSF17 - #theCUBE



>> Welcome everybody, we're at the Flink Forward conference in San Francisco, at the Kabuki Hotel. Flink Forward U.S. is the first U.S. user conference for the Flink community sponsored by data Artisans, the creators of Flink, and we're here with special guest Kenneth Knowles-- >> Hi. >> Who works for Google and who heads up the Apache Beam Team where, just to set context, Beam is the API or SDK on which developers can build stream processing apps that can be supported by Google's Dataflow, Apache Flink, Spark, Apex, among other future products that'll come along. Ken, why don't you tell us, what was the genesis of Beam, and why did Google open up sort of the API to it? >> So, I can speak as an Apache Beam Team PMC member, that the genesis came from a combined code donation to Apache from the Google Cloud Dataflow SDK and there was also already written by data Artisans a Flink runner for that, which already included some portability hooks, and then there was also a runner for Spark that was written by some folks at PayPal. And so, sort of those three efforts pointed out that it was a good time to have a unified model for these DAG-based computational... I guess it's a DAG-based computational model. >> Okay, so I want to pause you for a moment. >> Yeah. >> And generally, we try to avoid being rude and cutting off our guests but, in this case, help us understand what a DAG is, and why it's so important. >> Okay, so a DAG is a directed acyclic graph, and, in some sense, if you draw a boxes and arrows diagram of your computation where you say "I read some data from here," and it goes through some filters and then I do a join and then I write it somewhere. These all end up looking like what they call the DAG just because of the fact that it is the structure, and all computation sort of can be modeled this way, and in particular, these massively parallel computations profit a lot from being modeled this way as opposed to MapReduce because the fact that you have access to the entire DAG means you can perform transformations and optimizations and you have more opportunities for executing it in different ways. >> Oh, in other words, because you can see the big picture you can find, like, the shortest path as opposed to I've got to do this step, I've got to do this step and this step. >> Yeah, it's exactly like that, you're not constrained to sort of, the person writing the program knows what it is that they want to compute, and then, you know, you have very smart people writing the optimizer and the execution engine. So it may execute an entirely different way, so for example, if you're doing a summation, right, rather than shuffling all your data to one place and summing there, maybe you do some partial summations, and then you just shuffle accumulators to one place, and finish the summation, right? >> Okay, now let me bump you up a couple levels >> Yeah. >> And tell us, so, MapReduce was a trees within the forest approach, you know, lots of seeing just what's a couple feet ahead of you. And now we have the big picture that allows you to find the best path, perhaps, one way of saying it. Tell us though, with Google or with others who are using Beam-compatible applications, what new class of solutions can they build that you wouldn't have done with MapReduce before? >> Well, I guess there's... There's two main aspects to Beam that I would emphasize, there's the portability, so you can write this application without having to commit to which backend you're going to run it on. And there's...
There's also the unification of streaming and batch which is not present in a number of backends, and Beam as this layer sort of makes it very easy to use sort of batch-style computation and streaming-style computation in the same pipeline. And actually I said there was two things, the third thing that actually really opens things up is that Beam is not just a portability layer across backends, it's also a portability layer across languages, so, something that really only has preliminary support on a lot of systems is Python, so, for example, Beam has a Python SDK where you write a DAG description of your computation in Python, and via Beam's portability APIs, one of these sort of usually Java-centric engines would be able to run that Python pipeline. >> Okay, so-- >> So, did I answer your question? >> Yes, yes, but let's go one level deeper, which is, if MapReduce, if its sweet spot was web crawl indexing in batch mode, what are some of the things that are now possible with a Beam-style platform that supports Beam, you know, underneath it, that can do this directed acyclic graph processing? >> I guess what I, I'm still learning all the different things that you can do with this style of computation, and the truth is it's just extremely general, right? You can set up a DAG, and there's a lot of talks here at Flink Forward about using a stream processor to do high frequency trading or fraud detection. And those are completely different even though they're in the same model of computation as, you know, you would still use it for things like crawling the web and doing PageRank over it. Actually, at the moment we don't have iterative computations so we wouldn't do PageRank today. >> So, is it considered a complete replacement, and then new use cases for older style frameworks like MapReduce, or is it a complement for things where you want to do more with data in motion or lower latency? >> It is absolutely intended as a full replacement for MapReduce, yes, like, if you're thinking about writing a MapReduce pipeline, instead you should write a Beam pipeline, and then you should benchmark it on different Beam backends, right? >> And, so, working with Spark, working with Flink, how are they, in terms of implementing the full richness of the Beam interface relative to the Google product Dataflow, from which I assumed Beam was derived? >> So, all of the different backends exist in sort of different states as far as implementing the full model. One thing I really want to emphasize is that Beam is not trying to take the intersection on all of these, right? And I think that your question already shows that you know this, we keep sort of a matrix on our website where we say, "Okay, there's all these different features you might want, and then there's all these backends you might want to run it on," and it's sort of, there's can you do it, can you do it sometimes, and notes about that, we want this whole matrix to be, yes, you can use all of the model on Flink, all of it on Spark, all of it on Google Cloud Dataflow, but so they all have some gaps and I guess, yeah, we're really welcoming contributors in that space. >> So, for someone who's been around for a long time, you might think of it as an ODBC driver, where the capabilities of the databases behind it are different, and so the drivers can only support some subset of a full capability.
>> Yeah, I think that there's, so, I'm not familiar enough with ODBC to say absolutely yes, absolutely no, but yes, it's that sort of a thing, it's like the JVM has many languages on it and ODBC provides this generic database abstraction. >> Is Google's goal with Beam API to make it so that customers demand a level of portability that goes not just for the on-prem products but for products that are in other public clouds, and sort of pry open the API lock-in? >> So, I can't say what Google's goals are, but I can certainly say that Beam's goals are that nobody's going to be locked into a particular backend. >> Okay. >> I mean, I can't even say what Beam's goals are, sorry, those are my goals, I can speak for myself. >> Is Beam seeing so far adoption by the sort of big consumer internet companies, or has it started to spread to mainstream enterprises, or is it still a little immature? >> I think Beam's still a little bit less mature than that, we're heading into our first stable release, so, we began incubating it as an Apache project about a year ago, and then, around the beginning of the new year, actually right at the end of 2016, we graduated to be an Apache top level project, so right now we're sort of on the road from we've become a top level project, we're seeing contributions ramp up dramatically, and we're aiming for a stable release as soon as possible, our next release we expect to be a stable API that we would encourage users and enterprises to adopt I think. >> Okay, and that's when we would see it in production form on the Google Cloud platform? >> Well, so the thing is that the code and the backends behind it are all very mature, but, right now, we're still sort of like, I don't know how to say it, we're polishing the edges, right, it's still got a lot of rough edges and you might encounter them if you're trying it out right now and things might change out from under you before we make our stable release. >> Understood. >> Yep. All right. Kenneth, thank you for joining us, and for the update on the Beam project and we'll be looking for that and seeing its progress over the next few months. >> Great. Thanks for having me. >> With that, I'm George Gilbert, I'm with Kenneth Knowles, we're at the data Artisans' Flink Forward user conference in San Francisco at the Kabuki Hotel and we'll be back after a few minutes.
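[Editor's aside: to ground the DAG and portability points Kenneth makes above, here is a minimal Beam pipeline in the Python SDK he mentions. It is a generic illustration, not code from the interview; swapping backends is a matter of changing the runner option.]

```python
# Minimal Apache Beam pipeline in the Python SDK: a small DAG that creates,
# filters, and sums. CombineGlobally(sum) is the summation case Kenneth
# describes -- runners may lift it into partial sums before the shuffle.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# DirectRunner executes locally; the same pipeline targets other backends by
# changing the runner, e.g. "FlinkRunner" or "DataflowRunner" (the latter
# needs additional GCP options such as project and region).
opts = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=opts) as p:
    (p
     | "Create" >> beam.Create(range(1, 101))
     | "KeepEvens" >> beam.Filter(lambda n: n % 2 == 0)
     | "Sum" >> beam.CombineGlobally(sum)
     | "Print" >> beam.Map(print))  # prints 2550
```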

Published Date : Apr 15 2017
