Opening Panel | Generative AI: Hype or Reality | AWS Startup Showcase S3 E1
(light airy music) >> Hello, everyone, welcome to theCUBE's presentation of the AWS Startup Showcase, AI and machine learning. "Top Startups Building Generative AI on AWS." This is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI and machine learning. We have three great guests: Bratin Saha, Vice President of Machine Learning and AI Services at Amazon Web Services. Tom Mason, the CTO of Stability AI, and Aidan Gomez, CEO and co-founder of Cohere. Two practitioners doing startups, and AWS. Gentlemen, thank you for opening up this session, this episode. Thanks for coming on. >> Thank you. >> Thank you. >> Thank you. >> So the topic is hype versus reality. So I think we're all in on the reality is great, hype is great, but the reality's here. I want to get into it. Generative AI's got all the momentum, it's going mainstream, it's kind of come out from behind the ropes, it's now mainstream. We saw the success of ChatGPT open up everyone's eyes, but there's so much more going on. Let's jump in and get your early perspectives on what should people be talking about right now? What are you guys working on? We'll start with AWS. What's the big focus right now for you guys as you come into this market that's highly active, highly hyped up, but people see value right out of the gate? >> You know, we have been working on generative AI for some time. In fact, last year we released CodeWhisperer, which is about using generative AI for software development, and a number of customers are using it and getting real value out of it. So generative AI is now something that's mainstream that can be used by enterprise users. And we have also been partnering with a number of other companies. So, you know, stability.ai, we've been partnering with them a lot. We want to be partnering with other companies as well. And seeing how we do three things, you know, first is providing the most efficient infrastructure for generative AI. And that is where, you know, things like Trainium, things like Inferentia, things like SageMaker come in. And then next is the set of models and then the third is the kind of applications like CodeWhisperer and so on. So, you know, it's early days yet, but clearly there's a lot of amazing capabilities that will come out and something that, you know, our customers are starting to pay a lot of attention to. >> Tom, talk about your company and what your focus is and why the Amazon Web Services relationship's important for you? >> So yeah, we're primarily committed to making incredible open source foundation models, and obviously Stable Diffusion's been our kind of first big model there, which we trained all on AWS. We've been working with them over the last year and a half to develop, obviously, a big cluster, and bring all that compute to training these models at scale, which has been a really successful partnership. And we're excited to take it further this year as we develop the commercial strategy of the business and build out, you know, the ability for enterprise customers to come and get all the value from these models that we think they can get. So we're really excited about the future. We've got a hugely exciting pipeline for this year with new modalities and video models and wonderful things, and trying to solve images for once and for all and get the kind of general value and value proposition correct for customers. So it's a really exciting time and very honored to be part of it.
>> It's great to see some of your customers doing so well out there. Congratulations to your team. Appreciate that. Aidan, let's get into what you guys do. What does Cohere do? What are you excited about right now? >> Yeah, so Cohere builds large language models, which are the backbone of applications like ChatGPT and GPT-3. We're extremely focused on solving the issues with adoption for enterprise. So it's great that you can make a super flashy demo for consumers, but it takes a lot to actually get it into billion user products and large global enterprises. So about six months ago, we released our command models, which are some of the best that exist for large language models. And in December, we released our multilingual text understanding models and that's on over a hundred different languages and it's trained on, you know, authentic data directly from native speakers. And so we're super excited to continue pushing this into enterprise and solving those barriers for adoption, making this transformation a reality. >> Just real quick, while I got you there on the new products coming out. Where are we in the progress? People see some of the new stuff out there right now. There's so much more headroom. Can you just scope out in your mind what that looks like? Like from a headroom standpoint? Okay, we see ChatGPT. "Oh yeah, it writes my papers for me, does some homework for me." I mean okay, yawn, maybe people say that, (Aidan chuckles) people excited or people are blown away. I mean, it's helped theCUBE out, it helps me, you know, feed up a little bit from my write-ups but it's not always perfect. >> Yeah, at the moment it's like a writing assistant, right? And it's still super early in the technologies trajectory. I think it's fascinating and it's interesting but its impact is still really limited. I think in the next year, like within the next eight months, we're going to see some major changes. You've already seen the very first hints of that with stuff like Bing Chat, where you augment these dialogue models with an external knowledge base. So now the models can be kept up to date to the millisecond, right? Because they can search the web and they can see events that happened a millisecond ago. But that's still limited in the sense that when you ask the question, what can these models actually do? Well they can just write text back at you. That's the extent of what they can do. And so the real project, the real effort, that I think we're all working towards is actually taking action. So what happens when you give these models the ability to use tools, to use APIs? What can they do when they can actually affect change out in the real world, beyond just streaming text back at the user? I think that's the really exciting piece. >> Okay, so I wanted to tee that up early in the segment 'cause I want to get into the customer applications. We're seeing early adopters come in, using the technology because they have a lot of data, they have a lot of large language model opportunities and then there's a big fast follower wave coming behind it. I call that the people who are going to jump in the pool early and get into it. They might not be advanced. Can you guys share what customer applications are being used with large language and vision models today and how they're using it to transform on the early adopter side, and how is that a tell sign of what's to come? 
>> You know, one of the things we have been seeing both with the text models that Aidan talked about as well as the vision models that stability.ai does, Tom, is customers are really using it to change the way you interact with information. You know, one example of a customer that we have, is someone who's kind of using that to query customer conversations and ask questions like, you know, "What was the customer issue? How did we solve it?" And trying to get those kinds of insights that were previously much harder to do. And then of course software is a big area. You know, generating software, making that, you know, just deploying it in production. Those have been really big areas that we have seen customers start to do. You know, looking at documentation, like instead of, you know, searching for stuff and so on, you know, you just have an interactive way, in which you can just look at the documentation for a product. You know, all of this goes to where we need to take the technology. One of which is, you know, the models have to be there but they have to work reliably in a production setting at scale, with privacy, with security, and you know, making sure all of this is happening, is going to be really key. That is what, you know, we at AWS are looking to do, which is work with partners like Stability and others and in the open source and really take all of these and make them available at scale to customers, where they work reliably. >> Tom, Aidan, what's your thoughts on this? Where are customers landing on these first use cases or set of low-hanging fruit use cases or applications? >> Yeah, so I think like the first group of adopters that really found product market fit were the copywriting companies. So one great example of that is HyperWrite. Another one is Jasper. And so for Cohere, that's the tip of the iceberg, like there's a very long tail of usage from a bunch of different applications. HyperWrite is one of our customers, they help beat writer's block by drafting blog posts, emails, and marketing copy. We also have a global audio streaming platform, which is using us to power a search engine that can comb through podcast transcripts, in a bunch of different languages. Then a global apparel brand, which is using us to transform how they interact with their customers through a virtual assistant, two dozen global news outlets who are using us for news summarization. So really like, these large language models, they can be deployed all over the place into every single industry sector, language is everywhere. It's hard to think of any company on Earth that doesn't use language. So it's, very, very- >> We're doing it right now. We got the language coming in. >> Exactly. >> We'll transcribe this puppy. All right. Tom, on your side, what do you see the- >> Yeah, we're seeing some amazing applications of it and you know, I guess that's partly been, because of the growth in the open source community and some of these applications have come from there that are then triggering this secondary wave of innovation, which is coming a lot from, you know, controllability and explainability of the model. But we've got companies like, you know, Jasper, which Aidan mentioned, who are using Stable Diffusion for image generation in blog creation, content creation. We've got Lensa, you know, which exploded, and is built on top of Stable Diffusion for fine tuning so people can bring themselves and their pets and, you know, everything into the models.
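To make the "built on top of Stable Diffusion" point concrete, here is a minimal, illustrative sketch of generating an image from a publicly available Stable Diffusion checkpoint using the open-source Hugging Face diffusers library. This is an editorial example only, not Stability AI's or Lensa's actual stack; the checkpoint name and prompt are assumptions.

```python
# Illustrative sketch: text-to-image with an open Stable Diffusion checkpoint.
# Assumes the `diffusers` and `torch` packages and a CUDA-capable GPU are available.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly hosted checkpoint (example model ID, used here as an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Apps like the ones discussed above layer fine-tuning, safety filters, and UX on
# top of a call that, at its core, looks roughly like this.
image = pipe("a studio portrait of a golden retriever wearing a space suit").images[0]
image.save("portrait.png")
```

Fine-tuning "so people can bring themselves and their pets into the models" typically adds a training step (for example, DreamBooth-style tuning on a handful of personal photos) before this inference call, but the serving pattern stays the same.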
>> So we've now got fine-tuned Stable Diffusion at scale, which has democratized, you know, that process, which is really fun to see. Lensa, you know, exploded. You know, I think it was the fastest growing app in the App Store at one point. And lots of other examples like NightCafe and Lexica and Playground. So seeing lots of cool applications. >> So many applications, we'll probably be a customer for all you guys. We'll definitely talk after. But the challenges are there for people adopting, they want to get into what you guys see as the challenges that turn into opportunities. How do you see the customers adopting generative AI applications? For example, we have massive amounts of transcripts, timed up to all the videos. I don't even know what to do. Do I just, do I code my API there? So, everyone has this problem, every vertical has these use cases. What are the challenges for people getting into this and adopting these applications? Is it figuring out what to do first? Or is it a technical setup? Do they stand up stuff, they just go to Amazon? What do you guys see as the challenges? >> I think, you know, the first thing is coming up with where you think you're going to reimagine your customer experience by using generative AI. You know, we talked about Ada, and Tom talked about a number of these ones and you know, you pick up one or two of these, to get that robust. And then once you have them, you know, we have models and we'll have more models on AWS, these large language models that Aidan was talking about. Then you go in and start using these models and testing them out and seeing whether they fit the use case or not. In many situations, like you said, John, our customers want to say, "You know, I know you've trained these models on a lot of publicly available data, but I want to be able to customize it for my use cases. Because, you know, there's some knowledge that I have created and I want to be able to use that." And then in many cases, and I think Aidan mentioned this, you know, you need these models to be up to date. Like you can't have it stay stale. And in those cases, you augment it with a knowledge base, and you know you have to make sure that these models are not hallucinating. And so you need to be able to do the right kind of responsible AI checks. So, you know, you start with a particular use case, and there are a lot of them. Then, you know, you can come to AWS, and then look at one of the many models we have and you know, we are going to have more models for other modalities as well. And then, you know, play around with the models. We have a playground kind of thing where you can test these models on some data, and then you can probably, you will probably want to bring your own data, customize it to your own needs, do some of the testing to make sure that the model is giving the right output and then just deploy it. And you know, we have a lot of tools. >> Yeah. >> To make this easy for our customers. >> How should people think about large language models? Because do they think about it as something that they tap into with their IP or their data? Or is it a large language model that they apply into their system? Is the interface that way? What's the interaction look like? >> In many situations, you can use these models out of the box. But in typical, in most of the other situations, you will want to customize it with your own data or with your own expectations. So the typical use case would be, you know, these models are exposed through APIs.
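The test, customize, and deploy loop being described here can be sketched in a few API calls. The snippet below is a rough illustration under assumptions: the base URL, endpoint paths, model names, and JSON fields are hypothetical placeholders, not a specific AWS, Cohere, or Stability API.

```python
# Rough sketch of the workflow: try a hosted base model, kick off a fine-tune on
# your own data, then call the customized model from an application.
# All endpoint names, model IDs, and fields below are assumed placeholders.
import requests

API_BASE = "https://api.example-llm-provider.com/v1"   # hypothetical provider URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# 1. Test a base model on a sample of your data to see whether it fits the use case.
out = requests.post(f"{API_BASE}/generate", headers=HEADERS, json={
    "model": "base-llm",
    "prompt": "Summarize this customer conversation and the issue raised: ...",
}).json()
print(out.get("text"))

# 2. Customize the model on private data (for example, transcripts you own).
job = requests.post(f"{API_BASE}/fine-tunes", headers=HEADERS, json={
    "base_model": "base-llm",
    "training_file": "s3://my-bucket/conversations.jsonl",   # assumed data location
}).json()

# 3. Once training finishes, the application calls the customized endpoint the
#    same way it called the base model.
out = requests.post(f"{API_BASE}/generate", headers=HEADERS, json={
    "model": job.get("fine_tuned_model", "custom-llm"),
    "prompt": "Summarize this customer conversation and the issue raised: ...",
}).json()
print(out.get("text"))
```

In practice the knowledge-base augmentation and responsible-AI checks mentioned above sit around these calls: retrieve fresh documents before the prompt is built, and validate the output before it reaches the user.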
So the typical use case would be, you know, you're using these APIs a little bit for testing and getting familiar, and then there will be an API that will allow you to train this model further on your data. So you use that API, you know, to make sure you augment it with the knowledge base. So then you use those APIs to customize the model and then just deploy it in an application. You know, like Tom was mentioning, a number of companies that are using these models. So once you have it, then you know, you again, use an endpoint API and use it in an application. >> All right, I love the example. I want to ask Tom and Aidan, because like most of my experience with Amazon Web Services in 2007, I would stand up EC2, put my code on there, play around, if it didn't work out, I'd shut it down. Is that a similar dynamic we're going to see with the machine learning where developers just kind of log in and stand up infrastructure and play around and then have a cloud-like experience? >> So I can go first. So I mean, we obviously, with AWS, working really closely with the SageMaker team, have a fantastic platform there for ML training and inference. And you know, going back to your point earlier, you know, where the data is, is hugely important for companies. Many companies bringing their models to their data in AWS on-premise for them is hugely important. Having the models be, you know, open source makes them explainable and transparent to the adopters of those models. So, you know, we are really excited to work with the SageMaker team over the coming year to bring companies to that platform and make the most of our models. >> Aidan, what's your take on developers? Do they just need to have a team in place, if we want to interface with you guys? Let's say, can they start learning? What do they got to do to set up? >> Yeah, so I think for Cohere, our product makes it much, much easier for people to get started and start building, it solves a lot of the productionization problems. But of course with SageMaker, like Tom was saying, I think that lowers the barrier even further because it solves problems like data privacy. So I want to underline what Bratin was saying earlier around when you're fine tuning or when you're using these models, you don't want your data being incorporated into someone else's model. You don't want it being used for training elsewhere. And so the ability to solve for enterprises, that data privacy and that security guarantee has been hugely important for Cohere, and that's very easy to do through SageMaker. >> Yeah. >> But the barriers for using this technology are coming down super quickly. And so for developers, it's just becoming completely intuitive. I love this, there's this quote from Andrej Karpathy. He was saying like, "It really wasn't on my 2022 list of things to happen that English would become, you know, the most popular programming language." And so the barrier is coming down- >> Yeah. >> Super quickly and it's exciting to see. >> It's going to be awesome for all the companies here, and then we'll do more, we're probably going to see an explosion of startups, already seeing that, the maps, ecosystem maps, the landscape maps are happening. So this is happening and I'm convinced it's not yesterday's chat bot, it's not yesterday's AI Ops. It's a whole another ballgame. So I have to ask you guys for the final question before we kick off the companies showcasing here. How do you guys gauge success of generative AI applications?
Is there a lens to look through and say, okay, how do I see success? It could be just getting a win or is it a bigger picture? Bratin, we'll start with you. How do you gauge success for generative AI? >> You know, ultimately it's about bringing business value to our customers. And making sure that those customers are able to reimagine their experiences by using generative AI. Now the way to get there is, of course, to deploy those models in a safe, effective manner, and ensuring that all of the robustness and the security guarantees and the privacy guarantees are all there. And we want to make sure that this transitions from something that's great demos to actual at-scale products, which means making them work reliably all of the time, not just some of the time. >> Tom, what's your gauge for success? >> Look, I think this, we're seeing a completely new form of ways to interact with data, to make data intelligent, and directly to bring in new revenue streams into business. So if businesses can use our models to leverage that and generate completely new revenue streams and ultimately bring incredible new value to their customers, then that's fantastic. And we hope we can power that revolution. >> Aidan, what's your take? >> Yeah, reiterating Bratin and Tom's point, I think that value in the enterprise and value in market is like a huge, you know, it's the goal that we're striving towards. I also think that, you know, the value to consumers and actual users and the transformation of the surface area of technology to create experiences like ChatGPT that are magical and it's the first time in human history we've been able to talk to something compelling that's not a human. I think that in itself is just extraordinary and so exciting to see. >> It really brings up a whole another category of markets. B2B, B2C, it's B2D, business to developer. Because I think this is kind of the big trend the consumers have to win. The developers coding the apps, it's a whole another sea change. Reminds me, everyone used the "Moneyball" movie as an example during the big data wave. Then you know, the value of data. There's a scene in "Moneyball" at the end, where Billy Beane's getting the offer from the Red Sox, then the owner of the Red Sox says, "If every team's not rebuilding their teams based upon your model, they'll be dinosaurs." I think that's the same with AI here. Every company will need to think about their business model and how they operate with AI. So it'll be a great run. >> Completely agree. >> It'll be a great run. >> Yeah. >> Aidan, Tom, thank you so much for sharing about your experiences at your companies and congratulations on your success and it's just the beginning. And Bratin, thanks for coming on representing AWS. And thank you, appreciate what you do. Thank you. >> Thank you, John. Thank you, Aidan. >> Thank you John. >> Thanks so much. >> Okay, let's kick off season three, episode one. I'm John Furrier, your host. Thanks for watching. (light airy music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
Tom Mason | PERSON | 0.99+ |
Aidan | PERSON | 0.99+ |
Red Sox | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Andrej Karpathy | PERSON | 0.99+ |
Bratin Saha | PERSON | 0.99+ |
December | DATE | 0.99+ |
2007 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
Aidan Gomez | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Billy Beane | PERSON | 0.99+ |
Bratin | PERSON | 0.99+ |
Moneyball | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
Ada | PERSON | 0.99+ |
last year | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Earth | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
Two practitioners | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
ChatGPT | TITLE | 0.99+ |
next year | DATE | 0.99+ |
Code Whisperer | TITLE | 0.99+ |
third | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
App Store | TITLE | 0.99+ |
first time | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Inferentia | TITLE | 0.98+ |
EC2 | TITLE | 0.98+ |
GPT-3 | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
Lensa | TITLE | 0.98+ |
SageMaker | ORGANIZATION | 0.98+ |
three things | QUANTITY | 0.97+ |
Cohere | ORGANIZATION | 0.96+ |
over a hundred different languages | QUANTITY | 0.96+ |
English | OTHER | 0.96+ |
one example | QUANTITY | 0.96+ |
about six months ago | DATE | 0.96+ |
One | QUANTITY | 0.96+ |
first use | QUANTITY | 0.96+ |
SageMaker | TITLE | 0.96+ |
Bing Chat | TITLE | 0.95+ |
one point | QUANTITY | 0.95+ |
Trainium | TITLE | 0.95+ |
Lexica | TITLE | 0.94+ |
Playground | TITLE | 0.94+ |
three great guests | QUANTITY | 0.93+ |
HyperWrite | TITLE | 0.92+ |
AI Meets the Supercloud | Supercloud2
(upbeat music) >> Okay, welcome back everyone to the Supercloud 2 event, live here in Palo Alto, theCUBE Studios live stage performance, virtually syndicating it all over the world. I'm John Furrier with Dave Vellante here as Cube alumni, and special influencer guest, Howie Xu, VP of Machine Learning at Zscaler, also part-time as a CUBE analyst 'cause he is that good. Comes on all the time. You're basically a CUBE analyst as well. Thanks for coming on. >> Thanks for inviting me. >> John: Technically, you're not really a CUBE analyst, but you're kind of like a CUBE analyst. >> Happy New Year to everyone. >> Dave: Great to see you. >> Great to see you, Dave and John. >> John: We've been talking about ChatGPT online. You wrote a great post about it being more like Amazon, not like Google. >> Howie: More than just Google Search. >> More than Google Search. Oh, it's going to compete with Google Search, which it kind of does a little bit, but more its infrastructure. So a clever point, good segue into this conversation, because this is kind of the beginning of these kinds of next gen things we're going to see. Things where it's like an obvious next gen, it's getting real. Kind of like seeing the browser for the first time, the Mosaic browser. Whoa, this internet thing's real. I think this is that moment and Supercloud-like enablement is coming. So this has been a big part of the Supercloud kind of theme. >> Yeah, you talk about Supercloud, you talk about, you know, AI, ChatGPT. I really think ChatGPT is really another Netscape moment, the browser moment. Because if you think about internet technology, right? It was brewing for 20 years before the early 90s. Not until you had a, you know, browser did people realize, "Wow, this is how wonderful this technology could be." Right? You know, all the wonderful things. Then you have Yahoo and Amazon. I think we have been brewing, you know, the AI technology for, you know, quite some time. Even then, you know, neural networks, deep learning. But not until ChatGPT came along did people realize, "Wow, you know, the user interface, user experience could be that great," right? So I really think, you know, if you look at the last 30 years, there is a browser moment, there is an iPhone moment. I think the ChatGPT moment is as big as those. >> Dave: What do you see as the intersection of things like ChatGPT and the Supercloud? Of course, the media's going to focus, journalists are going to focus on all the negatives and the privacy. Okay. You know we're going to get by that, right? Always do. Where do you see the Supercloud and sort of the distributed data fitting in with ChatGPT? Does it use that as a data source? What's the link? >> Howie: I think there are a number of use cases. One of the use cases, we talked about why we even have Supercloud because of the complexity, because of the, you know, heterogeneous nature of different clouds. In order for me as a developer, in order for me to create applications, I have so many things to worry about, right? It's a complexity. But with ChatGPT, with the AI, I don't have to worry about it, right? Those kind of details will be taken care of by, you know, the underlying layer. So we have been talking about on this show, you know, over the last, what, year or so about the Supercloud, hey, defining that, you know, API layer spanning across, you know, multiple clouds. I think that will be happening. However, for a lot of the things, that will be more hidden, right? A lot of that will be automated by the bots.
You know, we were just talking about it right before the show. One of the profound statement I heard from Adrian Cockcroft about 10 years ago was, "Hey Howie, you know, at Netflix, right? You know, IT is just one API call away." That's a profound statement I heard about a decade ago. I think next decade, right? You know, the IT is just one English language away, right? So when it's one English language away, it's no longer as important, API this, API that. You still need API just like hardware, right? You still need all of those things. That's going to be more hidden. The high level thing will be more, you know, English language or the language, right? Any language for that matter. >> Dave: And so through language, you'll tap services that live across the Supercloud, is what you're saying? >> Howie: You just tell what you want, what you desire, right? You know, the bots will help you to figure out where the complexity is, right? You know, like you said, a lot of criticism about, "Hey, ChatGPT doesn't do this, doesn't do that." But if you think about how to break things down, right? For instance, right, you know, ChatGPT doesn't have Microsoft stock price today, obviously, right? However, you can ask ChatGPT to write a program for you, retrieve the Microsoft stock price, (laughs) and then just run it, right? >> Dave: Yeah. >> So the thing to think about- >> John: It's only going to get better. It's only going to get better. >> The thing people kind of unfairly criticize ChatGPT is it doesn't do this. But can you not break down humans' task into smaller things and get complex things to be done by the ChatGPT? I think we are there already, you know- >> John: That to me is the real game changer. That's the assembly of atomic elements at the top of the stack, whether the interface is voice or some programmatic gesture based thing, you know, wave your hand or- >> Howie: One of the analogy I used in my blog was, you know, each person, each professional now is a quarterback. And we suddenly have, you know, a lot more linebacks or you know, any backs to work for you, right? For free even, right? You know, and then that's sort of, you should think about it. You are the quarterback of your day-to-day job, right? Your job is not to do everything manually yourself. >> Dave: You call the play- >> Yes. >> Dave: And they execute. Do your job. >> Yes, exactly. >> Yeah, all the players are there. All the elves are in the North Pole making the toys, Dave, as we say. But this is the thing, I want to get your point. This change is going to require a new kind of infrastructure software relationship, a new kind of operating runtime, a new kind of assembler, a new kind of loader link things. This very operating systems kind of concepts. >> Data intensive, right? How to process the data, how to, you know, process so gigantic data in parallel, right? That's actually a tough job, right? So if you think about ChatGPT, why OpenAI is ahead of the game, right? You know, Google may not want to acknowledge it, right? It's not necessarily they do, you know, not have enough data scientist, but the software engineering pieces, you know, behind it, right? To train the model, to actually do all those things in parallel, to do all those things in a cost effective way. So I think, you know, a lot of those still- >> Let me ask you a question. Let me ask you a question because we've had this conversation privately, but I want to do it while we're on stage here. 
Where are all the alpha geeks and developers and creators and entrepreneurs going to gravitate to? You know, in every wave, you see it in crypto, all the alphas went into crypto. Now I think with ChatGPT, you're going to start to see, like, "Wow, it's that moment." A lot of people are going to, you know, scrum and do startups. CTOs will invent stuff. There's a lot of invention, a lot of computer science and customer requirements to figure out. That's new. Where are the alpha entrepreneurs going to go to? What do you think they're going to gravitate to? If you could point to the next layer to enable this super environment, super app environment, Supercloud. 'Cause there's a lot to do to enable what you just said. >> Howie: Right. You know, if you think about using internet as the analogy, right? You know, in the early 90s, internet came along, browser came along. You had two kind of companies, right? One is Amazon, the other one is walmart.com. And then there were company, like maybe GE or whatnot, right? Really didn't take advantage of internet that much. I think, you know, for entrepreneurs, suddenly created the Yahoo, Amazon of the ChatGPT native era. That's what we should be all excited about. But for most of the Fortune 500 companies, your job is to surviving sort of the big revolution. So you at least need to do your walmart.com sooner than later, right? (laughs) So not be like GE, right? You know, hand waving, hey, I do a lot of the internet, but you know, when you look back last 20, 30 years, what did they do much with leveraging the- >> So you think they're going to jump in, they're going to build service companies or SaaS tech companies or Supercloud companies? >> Howie: Okay, so there are two type of opportunities from that perspective. One is, you know, the OpenAI ish kind of the companies, I think the OpenAI, the game is still open, right? You know, it's really Close AI today. (laughs) >> John: There's room for competition, you mean? >> There's room for competition, right. You know, you can still spend you know, 50, $100 million to build something interesting. You know, there are company like Cohere and so on and so on. There are a bunch of companies, I think there is that. And then there are companies who's going to leverage those sort of the new AI primitives. I think, you know, we have been talking about AI forever, but finally, finally, it's no longer just good, but also super useful. I think, you know, the time is now. >> John: And if you have the cloud behind you, what do you make the Amazon do differently? 'Cause Amazon Web Services is only going to grow with this. It's not going to get smaller. There's more horsepower to handle, there's more needs. >> Howie: Well, Microsoft already showed what's the future, right? You know, you know, yes, there is a kind of the container, you know, the serverless that will continue to grow. But the future is really not about- >> John: Microsoft's shown the future? >> Well, showing that, you know, working with OpenAI, right? >> Oh okay. >> They already said that, you know, we are going to have ChatGPT service. >> $10 billion, I think they're putting it. >> $10 billion putting, and also open up the Open API services, right? You know, I actually made a prediction that Microsoft future hinges on OpenAI. I think, you know- >> John: They believe that $10 billion bet. >> Dave: Yeah. $10 billion bet. So I want to ask you a question. It's somewhat academic, but it's relevant. 
For a number of years, it looked like having first mover advantage wasn't an advantage. PCs, spreadsheets, the browser, right? Social media, Friendster, right? Mobile. Apple wasn't first to mobile. But that's somewhat changed. The cloud, AWS was first. You could debate whether or not, but AWS okay, they have first mover advantage. Crypto, Bitcoin, first mover advantage. Do you think OpenAI will have first mover advantage? >> It certainly has its advantage today. I think it's year two. I mean, I think the game is still out there, right? You know, we're still in the first inning, early inning of the game. So I don't think that the game is over for the rest of the players, whether the big players or the OpenAI kind of the, sort of competitors. So one of the VCs actually asked me the other day, right? "Hey, how much money do I need to spend, invest, to get, you know, another shot to the OpenAI sort of the level?" You know, I did a- (laughs) >> Line up. >> That's classic VC. "How much does it cost me to replicate?" >> I'm pretty sure he asked the question to a bunch of guys, right? >> Good luck with that. (laughs) >> So we kind of did some napkin- >> What'd you come up with? (laughs) >> $100 million is the order of magnitude that I came up with, right? You know, not a billion, not 10 million, right? So 100 million. >> John: Hundreds of millions. >> Yeah, yeah, yeah. 100 million order of magnitude is what I came up with. You know, we can get into details, you know, in other sort of the time, but- >> Dave: That's actually not that much if you think about it. >> Howie: Exactly. So when he heard me articulating why is that, you know, he's thinking, right? You know, he actually, you know, asked me, "Hey, you know, there's this company. Do you happen to know this company? Can I reach out?" You know, those things. So I truly believe it's not a billion or 10 billion issue, it's more like 100. >> John: And also, your other point about referencing the internet revolution as a good comparable. The other thing there is online user population was a big driver of the growth of that. So what's the equivalent here for online user population for AI? Is it more apps, more users? I mean, we're still early on, it's first inning. >> Yeah. We're kind of the, you know- >> What's the key metric for success of this sector? Do you have a read on that? >> I think the, you know, the number of users is a good metrics, but I think it's going to be a lot of people are going to use AI services without even knowing they're using it, right? You know, I think a lot of the applications are being already built on top of OpenAI, and then they are kind of, you know, help people to do marketing, legal documents, you know, so they're already inherently OpenAI kind of the users already. So I think yeah. >> Well, Howie, we've got to wrap, but I really appreciate you coming on. I want to give you a last minute to wrap up here. In your experience, and you've seen many waves of innovation. You've even had your hands in a lot of the big waves past three inflection points. And obviously, machine learning you're doing now, you're deep end. Why is this Supercloud movement, this wave of Supercloud and the discussion of this next inflection point, why is it so important? For the folks watching, why should they be paying attention to this particular moment in time? Could you share your super clip on Supercloud? >> Howie: Right. So this is simple from my point of view. So why do you even have cloud to begin with, right? 
IT is too complex, too complex to operate, or too expensive. So there's a newer model. There is a better model, right? Let someone else operate it, there is elasticity out of it, right? That's great. Until you have multiple vendors, right? Many vendors even, you know, we're talking about kind of how to make multiple vendors look like the same, but frankly speaking, even one vendor has, you know, a thousand services. Now it's kind of getting, what Kit was talking about, cloud chaos, right? It's the evolution. You know, the history repeats itself, right? You know, you have, you know, next great things and then too many great things, and then people need to sort of abstract this out. So it's almost that you must do this. But I think how to abstract this out is something that at this time, AI is going to help a lot, right? You know, like I mentioned, right? A lot of the abstraction, you don't have to think about API anymore. I bet 10 years from now, you know, IT is one language away, not API away. So think about that world, right? So Supercloud in, in my opinion, sure, you kind of abstract things out. You have, you know, consistent layers. But who's going to do that? Is that like we all agreed upon the model, agreed upon those APIs? Not necessarily. There are certain, you know, truths in that, but there are other truths, let bots take care of, right? Whether, you know, I want some X to happen, whether it's going to be done by Azure, by AWS, by GCP, bots will figure out at a given time with certain context, with your security requirement, posture requirement. It'll think that out. >> John: That's awesome. And you know, Dave, you and I have been talking about this. We think scale is the new ratification. If you have first mover advantage, you'll see the benefit, but scale is a huge thing. OpenAI, AWS. >> Howie: Yeah. Every day, we are using OpenAI. Today, we are labeling data for them. So you know, that's a little bit of the- (laughs) >> John: Yeah. >> First mover advantage that other people don't have, right? So it's kind of scary. So I'm very sure that Google is a little bit- (laughs) >> When we do our super AI event, you're definitely going to be keynoting. (laughs) >> Howie: I think, you know, we're talking about Supercloud, you know, before long, we are going to talk about super intelligent cloud. (laughs) >> I'm super excited, Howie, about this. Thanks for coming on. Great to see you, Howie Xu. Always a great analyst for us contributing to the community. VP of Machine Learning at Zscaler, industry legend and friend of theCUBE. Thanks for coming on and sharing really, really great advice and insight into what this next wave means. This Supercloud is the next wave. "If you're not on it, you're driftwood," says Pat Gelsinger. So you're going to see a lot more discussion. We'll be back more here live in Palo Alto after this short break. >> Thank you. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
GE | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
$10 billion | QUANTITY | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
10 million | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
50 | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Howie Xu | PERSON | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
$100 million | QUANTITY | 0.99+ |
100 million | QUANTITY | 0.99+ |
Hundreds of millions | QUANTITY | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
10 billion | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
North Pole | LOCATION | 0.99+ |
next decade | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
Cohere | ORGANIZATION | 0.99+ |
first inning | QUANTITY | 0.99+ |
100 | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
Machine Learning | ORGANIZATION | 0.99+ |
Supercloud 2 | EVENT | 0.99+ |
English | OTHER | 0.98+ |
each person | QUANTITY | 0.98+ |
two type | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Zscaler | ORGANIZATION | 0.98+ |
early 90s | DATE | 0.97+ |
Howie | PERSON | 0.97+ |
two kind | QUANTITY | 0.97+ |
one vendor | QUANTITY | 0.97+ |
one language | QUANTITY | 0.97+ |
each professional | QUANTITY | 0.97+ |
HPE Compute Engineered for your Hybrid World | Containers to Deploy Higher Performance AI Applications
>> Hello, everyone. Welcome to theCUBE's coverage of "Compute Engineered for your Hybrid World," sponsored by HPE and Intel. Today we're going to discuss the new 4th Gen Intel Xeon Scalable processor's impact on containers and AI. I'm John Furrier, your host of theCUBE, and I'm joined by three experts to guide us along. We have Jordan Plum, Senior Director of AI and products for Intel, Bradley Sweeney, Big Data and AI Product Manager, Mainstream Compute Workloads at HPE, and Gary Wang, Containers Product Manager, Mainstream Compute Workloads at HPE. Welcome to the program, gentlemen. Thanks for coming on. >> Thanks John. >> Thank you for having us. >> This segment is going to be talking about containers to deploy high performance AI applications. This is a really important area right now. We're seeing a lot more AI deployed, kind of next gen AI coming. How is HPE supporting and testing and delivering containers for AI? >> Yeah, so what we're doing from HPE's perspective is we're taking these container platforms, combining them with the next generation Intel servers to fully validate the deployment of the containers. So what we're doing is we're publishing the reference architectures. We're creating these automation scripts, and also creating a monitoring and security strategy for these container platforms. So for customers to easily deploy these Kubernetes clusters and to easily secure their Kubernetes environments. >> Gary, give us a quick overview of the new ProLiant DL360 and DL380 Gen11 servers. >> Yeah, the load, for example, for container platforms, what we're seeing mostly is the DL360 and DL380 are matching really well for container use cases, especially for AI. The DL360, with the expanded DDR5 memory and the new PCIe Gen5 slots, really, really helps the speed to deploy these container environments and also to grow the data that's required to be stored within these container environments. So for example, with the DL380, if you want to deploy a data fabric, whether it's the Ezmeral data fabric or a different vendor's data fabric software, you can do so with the DL360 and DL380 with the new Intel Xeon processors. >> How does HP help customers with Kubernetes deployments? >> Yeah, like I mentioned earlier, so we do a full validation to ensure the container deployment is easy and it's fast. So we create these automation scripts and then we publish them on GitHub for customers to use and to reference. So they can take that and then they can adjust as they need to. But following the deployment guide that we provide will make the Kubernetes deployment much easier, much faster. So we also have demo videos that are published, and then a reference architecture document that's published to guide the customer step by step through the process. >> Great stuff. Thanks everyone. We're going to take a quick break here and come back. We're going to do a deep dive on the fourth gen Intel Xeon scalable processor and the impact on AI and containers. You're watching theCUBE, the leader in tech coverage. We'll be right back. (intense music) Hey, welcome back to theCUBE's continuing coverage of the "Compute Engineered for your Hybrid World" series. I'm John Furrier with the Cube, joined by Jordan Plum with Intel, Bradley Sweeney with HPE, and Gary Wang from HPE. We're going to do a drill down and do a deeper dive into the AI containers with the fourth gen Intel Xeon scalable processors. We appreciate your time coming in. Jordan, great to see you.
I got to ask you right out of the gate, what is the view right now in terms of Intel's approach to containers for AI? It's hot right now. AI is booming. You're seeing kind of next gen use cases. What's your approach to containers relative to AI? >> Thanks John, and thanks for the question. With the fourth generation Xeon scalable processor launch we have tested and validated this platform with over 400 deep learning and machine learning models and workloads. These models and workloads are publicly available in the framework repositories and they can be downloaded by anybody. Yet customers are not only looking for model validation, they're looking for model performance, and performance is usually a combination of a given throughput at a target latency. And to do that in the data center all the way to the factory floor, this is not always delivered from these generic proxy models that are publicly available in the industry. >> You know, performance is critical. We're seeing more and more developers saying, "Hey, I want to go faster on a better platform, faster all the time." No one wants to run slower stuff, that's for sure. Can you talk more about the different container approaches Intel is pursuing? >> Sure. First, our approach is to meet the customers where they are and help them build and deploy AI everywhere. Some customers just want to focus on deployment, they have more mature use cases, and they just want to download a model that works, that's high performing, and run. Others are really focused more on development and innovation. They want to build and train models from scratch or at least highly customize them. Therefore we have several container approaches to accelerate the customer's time to solution and help them meet their business SLA along their AI journey. >> So what, developers can just download these containers and just go? >> Yeah, so let me talk about the different kinds of containers we have. We start off with pre-trained containers. We'll have about 55 or more of these containers where the model is actually pre-trained, highly performant, some are optimized for low latency, others are optimized for throughput, and the customers can just download these from Intel's website or from HPE and they can just go into production right away. >> That's great. A lot of choice. People can just jump right in. That's awesome. Good, good choice for developers. They want more, faster velocity. We know that. What else does Intel provide? Can you share some thoughts there? What else do you guys provide developers? >> Yeah, so we talked about how, hey, some are just focused on deployment and maybe they have more mature use cases. Other customers really want to do some more customization or optimization. So we have another class of containers called development containers and this includes not just the kind of model itself, but it's integrated with the framework and some other capabilities and techniques like model serving. So now customers can download not only the model but an entire AI stack, and they can sort of do some optimizations, but they can also be sure that Intel has optimized that specific stack on top of the HPE servers. >> So it sounds simple to just get started using the DL model and containers. Is that it? Where, what else are customers looking for? Can you take it a little bit deeper? >> Yeah, not quite. Well, while the customer's ability to reproduce performance on their site that HPE and Intel have measured in our own labs is fantastic.
That's not actually all the customer is trying to do. They're actually building very complex end-to-end AI pipelines, okay? And a lot of data scientists are really good at building models, really good at building algorithms, but they're less experienced in building end-to-end pipelines, especially 'cause the number of use cases end-to-end are kind of infinite. So we are building end-to-end pipeline containers for use cases like media analytics and sentiment analysis, anomaly detection. Therefore a customer can download these end-to-end containers, right? They can either use them as a reference, just like, see how we built them and maybe they have some changes in their own data center where they like to use different tools, but they can just see, "Okay, this is what's possible with an end-to-end container on top of an HPE server." And in other cases they could actually, if the overlap in the use case is pretty close, they can just take our containers and go directly into production. So this provides developers, all three types of containers that I discussed provide developers an easy starting point to get them up and running quickly and make them productive. And that's a really important point. You talked a lot about performance, John. But really when we talk to data scientists, what they really want to be is productive, right? They're under pressure to change the business, to transform the business, and containers are a great way to get started fast. >> People take productivity, you know, seriously now. Developer productivity is the hottest trend, and obviously they want performance. Totally nailed it. Where can customers get these containers? >> Right. Great, thank you John. Our pre-trained model containers, our development containers, and our end-to-end containers are available at intel.com in the developer catalog. But we also post these on many third party marketplaces that other people like to pull containers from. And they're frequently updated. >> Love the developer productivity angle. Great stuff. We've still got more to discuss with Jordan, Bradley, and Gary. We're going to take a short break here. You're watching theCUBE, the leader in high tech coverage. We'll be right back. (intense music) Welcome back to theCUBE's coverage of "Compute Engineered for your Hybrid World." I'm John Furrier with theCUBE and we'll be discussing and wrapping up our discussion on containers to deploy high performance AI. This is a great segment on really a lot of demand for AI and the applications involved. And we got the fourth gen Intel Xeon scalable processors with HPE Gen11 servers. Bradley, what is the top AI use case that Gen11 HPE ProLiant servers are optimized for? >> Yeah, thanks John. I would have to say intelligent video analytics. It's a use case that's applied across industries and verticals. For example, a smart hospital solution that we conducted with Nvidia and Artisight. In a previous customer success, we've seen 5% more hospital procedures and a 16 times return on investment using operating room coordination. With that IVA, so with the Gen11 DL380 that we provide, using the Intel 4th Gen Xeon processors, it can really support workloads at scale. Whether that is a smart hospital solution, whether that's manufacturing at the edge, security camera integration, we can do it all with Intel.
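To connect the container story above to something runnable, here is a hedged sketch of the "pull a pre-built model container and call it from an application" pattern Jordan describes. The image name, port, and request schema are placeholders invented for illustration, not actual Intel or HPE container names or APIs.

```python
# Illustrative only. Assume a pre-built model-serving container has been started
# locally or on a Kubernetes cluster, e.g. with something like:
#   docker run -p 8080:8080 <registry>/<pretrained-model-image>
# The URL, port, and JSON schema below are placeholders, not real artifact names.
import requests

SERVING_URL = "http://localhost:8080/v1/predict"   # assumed endpoint exposed by the container

payload = {"inputs": ["The operating room schedule looks clear for tomorrow morning."]}

resp = requests.post(SERVING_URL, json=payload, timeout=30)
resp.raise_for_status()

# The response shape depends on the packaged model; for a text classifier it might
# be a label and a score per input.
print(resp.json())
```

The end-to-end pipeline containers discussed earlier wrap several such services (ingest, preprocess, model, post-process) behind one deployment, but each stage is still reached the same way: a service endpoint the application calls.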
>> You know what's really great about AI right now? You're starting to see people figure out kind of where the value is. It does a lot of the heavy lifting on setting things up to make humans more productive. This has been clearly now kind of going next level. You're seeing it all in the media now and all these new tools coming out. How does HPE make it easier for customers to manage their AI workloads? I imagine there's going to be a surge in demand. How are you guys making it easier to manage their AI workloads? >> Well, I would say the biggest way we do this is through GreenLake, which is our IT as a service model. So customers deploying AI workloads can get fully-managed services to optimize not only their operations but also their spending and the cost that they're putting towards it. In addition to that we have our Gen11 ProLiant servers equipped with iLO 6 technology. What this does is allows customers to securely manage their complete server environment from anywhere in the world remotely. >> Any last thoughts or message on the overall fourth gen Intel Xeon based ProLiant Gen11 servers? How they will improve workload performance? >> You know, with this generation, obviously the performance is only getting ramped up as the needs and requirements for customers grow. We partner with Intel to support that. >> Jordan, gimme the last word on the containers' effect on AI applications. Your thoughts as we close out. >> Yeah, great. I think it's important to remember that containers themselves don't deliver performance, right? The AI stack is a very complex set of software that's compiled together, and what we're doing together is to make it easier for customers to get access to that software, to make sure it all works well together and that it can be easily installed and run on sort of a cloud native infrastructure that's hosted by HPE ProLiant servers. Hence the title of this talk, How to use Containers to Deploy High Performance AI Applications. Thank you. >> Gentlemen, thank you for your time on the Compute Engineered for your Hybrid World series, sponsored by HPE and Intel. Again, I love this segment for AI applications, Containers to Deploy Higher Performance. This is a great topic. Thanks for your time. >> Thank you. >> Thanks John. >> Okay, I'm John. We'll be back with more coverage. See you soon. (soft music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jordan Plum | PERSON | 0.99+ |
Gary | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Gary Wang | PERSON | 0.99+ |
Bradley | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
16 times | QUANTITY | 0.99+ |
5% | QUANTITY | 0.99+ |
Jordan | PERSON | 0.99+ |
Artisight | ORGANIZATION | 0.99+ |
DL 360 | COMMERCIAL_ITEM | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
three experts | QUANTITY | 0.99+ |
DL 380 | COMMERCIAL_ITEM | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Compute Engineered for your Hybrid World | TITLE | 0.98+ |
First | QUANTITY | 0.98+ |
Bradley Sweeney | PERSON | 0.98+ |
over 400 deep learning | QUANTITY | 0.97+ |
intel | ORGANIZATION | 0.97+ |
theCUBE | ORGANIZATION | 0.96+ |
Gen 11 DL 380 | COMMERCIAL_ITEM | 0.95+ |
Xeon | COMMERCIAL_ITEM | 0.95+ |
Today | DATE | 0.95+ |
fourth gen | QUANTITY | 0.92+ |
GitHub | ORGANIZATION | 0.91+ |
380 Gen 11 | COMMERCIAL_ITEM | 0.9+ |
about 55 or more | QUANTITY | 0.89+ |
four gen Xeon | COMMERCIAL_ITEM | 0.88+ |
Big Data | ORGANIZATION | 0.88+ |
Gen 11 | COMMERCIAL_ITEM | 0.87+ |
five slots | QUANTITY | 0.86+ |
Proliant | COMMERCIAL_ITEM | 0.84+ |
GreenLake | ORGANIZATION | 0.75+ |
Compute Engineered for your Hybrid | TITLE | 0.7+ |
Ezmeral | ORGANIZATION | 0.68+ |
Victoria Avseeva & Tom Leyden, Kasten by Veeam | KubeCon + CloudNativeCon NA 2022
>> Hello everyone, and welcome back to theCUBE's live coverage of KubeCon here in Motor City, Michigan. My name is Savannah Peterson and I'm delighted to be joined for this segment by my co-host Lisa Martin. Lisa, how you doing? Good. >> We are, we've had such great energy for three days, especially on a Friday. Yeah, that's challenging to do for a tech conference: go all week, push through to the end of day Friday. But we're here, we're excited. We have a great conversation coming up. Absolutely. A couple of our alumni are back with us. Love it. We have a great conversation about learning. >> There's been a lot of learning this week, and I cannot wait to hear what these folks have to say. Please welcome Tom and Victoria from Kasten by Veeam. You guys are swagged up very well. You've got the fanny pack, you've got the vest. You were even nice enough to give me a Carhartt beanie, Carhartt being a Michigan company; we've had so much love for Detroit and locally sourced swag here. I've never seen that before. How has the week been for you? >> The week has been amazing, as you can tell by my voice, probably. >> So the mic helps. Don't worry, you're good. >> Yeah, so we've been talking to tons and tons of people, obviously some vendors, partners of ours. That was great, seeing all those people face to face again, because in the past years we haven't really been able to meet up with them. But then of course also a lot of end users, and most importantly, we've met a lot of people that wanted to learn Kubernetes, that came here to learn Kubernetes, and we've been able to help them. So I feel very satisfied about that. >> When we were at VMware Explore, Tom, you were on the program with us; I guess that was a couple of months ago. I'm losing track, so many events are coming up. >> Time is a loop. >> It's okay, it really is. You teased some new things coming from a learning perspective. What is going on there? >> All right. So I'm happy that you link back to VMware Explore there, because yeah, I was so excited to talk about it, but I couldn't, and it was frustrating. I knew it was coming up, and I knew it was gonna be awesome. So just before KubeCon, we launched Kube Campus, which is the rebrand of learning.kasten.io. And Victoria is the great mind behind all of this, but here's the gist of it, and then I'll let Victoria talk a little bit. The gist of Kube Campus is that this all started as a small webpage in our own domain to bring some hands-on labs online and let people use them. But we saw so many people who were interested in those labs that we thought, okay, we have to make this its own community, and this should not be a branded, company-branded community. >> This needs to be its own thing, because people like to be in a community environment without the brand of the company being there. So we made it completely independent. It's Kube Campus, it's still a hundred percent free, and it's still the... That's right. ...only platform where you actually learn Kubernetes with hands-on labs. We have 14 labs today, we've been creating one per month, and we have a lot of people on there. The most exciting part this week is that we had our first learning day, but before we go there, I suggest we let Victoria talk a little bit about the user experience of Kube Campus. >> Oh, absolutely. So Kube Campus is, as Tom mentioned, a one-year-old platform, and we rebranded it specifically to welcome more people and, you know, embrace the Kubernetes space as a whole on its one-year anniversary.
We have over 11,000 students, and they've been taking labs. Wow. >> Yes, over 7,000 labs taken. And per user, if you do the approximation, it's over three labs per user, 3.29, and I believe that's growing if you look at the numbers. So it's a huge success, and it's very easy to use overall. If you look at it, it's the number one free Kubernetes learning platform. So for your Kubernetes journey, if you start from scratch, don't be afraid: we've got it all, we've got your back. >> It's so important, and I'm sure most of our audience knows this, but the number one challenge, according to Gartner, according to everyone, with Kubernetes is the complexity, especially when you're getting started. I think it's incredibly awesome that you've decided to do this. 11,000 students; I just want to settle on that. I mean, in your first year, that's really impressive. How did this become, and I'm sure this was a conversation you two probably had, how did this become a priority for Kasten by Veeam? >> I have to go back for that, to the last virtual-only KubeCon, where we were lucky enough to have set up a campaign: we actually had an artist doing caricatures in a Zoom room, and it gave us an opportunity to actually talk to people, because the challenge back in those days was that with everything virtual, it's very hard to talk to people. In every single conversation we had, when we asked people why they were at KubeCon virtual, the answer was to learn Kubernetes. Every single conversation. Yeah. And so that is one data point. The other data point is that we had one lab on how to use our software, and that was extremely popular. So as a team, we decided we should make more labs, and not just about our product, but also about Kubernetes. So on that initial page that I talked about that we built, we had three labs at launch. >> One was to learn how to install Kubernetes, one was to build a first application on Kubernetes, and then a third one was to learn how to back up and restore your application. So there was still a little bit of promoting our technology in there, but pretty soon we decided, okay, this has to become even more. So we added storage, we added security, and a lot more labs. So today, 14 labs, and we're still adding one every month. The next step for the labs is to involve other partners and have them bring their technologies into the labs, so that our user base can learn more about Kubernetes-related technologies, hopefully with links to open source or free software tools. And it's going to continue to be a learning experience for Kubernetes. >> I love how this seems to have been born out of the pandemic, in terms of the inability to connect with customers and end users to really understand what their challenges are and how we can help them best. But you saw the demand organically and built this, and then in the first year, not only 11,000, as Victoria mentioned, 11,000 users, but you've almost quadrupled the number of labs that you have on the platform in such a short time period. But you did a hands-on lab here, which I know was a major success. Talk to us about that, and what surprised you about the appetite to learn that's here. >> Yeah. So actually, I'm glad that you relate this back to the pandemic, because yes, it was all online because it was still the tail end of the pandemic, but then for this event we were like, okay, it's time to do this in person. This is the next step, right?
So we organized our first learning day as a co-located event. We were hoping to get 60 people together in a room. We did two labs, a rookie and a pro, so we said two times 30 people. That's our goal, because it's really competitive here with the co-located events. It's difficult. >> Bringing people in, there's lots going on. >> And why don't I let Victoria talk about the success of that learning day, because a big part of that was also her work. >> You know, our main goal is to meet expectations and actually see the challenges of our end users. It also goes back to when we started doing research: we saw the pain points, and yes, that's absolutely reflected in how we deal with this and what we see. And people are very appreciative, and they love the platform because it's not only prerequisites but also hands-on lab practice. And it's free, and it's applied, which is great. So we thought about the user experience and the user flow, and you know a product is successful when you see the result. And that's where we... can I say the numbers? >> So our expectation was 60 people. >> You're kind of... I feel like the suspense is killing me. How many people came? >> We had over 350 people in our room. Whoa. >> Wow. Wow. >> And small disclaimer, we had a little bit of a technical issue in the beginning because of the success; there was a wireless problem in the hotel, among other things. Oh geez. So we were getting a little bit nervous, because we were delayed 20 minutes. Nobody left. I was standing at the door while people were solving the issues, thinking, okay, now people are gonna walk out. Right. Nobody left. >> Kind of gives me goosebumps hearing that. >> We had a little reception afterwards, and I talked to people, apologizing for the disruption we had, and they were like, no, we are so happy that you're doing this, this was such a great experience. Kasten also threw a party later this week, and at the party we had people come up to us like, I was at your learning day and this was so good, thank you so much for doing this, I'm gonna take the rest of the classes online now. They love it. Really? >> Yeah. We had our instructors leading the program as well, so if people had any questions, they were addressed immediately. So it was an amazing event, actually. I'm really grateful that people came; it's really appreciated. >> But now your boss knows how you can blow out metrics, though. >> Yeah, yeah, yeah. >> Gonna get a raise, Victoria. >> Very good point. >> It's a very good point. >> I can tell. It's actually very tough, for me personally, to analyze where the success came from, because first of all, the team did an amazing job at setting the whole thing up. There was food and drinks for everybody, and it was a really nice location in a hotel nearby. We made it a co-located event, and we saw a lot of people register through the KubeCon registration website. But we've done co-located events before, and you typically see a very high no-show rate; that was not the case this time, the no-show rate was actually very low. Obviously we did our own campaign to our own database, right. But it's hard to say; we have a lot of people all over the world, and how many of them are actually going to be in Detroit? Yeah. One element that also helped, and I'm actually very proud of this, is that one of the people on our team, Thomas Keenan, reached out to the local universities. Yes. And he invited students to come to learning day as well.
It wasn't completely full of students, but there was a good chunk of them. So there were a lot of people from here, but it was a good mix. And that way, I mean, we're giving back a little bit to the universities and the students. >> Absolutely. So much. >> I need to... >> There's a lot of love for Detroit this week. I'm all about it. >> It's amazing. But from a STEM perspective, that's huge. We're reaching down into that community and really giving them the opportunity to learn. >> Well, and what a gateway for Kasten. I mean, I can easily say, you are the number... we haven't really talked about Kasten at all, but before we do, what are those pins in front of you? >> So these are physical pins that we gave away for the different programs. People who took labs at, for example, the rookie level would get this pin; it's a rookie pin. >> Yes. I'm gonna hold this up just so they can do a little close shot on it if you want. Yeah. >> And this one is for the next-level program. So we actually have a program for beginners, intermediate, and then pro, so three different levels. And this one is for Helmsman; it's actually from before. >> Now, a Helmsman is someone who has taken the first three labs, right? >> Yes, it is, but we actually had that one already before. So we built two new labs for this event, and it was very, very great, you know, to have something absolutely new ready before this event. So we launched the whole website, the whole platform, with new labs, additional labs, and... >> Right before an event, honestly. Yeah. >> Yeah. We also had such... >> Your expression just said it all. Exactly. >> There's a vacation in your future. >> I hope so. >> We've had a couple of rough weeks. Yeah. This is part of it. Yeah. So, but about those labs. In the classroom we had two, right? We had the rookie and the pro, and like I said, we wanted an audience for both. Most people stayed for both. And there were people at the venue one hour before we started because they did not want to miss it. Right. And what that showed me is that even though KubeCon has been around for a long time, and people have been coming back to it, there is a huge audience that considers themselves still very early in their Kubernetes journey and is not too proud to go to a rookie class for Kubernetes. So for us, that was like, okay, we're doing the right thing, because with the website as well, more rookie users will keep coming. And the big goal for us is just to accelerate their Kubernetes journey. Right. There are a lot of platforms out there. One platform I like as well is called TechWorld with Nana; she has a lot of instructional content. >> Oh, she's a wonderful YouTuber. >> She is, yeah, her following is amazing. But what we add to this is the hands-on part, right? And there are a lot of other resources as well, where you have papers and books and everything. We try to add those too, but we feel that you can only learn it by doing it, and that is what we offer. >> Absolutely. Totally. Something like Kubernetes, and it sounds like you're demystifying it. You talked about one of the biggest things everyone talks about with respect to Kubernetes adoption, and one of the barriers is the complexity.
But it sounds to me like, as we talked about, the demand is there for the hands-on labs and kubecampus.io, and also the fact that people were waiting an hour early shows they're recognizing it's okay to raise a hand and say, I don't really understand this. Yeah. In fact, another thing that I heard, speaking of the rookies, is that about 60% of the attendees at this year's KubeCon are... >> Yeah, we heard that. >> ...brand new. >> Yeah. So maybe that explains it: a lot of those rookies showed up saying... >> Well, so even... >> ...these guys are gonna help us really demystify and start learning this at a pace that works for me as an individual. >> There's some crazy macro data to support this, just to echo it. So 85% of enterprise companies are about to start making this transition to leveraging Kubernetes. That means there's only 15% of a very healthy, substantial market that has adopted the technology at scale. You are teaching that group of people. Let's talk about Kasten a little bit. Number one in Kubernetes backup, 900% growth recently. How are you managing that? What's next for you guys? >> Yeah, so growth last year was amazing, and this year we're seeing very good numbers as well. I think part of the explanation is that people are going into production; you cannot sell backup to a company that is not in production with their applications, right? So what we are starting to see is people finally going into production with their Kubernetes applications and realizing, we have to back this up. The other trend we're seeing is that, I think as recently as LA last year, we were having a lot of stateless-versus-stateful conversations. Remember, containers were created for stateless applications. That's no longer the case. Absolutely. Now the acceptance is there; we're not having those "oh, but we're stateless" conversations anymore, because everybody runs at least a database with some user data or application data, whatever. So all Kubernetes applications need to be backed up. Absolutely. And we're the number one product for that. >> And you guys just recently had a new release. Yes. Talk to us a little bit about that before we wrap: what's new in the platform, and also what gives Kasten by Veeam that competitive advantage in this new release? >> The competitive advantage is really simple. Our solution was built for Kubernetes, with Kubernetes. There are other products... >> Talk about dogfooding. Yeah. Yeah. >> That's great. Exactly. Yeah. And you know what, one of our successes at the show is also because we're using Kubernetes to build our application. People love to come to our booth to talk to our engineers, who we always bring to the show because they have so much experience to share. That also helps us immensely, by the way, to build those labs, right? You need to have the experience. So the big competitive advantage is really that we're Kubernetes native. And then to talk about 5.5; I was going, what was the other part of the question? So yeah, we had 5.5 launch also during the show, so it was really a busy week. The big focus for 5.5 was simplicity, to make it even easier to use our product; we really want people to find it easy. We were using new Helm charts and things like that. The second part of the launch was to do even more partner integrations.
Because if you look at this cloud native space, and you can attest to that with Kube Campus, when you build an application you need so many different tools, right? And we are trying to integrate with all of those tools in the easiest and most efficient way, so that it becomes easy for our customers to use our technology in their Kubernetes stack. >> I love it. Tom, Victoria, one final question for you before we wrap up. You mentioned that you have a fantastic team; I can tell just from the energy you two have that that's probably the truth. You also mentioned that you bring the party everywhere you go. Where are we all going after this? Where's the party tonight? Yeah. >> Well, let's first go to a ballgame tonight. >> The party's on the court. I love it. Go Pistons. >> And then we'll end up somewhere downtown in a good club, I guess. >> Yeah. Well, we'll see how the showdown with the Hawks goes. I hope you guys make it to the game. Tom, Victoria, thank you so much for being here. We're excited about what you're doing. Lisa, always a joy sharing the stage with you, my love. And to all of you who are watching, thank you so much for tuning into theCUBE. We are wrapping up here with one segment left in Detroit, Michigan. My name's Savannah Peterson. Thanks for being here.
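Since the conversation keeps returning to backing up stateful Kubernetes applications, here is a minimal sketch of one building block such a backup workflow relies on: requesting a CSI VolumeSnapshot of a PersistentVolumeClaim with the official Kubernetes Python client. This is the generic Kubernetes snapshot mechanism, not Kasten K10's own API, and the namespace, PVC, and snapshot class names are hypothetical.

```python
# Minimal sketch: snapshot a PVC using the CSI snapshot API -- one primitive
# behind Kubernetes-native backup. All names below are hypothetical.
from kubernetes import client, config

config.load_kube_config()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "orders-db-snap-001", "namespace": "demo-app"},
    "spec": {
        "volumeSnapshotClassName": "csi-hostpath-snapclass",  # hypothetical class
        "source": {"persistentVolumeClaimName": "orders-db-data"},
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="demo-app",
    plural="volumesnapshots",
    body=snapshot,
)
print("VolumeSnapshot requested; a full backup tool would also capture the "
      "application's manifests and copy snapshot data off-cluster.")
```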
SUMMARY :
Savannah Peterson and Lisa Martin close out theCUBE's KubeCon + CloudNativeCon NA 2022 coverage in Detroit with Tom Leyden and Victoria Avseeva of Kasten by Veeam. They discuss Kube Campus, Kasten's free, community-oriented Kubernetes learning platform, which has grown to 14 hands-on labs and more than 11,000 students in its first year, and a first in-person learning day that drew over 350 attendees, including local university students. The conversation also covers the rise of stateful Kubernetes workloads, the need to back them up, and the Kasten 5.5 release focused on simplicity and partner integrations.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Thomas Keenan | PERSON | 0.99+ |
Tom Leyden | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
14 labs | QUANTITY | 0.99+ |
Detroit | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
Carhartt | ORGANIZATION | 0.99+ |
LA | LOCATION | 0.99+ |
20 minutes | QUANTITY | 0.99+ |
85% | QUANTITY | 0.99+ |
Tom Victoria | PERSON | 0.99+ |
900% | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
Victoria | PERSON | 0.99+ |
last year | DATE | 0.99+ |
60 people | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
two labs | QUANTITY | 0.99+ |
60 | QUANTITY | 0.99+ |
This year | DATE | 0.99+ |
Detroit, Michigan | LOCATION | 0.99+ |
Victoria Avseeva | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
Michigan | LOCATION | 0.99+ |
11,000 users | QUANTITY | 0.99+ |
Motor City, Michigan | LOCATION | 0.99+ |
three labs | QUANTITY | 0.99+ |
11,000 students | QUANTITY | 0.99+ |
one lab | QUANTITY | 0.99+ |
over 11,000 students | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
first year | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.99+ |
first application | QUANTITY | 0.99+ |
30 people | QUANTITY | 0.99+ |
11,000 | QUANTITY | 0.98+ |
three days | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one final question | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Cube | ORGANIZATION | 0.98+ |
first learning day | QUANTITY | 0.98+ |
15% | QUANTITY | 0.98+ |
pandemic | EVENT | 0.98+ |
first | QUANTITY | 0.98+ |
over 350 people | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
third one | QUANTITY | 0.98+ |
tonight | DATE | 0.97+ |
one data point | QUANTITY | 0.97+ |
Over 7,000 | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
two new labs | QUANTITY | 0.97+ |
later this week | DATE | 0.97+ |
One platform | QUANTITY | 0.97+ |
KubeCon | EVENT | 0.96+ |
One element | QUANTITY | 0.96+ |
Helmsman | PERSON | 0.96+ |
Cube Campus | ORGANIZATION | 0.95+ |
Kasten | PERSON | 0.95+ |
Kubernetes | ORGANIZATION | 0.95+ |
about 60% | QUANTITY | 0.95+ |
hundred percent | QUANTITY | 0.95+ |
Oracle Announces MySQL HeatWave on AWS
>> Oracle continues to enhance MySQL HeatWave at a very rapid pace. The company is now in its fourth major release since the original announcement in December 2020. One of the main criticisms of MySQL HeatWave is that it only runs on OCI, Oracle Cloud Infrastructure, and acts as a lock-in to Oracle's cloud. Oracle recently announced that HeatWave is now going to be available in the AWS cloud, and it announced its intent to bring MySQL HeatWave to Azure. So MySQL HeatWave on AWS is a significant TAM expansion move for Oracle because of the momentum AWS's cloud continues to show. And evidently the HeatWave engineering team has taken the development effort from OCI and is bringing it to AWS with a number of enhancements that we're gonna dig into today. Joining me again is the Senior Vice President of MySQL HeatWave at Oracle, back on a CUBE conversation to discuss the latest HeatWave news, and we're eager to hear any benchmarks relative to AWS or any others. Nipun has been leading the HeatWave engineering team for over 10 years and has over 185 patents in database technology. Welcome back to the show, and good to see you. >> Thank you. Very happy to be back. >> Now, for those who might not have kept up with the news, to kick things off, give us an overview of MySQL HeatWave and its evolution so far. >> So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. Traditionally, MySQL has been designed and optimized for transaction processing. So when customers of MySQL had to run analytics, or when they had to run machine learning, they would extract the data out of MySQL into some other database for doing analytics processing or machine learning processing. MySQL HeatWave provides all these capabilities built into a single database service, which is MySQL HeatWave. So customers of MySQL don't need to move the data out; with the same database, they can run transaction processing, analytics, mixed workloads, and machine learning, all with very good performance and very good price performance. Furthermore, one of the design points of HeatWave is a scale-out architecture, so the system continues to scale and perform very well even when customers have very large data sizes. >> So we've seen some interesting moves by Oracle lately. The collaboration with Azure, we've covered that pretty extensively. What was the impetus here for bringing MySQL HeatWave onto the AWS cloud? What were the drivers that you considered? >> So one of the observations is that a very large percentage of users of MySQL HeatWave are AWS users who are migrating off Aurora; already we see that a good percentage of MySQL HeatWave customers are migrating from AWS. However, there are some AWS customers who are still not able to migrate to MySQL HeatWave on OCI. And the reason is the exorbitant cost of egress: in order to migrate the workload from AWS to OCI, the egress charges are very high fees, which become prohibitive for the customer. The second example we have seen is that the latency of accessing a database which is outside of AWS is very high.
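To ground the "no ETL, one database" point, here is a minimal sketch of what using HeatWave from Python might look like: load an existing InnoDB table into the HeatWave cluster and run an analytic query against it. The endpoint, credentials, schema, and table are hypothetical placeholders, and the DDL statements follow the MySQL HeatWave documentation as best recalled here, so treat them as assumptions to verify against current docs.

```python
# Minimal sketch: offload an existing MySQL table to HeatWave and query it.
# Host, credentials, schema, and table names are hypothetical placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="heatwave-instance.example.com",  # hypothetical endpoint
    user="admin",
    password="***",
    database="sales",
)
cur = conn.cursor()

# Mark the table for the HeatWave (RAPID) secondary engine and load it.
# These statements follow Oracle's documented pattern; verify before use.
cur.execute("ALTER TABLE orders SECONDARY_ENGINE = RAPID")
cur.execute("ALTER TABLE orders SECONDARY_LOAD")

# The same SQL now runs as an analytic query accelerated by HeatWave,
# with no ETL into a separate warehouse.
cur.execute("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM orders
    WHERE order_date >= '2022-01-01'
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""")
for customer_id, total_spend in cur.fetchall():
    print(customer_id, total_spend)

cur.close()
conn.close()
```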
So there's a class of customers who would like to get the benefits of MySQL HeatWave but were unable to do so, and with this support of MySQL HeatWave inside of AWS, these customers can now get all of the benefits of MySQL HeatWave without having to pay the high egress fees and without having to suffer the poor latency that comes from the AWS architecture. >> Okay, so you're basically meeting the customers where they are. So was this a straightforward lift and shift from Oracle Cloud Infrastructure to AWS? >> No, it is not, because one of the design goals we have with MySQL HeatWave is that we want to provide our customers with the best price performance regardless of the cloud. So when we decided to offer MySQL HeatWave on AWS, we optimized MySQL HeatWave for it as well. One of the things to point out is that this is a service where the data plane, control plane, and console are natively running on AWS, and the benefit of doing so is that we can optimize MySQL HeatWave for the AWS architecture. In addition to that, we have also announced a bunch of new capabilities as a part of the service, which will also be available to MySQL HeatWave customers on OCI, but we just announced them and we're offering them as a part of the MySQL HeatWave offering on AWS. >> So I just want to make sure I understand: it's not like you just wrapped your stack in a container and stuck it into AWS to be hosted. You're saying you're actually taking advantage of the capabilities of the AWS cloud natively? And I think you've made some other enhancements as well that you're alluding to. Can you maybe elucidate on those? Sure. >> So first, we have taken the MySQL HeatWave code and optimized it for the AWS infrastructure, with its compute and network, and as a result, customers get very good performance and price performance with MySQL HeatWave in AWS. That's the first thing, performance. The second thing is that we have designed a new interactive console for the service, which means that customers can now provision their instances with the console. But in addition, they can also manage their schemas, they can query data directly from the console, autopilot is integrated into the console, and we have introduced performance monitoring, so there are a lot of capabilities we have introduced as a part of the new console. The third thing is that we have added a bunch of new security features, exposing some of the security features which were part of MySQL Enterprise Edition as a part of the service, which gives customers a choice of using these features to build more secure applications. And finally, we have extended MySQL Autopilot for a number of OLTP use cases. In the past, MySQL Autopilot had a lot of capabilities for analytics, and now we have augmented it to offer capabilities for OLTP workloads as well. >> But there was something in your press release called auto thread pooling, which says it provides higher and sustained throughput at high concurrency by determining the optimal number of transactions which should be executed. What is that all about, the auto thread pooling? It seems pretty interesting. How does it affect performance? Can you help us understand that? >> Yes, and this is one of the capabilities I was alluding to, which we have added to MySQL Autopilot for transaction processing. So here is the basic idea.
If you have a system where there's a large number of OLTP transactions coming into it at high degrees of concurrency, in many of the existing MySQL-based systems it can lead to a state where there are a few transactions executing but a bunch of them get blocked. With autopilot thread pooling, what we basically do is workload-aware admission control, and what this does is figure out the right scheduling for all of these transactions, so that either the transactions are executing, or as soon as something frees up they can start executing, so there's no transaction which is blocked. The advantage to the customer of this capability is twofold. One, you get significantly better throughput compared to a service like Aurora at high levels of concurrency. So at high concurrency, for instance, because of this capability, auto thread pooling offers up to 10 times higher throughput compared to Aurora; that's the first benefit, better throughput. The second advantage is that the throughput of the system never drops, even at high levels of concurrency. Whereas in the case of Aurora, the throughput goes up, but then at high concurrency, let's say starting at a level of 500 or so, depending upon the underlying shape they're using, the throughput just drops, whereas with MySQL HeatWave the throughput never drops. Now, the ramification for the customer is that if the throughput is not going to drop, the user can start off with a small shape, get the performance, and be assured that even if the workload increases, they will never get performance which is worse than what they're getting at lower levels of concurrency. So this leads to customers provisioning a shape which is just right for them, and if they need to, they can go with a larger shape, but they don't, you know, overpay. So those are the two benefits: better performance, and sustained performance regardless of the level of concurrency. >> So how do we quantify that? I know you've got some benchmarks. How can you share comparisons with other cloud databases? I'm especially interested in Amazon's own databases, which are obviously very popular. And are you publishing those again on GitHub, as you have done in the past? Take us through the benchmarks. >> Sure. So benchmarks are important because they give customers a sense of what performance and what price performance to expect. We have run a number of benchmarks, and yes, all of them are available on GitHub for customers to take a look at. We have performance results on all three classes of workloads: OLTP, analytics, and machine learning. So let's start with OLTP. For OLTP, primarily because of the auto thread pooling feature, we show that for the TPC-C 10 gigabyte dataset at high levels of concurrency, HeatWave offers up to 10 times better throughput, and this performance is sustained, whereas in the case of Aurora the performance really drops. So that's the first thing: on the 10 gigabyte TPC-C benchmark, you can come and see that the throughput is 10 times better than Aurora. For analytics, we have done a comparison of MySQL HeatWave in AWS with Redshift, Snowflake, and Google BigQuery, and we find that the price performance of MySQL HeatWave compared to Redshift is seven times better. So MySQL HeatWave in AWS provides seven times better price performance than Redshift. That's a very interesting result to us.
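The auto thread pooling idea, admitting only as many transactions as the system can usefully execute, queuing the rest, and keeping throughput flat as client concurrency climbs, can be illustrated with a small, purely conceptual sketch. This is not Oracle's implementation or API; it simply models workload-aware admission control as a bounded pool of in-flight transactions, with all numbers chosen arbitrarily.

```python
# Conceptual sketch of admission control: cap the number of transactions
# executing at once and queue the rest, so measured throughput stays flat
# as the number of concurrent clients grows. Not Oracle's implementation.
import threading
import time

MAX_ACTIVE = 32                      # hypothetical "right" level of parallelism
admission = threading.BoundedSemaphore(MAX_ACTIVE)
completed = 0
count_lock = threading.Lock()

def run_transaction():
    """Simulate one OLTP transaction (a couple of milliseconds of work)."""
    global completed
    with admission:                  # blocks until an execution slot frees up
        time.sleep(0.002)            # stand-in for the actual transaction
        with count_lock:
            completed += 1

def drive(concurrency: int, duration: float = 2.0) -> float:
    """Fire `concurrency` client threads for `duration` seconds; return TPS."""
    global completed
    completed = 0
    stop = time.time() + duration

    def client():
        while time.time() < stop:
            run_transaction()

    threads = [threading.Thread(target=client) for _ in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return completed / duration

if __name__ == "__main__":
    # Throughput should plateau rather than collapse as clients increase,
    # because excess clients wait at admission instead of thrashing.
    for clients in (16, 64, 256, 1024):
        print(f"{clients:>5} clients -> {drive(clients):8.0f} tx/sec")
```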
Which means that customers of Redshift are really going to take the service seriously, because they're gonna get seven times better price performance, and this is all running in AWS. So compared... >> Okay, carry on. >> And then I was gonna say, compared to Snowflake, in AWS we offer 10 times better price performance, and compared to Google BigQuery, 12 times better price performance. And this is based on a four terabyte TPC-H workload. Results are available on GitHub. And then the third category is machine learning, and for machine learning training, the performance of MySQL HeatWave is 25 times faster compared to Redshift ML. So for all three workloads we have benchmark results, and all of the scripts are available on GitHub. >> Okay, so you're comparing MySQL HeatWave on AWS to Redshift and Snowflake on AWS, and you're comparing MySQL HeatWave on AWS to BigQuery, obviously running on Google. You know, one of the things Oracle has done in the past when presenting price performance, and I've always tried to call fouls on it, is, like, doubling the price for running the Oracle database, not HeatWave, but Oracle Database, on AWS, and then showing how it's so much cheaper on Oracle, and we'd be like, okay, come on. But they're not doing that here. You're basically taking MySQL HeatWave on AWS, and I presume you're using the same pricing for whatever EC2 and whatever else you're using, storage, reserved instances. That's apples to apples on AWS, and you have to obviously do some kind of mapping for Google, for BigQuery. Can you just verify that for me? >> We are being more than fair, on two dimensions. The first thing is, when I'm talking about the price performance for analytics with MySQL HeatWave, the cost I'm quoting for MySQL HeatWave is the cost of running transaction processing, analytics, and machine learning. So it's a fully loaded cost for the case of MySQL HeatWave. Whereas when I'm talking about Redshift, when I'm talking about Snowflake, I'm just talking about the cost of those databases for running analytics only; it's not including the source database, which may be Aurora or some other database, right? So that's the first aspect: for HeatWave, it's the cost for running all three kinds of workloads, whereas for the competition, it's only for running analytics. The second thing is that for those services, whether it's Redshift or Snowflake, we're using the one-year, fully paid up-front cost, right? That's what most of the customers would pay: many customers will sign a one-year contract and pay all the costs ahead of time because they get a discount. So we're using that price, and in the case of Snowflake, the cost we're using is their Standard Edition price, not the Enterprise Edition price. So yes, we are being more than fair to the competition. >> Yeah, I think that's an important point. I saw an analysis by Marc Staimer on Wikibon, where he was doing the TCO comparisons, and I mean, if you have to use two separate databases and two separate licenses, and you have to do ETL and all the labor associated with that, that's a big deal, and you're not even including that aspect in your comparison. So that's pretty impressive. To what do you attribute that? You know, given that, unlike OCI, within the AWS cloud you don't have as much control over the underlying hardware.
>> So look, hardware is one aspect, okay, but there are three things which give us this advantage. The first thing is that we have designed HeatWave for a scale-out architecture, so we came up with new algorithms; one of the design points for HeatWave is a massively partitioned architecture, which leads to a very high degree of parallelism. That's how HeatWave is built, so that's the first part. The second thing is that although we don't have control over the hardware, the second design point for HeatWave is that it is optimized for commodity cloud and commodity infrastructure. So we analyze what we get: what compute we get, how much network bandwidth we get, how much object store bandwidth we get in AWS, and we have tuned HeatWave for that. That's the second point. And the third thing is MySQL Autopilot, which provides machine learning based automation. What it does is that, as the user's workload is running, it learns from it and improves various parameters in the system, so the system keeps getting better as it learns from more and more queries. And this is the third thing, as a result of which we get a significant edge over the competition. >> Interesting. I mean, look, any ISV can go on any cloud and take advantage of it, and I love it. We live in a new world. How about machine learning workloads? What did you see there in terms of performance and benchmarks? >> Right, so machine learning. We offer three capabilities: training, which is fully automated, running inference, and explanations. One of the things which many of our customers coming from the enterprise told us is that explanations are very important to them, because customers want to know why the system chose a certain prediction. So we offer explanations for all models which have been trained by HeatWave. That's the first thing. Now, one of the interesting things about training is that training is usually the most expensive phase of machine learning, so we have spent a lot of time improving the performance of training. We have a bunch of techniques which we have developed inside of Oracle to improve the training process. For instance, we have meta-learned proxy models, which really give us an advantage; we use adaptive sampling; and we have invented techniques for parallelizing the hyperparameter search. As a result of all this work, our training is about 25 times faster than Redshift ML, and all the data stays inside the database; all this processing is done inside the database, so it's much faster and it is inside the database. And I want to point out that there is no additional charge for HeatWave customers, because we're using the same cluster; you're not invoking another service. So all of these machine learning capabilities are offered at no additional charge inside the database, and at a performance which is significantly faster than the competition. >> Are you taking advantage of, or is there any need, or not need, but any advantage that you can get, by exploiting things like Graviton? We've talked about that a little bit in the past. Or Trainium. You just mentioned training, so the custom silicon that AWS is doing, are you taking advantage of that? Do you need to? Can you give us some insight there? >> So there are two things, right? We're always evaluating what choices we have from a hardware perspective.
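For the in-database machine learning piece, a sketch of what training and batch scoring might look like from Python is below. The `sys.ML_TRAIN`, `sys.ML_MODEL_LOAD`, and `sys.ML_PREDICT_TABLE` procedure names and argument order follow the HeatWave AutoML documentation as best recalled here and should be treated as assumptions to check against current docs, as should the schema and column names, which are invented for illustration.

```python
# Minimal sketch: train and apply a model inside MySQL HeatWave (AutoML),
# so the data never leaves the database. Procedure names and arguments
# follow HeatWave AutoML docs as recalled; verify against current docs.
import mysql.connector

conn = mysql.connector.connect(
    host="heatwave-instance.example.com",  # hypothetical endpoint
    user="admin", password="***", database="bank",
)
cur = conn.cursor()

# Train a classifier on an in-database table; the trained model handle is
# returned in the @model session variable and stored in the model catalog.
cur.execute("""
    CALL sys.ML_TRAIN('bank.loan_applications', 'defaulted',
                      JSON_OBJECT('task', 'classification'), @model)
""")

# Load the model and score a table of new applications, writing predictions
# to another in-database table.
cur.execute("CALL sys.ML_MODEL_LOAD(@model, NULL)")
cur.execute("""
    CALL sys.ML_PREDICT_TABLE('bank.new_applications', @model,
                              'bank.new_applications_scored', NULL)
""")

cur.execute("SELECT COUNT(*) FROM bank.new_applications_scored")
print("scored rows:", cur.fetchone()[0])

cur.close()
conn.close()
```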
Obviously, for us, leveraging those is an option, and all the things you mention, we have considered them. But there are two things to consider. One is that HeatWave is a memory-intensive system, so memory is the dominant cost; the processor is a portion of the cost, but memory is the dominant cost. So what we have evaluated and found is that the current shape we are using is going to provide our customers with the best price performance. That's the first thing. The second thing is that there are times when we could use a specialized processor for accelerating part of the workload, but then it becomes a matter of cost to the customer. The advantage of our current architecture is that on the same hardware, customers get very good transaction processing performance, very good analytics performance, and very good machine learning performance. If you go with a specialized processor, it may accelerate, say, the machine learning, but then it's an additional cost the customers need to pay. So we are very sensitive to the customer's request, which is usually to provide very good performance at a very low cost, and we feel that the current design provides customers very good performance and very good price performance. >> So part of that is architectural, the memory-intensive nature of HeatWave; the other is AWS pricing. If AWS pricing were to flip, it might make more sense for you to take advantage of something like Graviton or Trainium. Okay, great, thank you. And let me come back to the benchmarks. Benchmarks are sometimes artificial, right? A car can go from 0 to 60 in two seconds, but I might not be able to experience that level of performance. Do you have any real-world numbers from customers that have used MySQL HeatWave on AWS, and how they look at performance? >> Yes, absolutely. The MySQL HeatWave service on AWS has been available since November, right? So we have a lot of customers who have tried the service, and what we have actually found is that many of these customers are planning to migrate from Aurora to MySQL HeatWave. And what they find is that the performance difference is actually much more pronounced than what I was talking about, because with Aurora the performance is actually much poorer compared to what I've talked about. So in some of these cases the customers found improvements of 60 times, 240 times, right? So HeatWave was 100 to 240 times faster, and it was much less expensive. And the third thing, which is noteworthy, is that customers don't need to change their applications. So if you ask for the top three reasons why customers are migrating, it's because of this: no change to the application, much faster, and it is cheaper. So in some cases, like Johnny Bites, what they found is that the performance of their applications for the complex queries was about 60 to 90 times faster. Then with 60 Technologies, what they found is that the performance of HeatWave compared to Aurora was 139 times faster. So yes, we do have many such examples from real workloads from customers who have tried it, and across all of them what we find is that it offers better performance, lower cost, and a single database, such that it is compatible with all existing MySQL-based applications and workloads. >> Really impressive. The analysts I talk to are all gaga over HeatWave, and I can see why. Okay, last question, maybe two in one. What's next?
In terms of new capabilities that customers are going to be able to leverage, and any other clouds that you're thinking about? We talked about that up front, but... >> So in terms of capabilities, you have seen that we have been non-stop attending to feedback from customers and reacting to it, and we have also been innovating organically. That's something which is going to continue, so yes, you can fully expect that we will not rest and will continue to innovate. And with respect to other clouds, yes, we are planning to support MySQL HeatWave on Azure, and this is something that will be announced in the near future. Great. >> All right, thank you. Really appreciate the overview. Congratulations on the work. Really exciting news that you're moving MySQL HeatWave into other clouds; it's something that we've been expecting for some time, so it's great to see you guys making that move, and as always, great to have you on theCUBE. >> Thank you for the opportunity. >> All right. And thank you for watching this special CUBE conversation. I'm Dave Vellante, and we'll see you next time.
SUMMARY :
Dave Vellante talks with Nipun, Senior Vice President of MySQL HeatWave at Oracle, about MySQL HeatWave becoming available on AWS. They cover why Oracle is meeting AWS customers where they are, how the service was optimized natively for AWS rather than lifted and shifted, new capabilities such as an interactive console, added security features, and MySQL Autopilot's auto thread pooling, and published benchmarks versus Aurora, Redshift, Snowflake, and BigQuery across OLTP, analytics, and machine learning, along with real-world customer results and plans to bring HeatWave to Azure.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Volonte | PERSON | 0.99+ |
December 2020 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
France | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
10 times | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Heatwave | TITLE | 0.99+ |
100 | QUANTITY | 0.99+ |
60 times | QUANTITY | 0.99+ |
one year | QUANTITY | 0.99+ |
12 times | QUANTITY | 0.99+ |
GWS | ORGANIZATION | 0.99+ |
60 technologies | QUANTITY | 0.99+ |
first part | QUANTITY | 0.99+ |
240 times | QUANTITY | 0.99+ |
two separate licences | QUANTITY | 0.99+ |
third category | QUANTITY | 0.99+ |
second advantage | QUANTITY | 0.99+ |
0 | QUANTITY | 0.99+ |
seven times | QUANTITY | 0.99+ |
two seconds | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
seven times | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
one | QUANTITY | 0.99+ |
25 times | QUANTITY | 0.99+ |
second point | QUANTITY | 0.99+ |
November | DATE | 0.99+ |
85 patents | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.99+ |
Aurora | TITLE | 0.99+ |
third thing | QUANTITY | 0.99+ |
Each | QUANTITY | 0.99+ |
second example | QUANTITY | 0.99+ |
10 gigabytes | QUANTITY | 0.99+ |
three things | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two benefits | QUANTITY | 0.99+ |
one aspect | QUANTITY | 0.99+ |
first aspect | QUANTITY | 0.98+ |
two separate databases | QUANTITY | 0.98+ |
over 10 years | QUANTITY | 0.98+ |
fourth major release | QUANTITY | 0.98+ |
39 times | QUANTITY | 0.98+ |
first thing | QUANTITY | 0.98+ |
Heat Wave | TITLE | 0.98+ |
Sarbjeet Johal | Supercloud22
(upbeat music) >> Welcome back, everyone to CUBE Supercloud 22. I'm John Furrier, your host. Got a great influencer, Cloud Cloud RRT segment with Sarbjeet Johal, Cloud influencer, Cloud economist, Cloud consultant, Cloud advisor. Sarbjeet, welcome back, CUBE alumni. Good to see you. >> Thanks John and nice to be here. >> Now, what's your title? Cloud consultant? Analyst? >> Consultant, actually. Yeah, I'm launching my own business right now formally, soon. It's in stealth mode right now, we'll be (inaudible) >> Well, I'll just call you a Cloud guru, Cloud influencer. You've been great, friend of theCUBE. Really powerful on social. You share a lot of content. You're digging into all the trends. Supercloud is a thing, it's getting a lot of traction. We introduced that concept last reinvent. We were riffing before that. As we kind of were seeing the structural change that is now Supercloud, it really is kind of the destination or outcome of what we're seeing with hybrid cloud as a steady state into the what's now, they call multicloud, which is kind of awkward. It feels like it's default. Like multicloud, multi-vendor, but Supercloud has much more of a comprehensive abstraction around it. What's your thoughts? >> As you said, as Dave says that too, the Supercloud has that abstraction built into it. It's built on top of cloud, right? So it's being built on top of the CapEx which is being spent by likes of AWS and Azure and Google Cloud, and many others, right? So it's leveraging that infrastructure and building software stack on top of that, which is a platform. I see that as a platform being built on top of infrastructure as code. It's another platform which is not native to the cloud providers. So it's like a kind of cross-Cloud platform. That's what I said. >> Yeah, VMware calls it that cloud-cross cloud. I'm not a big fan of the name but I get what you're saying. We had a segment on earlier with Adrian Cockcroft, Laurie McVety and Chris Wolf, all part of the Cloud RRT like ourselves, and you've involved in Cloud from day one. Remember the OpenStack days Early Cloud, AWS, when they started we saw the trajectory and we saw the change. And I think the OpenStack in those early days were tell signs because you saw the movement of API first but Amazon just grew so fast. And then Azure now is catching up, their CapEx is so large that companies like Snowflake's like, "Why should I build my own? "I just sit on top of AWS, "move fast on one native cloud, then figure it out." Seems to be one of the playbooks of the Supercloud. >> Yeah, that is true. And there are reasons behind that. And I think number one reason is the skills gravity. What I call it, the developers and/or operators are trained on one set of APIs. And I've said that many times, to out compete your competition you have to out educate the market. And we know which cloud has done that. We know what traditional vendor has done that, in '90s it was Microsoft, they had VBS number one language and they were winning. So in the cloud era, it's AWS, their marketing efforts, their go-to market strategy, the micro nature of the releasing the micro sort of features, if you will, almost every week there's a new feature. So they have got it. And other two are trying to mimic that and they're having low trouble light. >> Yeah and I think GCP has been struggling compared to the three and native cloud on native as you're right, completely successful. 
As you're caught up and you see the Microsoft, I think is a a great selling point around multiple clouds. And the question that's on the table here is do you stay with the native cloud or you jump right to multicloud? Now multicloud by default is kind of what I see happening. We've been debating this, I'd love to get your thoughts because, Microsoft has a huge install base. They've converted to Office 365. They even throw SQL databases in there to kind of give it a little extra bump on the earnings but I've been super critical on their numbers. I think their shares are, there's clearly overstating their share, in my opinion, compared to AWS is a need of cloud, Azure though is catching up. So you have customers that are happy with Microsoft, that are going to run their apps on Azure. So if a customer has Azure and Microsoft that's technically multiple clouds. >> Yeah, true. >> And it's not a strategy, it's just an outcome. >> Yeah, I see Microsoft cloud as friendly to the internal developers. Internal developers of enterprises. but AWS is a lot more ISV friendly which is the software shops friendly. So that's what they do. They just build software and give it to somebody else. But if you're in-house developer and you have been a Microsoft shop for a long time, which enterprise haven't been that, right? So Microsoft is well entrenched into the enterprise. We know that, right? >> Yeah. >> For a long time. >> Yeah and the old joke was developers love code and just go with a lock in and then ops people don't want lock in because they want choice. So you have the DevOps movement that's been successful and they get DevSecOps. The real focus to me, I think, is the operating teams because the ops side is really with the pressure vis-a-vis. I want to get your reaction because we're seeing kind of the script flip. DevOps worked, infrastructure's code has worked. We don't yet see security as code yet. And you have things like cloud native services which is all developer, goodness. So I think the developers are doing fine. Give 'em a thumbs up and open source's booming. So they're shifting left, CI/CD pipeline. You have some issues around repo, monolithic repos, but devs are doing fine. It's the ops that are now have to level up because that seems to be a hotspot. What's your take? What's your reaction to that? Do you agree? And if you say you agree, why? >> Yeah, I think devs are doing fine because some of the devs are going into ops. Like the whole movement behind DevOps culture is that devs and ops is one team. The people who are building that application they're also operating that as well. But that's very foreign and few in enterprise space. We know that, right? Big companies like Google, Microsoft, Amazon, Twitter, those guys can do that. They're very tech savvy shops. But when it comes to, if you go down from there to the second tier of enterprises, they are having hard time with that. Once you create software, I've said that, I sound like a broken record here. So once you create piece of software, you want to operate it. You're not always creating it. Especially when it's inhouse software development. It's not your core sort of competency to. You're not giving that software to somebody else or they're not multiple tenants of that software. You are the only user of that software as a company, or maybe maximum to your employees and partners. But that's where it stops. 
So there are those differences and when it comes to ops, we have to still differentiate the ops of the big companies, which are tech companies, pure tech companies and ops of the traditional enterprise. And you are right, the ops of the traditional enterprise are having tough time to cope up with the changing nature of things. And because they have to run the old traditional stacks whatever they happen to have, SAP, Oracle, financial, whatnot, right? Thousands of applications, they have to run that. And they have to learn on top of that, new scripting languages to operate the new stack, if you will. >> So for ops teams do they have to spin up operating teams for every cloud specialized tooling, there's consequences to that. >> Yeah. There's economics involved, the process, if you are learning three cloud APIs and most probably you will end up spending a lot more time and money on that. Number one, number two, there are a lot more problems which can arise from that, because of the differences in how the APIs work. The rule says if you pick one primary cloud and then you're focused on that, and most of your workloads are there, and then you go to the secondary cloud number two or three on as need basis. I think that's the right approach. >> Well, I want to get your take on something that I'm observing. And again, maybe it's because I'm old school, been around the IT block for a while. I'm observing the multi-vendors kind of as Dave calls the calisthenics, they're out in the market, trying to push their wears and convincing everyone to run their workloads on their infrastructure. multicloud to me sounds like multi-vendor. And I think there might not be a problem yet today so I want to get your reaction to my thoughts. I see the vendors pushing hard on multicloud because they don't have a native cloud. I mean, IBM ultimately will probably end up being a SaaS application on top of one of the CapEx hyperscale, some say, but I think the playbook today for customers is to stay on one native cloud, run cloud native hybrid go in on OneCloud and go fast. Then get success and then go multiple clouds. versus having a multicloud set of services out of the gate. Because if you're VMware you'd love to have cross cloud abstraction layer but that's lock in too. So what's your lock in? Success in the marketplace or vendor access? >> It's tricky actually. I've said that many times, that you don't wake up in the morning and say like, we're going to do multicloud. Nobody does that by choice. So it falls into your lab because of mostly because of what MNA is. And sometimes because of the price to performance ratio is better somewhere else for certain kind of workloads. That's like foreign few, to be honest with you. That's part of my read is, that being a developer an operator of many sort of systems, if you will. And the third tier which we talked about during the VMworld, I think 2019 that you want vendor diversity, just in case one vendor goes down or it's broken up by feds or something, and you want another vendor, maybe for price negotiation tactics, or- >> That's an op mentality. >> Yeah, yeah. >> And that's true, they want choice. They want to get locked in. >> You want choice because, and also like things can go wrong with the provider. We know that, we focus on top three cloud providers and we sort of assume that they'll be there for next 10 years or so at least. >> And what's also true is not everyone can do everything. >> Yeah, exactly. 
So you have to pick the provider based on these sort of three sets of high level criteria, if you will. And I think multicloud should be your last choice. Like, you should not be gearing up for that by default; it should be by design, as Chuck said. >> Okay, so I need to ask you, what does Supercloud look like five, 10 years out, in your opinion? What's the outcome of a good Supercloud structure? What does it look like? Where did it come from? How did it get there? What's your take? >> I think Supercloud is being born in the absence of standards around cloud. That's what it is. Because we don't have standards, we long for, or we want, services at different cloud providers which have the same APIs, so there's less of a learning curve, or almost zero learning curve, for our developers and operators to learn that stuff. Snowflake is one example, and the VMware stack is available at different cloud providers. That's sort of an infrastructure as a service example, if you will. And Snowflake is a sort of data warehouse example, and they're going down the stack. Well, they're trying to expand. So there are many examples like that. What was the question again? >> Is Supercloud 10 years out? What does it look like? What are the components? >> Yeah, I think the Supercloud 10 years out will expand, because we will expand the software stack faster than the hardware stack, and the hardware stack will be expanding, of course, with the custom chips and all that. There was a huge event happening yesterday from AWS. >> Yeah, the Silicon. >> Silicon Day. And that's an eye-opening sort of movement in the whole technology consumption, if you will. >> And yeah, the differentiation with the chips, with the supply chain kind of hurting right now, we think it's going to be a forcing function for more cloud adoption. Because if you can't buy networking gear, you're going to go to the cloud. >> Yeah, so Supercloud to me in 10 years, it will be bigger, better, with the likes of HashiCorp. Actually, I think we need the likes of HashiCorp on the infrastructure as a service side. I think they will be part of the Supercloud. They are kind of sitting on the side right now, kind of a good vendor lost in transition kind of thing. That sort of thing. >> It's like Kubernetes, we'll just close out here. We'll make a statement. Is Kubernetes a developer thing or an infrastructure thing? It's an ops thing. I mean, people are coming out and saying Kubernetes is not a developer issue. >> It's an ops thing. >> It's an ops thing. It's in operations, it's under the hood. So again, this is infrastructure as a service integrating with this super PaaS layer, as Dave Vellante and Wikibon call it. >> Yeah, it's an ops thing, actually, which enables developers to get that as a service, like you can deploy your software in different formats of containers, and then you don't care, like, what VMs those are. But serverless is sort of arising as well. It was hot for a while, now it's in kind of a lull state, but I think serverless will be better in the next three to five years. >> Well, certainly the hyperscalers like AWS and Azure and others have had great CapEx and investments. They need to stay ahead. In your opinion, final question, how do they stay ahead? 'Cause AWS is not going to stand still, nor will Azure, they're pedaling as fast as they can. Google's trying to figure out where they fit in. Are they going to be a real cloud or a software stack? Same with Oracle. To me, it's really, the big race is now with AWS, and Azure's nipping at their heels.
Hyperscalers, what do they need to do to differentiate going forward? >> I think they are in a limbo. On one side, they don't want to compete with their customers who are sitting on top of them, the likes of Snowflake and others, right? And VMware as well. But at the same time, they have to keep expanding and keep innovating. And they're debating within themselves. Like, should we compete with these guys? Should we launch similar sorts of features and functionality? Or should we keep it open? And what I have heard as of now is that internally at AWS, especially, they're thinking about keeping it open and letting people sort of (inaudible)- >> And you see them buying in, like the Cerner deal, Oracle bought Cerner, and Amazon bought a healthcare company. I think the likes of MongoDB, Snowflake, Databricks are perfect examples of what we'll see, I think, on the AWS side. Azure, I'm not so sure, they like to have a little bit more control at the top of the stack with the SaaS, but I think Databricks has been so successful with open source; Snowflake is a little bit more proprietary and closed than Databricks. They're doing well on top of data, and MongoDB has had great success. All of these things compete with AWS higher level services. So, that's the advantage of those companies: not having the CapEx investment and then going to multiple clouds, to other ecosystems. That's a path for customers. Stay on one, go fast, get traction, then go. >> That's huge. Actually, the last sort of comment I want to make is that you guys should also include in the definition of Supercloud the likes of Capital One and Cerner sort of vendors, right? So they are verticals; Capital One is in the financial vertical, and then Cerner, which Oracle bought, is in the healthcare vertical. And remember, in the beginning of the cloud, when the cloud was just getting born, we used to say that we will have the community clouds which will be serving different verticals. >> Specialty clouds. >> Specialty clouds, community clouds. And actually that is happening now at a very small level. But I think it will start happening at a bigger level. Goldman Sachs and others are trying to build these services on the financial front, risk management and whatnot. I think that will be- >> Well, what's interesting is you're bringing up a great discussion. We were having discussions around these vertical clouds like Goldman Sachs, Capital One, Liberty Mutual. They're going all in on one native cloud, then going into multiple clouds after, but then there's also the specialty clouds around functionality, app identity, data security. So you have multiple dimensions of clouds here. You can have a specialty cloud just on identity. I mean, identity on Amazon is different than on Azure. Huge issue. >> Yeah, I think at some point we have to distinguish these things which are being built on top of this infrastructure as a service, and PaaS, platform as a service, which is very close to infrastructure as a service, like the lines are blurred; we have to distinguish these two things from these Superclouds. Actually, what we are calling Supercloud, maybe there'll be a better term, a better name, but we are all industry pundits actually, including myself and you and everybody else. Like, we tend to mix these things up.
I think we have to separate these things a little bit to make things (inaudible) >> Yeah, I think that's what the super PaaS thing is about, because you think about how the next generation of SaaS has to be solved by innovations in the infrastructure services, to your point about HashiCorp and others. So it's not as clear as infrastructure, platform, SaaS. There's going to be a lot of interplay between these levels of services. >> Yeah, we are in this state of flux; a lot of developers are lost. A lot of operators are lost in this transition, and it's just like our economy right now. Like, I was reading CNBC today, and there's sort of a headline that people are having a hard time understanding what state the economy is in. And so the same is true with our technology economy. Like, we don't know what state we are in. It's kind of in a transition phase right now. >> Well, we're definitely in a bad economy relative to the consumer market. I've said it on theCUBE publicly, Dave has as well, not as aggressively. I think tech is still in a boom. I don't think there's a tech bubble at all that's bursting. I think the digital transformation from post-COVID is going to continue. And this is the first recession downturn where the hyperscalers have been in market, delivering the economic value, almost like they're pumping on all cylinders and going to the next level. Go back to 2008, Amazon Web Services, where were they? They were just emerging. So the cloud economic impact has not been factored into the global GDP relationship. I think all the firms that are looking at GDP growth and tech spend as a correlation are completely missing the boat on the fact that cloud economics and digital transformation is a big part of the new economics. So refactoring business models, this is continuing, and it's just the early days. >> Yeah, I have said that many times, that cloud works well in a bad economy and cloud works great in a good economy. Do you know why? Because there are different types of workloads in the good economy. A lot of experimentation, innovative solutions go into the cloud. You can do experimentation because you have extra money, but in a bad economy you don't want to spend the CapEx because you don't have money. Money is expensive at that point. And then you want to keep working and you don't need (inaudible) >> I think inflation's a big factor too right now. Well, Sarbjeet, great to see you. Thanks for coming into our studio for our stage performance for Supercloud 22. This is a pilot episode where we're going to get a consortium of cloud experts like yourself in the conversation to discuss what the architecture is. What is the taxonomy? What are the key building blocks, and what things need to be in place for Supercloud capability? Because it's clear that without standards, without de facto standards, we're at this tipping point where, if it all comes together, no one company can do everything. Customers want choice, but they also want to go fast too. So DevOps is working. It's going to the next level. We see this as Supercloud. So thank you so much for your participation. >> Thanks for having me. And I'm looking forward to listening to the other sessions (inaudible) >> We're going to take it on A stickers. We'll take it on the internet. I'm John Furrier, stay tuned for more Supercloud 22 coverage, here at the Palo Alto studios, in one minute. (bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Microsoft | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Goldman Sachs | ORGANIZATION | 0.99+ |
Sarbjeet | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Sarbjeet Johal | PERSON | 0.99+ |
Chris Wolf | PERSON | 0.99+ |
Chuck | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
2008 | DATE | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
Liberty Mutual | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
Laurie McVety | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
2019 | DATE | 0.99+ |
one minute | QUANTITY | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
multicloud | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
Soner | ORGANIZATION | 0.98+ |
CNBC | ORGANIZATION | 0.98+ |
two things | QUANTITY | 0.98+ |
Office 365 | TITLE | 0.98+ |
CapEx | ORGANIZATION | 0.98+ |
Silicon Day | EVENT | 0.98+ |
third tier | QUANTITY | 0.98+ |
Supercloud | ORGANIZATION | 0.98+ |
Snowflake | TITLE | 0.98+ |
second tier | QUANTITY | 0.98+ |
one team | QUANTITY | 0.98+ |
MNA | ORGANIZATION | 0.97+ |
five years | QUANTITY | 0.97+ |
Azure | ORGANIZATION | 0.97+ |
WS | ORGANIZATION | 0.97+ |
VBS | TITLE | 0.97+ |
10 years | QUANTITY | 0.97+ |
one example | QUANTITY | 0.96+ |
DevOps | TITLE | 0.96+ |
two | QUANTITY | 0.96+ |
Kubernetes | TITLE | 0.96+ |
one set | QUANTITY | 0.96+ |
Goldman Sachs Capital One | ORGANIZATION | 0.96+ |
DevSecOps | TITLE | 0.95+ |
CapEx | TITLE | 0.95+ |
Serverless | TITLE | 0.95+ |
Thousands of applications | QUANTITY | 0.95+ |
VMware Stack | TITLE | 0.94+ |
Luis Ceze, OctoML | Amazon re:MARS 2022
(upbeat music) >> Welcome back, everyone, to theCUBE's coverage here live on the floor at AWS re:MARS 2022. I'm John Furrier, host for theCUBE. Great event, machine learning, automation, robotics, space, that's MARS. It's part of the re-series of events, re:Invent's the big event at the end of the year, re:Inforce, security, re:MARS, really intersection of the future of space, industrial, automation, which is very heavily DevOps machine learning, of course, machine learning, which is AI. We have Luis Ceze here, who's the CEO co-founder of OctoML. Welcome to theCUBE. >> Thank you very much for having me in the show, John. >> So we've been following you guys. You guys are a growing startup funded by Madrona Venture Capital, one of your backers. You guys are here at the show. This is a, I would say small show relative what it's going to be, but a lot of robotics, a lot of space, a lot of industrial kind of edge, but machine learning is the centerpiece of this trend. You guys are in the middle of it. Tell us your story. >> Absolutely, yeah. So our mission is to make machine learning sustainable and accessible to everyone. So I say sustainable because it means we're going to make it faster and more efficient. You know, use less human effort, and accessible to everyone, accessible to as many developers as possible, and also accessible in any device. So, we started from an open source project that began at University of Washington, where I'm a professor there. And several of the co-founders were PhD students there. We started with this open source project called Apache TVM that had actually contributions and collaborations from Amazon and a bunch of other big tech companies. And that allows you to get a machine learning model and run on any hardware, like run on CPUs, GPUs, various GPUs, accelerators, and so on. It was the kernel of our company and the project's been around for about six years or so. Company is about three years old. And we grew from Apache TVM into a whole platform that essentially supports any model on any hardware cloud and edge. >> So is the thesis that, when it first started, that you want to be agnostic on platform? >> Agnostic on hardware, that's right. >> Hardware, hardware. >> Yeah. >> What was it like back then? What kind of hardware were you talking about back then? Cause a lot's changed, certainly on the silicon side. >> Luis: Absolutely, yeah. >> So take me through the journey, 'cause I could see the progression. I'm connecting the dots here. >> So once upon a time, yeah, no... (both chuckling) >> I walked in the snow with my bare feet. >> You have to be careful because if you wake up the professor in me, then you're going to be here for two hours, you know. >> Fast forward. >> The average version here is that, clearly machine learning has shown to actually solve real interesting, high value problems. And where machine learning runs in the end, it becomes code that runs on different hardware, right? And when we started Apache TVM, which stands for tensor virtual machine, at that time it was just beginning to start using GPUs for machine learning, we already saw that, with a bunch of machine learning models popping up and CPUs and GPU's starting to be used for machine learning, it was clear that it come opportunity to run on everywhere. >> And GPU's were coming fast. >> GPUs were coming and huge diversity of CPUs, of GPU's and accelerators now, and the ecosystem and the system software that maps models to hardware is still very fragmented today. 
So hardware vendors have their own specific stacks. So Nvidia has its own software stack, and so does Intel, AMD. And honestly, I mean, I hope I'm not being, you know, too controversial here to say that it kind of of looks like the mainframe era. We had tight coupling between hardware and software. You know, if you bought IBM hardware, you had to buy IBM OS and IBM database, IBM applications, it all tightly coupled. And if you want to use IBM software, you had to buy IBM hardware. So that's kind of like what machine learning systems look like today. If you buy a certain big name GPU, you've got to use their software. Even if you use their software, which is pretty good, you have to buy their GPUs, right? So, but you know, we wanted to help peel away the model and the software infrastructure from the hardware to give people choice, ability to run the models where it best suit them. Right? So that includes picking the best instance in the cloud, that's going to give you the right, you know, cost properties, performance properties, or might want to run it on the edge. You might run it on an accelerator. >> What year was that roughly, when you were going this? >> We started that project in 2015, 2016 >> Yeah. So that was pre-conventional wisdom. I think TensorFlow wasn't even around yet. >> Luis: No, it wasn't. >> It was, I'm thinking like 2017 or so. >> Luis: Right. So that was the beginning of, okay, this is opportunity. AWS, I don't think they had released some of the nitro stuff that the Hamilton was working on. So, they were already kind of going that way. It's kind of like converging. >> Luis: Yeah. >> The space was happening, exploding. >> Right. And the way that was dealt with, and to this day, you know, to a large extent as well is by backing machine learning models with a bunch of hardware specific libraries. And we were some of the first ones to say, like, know what, let's take a compilation approach, take a model and compile it to very efficient code for that specific hardware. And what underpins all of that is using machine learning for machine learning code optimization. Right? But it was way back when. We can talk about where we are today. >> No, let's fast forward. >> That's the beginning of the open source project. >> But that was a fundamental belief, worldview there. I mean, you have a world real view that was logical when you compare to the mainframe, but not obvious to the machine learning community. Okay, good call, check. Now let's fast forward, okay. Evolution, we'll go through the speed of the years. More chips are coming, you got GPUs, and seeing what's going on in AWS. Wow! Now it's booming. Now I got unlimited processors, I got silicon on chips, I got, everywhere >> Yeah. And what's interesting is that the ecosystem got even more complex, in fact. Because now you have, there's a cross product between machine learning models, frameworks like TensorFlow, PyTorch, Keras, and like that and so on, and then hardware targets. So how do you navigate that? What we want here, our vision is to say, folks should focus, people should focus on making the machine learning models do what they want to do that solves a value, like solves a problem of high value to them. Right? So another deployment should be completely automatic. Today, it's very, very manual to a large extent. 
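The compilation approach Luis describes, taking a trained model and compiling it to efficient code for one specific target, is what the open source Apache TVM project does. The following is a minimal sketch of that flow for an ONNX model on a CPU target; the model file name, input tensor name, and shape are assumptions for illustration, and this reflects the public TVM Python API rather than OctoML's hosted platform.

```python
# Minimal Apache TVM flow: import a model, compile it for one hardware target, run it.
# "model.onnx", the input name, and the shape are placeholders for illustration.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")
shape_dict = {"input": (1, 3, 224, 224)}  # assumed input tensor name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Swap the target string to retarget the same model: "llvm" for CPU, "cuda" for
# NVIDIA GPUs, "llvm -mtriple=aarch64-linux-gnu" for an Arm edge device, and so on.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# The compiled artifact behaves like an ordinary function you call with inputs.
dev = tvm.cpu(0)  # use tvm.cuda(0) for a GPU target
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", tvm.nd.array(np.random.rand(1, 3, 224, 224).astype("float32")))
module.run()
prediction = module.get_output(0).numpy()
```

The last few lines are the point: once compiled, the model is just a callable artifact you hand inputs to, which is the "treat the model like a regular function" idea that comes up later in the conversation. Retargeting means swapping the target string and recompiling, not rewriting the model.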
So once you're serious about deploying machine learning model, you got a good understanding where you're going to deploy it, how you're going to deploy it, and then, you know, pick out the right libraries and compilers, and we automated the whole thing in our platform. This is why you see the tagline, the booth is right there, like bringing DevOps agility for machine learning, because our mission is to make that fully transparent. >> Well, I think that, first of all, I use that line here, cause I'm looking at it here on live on camera. People can't see, but it's like, I use it on a couple couple of my interviews because the word agility is very interesting because that's kind of the test on any kind of approach these days. Agility could be, and I talked to the robotics guys, just having their product be more agile. I talked to Pepsi here just before you came on, they had this large scale data environment because they built an architecture, but that fostered agility. So again, this is an architectural concept, it's a systems' view of agility being the output, and removing dependencies, which I think what you guys were trying to do. >> Only part of what we do. Right? So agility means a bunch of things. First, you know-- >> Yeah explain. >> Today it takes a couple months to get a model from, when the model's ready, to production, why not turn that in two hours. Agile, literally, physically agile, in terms of walk off time. Right? And then the other thing is give you flexibility to choose where your model should run. So, in our deployment, between the demo and the platform expansion that we announced yesterday, you know, we give the ability of getting your model and, you know, get it compiled, get it optimized for any instance in the cloud and automatically move it around. Today, that's not the case. You have to pick one instance and that's what you do. And then you might auto scale with that one instance. So we give the agility of actually running and scaling the model the way you want, and the way it gives you the right SLAs. >> Yeah, I think Swami was mentioning that, not specifically that use case for you, but that use case generally, that scale being moving things around, making them faster, not having to do that integration work. >> Scale, and run the models where they need to run. Like some day you want to have a large scale deployment in the cloud. You're going to have models in the edge for various reasons because speed of light is limited. We cannot make lights faster. So, you know, got to have some, that's a physics there you cannot change. There's privacy reasons. You want to keep data locally, not send it around to run the model locally. So anyways, and giving the flexibility. >> Let me jump in real quick. I want to ask this specific question because you made me think of something. So we're just having a data mesh conversation. And one of the comments that's come out of a few of these data as code conversations is data's the product now. So if you can move data to the edge, which everyone's talking about, you know, why move data if you don't have to, but I can move a machine learning algorithm to the edge. Cause it's costly to move data. I can move computer, everyone knows that. But now I can move machine learning to anywhere else and not worry about integrating on the fly. So the model is the code. >> It is the product. >> Yeah. And since you said, the model is the code, okay, now we're talking even more here. So machine learning models today are not treated as code, by the way. 
So they do not have any of the typical properties of code. Whenever you write a piece of code and you run it, you don't even think about what CPU it runs on, or what kind of instance it runs on. But with a machine learning model, you do. So what we have done is create this fully transparent, automated way of allowing you to treat your machine learning models as if they were a regular function that you call, and that function could run anywhere. >> Yeah. >> Right. >> That's why-- >> That's better. >> Bringing DevOps agility-- >> That's better. >> Yeah. And you can use existing-- >> That's better, because I can run it on the Artemis too, in space. >> You could, yeah. >> If they have the hardware. (both laugh) >> And that allows you to continue to use your existing DevOps infrastructure and your existing people. >> So I have to ask you, 'cause since you're a professor, this is like a masterclass on theCube. Thank you for coming on, Professor. (Luis laughing) I'm a hardware guy. I'm building hardware for Boston Dynamics, Spot, the dog. That's the diversity in hardware, it tends to be purpose driven. I got a spaceship, I'm going to have hardware on there. >> Luis: Right. >> It's generally viewed in the community here, by everyone I talk to and in other communities, that open source is going to drive all software. That's a check. But the scale and integration is super important. And they're also recognizing that hardware is really about the software. And they even said on stage here, hardware is not about the hardware, it's about the software. So if you believe that to be true, then your model checks all the boxes. Are people getting this? >> I think they're starting to. Here is why, right. A lot of companies that were hardware first, that thought about software too late, aren't making it. Right? There's a large number of hardware companies, AI chip companies, that aren't making it. Probably some of them won't make it, unfortunately, just because they started thinking about software too late. I'm so glad to see a lot of the early, I hope I'm not just tooting our own horn here, but Apache TVM, the infrastructure that we built to map models to different hardware, it's very flexible. So we see a lot of emerging chip companies, like SiMa.ai, that have been doing fantastic work, and they use Apache TVM to map algorithms to their hardware. And there's a bunch of others that are also using Apache TVM. That's because you have, you know, an open infrastructure that keeps up to date with all the machine learning frameworks and models and allows you to extend to the chips that you want. So these companies paying attention that early gives them a much higher fighting chance, I'd say. >> Well, first of all, not only are you backable by the VCs 'cause you have pedigree, you're a professor, you're smart, and you get good recruiting-- >> Luis: I don't know about the smart part. >> And you get good recruiting for PhDs out of University of Washington, which is not too shabby a computer science department. But they want to make money. The VCs want to make money. >> Right. >> So you have to make money. So what's the pitch? What's the business model? >> Yeah. Absolutely. >> Share with us what you're thinking there. >> Yeah. The value of using our solution is shorter time to value for your model, from months to hours. Second, you shrink OpEx, because you don't need a specialized, expensive team.
Talk about expensive, expensive engineers who can understand machine learning hardware and software engineering to deploy models. You don't need those teams if you use this automated solution, right? Then you reduce that. And also, in the process of actually getting a model and getting specialized to the hardware, making hardware aware, we're talking about a very significant performance improvement that leads to lower cost of deployment in the cloud. We're talking about very significant reduction in costs in cloud deployment. And also enabling new applications on the edge that weren't possible before. It creates, you know, latent value opportunities. Right? So, that's the high level value pitch. But how do we make money? Well, we charge for access to the platform. Right? >> Usage. Consumption. >> Yeah, and value based. Yeah, so it's consumption and value based. So depends on the scale of the deployment. If you're going to deploy machine learning model at a larger scale, chances are that it produces a lot of value. So then we'll capture some of that value in our pricing scale. >> So, you have direct sales force then to work those deals. >> Exactly. >> Got it. How many customers do you have? Just curious. >> So we started, the SaaS platform just launched now. So we started onboarding customers. We've been building this for a while. We have a bunch of, you know, partners that we can talk about openly, like, you know, revenue generating partners, that's fair to say. We work closely with Qualcomm to enable Snapdragon on TVM and hence our platform. We're close with AMD as well, enabling AMD hardware on the platform. We've been working closely with two hyperscaler cloud providers that-- >> I wonder who they are. >> I don't know who they are, right. >> Both start with the letter A. >> And they're both here, right. What is that? >> They both start with the letter A. >> Oh, that's right. >> I won't give it away. (laughing) >> Don't give it away. >> One has three, one has four. (both laugh) >> I'm guessing, by the way. >> Then we have customers in the, actually, early customers have been using the platform from the beginning in the consumer electronics space, in Japan, you know, self driving car technology, as well. As well as some AI first companies that actually, whose core value, the core business come from AI models. >> So, serious, serious customers. They got deep tech chops. They're integrating, they see this as a strategic part of their architecture. >> That's what I call AI native, exactly. But now there's, we have several enterprise customers in line now, we've been talking to. Of course, because now we launched the platform, now we started onboarding and exploring how we're going to serve it to these customers. But it's pretty clear that our technology can solve a lot of other pain points right now. And we're going to work with them as early customers to go and refine them. >> So, do you sell to the little guys, like us? Will we be customers if we wanted to be? >> You could, absolutely, yeah. >> What we have to do, have machine learning folks on staff? >> So, here's what you're going to have to do. Since you can see the booth, others can't. No, but they can certainly, you can try our demo. >> OctoML. >> And you should look at the transparent AI app that's compiled and optimized with our flow, and deployed and built with our flow. That allows you to get your image and do style transfer. You know, you can get you and a pineapple and see how you look like with a pineapple texture. 
>> We got a lot of transcript and video data. >> Right. Yeah. Right, exactly. So, you can use that. Then there's a very clear-- >> But I could use it. You're not blocking me from using it. Everyone's, it's pretty much democratized. >> You can try the demo, and then you can request access to the platform. >> But you get a lot of more serious deeper customers. But you can serve anybody, what you're saying. >> Luis: We can serve anybody, yeah. >> All right, so what's the vision going forward? Let me ask this. When did people start getting the epiphany of removing the machine learning from the hardware? Was it recently, a couple years ago? >> Well, on the research side, we helped start that trend a while ago. I don't need to repeat that. But I think the vision that's important here, I want the audience here to take away is that, there's a lot of progress being made in creating machine learning models. So, there's fantastic tools to deal with training data, and creating the models, and so on. And now there's a bunch of models that can solve real problems there. The question is, how do you very easily integrate that into your intelligent applications? Madrona Venture Group has been very vocal and investing heavily in intelligent applications both and user applications as well as enablers. So we say an enable of that because it's so easy to use our flow to get a model integrated into your application. Now, any regular software developer can integrate that. And that's just the beginning, right? Because, you know, now we have CI/CD integration to keep your models up to date, to continue to integrate, and then there's more downstream support for other features that you normally have in regular software development. >> I've been thinking about this for a long, long, time. And I think this whole code, no one thinks about code. Like, I write code, I'm deploying it. I think this idea of machine learning as code independent of other dependencies is really amazing. It's so obvious now that you say it. What's the choices now? Let's just say that, I buy it, I love it, I'm using it. Now what do I got to do if I want to deploy it? Do I have to pick processors? Are there verified platforms that you support? Is there a short list? Is there every piece of hardware? >> We actually can help you. I hope we're not saying we can do everything in the world here, but we can help you with that. So, here's how. When you have them all in the platform you can actually see how this model runs on any instance of any cloud, by the way. So we support all the three major cloud providers. And then you can make decisions. For example, if you care about latency, your model has to run on, at most 50 milliseconds, because you're going to have interactivity. And then, after that, you don't care if it's faster. All you care is that, is it going to run cheap enough. So we can help you navigate. And also going to make it automatic. >> It's like tire kicking in the dealer showroom. >> Right. >> You can test everything out, you can see the simulation. Are they simulations, or are they real tests? >> Oh, no, we run all in real hardware. So, we have, as I said, we support any instances of any of the major clouds. We actually run on the cloud. But we also support a select number of edge devices today, like ARMs and Nvidia Jetsons. 
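The navigation Luis describes, keep inference under the 50 millisecond interactivity budget and then pick whatever runs cheapest, boils down to a filter-and-sort over per-instance benchmark results. The sketch below is generic Python with invented numbers, not OctoML's actual API; real instance type names are used only to make the example readable.

```python
# Pick the cheapest cloud instance whose measured latency meets the SLA.
# The benchmark numbers below are made up for illustration.
benchmarks = [
    {"instance": "c5.2xlarge",  "p95_latency_ms": 41.0, "cost_per_hour": 0.34},
    {"instance": "m5.4xlarge",  "p95_latency_ms": 38.5, "cost_per_hour": 0.77},
    {"instance": "g4dn.xlarge", "p95_latency_ms": 12.3, "cost_per_hour": 0.53},
    {"instance": "c6g.2xlarge", "p95_latency_ms": 55.2, "cost_per_hour": 0.27},
]

SLA_MS = 50.0  # the interactivity budget from the conversation

def pick_instance(results, sla_ms):
    """Return the cheapest instance that meets the latency SLA, or None."""
    eligible = [r for r in results if r["p95_latency_ms"] <= sla_ms]
    return min(eligible, key=lambda r: r["cost_per_hour"]) if eligible else None

choice = pick_instance(benchmarks, SLA_MS)
print(choice)  # -> c5.2xlarge: fast enough, and cheapest among the instances that qualify
```

Edge targets would simply enter the same table with their own latency and cost columns, which is how cloud and edge options can be weighed in one place.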
And we have the OctoML cloud, which is a bunch of racks with a bunch Raspberry Pis and Nvidia Jetsons, and very soon, a bunch of mobile phones there too that can actually run the real hardware, and validate it, and test it out, so you can see that your model runs performant and economically enough in the cloud. And it can run on the edge devices-- >> You're a machine learning as a service. Would that be an accurate? >> That's part of it, because we're not doing the machine learning model itself. You come with a model and we make it deployable and make it ready to deploy. So, here's why it's important. Let me try. There's a large number of really interesting companies that do API models, as in API as a service. You have an NLP model, you have computer vision models, where you call an API and then point in the cloud. You send an image and you got a description, for example. But it is using a third party. Now, if you want to have your model on your infrastructure but having the same convenience as an API you can use our service. So, today, chances are that, if you have a model that you know that you want to do, there might not be an API for it, we actually automatically create the API for you. >> Okay, so that's why I get the DevOps agility for machine learning is a better description. Cause it's not, you're not providing the service. You're providing the service of deploying it like DevOps infrastructure as code. You're now ML as code. >> It's your model, your API, your infrastructure, but all of the convenience of having it ready to go, fully automatic, hands off. >> Cause I think what's interesting about this is that it brings the craftsmanship back to machine learning. Cause it's a craft. I mean, let's face it. >> Yeah. I want human brains, which are very precious resources, to focus on building those models, that is going to solve business problems. I don't want these very smart human brains figuring out how to scrub this into actually getting run the right way. This should be automatic. That's why we use machine learning, for machine learning to solve that. >> Here's an idea for you. We should write a book called, The Lean Machine Learning. Cause the lean startup was all about DevOps. >> Luis: We call machine leaning. No, that's not it going to work. (laughs) >> Remember when iteration was the big mantra. Oh, yeah, iterate. You know, that was from DevOps. >> Yeah, that's right. >> This code allowed for standing up stuff fast, double down, we all know the history, what it turned out. That was a good value for developers. >> I could really agree. If you don't mind me building on that point. You know, something we see as OctoML, but we also see at Madrona as well. Seeing that there's a trend towards best in breed for each one of the stages of getting a model deployed. From the data aspect of creating the data, and then to the model creation aspect, to the model deployment, and even model monitoring. Right? We develop integrations with all the major pieces of the ecosystem, such that you can integrate, say with model monitoring to go and monitor how a model is doing. Just like you monitor how code is doing in deployment in the cloud. >> It's evolution. I think it's a great step. And again, I love the analogy to the mainstream. I lived during those days. I remember the monolithic propriety, and then, you know, OSI model kind of blew it. But that OSI stack never went full stack, and it only stopped at TCP/IP. So, I think the same thing's going on here. 
You see some scalability around it to try to uncouple it, free it. >> Absolutely. And sustainability and accessibility to make it run faster and make it run on any deice that you want by any developer. So, that's the tagline. >> Luis Ceze, thanks for coming on. Professor. >> Thank you. >> I didn't know you were a professor. That's great to have you on. It was a masterclass in DevOps agility for machine learning. Thanks for coming on. Appreciate it. >> Thank you very much. Thank you. >> Congratulations, again. All right. OctoML here on theCube. Really important. Uncoupling the machine learning from the hardware specifically. That's only going to make space faster and safer, and more reliable. And that's where the whole theme of re:MARS is. Let's see how they fit in. I'm John for theCube. Thanks for watching. More coverage after this short break. >> Luis: Thank you. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Luis Ceze | PERSON | 0.99+ |
Qualcomm | ORGANIZATION | 0.99+ |
Luis | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Boston Dynamics | ORGANIZATION | 0.99+ |
two hours | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
2017 | DATE | 0.99+ |
Japan | LOCATION | 0.99+ |
Madrona Venture Capital | ORGANIZATION | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
four | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
University of Washington | ORGANIZATION | 0.99+ |
Today | DATE | 0.99+ |
Pepsi | ORGANIZATION | 0.99+ |
Both | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
First | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
SiMa.ai | ORGANIZATION | 0.99+ |
OctoML | TITLE | 0.99+ |
OctoML | ORGANIZATION | 0.99+ |
Intel | ORGANIZATION | 0.98+ |
one instance | QUANTITY | 0.98+ |
DevOps | TITLE | 0.98+ |
Madrona Venture Group | ORGANIZATION | 0.98+ |
Swami | PERSON | 0.98+ |
Madrona | ORGANIZATION | 0.98+ |
about six years | QUANTITY | 0.96+ |
Spot | ORGANIZATION | 0.96+ |
The Lean Machine Learning | TITLE | 0.95+ |
first | QUANTITY | 0.95+ |
theCUBE | ORGANIZATION | 0.94+ |
ARMs | ORGANIZATION | 0.94+ |
pineapple | ORGANIZATION | 0.94+ |
Raspberry Pis | ORGANIZATION | 0.92+ |
TensorFlow | TITLE | 0.89+ |
Snapdragon | ORGANIZATION | 0.89+ |
about three years old | QUANTITY | 0.89+ |
a couple years ago | DATE | 0.88+ |
two hyperscaler cloud providers | QUANTITY | 0.88+ |
first ones | QUANTITY | 0.87+ |
one of | QUANTITY | 0.85+ |
50 milliseconds | QUANTITY | 0.83+ |
Apache TVM | ORGANIZATION | 0.82+ |
both laugh | QUANTITY | 0.82+ |
three major cloud providers | QUANTITY | 0.81+ |
Chris Samuels, Slalom & Bethany Petryszak Mudd, Experience Design | Snowflake Summit 2022
(upbeat music) >> Good morning. Welcome back to theCUBE's continuing coverage of Snowflake Summit 22, live from Las Vegas. Lisa Martin, here with Dave Villante. We are at Caesar's Forum, having lots of great conversations. As I mentioned, this is just the start of day two, a tremendous amount of content yesterday. I'm coming at you today. Two guests join us from Slalom, now, we've got Chris Samuels, Principal Machine Learning, and Bethany Mudd, Senior Director, Experience Design. Welcome to theCube, guys. >> Hi, thanks for having us. >> Thank you. >> So, Slalom and Snowflake, over 200 joint customers, over 1,800 plus engagements, lots of synergies there, partnership. We're here today to talk about intelligent products. Talk to us about what- how do you define intelligent products, and then kind of break that down? >> Yeah, I can, I can start with the simple version, right? So, when we think about intelligent products, what they're doing, is they're doing more than they were explicitly programmed to do. So, instead of having a developer write all of these rules and have, "If this, then that," right, we're using data, and real time insights to make products that are more performing and improving over time. >> Chris: Yeah, it's really bringing together an ecosystem of a series of things to have integrated capabilities working together that themselves offer constant improvement, better understanding, better flexibility, and better usability, for everyone involved. >> Lisa: And there are four pillars of intelligent products that let's walk through those: technology, intelligence, experiences, and operations. >> Sure. So for technology, like most modern data architectures, it has sort of a data component and it has a modern cloud platform, but here, the key is is sort of things being disconnected, things being self contained, and decoupled, such that there's better integration time, better iteration time, more cross use, and more extensibility and scalability with the cloud native portion of that. >> And the intelligence piece? >> The intelligence piece is the data that's been processed by machine learning algorithms, or by predictive analytics that provides sort of the most valuable, or more- most insightful inferences, or conclusions. So, by bringing together again, the tech and the intelligence, that's, you know, sort of the, two of the pillars that begin to move forward that enable sort of the other two pillars, which are- >> Experiences and operations. >> Yeah. >> Perfect. >> And if we think about those, all of the technology, all of the intelligence in the world, doesn't mean anything if it doesn't actually work for people. Without use, there is no value. So, as we're designing these products, we want to make sure that they're supporting people. As we're automating, there are still people accountable for those tasks. There are still impacts to people in the real world. So, we want to make sure that we're doing that intentionally. So, we're building the greater good. >> Yeah. And from the operations perspective, it's you can think of traditional DevOps becoming MLOps, where there's an overall platform and a framework in place to manage not only the software components of it, but the overall workflow, and the data flow, and the model life cycle such that we have tools and people from different backgrounds and different teams developing and maintaining this than you would previously see with something like product engineering. 
>> Dave: Can you guys walk us through an example of how you work with a customer? I'm envisioning, you know, meeting with a lot of yellow stickies, and prioritization, and I don't know if that's how it works, but take us through like the start and the sequence. >> You have my heart, I am a workshop lover. Anytime you have the scratch off, like, lottery stickers on something, you know it's a good one. But, as we think about our approach, we typically start with either a discovery or mobilized phase. We're really, we're starting by gathering context, and really understanding the business, the client, the users, and that full path the value. Who are all the teams that are going to have to come together and start working together to deliver this intelligent product? And once we've got that context, we can start solutioning and ideating on that. But, really it comes down to making sure that we've earned the right, and we've got the smarts to move into the space intelligently. >> Yeah, and, truly, it's the intelligent product itself is sort of tied to the use case. The business knows what the most- what is potentially the most valuable here. And so, so by communicating and working and co-creating with the business, we can define then, okay, here are the use cases and here are where machine learning and the overall intelligent product can maybe add more disruptive value than others. By saying, let's pretend that, you know, maybe your ML model or your predictive analytics is like a dial that we could turn up to 11. Which one of those dials turning turned up to 11 could add the most value or disruption to your business? And therefore, you know, how can we prioritize and then work toward that pie-in-the-sky goal. >> Okay. So the client comes and says, "This is the outcome we want." Okay, and then you help them. You gather the right people, sort of extract all the little, you know, pieces of knowledge, and then help them prioritize so they can focus. And then what? >> Yeah. So, from there we're going to take the approach that seeing is solving. We want to make sure that we get the right voices in the room, and we've got the right alignment. So, we're going to map out everything. We're going to diagram what that experience is going to look like, how technology's going to play into it, all of the roles and actors involved. We're going to draw a map of the ecosystem that everyone can understand, whether you're in marketing, or the IT sort of area, once again, so we can get crisp on that outcome and how we're going to deliver it. And, from there, we start building out that roadmap and backlog, and we deliver iteratively. So, by not thinking of things as getting to the final product after a three year push, we really want to shrink those build, measure, and learn loops. So, we're getting all of that feedback and we're listening and evolving and growing the same way that our products are. >> Yeah. Something like an intelligent product is is pretty heady. So it's a pretty heavy concept to talk about. And so, the question becomes, "What is the outcome that ultimately needs to be achieved?" And then, who, from where in the business across the different potentially business product lines or business departments needs to be brought together? What data needs to be brought together? Such that the people can understand how they themselves can shape. The stakeholders can, how the product itself can be shaped. And therefore, what is the ultimate outcome, collectively, for everybody involved? 
'Cause while your data might be fueling, you know, finances or someone else's intelligence and that kind of thing, bringing it all together allows for a more seamless product that might benefit more of the overall structure of the organization. >> Can you talk a little bit about how Slalom and Snowflake are enabling, like a customer example? A customer to take that data, flex that muscle, and create intelligent products that delight and surprise their customers? >> Chris: Yeah, so here's a great story. We worked to co-create with Kawasaki Heavy Industries. So, we created an intelligent product with them to enable safer rail travel, more preventative, more efficient, preventative maintenance, and a more efficient and real time track status feedback to the rail operators. So, in this case, we brought, yeah, the intelligent product itself was, "Okay, how do you create a better rail monitoring service?" And while that itself was the primary driver of the data, multiple other parts of the organization are using sort of the intelligent product as part of their now daily routine, whether it's from the preventative maintenance perspective, or it's from route usage, route prediction. Or, indeed, helping KHI move forward into making trains a more software centered set of products in the future. >> So, taking that example, I would imagine when you running- like I'm going to call that a project. I hope that's okay. So, when I'm running a project, that I would imagine that sometimes you run into, "Oh, wow. Okay." To really be successful at this, the company- project versus whole house. The company doesn't have the right data architecture, the right skills or the right, you know, data team. Now, is it as simple as, oh yeah, just put it all into Snowflake? I doubt it. So how do you, do you encounter that often? How do you deal with that? >> Bethany: It's a journey. So, I think it's really about making sure we're meeting clients where they are. And I think that's something that we actually do pretty well. So, as we think about delivery co-creation, and co-delivering is a huge part of our model. So, we want to make sure that we have the client teams, with us. So, as we start thinking about intelligent products, it can be incorporating a small feature, with subscription based services. It doesn't have to be creating your own model and sort of going deep. It really does come down to like what value do you want to get out of this? Right? >> Yeah. It is important that it is a journey, right? So, it doesn't have to be okay, there's a big bang applied to you and your company's tech industry or tech ecosystem. You can just start by saying, "Okay, how will I bring my data together at a data lake? How do I see across my different pillars of excellence in my own business?" And then, "How do I manage, potentially, this in an overall MLOps platform such that it can be sustainable and gather more insights and improve itself with time, and therefore be more impactful to the ultimate users of the tool?" 'Cause again, as Bethany said that without use, these things are just tools on the shelf somewhere that have little value. >> So, it's a journey, as you both said, completely agree with that. It's a journey that's getting faster and faster. Because, I mean, we've seen so much acceleration in the last couple of the years, the consumer demands have massively changed. >> Bethany: Absolutely. 
>> In every industry, how do Slalom and Snowflake come together to help businesses define the journey, but also accelerate it, so that they can stay ahead or get ahead of the competition? >> Yeah. So, one thing I think is interesting about the technology field right now is I feel like we're at the point where it's not the technology or the tools that's limiting us or, you know, constraining what we can build, it's our imaginations. Right? And, when I think about intelligent products and all of the things that are capable, that you can achieve with AI and ML, that's not widely known. There's so much tech jargon. And, we put all of those statistical words on it, and you know the things you don't know. And, instead, really, what we're doing is we're providing different ways to learn and grow. So, I think if we can demystify and humanize some of that language, I really would love to see all of these companies better understand the crayons and the tools in their toolbox. >> Speaking from a creative perspective, I love it. >> No, And I'll do the tech nerd bit. So, there is- you're right. There is a portion where you need to bring data together, and tech together, and that kind of thing. So, something like Snowflake is a great enabler for how to actually bring the data of multiple parts of an organization together into, you know, a data warehouse, or a data lake, and then be able to manage that sort of in an MLOps platform, particularly with some of the press that Snowflake has put out this week. Things becoming more Python-native, allowing for more ML experimentation, and some more native insights on the platform, rather than going off Snowflake platform to do some of that kind of thing. Makes Snowflake an incredibly valuable portion of the data management and of the tech and of the engineering of the overall product. >> So, I agree, Bethany, lack of imagination sometimes is the barrier we get so down into the weeds, but there's also lack of skills, as mentioned the organizational, you know, structural issues, politics, you know, whatever it is, you know, specific agendas, how do you guys help with that? Can, will you bring in, you know, resources to help and fill gaps? >> Yeah, so we will bring in a cross-disciplinary team of experts. So, you will see an experienced designer, as well as your ML architects, as well as other technical architects, and what we call solution owners, because we want to make sure that we've got a lot of perspectives, so we can see that problem from a lot of different angles. The other thing that we're bringing in is a repeatable process, a repeatable engineering methodology, which, when you zoom out, and you look at it, it doesn't seem like that big of a deal. But, what we're doing, is we're training against it. We're building tools, we're building templates, we're re-imagining what our deliverables look like for intelligent products, just so, we're not only speeding up the development and getting to those outcomes faster, but we're also continuing to grow and we can gift those things to our clients, and help support them as well. >> And not only that, what we do at Slalom is we want to think about transition from the beginning. 
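Chris's point about Snowflake becoming more Python-native is the kind of workflow Snowpark for Python enables: do the filtering and projection inside the warehouse and only pull a frame into Python when it is time to train. The sketch below is a generic illustration; the connection parameters, table, and column names are placeholders (loosely echoing the rail-monitoring example), not a real configuration.

```python
# Hedged sketch: prepare training data in Snowflake via Snowpark, then hand it to Python.
# All identifiers (account, table, columns) are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Filter and project inside Snowflake; nothing moves until to_pandas() is called.
training_df = (
    session.table("SENSOR_READINGS")
    .filter(col("EVENT_DATE") >= "2022-01-01")
    .select("TRACK_SEGMENT", "VIBRATION", "TEMPERATURE", "FAILURE_FLAG")
)

pdf = training_df.to_pandas()  # local frame for scikit-learn, XGBoost, etc.
session.close()
```

The design point is that the heavy lifting stays where the data lives; only the prepared frame crosses the wire, which is what makes experimentation on the platform practical.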
And so, by having all the stakeholders in the room from the earliest point, both the business stakeholders, the technical stakeholders, if they have data scientists, if they have engineers, who's going to be taking this and maintaining this intelligent product long after we're gone, because again, we will transition, and someone else will be taking over the maintenance of this team. One, they will understand, you know, early from beginning the path that it is on, and be more capable of maintaining this, and two, understand sort of the ethical concerns behind, okay, here's how parts of your system affect this other parts of the system. And, you know, sometimes ML gets some bad press because it's misapplied, or there are concerns, or models or data are used outside of context. And there's some, you know, there are potentially some ill effects to be had. By bringing those people together much earlier, it allows for the business to truly understand and the stakeholders to ask the questions that they- that need to be continually asked to evaluate, is this the right thing to do? How do I, how does my part affect the whole? And, how do I have an overall impact that is in a positive way and is something, you know, truly being done most effectively. >> So, that's that knowledge transfer. I hesitate to even say that because it makes it sound so black and white, because you're co-creating here. But, essentially, you're, you know, to use the the cliche, you're teaching them how to fish. Not, you know, going to ongoing, you know, do the fishing for them, so. >> Lisa: That thought diversity is so critical, as is the internal alignment. Last question for you guys, before we wrap here, where can customers go to get started? Do they engage Slalom, Snowflake? Can they do both? >> Chris: You definitely can. We can come through. I mean, we're fortunate that snowflake has blessed us with the title of partner of the year again for the fifth time. >> Lisa: Congratulations. >> Thank you, thank you. We are incredibly humbled in that. So, we would do a lot of work with Snowflake. You could certainly come to Slalom, any one of our local markets, or build or emerge. We'll definitely work together. We'll figure out what the right team is. We'll have lots and lots of conversations, because it is most important for you as a set of business stakeholders to define what is right for you and what you need. >> Yeah. Good stuff, you guys, thank you so much for joining Dave and me, talking about intelligent products, what they are, how you co-design them, and the impact that data can make with customers if they really bring the right minds together and get creative. We appreciate your insights and your thoughts. >> Thank you. >> Thanks for having us guys. Yeah. >> All right. For Dave Villante, I am Lisa Martin. You're watching theCUBE's coverage, day two, Snowflake Summit 22, from Las Vegas. We'll be right back with our next guest. (upbeat music)
Nandi Leslie, Raytheon | WiDS 2022
(upbeat music) >> Hey everyone. Welcome back to theCUBE's live coverage of Women in Data Science, WiDS 2022, coming to live from Stanford University. I'm Lisa Martin. My next guest is here. Nandi Leslie, Doctor Nandi Leslie, Senior Engineering Fellow at Raytheon Technologies. Nandi, it's great to have you on the program. >> Oh it's my pleasure, thank you. >> This is your first WiDS you were saying before we went live. >> That's right. >> What's your take so far? >> I'm absolutely loving it. I love the comradery and the community of women in data science. You know, what more can you say? It's amazing. >> It is. It's amazing what they built since 2015, that this is now reaching 100,000 people 200 online event. It's a hybrid event. Of course, here we are in person, and the online event going on, but it's always an inspiring, energy-filled experience in my experience of WiDS. >> I'm thoroughly impressed at what the organizers have been able to accomplish. And it's amazing, that you know, you've been involved from the beginning. >> Yeah, yeah. Talk to me, so you're Senior Engineering Fellow at Raytheon. Talk to me a little bit about your role there and what you're doing. >> Well, my role is really to think about our customer's most challenging problems, primarily at the intersection of data science, and you know, the intersectional fields of applied mathematics, machine learning, cybersecurity. And then we have a plethora of government clients and commercial clients. And so what their needs are beyond those sub-fields as well, I address. >> And your background is mathematics. >> Yes. >> Have you always been a math fan? >> I have, I actually have loved math for many, many years. My dad is a mathematician, and he introduced me to, you know mathematical research and the sciences at a very early age. And so, yeah, I went on, I studied in a math degree at Howard undergrad, and then I went on to do my PhD at Princeton in applied math. And later did a postdoc in the math department at University of Maryland. >> And how long have you been with Raytheon? >> I've been with Raytheon about six years. Yeah, and before Raytheon, I worked at a small to midsize defense company, defense contracting company in the DC area, systems planning and analysis. And then prior to that, I taught in a math department where I also did my postdoc, at University of Maryland College Park. >> You have a really interesting background. I was doing some reading on you, and you have worked with the Navy. You've worked with very interesting organizations. Talk to the audience a little bit about your diverse background. >> Awesome yeah, I've worked with the Navy on submarine force security, and submarine tracking, and localization, sensor performance. Also with the Army and the Army Research Laboratory during research at the intersection of machine learning and cyber security. Also looking at game theoretic and graph theoretic approaches to understand network resilience and robustness. I've also supported Department of Homeland Security, and other government agencies, other governments, NATO. Yeah, so I've really been excited by the diverse problems that our various customers have you know, brought to us. >> Well, you get such great experience when you are able to work in different industries and different fields. And that really just really probably helps you have such a much diverse kind of diversity of thought with what you're doing even now with Raytheon. 
>> Yeah, it definitely does help me build like a portfolio of topics that I can address. And then when new problems emerge, then I can pull from a toolbox of capabilities. And, you know, the solutions that have previously been developed to address those wide array of problems, but then also innovate new solutions based on those experiences. So I've been really blessed to have those experiences. >> Talk to me about one of the things I heard this morning in the session I was able to attend before we came to set was about mentors and sponsors. And, you know, I actually didn't know the difference between that until a few years ago. But it's so important. Talk to me about some of the mentors you've had along the way that really helped you find your voice in research and development. >> Definitely, I mean, beyond just the mentorship of my my family and my parents, I've had amazing opportunities to meet with wonderful people, who've helped me navigate my career. One in particular, I can think of as and I'll name a number of folks, but Dr. Carlos Castillo-Chavez was one of my earlier mentors. I was an undergrad at Howard University. He encouraged me to apply to his summer research program in mathematical and theoretical biology, which was then at Cornell. And, you know, he just really developed an enthusiasm with me for applied mathematics. And for how it can be, mathematics that is, can be applied to epidemiological and theoretical immunological problems. And then I had an amazing mentor in my PhD advisor, Dr. Simon Levin at Princeton, who just continued to inspire me, in how to leverage mathematical approaches and computational thinking for ecological conservation problems. And then since then, I've had amazing mentors, you know through just a variety of people that I've met, through customers, who've inspired me to write these papers that you mentioned in the beginning. >> Yeah, you've written 55 different publications so far. 55 and counting I'm sure, right? >> Well, I hope so. I hope to continue to contribute to the conversation and the community, you know, within research, and specifically research that is computationally driven. That really is applicable to problems that we face, whether it's cyber security, or machine learning problems, or others in data science. >> What are some of the things, you're giving a a tech vision talk this afternoon. Talk to me a little bit about that, and maybe the top three takeaways you want the audience to leave with. >> Yeah, so my talk is entitled "Unsupervised Learning for Network Security, or Network Intrusion Detection" I believe. And essentially three key areas I want to convey are the following. That unsupervised learning, that is the mathematical and statistical approach, which tries to derive patterns from unlabeled data is a powerful one. And one can still innovate new algorithms in this area. Secondly, that network security, and specifically, anomaly detection, and anomaly-based methods can be really useful to discerning and ensuring, that there is information confidentiality, availability, and integrity in our data >> A CIA triad. >> There you go, you know. And so in addition to that, you know there is this wealth of data that's out there. It's coming at us quickly. You know, there are millions of packets to represent communications. And that data has, it's mixed, in terms of there's categorical or qualitative data, text data, along with numerical data. And it is streaming, right. 
And so we need methods that are efficient, and that are capable of being deployed real time, in order to detect these anomalies, which we hope are representative of malicious activities, and so that we can therefore alert on them and thwart them. >> It's so interesting that, you know, the amount of data that's being generated and collected is growing exponentially. There's also, you know, some concerning challenges, not just with respect to data that's reinforcing social biases, but also with cyber warfare. I mean, that's a huge challenge right now. We've seen from a cybersecurity perspective in the last couple of years during the pandemic, a massive explosion in anomalies, and in social engineering. And companies in every industry have to be super vigilant, and help the people understand how to interact with it, right. There's a human component. >> Oh, for sure. There's a huge human component. You know, there are these phishing attacks that are really a huge source of the vulnerability that corporations, governments, and universities face. And so to be able to close that gap and the understanding that each individual plays in the vulnerability of a network is key. And then also seeing the link between the network activities or the cyber realm, and physical systems, right. And so, you know, especially in cyber warfare as a remote cyber attack, unauthorized network activities can have real implications for physical systems. They can, you know, stop a vehicle from running properly in an autonomous vehicle. They can impact a SCADA system that's, you know there to provide HVAC for example. And much more grievous implications. And so, you know, definitely there's the human component. >> Yes, and humans being so vulnerable to those social engineering that goes on in those phishing attacks. And we've seen them get more and more personal, which is challenging. You talking about, you know, sensitive data, personally identifiable data, using that against someone in cyber warfare is a huge challenge. >> Oh yeah, certainly. And it's one that computational thinking and mathematics can be leveraged to better understand and to predict those patterns. And that's a very rich area for innovation. >> What would you say is the power of computational thinking in the industry? >> In industry at-large? >> At large. >> Yes, I think that it is such a benefit to, you know, a burgeoning scientist, if they want to get into industry. There's so many opportunities, because computational thinking is needed. We need to be more objective, and it provides that objectivity, and it's so needed right now. Especially with the emergence of data, and you know, across industries. So there are so many opportunities for data scientists, whether it's in aerospace and defense, like Raytheon or in the health industry. And we saw with the pandemic, the utility of mathematical modeling. There are just so many opportunities. >> Yeah, there's a lot of opportunities, and that's one of the themes I think, of WiDS, is just the opportunities, not just in data science, and for women. And there's obviously even high school girls that are here, which is so nice to see those young, fresh faces, but opportunities to build your own network and your own personal board of directors, your mentors, your sponsors. There's tremendous opportunity in data science, and it's really all encompassing, at least from my seat. >> Oh yeah, no I completely agree with that. 
>> What are some of the things that you've heard at this WiDS event that inspire you going, we're going in the right direction. If we think about International Women's Day tomorrow, "Breaking the Bias" is the theme, do you think we're on our way to breaking that bias? >> Definitely, you know, there was a panel today talking about the bias in data, and in a variety of fields, and how we are, you know discovering that bias, and creating solutions to address it. So there was that panel. There was another talk by a speaker from Pinterest, who had presented some solutions that her, and her team had derived to address bias there, in you know, image recognition and search. And so I think that we've realized this bias, and, you know, in AI ethics, not only in these topics that I've mentioned, but also in the implications for like getting a loan, so economic implications, as well. And so we're realizing those issues and bias now in AI, and we're addressing them. So I definitely am optimistic. I feel encouraged by the talks today at WiDS that you know, not only are we recognizing the issues, but we're creating solutions >> Right taking steps to remediate those, so that ultimately going forward. You know, we know it's not possible to have unbiased data. That's not humanly possible, or probably mathematically possible. But the steps that they're taking, they're going in the right direction. And a lot of it starts with awareness. >> Exactly. >> Of understanding there is bias in this data, regardless. All the people that are interacting with it, and touching it, and transforming it, and cleaning it, for example, that's all influencing the veracity of it. >> Oh, for sure. Exactly, you know, and I think that there are for sure solutions are being discussed here, papers written by some of the speakers here, that are driving the solutions to the mitigation of this bias and data problem. So I agree a hundred percent with you, that awareness is you know, half the battle, if not more. And then, you know, that drives creation of solutions >> And that's what we need the creation of solutions. Nandi, thank you so much for joining me today. It was a pleasure talking with you about what you're doing with Raytheon, what you've done and your path with mathematics, and what excites you about data science going forward. We appreciate your insights. >> Thank you so much. It was my pleasure. >> Good, for Nandi Leslie, I'm Lisa Martin. You're watching theCUBE's coverage of Women in Data Science 2022. Stick around, I'll be right back with my next guest. (upbeat flowing music)
Venkat Venkataramani, Rockset & Carl Sjogreen, Seesaw | AWS Startup Showcase
(mid tempo digital music) >> Welcome to today's session of theCUBE's presentation of the AWS Startup Showcase. This is New Breakthroughs in DevOps, Data Analytics, and Cloud Management Tools. The segment is featuring Rockset and we're going to be talking about data analytics. I'm your host, Lisa Martin, and today I'm joined by one of our alumni, Venkat Venkataramani, the co-founder and CEO of Rockset, and Carl Sjogreen, the co-founder and CPO of Seesaw Learning. We're going to be talking about the fast path to real-time analytics at Seesaw. Guys, thanks so much for joining me today. >> Thanks for having us. >> Thank you for having us. >> Carl, let's go ahead and start with you. Give us an overview of Seesaw. >> Yeah, so Seesaw is a platform that brings educators, students, and families together to create engaging learning experiences. We're really focused on elementary-aged students, and have a suite of creative tools and engaging learning activities that help get their learning and ideas out into the world and share that with family members. >> And this is used by over 10 million teachers and students and family members across 75% of the schools in the US and 150 countries. So you've got a great big global presence. >> Yeah, it's really an honor to serve so many teachers and students and families. >> I can imagine even more so now with the remote learning being such a huge focus for millions and millions across the country. Carl, let's go ahead and get the backstory. Let's talk about data. You've got a ton of data on how your product is being used across millions of data points. Talk to me about the data goals that you set prior to using Rockset. >> Yeah, so, as you can imagine with that many users interacting with Seesaw, we have all sorts of information about how the product is being used, which schools, which districts, what those usage patterns look like. And before we started working with Rockset, a lot of our data infrastructure was really custom built and cobbled together a bit over the years. We had a bunch of batch jobs processing data, we were using some tools, like Athena, to make that data visible to our internal customers. But we had a very sort of disorganized data infrastructure that really, as we've grown, we realized was getting in the way of helping our sales and marketing and support and customer success teams really service our customers in the way that we wanted to. >> So operationalizing that data to better serve internal users like sales and marketing, as well as your customers. Give me a picture, Carl, of those key technology challenges that you knew you needed to solve. >> Yeah, well, at the simplest level, just understanding how an individual school or district is using Seesaw, where they're seeing success, where they need help, is a critical question for our customer support teams and frankly for our school and district partners. A lot of what they're asking us for is data about how Seesaw is being used in their school, so that they can help target interventions. They can understand where there is an opportunity to double down on where they are seeing success. >> Now, before you found Rockset, you did consider a more traditional data warehouse approach, but decided against it. Talk to me about the decision. Why was a traditional data warehouse not the right approach? >> Well, one of the key drivers is that we are heavy users of DynamoDB. That's our main data store and has been a tremendous aid in our scaling.
Last year, with the transition to remote learning, we scaled most of our metrics by 10X, and Dynamo didn't skip a beat; it was fantastic in that environment. But when we started really thinking about how to build a data infrastructure on top of it, using a sort of traditional data warehouse, a traditional ETL pipeline, it was going to require a fair amount of work for us to really build that out on our own on top of Dynamo. And one of the key advantages of Rockset was that it was basically plug and play for our Dynamo instance. We turned Rockset on, connected it to our DynamoDB, and were able within hours to start querying that data in ways that we hadn't before. >> Venkat, let's bring you into the conversation. Let's talk about the problems that you're solving for Seesaw and also the complementary relationship that you have with DynamoDB. >> Definitely. I think, Seesaw, big fan of the product. We have two kids in elementary school that are active users, so it's a pleasure to partner with Seesaw here. If you really think about what they're asking for, what Carl's vision was for their data stack, the way we look at it is business observability. They have many customers and they want to make sure that they're doing the right thing and servicing them better. And all of their data is in a very scalable, large-scale NoSQL store like DynamoDB. So it makes it very easy for you to build applications, but it's very, very hard to do analytics on it. Rockset comes with all batteries included, including real-time data connectors with Amazon DynamoDB. And so literally you can just point Rockset at any of your Dynamo tables, and even though it's a NoSQL store, Rockset will in real time replicate the data and automatically convert them into fast SQL tables for you to do analytics on. And so within one to two seconds of data getting modified or new data arriving in DynamoDB from your application, within one to two seconds, it's available for query processing in Rockset with full-featured SQL. And not just that, I think another very important aspect that was very important for Seesaw is not just that they wanted to do batch analytics. They wanted their analytics to be interactive, because a lot of the time we just say something is wrong. It's good to know that, but oftentimes you have a lot more follow-up questions. Why is it wrong? When did it go wrong? Is it a particular release that we did? Is it something specific to the school district? Are they trying to use some part of the product more than other parts of the product and struggling with it? Or anything like that. It's really, I think, it comes down to Seesaw's and Carl's vision of what that data stack should serve and how we can use that to better serve the customers. And Rockset's indexing technology and whatnot allows you to not only get real-time in terms of data freshness, but also the interactivity that comes with ad-hoc drilling down and slicing-and-dicing kind of analytics that is just our bread and butter. And so that is really how I see not only us partnering with Seesaw and allowing them to get the business observability they care about, but also complementing Dynamo transactional databases that are massively scalable, born in the cloud, like DynamoDB. >> Carl, talk to me about that complementary relationship that Venkat just walked us through and how that is really critical to what you're trying to deliver at Seesaw.
Yeah, well, just to reiterate what Venkat said, I think we have so much data that any question you ask about it immediately leads to five other questions about it. We have a very seasonal business, as one example. Obviously in the summertime when kids aren't in school, we have very different usage patterns than during this time right now, which is our critical back-to-school season, versus a steady state, maybe in the middle of the school year. And so really understanding how data is trending over time, how it compares year over year, what might be driving those things, is something that frankly we just haven't had the tools to really dig into. There's a lot about that that we are still beginning to understand and dig into more. And so this iterative exploration of data is incredibly powerful to expose to our product team, our sales and marketing teams, to really understand where Seesaw's working and where we still have work to do with our customers. And that's so critical to us doing a good job for schools and districts. >> And how long have you been using Rockset, Carl? >> It's about six months now, maybe a little bit longer. >> Okay, so during the pandemic. So talk to me a little bit about the last 18 months, where we saw the massive overnight transition to remote learning, and there's still a lot of places that are in that or a hybrid environment. How critical was it to have Rockset to fuel real-time analytics interactivity, particularly in a very challenging last 18-month time period? >> The last 18 months have been hard for everyone, but I think they have hit teachers and schools maybe harder than anyone; they have been struggling. Between the overnight transition to remote learning, the challenges of returning to the classroom, and hybrid learning, teachers and schools are being asked to stretch in ways they have never been stretched before. And so our real focus last year was in doing whatever we could to help them manage those transitions. And data around student attendance in a remote learning situation, data around which kids were completing lessons and which kids weren't, was really critical data to provide to our customers. And a lot of our data infrastructure had to be built out to support answering those questions in this really crazy time for schools. >> I want to talk about the data stack, but I'd like to go back to Venkat, 'cause what's interesting about this story is Seesaw is a customer of Rockset, and Venkat is a customer of Seesaw. Talk to me, Venkat, about how this has been helpful in the remote learning that your kids have been going through the last year and a half. >> Absolutely. I have two sons, nine and ten years old, and they are in fourth and fifth grade now. And I still remember when I told them that Seesaw is considering using Rockset for the analytics, they were thrilled, they were overjoyed, because finally they understood what I do for a living. (chuckling) And so that was really amazing. I think it was a fantastic dual because, for the first time, I actually understood what kids do at school. I think every week at the end of the week, we would use Seesaw to just go look at, "Hey, well, let's see what you did last week." And we would see not only the prompts and what the children were doing in the classroom, but also the comments from the educators, and then they comment back. And then we were like, "Hey, this is not how you speak to an educator."
So it was really amazing to actually go through that, and so we are very, very big fans of the product. We really look forward to using it, whether it is remote learning or not; we try to use it as a family, me, my wife and the kids, as much as possible. And it's a very constant topic of conversation every week when we are working with the kids and seeing how we can help them. >> So from an observability perspective, it sounds like it's giving parents and teachers that visibility that really, without it, you don't get. >> That's absolutely correct. I think the product itself is about making connections, giving people more visibility into things that are constantly happening, but you're not in the know. Like, before Seesaw, I used to ask the kids, "How was school today? What happened in the class?" And they'll say, "It was okay." It would be a very short answer; it wouldn't really have the depth that we are able to get from Seesaw. So, absolutely. And so it's only right that that level of observability and that level of... is also available for their business teams and the support teams, so that they can also service all the organizations that Seesaw's working with, not only the parents and the educators and the students that are actually using the product. >> Carl, let's talk about that data stack. And then I'm going to open the can on some of those impacts that it's making to your internal folks. We talked about DynamoDB, but give me an audio-visual picture of the data stack. >> Yeah. So, we use DynamoDB as our database of record. We're now in the process of centralizing all of our analytics into Rockset, so that rather than having different batch jobs in different systems querying that data in different ways, we're trying to really set Rockset up as the source of truth for analytics on top of Dynamo. And then on top of Rockset, exposing that data, both to internal customers for those interactive, iterative SQL-style queries, but also bridging that data into the other systems our business users use. So Salesforce, for example, is a big internal tool, and we have that data now piped into Salesforce so that a sales rep can run a report on a prospect to reach out to, or a customer that needs help getting started with Seesaw. And it's all plumbed through the Rockset infrastructure. >> From an outcome standpoint, I mentioned sales and marketing getting that visibility, being able to act on real-time data. How has it impacted sales in the last year and a half? Six months, rather, since it's now six months you've been using it. >> Well, I don't know if I can draw a direct line between those things, but it's been a very busy year for Seesaw, as schools have transitioned to remote learning. And our business is really largely driven by teachers discovering our free product, finding it valuable in their classroom, and then asking their school or district leadership to purchase a school-wide subscription. It's a very bottoms-up sales motion. And so data on where teachers are starting to use Seesaw is the key input into our sales and marketing discussions with schools and districts. And so understanding that data quickly in real time is a key part of our sales strategy and a key part of how we grow at Seesaw over time. >> And it sounds like Rockset is empowering those users, the sales and marketing folks, to really fine-tune their interactions with existing customers, prospective customers. And I imagine you on the product side in terms of tuning the product.
What are some of the things, Carl, that you've learned in the last six months that have helped you make better decisions on what you want Seesaw to deliver in the future? >> Well, one of the things that I think has been really interesting is how usage patterns have changed between the classroom and remote learning. We saw per-student usage of Seesaw increase dramatically over the past year, and really understanding what that means for how the product needs to evolve to better meet teacher needs, to help organize that information, since there's now a lot more of it, really helped motivate our product roadmap over the last year. We launched a new progress dashboard that helps teachers get an at-a-glance view of what's happening in their classroom. That was really in direct response to the changing usage patterns that we were able to understand with better insights into data. >> And those insights allow you to pivot and iterate on the product. Venkat, I want to just go back to the AWS relationship for a second. You both talked about the complementary nature of Rockset and DynamoDB. Here we are at the AWS Startup Showcase. Venkat, just give the audience a little overview of the partnership that you guys have with AWS. >> Rockset fully runs on AWS, so we are a customer of AWS. We are also a partner. There are lots of amazing cloud data products that AWS has, including DynamoDB and AWS Kinesis, ones with which we have built-in integrations. So if you're managing data in AWS, we complement that, and we can provide very, very fast interactive real-time analytics on all of your datasets. So the partnership has been wonderful; we're very excited to be in the Startup Showcase. And so I hope this continues for years to come. >> Let's talk about the synergies between Rockset and Seesaw for a second. I know we talked about the huge value of real-time analytics, especially in today's world, where we've learned many things in the last year and a half, including that real-time analytics is no longer a nice-to-have for a lot of industries, 'cause I think, Carl, as you said, if you can't get access to the data, then there's questions we can't ask. Or we can't iterate on operations; if we wait seconds for every query to load, then there's questions we can't ask. Talk to me, Venkat, about how Rockset is benefiting from what you're learning from Seesaw's usage of the technology? >> Absolutely. I mean, if you go to the first part of the question on why businesses really go after real time: what is the drive here? You might have heard the phrase, the world is going from batch to real-time. What does it really mean? What's the driving factor there? Our take on it is, I think it's about accelerating growth. Seesaw's product being amazing, it'll continue to grow, it'll continue to be a very, very important product in the world. With or without Rockset, that will be true. The way we look at it is, once they have real-time business observability, that inherent growth that they have, they can reach more people, they can put their product in the hands of more and more people, they can iterate faster. And at the end of the day, it is really about having this very interesting platform, very interesting architecture to really make a lot more data-driven decisions and iterate much more quickly. And so in batch analytics, if you were able to make, let's say, five decisions a quarter, in real-time analytics you can make five decisions a day. So that's how we look at it.
So that is really, I think, what is the underpinnings of why the world is going from batch to real time. And what have we learned from having a Seesaw as a customer? I think Seesaw has probably one of the largest DynamoDB installations that we have looked at. I think, we're talking about billions and billions of records, even though they have tens of millions of active users. And so I think it has been an incredible partnership working with them closely, and they have had a tremendous amount of input on our product roadmap and some of that like role-based access control and other things have already being a part of the product, thanks to the continuous feedback we get from their team. So we're delighted about this partnership and I am sure there's more input that they have, that we cannot wait to incorporate in our roadmap. >> I imagine Venkat as well, you as the parent user and your kids, you probably have some input that goes to the Seesaw side. So this seems like a very synergistic relationship. Carl, a couple more questions for you. I'd love to know how in this... Here we are kind of back to school timeframe, We've got a lot of students coming back, they're still remote learning. What are some of the things that you're excited about for this next school year that do you think Rockset is really going to fuel or power for Seesaw? >> Yeah, well, I think schools are navigating yet another transition now, from a world of remote learning to a world of back to the classroom. But back to the classroom feels very different than it does at any other back to school timeframe. Many of our users are in first or second grade. We serve early elementary age ranges and some of those students have never been in a classroom before. They are entering second grade and never having been at school. And that's hard. That's a hard transition for teachers in schools to make. And so as a partner to those schools, we want to do everything we can to help them manage that transition, in general and with Seesaw in particular. And the more we can understand how they're using Seesaw, where they're struggling with Seesaw, as part of that transition, the more we can be a good partner to them and help them really get the most value out of Seesaw, in this new world that we're living in, which is sort of like normal, and in many ways not. We are still not back to normal as far as schools are concerned. >> I'm sure though, the partnership that you provide to the teachers and the students can be a game changer in these, and still navigating some very uncertain times. Carl, last question for you. I want you to point folks to where they can go to learn more about Seesaw, and how for all those parents watching, they might be able to use this with their families. >> Yeah, well, seesaw.me is our website, and you can go to seesaw.me and learn more about Seesaw, and if any of this sounds interesting, ask your teacher, if they're not using Seesaw, to give it a look. >> Seesaw.me, excellent. Venkat, same question for you. Where do you want folks to go to learn more about Rockset and its capabilities? >> Rockset.com is our website. There is a free trial for... $300 worth of free trial credits. It's a self service platform, you don't need to talk to anybody, all the pricing and everything is out there. So, if real-time analytics and modernizing your data stack is on your roadmap, go give it a spin. >> Excellent guys. 
Thanks so much for joining me today, talking about real-time analytics, how it's really empowering both the data companies and the users to be able to navigate in challenging waters. Venkat, thank you, Carl, thank you for joining us. >> Thanks everyone. >> Thanks Lisa. >> For my guests, this has been our coverage of the AWS Startup Showcase, New Breakthroughs in DevOps, Data Analytics and Cloud Management Tools. I am Lisa Martin. Thanks for watching. (mid tempo music)
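The pattern Venkat describes above, where a DynamoDB table is continuously synced into Rockset and becomes queryable in SQL within a second or two of a write, is what makes the interactive drill-downs Carl's teams rely on possible. The snippet below is purely illustrative: the workspace, collection, and field names are invented rather than Seesaw's actual schema, and the call that would execute the statement (the Rockset console, its query API, or an SDK) is deliberately left out to avoid guessing at signatures.

```python
# Illustrative only: the kind of interactive SQL that becomes possible once a
# DynamoDB table is continuously synced into a Rockset collection. The
# "commons.seesaw_activity" collection and its fields are hypothetical.
ACTIVITY_BY_SCHOOL = """
SELECT
    school_id,
    COUNT(*)                   AS posts_last_7d,
    COUNT(DISTINCT student_id) AS active_students
FROM
    commons.seesaw_activity    -- hypothetical collection synced from DynamoDB
WHERE
    _event_time > CURRENT_TIMESTAMP() - INTERVAL 7 DAY
GROUP BY
    school_id
ORDER BY
    posts_last_7d DESC
LIMIT 25
"""

print(ACTIVITY_BY_SCHOOL)
```

Because the synced data is indexed, a question like this can be re-filtered and re-grouped interactively, which is the "five decisions a day instead of five a quarter" point Venkat makes above.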
Shiv Gupta, U of Digital | Quantcast Industry Summit
(upbeat electronic music) >> Welcome back to the Quantcast Industry Summit on the demise of third-party cookies. The Cookie Conundrum, A Recipe for Success. I'm John Furrier, host of theCUBE. The changing landscape of advertising is here, and Shiv Gupta, founder of U of Digital is joining us. Shiv, thanks for coming on this segment. I really appreciate it. I know you're busy. You've got two young kids, as well as providing education to the digital industry. You got some kids to take care of and train them too. So, welcome to the cube conversation here as part of the program. >> Yeah, thanks for having me. Excited to be here. >> So, the house of the changing landscape of advertising really centers around the open to walled garden mindset of the web and the big power players. We know the big three, four tech players dominate the marketplace. So, clearly in a major inflection point. And you know, we've seen this movie before. Web, now mobile revolution. Which was basically a re-platforming of capabilities, but now we're in an era of refactoring the industry, not replatforming. A complete changing over of the value proposition. So, a lot at stake here as this open web, open internet-- global internet, evolves. What are your, what's your take on this? There's industry proposals out there that are talking to this specific cookie issue? What does it mean and what proposals are out there? >> Yeah, so, you know, I really view the identity proposals in kind of two kinds of groups. Two separate groups. So, on one side you have what the walled gardens are doing. And really that's being led by Google, right? So, Google introduced something called the Privacy Sandbox when they announced that they would be deprecating third-party cookies. And as part of the Privacy Sandbox, they've had a number of proposals. Unfortunately, or you know, however you want to say, they're all bird-themed, for some reason I don't know why. But the one, the bird-themed proposal that they've chosen to move forward with is called FLOC, which stands for Federated Learning of Cohorts. And, essentially what it all boils down to is Google is moving forward with cohort level learning and understanding of users in the future after third-party cookies. Unlike what we've been accustomed to in this space, which is a user level understanding of people and what they're doing online for targeting and tracking purposes. And so, that's on one side of the equation. It's what Google is doing with FLOC and Privacy Sandbox. Now, on the other side is, you know, things like unified ID 2.0 or the work that ID5 is doing around building new identity frameworks for the entire space that actually can still get down to the user level. Right? And so again, Unified ID 2.0 comes to mind because it's the one that's probably gotten the most adoption in the space. It's an open source framework. So the idea is that it's free and pretty much publicly available to anybody that wants to use it. And Unified ID 2.0 again is user level. So, it's basically taking data that's authenticated data from users across various websites that are logging in and taking those authenticated users to create some kind of identity map. And so, if you think about those two work streams, right? You've got the walled gardens and or, you know, Google with FLOC on one side. And then you've got Unified ID 2.0 and other ID frameworks for the open internet on the other side. You've got these two very different type of approaches to identity in the future. 
Again, on the Google side it's cohort level, it's going to be built into Chrome. The idea is that you can pretty much do a lot of the things that we do with advertising today but now you're just doing them at a group level so that you're protecting privacy. Whereas, on the other side with the open internet you're still getting down to the user level and that's pretty powerful but the the issue there is scale, right? We know that a lot of people are not logged in on lots of websites. I think the stat that I saw was under 5% of all website traffic is authenticated. So, really if you simplify things and you boil it all down you have kind of these two very differing approaches. >> So we have a publishing business. We'd love to have people authenticate and get that closed loop journalism thing going on. But, if businesses wannna get this level too, they can have concerns. So, I guess my question is, what's the trade-off? Because you have power in Google and the huge data set that they command. They command a lot of leverage with that. And again, centralized. And you've got open. But it seems to me that the world is moving more towards decentralization, not centralization. Do you agree with that? And does that have any impact to this? Because, you want to harness the data, so it rewards people with the most data. In this case, the powerful. But the world's going decentralized, where there needs to be a new way for data to be accessed and leveraged by anyone. >> Yeah. John, it's a great point. And I think we're at kind of a crossroads, right? To answer that question. You know, I think what we're hearing a lot right now in the space from publishers, like yourself, is that there's an interesting opportunity right now for them, right? To actually have some more control and say about the future of their own business. If you think about the last, let's say 10, 15, 20 years in advertising in digital, right? Programmatic has really become kind of the primary mechanism for revenue for a lot of these publishers. Right? And so programmatic is a super important part of their business. But, with everything that's happening here with identity now, a lot of these publishers are kind of taking a look in the mirror and thinking about, "Okay, we have an interesting opportunity here to make a decision." And, the decision, the trade off to your question is, Do we continue? Right? Do we put up the login wall? The registration wall, right? Collect that data. And then what do we do with that data? Right? So it's kind of a two-fold process here. Two-step process that they have to make a decision on. First of all, do we hamper the user experience by putting up a registration wall? Will we lose consumers if we do that? Do we create some friction in the process that's not necessary. And if we do, right? We're taking a hit already potentially, to what end? Right? And, I think that's the really interesting question, is to what end? But, what we're starting to see is publishers are saying you know what? Programmatic revenue is super important to us. And so, you know, path one might be: Hey, let's give them this data. Right? Let's give them the authenticated information, the data that we collect. Because if we do, we can continue on with the path that our business has been on. Right? Which is generating this awesome kind of programmatic revenue. Now, alternatively we're starting to see some publishers say hold up. If we say no, if we say: "Hey, we're going to authenticate but we're not going to share the data." Right? 
Some of the publishers actually view programmatic as almost like the programmatic industrial complex, right? That's almost taken a piece of their business in the last 10, 15, 20 years. Whereas, back in the day, they were selling directly and making all the revenue for themselves, right? And so, some of these publishers are starting to say: You know what? We're not going to play nice with FLOC and Unified ID. And we're going to kind of take some of this back. And what that means in the short term for them, is maybe sacrificing programmatic revenue. But their bet is long-term, maybe some of that money will come back to them direct. Now, that'll probably only be the premium pubs, right? The ones that really feel like they have that leverage and that runway to do something like that. And even so, you know, I'm of the opinion that if certain publishers kind of peel away and do that, that's probably not great for the bigger picture. Even though it might be good for their business. But, you know, let's see what happens. To each business their own >> Yeah. I think the trade-off of monetization and user experience has always been there. Now, more than ever, people want truth. They want trust. And I think the trust factor is huge. And if you're a publisher, you wannna have your audience be instrumental. And I think the big players have sucked out of the audience from the publishers for years. And that's well-documented. People talk about that all the time. I guess the question, it really comes down to is, what alternatives are out there for cookies and which ones do you think will be more successful? Because, I think the consensus is, at least from my reporting and my view, is that the world agrees. Let's make it open. Which one's going to be better? >> Yeah. That's a great question, John. So as I mentioned, right? We have two kinds of work streams here. We've got the walled garden work stream being led by Google and their work around FLOC. And then we've got the open internet, right? Let's say Unified ID 2.0 kind of represents that. I personally don't believe that there is a right answer or an end game here. I don't think that one of them wins over the other, frankly. I think that, you know, first of all, you have those two frameworks. Neither of them are perfect. They're both flawed in their own ways. There are pros and cons to both of them. And so what we're starting to see now, is you have other companies kind of coming in and building on top of both of them as kind of a hybrid solution, right? So they're saying, hey we use, you know, an open ID framework in this way to get down to the user level and use that authenticated data. And that's important, but we don't have all the scale. So now we go to a Google and we go to FLOC to kind of fill the scale. Oh and hey, by the way, we have some of our own special sauce. Right? We have some of our own data. We have some of our own partnerships. We're going to bring that in and layer it on top, right? And so, really where I think things are headed is the right answer, frankly, is not one or the other. It's a little mishmash of both with a little extra, you know, something on top. I think that's what we're starting to see out of a lot of companies in the space. And I think that's frankly, where we're headed. >> What do you think the industry will evolve to, in your opinion? Because, I think this is going to be- You can't ignore the big guys on this Obviously the programmatic you mentioned, also the data's there. 
But, what do you think the market will evolve to with this conundrum? >> So, I think John, where we're headed, you know, I think right now we're having this existential crisis, right? About identity in this industry. Because our world is being turned upside down. All the mechanisms that we've used for years and years are being thrown out the window and we're being told, "Hey, we're going to have new mechanisms." Right? So cookies are going away. Device IDs are going away. And now we've got to come up with new things. And so, the world is being turned upside down and everything that you read about in the trades and you know, we're here talking about it, right? Everyone's always talking about identity, right? Now, where do I think this is going? If I was to look into my crystal ball, you know, this is how I would kind of play this out. If you think about identity today, right? Forget about all the changes. Just think about it now and maybe a few years before today. Identity, for marketers, in my opinion, has been a little bit of a checkbox activity, right? It's been, Hey, Okay. You know, ad tech company or media company. Do you have an identity solution? Okay. Tell me a little bit more about it. Okay. Sounds good. That sounds good. Now, can we move on and talk about my business and how are you going to drive meaningful outcomes or whatever for my business. And I believe the reason that is, is because identity is a little abstract, right? It's not something that you can actually get meaningful validation against. It's just something that, you know? Yes, you have it. Okay, great. Let's move on, type of thing, right? And so, that's kind of where we've been. Now, all of a sudden, the cookies are going away. The device IDs are going away. And so the world is turning upside down. We're in this crisis of: how are we going to keep doing what we were doing for the last 10 years in the future? So, everyone's talking about it and we're tryna re-engineer the mechanisms. Now, if I was to look into the crystal ball, right? Two, three years from now, where I think we're headed is, not much is going to change. And what I mean by that, John is, I think that marketers will still go to companies and say, "Do you have an ID solution? Okay, tell me more about it. Okay. Let me understand a little bit better. Okay. You do it this way. Sounds good." Now, the ways in which companies are going to do it will be different. Right now it's FLOC and Unified ID and this and that, right? The ways, the mechanisms will be a little bit different. But, the end state. Right? The actual way in which we operate as an industry and the view of the landscape in my opinion, will be very simple or very similar, right? Because marketers will still view it as a, tell me you have an ID solution, make me feel good about it, help me check the box and let's move on and talk about my business and how you're going to solve for my needs. So, I think that's where we're going. That is not by any means to discount this existential moment that we're in. This is a really important moment, where we do have to talk about and figure out what we're going to do in the future. My viewpoint is that the future will actually not look all that different than the present. >> And then I'll say the user base is the audience, their data behind it helps create new experiences, machine learning and AI are going to create those. And if you have the data, you're either sharing it or using it. That's what we're finding. Shiv Gupta, great insights. 
Dropping some nice gems here. Founder of U of Digital and also an adjunct professor of programmatic advertising at the Leavey School of Business at Santa Clara University. Professor, thank you for coming on and dropping the gems and insight here. Thank you. >> Thanks a lot for having me, John. Really appreciate it. >> Thanks for watching The Cookie Conundrum. This is theCUBE host, John Furrier. Thanks for watching. (uplifting electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Shiv Gupta | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
10 | QUANTITY | 0.99+ |
Shiv | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Two-step | QUANTITY | 0.99+ |
Two separate groups | QUANTITY | 0.99+ |
Chrome | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
two young kids | QUANTITY | 0.99+ |
two kinds | QUANTITY | 0.99+ |
15 | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
FLOC | TITLE | 0.99+ |
two-fold | QUANTITY | 0.99+ |
two frameworks | QUANTITY | 0.98+ |
Leavey School | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
under 5% | QUANTITY | 0.98+ |
four tech players | QUANTITY | 0.97+ |
one side | QUANTITY | 0.97+ |
U of Digital | ORGANIZATION | 0.97+ |
2021 045 | OTHER | 0.97+ |
20 years | QUANTITY | 0.96+ |
The Cookie Conundrum | TITLE | 0.96+ |
Quantcast Industry Summit | EVENT | 0.95+ |
each business | QUANTITY | 0.93+ |
A Recipe for Success | TITLE | 0.93+ |
First | QUANTITY | 0.92+ |
Googl | ORGANIZATION | 0.92+ |
one side | QUANTITY | 0.92+ |
first | QUANTITY | 0.91+ |
Federated Learning of Cohorts | ORGANIZATION | 0.91+ |
Privacy Sandbox | TITLE | 0.9+ |
unified ID 2.0 | TITLE | 0.87+ |
two work streams | QUANTITY | 0.87+ |
FLOC | ORGANIZATION | 0.87+ |
last 10 years | DATE | 0.86+ |
Santa Clara University | ORGANIZATION | 0.83+ |
groups | QUANTITY | 0.81+ |
years | QUANTITY | 0.8+ |
Two | QUANTITY | 0.78+ |
ID 2.0 | OTHER | 0.78+ |
theCUBE | ORGANIZATION | 0.77+ |
few years before | DATE | 0.74+ |
three years | QUANTITY | 0.72+ |
FLOC | OTHER | 0.67+ |
three | QUANTITY | 0.64+ |
ID 2.0 | TITLE | 0.63+ |
Unified ID 2.0 | TITLE | 0.6+ |
ID5 | TITLE | 0.58+ |
differing approaches | QUANTITY | 0.54+ |
Unified | TITLE | 0.53+ |
Unified | OTHER | 0.52+ |
different | QUANTITY | 0.51+ |
last | DATE | 0.5+ |
Privacy Sandbox | COMMERCIAL_ITEM | 0.37+ |
JG Chirapurath, Microsoft
>> Okay, we're now going to explore the vision of the future of cloud computing from the perspective of one of the leaders in the field. JG Chirapurath is the Vice President of Azure Data, AI and Edge at Microsoft. JG, welcome to theCUBE on Cloud, thanks so much for participating. >> Well, thank you, Dave. And it's a real pleasure to be here with you and just want to welcome the audience as well. >> Well, JG, judging from your title, we have a lot of ground to cover and our audience is definitely interested in all the topics that are implied there. So let's get right into it. We've said many times in theCUBE that the new innovation cocktail comprises machine intelligence or AI applied to troves of data with the scale of the cloud. It's no longer that we're driven by Moore's Law. It's really those three factors and those ingredients are going to power the next wave of value creation in the economy. So first, do you buy into that premise? >> Yes, absolutely. We do buy into it and I think one of the reasons why we put data, analytics and AI together is because all of that really begins with the collection of data and managing it and governing it, unlocking analytics in it. And we tend to see things like AI, the value creation that comes from AI, as being on that continuum of having started off with really things like analytics and proceeding to be machine learning and the use of data in interesting ways. >> Yes, I'd like to get some more thoughts around data and how you see the future of data and the role of cloud and maybe how Microsoft's strategy fits in there. I mean, your portfolio, you've got SQL Server, Azure SQL, you got Arc, which is kind of Azure everywhere for people that aren't familiar with that, you got Synapse, which of course does all the integration, the data warehouse and it gets things ready for BI and consumption by the business and the whole data pipeline. And then all the other services, Azure Databricks, you've got Cosmos in there, you got Blockchain, you've got Open Source services like PostgreSQL and MySQL. So lots of choices there. And I'm wondering, how do you think about the future of cloud data platforms? It looks like your strategy is right tool for the right job. Is that fair? >> It is fair, but it's also, just to step back and look at it, it's fundamentally what we see in this market today, is that customers, they seek really a comprehensive proposition. And when I say a comprehensive proposition, it is sometimes not just about saying that, "Hey, listen, we know you're a SQL Server company, we absolutely trust that you have the best Azure SQL database in the cloud. But tell us more." We've got data that is sitting in Hadoop systems. We've got data that is sitting in PostgreSQL, in things like MongoDB. So that open source proposition today in data and data management and database management has become front and center. So our real sort of push there is, when it comes to migration, management, modernization of data, to present the broadest possible choice to our customers, so we can meet them where they are. However, when it comes to analytics, one of the things they ask for is give us a lot more convergence. It really, it isn't about having 50 different services. It's really about having that one comprehensive service that is converged. That's where things like Synapse fit in, where you can just land any kind of data in the lake and then use any compute engine on top of it to drive insights from it.
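As a rough illustration of the pattern described here, landing data in the lake once and then pointing different engines at it, the sketch below shows the kind of PySpark cell a developer might run in a Synapse Spark notebook (or any Spark environment). The storage path, table, and column names are placeholders invented for the example, not anything taken from the interview.

```python
# Sketch only: ad hoc analytics over data already landed in the lake,
# as might be run from a Synapse Spark notebook. The abfss:// path and
# the column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw Parquet files straight out of the data lake.
orders = spark.read.parquet(
    "abfss://lake@examplestorage.dfs.core.windows.net/raw/orders/"
)

# Prep the data once...
daily = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# ...then the same prepared data can hydrate a table for BI queries,
# feed a machine learning workflow, or be queried again ad hoc,
# without copying it into a separate system first.
daily.write.mode("overwrite").saveAsTable("curated.daily_revenue")
```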
So fundamentally, it is that flexibility that we really sort of focus on to meet our customers where they are. And really not pushing our dogma and our beliefs on it, but to meet our customers according to the way they've deployed stuff like this. >> So that's great. I want to stick on this for a minute because when I have guests on like yourself they never want to talk about the competition, but that's all we ever talk about. And that's all your customers ever talk about. Because the counter to that right tool for the right job approach, which I would say is really kind of Amazon's approach, is that you got the single unified data platform, the mega database. So it does it all. And that's kind of Oracle's approach. It sounds like you want to have your cake and eat it too. So you got the right tool for the right job approach, but you've got an integration layer that allows you to have that converged database. I wonder if you could add color to that and confirm or deny what I just said. >> No, that's a very fair observation, but I'd say there's a nuance in what I sort of described. When it comes to data management, when it comes to apps, we present customers with the broadest choice. Even in that perspective, we also offer convergence. So case in point, when you think about Cosmos DB, under that one sort of service you get multiple engines, but with the same properties. Right, global distribution, the five nines availability. It gives customers the ability to basically choose, when they have to build that new cloud native app, to adopt Cosmos DB and adopt it in a way that lets them choose an engine that is most flexible to them. However, when it comes to, say, SQL Server for example, if you're modernizing it, sometimes you just want to lift and shift it into things like IaaS. In other cases, you want to completely rewrite it. So you need to have the flexibility of choice there that is presented by a legacy of what sits on premises. When you move into things like analytics, we absolutely believe in convergence. So we don't believe that, look, you need to have a relational data warehouse that is separate from a Hadoop system that is separate from say a BI system that is just, it's a bolt-on. For us, we love the proposition of really building things that are so integrated that once you land data, once you prep it inside the lake, you can use it for analytics, you can use it for BI, you can use it for machine learning. So I think our sort of differentiated approach speaks for itself there. >> Well, that's interesting because essentially again you're not saying it's an either or, and you see a lot of that in the marketplace. You got some companies that say, "No, it's the data lake." And others say "No, no, put it in the data warehouse." And that causes confusion and complexity around the data pipeline and a lot of cutting. And I'd love to get your thoughts on this. A lot of customers struggle to get value out of data and specifically data product builders are frustrated that it takes them too long to go from this idea of, hey, I have an idea for a data service and it can drive monetization, but to get there you got to go through this complex data life cycle and pipeline and beg people to add new data sources and do you feel like we have to rethink the way that we approach data architecture? >> Look, I think we do in the cloud.
And I think what's happening today and I think the place where I see the most amount of rethink and the most amount of push from our customers to really rethink is the area of analytics and AI. It's almost as if what worked in the past will not work going forward. So when you think about analytics only in the enterprise today, you have relational systems, you have Hadoop systems, you've got data marts, you've got data warehouses you've got enterprise data warehouse. So those large honking databases that you use to close your books with. But when you start to modernize it, what people are saying is that we don't want to simply take all of that complexity that we've built over, say three, four decades and simply migrate it en masse exactly as they are into the cloud. What they really want is a completely different way of looking at things. And I think this is where services like Synapse completely provide a differentiated proposition to our customers. What we say there is land the data in any way you see, shape or form inside the lake. Once you landed inside the lake, you can essentially use a Synapse Studio to prep it in the way that you like. Use any compute engine of your choice and operate on this data in any way that you see fit. So case in point, if you want to hydrate a relational data warehouse, you can do so. If you want to do ad hoc analytics using something like Spark, you can do so. If you want to invoke Power BI on that data or BI on that data, you can do so. If you want to bring in a machine learning model on this prep data, you can do so. So inherently, so when customers buy into this proposition, what it solves for them and what it gives to them is complete simplicity. One way to land the data multiple ways to use it. And it's all integrated. >> So should we think of Synapse as an abstraction layer that abstracts away the complexity of the underlying technology? Is that a fair way to think about it? >> Yeah, you can think of it that way. It abstracts away Dave, a couple of things. It takes away that type of data. Sort of complexities related to the type of data. It takes away the complexity related to the size of data. It takes away the complexity related to creating pipelines around all these different types of data. And fundamentally puts it in a place where it can be now consumed by any sort of entity inside the Azure proposition. And by that token, even Databricks. You can in fact use Databricks in sort of an integrated way with the Azure Synapse >> Right, well, so that leads me to this notion of and I wonder if you buy into it. So my inference is that a data warehouse or a data lake could just be a node inside of a global data mesh. And then it's Synapse is sort of managing that technology on top. Do you buy into that? That global data mesh concept? >> We do and we actually do see our customers using Synapse and the value proposition that it brings together in that way. Now it's not where they start, oftentimes when a customer comes and says, "Look, I've got an enterprise data warehouse, "I want to migrate it." Or "I have a Hadoop system, I want to migrate it." But from there, the evolution is absolutely interesting to see. I'll give you an example. One of the customers that we're very proud of is FedEx. And what FedEx is doing is it's completely re-imagining its logistics system. That basically the system that delivers, what is it? The 3 million packages a day. And in doing so, in this COVID times, with the view of basically delivering on COVID vaccines. 
One of the ways they're doing it is basically using Synapse. Synapse is essentially that analytic hub where they can get a complete view into the logistics processes, the way things are moving, understand things like delays, and really put all of that together in a way that they can essentially get our packages and these vaccines delivered as quickly as possible. Another example, it's one of my favorites. We see once customers buy into it, they essentially can do other things with it. So an example of this, really my favorite story, is the Peace Parks initiative. It is the premier white rhino conservancy in the world. They essentially are using data that has landed in Azure, images in particular, to basically use drones over the vast area that they patrol and use machine learning on this data to really figure out where there is an issue and where there isn't an issue. So that this park with about 200 radios can scramble surgically versus having to range across the vast area that they cover. So, what you see here is, the importance is really getting your data in order, landing it consistently whatever the kind of data it is, building the right pipelines, and then the possibilities of transformation are just endless. >> Yeah, that's very nice how you worked in some of the customer examples and I appreciate that. I want to ask you though, some people might say that putting in that layer, while you clearly add simplification and it's I think a great thing, that there begins over time to be a gap, if you will, between the ability of that layer to integrate all the primitives and all the piece parts, and that you lose some of that fine grain control and it slows you down. What would you say to that? >> Look, I think that's what we excel at and that's what we completely sort of buy into. And it's our job to basically provide that level of integration and that granularity in the way that, it's an art. I absolutely admit it's an art. There are areas where people crave simplicity and not a lot of sort of knobs and dials and things like that. But there are areas where customers want flexibility. And so I think just to give you an example of both of them, in landing the data, in consistency, in building pipelines, they want simplicity. They don't want complexity. They don't want 50 different places to do this. There's one way to do it. When it comes to computing and reducing this data, analyzing this data, they want flexibility. This is one of the reasons why we say, "Hey, listen, you want to use Databricks? If you're buying into that proposition and you're absolutely happy with them, you can plug it into it." You want to use BI and essentially do a small data model, you can use BI. If you say that, "Look, I've landed into the lake, I really only want to use ML," bring in your ML models and party on. So that's where the flexibility comes in. So that's sort of how we think about it. >> Well, I like the strategy because one of our guests, Zhamak Dehghani, is I think one of the foremost thinkers on this notion of the data mesh. And her premise is that the data builders, data product and service builders, are frustrated because the big data system is generic to context. There's no context in there. But by having context in the big data architecture and system you can get products to market much, much, much faster. So, and that seems to be your philosophy, but I'm going to jump ahead to my ecosystem question. You've mentioned Databricks a couple of times. There's another partner that you have, which is Snowflake.
They're kind of trying to build out their own Data Cloud, if you will, and global mesh, and on the one hand they're a partner, on the other hand they're a competitor. How do you sort of balance and square that circle? >> Look, when I see Snowflake, I actually see a partner. When we see, essentially, when you think about Azure, now this is where I sort of step back and look at Azure as a whole. And in Azure as a whole, companies like Snowflake are vital in our ecosystem. I mean, there are places we compete, but effectively by helping them build the best Snowflake service on Azure, we essentially are able to differentiate and offer a differentiated value proposition compared to say a Google or an AWS. In fact, that's been our approach with Databricks as well, where they are effectively on multiple clouds and our opportunity with Databricks is to essentially integrate them in a way where we offer the best experience, the best integrations, on Azure. That's always been our focus. >> Yeah, it's hard to argue with the strategy, and data from our data partner ETR shows Microsoft is both pervasive and impressively having a lot of momentum, spending velocity within the budget cycles. I want to come back to AI a little bit. It's obviously one of the fastest growing areas in our survey data. As I said, clearly Microsoft is a leader in this space. What's your vision of the future of machine intelligence and how Microsoft will participate in that opportunity? >> Yeah, so fundamentally, we've built on decades of research around essentially vision, speech and language. That's been the three core building blocks, and for a really focused period of time we focused on essentially ensuring human parity. So if you ever wonder what the keys to the kingdom are, it's the moat we've built in ensuring that, the research posture that we've taken there. What we've then done is essentially a couple of things. We've focused on essentially looking at the spectrum that is AI. Both from saying that, "Hey, listen, it's got to work for data analysts" who are looking to basically use machine learning techniques, to developers who are essentially coding and building machine learning models from scratch. So that sort of proposition manifests for us as really AI focused on all skill levels. The other core thing we've done is that we've also said, "Look, it'll only work as long as people trust their data and they can trust their AI models." So there's a tremendous body of work and research we do in things like responsible AI. So if you asked me, where we sort of push on is fundamentally to make sure that we never lose sight of the fact that the spectrum of AI can sort of come together for any skill level, and we keep that responsible AI proposition absolutely strong. Now against that canvas, Dave, I'll also tell you that as Edge devices get way more capable, where they can input on the Edge, say a camera or a mic or something like that, you will see us pushing a lot more of that capability onto the edge as well. But to me, that's sort of a modality, but the core really is all skill levels and that responsibility in AI. >> Yeah, so that brings me to this notion of, I want to bring in Edge and hybrid cloud, understand how you're thinking about hybrid cloud, multicloud. Obviously one of your competitors Amazon won't even say the word multicloud. You guys have a different approach there, but what's the strategy with regard to hybrid?
Do you see the cloud, you're bringing Azure to the edge, maybe you could talk about that and talk about how you're different from the competition. >> Yeah, I think on the Edge, and I'll even be the first one to say that the word Edge itself is conflated a little bit. But I will tell you, just focusing on hybrid, this is one of the places where, I would say 2020, if I were to look back, from a COVID perspective in particular, it has been the most informative. Because we absolutely saw customers digitizing, moving to the cloud. And we really saw hybrid in action. 2020 was the year that hybrid sort of really became real from a cloud computing perspective. And an example of this is we understood that it's not all or nothing. So sometimes customers want Azure consistency in their data centers. This is where things like Azure Stack comes in. Sometimes they basically come to us and say, "We want the flexibility of adopting flexible platforms, let's say containers, orchestrating Kubernetes, so that we can essentially deploy it wherever we want." And so when we designed things like Arc, it was built with that flexibility in mind. So, here's the beauty of what something like Arc can do for you. If you have a Kubernetes endpoint anywhere, we can deploy an Azure service onto it. That is the promise. Which means, if for some reason the customer says that, "Hey, I've got this Kubernetes endpoint in AWS and I love Azure SQL," you will be able to run Azure SQL inside AWS. There's nothing that stops you from doing it. So inherently, remember, our first principle is always to meet our customers where they are. So from that perspective, multicloud is here to stay. We are never going to be the people that say, "I'm sorry," we will never say (speaks indistinctly) multicloud, but it is a reality for our customers. >> So I wonder if we could close, thank you for that, by looking back and then ahead. And I want to put forth, maybe it's a criticism, but maybe not. Maybe it's an art of Microsoft. But first, Microsoft did an incredible job at transitioning its business. Azure is omnipresent, as we said, our data shows that. So two-part question, first, Microsoft got there by investing in the cloud, really changing its mindset, I think, and leveraging its huge software estate and customer base to put Azure at the center of its strategy. And many have said, me included, that you got there by creating products that are good enough. You do a 1.0, it's still not that great, then a 2.0 and maybe not the best, but acceptable for your customers. And that's allowed you to grow very rapidly and expand your market. How do you respond to that? Is that a fair comment? Are you more than good enough? I wonder if you could share your thoughts.
And recently when we premiered, we sort of offered the world's first comprehensive data governance solution in Azure Purview. I would humbly submit it to you that we are leading the way and we're essentially showing how the future of data, AI and the Edge should work in the cloud. >> Yeah, I'd be disappointed if you capitulated in any way, JG. So, thank you for that. And that's kind of last question is looking forward and how you're thinking about the future of cloud. Last decade, a lot about cloud migration, simplifying infrastructure to management and deployment. SaaSifying My Enterprise, a lot of simplification and cost savings and of course redeployment of resources toward digital transformation, other valuable activities. How do you think this coming decade will be defined? Will it be sort of more of the same or is there something else out there? >> I think that the coming decade will be one where customers start to unlock outsize value out of this. What happened to the last decade where people laid the foundation? And people essentially looked at the world and said, "Look, we've got to make a move. "They're largely hybrid, but you're going to start making "steps to basically digitize and modernize our platforms. I will tell you that with the amount of data that people are moving to the cloud, just as an example, you're going to see use of analytics, AI or business outcomes explode. You're also going to see a huge sort of focus on things like governance. People need to know where the data is, what the data catalog continues, how to govern it, how to trust this data and given all of the privacy and compliance regulations out there essentially their compliance posture. So I think the unlocking of outcomes versus simply, Hey, I've saved money. Second, really putting this comprehensive sort of governance regime in place and then finally security and trust. It's going to be more paramount than ever before. >> Yeah, nobody's going to use the data if they don't trust it, I'm glad you brought up security. It's a topic that is at number one on the CIO list. JG, great conversation. Obviously the strategy is working and thanks so much for participating in Cube on Cloud. >> Thank you, thank you, Dave and I appreciate it and thank you to everybody who's tuning into today. >> All right then keep it right there, I'll be back with our next guest right after this short break.
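One concrete way to picture the Arc claim made earlier in this conversation, that any reachable Kubernetes endpoint can be brought under Azure management, is the onboarding step sketched below. It drives the Azure CLI from Python; the resource group, cluster name, and region are placeholders, and the exact commands and flags should be checked against current Azure Arc documentation rather than taken from this transcript.

```python
# Rough sketch: connecting an existing Kubernetes cluster (running on-prem or
# in another cloud) to Azure Arc by driving the Azure CLI from Python.
# The resource group, cluster name, and location are placeholder values.
import subprocess

def az(*args):
    cmd = ["az", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Connect whatever cluster the current kubeconfig context points at.
az("connectedk8s", "connect",
   "--name", "example-cluster",
   "--resource-group", "arc-demo-rg",
   "--location", "eastus")

# Confirm the cluster is now a connected (Arc-enabled) resource; from here,
# Arc-enabled data services such as SQL Managed Instance can be layered on.
az("connectedk8s", "show",
   "--name", "example-cluster",
   "--resource-group", "arc-demo-rg")
```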
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave | PERSON | 0.99+ |
JG | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Microsoft | ORGANIZATION | 0.99+ |
FedEx | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Jumark Dehghani | PERSON | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
JG Chirapurath | PERSON | 0.99+ |
first | QUANTITY | 0.99+ |
50 different services | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
50 different places | QUANTITY | 0.99+ |
MySQL | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
GlobalMesh | ORGANIZATION | 0.99+ |
Both | QUANTITY | 0.99+ |
first attempt | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
Last decade | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
three factors | QUANTITY | 0.99+ |
Synapse | ORGANIZATION | 0.99+ |
one way | QUANTITY | 0.99+ |
COVID | OTHER | 0.99+ |
One | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
first principle | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Azure Stack | TITLE | 0.98+ |
Azure SQL | TITLE | 0.98+ |
Spark | TITLE | 0.98+ |
First | QUANTITY | 0.98+ |
MongoDB | TITLE | 0.98+ |
2020 | DATE | 0.98+ |
about 200 radios | QUANTITY | 0.98+ |
Moore | PERSON | 0.97+ |
PostgreSQL | TITLE | 0.97+ |
four decades | QUANTITY | 0.97+ |
Arc | TITLE | 0.97+ |
single | QUANTITY | 0.96+ |
Snowflake | ORGANIZATION | 0.96+ |
last decade | DATE | 0.96+ |
Azure Purview | TITLE | 0.95+ |
3 million packages a day | QUANTITY | 0.95+ |
One way | QUANTITY | 0.94+ |
three core | QUANTITY | 0.94+ |
Sandy Carter, AWS Public Sector Partners | AWS re:Invent 2020 Public Sector Day
>> From around the globe, it's theCube, with digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS Worldwide Public Sector. >> Okay, welcome back to theCube's coverage of re:Invent 2020 virtual. It's theCube virtual, I'm John Furrier, your host, we're here celebrating the special coverage of public sector with Sandy Carter, vice president of AWS Public Sector Partners. She heads up the partner group within Public Sector, now in the role for about a year. Right Sandy, or so? >> Right, you got it, John. >> About a year? Congratulations, welcome back to theCube, >> Thank you. >> for reason- >> Always a pleasure to be here and what an exciting re:Invent right? >> It's been exciting, we've got wall-to-wall coverage, multiple sets, a lot of action, virtual, it's three weeks, we're not in person, we have to do it remote this year. So when real life comes back, we'll bring the Cube back. But I want to take a minute to step back, take a minute to explain your role for the folks that are new to theCube virtual and what you're doing over there at Public Sector. Take a moment to introduce yourself to the new viewers. >> Well, welcome. theCube is phenomenal, and of course we love our new virtual re:Invent as well. As John said, my name is Sandy Carter and I'm vice president with our public sector partners group. So what does that mean? That means I get to work with thousands of partners globally covering exciting verticals like space and healthcare, education, state and local government, federal government, and more. And what I get to do is to help our partners learn more about AWS so that they can help our customers really be successful in the marketplace. >> What has been the most exciting thing for you in the job? >> Well, you know, I love, wow, I love everything about it, but I think one of the things I love the most is how we in Public Sector really make technology have a meaningful impact on the world. So John, I get to work with partners like Orbis, which is a non-profit, they're fighting preventable blindness. They're a partner of ours. They've got something called Cybersight AI, which enables us to use machine learning, over 20 different machine learning algorithms, to detect common eye diseases in seconds. So, you know, that purpose for me is so important. We also work with a partner called Twist Inc, it's hard to say, but it just does a phenomenal job with AWS IoT and helps make water pumps smart pumps. So they are in 7,300 remote locations around the world helping us with clean water. So for me that's probably the most exciting and meaningful part of the job that I have today. >> And it's so impactful because, you know, Amazon's business model has always been about enablement, from startups to now up and running Public Sector; entities, agencies, education, healthcare, again, and even in space, this IoT in space. But you've been on the 100 partner tour over a 100 days. What did you learn, what are you hearing from partners now? What's the messages that you're hearing? >> Well, first of all, it was so exciting. I had 100 different partner meetings in 100 days because John, just like you, I missed going around the world and meeting in person. So I said, well, if I can't meet in person I will do a virtual tour, and I talked to partners in 68 different countries. So a couple of things I heard, one is a lot of love for our MAP program, and that's our Migration Acceleration Program.
We now have funding available for partners as they assess migration, as they mobilize, and as they migrate. And you may or may not know, but we have over twice the number of migration competency partners doing business in Public Sector this year than we did last year. The second thing we heard was that partners really love our marketing programs. We had some really nice success this year showcasing value for our customers with cyber security. And I love that because security is so important. Andy Jassy always talks about how our customers really have that as priority zero. So we were able to work with a couple of different areas that we were very proud of, and I loved that the partners were too. We did some repeatable solutions with our consulting partners. And then I think the third big takeaway that I saw was just our partners love the AWS technology. I heard a lot about AI and ML. We offered this new program called the Rapid Adoption Assistance Program. It's going global in 2021, and so we help partners brainstorm and envision what they could do with it. And then of course, 5G. 5G is ushering in kind of a new era of new demand. And we're going to do a PartnerCast all about 5G for partners in the first quarter. >> Okay, I'm going to put you on the spot. What are the three most talked about programs that you heard? >> Oh, wow, let's see. The three most talked about programs that I heard about, the first one was, is something I'm really excited about. It's called Think Big for Small Business. It really focuses in on diverse partner groups and types. What it does is it provides just a little bit of extra boost to our small and medium businesses to help them get some of the benefits of our AWS partner program. So companies like MFT, they're based down in South Africa, it's a husband and wife team that focus on that Black Economic Empowerment rating, and they use the program to get some of the go to market capability. So that's number one. Let's see, you said three. Okay, so number two would be our ProServe Ready pilot. This helps to accelerate our partner activation and enablement and provides partners a way to get badged on the ProServe best practices, get trained up, and does opportunity matching. And I think a lot of partners were kind of buzzing about that program and wanting to know more about it. And then, last but not least, the one that I think probably really has impact on time to compliance, it's called ATO, or Authority to Operate, and what we do is we help our partners, both technology partners and consulting partners, get support for compliance frameworks. So FedRAMP, of course, we have over 129 solutions right now that are FedRAMPed, but we also added, John, PCI for financial, HIPAA for healthcare, for public safety, IRS 1075, for international GDPR, and of course for defense, IL4, IL5 and IL6, and CMMC. That program is amazing because it cuts the time to market and really steps partners through all of our best practices. I think those are the top three. >> Yeah, I've been like a broken record, for the folks that don't know, all my interviews I've done with Public Sector over the years. The last one is interesting and I think that's a secret sauce that you guys have done, the compliance piece. Being an entrepreneur and starting companies, that first three steps in a cloud of dust, momentum, the flywheel to get going, it's always the hardest, and getting the certification if you don't have the resources, it's time consuming.
I think you guys really cracked the code on that. I really want to call that out 'cause that's I think really super valuable for the folks that pay attention to, and of course sales enablement through the program. So great stuff. Now, given that's all cool, (hands clap) the question I have and I hear all the time is, okay, I'm involved, I got a lot of pressure, the pandemic has forced me to rethink, I don't have a lot of IT, I don't have a big budget, I always complain, but not anymore. Mandate is move fast, get built out, leverage the cloud. Okay, I want to get going. What's the best way for me to grow with Public Sector? How do I do that if I'm a customer, I really want to... I won't say take a shortcut because there's probably no shortcut. How do I throttle up? Quickly, what's your take on that? >> Well, John, first I want to give one stat that came to us from Twilio. They had interviewed a ton of companies and they found that there was more digital transformation since March, since when the pandemic started, to now than in the last five years. So that just blew me away. And I know all of our partners are looking to see how they can really grow based on that. So if you're a consulting partner, one of the things that we say to help you grow is we've already done some integrations, and if you can take advantage of those that can speed up your time to market. So I know this one, the VMware Cloud on AWS. What a powerful integration, it provides protection of skillsets for your customer, increases your time to market because now VMware, vSphere, vSAN is all on AWS. So it's the same user interface and it really helps to reduce costs. And there's another integration that I think really helps, which is Amazon Connect, one of our fastest growing areas, because it's an ML and AI-based solution to help with call centers. It's been integrated with Salesforce, both the Service Cloud and the Sales Cloud. So how powerful is that, this integrated customer workflow? So I think both of those are really interesting for our consulting partners.
And then finally, I just want John, everybody to know , that we love our partners and AWS is there to help you every step of the way. And if you need anything at all obviously reach out to your PDM or your account manager or you're always welcome to reach out to me. And my final message is just, thank you, through so many different things that have happened in 2020, our partners have come through amazingly with passion with value and just with persistence, never stopping. So thank you to all of our partners out there who've really added so much value to our customers. >> And Amazon is recognizing the leadership of partners in the work you're doing. Your leadership session was awesome for the folks who missed it, check it out on demand. Thank you very much, Sandy for coming on the sharing the update. >> Thank you, John, and great to see all your partners out there. >> Okay, this is theCube virtual covering AWS re:Invent 2020 virtual three weeks, wall-to-wall coverage. A lot of videos ,check out all the videos on demand the leadership sessions, theCube videos and of course the Public Sector video on demand. Micro-site with theCube. I'm John Furrier, thanks for watching. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Theresa | PERSON | 0.99+ |
Sandy Carter | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Twist Inc | ORGANIZATION | 0.99+ |
2021 | DATE | 0.99+ |
John Farrow | PERSON | 0.99+ |
Sandy | PERSON | 0.99+ |
South Africa | LOCATION | 0.99+ |
Second | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
AWS Public Sector Partners | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
100 days | QUANTITY | 0.99+ |
Orbis | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
three weeks | QUANTITY | 0.99+ |
one star | QUANTITY | 0.99+ |
Sales Cloud | TITLE | 0.99+ |
Salesforce | TITLE | 0.99+ |
Twilio | ORGANIZATION | 0.99+ |
68 different countries | QUANTITY | 0.99+ |
100 partner | QUANTITY | 0.99+ |
third big takeaway | QUANTITY | 0.98+ |
first messages | QUANTITY | 0.98+ |
pandemic | EVENT | 0.98+ |
one | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
theCube | COMMERCIAL_ITEM | 0.98+ |
one final question | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
second thing | QUANTITY | 0.97+ |
AWS Worldwide Public Sector | ORGANIZATION | 0.97+ |
Cube | COMMERCIAL_ITEM | 0.97+ |
first | QUANTITY | 0.97+ |
March | DATE | 0.96+ |
IRS | ORGANIZATION | 0.96+ |
GDPR | TITLE | 0.96+ |
over 129 solutions | QUANTITY | 0.96+ |
thousands of partners | QUANTITY | 0.94+ |
PCI | ORGANIZATION | 0.94+ |
first three steps | QUANTITY | 0.94+ |
today | DATE | 0.94+ |
over 20 different machine learning algorithms | QUANTITY | 0.92+ |
VMware Cloud | TITLE | 0.92+ |
AWS Public Sector Partners | ORGANIZATION | 0.91+ |
7,300 remote locations | QUANTITY | 0.9+ |
last five years | DATE | 0.9+ |
first quarter | DATE | 0.89+ |
theCube virtual | COMMERCIAL_ITEM | 0.88+ |
100 different partner meetings | QUANTITY | 0.88+ |
a minute | QUANTITY | 0.87+ |
about a year | QUANTITY | 0.87+ |
MFT | ORGANIZATION | 0.86+ |
two big areas | QUANTITY | 0.86+ |
top three | QUANTITY | 0.85+ |
Sathish Balakrishnan, Red Hat | AWS re:Invent 2020
>> Narrator: From around the globe, it's theCUBE. With digital coverage of AWS re:Invent 2020. Sponsored by Intel, AWS, and our community partners. >> Welcome back to theCUBE's coverage of AWS re:Invent 2020. Three weeks we're here, covering re:Invent. It's virtual. We're not in person. Normally we are on the floor, extracting the signal from the noise, but we're virtual. This is theCUBE Virtual. We are theCUBE Virtual. I'm John Furrier, your host. Got a great interview here today. Sathish Balakrishnan, Vice President of Hosted Platforms for Red Hat, joining us. Sathish, great to see you. Thanks for coming on. >> Thank you, John. Great to see you again.
So I think we had really big announcement in May, you know, the first joint offering with AWS and it is Red Hat open shift service on AWS, it's a joint service with Red Hat and AWS, we're very excited to partner with them, and you know, be on the AWS console. And you know, it's great to be working with AWS engineering team, we've been making a lot of really good strides, it just amplify, as you know, our managed services story. So we are very excited to have that new offering that's going to be completely integrated with AWS console transacted through you know AWS marketplace, but you know, customers will get all the benefit of AWS service, like you know, how just launch it off the console, basically get, you know there and be part of the enterprise discount program and you we're very really excited and you know, that kind of interest has been really, really amazing. So we just announced that, you know, it's in preview we have a lot of customers already in preview, and we have a long list of customers that are waiting to get on this program. So but this offering, right, we have three ways in which you can consume OpenShift on AWS. One is, as I mentioned previously OpenShift dedicated on AWS, which we've had since 2015. Then we have OpenShift container platform, which is our previous self managed offering. And that's been available on AWS, also since 2015. And then, of course, this new service that are that OpenShift servers on AWS. So there's multiple ways in which customers can consume AWS and leverage the power of both OpenShift and AWS. And what I want to do here as well, right, is to take a moment to explain, you know what Red Hat's been doing in managed services, because then it's not very natural for somebody to say, oh, what's the Red Hat doing in managed services? You know, Red Hat believes in choice, right. We are all about try for that it's infrastructure footprint that's public cloud on-prem. It's managed or self managed, that's also tries to be offered to customers. And we've been doing managed services since 2011. That's kind of like a puzzling statement, people will be like, what? And yeah, it is true that we've been doing this since 2011. And in fact, we are one of the, you know, the earliest providers of managed Kubernetes. Since 2015. Right, I think there's only one other provider other than us, who has been doing managed Kubernetes, since then, which is kind of really a testament to the engineering work that Red Hat's been doing in Kubernetes. And, you know, with all that experience, and all the work that we've done upstream and building Kubernetes and making Kubernetes, really the you know, the hybrid cloud platform for the entire IT industry, we are excited to bring this joint offering. So we can bring all the engineering and the management strengths, as well as combined with the AWS infrastructure, and you know and other AWS teams, to bring this offering, because this is really going to help our customers as they move to the cloud. >> That's great insight, thanks for explaining that managed service, cause I was going to ask that question, but you hit it already. But I want to just follow up on that. Can you just do a deeper dive on the offering specifically, on what the customer benefits are here from having this managed service? Because again, you said, You Red Hats get multiple choice consumption vehicles here? What's the benefits? what's under the what's the deep dive? >> Absolutely, absolutely is a really, really good question. 
right as I mentioned, first thing is choice. like we start with choice customers, if they want, self managed, and they can always get that anywhere in any infrastructure footprint. If they're going to the cloud, most customers tend to think that you know, I'm going to the cloud because I want to consume everything as a service. And that's when all of these services come into play. But before we even get to the customer benefits, there's a lot of advantages to our software product as well. But as a managed service, we are actually customer zero. So we go through this entire iteration, right. And you probably everybody's familiar with, how we take open source projects, and we pull them into enterprise product. But we take it a second step, after we make it an enterprise product, we actually ship it to our multi tenant software system, which is called OpenShift Online, which is publicly available to millions of customers that manage exports on the public Internet, and then all the security challenges that we have to face through and fix, help solidify the product. And then we moved on to our single tenant OpenShift dedicated or you know soon to be the Red Hat OpenShift service on AWS but, you know, pretty much all of Red Hat's mission critical applications, like quedado is a service that's serving like a billion containers, billion containers a month. So that scale is already been felt by the newly shipped product, so that you know, any challenges we have at scale, any challenges, we have security, any box that we have we fix before we really make the product available to all our customers. So that's kind of a really big benefit to just that software in general, with us being a provider of the software. The second thing is, you know, since we are actually now managing customers clusters, we exactly know, you know, when our customers are getting stock, which parts of the stock need to improve. So there's a really good product gap anticipation. So you know, as much as you know, we want still really engage with customers, and we continue to engage with customers, but we can also see the telemetry and the metrics and figure out, you know, what challenges our customers' facing. And how can we improve. Other thing that, you know, helps us with this whole thing is, since we are operators now, and all our customers are really operators of software, it gives us better insights into what the user experience should be, and in how we can do things better. So there's a whole lot of benefits that Red Hat gets out of just being a managed service provider. Because you know, drinking our own champagne really helps us you know, polish the champagne and make it really better for all our customers that are consuming. >> I always love the champagne better than dog food because champagne more taste better. Great, great, great insight. Final question. We only have a couple minutes left, only two minutes left. So take the time to explain the big customer macro trend, which is the on premise to cloud relationship. We know that's happening. It's an operating model on both sides. That's clear as it is in the industry. Everyone knows that. But the managed services piece. So what drives an organization and transition from an on-prem Red Hat cloud to a managed service at Amazon? >> Is a really good question. It does many things. And it really starts with the IT and technology strategy. The customer has, you know, it could be like a digital transformation push from the CEO. 
It could be a cloud native development push from the CPO, or it could just be a containerization or cost optimization. So you have to really figure out, you know, which one of these, and it could be multiple in many customers, it could be all four of them in many customers, that's driving the move to the cloud and driving the move to containerization with OpenShift. And also customers are expanding into new businesses, they got to be more agile, they got to basically protect the stuff. Because you know, there are a lot of competitors, you know, Airbnb and other analogies, you know, how they take on big hotel chains, it's kind of, you know, customers have to be agile. IT is, you know, very strategic these days, you know, given how everything is digital, and as I pointed out, COVID is really like the number one digital transformation (mumbles). So, for example, you know, we have BMW is a great customer of ours that uses OpenShift for all the connected car infrastructure. So they run it out of, you know, their data centers, and, you know, they suddenly want to go to a new geo, in Asia, you know, they may not have the speed to go build a data center and do things, so they'll just move to the cloud very easily. And from all our strategy, you know, I think the world is hybrid, I know there's going to be that single cloud, multi cloud, on-prem, it's going to be multiple things that customers have. So they have to really start thinking about what are the compliance requirements? What is the data regulations that they need to comply to? Is that a lift and shift out (mumbles) gistic things? So they need to do cloud native development, as well as containerization to get the speed out of moving to the cloud. And then how are they measuring availability? You know, are they close to the customer? You know, what is the metrics that they have for, you know, speed to the customer, as well, as you know, what databases are they using? So we have a lot of experience with this. Because, you know, this is something that, you know, we've been advocating, you know, for at least eight years now, the open hybrid cloud, a lot of experience with Open Innovation Labs, which is our way of telling customers, it's not just about the technology, but also about how you change processes and how you change other things with people aspects of it, as well as continued adoption programs and a bunch of other programs that Red Hat has been building to help customers with this transformation.
So this is going to really help accelerate our customers' move to the cloud, right to the AWS cloud, and leverage all of AWS services very natively like they would if they were using another container service that's coming out of AWS and it's like a joint service. I'm really, really excited about the service because, you know, we've just seen that interest has been exploding and, you know, we look forward to continuing our collaboration with AWS and working together and you know, helping our customers, you know, move to the cloud as well as cloud native development, containerization and digital transformation in general. >> Congratulations, OpenShift on AWS. big story here, >> I was on AWS. I want to make sure that you know we comply with the brand >> OpenShifts on open shift service, on AWS >> on AWS is a pretty big thing. >> Yeah, and ecosys everyone knows that's a super high distinction on AWS has a certain the highest form of compliment, they have join engineering everything else going on. Congratulations thanks for coming on. >> Thank you John. Great talking to you. >> It's theCUBE virtual coverage we got theCUBE virtual covering reInvent three weeks we got a lot of content, wall to wall coverage, cube virtualization. We have multiple cubes out there with streaming videos, we're doing a lot of similar live all kinds of action. Thanks for watching theCUBE (upbeat music)
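For readers curious what the developer experience looks like once one of the managed OpenShift clusters discussed above is provisioned, the sketch below drives the standard OpenShift client (oc) from Python to log in and deploy a sample application. The API URL, token, project name, and container image are placeholders, and the same commands apply whether the cluster is self-managed or consumed as a managed service like the joint offering described in this interview.

```python
# Sketch only: day-one interaction with an OpenShift cluster using the
# standard `oc` client, driven from Python. The API URL, token, project
# name, and container image below are placeholder values.
import subprocess

def oc(*args):
    cmd = ["oc", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Log in to the cluster's API endpoint (managed or self-managed, the
# workflow is the same once the cluster exists).
oc("login", "https://api.example-cluster.example.com:6443",
   "--token", "sha256~EXAMPLE-TOKEN")

# Create a project and deploy a sample containerized application.
oc("new-project", "demo")
oc("new-app", "quay.io/example/hello-openshift:latest")

# Expose it and check that the rollout succeeded.
oc("expose", "service/hello-openshift")
oc("status")
```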
SUMMARY :
the globe, it's theCUBE. of the AWS re:Invent 2020. Great to see you again. and other clouds around the corner. And and this is a great event, you know, the new offering that you have with AWS, And in fact, we are one of the, you know, but you hit it already. and the metrics and figure out, you know, So take the time to explain to a new geo syn, in Asia, you know, you know, little slogan there, you know, you know, OpenShift. Yeah, I think number one is, you know, Congratulations, OpenShift on AWS. that you know we comply has a certain the highest we got a lot of content,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Sathish Balakrishnan | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
BMW | ORGANIZATION | 0.99+ |
Sathish | PERSON | 0.99+ |
May | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Asia | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two minutes | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
2011 | DATE | 0.99+ |
this week | DATE | 0.99+ |
millions | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
both sides | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
second step | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
three week | QUANTITY | 0.98+ |
Three weeks | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
billion containers | QUANTITY | 0.98+ |
2007 | DATE | 0.97+ |
Red Hat Enterprise Linux | TITLE | 0.97+ |
COVID-19 | OTHER | 0.97+ |
four | QUANTITY | 0.97+ |
theCUBE Virtual | COMMERCIAL_ITEM | 0.95+ |
Roger Barga, AWS | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. Welcome back to theCUBE's live coverage of AWS re:Invent 2020. We're not in person this year, we're virtual; this is theCUBE Virtual. I'm John Furrier, your host of theCUBE. Roger Barga is here, the general manager of AWS Robotics and Autonomous Services, and a lot of other cool stuff; he was on last year. Always a good one. Speed Racer, you got the machines. Now you have real-time robotics hitting the scene. Andy Jassy laid out a huge vision, and data points and announcements around industrial IoT, and it's kind of coming together. Roger, great to see you, and thanks for coming on. I want to dig in and get your perspective. Thanks for joining theCUBE. >> Good to be here with you again today. >> Alright, so give us your take on the announcements yesterday and how that relates to the work you're doing on the robotics side at AWS. And where does this go, from fun, to real world, to societal impact? Take us through how you see that vision. >> Yeah, sure. So we continue to see the story of how processing is moving to the edge, and cloud services are augmenting that processing at the edge with unique and new services. He talked about five new industrial machine learning services yesterday, which are very relevant to exactly what we're trying to do with AWS RoboMaker. A couple of them: Monitron, which is for equipment monitoring for anomalies, and it's a whole solution, from an edge device to a gateway to a service. We also heard about Lookout for Equipment, which is, if a customer already has their own sensors, a service that can take the data from those sensors on the device to identify anomalies or potential failures. And we saw Lookout for Vision, which allows customers to use their cameras and build a service to detect anomalies and potential failures. With AWS RoboMaker, we have ROS cloud service extensions, which allow developers to connect their robot to these services. So increasingly, it's that combination of being able to put sensors and processing at the edge, and connecting it back with the cloud, where you can do intelligent processing and understand what's going on out in the environment. So those were exciting announcements, and that story is going to continue to unfold with new services and new sensors we can put on our robots to intelligently process the data and control these robots in industrial settings. >> You know, this brings up a great point, and I wasn't kidding when I said fun to real world; this is what's happening. The use cases are different. You mentioned Monitron and Lookout, and there's also the Panorama appliance; you've got computer vision, machine learning. These are all new, cool, relevant use cases, but they're not static. It's not like there's just one thing; the edge is very diverse, and sometimes mostly purpose-built for the edge piece. So it's not like you can build one product that fits everywhere. Talk about that dynamic and why the robotics piece has to be agile, and what you guys are doing to make that workable. Because purpose-built implies supply chains years in advance; it implies slow. And how do you get the trust? How do you get the security? Take us through that, please.
>> So to your point, no single service is going to solve all problems, which is why AWS has released a number of primitives. Just think about Kinesis Video: I can stream my raw video from an edge device and build my own machine learning model in the cloud with SageMaker to process it, or I could use Rekognition. So we give customers these basic building blocks, but we also think about working backwards from the customer: what is the finished solution that we could give a customer that just works out of the box? And the new services we heard about yesterday were exactly in that latter category. They're purpose-built, they're ready for developers to use or train, with very little customization necessary. But the point is that for the customers working in these environments, the business questions change all the time, so they need to actually reprogram a robot on the fly, for example with a new mission to address a new business need that just arose. That's a dynamic we've been very tuned into since we first started with AWS RoboMaker. We have a feature for fleet management, which allows a developer to choose any robot that's out in their fleet, take a new software stack, test it in simulation, and then redeploy it to that robot so it changes its mission. And this is a dialogue we've been seeing come up over the last year, where roboticists are starting to educate their companies that a robot is a device that can be dynamically programmed. At any point in time, they can test their application in simulation while the robot's out in the field, verify it's going to work correctly, and then change the mission for that robot dynamically. One of the customers we're working with, the Woods Hole Institute, is sending autonomous underwater robots out into the ocean to monitor wind farms, and they realized the mission may change based on what they find out about the wind farm equipment, or the robot itself may encounter an issue. That ability, because they do have connectivity, to change the mission dynamically, first testing it, of course, in simulation, is completely changing the game for how they think about robots. It's no longer a static program written once, where you have to bring the robot back into the shop to reprogram it; it's now a dynamic entity that they can test and modify at any time. >> You know, I'm old enough to know how hard that really is to pull off, and this highlights how exciting this is. Just think about the idea of hardware being dynamically updated with software in real time, or near real time, with new stacks. That's just unheard of, because purpose-built has always been: you lock it in, you deploy it, you send the tech out there, that kind of break-fix mindset. This changes everything, whether it's space or underwater; it's a software-defined, software-operated model. So I have to ask you, first of all, that's super awesome. What's this like for the new generation? Because Andy talked on stage, and in the one-on-one I had with him, about some of these new things, and there's a new generation of developer. You've got to look at these young kids coming out of school; they don't understand how hard this used to be. They just look at it as the lingua franca of software-defined stuff.
So can you share some of the cutting edge things that are coming out of this new talent, the new developers? I'm sure the creativity is off the charts. Can you share some cool use cases and your perspective? >> Absolutely. I think there are a couple of interesting cases to look at. One is, roboticists historically have thought about all the processing being on the robot, and if you said cloud and cloud services, they just couldn't fathom the reality that processing could be moved off of the robot. Now you're seeing developers who are looking at the cloud services we're launching, and our cloud service extensions, which give you a secure connection to the cloud from your robot, and they're starting to realize they can actually move some of that processing off the robot. That can lower the BOM, the bill of materials, the cost of the robot, and they can have this dynamic programming surface in the cloud that they can use to program and change the behavior of the robot. So that's a dialogue we've seen coming over the last couple of years, that rethinking of where the software should live: what makes sense to run on the robot, and what should we push out to the cloud? Let alone the fact that if you're aggregating information from hundreds of robots, you can build machine learning models that identify mistakes a single robot might make across the fleet, and use that insight to retrain the models and push new applications and machine learning models back down. That is a completely different mindset; it's almost like introducing distributed computing to roboticists, where you think about this fabric of robots. Another, more recent trend we're seeing, and we're listening very closely to customers here, is the ability to use simulation and machine learning, specifically reinforcement learning, to have a robot try different tasks. Simulations have gotten so realistic, with physics engines and rendering quality that is nearly realistic for a camera, and the physics are real-world physics, so you can put a simulation of your robot into a 3D simulated world and allow it to bumble around and make mistakes while trying to perform a task that, frankly, you don't know how to write the code for because it's so complex. Through reinforcement learning, giving reward signals if it does something right, or punishment, negative reward signals, if it does something wrong, the machine learning algorithm will learn to perform navigation and manipulation tasks which, again, the programmer simply didn't have to write a line of code for, other than creating the right simulation and the right set of trials. >> So it's like reversing the debugging protocol. It's like, hey, do the simulations and the code writes itself; debug it on the front end. It writes itself, rather than writing code, compiling it, debugging it, working through the use cases. I mean, it's pretty different. >> It is. It's really a new persona. When we started out, not only were we taking that roboticist persona and introducing them to cloud services and distributed computing; what you're seeing now is machine learning scientists with robotics experience rising as a new developer persona that we have to pay attention to, and we're talking to them right now about what they need from our service. >> Well, Roger, I'm getting tight on time here. I want one final question before we break.
How does someone get involved with Amazon? Whether it's robotics or new areas like space, which is emerging, there's a lot of action and a lot of interest. How does someone engage with Amazon to get involved, whether I'm a student or a professional who wants to code? What's the best way? >> Absolutely. So certainly re:Invent: we have several sessions at re:Invent on AWS RoboMaker, and our team is there presenting and talking about our roadmap and how people can get engaged. There is, of course, the re:MARS conference, which will be happening next year, hopefully, as another place to get engaged. Our team is active in the ROS open source community and ROS-Industrial, which happens in Europe later in December but also happens in the Americas, where we're present giving demos and hands-on tutorials. We're also very active in the academic research and education arena; in fact, we just released an open source curriculum that any developer can access on GitHub for robotics and ROS, as well as how to use RoboMaker, and that's freely available. So there are a number of touch points, and of course I'd be happy to field any request from people who want to learn more or just engage with our team. >> Roger Barga, general manager of AWS Robotics and the Autonomous Systems Group at Amazon Web Services. Great stuff, and this is really awesome insight. Also, you know, it's like candy for the developers; it's the new generation of people who are going to get to put their teeth into some new science and some new problems to solve with software. Again, distributed computing meets robotics and hardware, and it's an opportunity to change the world, literally. >> It is an exciting space. It's still day one in robotics, and we look forward to seeing what customers do with our service. >> Great stuff, of course. TheCUBE loves this content. We love robots, we love autonomous, we love space, programming, all this stuff, totally cutting edge cloud computing, changing the game at many levels with digital transformation. This is theCUBE. Thanks for watching.
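To make the reinforcement learning workflow Barga describes a bit more concrete, the sketch below shows the reward-signal idea in miniature: a simulated robot on a small grid learns to reach a goal purely from positive and negative rewards, with no hand-written navigation logic. This is a toy illustration in plain Python using tabular Q-learning, not AWS RoboMaker or SageMaker code; the grid world, reward values, and learning rate are illustrative assumptions.

```python
# Toy illustration of the reward-signal idea: a simulated robot learns to
# reach a goal cell on a small grid purely from rewards, with no hand-written
# navigation logic. Plain tabular Q-learning; not AWS RoboMaker/SageMaker code.
import random

GRID = 5                                        # 5x5 simulated world
GOAL = (4, 4)                                   # cell the "robot" must reach
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up
q = {}                                          # (state, action) -> learned value

def step(state, action):
    """Stand-in simulator: move one cell, clip at walls, reward the goal."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    reward = 1.0 if nxt == GOAL else -0.01      # positive / negative signals
    return nxt, reward, nxt == GOAL

def choose(state, eps=0.1):
    """Mostly exploit the best known action, occasionally explore."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

for episode in range(2000):                     # the "set of trials"
    state, done = (0, 0), False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        # Q-learning update: move the value toward reward + discounted future.
        q[(state, action)] = old + 0.5 * (reward + 0.9 * best_next - old)
        state = nxt

# The learned greedy policy now navigates to the goal with no coded path plan.
state, path = (0, 0), [(0, 0)]
while state != GOAL and len(path) < 30:
    state, _, _ = step(state, choose(state, eps=0.0))
    path.append(state)
print(path)
```

In a real robotics pipeline the step function would be a physics-based simulator and the lookup table would typically be replaced by a neural network policy, but the reward-driven structure is the same.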
SUMMARY :
It's the Cube with digital You know, fun to real world to societal at the edge, connecting it back with the cloud where you could do intelligent processing and understand what's going And what do you guys doing to make that workable? for developers to use and and with very little customization that necessary. It's software defined, software operated model, so I have to ask you First of all, all the processing has cannot has to be, you know, could be moved off of the robot. so that it's like reversing the debugging protocol. persona and again introduced him to the cloud services and distributed computing what you're seeing machine And I'll see you know, whether it's robotics and There is, of course, the remarks conference, which will be happening next year, hopefully to get engaged. and hardware, and it's an opportunity to change the world literally. It's still Day one and robotics, and we look forward to seeing the car customers do with our service. all this stuff, totally cutting edge cloud computing, changing the game at many levels with the digital
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Roger | PERSON | 0.99+ |
Arthur Parker | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Roger Barker | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Andy | PERSON | 0.99+ |
Woods Hole Institute | ORGANIZATION | 0.99+ |
Ross Industrial | ORGANIZATION | 0.99+ |
Americas | LOCATION | 0.99+ |
next year | DATE | 0.99+ |
Roger Barga | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Ross Open Source Community | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
Ross | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
One | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
this year | DATE | 0.98+ |
Get Hub | ORGANIZATION | 0.97+ |
hundreds of robots | QUANTITY | 0.97+ |
AWS Robotics and Autonomous Service | ORGANIZATION | 0.96+ |
first | QUANTITY | 0.96+ |
Intel | ORGANIZATION | 0.96+ |
one thing | QUANTITY | 0.95+ |
one final question | QUANTITY | 0.95+ |
five new industrial machine learning services | QUANTITY | 0.92+ |
Autonomous Systems Group | ORGANIZATION | 0.92+ |
First | QUANTITY | 0.9+ |
single service | QUANTITY | 0.9+ |
last couple years | DATE | 0.87+ |
single robot | QUANTITY | 0.85+ |
Amazon Web | ORGANIZATION | 0.85+ |
Day one | QUANTITY | 0.83+ |
Kinesis | ORGANIZATION | 0.8+ |
First Testament | QUANTITY | 0.79+ |
Cube Virtual | COMMERCIAL_ITEM | 0.75+ |
Cube | COMMERCIAL_ITEM | 0.74+ |
W | PERSON | 0.68+ |
one | QUANTITY | 0.64+ |
couple | QUANTITY | 0.63+ |
Invent | EVENT | 0.62+ |
December | DATE | 0.62+ |
Robotics | ORGANIZATION | 0.61+ |
Aiken | ORGANIZATION | 0.58+ |
reinvent 2020 | EVENT | 0.49+ |
2020 | TITLE | 0.47+ |
Cloud | TITLE | 0.47+ |
reinvent | EVENT | 0.44+ |
re | EVENT | 0.32+ |
Maureen Lonergan, AWS & Alyene Schneidewind, Salesforce | AWS re:Invent 2020
>>from around the >>globe. It's the Cube with digital coverage of AWS reinvent 2020 sponsored by Intel, AWS and our community partners. Welcome back to the Cubes Coverage Cube Virtual coverage of AWS reinvent 2020 which is also virtual. We're not in person this year. We're doing the remote interviews. But of course, getting all the stories, of course, reinvented, full of partnerships full of news. And we've got a great segment here with Salesforce and AWS. Eileen Schneider Win, who is the senior vice president of strategic partnerships, and Maureen Lundergan, director of worldwide training and certification address. Maureen Eileen. Great to see you. Thanks for coming on. And nice keynote. What's up with the partnership? Give us a quick over your lien. What's what's the Salesforce? A day was partnership. Take a minute to explain it. >>Sure, thank you. I think I'll start out by talking about how sales were thinks about strategic partnerships. So for us, it's really it starts with the customer and being where they want us to be. And we've been so fortunate to be in this relationship with AWS for over five years now. It really started out as an infrastructure based partnership as we were seeing customers start their digital transformation journeys and moved to the cloud. But what has been really exciting as we've spent more time working together and working with our customers, we have now started to move into emotion of really bringing some differentiated solutions between the number one CRM and the most broadly adopted cloud platform to market for customers, uh, in areas like productivity, security and training and certification which will talk more about in a bit Onda. Specifically, some of those solutions are service Cloud Voice Product, which we launched this summer, announced last fall, a dream force as well as our private connect product which creates great security between the AWS platform and Salesforce. >>What? Some of the impact area is actually the two clouds you mentioned CRM and Amazon. We're seeing data obviously being a part of the equation ai machine learning. Um, what's been the impact I lean to your customer specifically >>Yeah, so specifically I'd call out to areas what one is really that foundation of security. Specifically, as government regulations and data security has become more critical, we've really been able to partner together there and and that's been crucial for certain customers in certain regions as well a certain industries like government. Uh, in addition, I would call out again that service cloud voice partnership, a zoo. We see the world moving more digital. This really allows customers to go quickly and, uh, turn on. There are solutions from anywhere at any time. >>You know, I love that any time, anywhere kind of philosophy. Now more than ever. With the pandemic collaborations required more than ever, and some people are used to it. You know, I've seen more technical developers have used to working at home, but not everyone else. The workforce still needs to get the job done. So this idea of collaboration, what is the impact in for your customers and how are you guys helping them? Because I think this is a big theme of this year That's gonna not only carry over, even when the pandemics over this idea of anywhere is all about collaboration. >>Yeah, I totally agree. 
I mean, the exciting thing about the partnership is we've been talking digital transformation with customers for years, but I think what we saw at the beginning of this year, as we were all thrown home and forced Thio, you know, fire up our jobs from our bedrooms or our garages. It really came down to our ability to work quickly and turn on our solutions. It's and these unprecedented times, while we're going through this now, everything we're building really is the future. So it's not just the tools and technology, it's also the processes and how work is getting done that's really come into play. But again, I'll anchor back to that service blood voice solution. So for us, call centers were completely disrupted. You think of call centers and you know, pre 2020 everyone sitting in a room together, agent side by side managers, having the ability to pop over and assist with a call or managing escalation. Now that's been completely disrupted. And it's been very exciting for us to work with our customers, to reimagine what that looks like again both from a technology perspective but also from a process perspective. And along with that, you had to reimagine how employees are learning these solutions and being trained. So we're very grateful for the partnership with AWS, and we're doing some really amazing things together. >>You know this is one of my favorite things about the enablement of Cloud. But in Salesforce has been a pioneer. As you pointed out, this connectedness feature has always been there. Now more than ever, it's highlighted with call centers, not the call center more. It's the connected center. People are connecting. And I think, Maureen, I think last time you're in the Cube. A few years ago, we were talking about virtual training online, and that was pre pet pandemic. Now you're seeing surge of online training not only because people's jobs are changing and being displaced or even shut down. New roles are emerging, right? So the virtual space Virtual world digital world, there's everyone's getting more digital faster now. How has the cove in 19 changed the landscape for training and skills demand? From your perspective, I >>mean at AWS, we've been working on our virtual capabilities for a while, so we had a digital platform out. We had a great partnership, have a great partnership with Salesforce and putting content on trailhead. We had to pivot very rapidly to virtual instructor led training and also our certifications right. We were lucky that our vendors partnered with us rapidly to pivot certification toe proctor environment. And this actually has helped to expand our ability to deliver the both training and certification in locations that we may not have been able to do before. And we have seen while it slowed. Initially, we have seen such an uptake and training over the last, um, 6 to 8 months. It's been incredible. We've been working with our customers. We've been working with our partnerships like Salesforce. We've been pushing more content out. I think customers and partners air really looking for how toe upscale their employees, uh, in a in a way, that is easy for them. And so it's actually been a great surprise to see the adoption of all of our curriculum over the last couple months. >>Well, congratulations knows a lot more work to do. It's gonna get more engaging, more virtual, more rich media. But this idea of connecting lean I wanna get back to the your your thoughts earlier, um, mentioned trailhead. Maury mentioned trailhead. 
You guys were doing some work with the virtual training there. What? Can you tell us more about that? And how that's going so far? >>Sounds great. So trailhead is our free online learning platform. And it really started because we have a commitment to democratizing anyone's ability to enter our industry s so you could go there and both online or with our trail head go app and experience what we call trails, which our paths for learning again on different areas of knowledge and skills and technology. And late last year, we announced an incredible partnership with AWS, where we're bringing the AWS learning content and certification to trailhead. And this is really again driven by our customers to are asking us to do our part in bringing mawr of these skilled resource is into the ecosystem. But something I also wanna highlight is I feel like this moment that we're in right now has also forced everyone to reimagine how they're doing learning even businesses, how they're training their employees and again having this free platform. And the partnership with AWS has really helped us go very quickly and create a lot of impact with customers. >>I just want to say I love the trailhead metaphor because, you know, learnings nonlinear. It's asynchronous. You've got digital. So you want to take a shortcut? You gotta know the maps And I think that's, you know, people wanna learn versus the linear, you know, tracks on. And I think that's how people have been learning online. And AWS has got a data driven strategy. Marine, I want to get your take on this because as you bring content on the trailhead, can you talk about how that works? And how you working with Railhead? >>Yeah. I mean, we started conversations a couple of years ago, and I think the interesting thing is that Salesforce and AWS have a very similar philosophy about bringing education to anybody who wants it. You'll hear me talk a lot about that in my leadership talk at reinvent, but, um, we really believe that we wanna provide content where learners learn and salesforce and trailhead have this amazing captured audience. And, um, you know, we're really looking at exploring. How do we bring education to people that might not otherwise have access to it? On DSO, we started with really foundational level content, a ws Cloud, Practitioner Essentials and AWS Cloud for technical professionals. And the interesting thing is, both of those courses have been consumed. ITT's not enough to just put it out there you want people to complete the trails and we've seen such an amazing uptake on the courses with, like 85% completion rate on one of the trails and 95% completion rate on the other one. And to keep customers engage is really a credit toe. How trailhead is designed. >>You know, it's interesting. The certification people don't lose sight of the fact that that's kind of the in the end state. Then you start a new trail. I mean, this >>is >>the this is really what it's all about. Can you just share some observations that you've seen for people that are coming into this now to say, Hey, okay, what do I expect? And what are some of the outcomes? >>Yeah, I mean, first, what we're seeing is our customers are being very clear that they need more of these skills. So we're also seeing the need for Salesforce administrators out in our ecosystem. And I think with everything going on this year, it's also an opportunity for people who are looking to pivot. 
Their careers were moving to tech and again, this free learning platform and the content that we're bringing has been really powerful and again for us. The need for salesforce administrators and cloud practitioners out in our ecosystem are in more demand than ever. >>Maureen. From your perspective on AWS, you see a lot of the new new jobs cybersecurity, Brazilian openings. Where do you see the most needs on for training and certification? Can you highlight some of the areas that are emerging and trending, if you will? >>I would say it's interesting because what we're seeing is is both ends of the spectrum. People that are really trying to just really understand who cloud is, whether it's, ah, business leader within an organization, a finance person, a marketing person. So cloud practitioner, you know, we're seeing huge adoption and consumption on both our platform in on Salesforce. But also some other areas are security and machine learning machine learning. We have five learning paths on our digital platform. We've also extended that content out to other platforms and the consumption rate is significant. And so, you know, I think we're seeing, uh, customers consume that. But the other thing that we're doing is we're really focused on looking at who doesn't have access to education and making sure that's available. So I think the large adoption of Cloud Practitioner in Practitioner is is largely due to the other things that we're doing with programs like Restart our academic programs >>to close it out, Alina want to get your thoughts and final thoughts on the relationship and how people can find more information about this partnership and what it means. Take, take it home. >>Thank you for asking. So just like everything else we've been talking about today, we've had to reimagine how we're showing up at this event together and very exciting thing that my team has created is the AWS Virtual Park. And anyone can access that at salesforce dot com slash aws. So please go check it out. You can experience our products here from our experts and experience its innovation on your own. >>Great insight. Thanks for coming on and participating. Really appreciate Salesforce and AWS two big winning leading clouds working together Trail had great great offering. Thanks for coming on sharing the news. Appreciate >>it. Thank you. >>It's the Cube virtual covering. It was reinvent virtual. Of course. Check out all the information here All three weeks. Walter Wall coverage. I'm John Fury with the Cube. Thanks for watching
SUMMARY :
It's the Cube with digital coverage of AWS between the number one CRM and the most broadly adopted cloud platform to market Some of the impact area is actually the two clouds you mentioned CRM and Amazon. Yeah, so specifically I'd call out to areas what one is really that foundation So this idea of collaboration, what is the impact in for your customers and how having the ability to pop over and assist with a call or managing escalation. So the virtual space Virtual world digital world, there's everyone's getting more digital And this actually has helped to expand our ability But this idea of connecting lean I wanna get back to the your your And the partnership with AWS has really helped us go very quickly and create a lot of impact And how you working with Railhead? And the interesting thing is, both of those courses have been consumed. The certification people don't lose sight of the fact that that's kind of the in the end state. for people that are coming into this now to say, Hey, okay, what do I expect? And I think with everything going on this year, Can you highlight some of the areas that are emerging and trending, if you will? is is largely due to the other things that we're doing with programs like Restart our academic to close it out, Alina want to get your thoughts and final thoughts on the relationship and how people can find more information And anyone can access that at salesforce dot com slash aws. Thanks for coming on sharing the news. It's the Cube virtual covering.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Maureen Lundergan | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Maureen | PERSON | 0.99+ |
Maureen Lonergan | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Maureen Eileen | PERSON | 0.99+ |
Alyene Schneidewind | PERSON | 0.99+ |
6 | QUANTITY | 0.99+ |
Eileen Schneider Win | PERSON | 0.99+ |
85% | QUANTITY | 0.99+ |
John Fury | PERSON | 0.99+ |
95% | QUANTITY | 0.99+ |
last fall | DATE | 0.99+ |
Salesforce | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Maury | PERSON | 0.99+ |
today | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Walter Wall | PERSON | 0.99+ |
over five years | QUANTITY | 0.98+ |
8 months | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
late last year | DATE | 0.98+ |
Thio | PERSON | 0.97+ |
this year | DATE | 0.97+ |
Alina | PERSON | 0.97+ |
Cube | COMMERCIAL_ITEM | 0.97+ |
ITT | ORGANIZATION | 0.95+ |
Marine | PERSON | 0.95+ |
trailhead | ORGANIZATION | 0.95+ |
pandemics | EVENT | 0.95+ |
trail head go | TITLE | 0.94+ |
Salesforce | TITLE | 0.93+ |
this summer | DATE | 0.93+ |
two clouds | QUANTITY | 0.93+ |
three weeks | QUANTITY | 0.92+ |
pandemic | EVENT | 0.9+ |
five learning paths | QUANTITY | 0.85+ |
last couple months | DATE | 0.84+ |
beginning of this year | DATE | 0.83+ |
couple of years ago | DATE | 0.82+ |
Brazilian | OTHER | 0.81+ |
ws | ORGANIZATION | 0.81+ |
few years ago | DATE | 0.79+ |
DSO | ORGANIZATION | 0.79+ |
19 | QUANTITY | 0.76+ |
Railhead | TITLE | 0.74+ |
two big winning | QUANTITY | 0.74+ |
Cubes Coverage Cube Virtual | COMMERCIAL_ITEM | 0.73+ |
both ends | QUANTITY | 0.71+ |
A day | QUANTITY | 0.68+ |
salesforce | ORGANIZATION | 0.66+ |
Cloud | TITLE | 0.66+ |
Practitioner Essentials | TITLE | 0.63+ |
Invent 2020 | EVENT | 0.63+ |
2020 | TITLE | 0.63+ |
pet | EVENT | 0.62+ |
Virtual Park | COMMERCIAL_ITEM | 0.56+ |
Trail | PERSON | 0.56+ |
reinvent 2020 | EVENT | 0.54+ |
Hemanth Manda, IBM Cloud Pak
(soft electronic music) >> Welcome to this CUBE Virtual Conversation. I'm your host, Rebecca Knight. Today, I'm joined by Hermanth Manda. He is the Executive Director, IBM Data and AI, responsible for Cloud Pak for Data. Thanks so much for coming on the show, Hermanth. >> Thank you, Rebecca. >> So we're talking now about the release of Cloud Pak for Data version 3.5. I want to explore it for, from a lot of different angles, but do you want to just talk a little bit about why it is unique in the marketplace, in particular, accelerating innovation, reducing costs, and reducing complexity? >> Absolutely, Rebecca. I mean, this is something very unique from an IBM perspective. Frankly speaking, this is unique in the marketplace because what we are doing is we are bringing together all of our data and AI capabilities into a single offering, single platform. And we have continued, as I said, we made it run on any cloud. So we are giving customers the flexibility. So it's innovation across multiple fronts. It's still in consolidation. It's, in doing automation and infusing collaboration and also having customers to basically modernize to the cloud-native world and pick their own cloud which is what we are seeing in the market today. So I would say this is a unique across multiple fronts. >> When we talk about any new platform, one of the big concerns is always around internal skills and maintenance tasks. What changes are you introducing with version 3.5 that does, that help clients be more flexible and sort of streamline their tasks? >> Yeah, it's an interesting question. We are doing a lot of things with respect to 3.5, the latest release. Number one, we are simplifying the management of the platform, made it a lot simpler. We are infusing a lot of automation into it. We are embracing the concept of operators that are not open shelf has introduced into the market. So simple things such as provisioning, installation, upgrades, scaling it up and down, autopilot management. So all of that is taken care of as part of the latest release. Also, what we are doing is we are making the collaboration and user onboarding very easy to drive self service and use the productivity. So overall, this helps, basically, reduce the cost for our customers. >> One of the things that's so striking is the speed of the innovation. I mean, you've only been in the marketplace for two and a half years. This is already version 3.5. Can you talk a little bit about, about sort of the, the innovation that it takes to do this? >> Absolutely. You're right, we've been in the market for slightly over two and a half years, 3.5's our ninth release. So frankly speaking, for any company, or even for startups doing nine releases in 2.5 years is unheard of, and definitely unheard of at IBM. So we are acting and behaving like a startup while addressing the go to market, and the reach of IBM. So I would say that we are doing a lot here. And as I said before, we're trying to address the unique needs of the market, the need to modernize to the cloud-native architectures to move to the cloud also while addressing the needs of our existing customers, because there are two things we are trying to focus, here. First of all, make sure that we have a modern platform across the different capabilities in data and AI, that's number one. Number two is also how do we modernize our existing install base. We have six plus billion dollar business for data and AI across significant real estates. 
We're providing a platform through Cloud Pak for Data to those existing install base and existing customers to more nice, too. >> I want to talk about how you are addressing the needs of customers, but I want to delve into something you said earlier, and that is that you are behaving like a startup. How do you make sure that your employees have that kind of mindset that, that kind of experimental innovative, creative, resourceful mindset, particularly at a more mature company like IBM? What kinds of skills do you try to instill and cultivate in your, in your team? >> That's a very interesting question, Rebecca. I think there's no single answer, I would say. It starts with listening to the customers, trying to pay detailed attention to what's happening in the market. How competent is it reacting. Looking at the startups, themselves. What we did uniquely, that I didn't touch upon earlier is that we are also building an open ecosystem here, so we position ourselves as an open platform. Yes, there's a lot of IBM unique technology here, but we also are leveraging open source. We are, we have an ecosystem of 50 plus third party ISVs. So by doing that, we are able to drive a lot more innovation and a lot faster because when you are trying to do everything by yourself, it's a bit challenging. But when you're part of an open ecosystem, infusing open source and third party, it becomes a lot easier. In terms of culture, I just want to highlight one thing. I think we are making it a point to emphasize speed over being perfect, progress over perfection. And that, I think, that is something net new for IBM because at IBM, we pride ourselves in quality, scalability, trying to be perfect on day one. I think we didn't do that in this particular case. Initially, when we launched our offense two and a half years back, we tried to be quick to the market. Our time to market was prioritized over being perfect. But now that is not the case anymore, right? I think we will make sure we are exponentially better and those things are addressed for the past two and one-half years. >> Well, perfect is the enemy of the good, as we know. One of the things that your customers demand is flexibility when building with machine learning pipeline. What have you done to improve IBM machine learning tools on this platform? >> So there's a lot of things we've done. Number one, I want to emphasize our building AI, the initial problem that most of our customers concerned about, but in my opinion, that's 10% of the problem. Actually deploying those AI models or managing them and covering them at scales for the enterprise is a bigger challenge. So what we have is very unique. We have the end-to-end AI lifecycle, we have tools for all the way from building, deploying, managing, governing these models. Second is we are introducing net new capabilities as part of a latest release. We have this call or this new service called WMLA, Watson Machine Learning Accelerator that addresses the unique challenges of deep learning capabilities, managing GPUs, et cetera. We are also making the auto AI capabilities a lot more robust. And finally, we are introducing a net new concept called Federator Learning that allows you to build AI across distributed datasets, which is very unique. I'm not aware of any other vendor doing this, so you can actually have your data distributed across multiple clouds, and you can build an aggregated AI model without actually looking at the data that is spread across these clouds. 
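The federated approach described here is usually implemented as some form of federated averaging: each site trains on its own data, and only model parameters travel to be aggregated, so raw records never leave their location. The sketch below is a generic, minimal illustration in plain Python, not the Cloud Pak for Data Federated Learning API; the two simulated sites, their data, and the single-weight linear model are made-up assumptions for clarity.

```python
# Generic federated-averaging sketch: each "cloud" trains on its own data,
# and only the model weight travels to be aggregated; raw records never
# leave their site. Illustrative only, not an IBM Cloud Pak for Data API.
import random

def make_site_data(n, slope=3.0, noise=0.5):
    """Stand-in for one site's private dataset: y is roughly slope * x."""
    return [(x, slope * x + random.uniform(-noise, noise))
            for x in (random.uniform(0, 5) for _ in range(n))]

def local_update(weight, data, lr=0.01, epochs=5):
    """One site's local training pass; only the updated weight is shared."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (weight * x - y) * x   # dMSE/dw for y_hat = w * x
            weight -= lr * grad
    return weight

# Two sites whose data stays put; different sizes on purpose.
sites = [make_site_data(200), make_site_data(50)]

global_w = 0.0
for round_num in range(10):                   # federated rounds
    local_weights = [local_update(global_w, d) for d in sites]
    total = sum(len(d) for d in sites)
    # The FedAvg step: average the returned weights, weighted by site size.
    global_w = sum(w * len(d) for w, d in zip(local_weights, sites)) / total

print(f"aggregated slope after 10 rounds: {global_w:.2f}")   # close to 3.0
```

Weighting the average by each site's data volume is the standard federated-averaging choice, so larger sites influence the aggregated model proportionally while every site's raw data stays in place.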
>> One of the things that IBM has always been proud of is the way it partners with ISVs and other vendors. Can you talk about how you work with your partners and foster this ecosystem of third-party capabilities that integrate into the platform? >> Yes, it's always a challenge. I mean, for this to be a platform, as I said before, you need to be open and you need to build an ecosystem. And so we made that a priority since day one, and we have 53 third party ISVs today. It's a chicken and egg problem, Rebecca, because you need to obviously showcase success and make it a priority for your partners to onboard and work with you closely. So we obviously invest, we co-invest with our partners, and we take them to market. We have different models. We have a tactical relationship with some of our third party ISVs; we also have a strategic relationship. So we partner with them depending on their ability to partner with us, and we go invest and make sure that we are not only integrating them technically, but also integrating with them from a go-to-market perspective. >> I wonder if you can talk a little bit about the current environment that we're in. Of course, we're all living through a global health emergency in the form of the COVID-19 pandemic. So much of the knowledge work is being done from home. It is being done remotely. Teams are working asynchronously over different kinds of digital platforms. How have you seen these changes affect your team at IBM? What new kinds of capabilities and collaborations, what kinds of skills, have you seen your team have to gain, quite quickly, in this environment? >> Absolutely. I think historically, IBM had quite a portion of our workforce working remotely, so we are used to this, but not at the scale that the current situation has compelled us to. So we made a lot more investments earlier this year in digital technologies, whether it is Zoom and WebEx or trying to use digital tools that help us coordinate and collaborate effectively. So part of it is technical, right? Part of it is also a cultural shift. And that came all the way from our CEO, in terms of making sure that we have the necessary processes in place to ensure that our employees are not getting burnt out, that they're being productive and effective. And so a combination of, what I would say, technical investments plus process and leadership initiatives helped us essentially embrace the changes that we've seen today. >> And I want you to close us out here. Talk a little bit about the future, both for Cloud Pak for Data, but also for the companies and clients that you work with. What do you see in the next 12 to 24 months changing, in terms of how we have re-imagined the future of work? I know you said this was already the ninth release. You've only been in the marketplace for not even three years. That's incredible innovation and speed. Talk a little bit about changes you see coming down the pike. >> So I think everything that we have done is going to get amplified and accelerated as we move forward: the shift to cloud, embracing AI, adopting AI into business processes to automate and amplify new business models, collaboration, and, to a certain extent, consolidation of the different offerings into platforms. So all of this, I obviously see being accelerated, and that acceleration will continue as we move forward.
And the real challenge I see with our customers and all the enterprises is, I see them in two buckets. There's one bucket which is resisting change, that likes to stick to the old concepts, and there's one bucket of enterprises who are embracing the change and moving forward, and actually accelerating this transformation and change. I think that second bucket will be successful over the next one to five years. If you're in the other bucket, you're going to miss out, and that gap is getting amplified and accelerated as we speak. >> So for those ones in the bucket that are resistant to the change, how do you get them onboard? I mean, this is classic change management that they teach at business schools around the world. But what is some advice that you would have for those who are resisting the change? >> So, again, frankly speaking, we at IBM are going through that transition, so I can speak from experience. >> Rebecca: You're drinking the Kool-Aid. >> Yeah. I think one way to address this is basically to take one step at a time, as opposed to completely revolutionizing the way you do your business. You can transform your business one step at a time while keeping the end objective as your end goal. And I just want to highlight that with Cloud Pak for Data, that's exactly what we are enabling, because what we do is we enable you to actually run anywhere you like. So if most of your systems, most of your data and your models and analytics, are on-premise, you can actually start your journey there while you plan for the future of a public cloud or a managed service. So my advice is pretty simple. You start the journey, but you don't need to do it as a big bang. It can be a journey, it can be a gradual transformation, but you need to start the journey today. If you don't, you're going to miss out. >> Baby steps. Hey, Hemanth Manda, thank you so much for joining us for this Virtual CUBE Conversation. >> Thank you very much, Rebecca. >> I'm Rebecca Knight, stay tuned for more of theCUBE Virtual. (soft electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rebecca | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Hermanth | PERSON | 0.99+ |
Hemanth Manda | PERSON | 0.99+ |
10% | QUANTITY | 0.99+ |
two and a half years | QUANTITY | 0.99+ |
nine releases | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Hermanth Manda | PERSON | 0.99+ |
Second | QUANTITY | 0.99+ |
IBM Data | ORGANIZATION | 0.99+ |
one bucket | QUANTITY | 0.99+ |
2.5 years | QUANTITY | 0.99+ |
ninth release | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
50 plus | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
over two and a half years | QUANTITY | 0.98+ |
five years | QUANTITY | 0.98+ |
two buckets | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
First | QUANTITY | 0.97+ |
three years | QUANTITY | 0.97+ |
WMLA | ORGANIZATION | 0.97+ |
COVID-19 pandemic | EVENT | 0.96+ |
Kool-Aid | ORGANIZATION | 0.96+ |
Watson Machine Learning Accelerator | ORGANIZATION | 0.96+ |
Cloud Pak for Data | TITLE | 0.96+ |
single platform | QUANTITY | 0.96+ |
24 months | QUANTITY | 0.96+ |
one thing | QUANTITY | 0.95+ |
one | QUANTITY | 0.95+ |
Zoom | ORGANIZATION | 0.95+ |
WebEx | ORGANIZATION | 0.94+ |
Number two | QUANTITY | 0.92+ |
day one | QUANTITY | 0.9+ |
Cloud Pak | TITLE | 0.9+ |
single offering | QUANTITY | 0.89+ |
version 3.5 | OTHER | 0.87+ |
12 | QUANTITY | 0.87+ |
one step | QUANTITY | 0.86+ |
53 third party | QUANTITY | 0.84+ |
two and a half years back | DATE | 0.84+ |
single answer | QUANTITY | 0.81+ |
year | QUANTITY | 0.8+ |
nine | OTHER | 0.79+ |
3.5 | OTHER | 0.78+ |
Cloud Pak for Data version 3.5 | TITLE | 0.76+ |
one way | QUANTITY | 0.74+ |
Number one | QUANTITY | 0.74+ |
six plus billion dollar | QUANTITY | 0.7+ |
party | QUANTITY | 0.61+ |
one-half years | QUANTITY | 0.61+ |
past two | DATE | 0.57+ |
3.5 | TITLE | 0.56+ |
version | QUANTITY | 0.56+ |
Cloud Pak | ORGANIZATION | 0.52+ |
Learning | OTHER | 0.46+ |
CUBE | ORGANIZATION | 0.43+ |
Cloud | COMMERCIAL_ITEM | 0.4+ |
Thomas Henson and Chhandomay Mandal, Dell Technologies | Dell Technologies World 2020
>>From around the globe, it's theCUBE, with digital coverage of Dell Technologies World, the Digital Experience, brought to you by Dell Technologies. >>Welcome to theCUBE's coverage of Dell Technologies World 2020, the Digital Experience. I'm Lisa Martin, and I'm pleased to welcome back a Cube alumni and a new Cube member to the program today. Chhandomay Mandal is back with us, Director of Solutions Marketing for Dell Technologies. It's great to see you at Dell Technologies World, even though we're very socially distant. >>Happy to be back. Thank you, Lisa. >>And Thomas Henson is joining us for the first time, Global Business Development Manager for AI and Analytics. Thomas, welcome to theCUBE. >>I am excited to be here. It's my first virtual Cube. >>Yeah, well, you better make it a good one. All right. As I said, we're talking about AI, and so much has changed, Chhandomay. The last time I saw you, we were probably sitting a lot closer together. So much has changed in the last six, seven months, but a lot has changed with the adoption of AI. Thomas, kick us off. What are some of the big things fueling AI adoption right now? >>Yeah, I would have to say the two biggest things right now are, as we look at accelerated compute, and by accelerated compute we're not just talking about the continuation of Moore's law, but how in data analytics we're actually doing more processing now with GPUs, which give us faster insights. And so now we have the ability to get quicker insights in jobs that, you know, may have taken weeks to months, as long as we were measuring them. And then the second portion is when we start to talk about the innovation going on in the software and framework world, right? So no longer do you have to know C++ or a lower-level language. You can actually do it in Python and even pull it off of GitHub. And it's all part of that open source community. So we're seeing more and more folks in the field of data science and deep learning that can actually implement some code. And then we've got faster compute to be able to process that. >>Chhandomay, what are your thoughts? >>I think what I want to add is the explosive growth of data, and that's actually fueling the AI adoption. Think of all the devices we have; the IoT and edge devices are pumping data into the pipeline, high-resolution satellite imagery, all the social media generating data. All of this data is actually helping the adoption of AI, because now we have very granular data to train the AI models and make the AI models much better. So the combination of both the data and the power of GPU-powered servers, coupled with the efficient AI software and tools, is fueling the AI growth that we're seeing today. >>Chhandomay, one of the things that we've known for a while now is that for AI to be valuable, it's about extracting value from that data. You talked about the massive explosion in data, but yet we know we've been talking about AI for a long time, for decades, and initiatives can fail. What can Dell Technologies do now to help companies have successful AI projects? >>Yeah, so as you were saying, Lisa, what we're seeing is the companies are trying to adopt AI technologies to drive value and extract value from their data sets. Now, the way it needs to be framed is, there is a business challenge that customers are trying to solve. The business challenge gets transformed into a data science problem.
The data scientists are going to work with the AI technology and train the models on it. That data science problem gets to a data science solution, and then it needs to be mapped to a production deployment as a business solution. What happens a lot of the time is that companies do not plan for the transition from the small-scale proof of concept that data scientists are playing with, like a smaller set of data, to when it goes to the large production deployment dealing with terabytes of data. Now, that's where we come in. At Dell Technologies, we have end-to-end solutions for AI in the customer's journey, starting from proof of concept to production, and it is all seamless and very scalable. >>So some of the challenges there are just starting with the iterations. Thomas, question for you as business development manager: those folks that Chhandomay talked about, the data scientists, the business, how are you helping them come together from the beginning, so that when the POC is initiated, it actually can go on the right trajectory to be successful? >>No, that's a great point. And just to kind of build off of what Chhandomay was talking about, you know, we call it that last mile, right? Like, hey, I've got a great POC, how do I get into production? Well, if you have executive sponsorship and it's like, hey, everybody was on board, but it's gonna take six months to a year, it's like, whoa, you're gonna lose some momentum. So where we help our customers is, you know, by partnering with them to show them how to build, from an IT and infrastructure perspective, what that AI architecture looks like, right? So we have multiple solutions around that, and at the end of the day, just like Chhandomay was saying, you know, we may start off with a project that maybe it's only half a terabyte, maybe it's 10 terabytes, but once you go into production, if it turns out to be three petabytes, four petabytes, nobody really has the infrastructure built unless they built on those solid practices. And that's where our solutions come in. So we can go from small-scale laboratory all the way to large-scale production without having to move any of that data, right? So, you know, at the heart of that is PowerScale, giving you that ability to scale your data, and no more data migration, so that you can handle one POC or multiple POCs as those models continue to improve as you start to move into production. >>And Chhandomay, I'm sticking with you first. Oh, sorry, go ahead. >>So I was going to add that, just like Thomas said, right, if you are a data scientist, you are working with these data science workstations, but getting the data from Dell EMC PowerScale, the scale-out platform. And as it is growing from POC to large-scale production, data can stay in place with the PowerScale platform. You can add nodes, and it can grow to petabytes. And you can add in not just the workstations, but also our Dell PowerEdge servers, our switches, building out our Ready Solutions for AI for your production, giving a very seamless experience from the data scientist to IT. >>So Chhandomay, I'll stick with you then. I'm curious to know, in the last six to seven months, since 2020 has gone in a very different direction than we all would have predicted at our last Dell Technologies World together, what are you seeing, Chhandomay, in terms of acceleration, or maybe different industries? What are our customers' needs, how have they changed, I guess I should say, in 2020? >>So in 2020 we're seeing the adoption of AI even more rapidly. If you think about customers ranging from, say, the media and entertainment industry, to the customer services of any organization, to healthcare and life sciences with lots of genome analysis going on, in all of these places where we're dealing with large datasets, we're seeing a lot of adoption and faster processing of AI technologies, given all the research that these biosciences organizations are doing. Thomas, I know you are working with such customers, so can you give us a little bit more of an example there? >>Yes, one of the areas, you know, we're talking about 2020, one of the things that we're seeing more and more is just the expansion of... just look at the need for customer support, right? There are more folks working remotely, there are more folks that are learning remotely. I know my child is going through virtual school. So think about your IT organization and how many calls you're having now to expand. And so this is a great area where we're starting to see innovation within AI and model building, to be able to have, you know, let's call it the next generation of chatbots, right? You can actually build these models off the data to augment those support systems, because you have two choices, right? You can either expand out your call center, for we're not sure how long, or you can use AI and analytics to help augment, to help maybe answer some of those first baseline questions. The great thing about customers who are choosing PowerScale and Dell Technologies as their partner is they already have the resources to be able to hold on to that data that's gonna help them train those models. >>So, Thomas, whenever we're talking about data, the explosion, it brings to mind compliance, protection, security. We've seen ransomware really skyrocket in 2020. Just the other week the VA was hit. I think there was also a social media hit, Facebook, Instagram, TikTok, 235 million users, because there was an unsecured cloud database. So that vector is expanding. How can you help customers accelerate their AI projects while ensuring compliance and protection and security of that data? >>Really, that's the sweet spot for PowerScale. We're talking with customers, right, you know, built on OneFS with all the security features in mind. And I, too, came from the analytics world, so I remember in the early days of Hadoop where, you know, as a software developer, we didn't need security, right? We were doing research and stuff, but then when we took it to the customer and we were pushing to production, it was, what about all the security features we needed? The same thing for artificial intelligence, right? We want to make sure that we're putting those security features and compliances in. And that's where, from an AI architecture perspective, by starting with OneFS at the heart of that solution, you can know that you're protecting for all the enterprise features that you need, whether it be compliance, data strategy, or backup and recovery as well. >>So when we're talking about big data volumes, Chhandomay, we have to talk about the hyperscalers. Talk to us about, you know, they each offer, Azure, AWS, Google Cloud, hundreds of AI services. So how does Dell help customers use the public cloud, the data that's created outside of it, and the right AI services to extract that value? >>Yeah. Now, as you mentioned, all of these hyperscalers, they differentiate with their offerings in AI, ML, and deep learning technologies, right? And as a customer, you want to leverage the best of breed of all the cloud has to offer and not be stuck with one particular cloud provider. However, we're talking about terabytes of data, right? So if you are happy with, say, doing service A from a cloud provider, say Google, but you want to move to take advantage of another service from, say, Azure, it comes with a very high expense, a migration risk, and the time it will take to move the data itself. Now, that's not good, right? As the customer, you should be able to leverage the best of breed cloud services for AI, and for that matter, for anything across the board. Now, how we help customers is, you can have all of your data in, say, a managed cloud service provider, running on PowerScale, but then you can connect from this managed cloud service provider directly to any of the hyperscalers. You can connect to AWS, Azure, Google Cloud, and with the in-place analytics that PowerScale offers, you can run those clouds' AI services directly on that data, simultaneously, from all three. And I'll add one more thing, right? These deep learning technologies need GPU-powered servers, right? And even within one cloud, it is not a homogeneous environment. Sometimes you'll find US East has the GPU-powered servers, but you are in the West, and the same for other providers. Now, with Dell Technologies Cloud PowerScale for multi-cloud, PowerScale is sitting outside of those hyperscalers, connected directly to any of them, and then you can burst into different clouds, take advantage of spot instances, and leverage all the GPUs, not from one particular service provider, but across all of those hyperscalers. So those are some examples of the work we're doing in the multi-cloud world for AI. >>So that's data created outside the public cloud, PowerScale for multi-cloud. But Thomas, what about data that's created inside the cloud? How does Dell help with that? >>Yes. So this year, we actually released a solution in conjunction with GCP. So within Google Cloud, you can have PowerScale, OneFS, right? And so that's a native feature. So it goes through all the compliance and all the features of being a part of GCP natively, so it counts towards your credits and your Google billing as well, but it's still all the features that you have. And so we've been running some benchmarks; we've got a couple of white papers out there that kind of detail what we can do from an artificial intelligence perspective. Back to Chhandomay's example we were just talking about, being able to use more and more GPUs, so we've done that, to run some of our AI benchmarks against it, and then also jumped into the Hadoop space, because that's one area where, from a PowerScale perspective, customers were really interested, and they have been for years. And then, really, the awesome portion about this is for customers that are looking for a hybrid solution, or maybe it's their first kickoff to it. So back, Lisa, to those compliance features that we were talking about: those are still inherent within that native Google GCP OneFS version, but then also, for customers that have it on-prem, you can use those same features to burst your data into your Isilon cluster, using all the same native tools that you've been using for years within your enterprise. >>Got it. So that's PowerScale for Google Cloud. Chhandomay, back to you, kind of wrapping things up here. What are some of the things that we're going to see next from Dell from an AI solutions perspective? >>Yes. So we are working on many different interesting projects, ranging from the latest NVIDIA servers that they have announced, the DGX A100. And in fact, two weeks ago at GTC, NVIDIA announced new systems with the A100 servers; we're part of that ecosystem. And we are working with the leading solutions to benchmark our AI environments for all the storage, ensuring we are providing all the throughput and scalability that we have to offer. >>Thomas, finishing with you, from the customer perspective: as we talked about, so many changes this year alone. As we approach calendar year 2021, what are some of the things that Dell is doing with its customers, with its partners, the hyperscalers, NVIDIA, for example, that you think customers are really going to be able to use to truly accelerate successful AI projects? >>Yeah. So the first thing I'd like to talk about is what we're doing with the DGX A100. So this month at GTC you saw our solution, a reference architecture for the DGX A100 plus PowerScale. So you talk about speed and how we can move customers' insights: some of the numbers that we're seeing off of that are really amazing, right? And so this gives the customers the ability to still take all the features and use Isilon and OneFS like they have in the past, but now, combined with the speed of the A100, be able to speed up how fast they're building out those deep learning models. And then secondly, with that, it gives them the ability to scale, too. So there are some features inherent within this reference architecture that allow you to make more use of it, right? So bring more data scientists and more modelers to the GPUs, because that's one thing you don't see, data scientists turning GPUs away, right? They're always like, hey, you know, this project here needs a GPU. And so, from a PowerScale, OneFS perspective, we want to be able to make sure that we're supporting that, so that as that data continues to grow, which we're seeing is one of the large factors whenever we're talking about artificial intelligence, the scale of the data, we want them to be able to continue to build out that data consolidation area for all these multiple different workloads that are coming in. >>Excellent, Thomas. Thanks for sharing that. Hopefully next time we get to see you guys in person, and we can talk about a customer who has done something very successful with you guys. Chhandomay, always great to talk to you. Thank you both for joining us. >>Thank you. >>Thank you. >>For Chhandomay Mandal and Thomas Henson, I'm Lisa Martin. You're watching theCUBE's coverage of Dell Technologies World 2020.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Thomas | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Thomas Henson | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Lisa | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Asia | LOCATION | 0.99+ |
10 terabytes | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Sean | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
six months | QUANTITY | 0.99+ |
C plus plus | TITLE | 0.99+ |
Python | TITLE | 0.99+ |
two weeks ago | DATE | 0.99+ |
second portion | QUANTITY | 0.99+ |
China | LOCATION | 0.99+ |
three petabytes | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
three | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
four petabytes | QUANTITY | 0.98+ |
ORGANIZATION | 0.98+ | |
first time | QUANTITY | 0.98+ |
Chhandomay Mandal | PERSON | 0.98+ |
May Mandel | PERSON | 0.98+ |
half a terabyte | QUANTITY | 0.98+ |
11 area | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
both | QUANTITY | 0.97+ |
235 million users | QUANTITY | 0.97+ |
two choices | QUANTITY | 0.97+ |
Moore | PERSON | 0.97+ |
one | QUANTITY | 0.97+ |
Deep Learning Technologies | ORGANIZATION | 0.95+ |
first kickoff | QUANTITY | 0.94+ |
100 solvers | QUANTITY | 0.94+ |
petabytes | QUANTITY | 0.94+ |
US | LOCATION | 0.93+ |
GTC | LOCATION | 0.93+ |
A 100 | COMMERCIAL_ITEM | 0.92+ |
English | OTHER | 0.92+ |
decades | QUANTITY | 0.91+ |
1st. 2nd 0 | QUANTITY | 0.91+ |
first thing | QUANTITY | 0.91+ |
two large kills | QUANTITY | 0.9+ |
ORGANIZATION | 0.9+ | |
this month | DATE | 0.9+ |
Hadoop | TITLE | 0.9+ |
today | DATE | 0.89+ |
US East | LOCATION | 0.89+ |
terabytes | QUANTITY | 0.88+ |
Mondal | PERSON | 0.88+ |
D. | COMMERCIAL_ITEM | 0.88+ |
first baseline | QUANTITY | 0.87+ |
secondly | QUANTITY | 0.86+ |
two biggest things | QUANTITY | 0.86+ |
100 plus | COMMERCIAL_ITEM | 0.86+ |
Technologies World 2020 | EVENT | 0.85+ |
Get Hub | TITLE | 0.84+ |
Del | PERSON | 0.84+ |
G C. P. | ORGANIZATION | 0.83+ |
Susie Wee, Mandy Whaley and Eric Thiel, Cisco DevNet | Accelerating Automation with DevNet 2020
>>From around the globe, it's theCUBE, presenting Accelerating Automation with DevNet, brought to you by Cisco. >>Hello and welcome to theCUBE. I'm John Furrier, your host. We've got a great conversation and virtual event, Accelerating Automation with DevNet, Cisco DevNet. And of course, we've got the Cisco brain trust here. Cube alumni Susie Wee, Senior Vice President, GM and also CTO of Cisco DevNet and ecosystem success, CX, all that great stuff. Mandy Whaley, who's the senior director of DevNet certifications, and Eric Thiel, director of developer advocacy. Susie, Mandy, Eric, great to see you. Thanks for coming on. >>Great to see you, John. >>So we're not in person. We can't be at the DevNet Zone. We can't be on site doing DevNet Create, all the great stuff we've been doing in the past three years. We're virtual, theCUBE virtual. Thanks for coming on. Susie, I gotta ask you, because we've been talking since years ago when you started this mission, and the success has been awesome. But DevNet Create has brought on a whole other connective tissue to the DevNet community. This ties into the theme of accelerating automation with DevNet, because you said to me, I think four years ago, everything should be a service, or XaaS as it's called, and automation plays a critical role. Could you please share your vision? Because this is really important, and still only 5 to 10% of the enterprises have containerized things. So there's a huge growth curve coming with development and programmability. What's your vision? >>Yeah, absolutely. I mean, what we know is that more and more businesses are coming online, I mean, they're all online, but they're growing into the cloud, they're growing in new areas, we're dealing with security, everyone's dealing with the pandemic, there's so many things going on. But what happens is there's an infrastructure that all of this is built on, and that infrastructure has networking, it has security, it has all of your compute and everything that's in there. And what matters is, how can you take a business application and tie it to that infrastructure? How can you take, you know, customer data? How can you take business applications? How can you connect up the world securely and then be able to really satisfy everything that businesses need? And in order to do that, you know, the whole new tool that we've always talked about is that the network is programmable, the infrastructure is programmable, and you don't need just apps riding on top, but now they get to use all of that power of the infrastructure to perform even better. And in order to get there, what you need to do is automate everything. You can't configure networks manually, you can't be manually figuring out policies, but you want to use that agile infrastructure in which you can really use automation. You can rise to higher-level business processes and tie all of that up and down the stack by leveraging automation. >>You remember a few years ago when DevNet Create first started, I interviewed Todd Nightingale, and we were talking about Meraki. You know, not to get in the weeds, but, you know, switches and hubs and wireless. But if you look at what we were talking about then, this is kind of what's going on now. And we were just recently, I think our last physical event was Cisco Live Europe in Barcelona, before all the COVID hit, and you had the massive cloud surge and scale happening right when the pandemic hit. And even now, more than ever, the cloud scale, the modern apps, the momentum hasn't stopped, because there's more pressure now to continue addressing more innovation at scale, because of the pressure to do that, because businesses have to stay alive. Get your thoughts on, um, what's going on in your world? Because you were there in person; now we're six months in, and scale is huge. >>We are, yeah, absolutely. And what happened is, as all of our customers, as businesses around the world, as we ourselves all dealt with, how do we run a business from home? You know, how do we keep people safe, how do we keep people at home, and how do we work? And then it turns out, you know, business keeps rolling, but we've had to automate even more, because you have to go home and then figure out, how, from home, can I make sure that my IT infrastructure is automated? How, from home, can I make sure that every employee is out there working safely and securely? You know, things like call center workers, which had to go into physical locations and be in kind of, you know, blocked-off rooms to really be secure with their company's information, they had to work from home. So we had to extend business applications to people's homes in countries, you know, well around the world, but also in India, where it was actually not, you know, they didn't have rules to let people work from home in these areas. So then what we had to do was automate everything and make sure that we could administer, you know, all of our customers could administer these systems from home. So that puts extra stress on automation, it puts extra stress on our customers' digital transformation, and it just forced them to, you know, automate and digitally transform quicker. And they had to, because you couldn't just go into a server room and tweak your servers. You have to figure out how to automate all of that. >>You know, and we're all still there, all in that environment today. You know, one of the hottest trends before the pandemic was observability, Kubernetes, serverless, microservices. So those things, again, all DevOps. And, you know, you guys got some acquisitions: you bought ThousandEyes, you got a new one you just bought recently, Portshift, to raise the game in security, Kubernetes, all these microservices. So observability is super hot, but then people go work at home, as you mentioned. How do you think about observability? What do you observe? The network is under huge pressure; I mean, it's crashing on people's Zooms and WebExes and education, a huge amount of network pressure. How are people adapting to this on the upside? How are you guys looking at what's being programmed? What are some of the things that you're seeing with use cases around this programmability challenge and observability challenge? It's a huge deal. >>Yeah, absolutely. And, you know, going back to Todd Nightingale, right, back when we talked to Todd, before, he had Meraki, and he had designed this simplicity, this ease of use, this cloud-managed, you know, doing everything from one central place. And now he has this whole enterprise and cloud business. So he is now applying that at a bigger scale, for Cisco and for our customers. And he is building in the observability and the dashboards and the automation and the APIs into all of it. But when we take a look at what our customers needed, again, they had to build it all in. And what happened was, how your network was doing, how secure your infrastructure was, how well you could enable people to work from home, and how well you could reach customers: all of that used to be an IT conversation. It became a CEO and a board-level conversation. So all of a sudden, CEOs were actually, you know, calling on the heads of IT and the CIO and saying, you know, how is our VPN connectivity? Is everybody working from home? How many people are connected and able to work, and what's their productivity? So all of a sudden, all these things that were really infrastructure IT stuff became a board-level conversation. And, you know, once again, at first everybody was panicked and just figuring out how to get people working, but now what we've seen in all of our customers is that they're now building in automation and digital transformation and these architectures, and that gives them a chance to build in that observability, you know, looking for those events, the dashboards, you know. So it really has been fantastic to see what our customers are doing and what our partners are doing to really rise to that next level. >>Susie, I know you gotta go, but real quick: describe what accelerating automation with DevNet means. >>Well, you know, we've been working together on DevNet and the vision of infrastructure programmability and everything for quite some time. And the thing that's really happened is, yes, you need to automate, but yes, it takes people to do that, and you need the right skill sets and the programmability. So a networker can't just be a networker; a networker has to be a network automation developer. And so it is about people, and it is about bringing infrastructure expertise together with software expertise and letting people run things. Our DevNet community has risen to this challenge. People have jumped in, they've gotten their certifications. We have thousands of people getting certified. You know, we have Cisco getting certified, we have individuals, we have partners, you know, they're just really rising to the occasion. So accelerating automation, while it is about going digital, it's also about people rising to the level of, you know, being able to put infrastructure and software expertise together to enable this next chapter of business applications, of cloud-directed businesses and cloud growth. So it actually is about people just as much as it is about automation and technology. >>And we got DevNet Create right around the corner, virtual, unfortunately, being virtual, Susie. Thank you for your time. We're gonna dig into those people challenges with Mandy and Eric. Thank you for coming on. I know you got to go, but stay with us. We're gonna dig in with Mandy and Eric. Thanks. >>Thank you so much. Thank you. Thanks, John. Okay. >>Mandy, you heard Susie, it's about people. And one of the things that's close to your heart, that you've been driving as senior director of DevNet certifications, is getting people leveled up. I mean, the demand for skills, cybersecurity, network programmability, automation, network design, solution architect, cloud, multi-cloud design, these are new skills that are needed. Can you give us the update on what you're doing to help people get into the acceleration of automation game? >>Oh, yes, absolutely. You know, what we've been seeing is a lot of those business drivers that Susie was mentioning; those are what's accelerating a lot of the technology changes, and that's creating new job roles, or new needs on existing job roles where they need new skills. We are seeing customers, partners, people in our community really starting to look at, you know, things like DevSecOps engineer, network automation engineer, network automation developer, which Susie mentioned, and looking at how these fit into their organization, the problems that they solve in their organization, and then how do people build the skills to be able to take on these new job roles, or add that job role to their current scope and broaden out and take on new challenges. >>Eric, I want to go to you for a quick second on this piece of getting the certifications. First, before you get started, describe what your role is, as director of developer advocacy, because that's always changing and evolving. What's the state of it now? Because with COVID and people working at home, they have more time to context-switch and get some certifications, and they can code more. What's your role? >>Absolutely. So it's interesting, it definitely is changing a lot. Historically, a lot of the focus for my team has been on those outward events, so going to the DevNet Creates, the Cisco Lives, and helping the community connect and helping share technical information with them, doing hands-on workshops and really getting people into, how do you really start solving these problems? So that's had to pivot quite a bit. Obviously, with Cisco Live US, we pivoted very quickly to a virtual event when conditions changed, and we were able to actually connect, as we found out, with a much larger audience. So, you know, as opposed to in person, where you're bound by the parameters of how big the convention center is, we were actually able to reach a worldwide audience with our DevNet Day that was kind of attached onto Cisco Live, and we got great feedback from the audience that now we're able to get that same enablement out to so many more people that otherwise might not have been able to make it. But to your broader question of what my team does: so that's one piece of it, getting that information out to the community. As part of that, there's a lot of other things we do as well. We're always helping out building new sandboxes, new learning labs, things like that, that people can come and get whenever they're looking for it, out on the DevNet site. And then my team also looks after communities such as the Cisco Learning Network, where there's a huge community that has historically been there to support people working on their Cisco certifications. We've seen a huge shift now in that group, in that all of the people that have been there for years are now looking at the DevNet certifications and helping other people that are trying to get on board with programmability. They're taking a lot of those same community enablement skills and propping up the community with, you know, helping answer questions, helping provide content. They've moved now into the DevNet space as well and are helping people with that sort of certifications. So it's great seeing the community come along and really see that. >>I gotta ask you on the trends around automation: what skills and what developer patterns are you seeing with automation? Is there anything in particular? Obviously, network automation's been around for a long time, Cisco's been a leader in that, but as you move up the stack, there's modern applications being built.
Do you see any patterns or trends around what is accelerating automation? What are people learning? >>Yeah, absolutely. So you mentioned observability was big before COVID, and we actually really saw that amplified during COVID. So a lot of people have come to us looking for insights: how can I get that better observability now that we need it, now that we're virtual? So that's actually been a huge uptick, and we've seen a lot of people that weren't necessarily out looking for things before that are now figuring out, how can I do this at scale? I think one good example that Susie was talking about is the VPN example, and we actually had a number of SEs in the Cisco community that had customers dealing with that very thing, where they very quickly had to ramp up, and one in particular actually wrote a bunch of automation to go out and measure all of the different parameters that IT departments might care about on their firewalls, things that you didn't normally look at. In the old days you would size your firewalls based on, you know, assuming a certain number of people working from home, and when that number went to 100%, things like licenses started coming into play, where they needed to make sure they had the right capacity in platforms that weren't necessarily designed for it. So one of the SEs actually wrote a bunch of code to go out, using open source tooling, to monitor and alert on these things, and then published it so the whole community could go out and get a copy of it and try it out in their own environment. And we saw a lot of interest around that, and people trying to figure out, okay, now I can take that, I can adapt it into what I need to see for my observability.
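As a rough illustration of the kind of script being described, the sketch below polls a firewall for its remote-access session count and warns when usage approaches the licensed capacity. The REST endpoint, field names, and thresholds here are hypothetical placeholders, not the code the SE actually published and not any specific firewall's API; a real version would use whatever monitoring interface the platform exposes and send the alert to a pager or chat channel instead of printing it.

```python
import time
import requests

# Assumed values for illustration only: replace with your platform's real
# monitoring endpoint and your actual licensed session count.
FIREWALL_API = "https://firewall.example.com/api/v1/vpn/stats"
LICENSE_LIMIT = 5000       # assumed number of licensed remote-access sessions
WARN_RATIO = 0.8           # warn when 80% of capacity is in use

def check_vpn_capacity():
    resp = requests.get(FIREWALL_API, timeout=10)
    resp.raise_for_status()
    stats = resp.json()
    active = stats["active_sessions"]        # assumed field name in the response
    ratio = active / LICENSE_LIMIT
    if ratio >= WARN_RATIO:
        # In practice this would page an on-call engineer or post to chat ops.
        print(f"WARNING: {active} VPN sessions in use ({ratio:.0%} of licensed capacity)")
    return active

if __name__ == "__main__":
    while True:
        check_vpn_capacity()
        time.sleep(300)                      # poll every five minutes
```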
>>That's great. Mandy, I want to get your thoughts on this, too, because as automation continues to scale, it's gonna be a focus. People are at home. And you guys had a lot of content online; you recorded every session in the DevNet Zone. Learning is going on, sometimes linearly and non-linearly. You've got the certifications, which is great. That's key. Great success there. People are interested. But what other learnings are you seeing? What are people doing? What are the top trends? >>Yeah. So what we're seeing is, like you said, people are at home, they've got time, they want to advance their skill set. And just like any kind of learning, people want choice. They want to be able to choose what matches the time that's available and their learning style. So we're seeing some people who want to dive into full online study groups with mentors leading them through a study plan, and we have two new expert-led study groups like that. We're also seeing whole teams at different companies who want to do an immersive learning experience together, with projects and office hours and things like that, and we have a new offer that we've been putting together for people who want those kinds of team experiences, called Automation Boot Camp. And then we're also seeing individuals who want to be able to dive into a topic, do a hands-on lab, get some skills, go do the rest of their day of work, and then come back the next day. And so we have really modular, self-driven, hands-on learning through the DevNet Fundamentals course, which is available through DevNet. And then there are also people who are saying, I just want to use the technology, I like to experiment, and then go read the instructions, read the manual, do the deeper learning. And so they're spending a lot of time in our DevNet sandbox, trying out different technologies, Cisco technologies with open source technologies, getting hands-on and building things. And there are three areas where we're seeing a lot of interest in specific technologies. One is around SD-WAN; there's a huge interest in people skilling up there because of all the reasons that we've been talking about. Security is a focus area, where people are dealing with new scale, new kinds of threats, and having to deal with them in new ways. And then automating their data center using infrastructure-as-code type principles. So those are three areas where we're seeing a lot of interest, and you'll be hearing more about that at DevNet Create.
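As a small, hypothetical sketch of that infrastructure-as-code idea applied to network gear, the example below keeps the desired configuration as plain data and uses the open-source Netmiko library to push it to each device over SSH. The hostnames, credentials, and config lines are assumptions made up for the illustration; a production workflow would typically pull the desired state from version control, check for drift, and report results rather than just printing them.

```python
from netmiko import ConnectHandler  # open-source SSH library commonly used for device automation

# Desired state kept as data (in practice this would live in a Git repository).
# Device names, account, and config lines below are invented for the example.
DESIRED_STATE = {
    "edge-sw-01": ["interface Loopback100", " description managed-by-automation"],
    "edge-sw-02": ["interface Loopback100", " description managed-by-automation"],
}

def apply_config(host, config_lines):
    device = {
        "device_type": "cisco_ios",
        "host": host,
        "username": "automation",   # assumed service account
        "password": "REDACTED",
    }
    conn = ConnectHandler(**device)
    try:
        result = conn.send_config_set(config_lines)  # push the declared lines
        conn.save_config()                           # persist to startup config
    finally:
        conn.disconnect()
    return result

if __name__ == "__main__":
    for hostname, lines in DESIRED_STATE.items():
        print(f"--- {hostname} ---")
        print(apply_config(hostname, lines))
```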
>>Awesome. Eric and Mandy, if you guys can wrap up the Accelerating Automation with DevNet package and virtual event here, and also tee up DevNet Create, because DevNet Create has been a very kind of grassroots, organically building momentum over the years. Again, it's super important because it's now the app world coming together with networking, you know, end-to-end programmability, and with everything as a service that you guys are doing, everything with APIs, you can only imagine the enablement that's gonna happen at Create. Can you share a quick summary on accelerating automation with DevNet and tee up DevNet Create? Mandy, we'll start with you. >>Yes, I'll go first, and then Eric can close this out. Just like we've been talking about with you at every DevNet event over the past years, you know, DevNet is bringing APIs across our whole portfolio, up and down the stack, and accelerating automation with DevNet. Susie mentioned the people aspect of that, the people skilling up and how that transforms teams. And I think that it's all connected in how businesses are being pushed on their transformation because of current events. That's also a great opportunity for people to advance their careers and take advantage of some of that quickly changing landscape. And so when I think about accelerating automation with DevNet, it's about the DevNet community, it's about people getting those new skills, and all the creativity and problem solving that will be unleashed by that community with those new skills. >>Eric, take us home. Accelerating automation, DevNet and DevNet Create, a lot of developer action going on in cloud native right now. Your thoughts? >>Absolutely. I think it's exciting. I mentioned the transition to virtual for DevNet Day this year for Cisco Live, and we're seeing we're able to leverage it even further with Create this year. So whereas it used to be, you know, confined by the walls that we were within for the event, now we're actually able to do things like adding a Start Now track for people that want to be there. They want to be a developer, a network automation developer, for instance, and we've now got a track just for them where they can get started and start learning some of the skills they'll need, even if some of the other technical sessions are a little bit deeper than what they're ready for. And I love that we're able to bring that together with the experienced community that we usually have from across the industry, bringing us all kinds of innovative talks, talking about ways that they're leveraging technology, leveraging the cloud, to do new and interesting things to solve their business challenges. So I'm really excited to bring that whole mix together, as well as getting some of our business units together too, and talk straight from their engineering departments: what are they doing, what are they seeing, what are they thinking about when they're building new APIs into their platforms, and what are the problems they're hoping that customers will be able to solve with them? So I think, together, seeing all of that, and then bringing the community together from all of our usual channels, so, like I said, Cisco Learning Network, we've got a ton of community coming together, sharing their ideas and helping each other grow those skills. I see nothing but acceleration ahead of us for automation. >>Awesome. Thanks so much. >>Can I add one more thing? >>Yeah. >>I was just going to say, the other really exciting thing about Create this year, with the virtual nature of it, is that it's happening in three regions. And, you know, we're so excited to see the people joining from all the different regions, and content and speakers in each region stepping up to have things personalized to their area, to their community. And so that's a whole new experience for DevNet Create that's going to be fantastic this year. >>You know, that's a great way to close it out and just put the final bow on it, by saying that you guys have always been successful with great content focused on the people in the community. I think now, during this virtual DevNet, virtual DevNet Create, virtual theCUBE, I think we're learning new things. People are working in teams and groups and sharing content. We're gonna learn new things, we're gonna try new things, and ultimately people will rise up and will be resilient. I think when you have this kind of opportunity, it's really fun, and we'll ride the wave with you guys. So thank you so much for taking the time to come on theCUBE and talk about your awesome accelerating automation with DevNet, and we're definitely looking forward to it. Thank you. >>Thank you so much. >>Happy to be here. >>Okay, I'm John Furrier for theCUBE, virtual, here in Palo Alto studios, doing the remote content, theCUBE Virtual, until we're face to face. Thank you so much for watching, and we'll see you at DevNet Create. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mandy | PERSON | 0.99+ |
Susie | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Susan | PERSON | 0.99+ |
Eric Field | PERSON | 0.99+ |
Eric | PERSON | 0.99+ |
Susie Mandy | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Eric Thiel | PERSON | 0.99+ |
Susie Wee | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Mandy Whaley | PERSON | 0.99+ |
Suzy | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
Cisco Brain Trust | ORGANIZATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Cisco Learning Network | ORGANIZATION | 0.99+ |
Todd Nightingale | PERSON | 0.99+ |
three regions | QUANTITY | 0.99+ |
5 | QUANTITY | 0.99+ |
four years ago | DATE | 0.98+ |
one piece | QUANTITY | 0.98+ |
pandemic | EVENT | 0.98+ |
three areas | QUANTITY | 0.98+ |
Todd | PERSON | 0.98+ |
One | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
Wadley | PERSON | 0.97+ |
today | DATE | 0.96+ |
one | QUANTITY | 0.96+ |
one central place | QUANTITY | 0.96+ |
Cisco DevNet | ORGANIZATION | 0.95+ |
10% | QUANTITY | 0.95+ |
Cisco Technologies | ORGANIZATION | 0.95+ |
Sisco live | ORGANIZATION | 0.94+ |
one good example | QUANTITY | 0.91+ |
Vice President | PERSON | 0.91+ |
Cube | COMMERCIAL_ITEM | 0.91+ |
three | QUANTITY | 0.9+ |
Onley | PERSON | 0.87+ |
one more thing | QUANTITY | 0.87+ |
Cove | ORGANIZATION | 0.86+ |
about 1000 eyes | QUANTITY | 0.86+ |
Cuban | OTHER | 0.86+ |
second | QUANTITY | 0.86+ |
past three years | DATE | 0.85+ |
Justus | PERSON | 0.85+ |
Muraki | PERSON | 0.84+ |
next day | DATE | 0.84+ |
Devon | PERSON | 0.83+ |
two new expert lead study | QUANTITY | 0.81+ |
Europe | LOCATION | 0.8+ |
2020 | DATE | 0.79+ |
DevNet | ORGANIZATION | 0.79+ |
few years ago | DATE | 0.79+ |
Suzie Wee, Mandy Whaley, and Eric Thiel V2
>>From around the globe, it's theCUBE, presenting Accelerating Automation with DevNet, brought to you by Cisco. >>Hello and welcome to theCUBE. I'm John Furrier, your host. We've got a great conversation and a virtual event, Accelerating Automation with DevNet, Cisco DevNet. And of course, we've got the Cisco brain trust here. Cube alumni Susie Wee, Vice President, Senior Vice President, GM and also CTO of Cisco DevNet and Ecosystem Success CX, all that great stuff. Mandy Whaley, who's the Director, Senior Director of DevNet Certifications. Eric Thiel, Director of Developer Advocacy. Susie, Mandy, Eric, great to see you. Thanks for coming on. >>Great to see you, John. >>So we're not in person. We don't, can't be at the DevNet Zone. We can't be on site doing DevNet Create, all the great stuff we've been doing in the past three years. We're virtual, theCUBE virtual. Thanks for coming on. Uh, Susie, I gotta ask you, because you know, we've been talking years ago when you started this mission, and just the success you've had has been awesome. But DevNet Create has brought on a whole nother connective tissue to the DevNet community. This ties into the theme of Accelerating Automation with DevNet, because you said to me, I think four years ago, everything should be a service, or XaaS as it's called, and automation plays a critical role. Um, could you please share your vision? Because this is really important, and still only 5 to 10% of the enterprises have containerized things. So there's a huge growth curve coming with development and programmability. What's your, what's your vision? >>Yeah, absolutely. I mean, what we know is that as more and more businesses are coming online, I mean, they're all online, but as they're growing into the cloud, as they're growing in new areas, as we're dealing with security, as everyone's dealing with the pandemic, there's so many things going on. But what happens is there's an infrastructure that all of this is built on, and that infrastructure has networking, it has security, it has all of your compute and everything that's in there. And what matters is how can you take a business application and tie it to that infrastructure? How can you take, you know, customer data? How can you take business applications? How can you connect up the world securely and then be able to really satisfy everything that businesses need? And in order to do that, you know, the whole new tool that we've always talked about is that the network is programmable, the infrastructure is programmable, and you don't need just apps riding on top, but now they get to use all of that power of the infrastructure to perform even better. And in order to get there, what you need to do is automate everything. You can't configure networks manually. You can't be manually figuring out policies, but you want to use that agile infrastructure in which you can really use automation. You can rise to higher-level business processes and tie all of that up and down the stack by leveraging automation. >>You know, I remember a few years ago when DevNet Create first started, I interviewed Todd Nightingale, and we were talking about Meraki. You know, not to get in the weeds, but you know, switches and hubs and wireless. But if you look at what we were talking about then, this is kind of what's going on now. And we were just recently, I think our last physical event was Cisco Europe in Barcelona before all the COVID hit, and you had the massive cloud surge and scale happening going on right when the pandemic hit.
And even now, more than ever, the cloud scale, the modern apps, the momentum hasn't stopped, because there's more pressure now to continue addressing more innovation at scale, because the businesses need to stay alive. I just want to get your thoughts on, um, what's going on in your world, because you were there in person. Now we're six months in, scale is huge. >>We are, yeah, absolutely. And what happened is, as all of our customers, as businesses around the world, as we ourselves all dealt with, how do we run a business from home? You know, how do we keep people safe? How do we keep people at home, and how do we work? And then it turns out, you know, business keeps rolling, but we've had to automate even more, because you have to go home and then figure out, how from home can I make sure that my IT infrastructure is automated? How from home can I make sure that every employee is out there and working safely and securely? You know, things like call center workers, which had to go into physical locations and be in kind of, you know, just, you know, blocked-off rooms to really be secure with their company's information, they had to work from home. So we had to extend business applications to people's homes in countries, you know, well around the world, but also in India, where it was actually not, you know, they didn't have rules to let people work from home in these areas. So then what we had to do was automate everything and make sure that we could administer, you know, all of our customers could administer these systems from home. So that puts extra stress on automation. It puts extra stress on our customers' digital transformation, and it just forced them to, you know, automate, digitally transform quicker. And they had to, because you couldn't just go into a server room and tweak your servers. You had to figure out how to automate all of that. And we're still there, all in that environment today. >>You know, one of the hottest trends before the pandemic was observability, uh, Kubernetes, microservices. So those things, again, all DevOps. And you know, you guys got some acquisitions, you bought ThousandEyes. Um, you got a new one, you just bought recently PortShift, to raise the game in security, Kubernetes, all these microservices. So observability, super hot. But then people go work at home, as you mentioned. How do you, what are you observing? The network is under huge pressure. I mean, it's crashing on people's Zooms and WebExes and education, a huge amount of network pressure. How are people adapting to this on the app side? How are you guys looking at what's being programmed? What are some of the things that you're seeing with use cases around this programmability challenge and observability challenges? It's a huge deal. >>Yeah, absolutely. And, you know, going back to Todd Nightingale, right? You know, back when we talked to Todd before, he had Meraki, and he had designed this simplicity, this ease of use, this cloud-managed, you know, doing everything from one central place. And now he has Cisco's entire enterprise and cloud business. So he is now applying that at that bigger scale, for Cisco and for our customers. And he is building in the observability and the dashboards and the automation and the APIs into all of it. But when we take a look at what our customers needed, again, they had to build it all in, um, they had to build in.
And what happened was, how your network was doing, how secure your infrastructure was, how well you could enable people to work from home, and how well you could reach customers, all of that used to be an IT conversation. It became a CEO and a board-level conversation. So all of a sudden CEOs were actually, you know, calling on the heads of IT and the CIO and saying, you know, how is our VPN connectivity? Is everybody working from home? How many people are, you know, connected and able to work, and what's their productivity? So all of a sudden, all these things that were really infrastructure IT stuff became a board-level conversation. And, you know, once again, at first everybody was panicked and just figuring out how to get people working. But now what we've seen in all of our customers is that they're now building in automation and digital transformation and these architectures, and that gives them a chance to build in that observability, you know, looking for those events, the dashboards, you know. So it really has been fantastic to see what our customers are doing and what our partners are doing to really rise to that next level. >>Susie, I know you gotta go, but real quick, um, describe what accelerating automation with DevNet means. >>Well, you know, we've been working together on DevNet and the vision of the infrastructure programmability and everything for quite some time. And the thing that's really happened is, yes, you need to automate, but yes, it takes people to do that, and you need the right skill sets and the programmability. So a networker can't be a networker. A networker has to be a network automation developer. And so it is about people, and it is about bringing infrastructure expertise together with software expertise and letting people run things. Our DevNet community has risen to this challenge. People have jumped in, they've gotten their certifications. We have thousands of people getting certified. You know, we have, you know, Cisco getting certified. We have individuals, we have partners, you know, they're just really rising to the occasion. So accelerating automation, while it is about going digital, it's also about people rising to the level of, you know, being able to put infrastructure and software expertise together to enable this next chapter of business applications, of cloud-directed businesses and cloud growth. So it actually is about people just as much as it is about automation and technology. >>And we got DevNet Create right around the corner, virtual. Unfortunately, it won't be in person, it will be virtual. Susie, thank you for your time. We're gonna dig into those people challenges with Mandy and Eric. Thank you for coming on. I know you got to go, but stay with us. We're gonna dig in with Mandy and Eric. Thanks. >>Thank you so much. Thank you. Thanks, John. Okay. >>Mandy, you heard Susie, it's about people, and one of the things that's close to your heart you've been driving as Senior Director of DevNet Certifications is getting people leveled up. I mean, the demand for skills: cybersecurity, network programmability, automation, network design, solution architect, cloud, multi-cloud design. These are new skills that are needed. Can you give us the update on what you're doing to help people get into the acceleration of automation game? >>Oh, yes, absolutely. You know, what we've been seeing is a lot of those business drivers that Susie was mentioning. Those are what's accelerating
a lot of the technology changes, and that's creating new job roles or new needs on existing job roles where they need new skills. We are seeing, uh, customers, partners, people in our community really starting to look at, you know, things like DevSecOps engineer, network automation engineer, network automation developer, which Susie mentioned, and looking at how these fit into their organization, the problems that they solve in their organization, and then how do people build the skills to be able to take on these new job roles, or add that job role to their current, um, scope and broaden out and take on new challenges. >>Eric, I want to go to you for a quick second on this, um, uh, piece of getting the certifications. Um, first, before you get started, describe what your role is, Director of Developer Advocacy, because that's always changing and evolving. What's the state of it now? Because with COVID, people are working at home, they have more time to context switch and get some certifications, and they can code more. What's your role? >>Absolutely. So it's interesting. It definitely is changing a lot. Historically, a lot of focus for my team has been on those outward events. So going to the DevNet Creates, the Cisco Lives, and helping the community connect and help share technical information with them, doing hands-on workshops and really getting people into how do you really start solving these problems? So that's had to pivot quite a bit. Obviously, Cisco Live US, we pivoted very quickly to a virtual event when conditions changed, and we were able to actually connect, as we found out, with a much larger audience. So you know, as opposed to in person, where you're bound by the parameters of, you know, how big the convention center is, we were actually able to reach a worldwide audience with our DevNet Day that was kind of attached onto Cisco Live, and we got great feedback from the audience that now we're actually able to get that same enablement out to so many more people that otherwise might not have been able to make it. But to your broader question of, you know, what my team does, so that's one piece of it, is getting that information out to the community. So as part of that, there's a lot of other things we do as well. We're always helping out build new sandboxes, new learning labs, things like that, that they can come and get whenever they're looking for it out on the DevNet site. And then my team also looks after communities such as the Cisco Learning Network, where there's a huge community that has historically been there to support people working on their Cisco certifications. We've seen a huge shift now in that group, that all of the people that have been there for years are now looking at the DevNet certifications and helping other people that are trying to get on board with programmability. They're taking a lot of those same community enablement skills and propping up the community with, you know, helping answer questions, helping provide content. They've moved now into the DevNet space as well, and are helping people with that set of certifications. So it's great seeing the community come along and really see that. >>I gotta ask you on the trends around automation. What skills and what developer patterns are you seeing with automation? Is there anything in particular? Obviously, network automation's been around for a long time. Cisco's been a leader in that. But as you move up the stack, as modern applications are building,
do you see any patterns or trends around what is accelerating automation? What are people learning? >>Yeah, absolutely. So you mentioned observability was big before COVID, and we actually really saw that amplified during COVID. So a lot of people have come to us looking for insights: how can I get that better observability now that we need it, now that we're virtual? So that's actually been a huge uptick, and we've seen a lot of people that weren't necessarily out looking for things before that are now figuring out, how can I do this at scale? I think one good example, that Susie was talking about, is the VPN example, and we actually had a number of SEs in the Cisco community that had customers dealing with that very thing, where they very quickly had to ramp up, and one in particular actually wrote a bunch of automation to go out and measure all of the different parameters that IT departments might care about about their firewalls, things that you didn't normally look at. In the old days you would size your firewalls based on, you know, assuming a certain number of people working from home. And when that number went to 100%, things like licenses started coming into play, where they needed to make sure they had the right capacity in their platforms that they weren't necessarily designed for. So one of the SEs actually wrote a bunch of code to go out, use some open source tooling to monitor and alert on these things, and then published it so the whole community could go out and get a copy of it, try it out in their own environment. And we saw a lot of interest around that, in trying to figure out, okay, now I can take that, I can adapt it to what I need to see for my observability. >>That's great. Mandy, I want to get your thoughts on this, too, because as automation continues to scale, um, it's gonna be a focus. People are at home. And you guys had a lot of content online for it, recorded every session in the DevNet Zone. Learning is going on, sometimes linearly and non-linearly. You've got the certifications, which is great. That's key, great success there. People are interested. But what other learnings are you seeing? What are people, um, doing? What's the top trends? >>Yeah. So what we're seeing is, like you said, people are at home, they've got time, they want to advance their skill set. And just like any kind of learning, people want choice. They wanna be able to choose what matches their time that's available and their learning style. So we're seeing some people who want to dive into full online study groups with mentors leading them through a study plan, and we have two new expert-led study groups like that. We're also seeing whole teams at different companies who want to do an immersive learning experience together, with projects and office hours and things like that. And we have a new offer that we've been putting together for people who want those kinds of team experiences, called Automation Bootcamp. And then we're also seeing individuals who want to be able to, you know, dive into a topic, do a hands-on lab, get some skills, go to the rest of the day of their work, and then come back the next day. And so we have really modular, self-driven, hands-on learning through the DevNet Fundamentals course, which is available through DevNet. And then there's also people who are saying, I just want to use the technology. I like to experiment and then go, you know, read the instructions, read the manual, do the deeper learning.
And so they're spending a lot of time in our DevNet sandbox, trying out different technologies, Cisco technologies with open source technologies, getting hands-on and building things. And three areas where we're seeing a lot of interest in specific technologies: one is around SD-WAN, there's a huge interest in people skilling up there because of all the reasons that we've been talking about. Security is a focus area where people are dealing with new scale, new kinds of threats, having to deal with them in new ways, and then automating their data center using infrastructure-as-code type principles. So those were three areas where we're seeing a lot of interest, and you'll be hearing more about that at DevNet Create. >>Awesome. Eric and Mandy, if you guys can wrap up the Accelerating Automation with DevNet package and virtual event here, um, and also tee up DevNet Create, because DevNet Create has been a very kind of grassroots, organically building momentum over the years. Again, it's super important because it's now the app world coming together with networking, you know, end-to-end programmability. And with everything as a service that you guys are doing, everything with APIs, I only can imagine the enablement that's gonna create. Can you share the summary real quick on accelerating automation with DevNet and tee up DevNet Create? Mandy, we'll start with you. >>Yes, I'll go first, and then Eric can close this out. Um, so just like we've been talking about with you at every DevNet event over the past years, you know, DevNet's bringing APIs across our whole portfolio and up and down the stack, and accelerating automation with DevNet, Susie mentioned the people aspect of that, the people skilling up and how that transforms teams. And I think that it's all connected in how businesses are being pushed on their transformation because of current events. That's also a great opportunity for people to advance their careers and take advantage of some of that quickly changing landscape. And so when I think about accelerating automation with DevNet, it's about the DevNet community. It's about people getting those new skills and all the creativity and problem solving that will be unleashed by that community with those new skills. >>Eric, take us home here, accelerating automation with DevNet and DevNet Create, a lot of developer action going on in cloud native right now, your thoughts? >>Absolutely. I think it's exciting. I mentioned the transition to virtual for DevNet Day this year for Cisco Live, and we're seeing we're able to leverage it even further with Create this year. So whereas it used to be, you know, confined by the walls that we were within for the event, now we're actually able to do things like we're adding a Start Now track for people that want to be there. They want to be a developer, a network automation developer, for instance. We've now got a track just for them where they can get started and start learning some of the skills they'll need, even if some of the other technical sessions were a little bit deeper than what they were ready for. So I love that we're able to bring that together with the experienced community that we usually do from across the industry, bringing us all kinds of innovative talks, talking about ways that they're leveraging technology, leveraging the cloud to do new and interesting things to solve their business challenges.
So I'm really excited to bring that whole mix together, as well as getting some of our business units together too, and talk straight from their engineering departments. What are they doing? What are they seeing? What are they thinking about when they're building new APIs into their platforms? What problems are they hoping that customers will be able to solve with them? So I think together, seeing all of that and then bringing the community together from all of our usual channels, so, like I said, Cisco Learning Network, we've got a ton of community coming together, sharing their ideas and helping each other grow those skills. I see nothing but acceleration ahead of us for automation. >>Awesome. Thanks so much. >>Can I add one more thing? >>Yeah, I was just going to say the other really exciting thing about Create this year, with the virtual nature of it, is that it's happening in three regions. And, you know, we're so excited to see the people joining from all the different regions, and, uh, content and speakers and the regions stepping up to have things personalized to their area, to their community. And so that's a whole new experience for DevNet Create that's going to be fantastic this year. >>You know, that's what I was going to close out on and just put the final bow on that, by saying that you guys have always been successful with great content focused on the people in the community. I think now, with this virtual DevNet, virtual DevNet Create, virtual theCUBE, I think we're learning new things. People are working in teams and groups and sharing content. We're gonna learn new things, we're gonna try new things, and ultimately people will rise up and will be resilient. I think when you have this kind of opportunity, it's really fun. And we'll ride the wave with you guys. So thank you so much for taking the time to come on theCUBE and talk about your awesome accelerating automation, and definitely looking forward to it. Thank you. >>Thank you so much. >>Happy to be here. >>Okay, I'm John Furrier with theCUBE, virtual here in Palo Alto studios doing the remote content, and we'll stay virtual until we're face to face. Thank you so much for watching. And we'll see you at DevNet Create. Thanks for watching.
Suzie Wee, Mandy Whaley, and Eric Thiel V1
>> Narrator: From around the globe, it's theCUBE. Presenting Accelerating Automation with DevNet. Brought to you by Cisco. >> Hello and welcome to theCUBE. I'm John Furrier, your host. We've got a great conversation and a virtual event, Accelerating Automation with DevNet , Cisco DevNet. And of course we got the Cisco brain trust here. Cube alumni, Susie Wee, Vice President, Senior Vice President, GM, and also CTO of Cisco DevNet and Ecosystem Success CX, all that great stuff. Mandy Whaley, who's the Director, Senior Director of DevNet Certifications, And Eric Thiel, Director of Developer Advocacy, Susie, Mandy, Eric, great to see you. Thanks for coming on. >> Great to see you, John. >> So we're not in person >> It's great to be here. >> We don't, can't be at the DevNet Zone. We can't be on site doing DevNet Create, all the great stuff we've been doing over the past few years. We're virtual, theCUBE virtual. Thanks for coming on. Susie, I got to ask you because you know, we've been talking years ago when you started this mission and just the success you had has been awesome, but DevNet Create has brought on a whole nother connective tissue to the DevNet community. This ties into the theme of accelerating automation with DevNet, because you said to me, I think four years ago, everything should be a service or XaaS as it's called (Susie laughs) and automation plays a critical role. Could you please share your vision because this is really important and still only five to 10% of the enterprises have containerized things. So there's a huge growth curve coming with developing and programmability. What's your vision? >> Yeah, absolutely. I mean, what we know is that as more and more businesses are coming online as, well I mean, they're all online, but as they're growing into the cloud, as they're growing in new areas, as we're dealing with security, as everyone's dealing with the pandemic, there's so many things going on. But what happens is, there's an infrastructure that all of this is built on and that infrastructure has networking, it has security, it has all of your compute and everything that's in there. And what matters is how can you take a business application and tie it to that infrastructure? How can you take, you know, customer data? How can you take business applications? How can you connect up the world securely and then be able to, you know, really satisfy everything that businesses need? And in order to do that, you know, the whole new tool that we've always talked about is that the network is programmable. The infrastructure is programmable, and you don't need just apps riding on top, but now they get to use all of that power of the infrastructure to perform even better. And in order to get there, what you need to do is automate everything. You can't configure networks manually. You can't be manually figuring out policies, but you want to use that agile infrastructure in which you can really use automation, you can rise to higher level business processes and tie all of that up and down the stack by leveraging automation. >> You know, I remember a few years ago when DevNet Create first started, I interviewed Todd Nightingale, and we were talking about Meraki, you know, not to get in the weeds about you know, switches and hubs and wireless. But if you look at what we were talking about then, this is kind of what's going on now. And we were just recently, I think our last physical event was at Cisco Europe in Barcelona before all the COVID hit. And you had >> Susie: Yeah. 
>> The massive cloud surge and scale happening going on, right when the pandemic hit. And even now more than ever the cloud scale, the modern apps, the momentum hasn't stopped because there's more pressure now to continue addressing more innovation at scale because the pressure to do that, because the businesses need to stay alive. >> Absolutely, yeah. >> I just want to get your thoughts on what's going on in your world, because you were there in person. Now we're six months in, scale is huge. >> We are. Yeah, absolutely. And what happened is as all of our customers, as businesses around the world, as we ourselves all dealt with, how do we run a business from home? You know, how do we keep people safe? How do we keep people at home and how do we work? And then it turns out, you know, business keeps rolling, but we've had to automate even more because you have to go home and then figure out how from home can I make sure that my IT infrastructure is automated? How from home can I make sure that every employee is out there and working safely and securely? You know, things like call center workers, which had to go into physical locations and be in kind of, you know, just, you know, blocked off rooms to really be secure with their company's information. They had to work from home. So we had to extend business applications to people's homes in countries like, you know, well around the world, but also in India where it was actually not, you know, not, they didn't have rules to let people work from home in these areas. So then what we had to do was automate everything and make sure that we could administer, you know, all of our customers could administer these systems from home. So that put extra stress on automation. It put extra stress on our customer's digital transformation and it just forced them to, you know, automate digitally transform quicker. And they had to, because you couldn't just go into a server room and tweak your servers, you had to figure out how to automate all of that. And we're still in that environment today. >> You know one of the hottest trends before the pandemic was observability, Kubernetes microservices. So those things, again, all DevOps and, you know, you guys got some acquisitions, you've bought ThousandEyes, you got a new one. You just bought recently PortShift to raise the game in security, Kuber and all these microservices. So observability super hot, but then people go work at home as you mentioned. How do you (chuckles) >> Yeah What are you observing? The network is under a huge pressure. I mean, it's crashing on people's Zooms and Web Ex's and education, huge amount of network pressure. How are people adapting to this in the app side? How are you guys looking at the, what's being programmed? What are some of the things that you're seeing with use cases around this programmability challenge and observability challenge that's such a huge deal? >> Yeah, absolutely. And you know, going back to Todd Nightingale, right? You know, back when we talked to Todd before, he had Meraki and he had designed this simplicity, this ease of use, this cloud managed, you know, doing everything from one central place. And now he has Cisco's entire enterprise and cloud business. So he is now applying that at that bigger, at that bigger scale for Cisco and for our customers. And he is building in the observability and the dashboards and the automation and the APIs into all of it. But when we take a look at what our customers needed is again, they had to build it all in. 
They had to build in. And what happened was how your network was doing, how secure your infrastructure was, how well you could enable people to work from home and how well you could reach customers. All of that used to be an IT conversation. It became a CEO and a board-level conversation. So all of a sudden, CEOs were actually, you know, calling on the Heads of IT and the CIO and saying, you know, "How's our VPN connectivity? Is everybody working from home? How many people are connected and able to work and what's their productivity?" So all of a sudden, all these things that were really infrastructure IT stuff became a board level conversation and, you know, once again, at first everybody was panicked and just figuring out how to get people working, but now what we've seen in all of our customers is that they are now building in automation and digital transformation and these architectures, and that gives them a chance to build in that observability, you know, looking for those events, the dashboards, you know, so it really has been fantastic to see what our customers are doing and what our partners are doing to really rise to that next level. >> Susie, I know you got to go, but real quick, describe what accelerating automation with DevNet means. >> (giggles)Well, you've been, you know, we've been working together on DevNet and the vision of the infrastructure programmability and everything for quite some time and the thing that's really happened is yes, you need to automate, but yes, it takes people to do that and you need the right skill sets and the programmability. So a networker can't be a networker. A networker has to be a network automation developer. And so it is about people and it is about bringing infrastructure expertise together with software expertise and letting people run things. Our DevNet community has risen to this challenge. People have jumped in, they've gotten their certifications. We have thousands of people getting certified. You know, we have, you know, Cisco getting certified. We have individuals, we have partners, you know, they're just really rising to the occasion. So accelerating automation, while it is about going digital. It's also about people rising to the level of, you know, being able to put infrastructure and software expertise together to enable this next chapter of business applications, of, you know, cloud directed businesses and cloud growth. So it actually is about people, just as much as it is about automation and technology. >> And we got DevNet Create right around the corner, Virtual, unfortunately, won't be in person, but will be virtual. Susie, thank you for your time. We're going to dig into those people challenges with Mandy and Eric. Thank you for coming on. I know you've got to go, but stay with us. We're going to dig in with Mandy and Eric. Thanks. >> Thank you so much. Have fun. >> Thank you. >> Thanks John. >> Okay. Mandy, you heard Susie, it's about people. And one of the things that's close to your heart, you've been driving as Senior Director of DevNet Certifications, is getting people leveled up. I mean the demand for skills, cybersecurity, network programmability, automation, network design, solution architect, cloud, multi-cloud design. These are new skills that are needed. Can you give us the update on what you're doing to help people get into the acceleration of automation game? >> Oh yes, absolutely. You know, what we've been seeing is a lot of those business drivers, that Susie was mentioning. 
Those are what's accelerating a lot of the technology changes and that's creating new job roles or new needs on existing job roles where they need new skills. We are seeing customers, partners, people in our community really starting to look at, you know, things like DevSecOps engineer, network automation engineer, network automation developer, which Susie mentioned, and looking at how these fit into their organization, the problems that they solve in their organization. And then how do people build the skills to be able to take on these new job roles or add that job role to their current scope and broaden out and take on new challenges. >> Eric, I want to go to you for a quick second on this piece of getting the certifications. First, before we get started, describe what your role is as Director of Developer Advocacy, because that's always changing and evolving. What's the state of it now because with COVID people are working at home, they have more time to contact Switch, and get some certifications and yet they can code more. What's your role? >> Absolutely. So it's interesting. It definitely is changing a lot. A lot of our, historically a lot of focus for my team has been on those outward events. So going to the DevNet Creates, the Cisco Lives and helping the community connect and to help share technical information with them, doing hands on workshops and really getting people into how do you really start solving these problems? So that's had to pivot quite a bit. Obviously Cisco Live US, we pivoted very quickly to a virtual event when conditions changed. And we're able to actually connect as we found out with a much larger audience. So, you know, as opposed to in person where you're bound by the parameters of, you know, how big the convention center is, we were actually able to reach a worldwide audience with our DevNet Day that was kind of attached onto Cisco Live. And we got great feedback from the audience that now we were actually able to get that same enablement out to so many more people that otherwise might not have been able to make it, but to your broader question of, you know, what my team does. So that's one piece of it is getting that information out to the community. So as part of that, there's a lot of other things we do as well. We are always helping out build new sandboxes, new learning labs, things like that, that they can come and get whenever they're looking for it out on the DevNet site. And then my team also looks after communities, such as the Cisco Learning Network where there's a huge community that has historically been there to support people working on their Cisco certifications. We've seen a huge shift now in that group, that all of the people that have been there for years are now looking at the DevNet certifications and helping other people that are trying to get onboard with programmability. They're taking a lot of those same community enablement skills and propping up the community with helping you answer questions, helping provide content. They've moved now into the DevNet space as well, and are helping people with that set of certifications. So it's great seeing the community come along and really see that. >> I got to ask you on the trends around automation, what skills and what developer patterns are you seeing with automation? Is there anything in particular, obviously network automation has been around for a long time. 
Cisco has been leader in that, but as you move up the stack as modern applications are building, do you see any patterns or trends around what is accelerating automation? What are people learning? >> Yeah, absolutely. So you mentioned observability was big before COVID and we actually really saw that amplified during COVID. So a lot of people have come to us looking for insights. How can I get that better observability now that we need it while we're virtual. So that's actually been a huge uptick and we've seen a lot of people that weren't necessarily out looking for things before that are now figuring out' how can I do this at scale? And I think one good example that Susie was talking about the VPN example. And we actually had a number of SEs in the Cisco community that had customers dealing with that very thing where they very quickly had to ramp up. And one in particular actually wrote a bunch of automation to go out and measure all of the different parameters that IT departments might care about, about their firewalls, things that you didn't normally look at in the old days. You would size your firewalls based on, you know, assuming a certain number of people working from home. And when that number went to 100%, things like licenses started coming into play, where they needed to make sure they have the right capacity in their platforms that they weren't necessarily designed for. So one of the SEs actually wrote a bunch of code to go out, use some open source tooling to monitor and alert on these things and then published it, so the whole community could go out and get a copy of it, try it out in their own environment. And we saw a lot of interest around that in trying to figure out, okay, now I can take that and I can adapt it to what I need to see for my observability. >> That's great. Mandy, I want to get your thoughts on this too, because as automation continues to scale, it's going to be a focus and people are at home and you guys had a lot of content online for you recorded every session in the DevNet Zone. Learning's going on, sometimes linearly and non linearly. You got the certifications, which is great. That's key, great success there. People are interested, but what other learnings are you seeing? What are people doing? What's the top top trends? >> Yeah. So what we're seeing is like you said, people are at home, they've got time. They want to advance their skillset. And just like any kind of learning, people want choice they want to be able to choose what matches their time that's available and their learning style. So we're seeing some people who want to dive into full online study groups with mentors leading them through a study plan. And we have two new expert-led study groups like that. We're also seeing whole teams at different companies who want to do an immersive learning experience together with projects and office hours and things like that. And we have a new offer that we've been putting together for people who want those kinds of team experiences called Automation Bootcamp. And then we're also seeing individuals who want to be able to, you know, dive into a topic, do a hands-on lab, get some skills, go to the rest of the day of do their work and then come back the next day. And so we have really modular self-driven hands-on learning through the DevNet Fundamentals course, which is available through DevNet. And then there's also people who are saying, "I just want to use the technology. 
"I like to experiment and then go, you know, "read the instructions, read the manual, "do the deeper learning." And so they're spending a lot of time in our DevNet sandbox, trying out different technologies, Cisco technologies with open source technologies, getting hands-on and building things. And three areas where we're seeing a lot of interest in specific technologies. One is around SD-WAN. There's a huge interest in people skilling up there because of all the reasons that we've been talking about. Security is a focus area where people are dealing with new scale, new kinds of threats, having to deal with them in new ways. and then automating their data center using infrastructure as code type principles. So those are three areas where we're seeing a lot of interest and you'll be hearing some more about that at DevNet Create. >> Awesome. Eric and Mandy, if you guys can wrap up this Accelerating Automation with DevNet package and virtual event here and also tee up DevNet Create because DevNet Create has been a very kind of grassroots, organically building momentum over the years. And again, it's super important cause it's now the app world coming together with networking, you know, end to end programmability and with everything as a service that you guys are doing, everything with APIs, I only can imagine the enablement that's going to create. >> Mandy: Yeah >> Can you share the summary real quick on Accelerating Automation with DevNet and tee up DevNet Create. Mandy, we'll start with you. >> Yes, I'll go first and then Eric can close this out. So just like we've been talking about with you at every DevNet event over the past years, you know, DevNet's bringing APIs across our whole portfolio, and up and down the stack and Accelerating Automation with DevNet , Susie mentioned the people aspect of that. The people skilling up and how that transforms teams, And I think that it's all connected in how businesses are being pushed on their transformation because of current events. That's also a great opportunity for people to advance their careers and take advantage of some of that quickly changing landscape. And so what I think about Accelerating Automation with DevNet, it's about the DevNet community. It's about people getting those new skills and all the creativity and problem solving that will be unleashed by that community with those new skills. >> Eric, take us home here, Accelerating Automation with DevNet and DevNet Create, a lot of developer action going on in Cloud Native right now, your thoughts. >> Absolutely. I think it's exciting. I mentioned the transition to virtual for DevNet Day this year, for Cisco Live and we're seeing, we're able to leverage it even further with Create this year. So, whereas it used to be, you know, confined by the walls that we were within for the event. Now we're actually able to do things like we're adding the Start Now track for people that want to be there. They want to be a developer, a network automation developer for instance, we've now got a track just for them where they can get started and start learning some of the skills they'll need, even if some of the other technical sessions were a little bit deeper than what they were ready for. So I love that we're able to bring that together with the experienced community that we usually do from across the industry bringing us all kinds of innovative talks, talking about ways that they're leveraging technology, leveraging the cloud to do new and interesting things to solve their business challenges. 
So I'm really excited to bring that whole mix together, as well as getting some of our business units together too and talk straight from their engineering departments. What are they doing? What are they seeing? What are they thinking about when they're building new APIs into their platforms? What problems are they hoping that customers will be able to solve with them? So I think together seeing all of that and then bringing the community together from all of our usual channels. So like I said, Cisco learning network, we've got a ton of community coming together, sharing their ideas and helping each other grow those skills. I see nothing but acceleration ahead of us for automation. >> Awesome. Thanks so much. >> I would >> Go ahead, Mandy. >> Can I add one more thing? >> Add one more thing. >> Yeah, I was just going to say the other really exciting thing about Create this year with the virtual nature of it is that it's happening in three regions and you know, we're so excited to see the people joining from all the different regions and content and speakers and the regions stepping up to have things personalized to their area, to their community. And so that's a whole new experience for DevNet Create that's going to be fantastic this year. >> Yeah, that's it. I was going to close out and just put the final bow on that by saying that you guys have always been successful with great content focused on the people in the community. I think now during, with this virtual DevNet, virtual DevNet create virtual theCUBE virtual, I think we're learning new things. People are working in teams and groups and sharing content, we're going to learn new things. We're going to try new things and ultimately people will rise up and will be resilient. And I think when you have this kind of opportunity, it's really fun. And we'll ride the wave with you guys. >> So thank you so much (Susie laughs) for taking the time to come on theCUBE and talk about your awesome Accelerating Automation and DevNet Create Looking forward to it, thank you. >> Thank you so much, >> All right, thanks a lot. >> Happy to be here. >> Okay, I'm John Furrier with theCUBE virtual here in Palo Alto studios doing the remote content and men, we stay virtual until we're face to face. Thank you so much for watching and we'll see you at DevNet Create. Thanks for watching. (upbeat outro) >> Controller: Okay John, Here we go, John. Here we go. John, we're coming to you in five, four, three, two. >> Hello, and welcome to theCUBE. I'm John Furrier, your host. We've got a great conversation and a virtual event, Accelerating Automation with DevNet, Cisco DevNet. And of course we got the Cisco brain trust here. Cube alumni, Susie Wee, Senior Vice President GM and also CTO at Cisco DevNet and Ecosystem Success CX, all that great stuff. Mandy Whaley, who's the Director, Senior Director of DevNet Certifications, and Eric Thiel, Director of Developer Advocacy. Susie, Mandy, Eric, great to see you. Thanks for coming on. >> Great to see you, John. So we're not in person. >> It's great to be here >> We don't, can't be at the DevNet zone. We can't be on site doing DevNet Create, all the great stuff we've been doing over the past few years. We're virtual, theCUBE virtual. Thanks for coming on. Susie, I got to ask you because you know, we've been talking years ago when you started this mission and just the success you had has been awesome. But DevNet Create has brought on a whole nother connective tissue to the DevNet community. 
This ties into the theme of Accelerating Automation with DevNet, because you said to me, I think four years ago, everything should be a service or XaaS as it's called. And automation plays (Susie laughs) a critical role. Could you please share your vision because this is really important and still only five to 10% of the enterprises have containerized things. So there's a huge growth curve coming with developing and programmability. What's your vision? >> Yeah, absolutely. I mean, what we know is that as more and more businesses are coming online as ,well I mean, they're all online, but as they're growing into the cloud, as they're growing in new areas, as we're dealing with security, as everyone's dealing with the pandemic, there's so many things going on, but what happens is there's an infrastructure that all of this is built on and that infrastructure has networking. It has security. It has all of your compute and everything that's in there. And what matters is how can you take a business application and tie it to that infrastructure? How can you take, you know, customer data? How can you take business applications? How can you connect up the world securely and then be able to, you know, really satisfy everything that businesses need. And in order to do that, you know, the whole new tool that we've always talked about is that the network is programmable. The infrastructure is programmable and you don't need just apps riding on top, but now they get to use all of that power of the infrastructure to perform even better. And in order to get there, what you need to do is automate everything. You can't configure networks manually. You can't be manually figuring out policies, but you want to use that agile infrastructure in which you can really use automation. You can rise to higher level business processes and tie all of that up and down the stack by leveraging automation. >> You know, I remember a few years ago when DevNet Create first started, I interviewed Todd Nightingale and we were talking about Meraki, you know, not to get in the weeds, but you know, switches and hubs and wireless. But if you look at what we were talking about then, this is kind of what's going on now. And we were just recently, I think our last physical event was Cisco Europe in Barcelona before all the COVID hit. And you had this massive cloud surge and scale happening going on right when the pandemic hit. And even now more than ever, the cloud scale, the modern apps, the momentum hasn't stopped because there's more pressure now to continue addressing more innovation at scale because the pressure to do that because the businesses need >> Absolutely. >> to stay alive. I just want to get your thoughts on what's going on in your world, because you were there in person now we're six months in scale is huge. >> We are. Yeah, absolutely. And what happened is, as all of our customers, as businesses around the world, as we ourselves all dealt with, how do we run a business from home? You know, how do we keep people safe? How do we keep people at home and how do we work? And then it turns out, you know, business keeps rolling, but we've had to automate even more because you have to go home and then figure out how from home, can I make sure that my IT infrastructure is automated? 
How from home can I make sure that every employee is out there and working safely and securely, you know, things like call center workers, which had to go into physical locations and be in kind of, you know, just, you know, blocked off rooms to really be secure with their company's information. They had to work from home. So we had to extend business applications to people's homes in countries like, you know, well around the world, but also in India where it was actually not, you know, not, they wouldn't let, they didn't have rules to let people work from home in these areas. So then what we had to do was automate everything and make sure that we could administer, you know, all of our customers could administer these systems from home. So that put extra stress on automation. It put extra stress on our customer's digital transformation and it just forced them to, you know, automate, digitally transform quicker. And they had to, because you couldn't just go into a server room and tweak your servers, you had to figure out how to automate all of that. And we're still all in that environment today. >> You know one of the hottest trends before the pandemic was observability, Kubernetes microservices. So those things, again, all DevOps and you know, you guys got some acquisitions, you bought ThousandEyes, you got a new one. You just bought recently PortShift to raise the game in security, Kuber and all these microservices. So observability is super hot, but then people go work at home as you mentioned. How do you observe, what are you observing? The network is under a huge pressure. I mean, it's crashing on people's Zooms and Web Ex's and education, huge amount of network pressure. How are people adapting to this in the app side? How are you guys looking at the, what's being programmed? What are some of the things that you're seeing with use cases around this programmability challenge and observability challenges? It's a huge deal. >> Yeah, absolutely. And you know, going back to Todd Nightingale, right? You know, back when we talked to Todd before he had Meraki and he had designed this simplicity, this ease of use, this cloud managed, you know, doing everything from one central place. And now he has Cisco's entire enterprise and cloud business. So he is now applying that at that bigger scale for Cisco and for our customers and he is building in the observability and the dashboards and the automation and the APIs into all of it. But when we take a look at what our customers needed is again, they had to build it all in. They had to build in. And what happened was how your network was doing, how secure your infrastructure was, how well you could enable people to work from home and how well you could reach customers. All of that used to be an IT conversation. It became a CEO and a board level conversation. So all of a sudden CEOs were actually, you know, calling on the heads of IT and the CIO and saying, you know, how's our VPN connectivity? Is everybody working from home. How many people are you know, connected and able to work and what's their productivity? So all of a sudden, all these things that were really infrastructure IT stuff became a board level conversation. And, you know once again, at first everybody was panicked and just figuring out how to get people working. 
But now what we've seen in all of our customers is that they are now building in automation and digital transformation and these architectures, and that gives them a chance to build in that observability, you know, looking for those events, the dashboards, you know, so it really has been fantastic to see what our customers are doing and what our partners are doing to really rise to that next level. >> Susie, I know you got to go, but real quick, describe what Accelerating Automation with DevNet means. >> (laughs) Well, you know, we've been working together on DevNet in the vision of the infrastructure programmability and everything for quite some time. And the thing that's really happened is yes, you need to automate, but yes, it takes people to do that and you need the right skill sets and the programmability. So a networker can't be a networker. A networker has to be a network automation developer. And so it is about people and it is about bringing infrastructure expertise together with software expertise and letting people run things. Our DevNet community has risen to this challenge. People have jumped in, they've gotten their certifications. We have thousands of people getting certified. You know, we have, you know, Cisco getting certified. We have individuals, we have partners, you know, they're just really rising to the occasion. So accelerating automation, while it is about going digital, it's also about people rising to the level of, you know, being able to put infrastructure and software expertise together to enable this next chapter of business applications, of you know, cloud directed businesses and cloud growth. So it actually is about people just as much as it is about automation and technology. >> And we got DevNet Create right around the corner virtual, unfortunately won't be in person, but will be virtual. Susie, thank you for your time. We're going to dig into those people challenges with Mandy and Eric. Thank you for coming on. I know got to go, but stay with us. We're going to dig in with Mandy and Eric. Thanks. >> Thank you so much. Have fun. >> Thank you. >> Thanks, John. >> Okay, Mandy, you heard Susie, it's about people. And one of the things that's close to your heart you've been driving is, as senior director of DevNet Certifications is getting people leveled up. I mean the demand for skills, cybersecurity, network programmability, automation, network design, solution architect, cloud multicloud design. These are new skills that are needed. Can you give us the update on what you're doing to help people get into the acceleration of automation game? >> Oh yes, absolutely. You know, what we've been seeing is a lot of those business drivers that Susie was mentioning. Those are what's accelerating a lot of the technology changes and that's creating new job roles or new needs on existing job roles where they need new skills. We are seeing customers, partners, people in our community really starting to look at, you know, things like DevSecOps engineer, network automation engineer, network automation developer which Susie mentioned, and looking at how these fit into their organization, the problems that they solve in their organization. And then how do people build the skills to be able to take on these new job roles or add that job role to their current scope and broaden out and take on new challenges. And this is why we created the DevNet certification. 
Several years ago, our DevNet community, who's been some of those engineers who have been coming into that software and infrastructure side and meeting. They ask us to help create a more defined pathway to create resources, training, all the things they would need to take all those steps to go after those new jobs. >> Eric, I want to go to you for a quick second on this piece of getting the certifications. First, before we get started, describe what your role is as Director of Developer Advocacy, because that's always changing and evolving. What's the state of it now because with COVID people are working at home, they have more time to contact Switch, and get some certifications and yet they can code more. What's your role >> Absolutely. So it's interesting. It definitely is changing a lot. A lot of our, historically a lot of focus for my team has been on those outward events. So going to the DevNet Creates, the Cisco Lives and helping the community connect and to help share technical information with them, doing hands-on workshops and really getting people into how do you really start solving these problems? So that's had to pivot quite a bit. Obviously Cisco Live US, we pivoted very quickly to a virtual event when conditions changed and we were able to actually connect as we found out with a much larger audience. So, you know, as opposed to in-person where you're bound by the parameters of you know, how big the convention center is. We were actually able to reach a worldwide audience with our DevNet Day that was kind of attached onto Cisco Live. And we got great feedback from the audience that now we were actually able to get that same enablement out to so many more people that otherwise might not have been able to make it, but to your broader question of, you know, what my team does. So that's one piece of it is getting that information out to the community. So as part of that, there's a lot of other things we do as well. We were always helping out build new sandboxes new learning labs, things like that, that they can come and get whenever they're looking for it out on the DevNet site. And then my team also looks after communities such as the Cisco Learning Network where there's a huge community that has historically been there to support people working on their Cisco certifications. And we've seen a huge shift now in that group that all of the people that have been there for years are now looking at the DevNet certifications and helping other people that are trying to get on board with programmability, they're taking a lot of those same community enablement skills and propping up the community with, you know, helping answer questions, helping provide content. They've moved now into the DevNet space as well, and are helping people with that set of certifications. So it's great seeing the community come along and really see that. >> Yeah, I mean, it's awesome, and first of all, you guys done a great job. I'm always impressed when we were at physical events in the DevNet Zone, just the learning, the outreach. Again, very open, collaborative, inclusive, and also, you know, you had one-on-one classes and talks to full blown advanced, (sneezes)Had to sneeze there >> Yeah, and that's the point. >> (laughs)That was coming out, got to cut that out. I love prerecords. >> Absolutely. >> That's never happened to me to live by the way. I've never sneezed live on a thousand--. (Eric laughs) >> You're allergic to me. >> We'll pick up. >> It happens. 
>> So Eric, so I got to ask you on the trends around automation, what skills and what developer patterns are you seeing with automation? Is there anything in particular? Obviously network automation has been around for a long time. Cisco has been a leader in that, but as you move up the stack, as modern applications are building, do you see any patterns or trends around what is accelerating automation? What are people learning? >> Yeah, absolutely. So you mentioned observability was big before COVID and we actually really saw that amplified during COVID. So a lot of people have come to us looking for insights. How can I get that better observability now that we need it while we're virtual. So that's actually been a huge uptick. And we've seen a lot of people that weren't necessarily out looking for things before that are now figuring out how can I do this at scale? And I think one good example that Susie was talking about the VPN example. And we actually had a number of SEs in the Cisco community that had customers dealing with that very thing where they very quickly had to ramp up. And one in particular actually wrote a bunch of automation to go out and measure all of the different parameters that IT departments might care about, about their firewalls, things that you didn't normally look at in the old days, you would size your firewalls based on, you know, assuming a certain number of people working from home. And when that number went to 100%, things like licensing started coming into play, where they needed to make sure they had the right capacity in their platforms that they weren't necessarily designed for. So one of the SEs actually wrote a bunch of code to go out, used some open source tooling to monitor and alert on these things and then published it, so the whole community could go out and get a copy of it, try it out in their own environment. And we saw a lot of interest around that in trying to figure out, okay, now I can take that and I can adapt it to what I need to see for my observability. >> That's huge and you know, you brought up this sharing concept. I mean, one of the things that's interesting is you've got more sharing going on. >> Controller: John, let's pause right here. Let's pause right here. I'm going to try and bring Eric and Mandy and everybody out. And then just start right from here to bring Eric and Mandy back in and close up. Stand by Eric just hold tight. >> All right, hold on >> Controller: just for one moment. Hold tight, we got Mandy back >> Controller: Standby. Standby. Standby. Standby, standby, standby. Hold hold hold.
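To make the firewall-capacity automation Eric describes a bit more concrete, here is a minimal, hypothetical sketch of the same pattern: poll a firewall metrics API, compare active VPN sessions against licensed capacity, and alert when work-from-home load approaches the limit. This is not the SE's published code; the endpoint URL, field names, and thresholds are placeholders invented for illustration, and a production version would feed the same check into whatever open source monitoring and alerting stack the team already runs.

```python
# Hypothetical sketch of the firewall-capacity check described above.
# The REST endpoint, field names, and thresholds are placeholders, not an actual Cisco API.
import requests

FIREWALL_API = "https://firewall.example.com/api/v1/metrics"  # placeholder URL
API_TOKEN = "REPLACE_ME"                                      # placeholder credential
LICENSED_VPN_SESSIONS = 5000                                  # assumed license limit
ALERT_THRESHOLD = 0.80                                        # alert at 80% utilization

def check_vpn_capacity():
    """Poll the (hypothetical) firewall metrics endpoint and flag license pressure."""
    resp = requests.get(
        FIREWALL_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    metrics = resp.json()

    active = metrics.get("active_vpn_sessions", 0)  # assumed field name
    utilization = active / LICENSED_VPN_SESSIONS
    if utilization >= ALERT_THRESHOLD:
        # In a real deployment this would page an on-call channel instead of printing.
        print(f"ALERT: VPN sessions at {utilization:.0%} of licensed capacity ({active} active)")
    else:
        print(f"OK: VPN sessions at {utilization:.0%} of licensed capacity")

if __name__ == "__main__":
    check_vpn_capacity()
```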
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Susie | PERSON | 0.99+ |
Eric | PERSON | 0.99+ |
Suzie Wee | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Susie Wee | PERSON | 0.99+ |
Mandy | PERSON | 0.99+ |
Eric Thiel | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Mandy Whaley | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
India | LOCATION | 0.99+ |
Todd Nightingale | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Todd | PERSON | 0.99+ |
Cisco DevNet | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
Cisco Learning Network | ORGANIZATION | 0.99+ |
DevNet | ORGANIZATION | 0.99+ |
Dr. Eng Lim Goh, Joachim Schultze, & Krishna Prasad Shastry | HPE Discover 2020
>> Narrator: From around the globe it's theCUBE, covering HPE Discover Virtual Experience brought to you by HPE. >> Hi everybody. Welcome back. This is Dave Vellante for theCUBE, and this is our coverage of discover 2020, the virtual experience of HPE discover. We've done many, many discoveries, as usually we're on the show floor, theCUBE has been virtualized and we talk a lot at HPE discovers, a lot of storage and server and infrastructure and networking which is great. But the conversation we're going to have now is really, we're going to be talking about helping the world solve some big problems. And I'm very excited to welcome back to theCUBE Dr. Eng Lim Goh. He's a senior vice president of and CTO for AI, at HPE. Hello, Dr. Goh. Great to see you again. >> Hello. Thank you for having us, Dave. >> You're welcome. And then our next guest is Professor Joachim Schultze, who is the Professor for Genomics, and Immunoregulation at the university of Bonn amongst other things Professor, welcome. >> Thank you all. Welcome. >> And then Prasad Shastry, is the Chief Technologist for the India Advanced Development Center at HPE. Welcome, Prasad. Great to see you. >> Thank you. Thanks for having me. >> So guys, we have a CUBE first. I don't believe we've ever had of three guests in three separate times zones. I'm in a fourth time zone. (guests chuckling) So I'm in Boston. Dr. Goh, you're in Singapore, Professor Schultze, you're in Germany and Prasad, you're in India. So, we've got four different time zones. Plus our studio in Palo Alto. Who's running this program. So we've got actually got five times zones, a CUBE first. >> Amazing. >> Very good. (Prasad chuckles) >> Such as the world we live in. So we're going to talk about some of the big problems. I mean, here's the thing we're obviously in the middle of this pandemic, we're thinking about the post isolation economy, et cetera. People compare obviously no surprise to the Spanish flu early part of last century. They talk about the great depression, but the big difference this time is technology. Technology has completely changed the way in which we've approached this pandemic. And we're going to talk about that. Dr. Goh, I want to start with you. You've done a lot of work on this topic of swarm learning. If we could, (mumbles) my limited knowledge of this is we're kind of borrowing from nature. You think about, bees looking for a hive as sort of independent agents, but somehow they come together and communicate, but tell us what do we need to know about swarm learning and how it relates to artificial intelligence and we'll get into it. >> Oh, Dave, that's a great analogy using swarm of bees. That's exactly what we do at HPE. So let's use the of here. When deploying artificial intelligence, a hospital does machine learning of the outpatient data that could be biased, due to demographics and the types of cases they see more also. Sharing patient data across different hospitals to remove this bias is limited, given privacy or even sovereignty the restrictions, right? Like for example, across countries in the EU. HPE, so I'm learning fixers this by allowing each hospital, let's still continue learning locally, but at each cycle we collect the lumped weights of the neural networks, average them and sending it back down to older hospitals. And after a few cycles of doing this, all the hospitals would have learned from each other, removing biases without having to share any private patient data. That's the key. 
So, the ability to allow you to learn from everybody without having to share your private patient data. That's swarm learning. >> And part of the key to that privacy is blockchain, correct? I mean, you've been involved in blockchain and invented some things in blockchain, and that's part of the privacy angle, is it not? >> Yes, yes, absolutely. There are different ways of doing this kind of distributed learning, and swarm learning is one of them. Many of the other distributed learning methods require you to have some central control. Right? So, Prasad and the team and us came up together with a method where you would, instead of central control, use blockchain to do this coordination. So, there is no more a central control or coordinator, which is especially important if you want to have a truly distributed swarm-type learning system. >> Yeah, no need for a so-called trusted third party or adjudicator. Okay. Professor Schultze, let's go to you. You're essentially the use case of this swarm learning application. Tell us a little bit more about what you do and how you're applying this concept. >> I'm actually by training a physician, although I haven't seen patients for a very long time. I'm interested in bringing new technologies to what we call precision medicine. So, new technologies both from the laboratories, but also from computational sciences, and marrying them. And then I basically enable precision medicine, which is a medicine that is built on new measurements, many measurements of molecular phenotypes, as we call them. So, basically that happens on different levels, for example, the genome, or genes that are transcribed from the genome. We have thousands of such data points and we have to make sense out of this. This can only be done by computation. And as we discussed already, one of the hopes for the future is the new wave of developments in artificial intelligence and machine learning. We can make more sense out of this huge data that we generate right now in medicine. And that's what we're interested in, to find out how we can leverage these new technologies to build new diagnostics, new therapy outcome predictors. So, to know whether a patient with a disease benefits from a diagnostic or a therapy or not, and that's what we have been doing for the last 10 years. The most exciting thing I have been through in the last three, four, five years is really when HPE introduced us to swarm learning. >> Okay, and Prasad, you've been helping Professor Schultze actually implement swarm learning for specific use cases that we're going to talk about, COVID, but maybe describe a little bit about what your participation in this whole equation has been. >> Yep, thanks. As Dr. Eng Lim Goh mentioned, we have used blockchain as a backbone to implement the decentralized network. And through that we're enabling a privacy-preserved, decentralized network without having any control points, as the Professor explained in terms of precision medicine. So, one of the use cases we are looking at is looking at the blood transcriptomes. Think of it, different hospitals having a different set of transcriptome data, which they cannot share due to the privacy regulations. And now each of those hospitals will train the model depending upon their local data, which is available in that hospital, and share the learnings coming out of that training with the other hospitals. And we repeat it over several cycles to merge all these learnings and then finally get to a global model.
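To make the training cycle Dr. Goh and Prasad describe here more concrete, below is a toy sketch of the core idea: each site trains on its own data, only the model weights leave the site, and the merged weights are sent back down for the next cycle. This is a plain-NumPy illustration, not HPE's Swarm Learning library; the local-training step is stubbed out, and the simple averaging step stands in for the real merge logic.

```python
# Toy sketch of swarm-style weight averaging across hospitals (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def local_training_step(weights, hospital_id):
    """Stand-in for one cycle of training on a hospital's private data.
    Only the resulting weights are shared; the raw data never leaves the site."""
    local_update = rng.normal(scale=0.1, size=weights.shape)  # placeholder for a real gradient step
    return weights + local_update

n_hospitals = 3
n_cycles = 5
weights = np.zeros(10)  # shared starting point (a tiny "model")

for cycle in range(n_cycles):
    # 1. Each hospital trains locally on its own data.
    local_weights = [local_training_step(weights, h) for h in range(n_hospitals)]
    # 2. Only the learned weights are merged (here: a simple average).
    weights = np.mean(local_weights, axis=0)
    print(f"cycle {cycle}: merged weight norm = {np.linalg.norm(weights):.3f}")

# After a few cycles every site carries learnings from all the others,
# without any raw patient data having been exchanged.
```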
So, through that we are able to kind of get into a model which provides the performance is equal of collecting all the data into a central repository and trying to do it. And we could really think of when we are doing it, them, could be multiple kinds of challenges. So, it's good to do decentralized learning. But what about if you have a non ID type of data, what about if there is a dropout in the network connections? What about if there are some of the compute nodes we just practice or probably they're not seeing sufficient amount of data. So, that's something we tried to build into the swarm learning framework. You'll handle the scenarios of having non ID data. All in a simple word we could call it as seeing having the biases. An example, one of the hospital might see EPR trying to, look at, in terms of let's say the tumors, how many number of cases and whereas the other hospital might have very less number of cases. So, if you have kind of implemented some techniques in terms of doing the merging or providing the way that different kind of weights or the tuneable parameters to overcome these set of challenges in the swarm learning. >> And Professor Schultze, you you've applied this to really try to better understand and attack the COVID pandemic, can you describe in more detail your goals there and what you've actually done and accomplished? >> Yeah. So, we have actually really done it for COVID. The reason why we really were trying to do this already now is that we have to generate it to these transcriptomes from COVID-19 patients ourselves. And we realized that the scene of the disease is so strong and so unique compared to other infectious diseases, which we looked at in some detail that we felt that the blood transcriptome would be good starting point actually to identify patients. But maybe even more important to identify those with severe diseases. So, if you can identify them early enough that'd be basically could care for those more and find particular for those treatments and therapies. And the reason why we could do that is because we also had some other test cases done before. So, we used the time wisely with large data sets that we had collected beforehand. So, use cases learned how to apply swarm learning, and we are now basically ready to test directly with COVID-19. So, this is really a step wise process, although it was extremely fast, it was still a step wise probably we're guided by data where we had much more knowledge of which was with the black leukemia. So, we had worked on that for years. We had collected many data. So, we could really simulate a Swarm learning very nicely. And based on all the experience we get and gain together with Prasad, and his team, we could quickly then also apply that knowledge to the data that are coming now from COVID-19 patients. >> So, Dr. Goh, it really comes back to how we apply machine intelligence to the data, and this is such an interesting use case. I mean, the United States, we have 50 different States with 50 different policies, different counties. We certainly have differences around the world in terms of how people are approaching this pandemic. And so the data is very rich and varied. Let's talk about that dynamic. >> Yeah. If you, for the listeners who are or viewers who are new to this, right? The workflow could be a patient comes in, you take the blood, and you send it through an analysis? DNA is made up of genes and our genes express, right? They express in two steps the first they transcribe, then they translate. 
But what we are analyzing is the middle step, the transcription stage. And tens of thousands of these Transcripts that are produced after the analysis of the blood. The thing is, can we find in the tens of thousands of items, right? Or biomarkers a signature that tells us, this is COVID-19 and how serious it is for this patient, right? Now, the data is enormous, right? For every patient. And then you have a collection of patients in each hospitals that have a certain demographic. And then you have also a number of hospitals around. The point is how'd you get to share all that data in order to have good training of your machine? The ACO is of course a know privacy of data, right? And as such, how do you then share that information if privacy restricts you from sharing the data? So in this case, swarm learning only shares the learnings, not the private patient data. So we hope this approach would allow all the different hospitals to come together and unite sharing the learnings removing biases so that we have high accuracy in our prediction as well at the same time, maintaining privacy. >> It's really well explained. And I would like to add at least for the European union, that this is extremely important because the lawmakers have clearly stated, and the governments that even non of these crisis conditions, they will not minimize the rules of privacy laws, their compliance to privacy laws has to stay as high as outside of the pandemic. And I think there's good reasons for that, because if you lower the bond, now, why shouldn't you lower the bar in other times as well? And I think that was a wise decision, yes. If you would see in the medical field, how difficult it is to discuss, how do we share the data fast enough? I think swarm learning is really an amazing solution to that. Yeah, because this discussion is gone basically. Now we can discuss about how we do learning together. I'd rather than discussing what would be a lengthy procedure to go towards sharing. Which is very difficult under the current privacy laws. So, I think that's why I was so excited when I learned about it, the first place with faster, we can do things that otherwise are either not possible or would take forever. And for a crisis that's key. That's absolutely key. >> And is the byproduct. It's also the fact that all the data stay where they are at the different hospitals with no movement. >> Yeah. Yeah. >> Learn locally but only shared the learnings. >> Right. Very important in the EU of course, even in the United States, People are debating. What about contact tracing and using technology and cell phones, and smartphones to do that. Beside, I don't know what the situation is like in India, but nonetheless, that Dr. Goh's point about just sharing the learnings, bubbling it up, trickling just kind of metadata. If you will, back down, protects us. But at the same time, it allows us to iterate and improve the models. And so, that's a key part of this, the starting point and the conclusions that we draw from the models they're going to, and we've seen this with the pandemic, it changes daily, certainly weekly, but even daily. We continuously improve the conclusions and the models don't we. >> Absolutely, as Dr. Goh explained well. So, we could look at like they have the clinics or the testing centers, which are done in the remote places or wherever. So, we could collect those data at the time. And then if we could run it to the transcripting kind of a sequencing. 
And then as in, when we learn to these new samples and the new pieces all of them put kind of, how is that in the local data participate in the kind of use swarm learning, not just within the state or in a country could participate into an swarm learning globally to share all this data, which is coming up in a new way, and then also implement some kind of continuous learning to pick up the new signals or the new insight. It comes a bit new set of data and also help to immediately deploy it back into the inference or into the practice of identification. To do these, I think one of the key things which we have realized is to making it very simple. It's making it simple, to convert the machine learning models into the swarm learning, because we know that our subject matter experts who are going to develop these models on their choice of platforms and also making it simple to integrate into that complete machine learning workflow from the time of collecting a data pre processing and then doing the model training and then putting it onto inferencing and looking performance. So, we have kept that in the mind from the beginning while developing it. So, we kind of developed it as a plug able microservices kind of packed data with containers. So the whole library could be given it as a container with a kind of a decentralized management command controls, which would help to manage the whole swarm network and to start and initiate and children enrollment of new hospitals or the new nodes into the swarm network. At the same time, we also looked into the task of the data scientists and then try to make it very, very easy for them to take their existing models and convert that into the swarm learning frameworks so that they can convert or enabled they're models to participate in a decentralized learning. So, we have made it to a set callable rest APIs. And I could say that the example, which we are working with the Professor either in the case of leukemia or in the COVID kind of things. The noodle network model. So we're kind of using the 10 layer neural network things. We could convert that into the swarm model with less than 10 lines of code changes. So, that's kind of a simply three we are looking at so that it helps to make it quicker, faster and loaded the benefits. >> So, that's an exciting thing here Dr. Goh is, this is not an R and D project. This is something that you're actually, implementing in a real world, even though it's a narrow example, but there are so many other examples that I'd love to talk about, but please, you had a comment. >> Yes. The key thing here is that in addition to allowing privacy to be kept at each hospital, you also have the issue of different hospitals having day to day skewed differently. Right? For example, a demographics could be that this hospital is seeing a lot more younger patients, and other hospitals seeing a lot more older patients. Right? And then if you are doing machine learning in isolation then your machine might be better at recognizing the condition in the younger population, but not older and vice versa by using this approach of swarm learning, we then have the biases removed so that both hospitals can detect for younger and older population. All right. So, this is an important point, right? The ability to remove biases here. And you can see biases in the different hospitals because of the type of cases they see and the demographics. Now, the other point that's very important to reemphasize is what precise Professor Schultze mentioned, right? 
It's how we made it very easy to implement this. Right? This started out being, so, for example, each hospital has their own neural network and they're training their own. All you do is we come in, as Prasad mentioned, change a few lines of code in the original machine learning model, and now you're part of the collective swarm. This is how we wanted it to be easy to implement so that we can get, again, as I like to call it, hospitals of the world uniting. >> Yeah. >> Without sharing private patient data. So, let's double click on that, Professor. So, tell us about sort of your team, how you're taking advantage of this. Dr. Goh just described sort of the simplicity, but what are the skills that you need to take advantage of this? What's your team look like? >> Yeah. So, we actually have a team that comes from physicians to biologists, from medical experts up to computational scientists. So, we have early on invested in having these interdisciplinary research teams so that we can actually span the whole spectrum. So, people know about the medicine, they know about the biological basics, but they also know how to implement such new technology. So, they are probably a little bit spearheading that, but this is the way to go in the future. And I see that with many institutions going this way, many other groups are going into this direction, because finally medicine understands that without computational sciences, without artificial intelligence and machine learning, we will not answer those questions with this large data that we're using. So, I'm fine here. But I also realize that when we entered this project, we had basically our model, we had our machine learning model from the leukemias, and it really took almost no effort to get this into the swarm. So, we were really ready to go in very short time, but I also would like to say, and then it goes towards the bias that is existing in medicine between different places. Dr. Goh said this very nicely. One aspect is the patient and so on, but also the techniques, how we do clinical assays. We're using different robots, using different automation to do the analysis. And we actually tried to find out what the swarm learning is doing if we actually provide such a bias by the prep itself. So, I did the following thing. We know that there's different ways of measuring these transcriptomes. And we actually simulated that two hospitals had an older technology and a third hospital had a much newer technology, which is good for understanding the biology and the diseases. But the new technology is prone to not being able anymore to generate data that can be used to learn and then predict on the old technology. So, there was basically a deterioration: if you take the new one and you make a classifier model and you try old data, it doesn't work anymore. So, that's a very hard challenge. We knew it didn't work anymore in the old way. So, we pushed it into swarm learning, and the swarm recognized that and took care of it. It didn't matter anymore, because the results were even better by bringing everything together. I was astonished. I mean, it's absolutely amazing. Although we knew about this limitation on that one hospital's data, the swarm basically could deal with it. I think there's more to learn about these advantages. Yeah. And I'm very excited. It's not only transcriptomes that people do. I hope we can very soon do it with imaging. The DCNE has 10 sites in Germany connected to 10 university hospitals.
There's a lot of imaging data, CT scans and MRIs, and this is the next domain in medicine that we would like to apply it to as well. Absolutely. >> Well, it's very exciting being able to bring this to the clinical world and make it sort of an ongoing learning. I mean, you think about, again, coming back to the pandemic, initially, we thought putting people on ventilators was the right thing to do. We learned, okay, maybe not so much. The efficacy of vaccines and other therapeutics, it's going to be really interesting to see how those play out. My understanding is that the vaccines coming out of China are built for speed, get to market fast; it'll be interesting if the U.S. maybe tries to build vaccines that are more long-term effective. Let's see if that actually occurs, some of those other biases and tests that we can do. That is a very exciting, continuous use case. Isn't it? >> Yeah, I think so. Go ahead. >> Yes. In fact, we have another project ongoing to use transcriptome data and other data like metabolic and cytokine data, all these biomarkers from the blood, right? From volunteers during a clinical trial. But the whole idea is, looking at all those biomarkers, we're talking tens of thousands of them, the same thing again, and then seeing if we can streamline clinical trials by looking at that data and training with that data. So again, here you go. Right? The good thing is we have many vaccine candidates out there right now; the next long pole in the tent is the clinical trial. And we are working on that also by applying the same concept. Yeah. But for clinical trials. >> Right. And then Prasad, it seems to me that this is a good example of sort of an edge use case. Right? You've got a lot of distributed data. And I know you've spoken in the past about the edge generally, where data lives, versus moving data back to sort of the centralized model. But of course you don't want to move data if you don't have to; real-time AI inferencing at the edge. So, what are you thinking in terms of other edge use cases where swarm learning can be applied?
And now here we are today, blockchain of course, first heard about with Bitcoin and you're seeing all kinds of really interesting examples, but Dr. Goh, start with you. This is really an exciting area, and we're just getting started. Where do you see swarm learning, by let's say the end of the decade, what are the possibilities? >> Yeah. You could see this being applied in many other industries, right? So, we've spoken about life sciences, to the healthcare industry or you can't imagine the scenario of manufacturing where a decade from now you have intelligent robots that can learn from looking at across men building a product and then to replicate it, right? By just looking, listening, learning and imagine now you have multiple of these robots, all sharing their learnings across boundaries, right? Across state boundaries, across country boundaries provided you allow that without having to share what they are seeing. Right? They can share, what they have lunch learnt You see, that's the difference without having to need to share what they see and hear, they can share what they have learned across all the different robots around the world. Right? All in the community that you allow, you mentioned that time, right? That will even in manufacturing, you get intelligent robots learning from each other. >> Professor, I wonder if as a practitioner, if you could sort of lay out your vision for where you see something like this going in the future, >> I'll stay with the medical field at the moment being, although I agree, it will be in many other areas, medicine has two traditions for sure. One is learning from each other. So, that's an old tradition in medicine for thousands of years, but what's interesting and that's even more in the modern times, we have no traditional sharing data. It's just not really inherent to medicine. So, that's the mindset. So yes, learning from each other is fine, but sharing data is not so fine, but swarm learning deals with that, we can still learn from each other. We can, help each other by learning and this time by machine learning. We don't have to actually dealing with the data sharing anymore because that's that's us. So for me, it's a really perfect situation. Medicine could benefit dramatically from that because it goes along the traditions and that's very often very important to get adopted. And on top of that, what also is not seen very well in medicine is that there's a hierarchy in the sense of serious certain institutions rule others and swarm learning is exactly helping us there because it democratizes, onboarding everybody. And even if you're not sort of a small entity or a small institutional or small hospital, you could become remembering the swarm and you will become as a member important. And there is no no central institution that actually rules everything. But this democratization, I really laugh, I have to say, >> Pasad, we'll give you the final word. I mean, your job is very helping to apply these technologies to solve problems. what's your vision or for this. >> Yeah. I think Professor mentioned about one of the very key points to use saying that democratization of BI I'd like to just expand a little bit. So, it has a very profound application. So, Dr. Goh, mentioned about, the manufacturing. 
So, if you look at any field, it could be health science, manufacturing, autonomous vehicles, all of those move toward the democratization. And also, using the blockchain, we are kind of building a framework to incentivize the people who own certain sets of data to bring the insight from that data to the table for doing swarm learning. So, we could build some kind of alternative monetization framework or an incentivization framework on top of the existing swarm learning stuff, which we are working on, to enable the participants to bring their data or insight and then get rewarded accordingly, kind of a thing. So, eventually, we could completely make this a democratized AI, with a complete monetization and incentivization system built into it that allows all the parties to seamlessly work together. >> So, I think this is just a fabulous example of, we hear a lot in the media about the tech backlash, breaking up big tech, how tech has disrupted our lives. But this is a great example of tech for good and responsible tech for good. And if you think about this pandemic, if there's one thing that it's taught us, it's that disruptions outside of technology, pandemics or natural disasters or climate change, et cetera, are probably going to be the bigger disruptions than technology, yet technology is going to help us solve those problems and address those disruptions. Gentlemen, I really appreciate you coming on theCUBE and sharing this great example, and wish you best of luck in your endeavors. >> Thank you. >> Thank you. >> Thank you for having me. >> And thank you everybody for watching. This is theCUBE's coverage of HPE Discover 2020, the virtual experience. We'll be right back after this short break. (upbeat music)
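Prasad's earlier point about converting an existing model with less than ten lines of code changes can be illustrated with a hypothetical wrapper. The callback name and its API below are invented for this sketch, not the actual HPE Swarm Learning interface, but they show the shape of the change: the local training loop stays the same, and a merge hook runs at the end of each cycle.

```python
# Hypothetical illustration of wrapping an existing training loop for swarm-style merging.
# SwarmMerger and exchange_with_peers are invented names, not a real HPE API.
import numpy as np

def exchange_with_peers(weights):
    """Placeholder for the decentralized exchange (blockchain-coordinated in the
    real system). Here it just averages a list that contains only the local weights."""
    peer_weights = [weights]  # in reality: weights gathered from the other nodes
    return np.mean(peer_weights, axis=0)

class SwarmMerger:
    """Tiny callback that merges weights with peers after each epoch."""
    def on_epoch_end(self, weights):
        return exchange_with_peers(weights)

def train_locally(weights):
    """Stand-in for the data scientist's existing, unchanged training code."""
    return weights + np.random.default_rng().normal(scale=0.05, size=weights.shape)

# The "few lines of code" change: create the callback and call it each epoch.
merger = SwarmMerger()
weights = np.zeros(10)
for epoch in range(3):
    weights = train_locally(weights)        # existing code
    weights = merger.on_epoch_end(weights)  # added line: merge with the swarm
```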
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Prasad | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
Joachim Schultze | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
China | LOCATION | 0.99+ |
Schultze | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
Singapore | LOCATION | 0.99+ |
United States | LOCATION | 0.99+ |
10 sites | QUANTITY | 0.99+ |
Prasad Shastry | PERSON | 0.99+ |
10 layer | QUANTITY | 0.99+ |
10 university hospitals | QUANTITY | 0.99+ |
COVID-19 | OTHER | 0.99+ |
Goh | PERSON | 0.99+ |
50 different policies | QUANTITY | 0.99+ |
two hospitals | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
two steps | QUANTITY | 0.99+ |
Krishna Prasad Shastry | PERSON | 0.99+ |
pandemic | EVENT | 0.99+ |
thousands of years | QUANTITY | 0.99+ |
Eng Lim Goh | PERSON | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
ACO | ORGANIZATION | 0.99+ |
DCNE | ORGANIZATION | 0.99+ |
European union | ORGANIZATION | 0.99+ |
each hospitals | QUANTITY | 0.99+ |
less than 10 lines | QUANTITY | 0.99+ |
both hospitals | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Rachel Grimes | PERSON | 0.99+ |
each | QUANTITY | 0.99+ |
three guests | QUANTITY | 0.99+ |
each cycle | QUANTITY | 0.99+ |
third hospital | QUANTITY | 0.99+ |
each hospital | QUANTITY | 0.98+ |
four | QUANTITY | 0.98+ |
30 years ago | DATE | 0.98+ |
India Advanced Development Center | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
tens of thousands | QUANTITY | 0.98+ |
fourth time zone | QUANTITY | 0.98+ |
three | QUANTITY | 0.98+ |
one aspect | QUANTITY | 0.97+ |
EU | LOCATION | 0.96+ |
five years | QUANTITY | 0.96+ |
2020 | DATE | 0.96+ |
today | DATE | 0.96+ |
Dr. | PERSON | 0.95+ |
Pasad | PERSON | 0.95+ |
Swami Sivasubramanian, AWS | AWS Summit Online 2020
>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hello everyone, welcome to this special CUBE interview. We are here at theCUBE Virtual covering AWS Summit Virtual Online. These are Amazon's Summits that they normally do all around the world; they're doing them now virtually. We are here in the Palo Alto COVID-19 quarantine crew getting all the interviews here with a special guest, Vice President of Machine Learning, we have Swami, CUBE Alumni, who's been involved in not only the machine learning, but all of the major activity around AWS around how machine learning's evolved, and all the services around machine learning workflows, from Transcribe, Rekognition, you name it. Swami, you've been at the helm for many years, and we've also chatted about that before. Welcome to the virtual CUBE covering AWS Summit. >> Hey, pleasure to be here, John. >> Great to see you. I know times are tough. Everything okay at Amazon? You guys are certainly cloud scaled, not too unfamiliar with working remotely. You do a lot of travel, but what's it like now for you guys right now? >> We're actually doing well. We have been, I mean, many of us are working hard to make sure we continue to serve our customers, even from their own sites, and yeah, we had taken measures to prepare, and we are confident that we will be able to meet customer demands for capacity during this time. So we're also helping customers to react quickly and nimbly to current challenges, yeah. Various examples from amazing startups working in this area to reorganize themselves to serve customers. We can talk about that later. >> Large scale, you guys have done a great job, and it's been fun watching and chronicling the journey of AWS as it now goes to a whole 'nother level, with the post-pandemic world we're expecting even more surge in everything from VPNs, workspaces, you name it, and all these workloads are going to be under a lot of pressure to do more and more value. You've been at the heart of one of the key areas, which is the tooling and the scale around machine learning workflows. And this is where customers are really trying to figure out what are the adequate tools? How do my teams effectively deploy machine learning? Because now, more than ever, the data is going to start flowing in as virtualization, if you will, of life, is happening. We're going to be in a hybrid world with life. We're going to be online most of the time. And I think COVID-19 has proven that this new trajectory of virtualization, virtual work, applications are going to have to flex, and adjust, and scale, and be reinvented. This is a key thing. What's going on with machine learning, what's new? Tell us what are you guys doing right now. >> Yeah, I see now, in AWS, we offer the broadest-- (poor audio capture obscures speech) All the way from, like, expert practitioners, we offer our frameworks and infrastructure layer support for all popular frameworks, from like TensorFlow, Apache MXNet, and PyTorch as well, (poor audio capture obscures speech) custom chips like Inferentia.
And then, for aspiring ML developers, who want to build their own custom machine learning models, we're actually building, we offer SageMaker, which is our end-to-end machine learning service that makes it easy for customers to be able to build, train, tune, and debug machine learning models, and it is one of our fastest growing machine learning services, and many startups and enterprises are starting to standardize their machine learning building on it. And then, the final tier is geared towards actually application developers, who did not want to go into model-building, just want an easy API to build capabilities to transcribe, run voice recognition, and so forth. And I wanted to talk about one of the new capabilities we are about to launch, enterprise search called Kendra, and-- >> So actually, so just from a news standpoint, that's GA now, that's being announced at the Summit. >> Yeah. >> That was a big hit at re:Invent, Kendra. >> Yeah. >> A lot of buzz! It's available. >> Yep, so I'm excited to say that Kendra is our new machine learning powered, highly accurate enterprise search service that has been made generally available. And if you look at what Kendra is, we have actually reimagined the traditional enterprise search service, which has historically been an underserved market segment, so to speak. If you look at it, on the public search, on the web search front, it is a relatively well-served area, whereas the enterprise search has been an area where data in enterprise, there are a huge amount of data silos, that is spread in file systems, SharePoint, or Salesforce, or various other areas. And deploying a traditional search index has always that even simple persons like when there's an ID desk open or when what is the security policy, or so forth. These kind of things have been historically, people have to find within an enterprise, let alone if I'm actually in a material science company or so forth like what 3M was trying to do. Enable collaboration of researchers spread across the world, to search their experiment archives and so forth. It has been super hard for them to be able to things, and this is one of those areas where Kendra has enabled the new, of course, where Kendra is a deep learning powered search service for enterprises, which breaks down data silos, and collects actually data across various things all the way from S3, or file system, or SharePoint, and various other data sources, and uses state-of-art NLP techniques to be able to actually index them, and then, you can query using natural language queries such as like when there's my ID desk-scoping, and the answer, it won't just give you a bunch of random, right? It'll tell you it opens at 8:30 a.m. in the morning. >> Yeah. >> Or what is the credit card cashback returns for my corporate credit card? It won't give you like a long list of links related to it. Instead it'll give you answer to be 2%. So it's that much highly accurate. (poor audio capture obscures speech) >> People who have been in the enterprise search or data business know how hard this is. And it is super, it's been a super hard problem, the old in the old guard models because databases were limiting to schemas and whatnot. Now, you have a data-driven world, and this becomes interesting. 
I think the big takeaway I took away from Kendra was not only the new kind of discovery navigation that's possible, in terms of low latency, getting relevant content, but it's really the under-the-covers impact, and I think I'd like to get your perspective on this because this has been an active conversation inside the community, in cloud scale, which is data silos have been a problem. People have had built these data silos, and they really talk about breaking them down but it's really again hard, there's legacy problems, and well, applications that are tied to them. How do I break my silos down? Or how do I leverage either silos? So I think you guys really solve a problem here around data silos and scale. >> Yeah. >> So talk about the data silos. And then, I'm going to follow up and get your take on the kind of size of of data, megabytes, petabytes, I mean, talk about data silos, and the scale behind it. >> Perfect, so if you look at actually how to set up something like a Kendra search cluster, even as simple as from your Management Console in the AWS, you'll be able to point Kendra to various data sources, such as Amazon S3, or SharePoint, and Salesforce, and various others. And say, these are kind of data I want to index. And Kendra automatically pulls in this data, index these using its deep learning and NLP models, and then, automatically builds a corpus. Then, I, as in user of the search index, can actually start querying it using natural language, and don't have to worry where it comes from, and Kendra takes care of things like access control, and it uses finely-tuned machine learning algorithms under the hood to understand the context of natural language query and return the most relevant. I'll give a real-world example of some of the field customers who are using Kendra. For instance, if you take a look at 3M, 3M is using Kendra to support search, support its material science R&D by enabling natural language search of their expansive repositories of past research documents that may be relevant to a new product. Imagine what this does to a company like 3M. Instead of researchers who are spread around the world, repeating the same experiments on material research over and over again, now, their engineers and researchers will allow everybody to quickly search through documents. And they can innovate faster instead of trying to literally reinvent the wheel all the time. So it is better acceleration to the market. Even we are in this situation, one of the interesting work that you might be interested in is the Semantic Scholar team at Allen Institute for AI, recently opened up what is a repository of scientific research called COVID-19 Open Research Dataset. These are expert research articles. (poor audio capture obscures speech) And now, the index is using Kendra, and it helps scientists, academics, and technologists to quickly find information in a sea of scientific literature. So you can even ask questions like, "Hey, how different is convalescent plasma "treatment compared to a vaccine?" And various in that question and Kendra automatically understand the context, and gets the summary answer to these questions for the customers, so. And this is one of the things where when we talk about breaking the data silos, it takes care of getting back the data, and putting it in a central location. Understanding the context behind each of these documents, and then, being able to also then, quickly answer the queries of customers using simple query natural language as well. 
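As a rough illustration of the workflow Swami just described -- point Kendra at a data source from the console or API, let it index, then ask natural-language questions -- here is a short boto3 sketch. The bucket name, IAM role ARNs, and index name are placeholders, and the exact parameters and response fields should be checked against the current Kendra documentation before use.

```python
# Sketch of creating a Kendra index over an S3 data source and querying it with boto3.
# Role ARNs, bucket, and names are placeholders; verify details against the Kendra docs.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# 1. Create the index (Kendra provisions and manages the underlying capacity).
index = kendra.create_index(
    Name="enterprise-knowledge",
    RoleArn="arn:aws:iam::123456789012:role/KendraIndexRole",  # placeholder role
)
index_id = index["Id"]

# 2. Point Kendra at a data silo -- here an S3 bucket of internal documents.
kendra.create_data_source(
    IndexId=index_id,
    Name="policy-docs",
    Type="S3",
    RoleArn="arn:aws:iam::123456789012:role/KendraDataSourceRole",  # placeholder role
    Configuration={"S3Configuration": {"BucketName": "my-internal-docs"}},
)

# 3. Once the index is active and the data source has synced, ask a
#    natural-language question instead of a keyword query.
response = kendra.query(IndexId=index_id, QueryText="When does the IT help desk open?")
for item in response.get("ResultItems", []):
    print(item.get("Type"), "-", item.get("DocumentExcerpt", {}).get("Text", ""))
```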
>> So what's the scale? Talk about the scale behind this. What's the scale numbers? What are you guys seeing? I see you guys always do a good job, I've run a great announcement, and then following up with general availability, which means I know you've got some customers using it. What are we talking about in terms of scales? Petabytes, can you give some insight into the kind of data scale you're talking about here? >> So the nice thing about Kendra is it is easily linearly scalable. So I, as a developer, I can keep adding more and more data, and that is it linearly scales to whatever scale our customers want. So and that is one of the underpinnings of Kendra search engine. So this is where even if you see like customers like PricewaterhouseCoopers is using Kendra to power its regulatory application to help customers search through regulatory information quickly and easily. So instead of sifting through hundreds of pages of documents manually to answer certain questions, now, Kendra allows them to answer natural language question. I'll give another example, which is speaks to the scale. One is Baker Tilly, a leading advisory, tax, and assurance firm, is using Kendra to index documents. Compared to a traditional SharePoint-based full-text search, now, they are using Kendra to quickly search product manuals and so forth. And they're able to get answers up to 10x faster. Look at that kind of impact what Kendra has, being able to index vast amount of data, with in a linearly scalable fashion, keep adding in the order of terabytes, and keep going, and being able to search 10x faster than traditional, I mean traditional keyword search based algorithm is actually a big deal for these customers. They're very excited. >> So what is the main problem that you're solving with Kendra? What's the use case? If I'm the customer, what's my problem that you're solving? Is it just response to data, whether it's a call center, or support, or is it an app? I mean, what's the main focus that you guys came out? What was the vector of problem that you're solving here? >> So when we talked to customers before we started building Kendra, one of the things that constantly came back for us was that they wanted the same ease of use and the ability to search the world wide web, and customers like us to search within an enterprise. So it can be in the form of like an internal search to search within like the HR documents or internal wiki pages and so forth, or it can be to search like internal technical documentation or the public documentation to help the contact centers or is it the external search in terms of customer support and so forth, or to enable collaboration by sharing knowledge base and so forth. So each of these is really dissected. Why is this a problem? Why is it not being solved by traditional search techniques? One of the things that became obvious was that unlike the external world where the web pages are linked that easily with very well-defined structure, internal world is very messy within an enterprise. The documents are put in a SharePoint, or in a file system, or in a storage service like S3, or on naturally, tell-stores or Box, or various other things. And what really customers wanted was a system which knows how to actually pull the data from various these data silos, still understand the access control behind this, and enforce them in the search. 
And then, understand the real data behind it, and not just do simple keyword search, so that we can build a remarkable search service that really answers queries in natural language. And this has been the core premise of Kendra, and this is what has started to resonate with our customers. I talked about some of the other examples, even in areas like contact centers. For instance, Magellan Health is using Kendra for its contact centers. They are able to seamlessly tie member, provider, or client-specific information with other internal information about health care for their agents, so that they can quickly resolve the call. Or it can be used internally for internal search as well. So, a very satisfied client. >> So you guys took the basic concept of discovery navigation, which is the consumer web, find what you're looking for as fast as possible, but also took advantage of building intelligence around understanding all the nuances and configuration, schemas, access, under the covers, and allowing things to be discovered in a new way. So you basically make data discoverable, and then provide an interface. >> Yeah. >> For discovery and navigation. So it's a broad use case, then. >> Right, yeah, that sounds about right, except we did one thing more. We didn't just do discovery and make it easy for people to find the information when they are sifting through terabytes or hundreds of terabytes of internal documentation. One of the other things that happens is that throwing a bunch of hundreds of links to these documents is not good enough. For instance, if I'm trying to find out, say, what the ALS marker is in a health care setting, for a particular research project, then I don't want to sift through thousands of links. Instead, I want to be able to correctly pinpoint which document contains the answer. So that is the final element, which is to really understand the context behind each and every document using natural language processing techniques, so that you not only discover the information that is relevant, but you also get highly accurate, precise answers to some of your questions.
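As a rough sketch of those two ideas, enforcing per-user access control on a query and surfacing a pinpointed answer rather than a pile of links, a Kendra query might look something like the following. The index ID and the user token are hypothetical placeholders, and the access-control behavior assumes token-based user context has been configured on the index.

```python
# A minimal sketch, assuming an existing Kendra index with token-based user
# context configured. The index ID and token below are hypothetical.
import boto3

kendra = boto3.client("kendra")

response = kendra.query(
    IndexId="example-index-id",                         # placeholder
    QueryText="What is the ALS marker?",
    UserContext={"Token": "jwt-for-this-researcher"},   # results filtered by what this user may see
)

# Prefer Kendra's extracted answers over plain document hits.
answers = [r for r in response["ResultItems"] if r["Type"] == "ANSWER"]
if answers:
    # The pinpointed passage that contains the answer.
    print(answers[0]["DocumentExcerpt"]["Text"])
else:
    # Fall back to the top-ranked documents.
    for r in response["ResultItems"][:3]:
        print(r["DocumentTitle"]["Text"])
```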
>> Well, that's great stuff, big fan. I really liked the announcement of Kendra. Congratulations on the GA of that. We'll make some room on our CUBE Virtual site for your team to put more Kendra information up. I think it's fascinating. I think that's going to be the beginning of how the world changes, certainly with voice activation and API-based applications integrating this in. I just see a ton of activity, and this is going to have a lot of headroom. So appreciate that. The other thing I want to get to while I have you here is the news around augmented artificial intelligence that has been brought out as well. >> Yeah. >> So the GA of that is out. You guys are GA-ing everything, which is right on track with your cadence of AWS launches, I'd say. What is this about? Give us the headline story. What's the main thing to pay attention to in the GA? What have you learned? What's the learning curve, what are the results? >> So the augmented artificial intelligence service, we call it A2I, the Amazon A2I service, we made generally available. And it is a very unique service that makes it easy for developers to augment machine learning predictions with human intelligence. And this, historically, has been a very challenging problem. So let me take a step back and explain the general idea behind it. If you look at any developer building a machine learning application, there are use cases where even 99% accuracy in machine learning is not going to be good enough to directly use that result as the response back to the customer. Instead, you want to be able to augment it with human intelligence: hey, if my machine learning model is returning a prediction and its confidence is less than 70%, I would like it to be augmented with human review. A2I makes it super easy for developers to use a human reviewer workflow that comes in between. So I can send it either to a public pool using Mechanical Turk, where we have more than 500,000 Turkers, or I can use a private workforce or a vendor workforce. And A2I seamlessly integrates with Amazon Textract, Rekognition, or SageMaker custom models. So now, for instance, the NHS has integrated A2I with Textract, and they are building document processing workflows. In the areas where the machine learning model's confidence is not as high, they are able to augment that with their human reviewer workflows, so that they can build highly accurate document processing workflows as well. So this, we think, is a powerful capability.
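That confidence-threshold pattern can be sketched roughly as follows with the AWS SDK for Python. The flow definition ARN, loop name, and document fields are hypothetical placeholders, and a real workflow would also collect the reviewers' output asynchronously once the human loop completes.

```python
# A minimal sketch of routing low-confidence predictions to human review via
# Amazon A2I. The flow definition ARN and payload fields are hypothetical.
import json
import uuid
import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

def handle_prediction(document_id: str, label: str, confidence: float):
    if confidence >= 0.70:
        return label  # confident enough to return the ML result directly

    # Below the threshold: start a human loop instead of answering directly.
    a2i.start_human_loop(
        HumanLoopName=f"review-{uuid.uuid4()}",  # must be unique per loop
        FlowDefinitionArn="arn:aws:sagemaker:us-east-1:123456789012:flow-definition/doc-review",  # placeholder
        HumanLoopInput={
            "InputContent": json.dumps(
                {"documentId": document_id, "predictedLabel": label, "confidence": confidence}
            )
        },
    )
    return None  # the final answer arrives later, once reviewers finish

# Example: a prediction at 62% confidence gets escalated to human reviewers.
handle_prediction("invoice-0042", "total_amount: 1250.00", 0.62)
```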
>> So this really kind of gets to what I've been feeling in some of the stuff we've worked with you guys on, on our machine learning piece. It's hard for companies to hire machine learning people. This has been a real challenge. So I like this idea of human augmentation, because humans and machines have to have that relationship, and if you build good abstraction layers, and you abstract away the complexity, which is what you guys do, and that's the vision of cloud, then you're going to need to have that relationship solidified. So at what point do you think we're going to be ready for theCUBE team, or any customer that doesn't have, or can't find, a machine learning person? Or may not want to pay the wages that are required? I mean, it's hard to find a machine learning engineer, and when does the data science piece come in, with visualization, the spectrum from pure computer science, math, and machine learning guru to full end-user productivity? Machine learning is where you guys are doing a lot of work. Can you just share your opinion on that evolution of where we are? Because people want to get to the point where they don't have to hire machine learning folks. >> Yeah. >> And have that kind of support too. >> If you look at the history of technology, I have always believed that many of these highly disruptive technologies started out available only to experts, and then they quickly go through cycles where they become almost commonplace. I'll give an example with something totally outside the IT space. Let's take photography. More than probably 150 years ago, the first professional camera was invented, and it took like three to four years of practice to actually take a really good picture, and there were only very few expert photographers in the world. Then fast forward to where we are now: even my five-year-old daughter takes very good portraits and gives them as a gift to her mom for Mother's Day. If you look at Instagram, everyone is a professional photographer. I think the same thing will happen in machine learning too. Compared to 2012, when there were very few deep learning experts who could really build these amazing applications, we are now starting to see tens of thousands of customers using machine learning in production in AWS, not just proofs of concept but in production. And this number is rapidly growing. I'll give one example. Internally, to help our entire company transform and make machine learning a natural part of the business, six years ago Amazon started a Machine Learning University. And since then, we have been training all our engineers to take machine learning courses in this ML University, and a year ago we actually made this coursework available through our Training and Certification platform in AWS, and within 48 hours more than 100,000 people registered. Think about it, that's like a big all-time record. That's why I always like to believe that developers are eager to learn, they're very hungry to pick up new technology, and I wouldn't be surprised if, four or five years from now, machine learning kind of becomes a normal feature of the app, the same way databases are, and it becomes less special. If that day comes, then I would see my job as done. >> Well, you've got a lot more work to do, because I know from the conversations I've been having around this COVID-19 pandemic that there's general consensus and validation that the future got pulled forward, and what used to be an inside-industry conversation around machine learning and some of the visions you're talking about has been accelerated by the pace of the new cloud scale. Now that people recognize it, and are experiencing virtual firsthand, globally, there is going to be an acceleration of applications. So we believe there's going to be a Cambrian explosion of new applications that have to reimagine and reinvent some of the plumbing and abstractions in cloud to deliver new experiences, because the expectations have changed. And I think one of the things we're seeing is that machine learning combined with cloud scale will create a whole new trajectory, a Cambrian explosion of applications. So this has kind of been validated. What's your reaction to that? I mean, do you see something similar? What are some of the things that you're seeing as we come into this world, this virtualization of our lives? It's every vertical, it's not one vertical anymore that's maybe moving faster. I think everyone sees the impact. They see where the gaps are in this new reality. What are your thoughts? >> Yeah, if you look at the history of machine learning, specifically around deep learning, the technology is really not new, especially because the early deep learning papers were written almost 30 years ago. So why didn't we see deep learning take off sooner? It is because, historically, deep learning technologies have been hungry for compute resources and hungry for huge amounts of data, and the abstractions were not easy enough. As you rightfully pointed out, cloud has come in and made it super easy to get access to huge amounts of compute and huge amounts of data, and you can literally pay by the hour or by the minute.
And with new tools being made available to developers, like SageMaker and all the AI services we are talking about now, there is an explosion of options that are easy for developers to use, and we are starting to see a huge amount of innovation pop up. And unlike traditional disruptive technologies, which you usually see take hold in one or two industry segments first, then cross the chasm and go mainstream, with machine learning we are starting to see traction in almost every industry segment, all the way from the financial sector, where fintech companies like Intuit are using it to forecast call center volume and do personalization, to the health care sector, where companies like Aidoc are using computer vision to assist radiologists. And then we are seeing it in areas like the public sector: NASA has partnered with AWS to use machine learning for anomaly detection, algorithms to detect solar flares in space. And yeah, examples are plenty. Machine learning has become so commonplace that almost every industry segment and every CIO is already looking at how they can reimagine, reinvent, and improve their customer experience with machine learning, in the same way Amazon asked itself eight or 10 years ago. So, very exciting. >> Well, you guys continue to do the work, and I agree it's not just machine learning by itself, it's the integration and the perfect storm of elements that have come together at this time. Although it's pretty disastrous, I think ultimately we're going to come out of this on a whole 'nother trajectory. Creativity is going to emerge. You're going to start seeing those builders thinking, "Okay, hey, I've got to get out there. I can deliver, solve the gaps that were exposed, solve the problems, create new expectations, new experiences." I think it's going to be great for software developers. I think it's going to change the computer science field, and it's really bringing in the lifestyle aspect of things. Applications have to have a recognition of this convergence, this virtualization of life. >> Yeah. >> The applications are going to have to have that. And remember, virtualization helped Amazon form the cloud. Maybe we'll get some new kinds of virtualization, Swami. (laughs) Thanks for coming on, really appreciate it. Always great to see you. Thanks for taking the time. >> Okay, great to see you, John, also. Thank you, thanks again. >> We're here with Swami, the Vice President of Machine Learning at AWS. He's been on before, a theCUBE alumni. Really sharing his insights around what we see around this virtualization, this online event at the Amazon Summit, which we're covering with the Virtual CUBE. But as we go forward, more important than ever, the data is going to be important: searching it, finding it, and more importantly, having humans use it to build applications. So theCUBE coverage continues. For AWS Summit Virtual Online, I'm John Furrier, thanks for watching. (enlightening music)
Sriram Raghavan, IBM Research AI | IBM Think 2020
(upbeat music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, it's theCUBE! Covering IBM Think. Brought to you by IBM. >> Hi everybody, this is Dave Vellante of theCUBE, and you're watching our coverage of the IBM digital event experience. A multi-day program, tons of content, and it's our pleasure to be able to bring in experts, practitioners, customers, and partners. Sriram Raghavan is here. He's the Vice President of AI at IBM Research. Sriram, thanks so much for coming on theCUBE. >> Thank you, pleasure to be here. >> I love this title, I love the role. It's great work if you're qualified for it. (laughs) So, tell us a little bit about your role and your background. You came out of Stanford, you had the pleasure, I'm sure, of hanging out in South San Jose at the Almaden labs. Beautiful place to create. But give us a little background. >> Absolutely, yeah. So, let me start, maybe go backwards in time. What do I do now? My role is responsible for AI strategy, planning, and execution in IBM Research across our global footprint, all our labs worldwide and their working areas. I also work closely with the commercial parts of IBM, our Software and Services businesses, that take the AI innovation from IBM Research to market. That's the second part of what I do. And where did I begin life in IBM? As you said, I began life at our Almaden Research Center up in San Jose, up in the hills. Beautiful, I had a view. I still think it's the best view I've had. I spent many years there doing work at the intersection of AI, large-scale data management, and NLP. Went back to India, where I was running the India lab for a few years, and now I'm back here in New York running AI strategy. >> That's awesome. Let's talk a little bit about AI, the landscape of AI. IBM has always made it clear that you're not doing consumer AI. You're really trying to help businesses. But how do you look at the landscape? >> So, it's a great question. It's one of those things that, you know, we constantly measure ourselves on, and our partners tell us. You've probably heard us talk about the cloud journey. Look, barely 20% of workloads are in the cloud, 80% are still waiting. For AI, that number is even less. But, of course, it varies. Depending on who you ask, AI adoption is anywhere from 4% to 30%. But I think it's more important to look at where this is going directionally, and it's very, very clear: adoption is rising, and the value is getting better appreciated. But more important, I think, there is broader recognition, awareness, and investment, knowing that to get value out of AI, you start with where AI begins, which is data. So the story around having a solid enterprise information architecture as the base on which to drive AI is starting to happen. As the investments in the data platform, in making your data ready for AI, start to come through, we're definitely seeing that adoption. And the second imperative that businesses look for, obviously, is the skills. The tools and the skills to scale AI. It can't take me months and months to go build an AI model, I've got to accelerate it, and then comes operationalizing. But this is happening, and the upward trajectory is very, very clear. >> We've been talking a lot on theCUBE over the last couple of years about how the innovation engine of our industry is no longer Moore's Law, it's a combination of data. You just talked about data.
Applying machine intelligence to that data, being able to scale it across clouds, on-prem, wherever the data lives. So. >> Right. >> Having said that, you know, you've had a journey. You started out kind of playing "Jeopardy!", if you will. It was a very narrow use case, and you're expanding that use case. I wonder if you could talk about that journey, specifically in the context of your vision. >> Yeah. So, let me step back and say, for IBM Research AI, when I think about our strategy and vision, we think of it in two parts. One part is the evolution of the science and techniques behind AI. And you said it, right? From narrow, bespoke AI that can only do the one thing it's really trained for, and takes a large amount of data and a lot of computing power, to how do you have the techniques and the innovation for AI to learn from one use case to another? Be less data hungry, less resource hungry. Be more trustworthy and explainable. We call that the journey from narrow to broad AI. And one part of our strategy, as scientists and technologists, is the innovation to make that happen. So that's one part. But, as you said, as people involved in making AI work in the enterprise, the IBM Research AI vision would be incomplete without the second part, which is: what are the challenges in scaling and operationalizing AI? It isn't sufficient that I can tell you AI can do this; how do I make AI do this so that you get the right ROI, the investment relative to the return makes sense, and you can scale and operationalize? So we took both of these imperatives, the narrow-to-broad journey and the need to scale and operationalize, and asked, what are the things that are making it hard? The things that make scaling and operationalizing harder: data challenges, which we talked about, skills challenges, and the fact that in enterprises you have to govern and manage AI. We took that together and we think of our AI agenda in three pieces: advancing, trusting, and scaling AI. Advancing is the piece about pushing the boundary, taking AI from narrow to broad. Trusting is building AI which is trustworthy and explainable, where you can control and understand its behavior and make sense of it, and all of the technology that goes with it. And scaling AI is where we address the problems of: how do I reduce the time and cost of data prep? How do I reduce the time for model tweaking and engineering? How do I make sure that for a model you build today, when something changes in the data, I can quickly allow you to close the loop and improve the model? All of those things, think of them as day-two operations of AI. All of that is part of our scaling AI strategy. So advancing, trusting, scaling are the three big mantras around which we think about our AI. >> Yeah, so I've been doing a little work around this notion of DataOps. Essentially, you know, DevOps applied to the data and the data pipeline, and I had a great conversation recently with Inderpal Bhandari, IBM's Global Chief Data Officer, and he explained to me how, first of all, customers will tell you it's very hard to operationalize AI. He and his team took that challenge on themselves and have had some great success. And, you know, we all know the problem. It's that AI has to wait for the data. It has to wait for the data to be cleansed and wrangled. Can AI actually help with that part of the problem, compressing that? >> 100%.
In fact, the way we think of the automation and scaling story is what we call the "AI for AI" story. So, AI in service of helping you build the AI, so that you can move with speed, right? And I think of it really in three parts. It's AI for data automation, or DataOps: AI used for better discovery, better cleansing, better configuration, faster linking, quality assessment, all of that. Using AI to handle all of those data problems you had to do manually. I call it AI for data automation. The second part is using AI to automatically figure out the best model, and that's AI for data science automation: feature engineering, hyperparameter optimization, having AI do all of that work. Why should a data scientist spend weeks and months experimenting if AI can accelerate that from weeks to a matter of hours? That's data science automation. And then comes the important part, which is operations automation. Okay, I've put a model into an application. How do I monitor its behavior? If the data it's seeing is different from the data it was trained on, how do I quickly detect that? And a lot of the work from Research that became part of the Watson OpenScale offering is really addressing the operational side. So AI for data, AI for data science automation, and AI to help automate production AI is the way we break that problem up. >> So, I always like to ask folks that are deep into R&D how they ultimately translate that into commercial products and offerings. Because ultimately, you've got to make money to fund more R&D. So, can you talk a little bit about how you do that, what your focus is there? >> Yeah, so that's a great question, and I'm going to use a few examples as well. But let me say at the outset, this is a very, very close partnership. The Research part of AI and our portfolio, it's a close partnership where we're constantly both drawing problems from and building technology that goes into the offerings. So, a lot of our work, much of our work in AI automation that we were talking about, is part of Watson Studio, Watson Machine Learning, and Watson OpenScale. In fact, OpenScale came out of Research work on trusted AI, and is now a centerpiece of our Watson products. Let me give a very different example. We have a very, very strong portfolio and focus in NLP, natural language processing. And this directly goes into capabilities in Watson Assistant, which is our system for conversational and customer support, and Watson Discovery, which is about helping enterprises understand unstructured data. A great example of that is the work on Project Debater that you might have heard of, which is a grand challenge in Research about building a machine that can debate. Now, look, we weren't looking to go sell you a debating machine. But what we built as part of doing that is advances in NLP that are all making their way into Assistant and Discovery. And earlier this year we actually announced a set of capabilities around better clustering, advanced summarization, and deeper sentiment analysis. These made their way into Assistant and Discovery but were born out of research innovation in solving a grand problem like building a debating machine. That's just an example of how that journey from research to product happens.
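Looping back to the operations-automation piece Sriram describes, here is a toy sketch of a drift check: compare the distribution a feature had at training time with what the deployed model is seeing now, and flag drift so the loop can be closed and the model improved. This is a generic illustration with made-up data, not the Watson OpenScale implementation.

```python
# A toy drift check on a single numeric feature, using synthetic data.
import numpy as np
from scipy import stats

def drifted(train_values, live_values, p_threshold=0.01):
    """Two-sample Kolmogorov-Smirnov test: a small p-value suggests the distributions differ."""
    _, p_value = stats.ks_2samp(train_values, live_values)
    return p_value < p_threshold

rng = np.random.default_rng(0)
training_ages = rng.normal(40, 8, size=5_000)    # what the model was trained on
production_ages = rng.normal(48, 8, size=5_000)  # what it is seeing in production

if drifted(training_ages, production_ages):
    print("Feature drift detected: close the loop and retrain/review the model")
```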
>> Yeah, the Debater documentary, I've seen some of that. It's actually quite astounding. I don't know what you're doing there. It sounds like you're taking natural language and turning it into complex queries with data science and AI, but it's quite amazing. >> Yes, and you can see that documentary, by the way, on Channel 7 in the Think event. The documentary about how Debater happened, featuring, you know, behind-the-scenes interviews with the scientists who created it, was actually featured at the Copenhagen International Documentary Festival. I'll invite viewers to go to Channel 7 and Data and AI Tech On-Demand to take a look at that documentary. >> Yeah, you should take a look at it. It's actually quite astounding and amazing. Sriram, what are you working on these days? What kind of exciting projects, or what's your focus area today? >> Look, I think there are three imperatives that we're really focused on, and one is really the area you're talking about: NLP. NLP in the enterprise. Look, text is the language of business, right? Text is the way business is communicated, within companies, with their partners, with the entire world. So, helping machines understand language, but in an enterprise context, recognizing that data in the enterprise lives in complex documents, unstructured documents, in email, in conversations with customers. Really pushing the boundary on how all our customers and clients can make sense of this vast volume of unstructured data by pushing the advances of NLP, that's one focus area. The second focus area, we talked about trust and how important that is. We've done amazing work in monitoring and explainability, and we're really focused now on the emerging area of causality. Using causality to explain, right? The model made this prediction because of this cause; it's a beautiful way to explain. And the third big focus continues to be on automation. So NLP, trust, automation. Those are the three big focus areas for us. >> Sriram, how far do you think we can take AI? I know it's a topic of conversation, but from your perspective, deep into the research, how far can it go? And maybe how far should it go? >> Look, let me answer it this way. I think the arc of the possible is enormous. But I think we are at an inflection point, where the next wave of AI, the AI that's going to help us make this narrow-to-broad journey we talked about, is emerging. And look, the narrow-to-broad journey is not a one-week or one-year thing, we're talking about a decade of innovation. But I think we are at a point where we're going to see a wave of AI that we like to call "neuro-symbolic AI," which is AI that brings together two fundamentally different approaches to building intelligent systems. One approach to building intelligent systems is what we call knowledge driven: understand data, understand concepts, reason logically. We human beings do that, and that was really the way AI was born. The more recent last couple of decades of AI have been data driven, machine learning: give me vast volumes of data, and I'll use neural techniques, deep learning, to get value. We're at a point where we're going to bring both of them together, because you can't build trustworthy, explainable systems using only one, and you can't get away from using all of the data that you have. So, neuro-symbolic AI is, I think, going to be the linchpin of how we advance AI and make it more powerful and trustworthy. >> So, are you, like, living your childhood dream here or what?
>> Look, for me, I'm fascinated. I've always been fascinated. You can't find a technology person who hasn't dreamt of building an intelligent machine. To have a job where I can work across our worldwide set of 3,000-plus researchers, think and brainstorm on AI strategy, and then, most importantly, not to forget, as you talked about, be able to move it into our portfolios so it actually makes a difference for our clients, I think it's a dream job and a whole lot of fun. >> Well, Sriram, it was great having you on theCUBE. It's a lot of fun interviewing folks like you. I feel a little bit smarter just talking to you. So thanks so much for coming on. >> Fantastic. It's been a pleasure to be here. >> And thank you for watching, everybody. You're watching theCUBE's coverage of IBM Think 2020. This is Dave Vellante. We'll be right back after this short break. (upbeat music)