Closing Panel | Generative AI: Riding the Wave | AWS Startup Showcase S3 E1
(mellow music) >> Hello everyone, welcome to theCUBE's coverage of AWS Startup Showcase. This is the closing panel session on AI machine learning, the top startups building generative AI on AWS. It's a great panel. This is going to be the experts talking about riding the wave in generative AI. We got Ankur Mehrotra, who's the director and general manager of AI and machine learning at AWS, and Clem Delangue, co-founder and CEO of Hugging Face, and Ori Goshen, who's the co-founder and CEO of AI21 Labs. Ori from Tel Aviv dialing in, and the rest coming in here on theCUBE. Appreciate you coming on for this closing session for the Startup Showcase. >> Thanks for having us. >> Thank you for having us. >> Thank you. >> I'm super excited to have you all on. Hugging Face was recently in the news with the AWS relationship, so congratulations. Open source, open science, really driving the machine learning. And we got the AI21 Labs access to the LLMs, generating huge scale live applications, commercial applications, coming to the market, all powered by AWS. So everyone, congratulations on all your success, and thank you for headlining this panel. Let's get right into it. AWS is powering this wave here. We're seeing a lot of push here from applications. Ankur, set the table for us on the AI machine learning. It's not new, it's been goin' on for a while. The past three years have seen significant advancements, but there's been a lot of work done in AI machine learning. Now it's released to the public. Everybody's super excited and now says, "Oh, the future's here!" It's kind of been going on for a while and baking. Now it's kind of coming out. What's your view here? Let's get it started. >> Yes, thank you. So, yeah, as you may be aware, Amazon has been investing in machine learning research and development for quite some time now. And we've used machine learning to innovate and improve user experiences across different Amazon products, whether it's Alexa or Amazon.com.
But we've also brought in our expertise to extend what we are doing in the space and add more generative AI technology to our AWS products and services, starting with CodeWhisperer, which is an AWS service that we announced a few months ago; you can think of it as a coding companion as a service, which uses generative AI models underneath. And so this is a service that customers who have no machine learning expertise can just use. And we also are talking to customers, and we see a lot of excitement about generative AI, and customers who want to build these models themselves, who have the talent and the expertise and resources. For them, AWS has a number of different options and capabilities they can leverage, such as our custom silicon, such as Trainium and Inferentia, as well as distributed machine learning capabilities that we offer as part of SageMaker, which is an end-to-end machine learning development service. At the same time, many of our customers tell us that they're interested in not training and building these generative AI models from scratch, given they can be expensive and can require specialized talent and skills to build. And so for those customers, we are also making it super easy to bring in existing generative AI models into their machine learning development environment within SageMaker for them to use. So we recently announced our partnership with Hugging Face, where we are making it super easy for customers to bring in those models into their SageMaker development environment for fine-tuning and deployment. And then we are also partnering with other proprietary model providers such as AI21 and others, where we are making these generative AI models available within SageMaker for our customers to use. So our approach here is to really provide customers options and choices and help them accelerate their generative AI journey. >> Ankur, thank you for setting the table there.
Clem and Ori, I want to get your take, because riding the wave is the theme of this session, and to me, being in California, I imagine the big surf, the big waves, the big talent out there. This is like alpha geeks, alpha coders, developers are really leaning into this. You're seeing massive uptake from the smartest people. Whether they're young or around, they're coming in with their kind of surfboards, (chuckles) if you will. These early adopters, they've been on this for a while; now the waves are hitting. This is a big wave, everyone sees it. What are some of those early adopter devs doing? What are some of the use cases you're seeing right out of the gate? And what does this mean for the folks that are going to come in and get on this wave? Can you guys share your perspective on this? Because you're seeing the best talent now leaning into this. >> Yeah, absolutely. I mean, from Hugging Face's vantage point, it's not even a wave, it's a tidal wave, or maybe even the tide itself. Because actually what we are seeing is that AI and machine learning is not something that you add to your products. It's very much a new paradigm to do all technology. It's this idea that we had, in the past 15, 20 years, one way to build software and to build technology, which was writing a million lines of code, very rule-based, and then you get your product. Now what we are seeing is that every single product, every single feature, every single company is starting to adopt AI to build the next generation of technology. And that works both to make the existing use cases better, if you think of search, if you think of social networks, if you think of SaaS, but also it's creating completely new capabilities that weren't possible with the previous paradigm. Now AI can generate text, it can generate images, it can describe your image, it can do so many new things that weren't possible before. >> It's going to really make the developers really productive, right?
I mean, you're seeing the developer uptake strong, right? >> Yes, we have over 15,000 companies using Hugging Face now, and it keeps accelerating. I really think that maybe in like three, five years, there's not going to be any company not using AI. It's going to be really kind of the default to build all technology. >> Ori, weigh in on this. APIs, the cloud. Now I'm a developer, I want to have live applications, I want the commercial applications on this. What's your take? Weigh in here. >> Yeah, first, I absolutely agree. I mean, we're in the midst of a technology shift here. I think not a lot of people realize how big this is going to be. Just the number of possibilities is endless, and I think hard to imagine. And I don't think it's just the use cases. I think we can think of it as two separate categories. We'll see companies and products enhancing their offerings with these new AI capabilities, but we'll also see new companies that are AI first, that kind of reimagine certain experiences. They build something that wasn't possible before. And that's why I think it's actually extremely exciting times. And maybe more philosophically, I think now these large language models and large transformer-based models are helping us as people express our thoughts, kind of making the bridge from our thinking to a creative digital asset at a speed we've never imagined before. I can write something down and get a piece of text, or an image, or code. So I'll start by saying it's hard to imagine all the possibilities right now, but it's certainly big. And if I had to bet, I would say it's probably at least as big as the mobile revolution we've seen in the last 20 years. >> Yeah, this is the biggest. I mean, it's been compared to the Enlightenment Age. I saw the Wall Street Journal had a recent story on this. We've been saying that this is probably going to be bigger than all inflection points combined in the tech industry, given what transformation is coming.
I guess I want to ask you guys, on the early adopters, we've been hearing on these interviews and throughout the industry that there's already a set of big companies out there that have a lot of data and they're already there, they're kind of tinkering. Kind of reminds me of the old hyperscaler days where they were building their own scale, and they're eatin' glass, spittin' nails out, you know, they're hardcore. Then you got everybody else kind of saying board level, "Hey team, how do I leverage this?" How do you see those two things coming together? You got the fast followers coming in behind the early adopters. What's it like for the second wave coming in? What are those conversations for those developers like? >> I mean, I think for me, the important switch for companies is to change their mindset from being kind of like a traditional software company to being an AI or machine learning company. And that means investing, hiring machine learning engineers, machine learning scientists, infrastructure, team members who are working on how to put these models in production, team members who are able to optimize models, specialized models, customized models for the company's specific use cases. So it's really changing this mindset of how you build technology, and optimizing your company building around that. Things are moving so fast that I think now it's kind of like too late for low-hanging fruit or small adjustments. I think it's important to realize that if you want to be good at that, and if you really want to surf this wave, you need massive investments. If there are some surfers listening, with this analogy of the wave, right, when there are waves, it's not enough just to stand and make a little bit of adjustments. You need to position yourself aggressively, paddle like crazy, and that's how you get into the waves. So that's what companies, in my opinion, need to do right now.
>> Ori, what's your take on the generative models out there? We hear a lot about foundation models. What's your experience running end-to-end applications for large foundation models? Any insights you can share with the app developers out there who are looking to get in? >> Yeah, I think first of all, it starts to create an economy where it probably doesn't make sense for every company to create their own foundation models. You can basically start by using an existing foundation model, either open source or a proprietary one, and start deploying it for your needs. And then comes the second round, when you are starting the optimization process. You bootstrap, whether it's a demo, or a small feature, or introducing a new capability within your product, and then start collecting data. That data, and particularly the human feedback data, helps you to constantly improve the model, so you create this data flywheel. And I think we're now entering an era where customers have a lot of different choices of how they want to start their generative AI endeavor. And it's a good thing that there's a variety of choices. And the really amazing thing here is that every industry, any company you speak with, it could be something very traditional like industrial or financial, medical, really any company. I think people now start to imagine what the possibilities are, and seriously think about what their strategy is for adopting this generative AI technology. And I think in that sense, the foundation model actually enabled this to become scalable. So the barrier to entry became lower; now the adoption could actually accelerate. >> There's a lot of integration aspects here in this new wave that's a little bit different. Before it was very monolithic, hardcore, very brittle. A lot more integration, you see a lot more data coming together.
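Ori's data flywheel, bootstrap on an existing foundation model, collect human feedback from a deployed feature, and feed the approved examples back into the next fine-tune, can be sketched in a few lines. The record shape and the simple approved/rejected signal below are illustrative assumptions, not AI21's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str
    completion: str
    approved: bool  # human thumbs-up / thumbs-down on the model's output

def flywheel_iteration(records):
    # One turn of the flywheel: keep only human-approved pairs
    # as fine-tuning examples for the next model version.
    return [
        {"prompt": r.prompt, "completion": r.completion}
        for r in records
        if r.approved
    ]

# Feedback collected from a deployed demo or early feature.
records = [
    FeedbackRecord("Summarize our Q3 report.", "Revenue grew 12 percent.", True),
    FeedbackRecord("Summarize our Q3 report.", "I like turtles.", False),
]

dataset = flywheel_iteration(records)
print(len(dataset))  # prints: 1
```

Each deployment cycle grows the dataset, which is what makes the loop a flywheel rather than a one-shot fine-tune.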
I have to ask you guys, as developers come in and grow, I mean, when I went to college and you were a software engineer, I mean, I got a degree in computer science and software engineering, and that's all you did was code, (chuckles) you coded. Now, isn't it like everyone's a machine learning engineer at this point? Because that will ultimately be the science. So, (chuckles) you got open source, you got open software, you got the communities. Swami called you guys the GitHub of machine learning, Hugging Face is the GitHub of machine learning, mainly because that's where people are going to code. So this is essentially, machine learning is computer science. What's your reaction to that? >> Yes, my co-founder Julien at Hugging Face and I have been saying this for quite a while now, for over three years: that actually software engineering as we know it today is a subset of machine learning, instead of the other way around. People would call us crazy a few years ago when we were saying that. But now we are realizing that you can actually code with machine learning. So machine learning is generating code. And we are starting to see that every software engineer can leverage machine learning through open models, through APIs, through different technology stacks. So yeah, it's not crazy anymore to think that maybe in a few years, there's going to be more people doing AI and machine learning. However you call it, right? Maybe you'll still call them software engineers, maybe you'll call them machine learning engineers. But there might be more of these people in a couple of years than there are software engineers today. >> I bring this up as more tongue in cheek as well, because, Ankur, infrastructure as code is what made the cloud great, right? That's kind of the DevOps movement. But here the shift is so massive, there will be a game-changing philosophy around coding.
Machine learning as code, you're starting to see CodeWhisperer, you guys have had coding companions for a while on AWS. So this is a paradigm shift. How is the cloud playing into this for you guys? Because to me, I've been riffing on some interviews where it's like, okay, you got the cloud going next level. This is an example of that, where there is a DevOps-like moment happening with machine learning, whether you call it coding or whatever. It's writing code on its own. Can you guys comment on what this means on top of the cloud? What comes out of the scale? What comes out of the benefit here? >> Absolutely, so- >> Well first- >> Oh, go ahead. >> Yeah, so I think as far as scale is concerned, I think customers are really relying on cloud to make sure that the applications that they build can scale along with the needs of their business. But there's another aspect to it, which is that until a few years ago, John, what we saw was that machine learning was a data-scientist-heavy activity. There were data scientists who were taking the data and training models. And then as machine learning found its way more and more into production and actual usage, we saw MLOps become a thing, and MLOps engineers become more involved in the process. And then we now are seeing, as machine learning is being used to solve more business-critical problems, we're seeing even legal and compliance teams get involved. We are seeing business stakeholders more engaged. So, more and more, machine learning is becoming an activity that's not just performed by data scientists, but is performed by a team and a group of people with different skills. And for them, we as AWS are focused on providing the best tools and services for these different personas to be able to do their job and really complete that end-to-end machine learning story. So that's where we focus, whether it's tools related to MLOps or even tools for folks who cannot code or don't know any machine learning.
For example, we launched SageMaker Canvas as a tool last year, which is a UI-based tool that data analysts and business analysts can use to build machine learning models. So overall, the spectrum in terms of personas and who can get involved in the machine learning process is expanding, and the cloud is playing a big role in that process. >> Ori, Clem, can you guys weigh in too? 'Cause this is just another abstraction layer of scale. What's it mean for you guys as you look forward to your customers and the use cases that you're enabling? >> Yes, I think what's important is that the AI companies and providers and the cloud kind of work together. That's how you make a seamless experience and you actually reduce the barrier to entry for this technology. So that's what we've been super happy to do with AWS for the past few years. We actually announced not too long ago that we are doubling down on our partnership with AWS. We're excited to have many, many customers on our shared product, the Hugging Face deep learning container on SageMaker. And we are working really closely with the Inferentia team and the Trainium team to release some more exciting stuff in the coming weeks and coming months. So I think when you have an ecosystem and a system where AWS and the AI providers, the AI startups, can work hand in hand, it's to the benefit of the customers and the companies, because it makes it orders of magnitude easier for them to adopt this new paradigm of building technology with AI. >> Ori, this is a scale on reasoning too. The data's out there and making sense out of it, making it reason, getting comprehension, having it make decisions is next, isn't it? And you need scale for that. >> Yes. Just a comment about the infrastructure side. So I think really the purpose is to streamline and make these technologies much more accessible. And I think we'll see, I predict that we'll see in the next few years more and more tooling that makes this technology much more simple to consume.
And I think it plays a very important role. There are so many aspects, like monitoring the models and the kinds of outputs they produce, and containing and running them in a production environment. There's so much there to build on; the infrastructure side will play a very significant role. >> All right, that's awesome stuff. I'd love to change gears a little bit and get a little philosophy here around AI and how it's going to transform, if you guys don't mind. There's been a lot of conversations around, on theCUBE here as well as in some industry areas, where it's like, okay, all the heavy lifting is automated away with machine learning and AI, the complexity, there's some efficiencies, it's horizontal and scalable across all industries. Ankur, good point there. Everyone's going to use it for something. And a lot of stuff gets brought to the table with large language models and other things. But the key ingredient will be proprietary data or human input, or some sort of AI whisperer kind of role, or prompt engineering, people are saying. So with that being said, some are saying it's automating intelligence. And that creativity will be unleashed from this. If the heavy lifting goes away and AI can fill the void, that shifts the value to the intellect or the input. And so that means data's got to come together, interact, fuse, and understand each other. This is kind of new. I mean, old-school AI was, okay, got a big model, I provisioned it for a long time, very expensive. Now it's all free-flowing. Can you guys comment on where you see this going with this freeform, data flowing everywhere, heavy lifting, and then specialization? >> Yeah, I think- >> Go ahead. >> Yeah, I think, so what we are seeing with these large language models or generative models is that they're really good at creating stuff. But I think it's also important to recognize their limitations. They're not as good at reasoning and logic.
And I think now we're seeing great enthusiasm, which is justified. And the next phase would be how to make these systems more reliable. How to inject more reasoning capabilities into these models, or augment them with other mechanisms that actually perform more reasoning, so we can achieve more reliable results. And we can count on these models to perform for critical tasks, whether it's medical tasks, legal tasks. We really want to kind of offload a lot of the intelligence to these systems. And then we'll have to make sure these are reliable, we'll have to make sure we get some sort of explainability, that we can understand the process behind the generated results that we received. So I think this is kind of the next phase of systems that are based on these generative models. >> Clem, what's your view on this? Obviously you're at an open community, open source has been around, it's been a great track record, proven model. I'm assuming creativity's going to come out of the woodwork, and if we can automate open source contribution, and relationships, and onboarding more developers, there's going to be an unleashing of creativity. >> Yes, it's been so exciting on the open source front. We all know BERT, BLOOM, GPT-J, T5, Stable Diffusion; the previous and the current generation of open source models that are on Hugging Face. It has been accelerating in the past few months. So I'm super excited about ControlNet right now, which is really having a lot of impact; it's kind of like a way to control the generation of images. Super excited about Flan-UL2, which is a new model that has been recently released and is open source. So yeah, it's really fun to see the ecosystem coming together. Open source has been the basis for traditional software, with open source programming languages, of course, but also all the great open source that we've gotten over the years.
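The augmentation Ori describes, pairing a fluent but fallible generative model with a mechanism that actually performs the reasoning, boils down to a generate-then-verify loop. The toy generator and arithmetic checker below are illustrative stand-ins, not any vendor's actual system:

```python
def unreliable_generator(question, attempt):
    # Stand-in for a large language model: fluent, but not always right.
    guesses = {"17 * 23": ["400", "391", "391"]}
    return guesses[question][attempt]

def checker(question, answer):
    # Deterministic verifier: re-derive the result with real arithmetic.
    left, right = question.split(" * ")
    return int(answer) == int(left) * int(right)

def reliable_answer(question, max_attempts=3):
    # Keep sampling until the verifier accepts an answer.
    for attempt in range(max_attempts):
        answer = unreliable_generator(question, attempt)
        if checker(question, answer):
            return answer, attempt + 1
    raise ValueError("no verified answer within the attempt budget")

answer, attempts = reliable_answer("17 * 23")
print(answer, attempts)  # prints: 391 2
```

The design point is that reliability comes from the external checker, not from trusting the generator; in practice the checker might be a retrieval system, a symbolic solver, or a unit test rather than arithmetic.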
So we're happy to see that the same thing is happening for machine learning and AI, and hopefully it can help a lot of companies reduce a little bit the barrier to entry. So yeah, it's going to be exciting to see how it evolves in the next few years in that respect. >> I think the developer productivity angle that's been talked about a lot in the industry will be accelerated significantly. I think security will be enhanced by this. I think in general, applications are going to transform at a radical rate, an accelerated, incredible rate. So I think it's not a big wave, it's the water, right? I mean, (chuckles) it's the new thing. My final question for you guys, if you don't mind, I'd love to get each of you to answer the question I'm going to ask you, which is, a lot of conversations around data. Data infrastructure's obviously involved in this. And the common thread that I'm hearing is that every company that looks at this is asking themselves, if we don't rebuild our company, start thinking about rebuilding our business model around AI, we might be dinosaurs, we might be extinct. And it reminds me of that scene in Moneyball when, at the end, it's like, if we're not building the model around your model, every company will be out of business. What's your advice to companies out there that are having those kinds of moments where it's like, okay, this is real, this is next gen, this is happening. I better start thinking and putting into motion plans to refactor my business, 'cause it's happening, business transformation is happening on the cloud. This kind of puts an exclamation point on it, with AI as a next step function. Big increase in value. So it's an opportunity for leaders. Ankur, we'll start with you. What's your advice for folks out there thinking about this? Do they put their toe in the water? Do they jump right into the deep end? What's your advice?
>> Yeah, John, so we talk to a lot of customers, and customers are excited about what's happening in the space, but they often ask us, "Hey, where do we start?" So we always advise our customers to do a lot of proofs of concept, understand where they can drive the biggest ROI. And then also leverage existing tools and services to move fast and scale, and try not to reinvent the wheel where it doesn't need to be. That's basically our advice to customers. >> Got it. Ori, what's your advice to folks who are scratching their head going, "I better jump in here. "How do I get started?" What's your advice? >> So I actually think that you need to think about it really economically. Both on the opportunity side and the challenges. So there's a lot of opportunity for many companies to actually gain revenue upside by building these new generative features and capabilities. On the other hand, of course, incorporating these capabilities could probably affect the COGS. So I think we really need to think carefully about both of these sides, and also understand clearly if this is a project or an effort towards cost reduction, where the ROI is pretty clear, or a revenue amplifier, where there's, again, a lot of different opportunities. So I think once you think about this in a structured way, and map the different initiatives, then it's probably a good way to start, and a good way to start thinking about these endeavors. >> Awesome. Clem, what's your take on this? What's your advice, folks out there? >> Yes, all of these are very good advice already. Something that you said before, John, that I disagreed with a little bit: a lot of people are talking about the data moat and proprietary data. Actually, when you look at some of the organizations that have been building the best models, they don't have specialized or unique access to data. So I'm not sure that's so important today.
I think what's important for companies, and it's been the same for the previous generation of technology, is their ability to build better technology faster than others. And in this new paradigm, that means being able to build machine learning faster than others, and better. So that's how, in my opinion, you should approach this. And kind of like, how can you evolve your company, your teams, your products, so that you are able in the long run to build machine learning better and faster than your competitors? And if you manage to put yourself in that situation, then that's when you'll be able to differentiate yourself, to really kind of be impactful and get results. That's really hard to do. It's something really different, because machine learning and AI is a different paradigm than traditional software. So this is going to be challenging, but I think if you manage to nail that, then the future is going to be very interesting for your company. >> That's a great point. Thanks for calling that out. I think this all reminds me of the cloud days early on. If you went to the cloud early, you took advantage of it when the pandemic hit. If you weren't native in the cloud, you got hamstrung by that, you were flat-footed. So just get in there. (laughs) Get in the cloud, get into AI, you're going to be good. Thanks for calling that out. Final parting comments, what's your most exciting thing going on right now for you guys? Ori, Clem, what's the most exciting thing on your plate right now that you'd like to share with folks? >> I mean, for me it's just the diversity of use cases and really creative ways of companies leveraging this technology. Every day I speak with about two, three customers, and I'm continuously being surprised by the creative ideas. And the future of what can be achieved here is really exciting. And also I'm amazed by the pace that things move at in this industry. It's just, there's not a dull moment. So, definitely exciting times.
>> Clem, what are you most excited about right now? >> For me, it's all the new open source models that have been released in the past few weeks, and that will keep being released in the next few weeks. I'm also super excited about more and more companies getting into this capability of chaining different models and different APIs. I think that's a very, very interesting development, because it creates new capabilities, new possibilities, new functionalities that weren't possible before. You can plug an API with an open source embedding model, with a transcription model. So that's also very exciting. This capability of having more interoperable machine learning will also, I think, open a lot of interesting things in the future. >> Clem, congratulations on your success at Hugging Face. Please pass that on to your team. Ori, congratulations on your success, and continue on; it's just day one. I mean, it's just the beginning. It's not even scratching the surface. Ankur, I'll give you the last word. What are you excited for at AWS? More cloud goodness coming here with AI. Give you the final word. >> Yeah, so as both Clem and Ori said, I think the research in the space is moving really, really fast, so we are excited about that. But we are also excited to see the speed at which enterprises and other AWS customers are applying machine learning to solve real business problems, and the kind of results they're seeing. So when they come back to us and tell us the kind of improvement in their business metrics and overall customer experience that they're driving, and they're seeing real business results, that's what keeps us going and inspires us to continue inventing on their behalf. >> Gentlemen, thank you so much for this awesome high-impact panel. Ankur, Clem, Ori, congratulations on all your success. We'll see you around. Thanks for coming on. Generative AI, riding the wave; it's a tidal wave, it's the water, it's all happening. All great stuff.
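The chaining of models and APIs Clem points to reduces to function composition: the output of one model becomes the input of the next. The three stages below are stubbed stand-ins for real models (transcription, embedding, routing), assumed purely for illustration:

```python
def transcribe(audio):
    # Stand-in for a speech-to-text model.
    return "customer asks about refund policy"

def embed(text):
    # Stand-in for an embedding model: here, a trivial bag-of-words vector.
    vocab = ["refund", "policy", "shipping"]
    return [text.count(word) for word in vocab]

def route(vector):
    # Stand-in for a downstream API call keyed on the embedding.
    return "billing-team" if vector[0] > 0 else "general-queue"

def chain(*stages):
    # Compose any number of model/API stages into one pipeline.
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

pipeline = chain(transcribe, embed, route)
print(pipeline(b"raw-audio-bytes"))  # prints: billing-team
```

Swapping any stage for a hosted API or an open source model leaves the composition unchanged, which is what makes interoperable machine learning attractive.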
This is season three, episode one of AWS Startup Showcase closing panel. This is the AI ML episode, the top startups building generative AI on AWS. I'm John Furrier, your host. Thanks for watching. (mellow music)
Jay Marshall, Neural Magic | AWS Startup Showcase S3E1
(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. It's great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company focuses. It's a feature presentation for the "Startup Showcase," and the machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing massive shift. This is really truly the beginning of the next-gen machine learning AI trend. It's really seeing ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phone must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, my background, we've seen for the last 20-plus years. 
Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. Got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new kind of persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generational models or foundational models, as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the things, the benefits of OpenAI we saw, was not only is it open source, then you got also other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there's also new landscape kind of maps coming out. You got the generative AI, and you got the foundational models, large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models. 
So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, again, which have gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-hundred-billion-parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today.
So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to expensive proprietary hardware accelerators to have to run them. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, AI curious, right? Like, they want to do it, but they don't really know where to start, and so for there, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about do I have a hardware accelerator available? How do I integrate that into my application stack? If I don't already know how to build this into my infrastructure, does my ITOps teams, do they know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born in AI companies. So I think you have this kind of cloud kind of vibe going on. You have lift and shift was a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there a existing set of things? People will throw on this hat, and then what's the difference between AI native and kind of providing it to existing stuff? 
'Cause a lot of people take some of these tools and apply them to existing stuff, and it's not really a lift and shift, but it's kind of like bolting on AI to something else, versus starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think where I'd probably pull back to is kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned and hyperparameterization and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is not only is that a neural network team, people who have been focused on that, but also, if you look at some of the DataOps lately, AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing, they've kind of been in this, right? They've, like, been experiencing that. >> No doubt. I think it's funny the data lake concept, right? And you got data oceans now.
Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shift. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even ones that say they can, like, you still have to do implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use it for?" I think it's kind of similar here. And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, pre-processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether you're AI native or still making your way on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming and, you know, around data meshes was talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model.
Now, I want to really optimize that model. And then on the runtime side when you want to deploy it, you know, we run that optimized model. And so that's where we're able to help. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying that, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think with what's happened here, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard 'cause there just isn't anybody or any teams out there that literally do everything from, "Here's my blank database, and I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model to delivery piece. >> Well, you guys are obviously a featured company in this space. Talk about the expertise. A lot of companies are like, I won't say faking it till they make it. You can't really fake security. You can't really fake AI, right? So there's going to be a learning curve. There'll be a few startups who'll come out of the gate early. You guys are one of 'em. Talk about what you guys have as expertise as a company, why you're successful, and what problems do you solve for customers?
>> No, appreciate that. Yeah, we actually, we love to tell the story of our founder, Nir Shavit. So he's a 20-year professor at MIT. Actually, he was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s, and the impetus for this whole technology, he has a great talk on YouTube about it, is that through his work there, he kind of realized that the way neural networks encode and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms actually was not analogous to how the human brain actually works. So on one side, we're building neural networks, and we're trying to emulate neurons. We're not really executing them that way. So with our team, one of the co-founders also being ex-MIT, that was kind of the birth of asking why can't we leverage this super-performance CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? So there are a lot of amazing talks and stuff that show kind of the magic, pardon the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer quote where it's a large retailer, and it's a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So from a one-to-one perspective to two-to-one, business leaders usually like that math, right?
So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So what we're trying to do, if I just dumb it down, is better, faster, cheaper, but from a commodity perspective, that's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about in our CUBE conversation prior to this interview, and which delivers ML models through software so the software and the hardware are decoupled, right? >> Yep. >> Which is going to drive probably a cost advantage. It's also probably easier from a deployment standpoint. Can you share the benefits? Is it on the cost side? Is it more about deployment? What are the benefits of DeepSparse when you guys decouple the software from the hardware on the ML models? >> No, you actually hit 'em both, 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I'm doing Java development, WebSphere, WebLogic, Tomcat open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app in a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features.
So when you think about that kind of a world where you have everything from real-time inferencing to kind of after-hours batch processing inferencing, the fact that you can auto-scale that hardware up and down and it's CPU-based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost, and many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even like the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there's some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect, Neural Magic, what problem do I have or when do I know I need you guys? When do I call you in and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely.
So I think in general, any neural network, you know, the process I mentioned before called sparsification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparsified. So I think if it's a deep-learning neural network type model, if you're trying to get AI into production, you have cost concerns and even performance concerns. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really in this world right now, if it's a neural network, it's something where you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale, performant, deployable solution for deep learning models. >> So neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category, you hear about transformers a lot, or I mentioned YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. If you have a data science team or even developers, some even regular, I used to call myself a nine to five developer 'cause I worked in the enterprise, right? So like, hey, we found a new open source framework, you know, I used to use Spring back in the day and I had to go figure it out. There's developers that are pulling these models down and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety, that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute.
I have all these videos, like all these transcripts, I have all these people that we've interviewed, CUBE alumni, and I say to my team, "Let's AI-ify, sparsify theCUBE." >> Yep. >> What do I do? I mean, do I just like, my developers got to get involved and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made theCUBE example up, but we do have a lot of data. We have large data models and we have people and connect to the internet and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea, or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing from kind of the raw data to kind of prepare it into the format that say a YOLO would actually use, or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing path where we would create that into the file format that BERT, the machine learning model, would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. So transfer learning is a very popular method of doing training with existing models.
So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing with to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto DeepSparse runtime so that now you can ask that model whatever questions, or I should say pass it text; you're not going to ask it those kinds of questions like you would ChatGPT, although we can do that too. But you're going to pass text through the BERT model and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the AI bot, you know, from our previous guests. >> Well, and I will tell you, using that as an example. So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, I may have mentioned earlier, we've been able to sparsify that by over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call and we'll actually have an interactive Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software-delivered AI, a topic we chatted about on theCUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John, I will tell you what's interesting.
And again, folks don't always think of it this way, you know, the AI magical goodness is now getting pushed in the middle where the developers and IT are operating. And again, that paradigm, although it seems obvious for some folks, again, if you've been around for 20 years, all that plumbing is a thing, right? And so what we basically help with is when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer-friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo where we actually publish pre-optimized or pre-sparsified models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting, DevOps was infrastructure as code and we had, last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you John for teeing me up. So I'm going to try to put this in like, you know, the vein of like an AWS, like main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure.
But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variants are very compelling, both cost performance-wise and also obviously with Edge. And they wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got to work, and, you know, it's a hard problem to solve 'cause the instruction set for ARM is very different than the instruction set for x86, and our deep tensor column technology has to be able to work with that lower level instruction spec. But working really hard, the engineering team's been at it, and we are happy to announce here at the "AWS Startup Showcase" that the DeepSparse inference runtime now has support for AWS Graviton instances. So it's no longer just x86, it is also ARM, and that obviously also opens up the door to Edge and further out the stack, so that optimize-once, run-anywhere story is something we're now going to open up. So it is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We get a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AIOps now with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much.
So yeah, join us at neuralmagic.com, you know, part of what we didn't spend a lot of time on here, our optimization tools, we are doing all of that in the open source. It's called SparseML, and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS marketplace. So push button, deploy, come try us out and reach out to us on neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development at Neural Magic here, talking about performant, cost-effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
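The "optimize, then deploy" flow Jay describes throughout the interview, sparsify a model and then run it on a runtime that skips the zeroed weights, can be sketched in a few lines of Python. This is a deliberately simplified toy illustration of magnitude pruning, not the SparseML or DeepSparse API; every function name and the tiny "model" below are invented for demonstration.

```python
# Toy sketch of sparsification followed by sparse inference.
# Illustrative only; not Neural Magic's actual APIs or algorithms.

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Real sparsification works tensor by tensor and typically retrains
    between pruning steps to recover accuracy; this shows only the core
    idea: every zeroed weight is work a sparse runtime can skip.
    """
    n_prune = int(len(weights) * sparsity)
    # indices of the n_prune smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = set(order[:n_prune])
    return [0.0 if i in pruned else w for i, w in enumerate(weights)]

def sparse_dot(weights, inputs):
    """Inference step that skips zeros, the source of the CPU speedup."""
    return sum(w * x for w, x in zip(weights, inputs) if w != 0.0)

dense = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
sparse = prune_by_magnitude(dense, sparsity=0.5)
print(sparse)                          # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
print(sparse_dot(sparse, [1.0] * 6))   # roughly 0.6, with 3 multiplies instead of 6
```

At 50% sparsity, half the multiply-accumulates disappear; the production systems discussed above push well past that, which is why a CPU with large per-core caches can keep up with hardware accelerators on pruned models.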
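The pay-by-the-minute point Jay makes about auto-scaled CPU inference versus a GPU instance held for a whole batch window reduces to simple arithmetic. The rates and workload numbers below are made-up placeholders for illustration, not actual AWS pricing.

```python
import math

def cpu_cost(busy_minutes, rate_per_minute):
    """Auto-scaled CPU fleet: pay only for the minutes it is actually busy."""
    return busy_minutes * rate_per_minute

def gpu_cost(window_hours, rate_per_hour):
    """Dedicated GPU instance: pay for every provisioned hour, busy or idle."""
    return math.ceil(window_hours) * rate_per_hour

# 90 busy minutes spread across an 8-hour after-hours batch window,
# with placeholder rates chosen only to show the shape of the comparison:
print(f"CPU: ${cpu_cost(90, 0.004):.2f}")  # CPU: $0.36
print(f"GPU: ${gpu_cost(8, 1.20):.2f}")    # GPU: $9.60
```

The gap comes entirely from idle time: the GPU bills for the whole window while the CPU fleet scales to zero between bursts, which is the operational argument made in the interview.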
Oracle Aspires to be the Netflix of AI | Cube Conversation
(gentle music playing) >> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from the recent ChatGPT to BERT, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there? At theCUBE, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's autonomous database. We've talked a lot about MySQL HeatWave's in-database machine learning, and AI pushed into Oracle's business apps. Oracle tends to lead in AI, but not by competing as a direct AI player per se, but rather by embedding AI and machine learning into its portfolio to enhance its existing products, and bring new services and offerings to the market. Now, last October at CloudWorld in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors. And they announced an investment, a pretty significant investment, to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure, and build out Oracle's infrastructure for enterprise-scale AI. Now, Oracle CEO Safra Catz said something to the effect of: this alliance is going to help customers across industries, from healthcare, manufacturing, telecoms, and financial services, to overcome the multitude of challenges they face. Presumably she was talking about just driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show. >> Thank you. Thanks for having me. >> You're very welcome. So first let's talk about Oracle's path to AI.
I mean, it's the hottest topic going. For years you've been incorporating machine learning into your products and services. You know, could you tell us what you've been working on, how you got here? >> So, great question. So as you mentioned, I think most of the original foray into AI was on embedding AI and using AI to make our applications and databases better. So inside MySQL HeatWave, inside our autonomous database, we've been driving AI. And all of our SaaS apps, of course. So Fusion, our large enterprise business suite for HR applications and CRM and ERP and whatnot, has AI built inside it. Most recently, NetSuite, our small and medium business SaaS suite, started using AI for things like automated invoice processing and whatnot. And most recently, over the last, I would say, two years, we've started exposing and bringing these capabilities into the broader OCI, Oracle Cloud Infrastructure. So the developers, and ISVs and customers can start using our AI capabilities to make their apps better and their experiences and business workflows better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform, so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud. >> So when I look at the market, I see companies out there like DataRobot or C3 AI, there's maybe a half dozen that sort of pop up on my radar anyway. And my premise has always been that most customers, they don't want to become AI experts, they want to buy applications and have AI embedded, or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI? >> So it's a great question. So I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise.
I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire a large and expensive data science team to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions of Lego blocks that you can use to build anything that you want. But the majority of people, like me or like my kids, they want the Lego Death Star kit or the Lego Eiffel Tower thing. They want a thing that just works, and it's very easy to use. And it's still Lego blocks, you still need to build some things together, but it just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. So whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chat bots into the line of business applications, integrating these things into your analytics dashboard, even all the way to, we have a new platform piece we call ML applications that allows you to take a machine learning model, and scale it for the thousands of tenants that you may have. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and then deploy it across your thousands of customers or your thousands of regions or facilities, then it becomes messy. So this is where we spend our time, making it easy to take these things into production in the context of your business application or your business use case that you're interested in right now. >> So you mentioned chat bots, and I want to talk about ChatGPT, but my question here is different, we'll talk about that in a minute.
So when you think about these chat bots, the ones that are conversational, my experience anyway is they're just meh, they're not that great. But the ones that actually work pretty well, they have a conditioned response. Now they're limited, but they say, which of the following is your problem? And then if one of the following is your problem, you can maybe solve your problem. But this is clearly a trend and it helps the line of business. How does Oracle think about these use cases for your customers? >> Yeah, so I think the key here is exactly what you said. It's about task completion. The general purpose bots are interesting, but as you said, they're still limited. They're getting much better, and I'm sure we'll talk about ChatGPT. But I think what most enterprises want is around task completion. I want to automate my expense report processing. So today inside Oracle we have a chat bot where I submit my expenses, the bot asks a couple of questions, I answer them, and then I'm done. Like, I don't need to go to our fancy application and manually submit an expense report. I do this via Slack. And the key is around managing the right expectations of what this thing is capable of doing. Like, I have a story from, I think, five, six years ago, when the technology was much more limited than it is today. Well, one of the telco providers I was working with wanted to roll out a chat bot that does real-time translation. So it was for a support center, for the call centers. And what they wanted to do is, hey, we have English-speaking employees, whatever, 24/7; if somebody's calling, and the native tongue is different, like Hebrew in my case, or Chinese or whatnot, then we'll give them a chat bot that they will interact with, and it will translate this on the fly, and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks. It's not good. I hate it, I hate your company, I hate your support.
And what they've done is they've changed the narrative. Instead of, you go to a support center, and you assume you're going to talk to a human, and instead you get a crappy chat bot, they're like, hey, if you want to talk to a Hebrew-speaking person, there's a four hour wait, please leave your phone and we'll call you back. Or you can try a new amazing Hebrew-speaking AI-powered bot and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out. Press one to try it out. And the feedback, even though it was the exact same technology, was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they hated a few weeks earlier on. So I think the key lesson that I picked up from this experience is, it's all about setting the right expectations, and working around the right use case. If you are replacing a human, the bar is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is the focus that we are taking, picking up the tasks that people want to accomplish, or that the enterprise wants to accomplish for the customers, for the employees. And using chat bots to make those specific ones better, rather than, hey, this is going to replace all humans everywhere, and just be better than that. >> Yeah, I mean, to the point you mentioned expense reports. I'm in a Twitter thread and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI-driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there.
>> So over the last, I would say, two or three years, we've developed a full suite of machine learning and AI services for, I would say, pretty much every use case that you would expect right now: from applying natural language processing to understanding customer support tickets or social media or whatnot, to computer vision platforms or computer vision services that can understand and detect objects, and count objects on shelves or detect cracks in the pipe or defective parts, all the way to speech services that can actually transcribe human speech. And most recently we've launched a new document AI service. That can actually look at unstructured documents like receipts or invoices or government IDs, or even proprietary documents, loan applications, student application forms, patient intake forms and whatnot, and completely automate them using AI. So if you want to do one of the things that are, I would say, common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go and use, easily customize with their own data. You don't need to be an expert in deep learning or large language models. You could just use our AutoML capabilities, and build your own version of the models. Just go ahead and use them. And if you do have proprietary complex scenarios that you need to custom-build from scratch, we actually have the most cost-effective platform for that. So we have the OCI data science platform, as well as the built-in machine learning platform inside the databases, inside the Oracle database and MySQL HeatWave, that allow data scientists, Python-wielding people that actually like to build and tweak and control and improve, to have everything that they need to go and build the machine learning models from scratch, deploy them, monitor and manage them at scale in production environments. And most of it is brand new.
So we did not have these technologies four or five years ago, and we've started building them, and they're now at enterprise scale over the last couple of years. >> So what are some of the state-of-the-art tools that AI specialists and data scientists need if they're going to go out and develop these new models? >> So I think it's on three layers. I think there's an infrastructure layer where the Nvidias of the world come into play. For some of these things, you want massively efficient, massively scaled infrastructure in place. So we are the most cost-effective and performant large-scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s. These are the new super powerful GPUs for large language model training. So we have that covered for you in case you need this, 'cause you want to build these ginormous things. You need a data science platform, a platform where you can open a Python notebook, and just use all these fancy open source frameworks and create the models that you want, and then click on a button and deploy it. And it infinitely scales wherever you need it. And in many cases you just need the, what I call the applied AI services. You need the Lego sets, the Lego Death Star, the Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services of, like, again, understanding images or documents, all the way to solving particular business problems. So an anomaly detection service, a demand forecasting service, those will be the equivalent of these Lego sets. So if this is the business problem that you're looking to solve, we have services out there where you can bring your data, call an API, train a model, get the model and use it in your production environment.
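The applied-AI "Lego set" shape described here, bring your data, call an API, train a model, then use it, can be illustrated with a local stand-in. This is a minimal sketch, not any vendor's actual SDK: the detector below uses a simple z-score rule where a hosted anomaly detection service would put far more sophisticated models behind the same train-then-detect call pattern.

```python
from statistics import mean, stdev

class ToyAnomalyDetector:
    """Illustrative stand-in for a hosted anomaly detection service.

    The two-call shape (train, then detect) mirrors the managed-service
    pattern; the model itself is deliberately trivial: flag any point more
    than `threshold` standard deviations from the training mean.
    """

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.mu = None
        self.sigma = None

    def train(self, values):
        self.mu = mean(values)
        self.sigma = stdev(values)

    def detect(self, values):
        # Return the points that fall outside the learned band.
        return [v for v in values
                if abs(v - self.mu) > self.threshold * self.sigma]

detector = ToyAnomalyDetector(threshold=3.0)
detector.train([10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7])
print(detector.detect([10.1, 9.9, 14.5]))  # → [14.5]
```

The point of the hosted version is that this call shape stays fixed while the modeling underneath improves, without the caller changing code.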
So wherever you want to play, all the way into embedding this thing inside the applications, obviously, wherever you want to play, we have the tools for you to go and engage, from infrastructure to SaaS at the top, and everything in the middle. >> So when you think about the data pipeline, and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era, if you will, I want to focus on two: developers and data scientists. So the developers, they hate dealing with infrastructure and they've got to deal with infrastructure. Now they're being asked to secure the infrastructure; they just want to write code. And the data scientists, they're spending all their time trying to figure out, okay, what's the data quality? And they're wrangling data and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change? Are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest sets of scars that I have on my back from building these platforms in other companies is exactly that. Every persona had a set of tools, and these tools didn't talk to each other, and the handoff was painful. And most of the machine learning things evaporate or die on the floor because of this problem. It's very rare that they are unsuccessful because the algorithm wasn't good enough. In most cases it's somebody builds something, and then you can't take it to production, you can't integrate it into your business application. You can't take the data out, train, create an endpoint and integrate it back; it's too painful. So the way we are approaching this is focused on this problem exactly.
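The painful handoff being described, a model built by one persona that another persona can't consume, is exactly what a single model store is meant to remove. Below is a toy sketch of that idea; the class and method names are invented for this illustration and are not Oracle's actual platform API.

```python
class ModelRegistry:
    """Toy single model store: one publish/resolve path for every persona.

    A data scientist publishes under a name and version; a developer or an
    application resolves by the same key, with no bespoke handoff in between.
    """

    def __init__(self):
        self._models = {}

    def publish(self, name, version, model, metadata=None):
        self._models[(name, version)] = {
            "model": model,
            "metadata": metadata or {},
        }

    def resolve(self, name, version):
        # Consumer side: look the model up by name, never by file path or email.
        return self._models[(name, version)]["model"]

registry = ModelRegistry()
# Data scientist side: publish a (stub) scoring function.
registry.publish("churn-scorer", "1.0", lambda features: 0.42,
                 metadata={"owner": "ds-team"})
# Developer side: resolve and call it from the application.
scorer = registry.resolve("churn-scorer", "1.0")
print(scorer({"tenure_months": 12}))  # → 0.42
```

A real store adds versioned artifacts, access control, and deployment endpoints, but the collaboration win is this shared lookup path.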
We have a single set of tools where, if you publish a model as a data scientist, then developers, and even business analysts that are sitting inside a business application, would be able to consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building, and borrowing a word that the Cerner folks used, and I really liked it, building insight highways to make it easier to bring these insights into where you need them inside applications, both inside our applications, inside our SaaS applications, but also inside custom third-party and even first-party applications. And this is where a lot of our focus goes, just because we have dealt with so much pain doing this inside our own SaaS that we now have built the tools, and we're making them available for others, to make this process of building a machine-learning-driven insight in your app easier. And it's not just the model development, and it's not just the deployment, it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes from the app, and creating this feedback loop in a more efficient way. And that's our focus area. Exactly this problem. >> Well, thank you for that. So, last week we had our Supercloud 2 event, and I had Juan Loaiza on, and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so. But the truth is, if you think about the Oracle database, it never met a hardware platform that it didn't like. So in that sense it's open. So, but my point is, a big part of machine learning and AI is driven by open source tools, frameworks. What's your open source strategy? What do you support from an open source standpoint?
>> So I'm a strong believer that you don't actually know, nobody knows, where the next leapfrog or the next industry-shifting innovation in AI is going to come from. If you looked six months ago, nobody foresaw DALL-E, the magical text-to-image generation, and the explosion it brought into art and design types of experiences. If you looked six weeks ago, I don't think anybody foresaw ChatGPT, and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows; if anybody claims that they know where the innovation is going to come from in a year or two, let alone in five or 10, they're just wrong or lying. So our strategy for Oracle is, I call this the Netflix of AI. So if you think about Netflix, they produced a bunch of high quality shows on their own. A few years ago it was House of Cards. Last month my wife and I binge-watched Ginny & Georgia. But they also curated a lot of shows that they found around the world and brought them to their customers. So it started with things like Seinfeld or Friends, and most recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix bought in, and they bought it as-is and they gave it the Netflix value. So you have captioning and you have the ability to speed up the movie and you have it inside your app, and you can download it and watch it offline and everything, but nobody at Netflix was involved in the production of those first seasons. Now if these things hit and they're great, then the third season or the fourth season will get the full Netflix production value, high budget, high value location shooting or whatever. But you as a customer, you don't care whether the producer and director and screenplay writer is a Netflix employee or is somebody else's employee. It is fulfilled by Netflix.
I believe that we will become, or we are looking to become, the Netflix of AI. We are building a bunch of AI in a bunch of places where we think it's important and we have some competitive advantage, like healthcare with the Cerner partnership or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfillment by Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle. But we don't have to be building the data science and the models for everything. So this means, both open source, we recently announced a partnership with Anaconda, the leading provider of Python distribution in the data science ecosystem, where we are doing a joint strategic partnership of bringing all the goodness to Oracle customers; as well as being in the process of doing the same with Nvidia and all those software libraries, not just the hardware, both for stuff like Triton, but also for healthcare-specific stuff; as well as other ISVs, other leading AI ISVs that we are in the process of partnering with, to get their stuff into OCI and into Oracle, so that you can truly consume the best AI hardware and the best AI software in the world on Oracle. 'Cause that is what I believe our customers would want: the ability to choose from any open source engine, and honestly from any ISV type of solution that is AI-powered, and they want to use it in their experiences. >> So you mentioned ChatGPT. I want to talk about some of the innovations that are coming. As an AI expert, you see ChatGPT, on the one hand, I'm sure you weren't surprised. On the other hand, maybe the reaction in the market and the hype is somewhat surprising. You know, they say that we tend to over-hype things in the early stages and under-hype them long term; you kind of used the internet as an example. What's your take on that premise? >> So.
I think that this type of technology is going to be an inflection point in how software is being developed. I truly believe this. I think this is an internet-style moment, and the way software interfaces and software applications are being developed will dramatically change over the next year, two, or three because of this type of technology. I think there will be industries that will be shifted. I think education is a good example. I saw this thing open on my son's laptop. So I think education is going to be transformed. The design industry, like images or whatever, it's already been transformed. But I think that for mass adoption, like beyond the hype, beyond the peak of inflated expectations, if I'm using Gartner terminology, I think certain things need to go and happen. One is this thing needs to become more reliable. So right now it is a complete black box that sometimes produces magic, and sometimes produces just nonsense. And it needs to have better explainability and better lineage as to, how did you get to this answer? 'Cause I think enterprises are going to really care about the things that they surface with the customers or use internally. So I think that is one thing that's going to come out. And the other thing that's going to come out is, I think there are going to come industry-specific large language models, or industry-specific ChatGPTs. Something like how OpenAI did Copilot for writing code. I think we will start seeing this type of app solving for specific business problems: understanding contracts, understanding healthcare, writing doctor's notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. And I think that would become the sweet spot of this thing. There will be companies, whether it's OpenAI or Microsoft or Google or hopefully Oracle, that will use this type of technology to solve for specific, very high value business needs. And I think this will change how interfaces happen.
So going back to your expense report: the world of, I'm going to go into an app, and I'm going to click on seven buttons in order to get some job done, that world is gone. I'm going to say, hey, please do this and that, and I expect an answer to come out. I've seen a recent demo about marketing and sales. So a customer sends an email saying they're interested in something, and then a ChatGPT-powered thing just produces the answer. I think this is how the world is going to evolve. Like, yes, there's a ton of hype, yes, it looks like magic, and right now it is magic, but it's not yet productive for most enterprise scenarios. But in the next 6, 12, 24 months, this will start getting more dependable, and it's going to change how these industries are being managed. Like, I think it's an internet-level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we work. Instead of accessing the data center through APIs, we're going to access it through natural language processing, and that opens up technology to a huge audience. Last question; it's a two part question. The first part is what you guys are working on for the future, but the second part of the question is: we've got data scientists and developers in our audience. They love the new shiny toy. So give us a little glimpse of what you're working on in the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there are two main things that we're doing. One is around healthcare. With a recent acquisition, we are spending a significant effort around revolutionizing healthcare with AI. Of course many scenarios, from patient care using computer vision and cameras, through automating and making insurance claims better, to research and pharma. We are making the best models, from leading organizations and internal, available for hospitals and researchers and insurance providers everywhere.
And we truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. We want to, if you have a business problem that you want to apply AI to solve, we want to be your platform. Like, you could use others if you want to build everything complicated and whatnot; we have a platform for that as well. But if you want to apply AI to solve a business problem, we want to be your platform. We want to be, again, the Netflix of AI kind of a thing, where we are the place for the greatest AI innovations, accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts, as well as developing a lot of the missing pieces and building blocks that we see are needed in this space to make it truly a great experience for developers and data scientists. And what would I recommend? Get started, try it out. We actually have a shameless sales plug here. We have a free tier for all of our AI services. So it typically costs you nothing. I would highly recommend to just go and try these things out. Go play with it. If you are a Python-wielding developer, and you want to try a little bit of AutoML, go down that path. If you're not even there and you're just like, hey, I have these customer feedback things and I want to try out, if I can understand them and apply AI and visualize, and do some cool stuff, we have services for that. My recommendation is, and I think ChatGPT helped us here, 'cause I see people that have nothing to do with AI, and can't even spell AI, going and trying it out. I think this is the time. Go play with these things, go play with these technologies and find what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI.
Love that and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this Cube conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)
James Labocki, Red Hat & Ruchir Puri, IBM | KubeCon + CloudNativeCon Europe 2021 - Virtual
>> From around the globe, it's theCUBE, with coverage of KubeCon + CloudNativeCon Europe 2021 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners.
Well, >> yeah, I mean, I think it's exciting. I've been covering this community since the beginning, as you guys know at theCUBE. This is the enablement moment, where the fruit is coming off the tree; you're starting to see that first wave of the enablement you mentioned. It's happening, and you can see it in the projects. So I want to get into the news here, the Konveyor community. What is this about? Can you take a minute to explain: what is the Konveyor community? >> Yeah, yeah. I think, you know, what we discovered as we were starting to work with a lot of end users and practitioners is that they kind of get tired of hearing about digital transformation from multiple vendors and from sales folks and these sorts of things. When you speak to the practitioners, they just want to know the practical implications of moving towards a more cloud native architecture. And so, you know, when you start talking to them at levels beyond just generic, I would say, marketing speak, and even beyond the business cases, the developers and sysadmins need to know what it is they need to do to their application architectures, and to the ways they're working, to successfully modernize their applications. And so the idea behind the Konveyor community was really kind of twofold. One was to help with knowledge sharing. So we started running meetups where people can come and share their knowledge of what they've done around specific topics, like strangling monoliths or carving out sidecar containers, things that they've done successfully, to help kind of move things forward. So it's really about knowledge sharing. And then the second piece we discovered was that there's really no place where you can find open source tools to help you rehost, replatform, and refactor your applications to Kubernetes.
And so that's really where we're trying to fill that void: provide open source options in that space, and kind of invite everybody else to collaborate with us on that. >> Can you give an example of some use cases of people doing this? Why? The need, the drivers? It makes sense, right: as you're growing, you have to move applications; people want applications moved to Kubernetes, I get that. But what are some of the use cases that were forcing this? >> Yeah, absolutely, for sure. I don't know if you have any you want to touch on specifically; I can add on as well. >> Yeah, I think, on some of the key use cases: James just talked about rehosting, replatforming, and refactoring, so let me put them in order and then talk about the use cases a little bit as well. I would really say rehosting, the virtual machine movement, is the first one to happen. The easier one, relatively speaking, but the first one to happen. The replatforming one is where you are now really sort of changing the stack as well, but not changing the application in any major way yet. And the hardest one happens around refactoring, which is, you know, this is when we start talking about cloud native: you take a monolithic application, these legacy applications which have been running for a long time, and try to refactor them so that you can build microservices out of them. The very first, I would say, set of clients that we are seeing at the leading edge of this are around banking and insurance. Legacy applications; banking, and obviously finance, is a large industry, and that's the first movement you start seeing, which is where the complexity of the application, in terms of some of the legacy code, is moving onto, into the cloud.
That goes for a cloud native implementation, as well as a diversity of scenarios from a rehosting and replatforming point of view. And we'll talk about some of the tools that we are putting in the community to help the users, and the developer community in many of these enterprises, move a lot of their applications into a cloud native implementation. And also, from the point of view of helping them in terms of practices, what I describe as best practices: it is not just about tools, it's about the community coming together. How do I do this? How do I do that? Actually, there are best practices that we as a community have gathered. It's about that sharing as well, James. >> Yeah, I think you hit the nail on the head. Right, so rehosting: for example, you might have an application that was delivered by an ISV that is not available containerized yet. You need to bring that over as a VM, so you can bring that into KubeVirt, you know, and actually just rehost it. Or you might have some things that you've already containerized, but they're sitting on a container orchestration layer that is no longer growing, right? The innovation has kind of left that platform, and Kubernetes has become kind of the standard container orchestration layer, the de facto standard if you want. And so you want to replatform that, and that takes massaging and transforming metadata to create the right objects, and so on and so forth. So there's a bunch of different use cases around that, that kind of fall into rehost and replatform, all the way up to refactoring. >> So just explain for the audience, and I love the three things, rehosting, replatforming, and refactoring: what's the difference between replatforming and refactoring, specifically? What's the nuance there?
>> Yeah, yeah. So a lot of times, I think, you know, obviously Amazon kind of popularized the six R's framework years ago. And if you look at what they popularized, replatform is really kind of like a lift, tinker, and shift. So maybe I'm not just taking my VM and putting it on new infrastructure; I'm going to take my VM, maybe put it on new infrastructure, but I'm going to switch my app server to, like, a lighter-weight app server or something like that at the same time. So that would fall into a replatform. Or, in another case, one of the things we're seeing pretty heavily right now is the move from Cloud Foundry to Kubernetes, for example, where people are looking to take their application and actually transform it and run it on Kubernetes, which requires you to really kind of replatform as well. >> And refactoring is what, specifically? I get the... >> Refactoring is, I think, just following on to what James said, really about the complexity of the application, which was mainly a monolithic, large application. Many of these legacy applications represent hundreds of millions of dollars of assets for these enterprises. It's about taking the code and refactoring it, in terms of dividing it into different pieces of code which can themselves be spun up as microservices. So then it truly takes advantage of the agility of development in a cloud native environment as well. It's not just about either a lift-and-shift of the VM, or a lift, tinker, and shift from a stack point of view; it's really about taking applications and dividing them so that we can spin up microservices, and it takes on the identity of development for the cloud. >> I totally get it, a great clarification. I really want to get that out there, because replatforming is really a good way to go to the cloud.
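The "strangling monoliths" pattern mentioned earlier, which this refactoring discussion builds on, can be pictured with a small routing sketch. This is an illustration only; the names and the facade class are hypothetical, not code from any Konveyor project. Requests for endpoints that have been carved out go to the new microservice, and everything else still falls through to the legacy monolith.

```python
# Toy illustration of the strangler-fig refactoring pattern: a routing
# facade sends migrated path prefixes to new microservices and routes
# everything else to the legacy monolith. Hypothetical names throughout.

def monolith_handler(path):
    return f"monolith handled {path}"

def billing_service(path):
    return f"billing microservice handled {path}"

class StranglerFacade:
    def __init__(self, fallback):
        self.fallback = fallback   # the legacy monolith handler
        self.migrated = {}         # path prefix -> carved-out microservice

    def carve_out(self, prefix, handler):
        """Register a carved-out microservice for a path prefix."""
        self.migrated[prefix] = handler

    def route(self, path):
        for prefix, handler in self.migrated.items():
            if path.startswith(prefix):
                return handler(path)
        return self.fallback(path)  # not migrated yet

facade = StranglerFacade(monolith_handler)
facade.carve_out("/billing", billing_service)

print(facade.route("/billing/invoice/42"))  # handled by the new service
print(facade.route("/orders/7"))            # still handled by the monolith
```

Over time, more prefixes move into `migrated`, until the fallback handler receives no traffic and can be retired; that gradual shrinking is the "strangling" the speakers refer to.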
Hey, I've got Red Hat open source, I'll use that; I can do this over here, and then if we use that vendor over there, use open source over there. A really good way to look at it. And I like refactoring: it's like a complete re-architecture, or refactoring if you will. So thank you for the clarification. Great, great topic. This is what practitioners think about. So I've got to ask the next question: what projects are involved in the community that you guys are working on? It seems like a really valuable service and group. Can you give an overview of what's going on in the community, specifically? >> Yeah, so right now there are five projects in the community, and they're all in, I would say, different stages of maturity. When you look at rehosting, there are two primary projects focused on that. One is called Forklift, which is about migrating your virtual machines into KubeVirt. KubeVirt is a way that you can run virtual machines orchestrated by Kubernetes. We're seeing kind of a growth in demand there, where people want to have a common orchestration for both their VMs and containers running on bare metal, and so Forklift helps you actually mass-migrate VMs into that environment. The second one on the rehosting side is called Crane. Crane is really a tool that helps you migrate applications between Kubernetes clusters. So imagine, you know, you might have persistent data in one Kubernetes cluster, and you want to migrate a namespace from one cluster to another; that's where Crane comes in and actually helps you migrate between those. On the replatforming side, we have Move2Kube, which actually came from the IBM Research team; they open-sourced that. Ruchir, you want to speak about Move2Kube?
Yeah, so so moved to cuba is really as we discuss the re platform scenario already, it is about, you know, if you are in a docker environment or hungry environment uh and you know, kubernetes has become a de facto standard now you are containerized already, but you really are actually moving into the communities based environment as the name implies, It's about moved to cuba back to me and this is one of the things we were looking at and as we were looking, talking to a lot of, a lot of users, it became evident to us that they are adapting now the de facto standard. Uh and it's a tool that helps you enable your applications in that new environment and and move to the new stuff. >>Yeah. And then the the the only other to our tackle which is uh probably like the one of the newest projects which is focused on kind of assessment and analysis of applications for container reservation. So actually looking at and understanding what the suitability is of an application for being containerized and start to be like being re factored into containers. Um and that's that's uh, you know, we have kind of engineers across both uh Red hat IBM research as well as uh some folks externally that are starting to become interested in that project as well. Um and the last, the last project is called Polaris, which is a tool to help you measure your software delivery performance. So this might seem a little odd to have in the community. But when you think about re hosting re platform and re factoring, the idea is that you want to measure your software delivery performance on top of kubernetes and that's what this does. It kind of measures the door metrics. If you're familiar with devops realization metrics. Um so things like, you know, uh you know, your change failure rate and other things on top of their to see are you actually improving as you're making these changes? >>Great. Let me ask the question for the folks watching or anyone interested, how do they get involved? 
Who can contribute? Explain how people get involved. Is there a site? Is there a Slack channel? What's out there? >> Yeah, yeah, all of the above. So we have a Slack channel: we're on slack.kubernetes.io, on the #konveyor channel. And if you go to www.konveyor.io, Konveyor with a K (not like theCUBE with a C, but with a K), they can find everything they need there. We have a governance model that's getting put in place, a contributor ladder, all the things you'd expect. We're talking to the CNCF, around the app delivery groups, to understand how we can align ourselves so that, in the future, if these projects take off, they can become sandbox projects. And, yeah, we would welcome any and all kinds of contribution and collaboration. >> For sure. I don't know if you have anything to add on that. >> I think you covered it. Just to put in a plug: we have already been having meetups, so on the best practices, you will find the community not just on konveyor.io, but as you start joining the community and those meetups. And the help you can get, whether on the Slack channel, is very helpful for the day-to-day problems that you encounter as you are taking your applications to a cloud native environment. >> So I can see this being a big interest for enterprises, as they have a mix-and-match environment, and with containers you can bring in and integrate old legacy. And that's the beautiful thing about hybrid cloud that I find fascinating right now: with all the goodness of Kubernetes and cloud native, if you've got legacy environments, it's a great fit now. So you don't have to kill the old to bring in the new.
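Polaris, described above as measuring the DORA metrics, can be pictured with a toy calculation. The data shape and function names here are hypothetical and are not Polaris's actual implementation; they only show what two of the four DORA measures (change failure rate and deployment frequency) compute.

```python
# Rough sketch of two DORA-style metrics computed from a deployment log.
# Hypothetical data shape; not how Polaris stores or computes anything.
from datetime import date

deployments = [
    {"day": date(2021, 5, 3), "caused_incident": False},
    {"day": date(2021, 5, 4), "caused_incident": True},
    {"day": date(2021, 5, 6), "caused_incident": False},
    {"day": date(2021, 5, 7), "caused_incident": False},
]

def change_failure_rate(deploys):
    """Fraction of deployments that led to a failure in production."""
    failures = sum(1 for d in deploys if d["caused_incident"])
    return failures / len(deploys)

def deployment_frequency(deploys, days_in_window):
    """Average deployments per day over the observation window."""
    return len(deploys) / days_in_window

print(change_failure_rate(deployments))      # 0.25
print(deployment_frequency(deployments, 7))  # roughly 0.57 per day
```

The point the speakers make is that tracking numbers like these before and after a rehost, replatform, or refactor is how you verify the modernization actually improved delivery, rather than just changed the plumbing.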
So this is going to be a really popular project for, you know, what I call the classic enterprise, and you both have your companies participating. So is the goal of this community to reach out to the classic enterprise with open source? Because certainly end users are coming in fast into the community, like you read about. >> The goal for the community really is to provide assistance and help and guidance to the users, from a community point of view. It's not just from us, whether it is Red Hat or our IBM Research; it's really enterprises participating, and we're already seeing that interest from the enterprises, because there was a big gap in this area. A lot of vendors... exactly. When you start on this journey, there will be a hundred people telling you, "all you have to do is this." Yeah, that's easy. "All you have to do..." I know; that's when the red flag goes up. >> It's easy, just go cloud native all the way, everything as a service, it's just so easy. Just, you know... that reminds me of Brian Gracely; you'll get a kick out of this. I want to just quickly go on a tangent here: Brian Gracely, who's a product strategist at Red Hat. You're going to like this, because he's saying, look at the cloud native piece expanding, because the enterprises now are in there and they're doing good work. Before, you saw projects like Envoy come from the hyperscalers, like Lyft, and, you know, the big companies who were building their own stuff. So you start to see that transition. It's no longer the debate on open source and Kubernetes and cloud native; the discussion is integrating legacy. So this is the big discussion this week. Do you guys agree with that, and what would be your reaction? >> Yeah, no, I agree with you. Right? I mean, I think, you know, the stat you always hear is that the first 20% of kind of cloud happened, and now there's all the rest of it.
Right? And modernization is going to be the big piece, right? You have to be able to modernize those applications and those workloads, and I think they're going to fall in three key buckets: rehost, replatform, refactor. Depending on your business justification and your needs, you're going to choose one of those paths, and we just want to be able to provide open tools and a community-based approach to those folks, to help with that. And, just like it always does, you know: upstream first, and then we'll have enterprise versions of these migration toolkits based on these projects. But we really do want to build them in the open and make sure we have the best solution to the problem, and we believe community is the way to do that. >> And just to add to what James said: typically we are talking about enterprises that will have thousands of applications. So we're not talking about a 10 or 40 number; we're talking thousands. And 20% is not a small number, that's still hundreds of applications, but, man, the bulk of the work is remaining, and that's why they are getting excited about cloud native now. Okay, we have seen the benefit in this little bit here, but now let's get, you know, serious about that transformation. And this is about helping them in a cloud native, open source way, which is what Red Hat excels at. Let's bring the community together. >> I'm actually doing a story on that. You brought up thousands of applications, and I think it's underestimated; I think it's going to be thousands and thousands more, because business is software-driven everywhere now, and observability has pointed this out. I was talking to the founder of the Grafana project, and it's like, how many thousands of dashboards are you going to need?
So, again, this is the problems and the opportunities coming together; the abstraction will get you to move up the stack in terms of automation. So it's kind of fascinating when you start thinking about the impact as this goes to the next level. And so I have to ask you, Ruchir, since you're an IBM Fellow and Chief Scientist, which, by the way, is a huge distinction, congratulations; being an IBM Fellow is a big deal, IBM takes that very seriously, there are only a few of them. You've seen many waves and cycles of innovation. How would you categorize this one now? Maybe I'm getting old and loving this right now, but this seems like everything kind of coming together in one flash point, one major inflection point. All the other waves combined seem to be in this one movement, very fast. What's your take on this wave that we're in? >> Yes, I would really say a lot of technology has been developed, but that technology needs to have its value unleashed, and that's exactly where the intersection of those applications and that technology occurs. I'm going to put in yet another one. You talked about everything becoming software; this was Andreessen, I think, who said software is eating the world. Another wave that has started is AI eating software as well, and I do believe these two will go side by side. Let me just give you a brief example around refactoring: how you take your application, and smart ways of using AI to recommend the right microservices for you, is another area we've been working towards, and some of those capabilities will actually come into this community as well. So when we talk about innovations in this area, we are bringing together the best of IBM Research as well.
We are hoping the community joins as well, and enterprises are already starting to join, to bring together the latest of the innovations, their applications, and the best practices, to unleash the value of the technology in moving the rest of that 80%, and to be able to seamlessly bridge from the legacy environment to the cloud native environment. >> Yeah. And hybrid cloud, which is going to be multi-cloud really, is the backbone and operating system of business, and of life and society. So as these apps start to come online, APIs and integration, all of these things are coming together. So, yeah, this Konveyor project and Konveyor community look like a really strong approach. Congratulations. >> Good job. >> Yeah, great stuff. Kubernetes enabling companies, enabling all kinds of value, here on theCUBE. We're bringing it to you with two experts. James, Ruchir, thanks for coming on theCUBE and sharing. Thank you. >> Thank you. >> Okay, KubeCon + CloudNativeCon coverage. I'm John Furrier with theCUBE. Thanks for watching.
Keynote | Red Hat Summit 2019 | DAY 2 Morning
>> Ladies and gentlemen, please welcome Red Hat President of Products and Technologies, Paul Cormier. >> Welcome back to Boston. Welcome back, and welcome back after a great night last night at our opening, with Jim, and talking to Satya and Ginni, and especially our customers. It was so great last night to hear our customers, how they set their goals and how they met their goals. All possible, certainly, with a little help from Red Hat, but all possible because of open source. And, you know, sometimes we all have to set goals. I'm going to talk this morning about what we as a company, and with the community, have set for our goals along the way. And sometimes you have to set, you know, audacious goals. It can really change the perception of what's even possible. And if I look back, I can't think of anything, at least in my lifetime, that's more important, or such a big goal, than John F. Kennedy setting the goal for the American people to go to the moon. Believe it or not, I was really only three years old when he said that, honestly. But as I grew up, I remember the passion around the whole country, and the energy, to make that goal a reality. So let's compare and contrast a little bit of where we were technically at that time. To win, to be in the space race and win it, there were some really big technical challenges along the way. I mean, believe it or not, not that long ago, but back then, mathematical calculations were being shifted from brilliant people, who we trusted and could look in the eye, to a computer that was programmed, with the results mostly printed out. This was a time when the potential of computers was just really coming on the scene, and the space race depended on it.
The space race revolved around an IBM 7090, which was one of the first transistor-based computers. It could perform mathematical calculations faster than even the most brilliant mathematicians. But just like today, this also came with many, many challenges. And while we had the goal, and the beginnings of the technology to accomplish it, we needed people so dedicated to that goal that they would risk everything. And while it may seem commonplace to us today to put our trust in machines, that wasn't the case back in 1969. The seven individuals that made up the Mercury space crew were putting their lives in the hands of those first computers. But on Sunday, July 20th, 1969, these things all came together, the goal, the technology, and the team, and a human being walked on the moon. You know, if this was possible fifty years ago, just think about what can be accomplished today, where technology is part of our everyday lives. And with technology advancing at an ever-increasing rate, it's hard to comprehend the potential sitting right at our fingertips every single day. Everything you know about computing is continuing to change. Let's look back a bit at computing. In 1969, the IBM 7090 could process one hundred thousand floating point operations per second. Today's Xbox One, sitting in most of your living rooms, can process six trillion flops. That's sixty million times more powerful than the original 7090 that helped put a human being on the moon. And at the same time that computing has drastically changed, so have the boundaries of where that computing sits and where it lives. At the time of the Apollo launch, the computing power was often a single machine. Then it moved to a single data center, and over time that grew to multiple data centers.
Then, with cloud, it extended all the way out to data centers that you didn't even own or have control of. But computing now reaches far beyond any data center. This is also referred to as the edge; you hear a lot about that. Apollo's version of the edge was the guidance system: a two-megahertz computer that weighed seventy pounds, embedded in the capsule. Today, the edge is right here on my wrist. This Apple Watch weighs just a couple of ounces, and it's ten thousand times more powerful than that 7090 back in 1969. But even more impactful than computing advances, combined with the pervasive availability of computing, are the changes in who and what controls it, similar to the social changes that have happened along the way. Having shifted from mathematicians to computers, we're now facing the same type of changes with regard to operational control of our computing power. In its first forms, operational control was your team, your team within your control; in some cases, a single person managed everything. But as complexity grew, our teams expanded. Just like the computing boundaries, system integrators and public cloud providers have become an extension of our team. But at the end of the day, it's still people that are making all the decisions. Going forward, with the progress of things like AI and software-defined everything, it's quite likely that machines will be managing machines, and in many cases that's already happening today. But while the technology at our fingertips today is so impressive, the pace of change and the complexity of the problems we aspire to solve are equally hard to comprehend, and they are all intertwined with one another, learning from each other, growing together faster and faster. We are tackling problems today on a global scale, with unthinkable complexity, beyond what any one single company or even one single country can solve alone.
This is why open source is so important. This is why open source is so needed today in software, and why open source is so needed today, even in the wider world, to solve other types of complex problems. And this is why open source has become the dominant development model, the one driving the technology direction today: to bring together the best innovation from every corner of the planet, to fundamentally change how we solve problems. This approach, and this access to innovation, is what has enabled open source to tackle the big challenges, like building a truly open hybrid cloud. But even today, it's really difficult to bridge the gap between the innovation available at all of our fingertips through open source development and the production-level capabilities that are needed to really deploy it in the enterprise and solve real-world business problems. Red Hat has been committed to open source from the very, very beginning, and to bringing it to solve enterprise-class problems, for the last seventeen-plus years. But when we built that model to bring open source to the enterprise, we absolutely knew we couldn't do it halfway and still harness the innovation. We had to fully embrace the model. We made a decision very early on: give everything back. And we live by that every single day. We didn't do crazy things like you hear so many do out there: "all this is open core," or "everything below the line is open and everything above the line is closed." We didn't do that, and we gave everything back. Everything we learned in the process of becoming an enterprise-class technology company, we gave all of that back to the community, to make better and better software. This is how it works, and we've seen the results of that.
We've all seen the results of that, and it could only have been possible with an open source development model. We've been building on the foundation of open source's most successful project, Linux, and on the architecture of the future, hybrid cloud, and bringing them to the enterprise. This is what made Red Hat the company that we are today. Along Red Hat's journey we also had to set goals, and many of them seemed insurmountable at the time, the first of which was making Linux the enterprise standard. And while this is so accepted today, let's take a look at what it took to get there. Our first launch into the enterprise was RHEL 2.1. Yes, I know, 2.1, but we knew we couldn't release a 1.0 product, and we didn't. We didn't want to allow any reason why any customer should look past RHEL as an option to solve their problems. Back then, we had to fight every single flavor of Unix in every single account. But we were lucky to have a few initial partners, big ISV partners, that supported RHEL out of the gate. And while we had the determination, we knew we also had gaps in order to deliver on our priorities. In the early days of RHEL, I remember going to ask one of our engineers for a past RHEL build, because we were having a customer issue on an older release. And then I watched in horror as he rifled through a mess of CDs on his desk, magically came up with one, said, "I found it, here it is," and told me not to worry, that he thought this was the right build. At that point I knew that despite the promise of Linux, we had a lot of work ahead of us, not only to convince the world that Linux was secure, stable, and enterprise-ready, but also to make that a reality. But we did. And today this is our reality. It's all of our reality.
From the enterprise data center standard to the fastest computers on the planet, Red Hat Enterprise Linux has continually risen to the challenge and has become the core foundation that many mission-critical customers run and bet their business on. And even bigger, today Linux is the foundation upon which practically every single technology initiative is built. Linux is not only the standard to build on today, it's the standard for the innovation that builds around it. That's the innovation that's driving the future as well. We started our story with RHEL 2.1, and here we are today, seventeen years later, announcing RHEL 8, as we did last night. It's specifically designed for applications to run across the open hybrid cloud. RHEL 8 has become the best operating system from on-premise all the way out to the cloud, providing that common operating model and workload foundation on which to build hybrid applications. Let's take a look at how far we've come and see this in action. >> Please welcome Red Hat global director of developer experience Burr Sutter, with Josh Boyer, Timothy Kramer, Lars Karlitski, and Brent Midwood. >> All right, we have some amazing things to show you. In just a few short moments, we actually have a lot of things to show you. Tim and Brent will be with us momentarily; they're working out a few things in the back, because a lot of this is going to be a live demonstration of some incredible capabilities. Now, you're going to see clear innovation inside the operating system, where we worked incredibly hard to make it vastly easier for you to manage many, many machines. I want you thinking about that as we go through this process. Also, keep in mind that this is the basis, our core platform, for everything we do here at Red Hat. So it is an honor for me to be able to show it to you live on stage today. And I recognize many of you in the audience right now.
Hands-on systems administrators, systems architects, site reliability engineers. And we know that you're under ever-growing pressure to deliver needed infrastructure resources ever faster, and that is a key element of what you're thinking about every day. Well, this has been a core theme in the design decisions behind Red Hat Enterprise Linux 8, an intelligent operating system which is making it fundamentally easier for you to manage machines at scale. So I hope what you're about to see next feels like a new superpower, and that Red Hat is your force multiplier. So first, let me introduce you to Lars. He's totally my Linux guru. >> I wouldn't call myself a guru, but I guess you could say that I want to bring Linux and enlightenment to more people. >> Okay, well, let's dive in and tell us about RHEL 8. >> Sure, let me log in. >> Wait a second. There's Windows. >> Yeah, we built the web console into RHEL. That means that for the first time, you can log in from any device, including your phone or this standard Windows laptop. So I just go ahead and type in my credentials here. >> Okay, so now you're putting your Linux password in over the web? >> Yeah, that might sound a bit scary at first, but of course we're using the latest security tech like TLS and CSP. And because the standard Linux auth stack is behind it, you can use everything that you're used to, like SSH keys, OTP tokens, and things like that. >> Okay, so now I see the console right here. I love the dashboard overview of the system, but what else can you tell us about this console? >> Right here you see the load of the system and some of its properties. But you can also dive into logs, everything that you're used to from the command line. Or look at services: these are all the services I have running; I can start and stop them and enable them. >> Okay, I love that feature right there.
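As an aside, the web console demonstrated here is the Cockpit project that ships with RHEL 8. A minimal sketch of turning it on, assuming a RHEL/Fedora-family host with sudo rights (package, service, and port names follow the standard Cockpit documentation):

```shell
# Install the web console (Cockpit) and enable its socket-activated service.
sudo yum install -y cockpit
sudo systemctl enable --now cockpit.socket

# Open the firewall for the console's default port (9090).
sudo firewall-cmd --add-service=cockpit --permanent
sudo firewall-cmd --reload

# Then browse to https://<host>:9090 and log in with normal system
# credentials -- the same auth stack (SSH keys, OTP) mentioned above.
```

From there, any browser, including the Windows laptop or phone from the demo, can reach the dashboard, logs, and services views.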
So what about if I have to add a whole new application to this environment? >> Good that you're bringing that up. We built a new feature into RHEL called Application Streams, which is a way for you to install different versions of your software stack, all supported. I'll show you with yum on the command line. But since Windows doesn't have a proper terminal, I'll just do it in the terminal that we built into the web console. Since it's in the browser, I can even make this a bit bigger. Let's go, for example, to the application streams that we have for Postgres. I just do "yum module list" and I see we have 10 and 9.6, both supported; 10 is the default, and if I enable 9.6, then the next time I install Postgres it will pull all the related tools from the 9.6 stream. >> Okay, so this is very cool. I see two versions of Postgres right here, with 10 as the default. That is fantastic, and Application Streams make that happen. But I'm really kind of curious: I love using Node.js and Java, so what about multiple versions of those? >> Yeah, that's exactly the idea. We want to keep up with the fast-moving ecosystems of programming languages and databases. >> Okay, but I have another key question; I know some people are thinking it right now. What about Python? >> Yeah, in fact, on a minimal install like this, typing "python" gives you command not found. You just have to type it correctly: you can install whichever one you want, 2 or 3, whichever your application needs. >> Okay, well, I've been burned on that one before. Okay, so now I actually have a confession for all you guys right here. You guys keep this amongst yourselves; don't let Paul know. I'm actually not a Linux systems administrator. I'm an application developer, an application architect, and I recently had to go figure out how to extend the file system. This is for real.
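The Application Streams flow described above can be sketched roughly like this; the module and stream names mirror the Postgres example from the demo, though exact stream versions vary by RHEL release:

```shell
# List the available streams for PostgreSQL; on RHEL 8 this shows,
# for example, streams 10 (default) and 9.6, both supported.
yum module list postgresql

# Opt in to the older stream instead of the default.
sudo yum module enable -y postgresql:9.6

# From now on, installing the package pulls everything from the
# 9.6 stream rather than the default 10 stream.
sudo yum install -y postgresql-server

# Python works the same way: there is no bare "python" on a minimal
# install; you ask explicitly for the version your application needs.
sudo yum install -y python3
```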
And I'm going to the Red Hat knowledge base and looking up things like, you know, pvcreate, vgextend, resize2fs. And I have to admit, that's hard. >> Right. I've opened the storage page for you right here, where you see an overview of your storage. And the console is made for people like you as well, not only for longtime Linux people. Even if you do run some of these commands, you only run them some of the time, and you don't remember them. So, for example, I have a filesystem here that's a little bit too small. Let me just grow it. It's just, you know, dragging this slider, and it calls all the commands in the background for you. >> Oh, that is incredible. It's that simple, just drag and drop? That is fantastic. Well, so I actually have another question for you. It looks like Linux systems administration is no longer a dark art involving arcane commands typed into a black terminal, like using one of those funky ergonomic keyboards, you know the ones I'm talking about, right? >> You know, a lot of people, including me and people in the audience, like that dark art, right? And this is not taking any of that away. It's an additional tool to bring Linux to more people. >> Okay, well, that is absolutely fantastic. Thank you so much for that, Lars. And I really love how installing everything is so much easier, including PostgreSQL and, of course, the Python that we saw right there. So now I want to change gears for a second, because I actually have another situation that I'm always dealing with. And that is, every time I want to build a new Linux system, I have to install all those things again and again; it feels like I'm doing it over and over. So, Josh, how would I create a golden image, one VM image that I can use with everything pre-baked in? >> Yeah, absolutely. We get that question all the time. So RHEL includes image builder technology.
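For reference, the arcane commands mentioned above, which the console's storage slider wraps for you, are the standard LVM resize sequence. A hedged sketch, assuming an ext4 filesystem on a hypothetical logical volume /dev/rhel/data and a spare disk /dev/sdb (substitute your own device names):

```shell
# Grow a filesystem the command-line way (what the slider automates).
sudo pvcreate /dev/sdb                 # make the new disk an LVM physical volume
sudo vgextend rhel /dev/sdb            # add it to the "rhel" volume group
sudo lvextend -L +10G /dev/rhel/data   # grow the logical volume by 10 GiB
sudo resize2fs /dev/rhel/data          # grow the ext4 filesystem to match
# (On XFS, the last step would be xfs_growfs on the mount point instead.)
```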
Image builder technology is actually all of our hybrid cloud operating system image tools, the ones we use to build our own images, rolled up in a nice, easy-to-integrate system. So if I come here in the web console and go to our image builder tab, it brings us to blueprints. Blueprints are what we use to actually control what goes into our golden image. And I heard you and Lars talking about Postgres and Python, so I went and started typing here, and it brings us to this page. You can go to the selected components, and you can see here I've created a blueprint that has all the Python and Postgres packages in it. And the interesting thing about this is that it builds on our existing kickstart technology, but you can use it to deploy to whatever cloud you want. And it's saved, so you don't actually have to know all the various incantations from Amazon to Azure to Google, whatever; it's all baked in. And when you do this, you can actually see the dependencies that get brought in as well. >> Okay. Should we create one live? >> Yes, please. >> All right, cool. So if we go back to the blueprints page and we click create blueprint, let's make a developer blueprint here. So we click create, and you can see here on the left-hand side I've got all of my content served up by Red Hat Satellite. We have a lot of great stuff in there, but we can go ahead and search. So we'll look for Postgres, and, you know, it's a developer image, so we'll add the client for some local testing. Then we'll come in here and add the Python bits, probably the development package. We need a compiler if we're going to actually build anything, so we'll look for GCC here. And hey, what's your favorite editor? >> Emacs, of course. >> Emacs, all right. Hey, Lars, how about you? >> I'm more of a vi person. >> vi, all right. Well, if you want to prevent a holy war in your systems, you can actually use Satellite to filter that out.
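The same blueprint workflow is also scriptable. As a hedged sketch, using the composer-cli tool that accompanies RHEL 8's image builder (the blueprint name and package list here are illustrative, mirroring the on-stage demo):

```shell
# developer.toml -- a minimal image builder blueprint.
cat > developer.toml <<'EOF'
name = "developer"
description = "Postgres + Python developer image"
version = "0.0.1"

[[packages]]
name = "postgresql"

[[packages]]
name = "python3"

[[packages]]
name = "gcc"
EOF

# Push the blueprint and start a compose for a local VM image (qcow2);
# other output types target AWS, Azure, Google, and so on.
composer-cli blueprints push developer.toml
composer-cli compose start developer qcow2
composer-cli compose status
```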
But we're going to go ahead and add them both; we don't want a fight on stage. So we just point and click in the graphical tool, and then when we're all done, we just commit our changes, and our image is ready to build. >> Okay. So this VM image we just created from that blueprint, I can now actually go out there and easily deploy it across multiple cloud providers, as well as on stage where we are right now? >> Yeah, absolutely. We can deploy on Amazon, Azure, Google, any infrastructure you're looking for, so you can really build your hybrid cloud operating system images. >> Okay. All right. >> Let's just go on and click create image. We can select our different types here. I'm going to go ahead and create a local VM, because it's an available image type and maybe we want to pass it around or whatever, and I just need a few moments for it to build. >> Okay. So while that's taking a few moments, I know there's another key question in the minds of the audience right now. You're probably thinking, "I love what I see with RHEL 8, but what does it take to upgrade from 7 to 8?" So, Lars, can you show us and walk us through an upgrade? >> Sure. This is my little blog that I set up. It's powered by WordPress, but it's still running on 7.6. So let's upgrade that. I'll jump over to my host view on Satellite, and you see all my RHEL machines here, including the one I showed the web console on before. And there is the one with my blog, and there's a couple of others. Let me select those as well, this one and that one. I just go up here, schedule a remote job, choose the upgrade, and hit submit. I set it up so that it takes a snapshot before, so if anything goes wrong, it can roll back. >> Okay, okay, so now it's progressing here. >> It's progressing. Looks like it's running. >> Doing a live upgrade on stage. >> Hmm, seems like one is failing.
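Under the hood, the supported in-place upgrade path from RHEL 7 to RHEL 8 that Satellite drives here is the Leapp utility. A hedged sketch of running it by hand on a single box (repository setup and exact package availability vary by subscription):

```shell
# On the RHEL 7.6 host, install the upgrade tooling.
sudo yum install -y leapp leapp-repository

# Dry run first: this generates a pre-upgrade report listing anything
# that would block the upgrade (like the unsupported filesystem that
# tripped up the demo box).
sudo leapp preupgrade
less /var/log/leapp/leapp-report.txt

# If the report is clean, run the actual upgrade and reboot into RHEL 8.
sudo leapp upgrade
sudo reboot
```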
What's going on here? >> Okay, let's check the pre-upgrade check. Oh yeah, that's the one I was playing around with Btrfs on backstage. It detected that, and it doesn't run the upgrade, because we don't support upgrading that. >> Okay, so what I'm hearing now, the good news is we were protected from a possibly failed upgrade there. So it sounds like these upgrades are perfectly safe; I can basically, you know, schedule this during a maintenance window and still get some sleep. >> Totally. That's the idea. >> Okay, fantastic. All right. So it looks like upgrades are easy and perfectly safe, and I really love what you showed us there. It's a point-and-click operation right from Satellite. Okay, so, well, while we were checking out upgrades, I want to know, Josh, how are those VMs coming along? >> They went really well. You were away for so long, I got a little bored and took some liberties. >> What do you mean? >> Well, the image build finished, and, you know, I decided to go ahead and deploy here to this Intel machine on stage. So I have that up and running in the web console. I built another one on the Arm box, which is actually pretty fast, and that's up and running on this Arm machine. And that went so well that I decided to spin up some in Amazon. So I've got a few instances here running in Amazon, with the web console accessible there as well. And even more of our pre-built images are up and running in Azure, with the web console there too. So the really cool thing about this, Burr, is that all of these images were built with image builder in a single location, controlling all the content that you want in your golden images, deployed across the hybrid cloud. >> Wow, that is fantastic. And you might think that's it, but we actually have more to show you. So thank you so much for that, Lars and Josh. That is fantastic. It looks like provisioning Red Hat Enterprise Linux 8 systems is easier than ever before, but we have more to talk to you about. And there's one thing that many of the operations professionals in this room right now know: provisioning VMs is easy, but it's really day two, day three, down the road, that those VMs require day-to-day maintenance. As a matter of fact, several of you folks right now in this audience have to manage hundreds, if not thousands, of virtual machines. I recently spoke to a gentleman who has to manage thirteen hundred servers. So how do you manage those machines at great scale? Oh, great, it looks like Tim and Brent have now joined us; they worked things out. So now I'm curious, Tim: how will we manage hundreds, if not thousands, of computers? >> Well, Burr, one human managing hundreds or even thousands of VMs is no problem, because we have Ansible automation. And by leveraging Ansible's integration into Satellite, not only can we spin up those VMs really quickly, like Josh was just doing, but we can also make ongoing maintenance of them really simple. Come on up here. I'm going to show you a Satellite inventory, and as Red Hat publishes patches, with that Ansible integration we can easily apply those patches across our entire fleet of machines. >> Okay, that is fantastic. So all the machines can get updated in one fell swoop? >> They sure can. And there's one thing that I want to bring your attention to today, because it's brand new, and that's cloud.redhat.com. Here at cloud.redhat.com you can view and manage your entire inventory of Red Hat Enterprise Linux, no matter where it sits: on-prem, on stage, private cloud, or public cloud. It's true hybrid cloud management. >> Okay, but one thing, one thing I know is in the minds of the audience right now, and if you have to manage a large number of servers this comes up again and again: what happens when you have those critical vulnerabilities? That next zero-day CVE could be tomorrow. >> Exactly.
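A hedged sketch of the kind of fleet-wide patching described here, using plain Ansible from a control node; the inventory group name is illustrative, and the Satellite integration wraps the same mechanics in the web UI:

```shell
# Apply all pending package updates to every host in the hypothetical
# "rhel-fleet" inventory group, 10 machines at a time.
ansible rhel-fleet -m yum -a "name=* state=latest" --become --forks 10

# Or as a one-task playbook run, closer to what Satellite schedules
# as a remote job:
cat > patch.yml <<'EOF'
- hosts: rhel-fleet
  become: true
  tasks:
    - name: Apply all available package updates
      yum:
        name: "*"
        state: latest
EOF
ansible-playbook patch.yml
```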
I've actually been waiting for a while, patiently, for you to get to the really good stuff. So there's one more thing that I wanted to let folks know about Red Hat Enterprise Linux 8 and some features that we have there. >> Oh yeah? What is that? >> So, actually, one of the key design principles of RHEL is working with our customers over the last twenty years to integrate all the knowledge that we've gained and turn that into insights that we can use to keep our Red Hat Enterprise Linux servers running securely and efficiently. And so what we actually have here are a few things we can take a look at to show folks what that is. >> Okay, so we basically have this new feature we're going to show people right now. And one thing I want to make sure of: is it absolutely included within the Red Hat Enterprise Linux subscription? >> Yes. The announcement that we're making this week is that this is a brand new feature that's integrated with Red Hat Enterprise Linux, and it's available to everybody that has a Red Hat Enterprise Linux subscription. >> I believe everyone in this room right now has a RHEL subscription, so it's available to all of them. >> Absolutely, absolutely. So let's take a quick look and try this out. What we actually have here is a list of about six hundred rules. They're configuration, security, and performance rules, and this list is growing every single day, so customers can opt in to the rules that are most applicable to their enterprises. So what we're actually doing here is combining the experience and knowledge that we have with the data that our customers opt in to sending us. Customers have opted in and are now sending us more data every single night than they sent in total over the last twenty years via any other mechanism. >> Now I see there are some critical findings. That's what I was talking about when it comes to CVEs and things of that nature.
>> Yeah, I'm betting that those are probably some of the RHEL 7 boxes that we haven't actually upgraded quite yet, so we'll get back to that. What I'd really like to show everybody here, because everybody has access to this, is how easy it is to opt in and enable this feature for RHEL. >> Okay, let's do that real quick. >> So I've got to hop back over to Satellite here. This is the Satellite that we saw before. I'll grab one of the hosts, and we can use the new web console feature that's part of RHEL 8, and via single sign-on I can jump right from Satellite over to the web console. So it's really, really easy. And I'll grab a terminal here, and registering with Insights is really, really easy. It's one command, and what's happening right now is the box is going to gather some data, send it up to the cloud, and within just a minute or two we're going to have some results that we can look at back on the web interface. >> I love it. So it's just a single command and you're ready to register this box right now. That is super easy. Well, that's fantastic, Brent. We started this whole series of demonstrations by telling the audience that Red Hat Enterprise Linux 8 was the easiest, most economical, and smartest operating system on the planet, period. And, well, I think it's cute how you can go ahead and opt in on a single machine, but I'm going to show you one more thing. This is Ansible Tower. You can use Ansible Tower to manage and govern your Ansible playbook usage across your entire organization, and with this, what I can do is, on every single VM that was spun up here today, opt in and register with Insights with a single click of a button. >> Okay, I want to see that right now. I know everyone's waiting for it as well. But hey, your VM is ready. Josh? Lars? >> Yeah, my clock is running a little late now. >> Yeah, Insights is a really cool feature of RHEL, and I've got it in all my images already. >> All right, I'm doing it, all right.
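The single registration command demonstrated above is the insights-client tool. A hedged sketch of the manual flow on one host, assuming the host is already subscription-registered:

```shell
# Register this RHEL host with Red Hat Insights; the client gathers a
# metadata snapshot and uploads it for analysis against the rule set.
sudo yum install -y insights-client
sudo insights-client --register

# Subsequent runs upload a fresh snapshot; results appear in the
# cloud.redhat.com Insights inventory within a minute or two.
sudo insights-client
```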
And so, as this playbook runs across the inventory, I can see the machines registering on cloud.redhat.com, ready to be managed. >> Okay, so all those on-stage VMs, as well as the hybrid cloud VMs, should be popping in as they register. Well, fantastic. >> That's awesome. Thanks, Tim. Nothing better than a Red Hat Summit speaker in their first live demo going off script. Let's go back and take a look at some of those critical issues affecting a few of our systems here. So you can see this is a particular dnsmasq issue. It's going to affect a couple of machines; we saw that in the overview, and I can actually go and get some more details about what this particular issue is. If you take a look at the right side of the screen there, there's actually a critical likelihood and an impact associated with this particular issue, and what that really translates to is that there's a high level of risk to our organization from this particular issue, but also a low risk of change. And what that means is that it's really, really safe for us to go ahead and use Ansible to remediate this. So I can grab the machines, we'll select those two, and we'll remediate with Ansible. I can create a new playbook. It's our maintenance window, but we'll do something along the lines of "stuff Tim broke," and that'll be our cause; we can name it whatever we want. So we'll create that playbook and take a look at it, and it's actually going to give us some details about the machines, you know, what type of reboots, if any, are going to be needed, and what we need here. So we'll go ahead and execute the playbook, and what you're going to see is the output happening in real time. This is happening from the cloud, and we're affecting machines no matter where they are. They could be on-prem, in a hybrid cloud, a public cloud, or a private cloud. And these things are going to be remediated very, very easily with Ansible.
So it's really, really awesome. Everybody here with a Red Hat Enterprise Linux subscription has access to this now, so I kind of want everybody to go try this; we really need to get this thing going, and try it out right now. >> But don't go sprinting out of the room just yet; stay here. >> Okay, Mr. Excitability. I think after this keynote, come back to the Red Hat booth; there's an optimization section where you can come talk to our Insights engineers, and even though it's really easy to get going on your own, they can help you out and answer any questions you might have. >> So this is really the start of a new era with an intelligent operating system, and you just saw right now what Insights can do for you. Fantastic. So we're enabling systems administrators to manage Red Hat Enterprise Linux at a greater scale than ever before. I know there's a lot more we could show you, but we're totally out of time at this point, and we went a little bit sideways here at moments, but we need to get off the stage. There's one thing I want you guys to think about, all right? Do come check out the booth, like Tim just said, and also in our labs get hands-on with Red Hat Enterprise Linux 8 as well. But really, I want you to think about this: one human and a multitude of servers. And remember that one thing I asked you up front: do you feel like you got a new superpower, and that Red Hat is your force multiplier? All right, well, thank you so much, Josh and Lars, Tim and Brent. Thank you. And let's get Paul back on stage. >> That went brilliantly. It's just, as always, amazing. I mean, as you can tell from last night, we're really, really proud of RHEL 8 coming out here at the summit, and what a great way to showcase it. Thanks so much to you, Burr. Thanks, Brent, Tim, Lars, and Josh. Thanks again. So you've just seen this team demonstrate how impactful RHEL can be in your data center.
So hopefully many of you, if not all of you, have experienced that as well. But what about supercomputers? We hear about those all the time, and as I told you a few minutes ago, Linux isn't just the foundation for enterprise and cloud computing. It's also the foundation for the fastest supercomputers in the world, and our next guest is here to tell us a lot more about that. >> Please welcome Lawrence Livermore National Laboratory HPC solution architect Robin Goldstone. >> Thank you so much, Robin. So welcome, welcome to the summit, welcome to Boston, and thank you so much for joining us. Can you tell us a bit about the goals of Lawrence Livermore National Lab and how high-performance computing really works at this level? >> Sure. So Lawrence Livermore National Lab was established during the Cold War to address urgent national security needs by advancing the state of nuclear weapons science and technology, and high-performance computing has always been one of our core capabilities. In fact, our very first supercomputer, a UNIVAC 1, was ordered by Edward Teller before our lab even opened, back in 1952. Our mission has evolved since then to cover a broad range of national security challenges, but first and foremost our job is to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile. Since the US no longer performs underground nuclear testing, our ability to certify the stockpile depends heavily on science-based methods. We rely on HPC to simulate the behavior of complex weapons systems to ensure that they can function as expected, well beyond their intended life spans. >> That's actually great. So you really are still running on that UNIVAC? >> No, actually, we've moved on since then. So, Sierra is Lawrence Livermore's latest and greatest supercomputer.
It's currently the second fastest supercomputer in the world, and for the geeks in the audience, and I think there's a few of them out there, we put up some of the specs of Sierra on the screen behind me. A couple of things worth highlighting are Sierra's peak performance and its power utilization. One hundred twenty-five petaflops of peak performance is equivalent to about twenty thousand of those Xbox One Xs that you mentioned earlier, and the eleven point six megawatts of power required to operate Sierra is enough to power around eleven thousand homes. Sierra is a very large and complex system, but underneath it all, it starts out as a collection of servers running Linux, and more specifically, RHEL. >> So did Lawrence Livermore National Lab use RHEL before Sierra? >> Oh yeah, most definitely. We've been running RHEL for a very long time on what I'll call our mid-range HPC systems. These clusters, built from commodity components, are sort of the bread and butter of our computer center, and running RHEL on these systems provides us with continuity of operations and a common user environment across multiple generations of hardware, and also between Lawrence Livermore and our sister labs, Los Alamos and Sandia. Alongside these commodity clusters, though, we've always had one sort of world-class supercomputer like Sierra. Historically, these systems have been built from sort of exotic proprietary hardware, running entirely closed-source operating systems. Anytime something broke, which was often, the vendor would be on the hook to fix it. And, you know, that sounds like a good model, except that what we found over time is that most of the issues we had on these systems were due either to the extreme scale or to the complexity of our workloads. Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified codes.
So their ability to reproduce our problems was pretty limited. In some cases, they even sent an engineer on site to try to reproduce our problems, but even then, sometimes we wouldn't get a fix for months, or else they would just tell us they weren't going to fix the problem because we were the only ones having it. >> So for many of us, that challenge is one of the driving reasons for open source, you know, for open source even existing. How did Sierra change things around open source for you? >> Sure. So when we developed our technical requirements for Sierra, we had an explicit requirement that we wanted to run an open source operating system, and a strong preference for RHEL. At the time, IBM was working with Red Hat to add support to RHEL for their new little-endian POWER architecture, so it was really just natural for them to bid a RHEL-based system for Sierra. Running RHEL on Sierra allows us to leverage the model that's worked so well for us all this time on our commodity clusters: any packages that we build for x86, we can now build for POWER as well, using our internal build infrastructure. And while we have a formal support relationship with IBM, we can also tap our in-house kernel developers to help debug complex problems. Our sysadmins can now work on any of our systems, including Sierra, without having to pull out their cheat sheet of obscure proprietary commands. Our users get a consistent software environment across all our systems. And if a security vulnerability comes out, we don't have to chase around getting fixes from multiple OS vendors. >> You know, you've been able to extend your foundation all the way from x86 out to exascale supercomputing. We talk about giving customers, we talk about it all the time, a standard operational foundation to build upon.
This is exactly what we've envisioned. So what's next for you guys? >> Right, so what's next? Sierra's just now going into production, but even so, we're already working on the contract for our next supercomputer, called El Capitan, that's scheduled to be delivered to Lawrence Livermore in the 2022 to 2023 timeframe. El Capitan is expected to be about ten times the performance of Sierra. I can't share any more details about that system right now, but we are hoping that we're going to be able to continue to build on the solid foundation that RHEL has provided us for well over a decade. >> Well, thank you so much for your support of RHEL over the years, Robin, and thank you so much for coming and telling us about it today. We can't wait to hear more about El Capitan. Thank you. >> Thank you very much. >> So now you know why we're so proud of RHEL, and why you saw those confetti cannons and T-shirt cannons last night. So, you know, as Burr and the team demonstrated, RHEL is the force multiplier for servers. We've made Linux one of the most powerful platforms in the history of platforms. But just as Linux became a viable platform with access for everyone, and RHEL became more viable every day in the enterprise, open source projects began to flourish around the operating system, and we needed to bring those projects to our enterprise customers in the form of products, with the same trust models as we did with RHEL. Seeing the incredible progress of software development occurring around Linux led us to the next goal that we set for ourselves. That goal was to make hybrid cloud the default enterprise architecture. How many of you out here in the audience are SAs or SREs? How many out there? A lot, a lot. You are the people that are building the next generation of computing, the hybrid cloud. You know, again, just like our goals around Linux,
This goal might seem a little daunting in the beginning, but as a community, we've proved it time and time again: we are unstoppable. Let's talk a bit about what got us to the point we're at right now, and the work that, as always, we still have in front of us. We've been on a decade-long mission on this. Believe it or not, this mission was to build the capabilities needed around the Linux operating system to really build and make the hybrid cloud. When we saw RHEL first taking hold in the enterprise, we knew that was just the first step, because for a platform to really succeed, you need applications running on it, and to get those applications on your platform, you have to enable developers with the tools and runtimes for them to build upon. Over the years, we've closed a few, if not a lot, of those gaps, starting with the acquisition of JBoss many years ago, all the way to the new Kubernetes-native CodeReady Workspaces we launched just a few months back. We realized very early on that building a developer-friendly platform was critical to the success of Linux and open source in the enterprise. Shortly after this, the public cloud stormed onto the scene. While our first focus as a company was on premise, in customer data centers, the public cloud was really beginning to take hold. RHEL very quickly became the standard across public clouds, just as it was in the enterprise, giving customers that common operating platform to build their applications upon, ensuring that those applications could move between locations without ever having to change their code or operating model. With this new model of the data center spread across so many environments, management had to be completely rethought and re-architected. And given the fact that environments spanned multiple locations, solid management became even more important.
Customers deploying in hybrid architectures had to understand where their applications were running and how they were running, regardless of which infrastructure provider they were running on. We invested over the years in management right alongside the platform, from Satellite in the early days, to CloudForms, to Insights, and now Ansible. We focused on having management to support the platform wherever it lives. Next came data, which is very tightly linked to applications. Enterprise-class applications tend to create tons of data, and to have a common operating platform for your applications, you need a storage solution that's just as flexible as that platform, able to run on premise as well as in the cloud, even across multiple clouds. This led us to acquisitions like Gluster, Ceph, Permabit, and NooBaa, complementing our platform with Red Hat Storage. Even though this sounds very condensed, this was a decade's worth of investment, all in preparation for building the hybrid cloud: expanding the portfolio to cover the areas that a customer would depend on to deploy real hybrid cloud architectures, finding and amplifying the right open source projects and technologies, or filling the gaps with some of these acquisitions when that wasn't readily available. By 2014, our foundation had expanded, but one big challenge remained: workload portability. Virtual machine formats were fragmented across the various deployments, and higher-level frameworks such as Java EE still very much depended on a significant amount of operating system configuration. And then containers happened. Containers, despite having been in existence for a very long time, exploded onto the scene as a technology in 2014. Kubernetes followed shortly after in 2015, allowing containers to span multiple locations, and in one fell swoop, containers became the killer technology to really enable the hybrid cloud.
And here we are. Hybrid is really the only practical reality and way forward for customers, and at Red Hat, we've been investing in all aspects of this over the last eight-plus years to make our customers and partners successful in this model. We've worked with you, both our customers and our partners, building critical RHEL and OpenShift deployments, and we've been constantly learning about what has caused problems and what has worked well. And while we've amassed a pretty big amount of expertise to solve most any challenge in any area of that stack, it takes more than just our own learnings to build the next-generation platform. Today, we're also introducing OpenShift 4, which is the culmination of those learnings. This is the next generation of the application platform. This is truly a platform that has been built with our customers, and not simply just with our customers in mind. This is something that could only be possible in an open source development model. And just like RHEL is the force multiplier for servers, OpenShift is the force multiplier for data centers across the hybrid cloud, allowing customers to build thousands of containers and operate them at scale. And we've also announced Azure Red Hat OpenShift. Last night, Satya on this stage talked about that in depth. This is all about extending our goals of a common operating platform enabling applications across the hybrid cloud, regardless of whether you run it yourself or just consume it as a service. And with this flagship release, we are also introducing Operators, which is the central feature here. We talked about this work last year with the Operator Framework, and today, we're not going to just show you OpenShift 4.
We're going to show you Operators running at scale, Operators that will do updates and patches for you, letting you focus more of your time on running your infrastructure and running your business. We want to make all this easier and intuitive, so let's have a quick look at how we're doing just that. >> I know all of you have heard we're talking to potential new customers about the rollout. So, new plan: OpenShift as a service, to be launched by this summer. Look, I know this is a big ask for a not-very-big team. I'm open to any and all ideas. >> Please welcome back to the stage Red Hat global director of developer experience, Burr Sutter, with Jessica Forrester and Daniel McPherson. >> All right, we're ready to do some more now. Earlier, we showed you Red Hat Enterprise Linux running on lots of different hardware, like the hardware you see right now, and we're also running across multiple cloud providers. But now we're going to move to another world, of Linux containers. This is where we'll see OpenShift 4 and how you can manage large clusters of applications built from Linux containers across the hybrid cloud. We're going to see how software operators fundamentally empower human operators, and especially make Ops and Dev work more efficiently and effectively together than ever before. Right, we have two folks on the stage right now; they represent Ops and Dev, and we're going to see how they run an application together. Okay, so let me introduce you to Dan. Dan is totally representing all our Ops folks in the audience here today, and he's my Ops comfort person, so let's just call him Mr. Ops. So, Dan? >> Thanks, Burr. With OpenShift 4, we have a much easier time setting up and maintaining our clusters than we had before. In large part, that's because OpenShift 4 has extended management of the cluster down to the infrastructure, the machines underneath.
When you take a look at the OpenShift console, you can now see the machines that make up the cluster, where a machine represents the infrastructure underneath the Kubernetes node. OpenShift 4 now handles provisioning and deprovisioning of those machines. From there, you can dig into an OpenShift node, see how it's configured, and monitor how it's behaving. >> I'm curious, though: does this work on bare metal infrastructure as well as virtualized infrastructure? >> Yeah, that's right, Burr. Bare metal nodes, virtual machines: OpenShift 4 can now manage it all. Something else we've found extremely useful about OpenShift 4 is that it now has the ability to update itself. We can see this cluster has an update available, and at the press of a button, we can upgrade. Upgrades are responsible for updating the entire platform, including the nodes, the control plane, and even the operating system, RHEL CoreOS. All of this is possible because the infrastructure components and their configuration are now controlled by a technology called operators. These software operators are responsible for aligning the cluster to a desired state, and all of this makes operational management of an OpenShift cluster much simpler than ever before. >> I love the fact that it's all in one console now. You can see the full stack, all the way down to the bare metal, right there in that one console. Fantastic. So I want to switch gears for a moment, though, and now let's talk to Dev, right? So Jessica here represents all our developers in the room. She manages a large team of developers here at Red Hat, but more importantly, she represents our vice presidents of development, who have large teams that they have to worry about on a regular basis. So, Jessica, what can you show us? >> Well, Burr, my team has hundreds of developers, and we're constantly under pressure to deliver value to our business.
And frankly, we can't really wait for Dan and his Ops team to provision the infrastructure and the services that we need to do our job. So we've chosen OpenShift as our platform to run our applications on. But until recently, we really struggled to find a reliable source of Kubernetes technologies that have the operational characteristics that Dan's going to actually let us install into the cluster. Now, with OperatorHub, we're really seeing that ecosystem be unlocked, and the technology is there: the things that my team needs, like databases and message queues, tracing and monitoring. And these operators are actually responsible for complex applications, like Prometheus here. They're written in a variety of languages, including Ansible. >> That is awesome. So I do see a number of options there already, and Prometheus is a great example. But how do you know that one of these operators really is mature enough and robust enough for Dan and the Ops side of the house? >> Well, Burr, here we have the operator maturity model, and this is going to tell me and my team whether this particular operator is going to do a basic install, whether it's going to upgrade that application over time through different versions, or whether it goes all the way out to full auto-pilot, where it's automatically scaling and tuning the application based on the current environment. >> That's very cool. So, coming over to the OpenShift console, we can actually see that Dan has made the SQL Server operator available to me and my team. That's the database that we're using, SQL Server. >> That's a great example. So SQL Server is running here in the cluster? But here's a great question for a developer: what if I want to create a new SQL Server instance? >> Sure, Burr, it's as easy as provisioning any other service from the developer catalog.
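Stepping back from the demo for a moment: the behavior Dan described earlier, software operators continually aligning the cluster to a desired state, comes down to a reconcile loop. The sketch below is a deliberate simplification in plain Python, not the real Operator SDK API; the resource names are invented for illustration.

```python
# A minimal sketch of the desired-state reconciliation idea behind
# software operators. All names here are illustrative, not the real
# Operator SDK API.

def reconcile(desired, actual):
    """Return the apply/delete actions needed to make `actual`
    match `desired`. Real operators run this continuously, reacting
    to any drift in the cluster."""
    actions = []
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(("apply", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return actions

# Desired state declared by a (hypothetical) SqlServer custom resource:
desired = {
    "sqlserver-statefulset": {"replicas": 3, "version": "2.2"},
    "sqlserver-service": {"port": 1433},
}
# What the cluster currently has:
actual = {
    "sqlserver-statefulset": {"replicas": 3, "version": "2.1"},
    "orphaned-pod": {},
}

for action in reconcile(desired, actual):
    print(action)
```

Because the loop compares whole states rather than reacting to single events, it converges even if an event is missed, which is the property that makes operators robust at scale.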
We come in, and I can type in SQL Server, and what this is actually creating is a native resource called SqlServer. You can think of that like a promise that a SQL Server will get created. The operator is going to see that resource, install the application, and then manage it over its life cycle. >> Cool. And from this installed operators view, I can see the operators running in my project and which resources they're managing. Okay, but I'm kind of missing something here. I see this custom resource here, the SqlServer, but where are the Kubernetes resources, like pods? >> Yeah, I think it's cool that we get this native resource now called SqlServer. But if I need to, I can still come in and see the native Kubernetes resources, like the StatefulSet and Service, here. >> Okay, that is fantastic. Now, we did say earlier on, though, that like many of our customers in the audience right now, you have a large team of developers you've got to handle. You've got to have more than one SQL Server, right? >> We do, one for every team as we're developing, and we use a lot of other technologies running on OpenShift as well, including Tomcat and our Jenkins pipelines and our Node.js app that's going to actually talk to that SQL Server database. >> Okay, so at this point we can provision some of these? >> Yes. And since all of this is self-service for me and my teams, I'm actually going to go and create one of all of those things I just said, in all of our projects, right now, if you just give me a minute. >> Okay. Well, right. So basically, you're going to knock out Node.js, Jenkins, SQL Server... all right, now that's like hundreds of bits of application-level infrastructure, live, right now. So, Dan, are you not terrified? >> Well, I guess I should have done a little bit better job of managing Jessica's quota, and historically,
I might have had some conflict here, because creating all these new applications would mean my team now had a massive backlog of tickets to work on. But now, because of software operators, my human operators are able to run our infrastructure at scale. So, since I'm logged into the cluster here as the cluster admin, I get this view of pods across all projects, and so I get an idea of what's happening across the entire cluster. I can see now we have four hundred ninety-four pods already running, and there's a few more still starting up. And if I scroll through the list, we can see the different workloads Jessica just mentioned: Tomcats, and Node.js's, and Jenkinses, and SQL Servers down here too. >> I see it's still creating, and you have, like, close to five hundred pods running there. >> So, yeah, let me filter the list down by SQL Server, so we can just see those. >> Okay. But aren't you going to run into cluster capacity at some point? >> Actually, yeah, we definitely have a limited capacity in this cluster. Luckily, though, we already set up autoscalers, and so, because the additional workload was launching, we see now those autoscalers have kicked in, and some new machines are being created that don't yet have nodes on them, because they're still starting up. And there's another good view of this as well: machine sets. We have one machine set per availability zone, and you can see that each one is now scaling from ten to twelve machines. And the way those autoscalers work is, for each availability zone, if capacity is needed, they will add additional machines to that availability zone, and then later, if that capacity is no longer needed, they will automatically take those machines away. >> That is incredible. So right now, we're autoscaling across multiple availability zones based on load. Okay, so it looks like capacity planning and automation are fully handled at this point.
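The per-zone machine-set behavior Dan describes, each availability zone adding machines when capacity is needed and shedding them when it is not, can be sketched roughly as below. The pods-per-machine capacity, zone names, and bounds are invented for illustration; OpenShift's actual autoscaler logic is more involved.

```python
# Rough sketch of per-availability-zone machine-set scaling, as in the
# demo where each zone scaled from 10 to 12 machines under load.
# The capacity figure, bounds, and zone names are illustrative only.

import math

PODS_PER_MACHINE = 50  # assumed capacity of one machine

def scale_machine_set(current_machines, pending_pods,
                      min_machines=10, max_machines=15):
    """Return the machine count a zone's machine set should scale to,
    clamped between the set's configured minimum and maximum."""
    needed = math.ceil(pending_pods / PODS_PER_MACHINE)
    return max(min_machines, min(needed, max_machines))

# One machine set per availability zone, each scaled independently:
zones = {"zone-a": 120, "zone-b": 580, "zone-c": 600}
for zone, pods in zones.items():
    print(zone, "->", scale_machine_set(10, pods))
```

The key design point mirrored here is that each zone scales on its own, so losing one zone never starves the others of capacity.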
But I do have another question. You're logged in as the cluster admin right now in the console. Can you show us your view of software operators? >> Actually, there are a couple of unique views here for operators for cluster admins. The first of those is OperatorHub. This is where a cluster admin gets the ability to curate which operators are available to users of the cluster. And obviously, we already have the SQL Server operator installed, which we've been using. The other unique view is operator management. This gives a cluster admin the ability to maintain the operators they've already installed. And so if we dig in and see the SQL Server operator, we'll see we have it set up for manual approval. What that means is, if a new update comes in for SQL Server, then a cluster admin has the ability to approve or disapprove that update before it installs into the cluster. And actually, there is an upgrade that's available. I should probably wait to install it, though; we're in the middle of scaling out this cluster, and I really don't want to disturb Jessica's application workflow. >> Yeah, so, actually, Dan, it's fine. My app is already up; it's running. Let me show it to you over here. So this is our products application that's talking to that SQL Server instance. And for debugging purposes, we can see which version of SQL Server we're currently talking to: it's 2.2 right now. And then which pod, since in a cluster there's more than one SQL Server pod we could be connected to. >> Okay, I can see right there on the banner, 2.2; that's the version we have right now. But, you know, this is kind of the point of software operators. So, you know, everyone in this room wants to see you hit that upgrade button. Let's do it, live, here on stage. Right then? >> All
So whenever you updated operator, it's just like any other resource on communities. And so the first thing that happens is the operator pot itself gets updated so we actually see a new version of the operator is currently being created now, and what's that gets created, the overseer will be terminated. And that point, the new, softer operator will notice. It's now responsible for managing lots of existing Siegel servers already in the environment. And so it's then going Teo update each of those sickle servers to match to the new version of the single server operator and so we could see it's running. And so if we switch now to the all projects view and we filter that list down by sequel server, then we should be able to see us. So lots of these sickle servers are now being created and the old ones are being terminated. So is the rolling update across the cluster? Exactly a So the secret server operator Deploy single server and an H A configuration. And it's on ly updates a single instance of secret server at a time, which means single server always left in nature configuration, and Jessica doesn't really have to worry about downtime with their applications. >> Yeah, that's awesome dance. So glad the team doesn't have to worry about >> that anymore and just got I think enough of these might have run by Now, if you try your app again might be updated. >> Let's see Jessica's application up here. All right. On laptop three. >> Here we go. >> Fantastic. And yet look, we're We're into two before we're onto three. Now we're on to victory. Excellent on. >> You know, I actually works so well. I don't even see a reason for us to leave this on manual approval. So I'm going to switch this automatic approval. And then in the future, if a new single server comes in, then we don't have to do anything, and it'll be all automatically updated on the cluster. >> That is absolutely fantastic. And so I was glad you guys got a chance to see that rolling update across the cluster. 
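The sequence Dan and Jessica just walked through, an update held for manual approval and then rolled out one SQL Server instance at a time so the deployment stays highly available, can be sketched as below. This is an illustrative toy, not the real Operator Lifecycle Manager API; all class and field names are invented.

```python
# Illustrative sketch of the demo's update flow: a new operator version
# waits for manual approval, then rolls out one instance at a time so
# the SQL Server deployment always stays highly available. Not the
# real Operator Lifecycle Manager API; all names are invented.

def rolling_update(instances, new_version):
    """Upgrade one instance at a time; record who stayed up each step."""
    steps = []
    for inst in instances:
        inst["version"] = new_version          # take down, upgrade, restart
        still_up = [x["name"] for x in instances if x is not inst]
        steps.append(still_up)
    return steps

class OperatorChannel:
    def __init__(self, strategy="Manual"):
        self.strategy = strategy               # "Manual" or "Automatic"
        self.pending = None

    def offer(self, version, instances):
        """A new version arrives; apply now or hold for an admin."""
        if self.strategy == "Automatic":
            return rolling_update(instances, version)
        self.pending = version
        return []

    def approve(self, instances):
        """Admin approves the held update, triggering the rollout."""
        version, self.pending = self.pending, None
        return rolling_update(instances, version)

cluster = [{"name": f"sql-{i}", "version": "2.2"} for i in range(3)]
ch = OperatorChannel("Manual")
assert ch.offer("2.3", cluster) == []                 # held for approval
assert all(p["version"] == "2.2" for p in cluster)
steps = ch.approve(cluster)                           # admin hits the button
assert all(p["version"] == "2.3" for p in cluster)
assert all(len(up) == 2 for up in steps)              # two replicas always up
```

Flipping the strategy to "Automatic", as Dan does at the end, simply removes the human approval step while keeping the same one-at-a-time rollout.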
That is so cool: the SQL Server database being automated and fully updated. That is fantastic. All right, so I can see how a software operator enables you to manage hundreds, if not thousands, of applications. I know a lot of folks are interested in the backing infrastructure. Could you give us an example of the infrastructure behind this console? >> Yeah, absolutely. So we all know that OpenShift is designed to run in lots of different environments, but our teams think that Azure Red Hat OpenShift provides one of the best experiences, by deeply integrating the OpenShift resources into the Azure console. It's even integrated into the Azure command-line tool and its OpenShift commands. And, as was announced yesterday, it's now available for everyone to try out. And there's actually one more thing we wanted to show everyone related to OpenShift 4, which is that we now have multi-cluster management. This gives you the ability to keep track of all your OpenShift environments, regardless of where they're running, and you can create new clusters from here as well. And I'll dig into the Azure cluster that we were just taking a look at. >> Okay, but is this user interface something I have to install on one of my existing clusters? >> No, actually, this is a hosted service that's provided by Red Hat as part of cloud.redhat.com, and so all you have to do is log in with your Red Hat credentials to get access. >> That is incredible. So one console, one user experience, to see across the entire hybrid cloud. We saw it earlier with RHEL and Red Hat Insights, and now we see it for multi-cluster management of OpenShift. So you can fundamentally see now that software operators do finally change the game when it comes to making human operators vastly more productive and, more importantly, making Dev and Ops work more efficiently together than ever before.
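The multi-cluster inventory shown at cloud.redhat.com is, at its core, an aggregation of per-cluster status into one view. A toy version of that aggregation (cluster names, providers, and fields are invented; the real hosted service talks to each cluster's API):

```python
# Toy multi-cluster inventory, echoing the cloud.redhat.com view that
# lists every OpenShift cluster regardless of provider. Names and
# fields are invented for illustration.

clusters = [
    {"name": "prod-azure", "provider": "azure", "healthy": True},
    {"name": "prod-aws", "provider": "aws", "healthy": True},
    {"name": "stage-baremetal", "provider": "metal", "healthy": False},
]

def inventory_summary(clusters):
    """Group cluster names by provider and flag unhealthy ones."""
    by_provider, unhealthy = {}, []
    for c in clusters:
        by_provider.setdefault(c["provider"], []).append(c["name"])
        if not c["healthy"]:
            unhealthy.append(c["name"])
    return by_provider, unhealthy

by_provider, unhealthy = inventory_summary(clusters)
assert unhealthy == ["stage-baremetal"]
assert sorted(by_provider) == ["aws", "azure", "metal"]
```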
So we saw the rich ecosystem of those software operators. We can manage them across the hybrid cloud with any OpenShift instance. And, more importantly, I want to thank Dan and Jessica for helping us with this demonstration. Okay, fantastic stuff, guys. Thank you so much. Let's get Paul back out here. >> Once again, thanks so much to Burr and his team, Jessica and Dan. So you've just seen how OpenShift operators can help you manage hundreds, even thousands, of applications: install, upgrade, remove nodes, control everything about your application environment, virtual, physical, all the way out to the cloud, making things happen when the business demands it, even at scale, because that's where it's going to get to. Our next guest has lots of experience with demand at scale, and they're using open source container management to do it. They're building a successful cloud-first platform, and they're the 2019 Innovation Award winner. Please welcome 2019 Innovation Award winner, Kohl's senior vice president of technology, Rich Hodak.
So So you Obviously, Obviously you think open source is the way to do cloud computing. So way absolutely agree with you on that point. So So what? What is it that's convinced you even more along? Yeah, So I think first and foremost wait, do we have a lot of traditional IAS fees? But we found that the open source partners actually are outpacing them with innovation. So I think that's where it starts for us. Um, secondly, we think there's maybe some financial upside to going more open source. We think we can maybe take some cost out unwind from these big fellas were in and thirdly, a CZ. We go to universities. We started hearing. Is we interviewed? Hey, what is Cole's doing with open source and way? Wanted to use that as a lever to help recruit talent. So I'm kind of excited, you know, we partner with Red Hat on open shift in in Rail and Gloucester and active M Q and answerable and lots of things. But we've also now launched our first open source projects. So it's really great to see this journey. We've been on. That's awesome, Rich. So you're in. You're in a high touch beta with with open shift for So what? What features and components or capabilities are you most excited about and looking forward to what? The launch and you know, and what? You know what? What are the something maybe some new goals that you might be able to accomplish with with the new features. And yeah, So I will tell you we're off to a great start with open shift. We've been on the platform for over a year now. We want an innovation award. We have this great team of engineers out here that have done some outstanding work. But certainly there's room to continue to mature that platform. It calls, and we're excited about open shift, for I think there's probably three things that were really looking forward to. One is we're looking forward to, ah, better upgrade process. And I think we saw, you know, some of that in the last demo. So upgrades have been kind of painful up until now. 
So we think that will help us. Number two, a lot of the workloads we run on OpenShift today are the stateless apps, right? And we're really looking forward to moving more of our stateful apps onto the platform. And then thirdly, I think that we've done a great job of automating a lot of the day-one stuff, you know, the provisioning of things. There's great opportunity out there to do more automation for day-two things: to integrate more with our messaging systems and our database systems and so forth. So we're excited to get on board with version 4. >> Well, you know, I hope we can help you get to those next goals, and we're going to continue to do that. >> Thank you. >> Thank you so much, Rich. You know, all the way from RHEL to OpenShift, it's really exciting for us, frankly, to see our products helping you solve real-world problems, which is really why we do this, and getting to both of our goals. So thank you, thank you very much, and thanks for your support; we really appreciate it. It has all been amazing so far, and we're not done. A critical part of being successful in the hybrid cloud is being successful in your data center, with your own infrastructure. We've been helping our customers do that in these environments for almost twenty years now; we've been running the most complex workloads in the world. But, you know, while the public cloud has opened up tremendous possibilities, it also brings in another layer of infrastructure complexity. So what's our next goal? Extend your data center all the way to the edge, while being as effective as you have been over the last twenty years, when it was all at your own fingertips. First, from a practical sense, enterprises are going to have to have their own data centers, in their own environments, for a very long time.
But there are advantages to being able to manage your own infrastructure that extend even beyond the public cloud, all the way out to the edge. In fact, we talked about that very early on: how technology advances in compute, networking, and storage are changing the physical boundaries of the data center every single day. The need to process data at the source is becoming more and more critical, and new use cases are coming up every day. Self-driving cars need to make decisions on the fly, in the car. Factory processes that are using AI need to adapt in real time. The factory floor has become the new edge of the data center, working with things like video analysis of a car's paint job as it comes off the line, where a massive amount of data is only needed for seconds in order to make critical decisions in real time. If we had to wait for the video to go up to the cloud and back, it would be too late; the damage would have already been done. The enterprise is being stretched to be able to process on site, whether it's in a car, a factory, a store, or an ATM, usually involving massive amounts of data that just can't easily be moved. Just like these use cases couldn't be solved in private cloud alone, because of things like latency on data movement to address real-time requirements, they also can't be solved in public cloud alone. This is why open hybrid is really the model that's needed, and the only model going forward. So how do you address this class of workload, one that requires all of the above, running at the edge, with the latest technology, all at scale? Let me give you a bit of a preview of what we're working on. We are taking our open hybrid cloud technologies to the edge, integrated with our OEM hardware partners. This is a preview of a solution that will contain Red Hat OpenShift, Ceph storage, and KVM virtualization, with Red Hat Enterprise Linux at the core, all running on preconfigured hardware.
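The paint-inspection example carries an implicit calculation: the decision deadline is shorter than a cloud round trip, so the processing must happen at the edge. A toy version of that latency-budget check follows; every number in it is invented purely to illustrate the argument.

```python
# Toy latency-budget check behind the "process at the edge" argument:
# if shipping the data to the cloud and back exceeds the decision
# deadline, the workload must run on site. All figures are invented.

def must_run_at_edge(deadline_ms, round_trip_ms, inference_ms):
    """True if a remote round trip would blow the real-time deadline."""
    return round_trip_ms + inference_ms > deadline_ms

# Say a car passes the camera with a 100 ms window to flag a paint
# defect, and a cloud round trip alone costs 150 ms:
assert must_run_at_edge(deadline_ms=100, round_trip_ms=150, inference_ms=20)
# With a 5 ms hop to an on-site cluster, the same deadline is easy:
assert not must_run_at_edge(deadline_ms=100, round_trip_ms=5, inference_ms=20)
```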
The first hardware out of the gate will be with our long-time OEM partner, Dell Technologies. So let's bring back Burr and the team to see what's right around the corner. >> Please welcome back to the stage Red Hat global director of developer experience, Burr Sutter, with Karima Sharma. >> Okay. We just showed how operators have redefined the capabilities and usability of the open hybrid cloud, and now we're going to show you a few more things, so just be ready for that. I know many of our customers in this audience right now, as well as the customers who aren't even here today, are running tens of thousands of applications on OpenShift clusters. We know that's happening right now, but we also know that you're not actually in the business of running Kubernetes clusters. You're in the business of oil and gas, you're in the business of retail, you're in the business of transportation, you're in some other business, and you don't really want to manage those things at all. We also know, though, that you have low-latency requirements, like Paul was talking about, and you also have data gravity concerns, where you need to keep that data on your premises. So what you're about to see right now in this demonstration is where we've taken OpenShift 4 and made a bare metal cluster right here on this stage. This is a fully automated platform. There is no underlying hypervisor below this platform: it's OpenShift running on bare metal, and this is your Kubernetes-native infrastructure, where we've brought together VMs, containers, networking, and storage. With me right now is Karima Sharma. She's one of our engineering leaders, responsible for infrastructure technologies. Please welcome to the stage, Karima. >> Thank you. My pleasure to be here at Red Hat Summit. So let's start at cloud.
redhat.com. Here we can see the cluster Dan and Jessica were working on just a few moments ago. From here, we have a bird's-eye view of all of our OpenShift clusters across the hybrid cloud, from multiple cloud providers to on premises, and notice the bare metal cluster here. Well, that's the one that my team built right here on this stage. So let's go ahead and open the admin console for that cluster. Now, in this demo, we'll take a look at three things: first, a multi-cluster inventory for the open hybrid cloud at cloud.redhat.com; second, OpenShift Container Storage, providing converged storage for virtual machines and containers, with the same functionality for cloud, virtual, and bare metal; and third, everything we see here is Kubernetes-native, so by plugging directly into Kubernetes orchestration, we get common storage, networking, and monitoring facilities. Now, last year, we saw how container-native virtualization and KubeVirt allow you to run virtual machines on Kubernetes and OpenShift, allowing for a single converged platform to manage both containers and virtual machines. So here I have this darknet project. Now, from last year, we had a Windows virtual machine running this darknet application, and we had started to modernize and containerize it by moving parts of the application from the Windows VM to Linux containers. So let's take a look at it. Here I have it again. >> Oh, you caught my Windows machine. Earlier on, I was playing this game backstage, so it's just playing a little Solitaire. Sorry about that. >> So we don't really have time for that right now, Burr. But, as I was saying, over here I have Visual Studio. Now, the Windows virtual machine is just another container in OpenShift, and the RDP service for the virtual machine is just another service in OpenShift. OpenShift running both containers and virtual machines together opens a whole new world of possibilities. But why stop there?
Kubernetes-native infrastructure is our vision to redefine the operations of on-premises infrastructure, and this applies to all manner of workloads, using OpenShift on metal running all the way from the data center to the edge. There are two main benefits: one, to help reduce operational costs, and second, to help bring advanced Kubernetes orchestration concepts to your infrastructure. So next, let's take a look at storage. OpenShift Container Storage is software-defined storage, providing the same functionality for both the public and the private clouds. By leveraging the operator framework, OpenShift Container Storage automatically detects the available hardware configuration to utilize the disks in the most optimal way. So when adding my node, you don't have to think about how to balance the storage. Storage is just another service running in OpenShift. >> And I really love this dashboard, quite honestly, because I love seeing all the storage right here. So I'm kind of curious, though, Kareema: what kind of applications would you use with the storage? >> Yeah, so this is persistent storage, to be used by databases, your files, and any data from applications such as Apache Kafka. Now, the Apache Kafka operator uses Kubernetes for scheduling and high availability, and it uses OpenShift Container Storage to store the messages. Now, here our on-premises system is running a Kafka workload, streaming sensor data, and we want to store it and act on it locally, right, in a place where maybe we need low latency, or maybe in a data-lake-like situation where we don't want to send the data to the cloud. Instead, we want to act on it locally, right? Let's look at the Grafana dashboard and see how our system is doing. So with an incoming message rate of about four hundred messages per second, the system seems to be performing well, right?
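(The automatic balancing described here, "you don't have to think about how to balance the storage," can be pictured as a least-utilized placement policy. This is a hypothetical sketch for intuition only, not the actual algorithm OpenShift Container Storage uses.)

```python
# Hypothetical sketch of a least-utilized placement policy, in the spirit of
# "storage is just another service." NOT the real OpenShift Container Storage
# algorithm; disk names and capacities are invented.

def place_volume(disks, size_gb):
    """Pick the disk with the most free capacity that can hold the volume."""
    candidates = [d for d in disks if d["free_gb"] >= size_gb]
    if not candidates:
        raise RuntimeError("no disk has enough free capacity")
    target = max(candidates, key=lambda d: d["free_gb"])
    target["free_gb"] -= size_gb
    return target["name"]

disks = [
    {"name": "node1-ssd", "free_gb": 500},
    {"name": "node2-ssd", "free_gb": 800},
    {"name": "node3-ssd", "free_gb": 300},
]

# Each volume lands on whichever disk currently has the most headroom,
# so utilization stays balanced without the operator thinking about it.
print(place_volume(disks, 400))  # node2-ssd (800 free -> 400)
print(place_volume(disks, 400))  # node1-ssd (500 free -> 100)
```

The design choice illustrated is that placement is a policy inside the storage service, not a decision pushed onto the administrator adding the node.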
I want to emphasize this is a fully integrated system. We're doing the testing and optimizations so that the system can auto-tune itself based on the applications. >> Okay, I love the automated operations. Now, I am curious, because I know other folks in the audience want to know this too: can you tell us more about how this is truly integrated with Kubernetes? Can you give us an example of that? >> Yes. Again, you know, I want to emphasize that everything here is managed purely by Kubernetes on OpenShift, right? So you can really use the latest Kube tools to manage them all. Right, next, let's take a look at how easy it is to use Knative with Azure Functions to script a live reaction to a live migration event. >> Okay, Knative is a great example. If you were actually part of my breakout session yesterday, you saw me demonstrate Knative. And actually, if you want to get hands-on with it tonight, you can come to our guru night at five PM and actually get hands-on with Knative. So I really have enjoyed using Knative myself as a software developer. But I am curious about the Azure Functions component. >> Yeah, so Azure Functions is a functions-as-a-service engine developed by Microsoft, fully open source, and it runs on top of Kubernetes. So it works really well with our on-premises OpenShift here. Right now, I have a simple Azure function that I already have here, and this Azure function, you know, let's see if this will send out a tweet every time we live-migrate my Windows virtual machine. Right, so I have it integrated with OpenShift. Let's move a node to maintenance to see what happens. So >> basically, as that VM moves, we're going to see the event triggered, and the event triggers the function. >> Yeah. An important point I want to make again here: Windows virtual machines are equal citizens inside of OpenShift. We're investing heavily in automation through the use of the operator framework and also providing integration with the hardware.
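(The wiring described here, a live-migration event triggering a function that tweets, boils down to filtering an event stream and reacting. Below is a hypothetical, dependency-free sketch of that reaction logic; the event shapes and reason strings are invented, and the real demo uses Knative eventing with the Azure Functions runtime consuming Kubernetes Events.)

```python
# Hypothetical sketch of event-driven reaction logic, in the spirit of the
# Knative + Azure Functions demo. Event dictionaries are invented for
# illustration; a real integration would consume Kubernetes Events.

def on_event(event, notify):
    """Invoke notify() only for completed VM live-migrations."""
    if event.get("reason") == "LiveMigrationSucceeded":
        notify(f"VM {event['vm']} live-migrated from "
               f"{event['source_node']} to {event['target_node']}")

tweets = []
events = [
    {"reason": "NodeMaintenanceStarted", "node": "node2"},
    {"reason": "LiveMigrationSucceeded", "vm": "winserver2019",
     "source_node": "node2", "target_node": "node3"},
    {"reason": "PodScheduled", "pod": "kafka-0"},
]
for e in events:
    on_event(e, tweets.append)

print(tweets)  # one notification, for the live migration only
```

In the stage demo, `notify` would be the function that posts the tweet; everything else is just subscribing to the event stream the cluster already emits.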
Right, so next, now let's move that node to maintenance. >> But let's be very clear here. I want to make sure you understand one thing, and that is there is no underlying virtualization software here. This is OpenShift running on bare metal, with these bare metal hosts. >> That is absolutely right. The system can automatically discover the bare metal hosts. All right, so here, let's move this node to maintenance. So I start the maintenance now. What will happen at this point is storage will heal itself, and Kubernetes will bring back the same level of service for the Kafka application by launching a pod on another node, and the virtual machine will live-migrate, right, and this will create Kubernetes events. So we can see, you know, the events in the event stream; changes have started to happen. And as a result of this migration, the Knative function will send out a tweet to confirm that Kubernetes-native infrastructure has indeed done the migration for the live VM, right? >> See the events rolling through right there? >> Yeah. All right. And if we go to Twitter? >> All right, we got tweets. Fantastic. >> And here we can see the source node report: migration has succeeded. It's pretty cool stuff, right here. Now, we want to bring you a cloud-like experience, and what this means is we're making operational ease of use a top goal. We're investing heavily in encapsulating management knowledge and working to pre-certify hardware configurations, working with our partners such as Dell and their Ready Node program, so that we can provide you guidance on specific benchmarks for specific workloads on our auto-tuning system. >> All right, well, I know right now you're like me and you want to jump on the stage and check out this bare metal cluster. But you should not, right? Wait until after the keynote, then come on and check it out.
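(Moving a node to maintenance, as shown on stage, amounts to marking it unschedulable and rescheduling everything it was running. A toy sketch of that reconciliation under assumed data shapes; a real cluster does this through the Kubernetes scheduler, with KubeVirt handling the VM live migration.)

```python
# Toy sketch of node maintenance: cordon the node, then move its workloads
# to the remaining schedulable nodes. Cluster layout is invented; in a real
# cluster the scheduler and KubeVirt do this reconciliation.

def drain(node, cluster):
    """Mark a node unschedulable and move its workloads to other nodes."""
    cluster[node]["schedulable"] = False
    evicted = cluster[node]["workloads"]
    cluster[node]["workloads"] = []
    targets = [n for n, s in cluster.items() if s["schedulable"]]
    for i, w in enumerate(evicted):
        # Spread the evicted workloads round-robin across healthy nodes.
        cluster[targets[i % len(targets)]]["workloads"].append(w)

cluster = {
    "node1": {"schedulable": True, "workloads": ["kafka-0"]},
    "node2": {"schedulable": True, "workloads": ["winserver-vm", "kafka-1"]},
    "node3": {"schedulable": True, "workloads": []},
}
drain("node2", cluster)
print(cluster["node2"]["workloads"])   # []
print(sorted(cluster["node1"]["workloads"] + cluster["node3"]["workloads"]))
# ['kafka-0', 'kafka-1', 'winserver-vm']
```

Nothing the maintained node was running is lost; the same level of service comes back on the other nodes, which is exactly the self-healing behavior Kareema demonstrates.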
But also, I want you to go out there and think about visiting our partner Dell and their booth, where they have one of these clusters also. Okay, so this is where VMs, networking, containers, and storage all come together in a Kubernetes-native infrastructure, as you've seen right here on this stage. But Kareema, you have a bit more. >> Yes. So this is literally the cloud coming down from the heavens to us. >> Okay? Right here, right now. >> Right here, right now. So, to close the loop, you can have your cluster connected to cloud.redhat.com for our Insights site reliability engineering services, so that we can proactively provide you with guidance through automated analyses of telemetry and logs and help flag a problem even before you notice you have it, be it software, hardware, performance, or security. And one more thing: I want to congratulate the engineers behind this cool technology. >> Absolutely. There are a lot of engineers here who worked on this cluster and worked on the stack. Absolutely. Thank you. Really awesome stuff. And again, do go check out our partner Dell; they're just out that door, I can see them from here. They have one of these clusters. Get a chance to talk to them about how to run your OpenShift 4 on a bare metal cluster as well. Right, Kareema, thank you so much. That was totally awesome. We're out of time, and we've got to turn this back over to Paul. >> Thank you. >> Okay. Thanks again, Burr and Kareema. Awesome. You know, even with all the exciting capabilities that you're seeing, I want to take a moment to go back to the first platform tenet that we learned with RHEL: that the platform has to be developer friendly. Our next guest knows something about connecting a technology like OpenShift to their developers, and as part of their company-wide transformation, their ability to shift the business helped them take advantage of the innovation. They're our Innovation Award winner this year.
Please, let's welcome Ed to the stage. >> Please welcome 2019 Innovation Award winner, BP Vice President of Digital Transformation, Ed Alford. >> Thanks, Ed. How are you? >> Fantastic. >> Good. So let's get right into it. What are you guys trying to accomplish at BP, and how is the goal really important and mandatory within your organization? >> Sure. For everyone else, we're a global energy business, with operations in over seventy countries. And we've embraced what we call the dual challenge, which is meeting the increasing demand for energy that we have as individuals in the world, but we need to produce the energy with fewer emissions. As part of that, one of our strategic priorities is to modernize the whole group. That means simplifying our processes and enhancing productivity through digital solutions. So we're using cloud-based technologies and, more importantly, open source technologies to create a community across the whole group that collaborates effectively and efficiently and uses our data and expertise to embrace the dual challenge and actually try and help solve that problem. >> That's great. So how do these new ways of working benefit your team, and really the entire organization, maybe even the company as a whole? >> So we've been given the Innovation Award for Digital Conveyor, both in the way it was created and also in what it is delivering. A couple of the guys in the audience, they're on the team. Their teams developed that conveyor using Agile and DevOps. We talk about this stuff a lot, but actually they did it in a truly Agile and DevOps way, and that enabled them to experiment and work in different ways, and it highlighted the skill set that we, as a group, require in order to transform. Using these approaches, we can now move things from ideation to scale in weeks and days sometimes, rather than months.
And I think that if we can take what they've done and use more open source technology, we can take that technology and apply it across the whole group to tackle this dual challenge. And I think it's really cool that we can now use technology, and open source technology, to solve some of these big challenges that we have and actually just preserve the planet in a better way. >> So what's the next step for you guys at BP? >> So moving forward, we are embracing a cloud-first organization. We need to continue to deliver on our strategy, build out the technology across the entire group to address the dual challenge, and continue to make some of these bold changes, and really use our technology, as I said, to address the dual challenge and make the future of our planet a better place for ourselves and our children and our children's children. >> That's a big goal. But thank you so much, Ed. Thanks for your support, and thanks for coming today. >> Thank you very much. Thank you. >> Now comes the part that, frankly, I think is the best part of this presentation. We're going to meet the type of person who makes all of these things a reality. This type of person typically works for one of our customers, or with one of our customers as a partner, to help them meet the kinds of bold goals like you've heard about today and the ones you'll hear about way more in the week. >> I think the thing I like most about it is you feel that reward, just helping people, I mean, and helping people with stuff you enjoy, right, with computers. My dad was the math and science teacher at the local high school, and so in the early eighties, that kind of made him the default computer person. So he was always bringing in computer stuff, and I started at a pretty young age. >> What Jason's been able to do here is more evangelize a lot of the technologies between different teams.
I think a lot of it comes from the training and the certifications that he's got. He's always concerned about their experience, how easy it is for them to get applications written, how easy it is for them to get them up and running at the end of the day. >> We're a loan company, you know. That's why we lean on a company like Red Hat; that's where we get our support from. That's why we decided to go with a product like OpenShift. >> I really, really like the product, so I went down the certification route and the training route to learn more about OpenShift itself. So my daughter's teacher, they were doing a day of coding, and so they asked me if I wanted to come and talk about what I do and then spend the day helping the kids do their coding class. >> The people that we have on our teams, like Jason, are what make us better than our competitors, right? Anybody can buy something off the shelf. It's people like him who are able to take that and mold it into something that then is a great offering for our partners and for customers. >> Please welcome Red Hat Certified Professional of the Year, Jason Hyatt. >> Jason, congratulations. Congratulations. What a big day, huh? What a really big day. You know, it's great. It's great to see such work, you know, that you've done here. But you know what's really great, and shows out in your video: it's really especially rewarding to us, and I'm sure to you as well, to see how skills can open doors for young women like your daughter, who already loves technology. So I'd like to present this to you right now. Congratulations. Good. And I know you're going to bring this passion, I know you bring this, in everything you do. >> Congratulations again. Thanks, Paul. It's been really exciting, and I was really excited to bring my family here to show them the experience. >> It's really great. It's really great to see them all here as well.
Maybe you guys could stand up. So before we leave the stage, you know, I just wanted to ask: what's the most important skill that you'll pass on from all your training to the future generations? >> So I think the most important thing is you have to be a continuous learner. You can't be comfortable with what you already know; you have to really drive to be a continuous learner. And of course, you've got to use Red Hat, right? >> I don't even have to ask you the question. Of course. Right. Of course. That's awesome. And thank you. Thank you for everything that you're doing. So thanks again. Thank you. You know, what makes open source work is passion, and people who apply those considerable talents and that passion, like Jason here, to making it work and to contribute their ideas back. And believe me, it's really an impressive group of people. You know, your family, and especially Berkeley in the video: I hope you know that the Red Hat Certified Professional of the Year is the best of the best, the cream of the crop, and your dad is the best of the best of that. So you should be very, very happy for that. I also can't wait to come back here on this stage ten years from now and present that same award to you, Berkeley. So great. You should be proud. You know, everything you've heard about today is just a small representation of what's ahead of us. We've set and realized some bold goals over the last number of years that have gotten us to where we are today. Just to recap those bold goals: first, build a company based solely on open source software. It seems so logical now, but it had never been done before. Next, building the operating system of the future that's going to run and power the enterprise, making it the standard Linux-based operating system in the enterprise.
And after that, making hybrid cloud the architecture of the future, making hybrid the new data center, all leading to the largest software acquisition in history. Think about it: all around a company with one hundred percent open source DNA, throughout. Despite all the FUD we encountered over those last seventeen years, I have to ask: is there really any question that open source has won? Realizing our bold goals and changing the way software is developed in the commercial world was what we set out to do from the first day Red Hat was born. But we only got to that goal because of you. Many of you contributors, many of you new to open source software and willing to take the risk alongside of us, and many of you partners on that journey, both inside and outside of Red Hat. Going forward with the reach of IBM, Red Hat will accelerate even more. This will bring open source innovation to the next-generation hybrid data center, continuing on our original mission and goal to bring open source technology to every corner of the planet. What I just went through in the last hour, while mind-boggling to many of us in the room who have had a front-row seat to this over the last seventeen-plus years, has only been Red Hat's first step. Think about it: we have brought open source development from a niche player to the dominant development model in software and beyond. Open source is now the cornerstone of the multi-billion-dollar enterprise software world, and even the next-generation hybrid architecture would not be possible without Linux at the core and the open innovation that it feeds to build around it. This is not just a step forward for software. It's a huge leap in the technology world, beyond even what the original pioneers of open source ever could have imagined. We have witnessed open source accomplish in the last seventeen years more than what most people will see in their career.
Or maybe even a lifetime. Open source has forever changed the boundaries of what will be possible in technology in the future. And the one last thing to say, to everybody in this room and beyond, everyone outside: continue the mission. Thanks, have a great summit. It's great to see it.
Day 1 Kickoff | Red Hat Summit 2019
>> Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019. Brought to you by Red Hat. >> And good morning. Welcome to Beantown, Boston, Massachusetts, Stu Miniman's hometown, or at least his town of residence. John Walls here with Stu Miniman on theCUBE at Red Hat Summit. Stu, good to see you here. And a home game. >> Yeah, John, thanks so much. You know, Boston, theCUBE loves Boston. The BCEC is actually where the first CUBE event was, way back in twenty ten. And we wish there were more conferences here in Boston. Gorgeous weather here in the spring, a little chilly at night with the wind coming off the water, but really good. This is the sixth year we've had theCUBE here at Red Hat Summit, and my fifth year at the show. Great energy. And, you know, thirty-four billion reasons why people are spending a lot of time keeping a close eye on this show. >> Let's jump right in: the thirty-four-billion-dollar deal, IBM and Red Hat, got approved by the DOJ here in the States. But there are still some hurdles that they have to get over in order for that to come to fruition, maybe later this year. That's the expectation. But just your thoughts right now about that synergy, about that opportunity that we think is about to happen. >> Yeah, so right, let's get this piece out of the way, because here at the conference we're talking about Red Hat; the acquisition has not completed. So while the CEO of IBM, you know, Ginni Rometty, will be up on stage tonight, along with Jim Whitehurst over at Red Hat, and Satya Nadella, you know, flying in from Seattle, at least two of those three are CUBE alums, so we'll get Ginni on one of these days. But, you know, this is a big acquisition, the largest software acquisition ever, and the third-largest acquisition in tech history. Now, we watched the biggest tech acquisition in history, which was Dell buying EMC, just a couple of years ago.
And this is not the normal "hey, we announced it, and it closed quietly in a few months." So, as you mentioned, the DOJ approved it; there are a few more government agencies, and Europe needs to go through it, and you never know what China might ask to come in here. But really, at the core, if you look at it, IBM and Red Hat have worked together for decades. You know, we wrote a lot about this when the announcement happened. IBM is no stranger to open source; IBM is no stranger to Linux and the areas where Red Hat has been growing and expanding to. You see IBM there, so Kubernetes, you know, super-hot space. If you look, you know, Red Hat is there with their OpenShift platform, which is what Red Hat does for cloud-native development; it has over a thousand customers, and they're adding between one hundred and one hundred fifty a quarter, is what they talk about publicly. We're going to have some of those customers on this week. So, huge area. That multi-cloud, hybrid-cloud world absolutely is where it's at. We did four days of broadcast from IBM Think earlier this year in San Francisco, and, you know, once again, Jim Whitehurst and Ginni were on stage together. They're talking about how they've been working together for a long time, and just, you know, some things will change. But from IBM's standpoint, they said, look, you know, the day after this closes, Red Hat doesn't go away. Red Hat had just announced new branding, and everybody's like, well, why are they changing their branding, you know, when IBM is taking over? And the answer was, look, Red Hat's going to stay as a standalone entity. IBM says they're not going to have a single layoff, not even HR consolidation, at least in the beginning. We understand, you know, it takes time to work out some of these pieces, but there are areas where they will work together. I look at it, John, like the core: what is the biggest piece of IBM's business? It's services.
That army of services, both from IBM and all of their SI partners and everybody they work with, can really supercharge and help scale some of the environments that Red Hat's doing. So really interesting; expect them to talk a little bit about it. Red Hat is way more transparent than your average company. They had an analyst event like a week or two after it happened, and I was really surprised how much they would tell us and that we could talk about publicly. As I said, just 'cause I've seen so many acquisitions happen, including some, you know, mega ones in the past, and we know how little you usually talk about until it's done and it's signed and, you know, the bankers and lawyers have been paid all their fees. >> Let me ask you, you raise an interesting point. Um, you know, there are some different approaches, obviously, between IBM and Red Hat, just in terms of their institutional legacies, in terms of processes. Red Hat, you mentioned, is a very transparent organization. Open source, right? So we're all about the rebrand. They come out, you know, they drop Shadowman, they got the hat. What's that cultural mix going to be like? Can they truly run independently? They're a big piece. So if you're IBM, can you let that run on its own? >> So, John, that is the question most of us have. You know, I've worked with Red Hat for coming up on twenty years now. You know, remember when Linux was just this mess of kernel.org? So much changed when Red Hat came and gave, you know, adult supervision to help move that forward. The thing I wrote about is what Red Hat is really, really good at. If you look at the core of what they do, it's managing that chaos and change in the industry. Look at how many changes happen to Linux, you know, every day, week, month, and they package all that together and they test all that. Same thing in Kubernetes, the same thing in so many different spaces where that open source world is just frenetic and changing.
So they're really geared for today's industry. You talk about what's the only constant in our industry, John: it is change. IBM, on the other hand, is, you know, over one hundred years old, and tried and true, you know, Big Blue. IBM is, you know, the big tanker; it's not like they turn on a dime with, you know, a rapid pace of change. You think of IBM, you think of innovation. You think of, you know, trust. You think of all the innovations that have come out over the century-plus they've been there. And absolutely there is a little bit of an impedance mismatch there, and we'll see. If IBM can truly let them do their own thing, and not kind of merge groups and take over, where the inertia of a larger group can slow things down, I hope it will be successful. But there are definitely concerns, and time will tell. We'll see. But, you know, on the Linux front, you know, they just announced this morning RHEL 8, Red Hat Enterprise Linux 8, just got announced, and definitely something we'll spend a lot of time on. >> So let's just jump into RHEL a little bit. Again, we're gonna hear a little bit later on; we have several folks coming on board to talk about the availability. Now, what do you see from the outside looking at that? What is it going to allow you, or us, to do that seven didn't? Where did they improve? Is it on the automation side? Is it being maybe more attentive to a hybrid environment? Or just, what is it about RHEL that makes it special? >> Yes. So, you know, first of all, these things take a while, and the nice thing about being open source is we've had transparency. If you wanted to know what was going to be in RHEL 8, you just look in the kernel, and it's all out there. They've been working on this since twenty thirteen; RHEL 7 came out back in June of twenty fourteen, so this has been a number of years in the mix. You know, security.
The new crypto policy, for example, is a big piece that's in there. The bullets that I got in the pre-briefing on it were, you know, faster and easier deployment, faster onboarding for non-Linux users, and, you know, seamless, non-disruptive migration from earlier versions of RHEL. So that's one of the things they really want to focus on: it needs to be predictable, and I need to be able to move from one version to the other. If you look at the cloud world, you know, you don't go asking customers, hey, what version of Azure or AWS are you running? You're running on the latest and greatest. But if you look at traditional shrink-wrapped software, it was, well, what version are you running? Well, I'm running N minus two. And why is that? Because I have to get it, I have to test it out, and then I, you know, find a time that I'm gonna roll that out and work it into my environment. So there is stability and an understanding of the release cycle. My understanding is that they're going to do major releases every three years and minor releases every six months, so that cadence is a little bit more like the cloud. And as I said, getting from one version of RHEL to the next should be easier and more non-disruptive. Ah, a lot of people are going to want managed offerings where they don't really think about this: I have the latest version because that has not just the latest features but the latest security settings, which, of course, is a major piece of my infrastructure today, to make sure that if there is some vulnerability released, I can't wait, you know, six or nine months to bake that in there. The Linux community has always done a good job of getting fixes in, but how fast can I roll that out into my environment? >> Something, I would assume, that's a major factor in any consideration right now on the security front, because every day we hear about one more problem, and these aren't just small little issues.
These could be multi-billion-dollar problems. But in terms of making products available today, how much more important is it? How has that security emphasis shifted? If you could put a percentage on it: it used to be, you know, X, and now it's X plus. I mean, what kind of considerations are being given? >> You know, what I'd say is it used to be that security got great lip service. It was usually top of mind but often towards the bottom of the budget. You'd talk to administrators and say, oh, hey, where's your last security initiative? And it's like, I've had that thing sitting on my desk for the last six months and I haven't had a chance to roll it out; I will get to it. But again, if you go to that cloud operating model, if you talk about, you know, the DevOps movement, it is: I need to bake security into the process. If I'm doing CI/CD, it's not, I do something and then think about security afterwards. Security needs to be built in from the ground level. As, you know, I've heard people in the industry say, security is everyone's responsibility, and security must be baked in everywhere. So from the application all the way down to the chipset, we need to be thinking about security all along. Mind you, it is a board-level discussion. Any user you talk to, you know, you don't have to ask, hey, where does security sit in your priorities? You know, it's up there towards the top, if not the very top, because that's the thing that could put us out of business or, you know, definitely ruin careers if it doesn't go right. >> So there are probably a couple of platforms, if you will, or pillars, I think you like to call them, that you're looking forward to learning more about this week in terms of Red Hat's work, one of those being hybrid cloud infrastructure, and we'll get to the other two in a little bit.
But just your thoughts about how they're addressing that with the products that they offer, the services they offer, and where they're going with that. >> Yeah, so look, everything for Red Hat starts with RHEL. Everything is built on Linux, and that's a good thing, because Linux in the enterprise is everywhere. I was at Microsoft Ignite for the first time last year, and when you hear them, Microsoft, talking about how Linux is the majority of the environment, more than fifty percent of the environments are running Linux; go to AWS, same thing. In all the cloud deployments, Linux is the preferred substrate underneath, and RHEL is doing very well to live in all those environments. So what we look at is, you know, some people say, is this a Linux show? It's like, well, at the core, Linux is the piece of it, and RHEL 8 is the latest and greatest instantiation. But everywhere you go, there's going to be Linux there. If you're doing containerization, if you're building on top of it with the new cloud-native models, it's there. And if you talk about how I get from my data center to a multi-cloud environment, it's building things like Kubernetes, which Red Hat, of course, uses in OpenShift, and, you know, those tie to AWS and Azure and, you know, Google; they're all there. So we mentioned Satya Nadella is on stage tonight. At Microsoft Build yesterday there was an announcement of this thing called KEDA, K-E-D-A, which takes, like, Azure Functions and ties in with OpenShift. I spent a little time squinting at it, trying to tease it apart; we've got some guests this week that'll hopefully give some clarity. But the answer is, people today have multiple clouds and they have a lot of different ways they want to do things, and Red Hat is going to make sure that they help bridge the gap and simplify those environments across the board.
Two years ago, when we were at the show, there was a big announcement about how OpenShift integrates with AWS, so that if I'm using AWS but I want to have things in my environment, I can still leverage some of those services. That was something that Red Hat announced, and I was, you know, quite impressed at the time. And, you know, just last week, being at the Dell show, it's VMware that is the Dell strategy for how they get to, you know, AWS, GCP, and Azure. Red Hat does that themselves. They're a software company; they live in all these cloud worlds, and therefore OpenShift will help you extend from your data center through all of those public cloud environments. And, you know, yeah, so it's fascinating. >> You've talked about Linux too. We're going to hear a little bit later on about a fascinating global economic study that Red Hat commissioned with IDC that talks about this ten-trillion-dollar impact of Linux around the globe. I'd like to dive into that a little bit later on. >> Yeah, well, it's interesting. You know, the line I use is, you say, oh, well, how much impact has Linux had? You know, Red Hat's now a three-billion-dollar company. That's good. But I was like, okay, let's just take Google, you know, no slouch of a company. Google, underneath, it's not Red Hat Linux, but Linux is the foundation. I don't really think that Google could have become the global search and advertising powerhouse they are if it wasn't for Linux to help them build that environment. As we always talk about with these technologies, you talk about Linux, you talk about, you know, Kubernetes: there are companies that will monetize it, but the real value is the business models and creation by, you know, all the enterprises, the service providers, and the hyperscalers that those technologies help enable.
And that's where open source really shines: you know, the order-of-magnitude network effect that open source solutions have. So you say, okay, three billion dollars, and is that really ten trillion dollars? It doesn't faze me, doesn't surprise me at all. I'm not trying to trivialize it, but, you know, I've been watching Linux for twenty years, and I've seen the ripples of that effect, and if you dig down underneath, you're often finding it inside. >> I mentioned the pillars you were talking about, cloud-native development being another. But automation, let's just hit on that real quick before we head off, and just again, how that is being, I guess, highlighted, or how that's a central focus at the show and in RHEL, and what automation, how that's playing in there, I guess the new efficiencies they're trying to squeeze out. >> Yes. So what we always look for at shows, you know, probably for the last year, is how they're getting beyond the buzzwords. AI and automation: an area that we've really enjoyed digging into is, like, robotic process automation. How do I take something that was manual, and maybe it was inefficient, not great, and make it perfectly efficient and use software robots to do that? So where are the places where I know that, with the amount of change and the scale and the growth that we have, I couldn't just put somebody at a keyboard, you know, and have them typing, or even at a dashboard to monitor and keep up with things? If I don't have the automation and intelligence in the system to manage things, I can't reach the scale and the growth that I need to. So where are, you know, real solutions that are helping customers get over a little bit of the fear of, oh my gosh, I'm losing a job?
Or will this work? Will this keep my business running? And, oh my gosh, this will actually enable me to grow, to work on that security issue if I need to, rather than some of the other pieces, and help really allow IT agility to meet the requirements of what the business requires to help me move forward. So those are some of the things we kind of look for across the shows. So, you know, yeah, how much do we get, you know, buzzword bingo at the show, versus how much do we hear, you know, real customers with real solutions digging in, with new technologies that a couple of years ago would have had us saying, wow, that's magic? >> But you say, oh my gosh. Yeah. All right, back with more. You're watching theCUBE at the Red Hat Summit. We're in Boston, Massachusetts. We'll be back with more coverage right after this.
SUMMARY :
It's theCUBE covering. Good to see you here. And, you know, thirty-four billion reasons why people are spending a lot of time. But there are still some hurdles that they have to get over in order for that to come to fruition. They said, look, you know, the day after this closes, you know, Red Hat doesn't go away. They come out, you know, they drop Shadowman, they got the hat. So much changed when Red Hat came and gave, you know, adult supervision to help move that forward. What is it going to allow you or us to do? You know, you don't go asking customers, hey, what version of Azure or AWS are you running? You know, X, and now it's X plus. You know, definitely ruin careers. I think you like to call them that. So what we look at is, you know, some people say, that Red Hat commissioned with IDC, that talks about this ten. And that's where open source really shines is, you know, the order-of-magnitude network effect. I mentioned pillars that you were talking about, cloud-native development being another. Real solutions that are helping customers, you know, get over a little bit of the fear of, oh. But you say, oh my gosh.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Seattle | LOCATION | 0.99+ |
John Walls | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Boston | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
June | DATE | 0.99+ |
ten trillion dollars | QUANTITY | 0.99+ |
Red Hat Commission | ORGANIZATION | 0.99+ |
fifth year | QUANTITY | 0.99+ |
twenty years | QUANTITY | 0.99+ |
Linux | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Ginni | PERSON | 0.99+ |
Red | ORGANIZATION | 0.99+ |
Red hats | ORGANIZATION | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
one hundred | QUANTITY | 0.99+ |
twenty years | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
three billion dollars | QUANTITY | 0.99+ |
Jim Whitehurst | PERSON | 0.99+ |
Linux | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
sixth year | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
nine months | QUANTITY | 0.99+ |
more than fifty percent | QUANTITY | 0.98+ |
Two years ago | DATE | 0.98+ |
thirty four billion dollar | QUANTITY | 0.98+ |
EMC | ORGANIZATION | 0.98+ |
tonight | DATE | 0.98+ |
four days | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
three billion dollar | QUANTITY | 0.98+ |
minus two | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
three | QUANTITY | 0.98+ |
Satya Nadella | PERSON | 0.98+ |
Beantown, Boston, Massachusetts | LOCATION | 0.98+ |
Yesterday | DATE | 0.97+ |
Jim | PERSON | 0.97+ |
Red hat | ORGANIZATION | 0.97+ |
DOJ | ORGANIZATION | 0.97+ |
Red Hat Linux | ORGANIZATION | 0.97+ |
this week | DATE | 0.97+ |
two | QUANTITY | 0.97+ |
one version | QUANTITY | 0.96+ |
red hat | ORGANIZATION | 0.96+ |
Linux | PERSON | 0.96+ |
twenty nineteen lots | QUANTITY | 0.96+ |
over a thousand customers | QUANTITY | 0.96+ |
first time | QUANTITY | 0.96+ |
Esai | ORGANIZATION | 0.96+ |
eight | QUANTITY | 0.95+ |
every six months | QUANTITY | 0.95+ |
one | QUANTITY | 0.95+ |
Mike McGibbney, SAP | SAP SAPPHIRE NOW 2018
>> From Orlando, Florida, it's theCUBE, covering SAP SAPPHIRE NOW 2018. Brought to you by NetApp. >> Hi, welcome to theCUBE. I'm Lisa Martin, with Keith Townsend, and we are with NetApp in their booth at SAP SAPPHIRE 2018, welcoming Mike McGibbney to theCUBE, from SAP. You're the SVP of SuccessFactors Service, Delivery and Operations. Welcome. >> Well, thank you. >> So, SuccessFactors, largest people cloud in the world. So you're probably a little bit busy. >> Just a little bit. >> Tell us about what you're doing at SuccessFactors. >> So I'm responsible for the delivery and operation of the cloud service. We service all of our customers and continue to introduce new capabilities into that cloud. We support them from payroll all the way through recruitment; basically, from hire to retire. >> So Mike, not your first cloud. A little background and history: me and Mike have been on, well, probably one of the toughest projects, politically, I've ever been on. >> Yes, definitely. >> So there's history, but great history. We delivered success. This isn't your first cloud. >> No. >> You've built clouds before. What's fundamentally different about the SAP people cloud versus clouds you've built in the past? >> I think the speed. The way this is accelerating, both the breadth of the capabilities that we're offering, when you think about the integrations into SAP, and the growth. This is moving truly at cloud speed. The things that we're shooting for today are already past, so we constantly have to be focused out there on the horizon. We've gotta adapt very quickly, and we've gotta implement very quickly. Our customers need it to accelerate their business, and our services need that support underneath them as well. >> So you guys, as you said, have this long history, so I'll let you guys chat in a minute. But in terms of customer experience, customer engagement, customer influence, that was kind of a lot of the undertone in the keynote this morning.
50 million business users on SuccessFactors, across 60 industries. How do you, needing to get to the speed that you just mentioned, get that customer feedback to drive evolution of the product as fast as they're demanding it? >> Well, the product and engineering team have a whole system around customer engagements, with delivery panels and steering committees. But from an operations side, we felt that was important as well. We have a whole organization that is focused on engaging the customer. We built our operational centers, and we do probably about 60 customer tours a year through our operational centers. We also do about 200 customer calls from the operational team a month. So globally, we work with pre-sales, the CEE groups, and some of the other SAP support groups to make sure that we have boots on the ground, understanding what our customers want, understanding what their experience is, so we can continue to adjust and reset the bar where it needs to be. >> So Lisa, I'm not gonna dominate the conversation. Me and Mike can probably, well, we'll crack open a beer in a minute, (laughter) and we'll continue. But there were other hero numbers on the stage. Let's talk about the high level first and then me and Mike can geek out. What were some of the other big reveals? >> Oh, good question. I think just some of the industries. I always like to see which industries are kind of leading edge here. So he mentioned 23,000 HANA users and 25 different industries, and I'm curious, that's a lot. And I'm curious to see what some of the key use cases are that you guys are driving, helping customers in many industries go from hire to retire. What are some of the key use cases that you're helping those customers to drive? >> Well, I think we have a good presence in about every vertical, from both the public and the private sector, and the suite of tools that we have services each of those use cases.
I think when you start to think about the SAP suite and the integration story that they talked about, with the intelligence and the analytics on top, that just takes it to another level. And I think that's the really important underlying message, and that's what's gonna help not only SuccessFactors but SAP continue to drive and lead across the board. >> So can we talk a little bit about customer interaction? Traditionally, you've served up infrastructure to developers directly. But in a lot of cases, your direct customer may be your actual business user looking to transform digitally. Talk about the difference in experience between running a cloud that was consumed by other technologists and running a cloud that's centered on people, one that's thinking about people and customers. >> Yeah, that's a great question, because these are business-critical activities. You think about something like learning, right? That's used to certify pilots before they can take off, so the availability and the delivery of that service is critical. Large amusement parks have to certify all the ride handlers. So this thing has to be available 24 by seven, 365 days a year. And that's just something like learning. When you think about some of the other facets, they are entrenched in our customers' modern business processes, and they're all critical. So when we look at these, we have to look at them like we used to look at some of the most critical functions on the backend. We run them, from an operational perspective, like a bank, okay? With that resilience, those practices, that focus. But we also have to do it at the speed of cloud. (laughs) >> I was just gonna ask that question. You have two competing interests. You know, SAP processes 70 percent of the transactions in the world. It has been called the cash register of the cloud. It is the ultimate system of record.
Therefore, it should never be touched. However, we have to move fast. We have to digitally transform. There are commercial entities that want to build cool new applications on Fiori, et cetera, and there are other business integrations. How do you weigh those two, what seem like competing interests? >> I think Bert laid out the data strategy and how we're gonna integrate the data across the suite, and that's gonna be the key, right? Instead of integrating and porting, we're gonna have single sources where the data is gonna reside. We're gonna use that as the system of record as the suite evolves. That'll give it the data integrity that it needs, and also the performance, from an integration perspective. >> So we're sponsored by the data-driven company, NetApp, who is powering one of the most powerful data platforms on the planet, SAP. Talk about the relationship and the importance of NetApp's vision in supporting your vision. >> So NetApp was here at SAP long before I started, but I have probably a 17-to-20-year history with NetApp. And you know, data is critical: the storage, the access, the performance. They've been a critical part of almost every architecture I've worked on to date. Rock-solid performance, rock-solid reliability, but more important to me is the partnership with the company and the support that we get. Not just on the stuff that we're doing today, but thinking about how we're gonna change in the future, supporting us as we evolve, and helping us plan and think through that as well. >> One of the things that Bill talked about this morning, as well, is getting to this 4th gen of customer experience. These expectations, we've talked about speed: everything has to be done yesterday, right? How are you guys working with NetApp on delivering that 4th-generation customer experience, internally and to your 50 million business users? >> Well, I think you touched on bits and pieces of it.
It's a whole suite of-- it's a whole program of plans, right? Between Fiori, you know, all those things on the front end, where the customer touches. But on the backend, it's about speed and reliability of their data, right? So our architectures are getting simplified, our data's getting condensed, and we need the compliance pieces, and that's where NetApp kinda plays a core role in those pieces. >> So back in traditional infrastructure and operations, we could tout speeds and feeds as one of the best features of why you should use one service over another. The way you describe it, everyone now expects speeds and feeds. What are some of the value props or KPIs for your new environment? >> So, we've really shifted. One of the things that we've done is we've actually added operational intelligence. We have basically a brain that sits on top of our cloud environment. It looks at all of the transactions and it filters out all the noise. So the speeds and feeds are now part of a service or a business function that we're delivering. That metric by itself is important, but unless you can correlate it to some business impact, or something happening, it doesn't really have the weight that it needs. >> Right. >> So now what we're looking at is, we've ingested and mapped all of the business transactions, and we can proactively focus on the ones that matter. We filter out 99-and-change percent of the noise, and then we prioritize the things that we need to kinda pivot and focus on. We have three global operational centers around the world: one in Budapest, one in Bangalore, and one in Reston. And then we have a global operation center that sits on top, so the regionals sit in the region, and they look at all of that feedback from that intelligence.
You have a pretty consistent core team that support you over the past two or three different major iterations you've done. Talk through how collectively your team has looked at new innovations and operation deliveries such as DevOps. And you've changed the way that your core team approaches these challenges and the outcomes that you've been able to realize. >> So for us, it's about, you know the architecture and technology evolves. As it evolves, it makes a few things simpler. And also, introduces some usually more complex challenges. But it's mitigating risk, delivering performance and reliability, and maturing your actions. So if we do those basic things as we mature the technology underneath, we can drive that. So the team has been focused on, when we think about DevOps, we think about delivering seamlessly new capabilities, features into the cloud. How do we do that with a minimized risk, through automation, and seamless, right? So it's how we segmented the application, how we built the resilience in, how our processes understand and validate and be able to stand in if something happens. >> I'm wondering on that, from maybe a pivot is, we talk about often times, at different events. Whether we're talking about advanced analytics or data science skills gap. Or I think Bill even said like, upskilling. Think I heard that term this morning. I'm curious, as you were saying that, that the folks that you've been working with for a long time on different projects. What are some of the skills that they're able to, you may be able to enable them to learn, by being part of SAP? Is it something that helps accelerate their ability to develop even better, more competitive products? >> Yeah, so SAP has one of the best talent pools I've ever seen across. Some very brilliant people in every business line. So there's best practices that can be learned from everything that we do. All you have to do is be able to have the conversations and look around. 
When we brought the team in, about two years ago, we did a whole skills analysis, a gap analysis, of the skills that we had. We looked at our operating model and created a new operating model that enabled us to evolve from an operational perspective. And then we put plans in place and used the tools that we sell to help deliver development to the team. So basically, we became our own customer. We drove upskilling of our existing resources, and we supplemented where needed. And we also pulled from the collective knowledge of SAP. So doing those three things helped us really accelerate and execute something that typically would take three years in less than 12 months. >> Last question, Mike, for you. This morning's energetic keynote, we've talked about it a number of times already today. Really, I think somebody on the show earlier likened Bill McDermott to, really, an evangelist, which is really refreshing. You don't see a lot of C-levels where you can feel and kinda see their passion. SAP has been very vocal for a while about really wanting to disrupt the marketplace for CRM. Some big news coming out today. I'm just wondering, kind of culturally, to wrap this up, what excites you about this train that you're on at SAP? >> I think that the message is electrifying, and inside of SAP, you feel that. We've been feeling it as these bits and pieces have been coming out over the last year, so this is just a culmination of all the little pieces that we've known inside and are now able to share externally. So I'm extremely excited about where we're at and where we're going. And obviously, anytime I get to hear Bill speak, it just amplifies it. >> Yeah, that energy was really, you could feel it from wherever you were. It was awesome. Mike, thanks so much for stopping by and catching up with your old buddy Keith and me, and sharing what you guys are doing with SuccessFactors. >> Excellent, excellent. Thanks very much.
>> Thanks for -- Oh sorry, and thanks for watching theCUBE. Lisa Martin with Keith Townsend, from SAP SAPPHIRE in the NetApp booth. Thanks for watching. (fast tempo music)
SUMMARY :
Brought to you by NetApp. and we are with NetApp in their booth at SAP SAPPHIRE 2018. So, SuccessFactors, largest people cloud in the world. So I'm responsible for the delivery and operation one of the toughest projects, So there's history, but great history. What's fundamentally different about the SAP people cloud and the growth. in the keynote this morning. to make sure that we have boots on the ground, So Lisa, I'm not gonna dominate the conversation. What are some of the key use cases that and the integration story that they talked about, of running the cloud that was consumed So we run them like you would, in the world. And that's gonna be the key, right? Talk about the relationship and importance of NetApps, Not just on the stuff that we're doing today, One of the things that Bill talked about But in the backend, it's about speed and reliability as one of the best features of why you should use So one of the things that we've done is that sits on the top, I looked at some of the common folks we have. So the team has been focused on, that the folks that you've been working with of the skills that we had. to wrap this up, what excites you have been coming out over the last year. and sharing what you guys are doing with SuccessFactors. Thanks very much. in the NetApp booth.
Day One Morning Keynote | Red Hat Summit 2018
(music) >> You are now welcome to Red Hat Summit 2018! (music) >> Wow, that is truly the coolest introduction I've ever had. Thank you. Wow. I don't think I feel cool enough to follow an introduction like that. Wow. Well, welcome to the Red Hat Summit. This is our 14th annual event, and I have to say, looking out over this audience, wow, it's great to see so many people here joining us. This is by far our largest summit to date. Not only did we blow through the numbers we've had in the past, we blew through our own expectations this year. So I know we have a pretty packed house, and I know people are still coming in, so it's great to see so many people here. It was great to see so many familiar faces when I had a chance to walk around earlier, and it's great to see so many new people here joining us for the first time. I think the record attendance is an indication that more and more enterprises around the world are seeing the power of open source to help them with the challenges they're facing due to the digital transformation that all enterprises around the world are going through. The theme for the summit this year is Ideas Worth Exploring, and we intentionally chose that because, as much as we are all going through this digital disruption and the challenges associated with it, one
thing I think is becoming clear: no one person, and certainly no one company, has the answers to these challenges. This isn't a problem where you can go buy a solution. This is a set of capabilities that we all need to build, it's a set of cultural changes that we all need to go through, and that's going to require the best ideas coming from so many different places. So we're not here saying we have the answers; we're trying to convene the conversation. We want to serve as a catalyst, bringing great minds together to share ideas, so we all walk out of here at the end of the week a little wiser than when we first came here. We do have an amazing agenda for you. We have over 7,000 attendees, and we may be pushing 8,000 by the time we get through this morning. We have 36 keynote speakers, and we have three hundred and twenty-five breakout sessions. And I have to throw in one plug: scheduling 325 breakout sessions is actually pretty difficult, and so we used the Red Hat Business Optimizer, which is an AI constraint solver that's new in Red Hat Decision Manager, to help us plan the summit. Because we have individuals who have a clustered set of interests, and we want to make sure that when we schedule two breakout sessions, we do it in a way that we don't have overlapping sessions that are really important to the same individual. So we tried to use this tool, and what we understand about people's interests and history of what they wanted to do, to try to make sure that we spaced out different times for things of similar interest for similar people, as well as for people who've stood in the back of breakouts before, and I know I've done that too. We've also used it to try to optimize room size, so hopefully we will do our best to make sure that we've appropriately sized the spaces for those as well. So it's really a phenomenal tool, and I know it's helped us a lot this year. In addition to the 325 breakouts, we have a lot of our customers on stage during the main sessions.
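The scheduling problem described here (keep two sessions that appeal to the same interest cluster out of the same time slot) is a classic constraint-solving task; Red Hat Business Optimizer is based on the OptaPlanner engine, which handles it at data-center scale. As a rough illustration only, here is a toy greedy sketch in Python; the session names, interest tags, and slot count are all invented, and a real solver would also weigh room sizes and soft preferences:

```python
# Toy constraint-style scheduler: place sessions into time slots so that
# no two sessions sharing an interest cluster land in the same slot.
# This is effectively greedy graph coloring: sessions are nodes, and a
# shared interest tag between two sessions is an edge.

def schedule(sessions, n_slots):
    """Assign each session the lowest slot not used by a conflicting session."""
    slots = {}  # session name -> slot index
    for name, tags in sessions.items():
        taken = {slots[other] for other, other_tags in sessions.items()
                 if other in slots and tags & other_tags}
        slots[name] = next(s for s in range(n_slots) if s not in taken)
    return slots

# Invented example sessions with interest-cluster tags.
sessions = {
    "Intro to OpenShift": {"containers"},
    "Advanced OpenShift": {"containers"},
    "Ceph Deep Dive": {"storage"},
    "Gluster on OpenShift": {"storage", "containers"},
}
plan = schedule(sessions, n_slots=3)
```

The greedy pass is enough for a toy case; the appeal of a real constraint solver is that it can also optimize soft constraints (room capacity, attendance history) rather than just satisfying the hard ones.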
You'll see demos, you'll hear from partners, you'll hear stories from so many of our customers: not our point of view on how to use these technologies, but their points of view on how they actually are using these technologies to solve their problems. And you'll hear over and over again from those keynotes that it's not just about the technology, it's about how people are changing how they work to innovate and solve those problems. And while we're on the subject of people, I'd like to take a moment to recognize the Red Hat Certified Professional of the Year. This is an award we do every year. I love this award because it truly recognizes an individual for outstanding innovation, for outstanding ideas, for truly standing out in how they're able to help their organization with Red Hat technologies. Red Hat certifications help system administrators, application developers, and IT architects to further their careers and help their organizations by being able to advance their skills and knowledge of Red Hat products. And this year's winner truly is a great example of how their curiosity has helped push the limits of what's possible with technology. Let's hear a little more about this year's winner. >> When I was studying at the university, I had computer science as one of my subjects, and that's what created the passion from the very beginning. There were quite a few institutions around my university who were offering Red Hat Enterprise Linux as a course, and a certification path through to becoming an administrator. Red Hat Learning Subscription has offered me a lot more than any other trainings that I have done so far. It gave me exposure to so many products under Red Hat technologies that I wasn't even aware of. I started to think about better ways these learnings could be put into real-life use cases, and we started off with a discussion with my manager, saying I have to try this product and I really want to see how it fits in our environment. And that product was Red Hat
Virtualization. We went from deploying RHV, and then OpenStack, and then the OpenShift environment. >> We wanted to overcome some of the things that we saw as challenges to the speed and rapidity of release of code, etc. So it made perfect sense, and we were able to do it in a really short space of time, so we truly did use it as an innovation lab. >> I think an idea is everything. Ideas can change the way you see things. An innovation lab was such an idea that popped into my mind one fine day, and it has transformed the way we think as a team. And it's given that playpen to pretty much everyone, to go and test their things, investigate, evaluate, do whatever they like in a non-critical, non-production environment. >> I recruited Neha almost 10 years ago now. I could see there was a spark, a potential with her, and you know, she had a real drive, a real passion, and here we are nearly ten years later. >> I'm Neha Sandow. I am a Red Hat Certified Engineer. >> All right, well, everyone, please welcome to the stage Neha. (music) (applause) Congratulations. >> Thank you. (applause) >> Well, welcome to the Red Hat Summit. This is your first summit? >> Yes, it is. >> Well, fantastic. It's great to have you here. I hope you have a chance to engage and share some of your ideas and enjoy the week. >> Thank you. >> Congratulations. (applause) Neha mentioned that she first got interested in open source at university, and it made me think: Red Hat recently started our Red Hat Academy program, which looks to programmatically infuse Red Hat technologies in universities around the world. It's exploded in a way we had no idea; it's grown just incredibly rapidly, which I think shows the interest there really is in open source and working in an open way at university. So it's really a phenomenal program. I'm also excited to announce that we're launching our newest open source story this year at Summit. It's called The Science of Collective Discovery, and it looks at what happens when
communities use open hardware to monitor the environment around them, and really how they can make impactful change based on those technologies. The world premiere will be at 5:15 on Wednesday at McMaster Oni West, so please join us for a drink. We'll also have a number of the experts featured in it there, and you can have a conversation with them as well. So with that, let's officially start the show. Please welcome Red Hat president of products and technology, Paul Cormier. (music) >> Wow. Morning. You know, I say it every year, and I'm going to say it again. I know I repeat myself; it's just amazing. We are so proud to be here today, to welcome you all this week, with how far we've come with open source and with the products that we provide at Red Hat. So welcome, and I hope the pride shows through. You know, I told you seven summits ago on this stage that the future would be open, and here we are, just seven years later (this is the 14th summit, but just seven years after that), and much has happened. And I think you'll see today and this week that that prediction that the world would be open was a pretty safe prediction. But I want to take you back just a little bit to see how we started here, and it's not just how Red Hat started here; it's how open source and Linux-based computing became an industry norm, and I think that's what you'll see here this week. You know, we talked back then, seven years ago, when we put out our prediction, about the UNIX era, and how hardware innovation with x86 was really the first step in a new era of open innovation. You know, companies like Sun, DEC, IBM, and HP really changed the world, the computing industry, with their UNIX models. That was really the rise of computing. But I think what we really saw then was that single-company innovation could only scale so far. These companies were very, very innovative, but they coupled hardware innovation with software innovation, and
as one company, they could only solve so many problems. And, which complicated things even more, they could only hire so many people in each of their companies. Intel came on the scene back then as the new independent hardware player, and that was really the beginning of the drive for horizontal computing power. This opened up a brand-new vehicle for hardware innovation: a new hardware ecosystem was built around this common hardware base. Shortly after that, Stallman and Linus had a vision of an open model, and they created Linux. But it was built around Intel. This was really the beginning of having a software-based platform that could also drive innovation. This was the beginning of the changing of the world here: system-level innovation, now having a hardware platform that was ubiquitous and a software platform that was open and ubiquitous. It really changed system-level innovation, and that continues to thrive today. It was only possible because it was open. This could not have happened in a closed environment. It allowed the best ideas from anywhere, from all over, to come in and win, only because it was the best idea. That's what drove the rate of innovation to the pace you're seeing today, which has never been seen before. We at Red Hat saw the need to bring this innovation to solve real-world problems in the enterprise, and I think that's going to be the theme of the show today. You're going to see us with our customers and partners, talking about and showing you some of those real-world problems that we are solving with this open innovation. We created RHEL back then for the enterprise. It was successful because it scaled, it was secure, and it was enterprise-ready. It once again changed the industry, but this time through open innovation. This open software platform gave the hardware ecosystem
a software platform to build around. It unleashed the hardware side to compete and thrive. It enabled innovation from the OEMs: new players building cheaper, faster servers; even new architectures, from Arm to POWER, sprung up. With this change, we have seen an incredible amount of hardware innovation over the last 15 years. That same innovation happened on the software side. We saw powerful implementations of bare-metal Linux distributions out in the market; in fact, at one point there were over 300 distributions out in the market. On the foundation of Linux, powerful open-source equivalents were developed in every area of technology: databases, middleware, messaging, containers, anything you could imagine. Innovation just exploded around the Linux platform. Innovation at the core also drove virtualization, and both Linux and virtualization led to another area of innovation which you're hearing a lot about now: public cloud innovation. This innovation started to proceed at a rate that we had never seen before, that we had never experienced in the past. This unprecedented speed of innovation in software was now possible because you didn't need a chip foundry in order to innovate; you just needed great ideas and the open platform that was out there. Customers seeing this innovation in the public cloud sparked their desire to build their own Linux-based cloud platforms, and customers are now bringing that cloud efficiency on-premise, in their own data centers. Public clouds demonstrated so much efficiency that data center architects wanted to take advantage of it on-premise, within their own controlled environments. This really allowed companies to make the most of existing investments, from data centers to hardware. They also gained many new advantages, from data sovereignty to new flexible, agile approaches. I want to bring Burr and his team up here to take a look at what building out an on-premise
cloud can look like today. Burr, take it away. >> I am super excited to be with all of you here at Red Hat Summit. I know we have some amazing things to show you throughout the week, but before we dive into this demonstration, I want you to take just a few seconds, just a quick moment, to think about a really important event in your life: that moment you turned on your first computer. Maybe it was a TRS-80, a Sinclair, an Atari; I even had an 83 b2 at one point. But in my specific case, I was sitting in a classroom in Hawaii, and I could see all the way from Diamond Head to Pearl Harbor. So just keep that in mind as I turned on an IBM PC with dual floppies. I remember issuing my first commands, writing my first lines of code, and I was totally hooked. It was like a magical moment, and I've been hooked on computers for the last 30 years. So I want you to hold that image in your mind for just a moment, just a second, while we show you the computers we have here on stage. Let me turn this over to Jay, our worldwide DevOps manager, and he is going to show us his hardware. What do you got, Jay? >> Thank you, Burr, and good morning, everyone, and welcome to Red Hat Summit. We have so many cool things to show you this week. I am so happy to be here, and you know, my favorite thing about Red Hat Summit is we're allowed to kind of share all of our stories, much like Burr just did. We also love to talk about the hardware and the technology that we brought with us. In fact, it's become a bit of a competition. So this year we said, you know, let's win this thing, and I think we might have won: we brought a cloud with us. So right now, this is a private cloud. Throughout the course of the week, we're going to turn this into a very, very interesting open hybrid cloud, right before your eyes. So everything you see here will be real and happening right on this thing right behind me. So thanks to our four incredible partners, IBM, Dell, HP, and Supermicro, we've built a very vendor
heterogeneous cloud here. Extra special thanks to IBM, because they loaned us a POWER9 machine, so now we actually have multiple architectures in this cloud. As you know, one of the greatest benefits to running Red Hat technology is that we run on just about everything, and I can't stress enough how powerful that is, how cost-effective that is, and it just makes my life easier, to be honest. So if you're interested, the people that built this actual rack right here are going to be hanging out in the customer success zone this whole week; it's on the second floor of the lobby there, and they'd be glad to show you exactly how they built this thing. So let me show you what we actually have in this rack. Contained in this rack, we have 1,056 physical cores right here, we have five and a half terabytes of RAM, and just in case, we threw 50 terabytes of storage in this thing. So Burr, that's about two million times more powerful than that first machine you booted up. We're actually capable of putting all the power needs and cooling right in this rack, so there's your data center, right there. You know, it occurred to me last night that I could actually pull the power cord on this thing and kick it up a notch: we could have the world's first mobile portable hybrid cloud. So I'm gonna go ahead and unplug-- >> No, no, no, no, no! >> Seriously, I'm not going to unplug the thing; we got it working now. Burr gets a little nervous, but next year we're rolling this thing around. Okay, okay. So to recap: multiple vendors, check. Multiple architectures, check. Multiple public clouds plug right into this thing, check. And everything everywhere is running the same software from Red Hat, so that is a giant check. So Burr, Angus, why don't we get the demos rolling? >> Awesome. So we have some amazing hardware, amazing computers, on this stage, but now we need to light it up, and we have Angus Thomas, who represents our OpenStack engineering team, and he's going to show us what we can do with this awesome hardware,
Angus. >> Thank you, Burr. So this is an impressive rack of hardware that Jay has brought up on stage. What I want to talk about today is putting it to work with OpenStack Platform director. We're going to turn it from a lot of potential into a flexible, scalable private cloud. We've been using director for a while now to take care of managing hardware and orchestrating the deployment of OpenStack. What's new is that we're bringing the same capabilities for on-premise to manage the deployment of OpenShift. Deploying OpenShift in this way is the best of both worlds: it's bare-metal performance, but with an underlying infrastructure-as-a-service that can take care of deploying new instances and scaling out, and a lot of the things that we expect from a cloud provider. Director is running on a virtual machine on Red Hat Virtualization at the top of the rack, and it's going to bring everything else under control. What you can see on the screen right now is the director UI, and as you see, some of the hardware in the rack is already being managed. At the top level, we have information about the number of cores, the amount of RAM, and the disks that each machine has. If we dig in a bit, there's information about MAC addresses and IPs and the management interface, the BIOS, the kernel version. Dig a little deeper, and there is information about the hard disks. All of this is important, because we want to be able to make sure that we put workloads exactly where we want them. Jay, could you please power on the two new machines at the top of the rack? >> Sure. >> All right, thank you. So when those two machines come up on the network, director is going to see them, see that they're new and not already under management, and it is immediately going to go into the hardware inspection that populates this database and gets them ready for use. We also have profiles, as you can see here. Profiles are the way that we match the hardware in a machine to the kind of workload that it's suited to. This is how we make
sure that machines that have all the disks run Ceph, and machines that have all the RAM run our application workloads, for example. There's two ways these can be set. When you're dealing with a rack like this, you could go in and individually tag each machine, but director scales up to data centers, so we have a rules-matching engine which will automatically take the hardware profile of a new machine and make sure it gets tagged in exactly the right way. So we can automatically discover new machines on the network, and we can automatically match them to a profile. That's how we streamline and scale up operations. Now I want to talk about deploying the software. We have a set of validations. We've learned over time about the misconfigurations in the underlying infrastructure which can cause the deployment of a multi-node distributed application like OpenStack or OpenShift to fail. If you have the wrong VLAN tags on a switch port, or DHCP isn't running where it should be, for example, you can get into a situation which is really hard to debug. A lot of our validations actually run before the deployment: they look at what you're intending to deploy, and they check that the environment is the way it should be, and they'll preempt problems. And obviously, preemption is a lot better than debugging. Something new that you probably have not seen before is director managing multiple deployments of different things side by side. Before we came out on stage, we also deployed OpenStack on this rack, just to keep me honest. Let me jump over to OpenStack very quickly. A lot of our OpenStack customers will be familiar with this UI, and the bare-metal deployment of OpenStack on our rack is actually running a set of virtual machines which is running Gluster. You're going to see that put to work later on during the summit. Jay's gone to an awful lot of effort to get this hardware up on the stage, so we're going to use it in as many different ways as we can. Okay, let's deploy OpenShift.
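The rules-matching engine described above, which takes a newly inspected machine's hardware facts and tags it with the right profile, can be pictured as a first-match rule table. This is an illustrative sketch only: the profile names, thresholds, and inventory are invented, and the real director uses its own introspection data and rule syntax:

```python
# Sketch of hardware-to-profile rules matching: each rule is a profile
# name plus a predicate over a machine's inspected facts; the first rule
# that matches wins. All names and thresholds here are invented.

RULES = [
    ("ceph-storage", lambda m: m["disks"] >= 6),         # disk-heavy -> Ceph nodes
    ("memory-optimized", lambda m: m["ram_gb"] >= 512),  # RAM-heavy -> app workloads
    ("compute", lambda m: True),                         # default profile
]

def tag(machine):
    """Return the profile of the first rule whose predicate matches."""
    return next(profile for profile, rule in RULES if rule(machine))

# Invented inventory, as produced by hardware inspection.
inventory = [
    {"name": "node-1", "disks": 12, "ram_gb": 128},
    {"name": "node-2", "disks": 2, "ram_gb": 768},
    {"name": "node-3", "disks": 2, "ram_gb": 64},
]
profiles = {m["name"]: tag(m) for m in inventory}
```

The point of doing this with rules rather than manual tags is exactly what the demo says: it works the same for one rack or a whole data center, with no per-machine intervention.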
If I switch over to the deployment plan view, there's a few steps. First thing you need to do is make sure we have the hardware. I already talked about how director manages hardware: it's smart enough to make sure that it's not going to attempt to deploy onto machines that are already in use; it's only going to deploy on machines that have the right profile. But I think with the rack that we have here, we've got enough. Next thing is the deployment configuration. This is where you get to customize exactly what's going to be deployed, to make sure that it really matches your environment. If there are external IPs for additional services, you can set them here. Whatever it takes to make sure that the deployment is going to work for you. As you can see on the screen, we have a set of options around enabling TLS for encrypting network traffic. If I dig a little deeper, there are options around enabling IPv6 and network isolation, so that different classes of traffic run over different physical NICs. Okay, then we have roles. Now roles, this is essentially about the software that's going to be put on each machine. Director comes with a set of roles for a lot of the software that Red Hat supports, and you can just use those, or you can modify them a little bit, if you need to add a monitoring agent or whatever it might be, or you can create your own custom roles. Director has quite a rich syntax for custom role definition and custom network topologies, whatever it is you need in order to make it work in your environment. So the roles that we have right now are going to give us a working instance of OpenShift. If I go ahead and click through, the validations are all looking green, so right now I can click the button, start the deploy, and you will see things lighting up on the rack. Director is going to use IPMI to reboot the machines, provision them with a RHEL image, put the containers on them, and start up the application stack. Okay, so one last thing: once the deployment is done, you're going to want to keep director around.
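The preflight validations mentioned above amount to a list of environment checks run before any deployment starts, so misconfigurations are caught up front instead of debugged afterwards. A minimal sketch, with an invented environment dict and just the two failure modes called out earlier (VLAN tags and DHCP); director's real validations are far richer:

```python
# Each check inspects the environment and returns (ok, message); the
# deployment only proceeds if every check passes. The environment shape
# here is invented for illustration.

def check_vlans(env):
    missing = env["required_vlans"] - env["switch_vlans"]
    return (not missing,
            f"missing VLAN tags: {sorted(missing)}" if missing else "VLANs ok")

def check_dhcp(env):
    ok = env["dhcp_subnet"] in env["dhcp_servers"]
    return (ok, "DHCP ok" if ok else f"no DHCP server on {env['dhcp_subnet']}")

def preflight(env, checks=(check_vlans, check_dhcp)):
    results = [check(env) for check in checks]
    return all(ok for ok, _ in results), [msg for _, msg in results]

env = {
    "required_vlans": {10, 20, 30},
    "switch_vlans": {10, 20},          # VLAN 30 was never tagged on the switch
    "dhcp_subnet": "192.168.24.0/24",
    "dhcp_servers": {"192.168.24.0/24"},
}
ok, messages = preflight(env)
```

Running this against the invented environment fails fast with a precise message about the missing VLAN, which is exactly the preemption-over-debugging point the demo makes.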
Director has a lot of capabilities around what we call day-two operational management: bringing in new hardware, scaling out deployments, dealing with updates, and, critically, doing upgrades as well. So having said all of that, it is time for me to switch over to an instance of OpenShift deployed by director, running on bare metal on our rack, and I need to hand this over to our developer team so they can show what they can do with it. Thank you. >> That is so awesome, Angus. So what you've seen now is going from bare metal to the ultimate private cloud, with OpenStack director making OpenShift ready for our developers to build their next-generation applications. Thank you so much, guys. That was totally awesome. I love what you guys showed there. Now I have the honor of introducing a very special guest, one of our earliest OpenShift customers, who understands the necessity of the private cloud inside their organization, and, more importantly, they're fundamentally redefining their industry. Please extend a warm welcome to Dietmar Fauser from Amadeus. (applause) >> Well, good morning, everyone. A big thank you for having Amadeus here, and myself. So as it was just said, I'm at Amadeus. First of all, we are a large IT provider in the travel industry, so serving essentially airlines, hotel chains, distributors like Expedia, and others. We indeed started very early with OpenShift, like a bit more than three years ago, and we jumped on it when Red Hat teamed with Google to bring Kubernetes into it. So let me quickly share a few figures about Amadeus, to give you a sense of what we are doing and the scale of our operations. Some of our key KPIs: one of our key metrics is what we call passenger boardings; that's the number of customers that physically board a plane over the year. So through our systems, it's roughly 1.6 billion people checking in and taking aircraft under the Amadeus systems. Close to 600 million travel agency bookings. Virtually all
airlines are on the system, and one figure I want to stress a little bit is this one: one trillion availability requests per day. When I read this figure, my mind boggles a little bit. This means, in continuous throughput, more than 10 million hits per second. Of course, these are not traditional database transactions; it's highly cached in memory, and these applications are running over more than 100,000 cores. So it's really big stuff. So today I want to give some concrete feedback on what we are doing. I have chosen two applications, products of Amadeus, that are currently running in production in different hosting environments, as the theme here of this talk is hybrid cloud, and so I want to give some concrete feedback on how we architect the applications. Of course, it stays relatively high-level. So here I have taken one of our applications that is used in the hospitality environment. We have built this for a very large US hotel chain, and it's currently in full swing, brought into production, so like 30 percent of the globe, or 5,000-plus hotels, are on this platform now. So here you can see that we use, as the PaaS, of course, OpenShift; that's the most central piece of our hybrid cloud strategy. On the database side, we use Oracle and Couchbase. Couchbase is used for the heavy-duty, fast-access, more key-value store, but also to replicate data across two data centers. In this case, it's running over two US-based data centers, an east and west coast topology, run by Amadeus, fitted with VMware for the virtualization, OpenStack on top of it, and then OpenShift to host and welcome the applications. On the right-hand side, you see the kind of tools, if you want to call them tools, that we use. These are the principal ones; of course, the real picture is much more complex, but in essence, we use Terraform to map to the APIs of the underlying infrastructure.
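As a quick sanity check on the figure quoted above, one trillion availability requests per day really does average out to more than 10 million hits per second:

```python
# One trillion requests per day, expressed as continuous throughput.
requests_per_day = 1_000_000_000_000
seconds_per_day = 24 * 60 * 60          # 86,400 seconds
per_second = requests_per_day / seconds_per_day
# per_second is roughly 11.57 million, i.e. "more than 10 million hits per second"
```

And that is the daily average; peak traffic would sit well above it.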
There are obviously differences when you run on OpenStack or the Google Compute Engine or AWS or Azure, so some tweaking is needed. We use Red Hat Ansible a lot; we also use Puppet. So you can see, these are really the big pieces of this installation. And if we look at the topology, again very high-level, these two locations basically map the data centers of our customers. They are in close proximity, because the response time and the SLAs of this application are very tight. So that's an example of an application that is architected mostly with high availability in mind: not necessarily full global worldwide scaling, though of course it could be scaled; here the idea is that we can swing from one data center to the other in a matter of minutes. Both take traffic, data is fully synchronized across those data centers, and the switch back and forth is very fast. The second example I have taken is what we call the shopping box. This is when people go to Kayak or Expedia and are getting inspired about where they want to travel to. This is really the piece that shoots most of the transactions into Amadeus. So we architect here more for high scalability. Of course, availability is also key, but here scaling and geographical spread is very important. So in short, it runs partially on-premise in our Amadeus data center, again on OpenStack, and we deploy it mostly, in the first step, on the Google Compute Engine, and currently, as we speak, on Amazon, on AWS, and we also work together with Red Hat to qualify the whole show on Microsoft Azure. Here in this application, it's the same building blocks. There is a large streaming aspect to it, so we bring Kafka into this, working with Red Hat and another partner to bring Kafka onto OpenShift, because at the end, we want to use OpenShift to administrate the whole show, and over time also the databases. And the topology here, when you look at the physical deployment topology, is very classical.
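The data-center swing described above, where both sites take traffic, data stays synchronized, and traffic can move from one site to the other in minutes, can be sketched as a trivial health-checked router. The site names and health-check shape are invented for illustration; in the real setup, Couchbase handles the cross-site replication and the swing happens at the traffic-routing layer:

```python
# Minimal active-active failover sketch: prefer the primary site while it
# is healthy, swing to the peer when it is not. Because data is replicated
# to both sites, either one can serve any request.

def route(health, preferred="us-east", peer="us-west"):
    """Return the data center that should take the next request."""
    if health[preferred]:
        return preferred
    if health[peer]:
        return peer
    raise RuntimeError("no healthy data center")

health = {"us-east": True, "us-west": True}
primary = route(health)          # normal operation: primary serves
health["us-east"] = False        # east coast site goes dark
failover = route(health)         # traffic swings to the peer in one decision
```

The hard part in practice is not this routing decision but keeping the data synchronized tightly enough that the swing is safe, which is what the dual-site Couchbase replication is for.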
We use the regions and the availability-zone concept. This application is spread over three principal continental regions, and so, again, it's a high-level view with different availability zones, and in each of those availability zones, we take a hit of several tens of thousands of transactions. So that was it, really, in very short, just to give you a glimpse of how we implement hybrid clouds. I think that's the way forward. It gives us a lot of freedom, and it allows us to discuss in a much more educated way with our customers, who sometimes have deals already in place with one cloud provider or another. So for us, it's a lot of value to leave them the choice, basically. So that was a very quick overview of what we are doing, together with Red Hat, based on OpenShift essentially here, and more and more OpenStack coming into the picture. Hope you found this interesting. Thanks a lot, and have a nice summit. (applause) >> Thank you so much, Dietmar. Great, great solution. We've worked with Dietmar and his team for a long, long time. Great solution. So I want to take us back a little bit, I want to circle back. I sort of ended talking a little bit about the public cloud, so let's circle back there. You know, even though some applications need to run in various footprints on-premise, there are still great gains to be had from running certain applications in the public cloud. The public cloud will be as impactful to the industry as the UNIX era of computing was, but by itself, it'll have some of the same limitations and challenges that that model had. Today there's tremendous cloud innovation happening in the public cloud. It's being driven by a handful of massive companies, and much like the innovation that Sun, DEC, HP, and others drove in the UNIX era of computing, many customers want to take advantage of the best innovation no matter where it comes from. But as they eventually saw in the UNIX era, they can't afford the best innovation
at the cost of a siloed operating environment. With the open community, we are building a hybrid application platform that can give you access to the best innovation no matter which vendor or which cloud it comes from, letting public cloud providers innovate in services beyond what customers or any one provider can do on their own, such as large-scale machine learning or artificial intelligence built on the data that's unique to that one cloud, but consumed in a common way for the end customer across all applications, in any environment, on any footprint in their overall IT infrastructure. This is exactly what RHEL brought to our customers in the UNIX era of computing: that consistency across any of those footprints. Obviously enterprises will have applications for all different uses; some will live on premise, some in the cloud. Hybrid cloud is the only practical way forward. I think you've been hearing that from us for a long time. It is the only practical way forward, and it will be as impactful as anything we've ever seen before. I want to bring Burr and his team back to see a hybrid cloud deployment in action. Burr? [Music]

All right, earlier you saw what we did with taking bare metal and lighting it up with OpenStack director and making it OpenShift-ready for developers to build their next-generation applications. Now we want to show you those next-generation applications. What we've done is taken OpenShift, spread it out, and installed it across Azure and Amazon: a true hybrid cloud. So with me on stage today is Ted, who's going to walk us through an application, and Brent Midwood, our DevOps engineer, who's going to be monitoring on the back side to make sure we do a good job. So at this point, Ted, what have you got for us?

Thank you, Burr, and good morning, everybody. This morning we are running on the stage, in our private cloud, an application that's providing fraud detection services for financial transactions. Our customer base is rather large, and we occasionally take extended bursts of heavy traffic, so in order to keep our latency down and keep our customers happy, we've deployed extra service capacity in the public cloud. We have capacity with Microsoft Azure in Texas and with Amazon Web Services in Ohio. We use OpenShift Container Platform in all three locations, because OpenShift makes it easy for us to deploy our containerized services wherever we want to put them. But the question still remains: how do we establish seamless communication across our entire enterprise, and more importantly, how do we balance the workload across these three locations in such a way that we use our resources efficiently and give our customers the best possible experience? This is where Red Hat AMQ Interconnect comes in. As you can see, we've deployed AMQ Interconnect alongside our fraud detection applications in all three locations, and if I switch to the AMQ console, we'll see the topology of the network we've created here. The router on stage has made connections outbound to the public routers in AWS and Azure. These connections are secured using mutual TLS authentication and encryption, and once these connections are established, AMQ automatically figures out the best way to route traffic to where it needs to go. So what we have right now is a distributed, reliable, brokerless message bus that spans our entire enterprise. If you want to learn more about this, make sure you catch the AMQ breakout tomorrow at 11:45 with Jack Britton and David Ingham.

Let's have a look at the message flow. We'll dive in and isolate the fraud detection API that we're interested in, and what we see is that all the traffic is being handled in the private cloud. That's what we expect, because our latencies are low and acceptable. But now, if we take a little bit of a burst of increased traffic, we're going to see that AMQ pushes a little traffic out to the public cloud, so Azure is picking up some of the load to keep the latencies down. When that subsides, Azure finishes up what it's doing and goes back offline. Now, if we take a much bigger load increase, you'll see two things: first, Azure takes a bigger proportion than it did before, and Amazon Web Services gets thrown into the fray as well. Now, AWS is actually doing less work than I expected it to do; I expected a bigger slice there, but this is an interesting illustration of what's going on with load balancing. AMQ load balancing sends requests to the services that have the lowest backlog, in order to keep the latencies as steady as possible, so AWS is probably running slowly for some reason, and that's causing AMQ to push less traffic its way. The other thing you'll notice, if you look carefully, is that this graph fluctuates slightly. Those fluctuations are caused by all the variances in the network: we have the cloud on stage and clouds in various places across the country, there's a lot of equipment and layers of virtualization and networking in between, and we're reacting in real time to the reality on the digital street. So Burr, what's the story with AWS? I noticed there's a problem right here, right now; we seem to have a bit of a performance issue.

Guys, I noticed that as well, and a little bit ago I actually got an alert from Red Hat Insights letting us know that there might be some potential optimizations we could make to our environment. So let's take a look at Insights. Here's the Red Hat Insights interface. You can see our three OpenShift deployments: the setup here on stage in San Francisco, our Azure deployment in Texas, and our AWS deployment in Ohio, and Insights is highlighting that the deployment in Ohio may have some issues that need some attention.
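The backlog-based routing Ted describes, where each request goes to whichever site currently has the smallest backlog so that a slow site naturally receives less traffic, can be sketched as a toy model. This is not the actual AMQ Interconnect algorithm; the site names, per-request costs, and request count below are all illustrative:

```python
# Toy sketch of the backlog-based load balancing described above, in the
# spirit of AMQ Interconnect's behavior. This is NOT the real algorithm:
# site names, per-request costs, and the request count are all made up.

def pick_site(backlogs):
    """Route the next request to the site with the smallest backlog."""
    return min(backlogs, key=backlogs.get)

sites = ["private", "azure", "aws"]
backlogs = {s: 0.0 for s in sites}
counts = {s: 0 for s in sites}

# A slow site accrues more backlog per request, so the balancer
# naturally sends it fewer requests (as Ted observed with AWS).
cost = {"private": 1.0, "azure": 1.0, "aws": 4.0}

for _ in range(200):
    site = pick_site(backlogs)
    backlogs[site] += cost[site]
    counts[site] += 1

print(counts)
```

The effect mirrors what the demo shows: the site with the highest per-request cost, standing in for the sluggish AWS region, ends up handling the smallest share of the traffic, while the backlogs across all sites stay roughly level.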
So Red Hat Insights collects anonymized data from managed systems across our customer environment, and that gives us visibility into things like vulnerabilities, compliance, configuration assessment, and of course Red Hat subscription consumption. All of this is presented in a SaaS offering, so it's really easy to use, it requires minimal infrastructure up front, and it provides an immediate return on investment. What Insights is showing us here is that we have some potential issues on the configuration side that may need some attention. From this view I get a look at all the systems in our inventory, including instances and containers, and you can see here on the left that Insights is highlighting one of those instances as needing some potential attention; it might be a candidate for optimization. This might be related to the issues that you were seeing just a minute ago. Insights uses machine learning and AI techniques to analyze all the collected data: we combine data not only from this system's configuration but also from other systems across the Red Hat customer base. This allows us to compare how we're doing against the entire set of industries, including our own vertical, in this case the financial services industry, and against other customers. We also get access to tailored recommendations that let us know what we can do to optimize our systems. In this particular case we're detecting an issue where we are an outlier: our configuration has been compared to other configurations across the customer base, and in this particular instance this security group is misconfigured, so Insights gives us the steps we need to remediate the situation. The really neat thing here is that we get access to a custom Ansible playbook, so if we want to automate that type of remediation, we can use it inside of Red Hat Ansible Tower, Red Hat Satellite, or Red Hat CloudForms. It's really powerful.

The other thing here is that we can apply these recommendations right from within the Red Hat Insights interface. With just a few clicks I can select all the recommendations that Insights is making, and using that built-in Ansible automation I can apply those recommendations really quickly across a variety of systems. This type of intelligent automation is really fast and powerful. So, really quickly here, we're going to see the impact of those changes: we can tell that we're doing a little better than we were a few minutes ago, compared across the customer base as well as within the financial industry, and if we go back and look at the map, we should see that our AWS deployment in Ohio is in a much better state than it was just a few minutes ago. So I'm wondering, Ted, if this had any effect and might be helping with some of the issues you were seeing. Let's take a look. Looks like it went green now. Let's see what it looks like over here. Yeah, it doesn't look like the configuration has taken effect quite yet; maybe there's some delay. ...Awesome, fantastic. So now we're load balancing across the three clouds. Fantastic.

Well, thank you, Ted. I truly love how we can route requests and dynamically balance transactions across these three clouds: a truly hybrid, cloud-native application, which you saw here on stage for the first time. And it's a fully portable application: if you build your applications with OpenShift, you can move from cloud to cloud to cloud, from on-stage private all the way out to the public clouds. It's totally awesome. We also have the application being fully managed by Red Hat Insights. I love having that intelligence watching over us and ensuring that we're doing everything correctly. That is fundamentally awesome. Thank you so much for that. Well, we actually have more to show you, but you're going to have to wait a few minutes longer. Right now we'd like to welcome Paul back to the stage, and we have a very
special early Red Hat customer, an Innovation Award winner from 2010 who's been going boldly forward with their open hybrid cloud strategy. Please give a warm welcome to Monty Finkelstein from Citigroup. [Music]

Hi, Monty. Hey, Paul, nice to see you. Thank you very much for coming. Thank you for having me. Oh, our pleasure. We wanted to pick your brain a little bit about your experiences leading the charge in computing here. We're all talking about hybrid cloud: how has the hybrid cloud strategy influenced where you are today in your computing environment? Well, when we see the various types of workload that we have on our own on-prem cloud, we see the peaks, we see the valleys, we see the demand on the environment that we have, and we really determined that we have to have a much more elastic, more scalable capability, so we can burst and stretch our environments to multiple cloud providers. These capabilities have now been proven at Citi, and of course we consider what the data risk is, as well as any regulatory requirements. So how do you tackle the complexity of multiple cloud environments? Every cloud provider has its own unique set of capabilities: their own APIs, distributions, value-added services. We wanted to make sure that we could arbitrate between the different cloud providers, and maintain all source code and orchestration capabilities on-prem, to drive those capabilities from within our platforms. This requires controlling the entitlements in a cohesive fashion across on-prem and off-prem, for security, services, automation and telemetry, as one seamless unit.

Can you talk a bit about how you decide when to use your own on-premise infrastructure versus cloud resources? Sure. There are multiple dimensions that we take into account. The first dimension is risk, from low risk to high risk, and that's really about the data classification of the environment we're talking about: whether it's public or internal, which would be considered low, through confidential, PII, restricted, sensitive and so on, which would be considered high risk. The second dimension focuses on demand volatility and response sensitivity: this ranges from low response sensitivity and low variability of the workload to high response sensitivity and high variability. The first combination that we focused on is low risk with high variability and high response sensitivity, and of course for any of these workloads we ensure that we're regulatory-compliant and that we achieve customer benefits within this environment. So how can we give developers greater control of their infrastructure environments and still help operations maintain consistency and compliance? The main drivers for using the public cloud are scale, speed and increased developer efficiency, as well as reducing cost and risk. This means providing developer workspaces and multiple environments for our developers to quickly create products for our customers. All of this is done, of course, in a DevOps model, while maintaining the source and artifact registries on-prem. This allows our developers to test and select various middleware products, but also ensures all the compliance activities happen in a centrally controlled repository.

Well, we really appreciate you coming by and sharing that with us today, Monty. Thank you so much for coming to Red Hat Summit. Thanks a lot. Thanks again, Monty. You know, these real-world insights into how our products and technologies are really running businesses today, that's just the most exciting part. So thanks, thanks again, Monty.

Now, even with as much progress as you've seen demonstrated here, and as you're going to continue to see all week long, we're far from done, so I want to just take us a little bit into
the path forward and where we go. Today, as we've talked about a lot, innovation is driven by open source development. I don't think there's any question about that, certainly not in this room, and not even across the industry as a whole. That's a long way from where we were when we started our first Summit 14 years ago. With over a million open source projects out there, this innovation aggregates into various community platforms and finally culminates in commercial, open-source-developed products. These products run many of the mission-critical applications in business today; you've heard just a couple of those here on stage, but it's everywhere, it's running the world today. But to make customers successful with that innovation, to run their real-world business applications, these open source products have to be able to leverage increasingly complex infrastructure footprints. We must also ensure a common base for the developer, and ultimately the application, no matter which footprint they choose. As you heard Monty say, developers want choice: no matter which footprint they ultimately run their applications on, they want that flexibility, from the data center to possibly any public cloud out there, regardless of whether that application was built yesterday or has been running the business for the last 10 years on 10-year-old technology. This is the flexibility that developers require today.

But different infrastructure may require different pieces of the technical stack in that deployment. One example of this that affects many things is KVM, which provides the foundation for many of the use cases that require virtualization. KVM offers a level of consistency from a technical perspective, but RHEL extends that consistency to add a level of commercial and ecosystem consistency for the application across all those footprints. This is very important in the enterprise. But while RHEL and KVM form the foundation, other technologies are needed to really satisfy the functions on these different footprints. Traditional virtualization has requirements that are satisfied by projects like oVirt and products like RHV. Traditional private cloud implementations have requirements that are satisfied by projects like OpenStack and products like Red Hat OpenStack Platform. And as applications become more container-based, we are seeing many requirements driven natively into containers. The same Linux, in different forms, provides this common base across these four footprints. This level of compatibility is critical to operators, who must better utilize, secure and deploy the infrastructure they have and are responsible for. Developers, on the other hand, care most about having a platform that creates consistency for their applications. They care about the services they need to consume within those applications, and they don't want limitations on where they run. They want services, but they want them anywhere, not necessarily just from Amazon. They want integration between applications no matter where they run. They still want to run their Java EE, now named Jakarta EE, apps, and bring those applications forward into containers and microservices. They need to be able to orchestrate these frameworks and many more across all these different footprints in a consistent, secure fashion. This creates a natural tension between development and operations, and frankly, customers amplify this tension with organizational boundaries that are a holdover from the UNIX era of computing. It's really the job of our platforms to seamlessly remove these boundaries, and it's the goal of Red Hat to seamlessly get you from the old world to the new world. We're going to show you a really cool demonstration now of how you can automate this transition. First, we're
going to take a Windows virtual machine from a traditional VMware deployment and convert it into a KVM-based virtual machine running in a container, all under the Kubernetes umbrella. This makes virtual machines more accessible to the developer, and will accelerate the transformation of those virtual machines into cloud-native, container-based form. We will work this capability into the product line over the coming releases, so we can strike the balance of enabling our developers to move in this direction while enabling mission-critical operations to still do their job. So let's bring Burr and his team back up to show you this in action one more time.

Thanks. All right, at Red Hat we recognize that large enterprises have a substantial investment in legacy virtualization technology, and this is holding you back: you have thousands of virtual machines that need to be modernized. So what you're about to see next is something very special. With me here on stage we have James Labocki, who represents our operations folks; he's going to be walking us through a mass migration. And also Itamar Heim, the lead developer of a very special application; he's going to be modernizing, containerizing and optimizing our application. All right, let's get started. James?

Thanks, Burr. As you can see, I have a typical VMware environment here. I'm in the vSphere client, and I've got a number of virtual machines, a handful of which make up one of my applications, for my development environment in this case. What I want to do is migrate those over to a KVM-based Red Hat virtualization environment. So I'm going to go to CloudForms, our cloud management platform; that's our first step. CloudForms has actually already discovered both my RHV environment and my vSphere environment, and understands the compute, network and storage there. You'll notice one of the capabilities we've built is this new capability called migrations, and underneath here there are two steps. The first thing I need to do is create my infrastructure mappings. This allows me to map my compute, networking and storage between vSphere and RHV, so CloudForms understands how those relate. Let's go ahead and create an infrastructure mapping; I'll call it "summit infrastructure mapping", and then I'll begin to map my two environments: first the compute, so the clusters here; next the datastores, so those virtual machines happen to live on datastore 2 in vSphere and I'll target datastore 2 inside of my RHV environment; and finally my networks, which live on network 100, so I'll map those from vSphere to RHV. Once my infrastructure is mapped, the next step is to create a plan to migrate those virtual machines. I'll continue to the plan wizard here, select the infrastructure mapping I just created, and select migrating my development environment virtual machines to RHV. Then I need to import a CSV file containing a list of all the virtual machines that I want to migrate. And that's it. Once I hit create, CloudForms begins, in an automated fashion, shutting down those virtual machines and converting them, taking care of all the minutiae that you'd otherwise have to do manually. It does all of that automatically for me, so I don't have to worry about all those manual interactions, and no longer do I have to go shut them down by hand. You can see the migration has kicked off here; my VMs are migrating, and if I go back to the screen here you can see that we're going to start seeing those shut down. Awesome. But if people want more information about this, how would they dive deeper into this technology later
this week? Yeah, it's a great question. We have a workload portability session in the hybrid cloud track on Wednesday, if you want to see a presentation that deep-dives into this topic and some of the migration methodologies, and then on Thursday we actually have a hands-on lab, the IT optimization VM migration lab, that you can check out. And as you can see, those are shutting down here. Yeah, we see them powering off right now. That's fantastic. So, while that's going to take a while, since you have to convert all the disks and move them over, you'll notice that previously I had already run one migration of a single application, a Windows virtual machine. If I browse over to Red Hat Virtualization, I can see on the dashboard here, browsing to virtual machines, that I have migrated that Windows virtual machine, and if I open up a tab I can now browse to it; it's running our Wingtip Toys store application, our sample application here. So now my VM has been moved over from VMware to RHV and is available for Itamar. All right, great: available to our developers.

All right, Itamar, what are you going to do for us here? Well, James, it's great that you can save cost by moving from VMware to Red Hat Virtualization, but I want to containerize our application, and with container-native virtualization I can run my virtual machine on OpenShift like any other container, using KubeVirt, a Kubernetes operator to run and manage virtual machines. Let's look at the OpenShift service catalog. You can see we have a new virtualization section here. We can import KVM or VMware virtual machines, or, if they're already loaded, create new instances of them for the developer to work with. We just need to give a name, CPU and memory; we can set other virtualization parameters and create our virtual machine. Now let's see how this looks in the OpenShift console. The cool thing about KVM is that virtual machines are just Linux processes, so they can act and behave like other OpenShift applications. We've built on more than a decade of virtualization experience with KVM, Red Hat Virtualization and OpenStack, and can now benefit from Kubernetes and OpenShift to manage and orchestrate our virtual machines. Since we know this container is actually a virtual machine, we can do virtual machine things with it, like shutdown, reboot, or opening a remote desktop session to it. But we can also see this is just a container like any other container in OpenShift, and even though the web application is running inside a Windows virtual machine, the developer can still use OpenShift mechanisms like services and routes. Let's browse our web application using the OpenShift service. It's the same Wingtip Toys application, but this time the virtual machine is running on OpenShift.

But we're not done: we want to containerize our application. Since it's a Windows virtual machine, we can open a remote desktop session to it. We see we have Visual Studio and an ASP.NET application. Let's start containerizing by moving the Microsoft SQL Server database from running inside the Windows virtual machine to running on Red Hat Enterprise Linux as an OpenShift container. We'll go back to the OpenShift service catalog, this time to the database section, and just as easily we'll create a SQL Server container. We just need to accept the EULA, provide a password and choose the edition we want, then create the database. And again, we can see the SQL Server is just another container running on OpenShift. Now let's find the connection details for our database. To keep this simple we'll take the IP address of our database service, go back to the web application in Visual Studio, update the IP address in the connection string, publish our application, and go back to browse it through OpenShift. Fortunately for us, the user experience team heard we're modernizing our application, so they pitched in and pushed new icons to use with our
containerized database, to also modernize the look and feel. It's still the same Wingtip Toys application, running in a virtual machine on OpenShift, but it's now using a containerized database. To recap: we saw that we can run virtual machines natively on OpenShift like any other container-based application, modernize them, and mesh them together. We containerized the database, but we can use the same approach to containerize any part of our application.

So some items here deserve repeating. One thing you saw is Red Hat Enterprise Linux running SQL Server in a container on OpenShift, and you also saw a Windows VM where the .NET native application is also running inside of OpenShift. So tell us what's special about that; that seems pretty crazy, what you did there. Exactly, Burr. If we take a look under the hood, we can use Kubernetes commands to see the list of our containers, in this case the SQL Server and virtual machine containers. But since KubeVirt is a Kubernetes operator, we can actually use Kubernetes commands like kubectl to list our virtual machines and manage them like any other entity in Kubernetes. I love that. So there's your virtual machine resource: we can see the kind says VirtualMachine. That is totally awesome. Now, people here are going to be very excited about what they just saw; where can they get more information, and when will this be coming? This will be available as part of Red Hat Cloud Suite in tech preview later this year, but we are looking for early adopters now, so give us a call, and also come check out our deep-dive session introducing container-native virtualization, Thursday at 2:00 p.m. Awesome. That is so incredible. So we went from the old to the new, from the closed to the open, the Red Hat way. You're going to be seeing more from our demonstration team; that's coming Thursday at 8 a.m.
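The point Itamar makes, that a KubeVirt virtual machine is addressed exactly like any other Kubernetes resource with a `kind` and metadata, can be illustrated with a minimal sketch. The manifest below is a simplified, KubeVirt-style resource, not pulled from the demo: the name, namespace, API version and sizes are made up, and it's parsed with plain Python rather than a live cluster:

```python
import json

# A simplified, KubeVirt-style VirtualMachine manifest (illustrative only;
# field values are made up, and a real manifest carries more spec detail).
manifest = json.loads("""
{
  "apiVersion": "kubevirt.io/v1alpha3",
  "kind": "VirtualMachine",
  "metadata": {"name": "wingtip-windows-vm", "namespace": "demo"},
  "spec": {
    "running": true,
    "template": {
      "spec": {
        "domain": {
          "cpu": {"cores": 2},
          "resources": {"requests": {"memory": "4Gi"}}
        }
      }
    }
  }
}
""")

# The point of the demo: the VM is addressed exactly like any other
# Kubernetes object, so generic tooling can inspect its kind and metadata.
def describe(obj):
    return f'{obj["kind"]}/{obj["metadata"]["name"]} (running={obj["spec"]["running"]})'

print(describe(manifest))
```

Because the VM is just another typed object, the same generic machinery that lists pods or services can list and manage virtual machines, which is what makes `kubectl`-style tooling work on them at all.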
Do not be late. If you liked what you saw today, you're going to see a lot more of that going forward; we've got some really special things in store for you. So at this point, thank you so much, Itamar and James; you guys are awesome.

Now we have one more special guest, a very early adopter of Red Hat Enterprise Linux. We've had over a 12-year partnership and relationship with this organization, and they've been a steadfast Linux and middleware customer for many, many years. Please extend a warm welcome to Raj China from the Royal Bank of Canada. [Music]

Thank you. It's great to be here. RBC is a large, global, full-service bank. We're the largest bank in Canada and top 10 globally; we operate in 30 countries and run five key business segments: personal and commercial banking, investor and treasury services, capital markets, wealth management, and insurance. But honestly, unless you're in the banking segment, those five business segments may not mean a lot to you. What you might appreciate is the fact that we've been in business for over 150 years. We started our digital transformation journey about four years ago, and we are focused on new and innovative technologies that will help deliver the capabilities and lifestyle our clients are looking for. We have a very simple vision, and we often refer to it as the digitally enabled bank of the future. But as you can appreciate, transforming a 150-year-old bank is not easy; it certainly does not happen overnight. To that end, we had a clear, unwavering vision, a very strong innovation agenda, and most importantly, a focus on flawless execution. Today in banking, business strategy and IT strategy are one and the same; they are not two separate things. We believe that in order to be the number one bank, we have to have the number one technology.

There is no question that most of today's innovation happens in the open source community, and RBC relies on Red Hat as a key partner to help us consume these open source innovations in a manner that meets our enterprise needs. RBC was an early adopter of Linux; we operate one of the largest footprints of RHEL in Canada, and the same with middleware. We had tremendous success in driving cost out of infrastructure by partnering with Red Hat, while at the same time delivering a world-class hosting service to our business. Over our 12-year partnership, Red Hat has proven that they have mastered the art of working closely with the upstream open source community, understanding the needs of an enterprise like us, and delivering these open source innovations in a manner that we can consume and build upon. We are working with Red Hat to help increase our agility and better leverage public and private cloud offerings. We've adopted virtualization, Ansible and containers, and are excited about continuing our partnership with Red Hat on this journey. Throughout this journey, we simply cannot replace everything we've had from the past; we have to bring forward those investments and improve upon them with new and emerging technologies. It is about utilizing emerging technologies, but at the same time focusing on the business outcome. The business outcome for us is serving our clients and delivering the information they are looking for, whenever they need it, in whatever form factor they're looking for. But technology improvements alone are simply not sufficient for a digital transformation; creating the right culture of change and adopting new methodologies is key. We introduced agile and DevOps, which has boosted the number of agile projects at RBC and increased the frequency at which we do new releases to our mobile app. As a matter of fact, these methodologies have enabled us to deliver apps over 20x faster than before. The other point around culture that I wanted to mention is that we wanted to build an engineering culture: one which rewards curiosity, trying new things, investing in new technologies, and being a leader, not necessarily a follower. Red Hat has been a critical partner in our journey to date as we adopt elements of open source culture into our engineering culture. What you've seen today about Red Hat's focus on new technology innovations, while never losing sight of helping you bring forward the investments you've already made in the past, is something that makes Red Hat unique. We are excited to see Red Hat's investment and leadership in open source technologies, to help bring the potential of these amazing things together. Thank you.

That's great. You know, going from the old world to the new with automation: the things you've seen demonstrated today are more sophisticated than anything any one company could ever have done on their own, certainly not by using a proprietary development model. Because of this, it's really easy to see why open source has become the center of gravity for enterprise computing today. With all the progress open source has made, we're constantly looking for new ways of accelerating that into our products, so we can take it into the enterprise with customers like the ones you've met today. Now, we recently made an addition to the Red Hat family: we brought CoreOS into the Red Hat family, and adding CoreOS has really been our latest move to accelerate that innovation into our products. This will help drive the adoption of OpenShift Container Platform even deeper into the enterprise, exactly as we did with Linux back in 2002. Today we're announcing some exciting new technology directions. First, we'll integrate the benefits of automated operations, so, for example, you'll see dramatic improvements in the automated intelligence about the state of your clusters in OpenShift with the CoreOS additions. Also, as part of OpenShift, we'll include a new variant of RHEL called Red Hat CoreOS, maintaining the consistency of RHEL for the operations side of the house while
allowing for a consumption of over-the-air updates from the kernel to kubernetes later today you'll hear how we are extending automated operations beyond customers and even out to partners all of this starting with the next release of open shift in July now all of this of course will continue in an upstream open source innovation model that includes continuing container linux for the community users today while also evolving the commercial products to bring that innovation out to the enterprise this this combination is really defining the platform of the future everything we've done for the last 16 years since we first brought rel to the commercial market because get has been to get us just to this point hybrid cloud computing is now being deployed multiple times in enterprises every single day all powered by the open source model and powered by the open source model we will continue to redefine the software industry forever no in 2002 with all of you we made Linux the choice for enterprise computing this changed the innovation model forever and I started the session today talking about our prediction of seven years ago on the future being open we've all seen so much happen in those in those seven years we at Red Hat have celebrated our 25th anniversary including 16 years of rel and the enterprise it's now 2018 open hybrid cloud is not only a reality but it is the driving model in enterprise computing today and this hybrid cloud world would not even be possible without Linux as a platform in the open source development model a build around it and while we have think we may have accomplished a lot in that time and we may think we have changed the world a lot we have but I'm telling you the best is yet to come now that Linux and open source software is firmly driving that innovation in the enterprise what we've accomplished today and up till now has just set the stage for us together to change the world once again and just as we did with rel more than 15 years ago 
with our partners, we will make hybrid cloud the default in the enterprise, and I will take that bet every single day. Have a great show, and have fun watching the future of computing unfold right in front of your eyes. See you later. [Applause] [Music]
SUMMARY :
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
James Lebowski | PERSON | 0.99+ |
Brent Midwood | PERSON | 0.99+ |
Ohio | LOCATION | 0.99+ |
Monty Finkelstein | PERSON | 0.99+ |
Ted | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
2002 | DATE | 0.99+ |
Canada | LOCATION | 0.99+ |
five and a half terabytes | QUANTITY | 0.99+ |
Marty | PERSON | 0.99+ |
Itamar Hine | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
David Ingham | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
RBC | ORGANIZATION | 0.99+ |
two machines | QUANTITY | 0.99+ |
Paul | PERSON | 0.99+ |
Jay | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Hawaii | LOCATION | 0.99+ |
50 terabytes | QUANTITY | 0.99+ |
Byrne | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
second floor | QUANTITY | 0.99+ |
Red Hat Enterprise Linux | TITLE | 0.99+ |
Asia | LOCATION | 0.99+ |
Raj China | PERSON | 0.99+ |
Dini | PERSON | 0.99+ |
Pearl Harbor | LOCATION | 0.99+ |
Thursday | DATE | 0.99+ |
Jack Britton | PERSON | 0.99+ |
8,000 | QUANTITY | 0.99+ |
Java EE | TITLE | 0.99+ |
Wednesday | DATE | 0.99+ |
Angus | PERSON | 0.99+ |
James | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
thousands | QUANTITY | 0.99+ |
Joe | PERSON | 0.99+ |
today | DATE | 0.99+ |
two applications | QUANTITY | 0.99+ |
two new machines | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Burr | PERSON | 0.99+ |
Windows | TITLE | 0.99+ |
2018 | DATE | 0.99+ |
Citigroup | ORGANIZATION | 0.99+ |
2010 | DATE | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
each machine | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Visual Studio | TITLE | 0.99+ |
July | DATE | 0.99+ |
Red Hat | TITLE | 0.99+ |
Paul Cormier | PERSON | 0.99+ |
Diamond Head | LOCATION | 0.99+ |
first step | QUANTITY | 0.99+ |
Neha Sandow | PERSON | 0.99+ |
two steps | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
UNIX | TITLE | 0.99+ |
second dimension | QUANTITY | 0.99+ |
seven years later | DATE | 0.99+ |
seven years ago | DATE | 0.99+ |
this week | DATE | 0.99+ |
36 keynote speakers | QUANTITY | 0.99+ |
first level | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
first step | QUANTITY | 0.99+ |
16 years | QUANTITY | 0.99+ |
30 countries | QUANTITY | 0.99+ |
vSphere | TITLE | 0.99+ |
Miles Kingston, Intel | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2017 presented by AWS, Intel and our ecosystem of partners. >> Hello and welcome back. Live here is theCUBE's exclusive coverage here in Las Vegas. 45,000 people attending Amazon Web Services' AWS re:Invent 2017. I'm John Furrier with Lisa Martin. Our next guest is Miles Kingston, he is the General Manager of the Smart Home Group at Intel Corporation. Miles, it's great to have you. >> Thank you so much for having me here, I'm really happy to be here. >> Welcome to theCUBE Alumni Club. First time on. All the benefits you get as being an Alumni is to come back again. >> Can't wait, I'll be here next year, for sure. >> Certainly, you are running a new business for Intel, I'd like to get some details on that, because smart homes. We were at the Samsung Developer Conference, we saw smart fridge, smart living room. So we're starting to see this become a reality, for the CES, every 10 years, that's smart living room. So finally, with cloud and all of the computing power, it's arrived or has it? >> I believe we're almost there. I think the technology has finally advanced enough and there is so much data available now that you have this combination of this technology that can analyze all of this data and truly start doing some of the artificial intelligence that will help you make your home smarter. >> And we've certainly seen the growth of Siri with Apple, Alexa for the home with Amazon, just really go crazy. In fact, during the Industry Day, yesterday, you saw the repeat session most attended by developers, was Alexa. So Alexa's got the minds and has captured the imagination of the developers. Where does it go from here and what is the difference between a smart home and a connected home? Can you just take a minute to explain and set the table on that? >> Yeah and I agree, the voice capability in the home, it's absolutely foundational. 
I think I saw a recent statistic that by 2022, 55% of US households are expected to have a smart speaker type device in their home. So that's a massive percentage. So I think, if you look in the industry, connected home and smart home, they're often used synonymously. We personally look at it as an evolution. And so what I mean by that is, today, we think the home is extremely connected. If I talk about my house, and I'm a total geek about this stuff, I've got 60 devices connected to an access point, I've got another 60 devices connected to an IOT hub. My home does not feel very smart. It's crazy connected, I can turn lights on and off, sprinklers on and off, it's not yet smart. What we're really focused on at Intel, is accelerating that transition for your home to truly become a smart home and not just a connected home. >> And software is a key part of it, and I've seen developers attack this area very nicely. At the same time, the surface area with these Smart Homes for security issues, hackers. 'Cause WiFi is, you can run a process on, these are computers. So how does security fit into all of this? >> Yeah, security is huge and so at Intel we're focused on four technology pillars, which we'll get through during this discussion. One of the first ones is connectivity, and we actually have technology that goes into a WiFi access point, the actual silicon. It's optimized for many clients to be in the home, and also, we've partnered with companies, like McAfee, on security software that will sit on top of that. That will actually manage all of the connected devices in your home, as that extra layer of security. So we fundamentally agree that security is paramount. >> One of the things that I saw on the website says that Intel is taking a radically different approach based on proactive research into ways to increase smart home adoption. What makes Intel's approach radically different? >> Yeah, so I'm glad that you asked that.
We've spent years going into thousands of consumers' homes in North America, Western Europe, China, etc. To truly understand some of the pain points they were experiencing. From that, we basically, gave all this information to our architects and we really synthesized it into what areas we need to advance technology to enable some of these richer use cases. So we're really working on those foundational building blocks and so those four ones I mentioned earlier, connectivity, that one is paramount. You know, if you want to add 35 to 100 devices in your home, you better make sure they're all connected, all the time and that you've got good bandwidth between them. The second technology was voice, and it's not just voice in one place in your home, it's voice throughout your home. You don't want to have to run to the kitchen to turn your bedroom lights on. And then, vision. You know, making sure your home has the ability to see more. It could be cameras, could be motion sensors, it could be vision sensors. And then this last one is this local intelligence. This artificial intelligence. So the unique approach that Intel is taking is across all of our assets. In the data center, in our artificial intelligence organization, in our new technology organization, our IOT organization, in our client computing group. We're taking all of these assets and investing them in those four pillars and kind of really delivering unique solutions, and there's actually a couple of them that have been on display this week so far. >> How about DeepLens? That certainly was an awesome keynote point, and the device that Andy introduced is essentially a wireless device, that is basically that machine learning an AI in it. And that is awesome, because it's also an IOT device, it's got so much versatility to it. What's behind that? Can you give some color to DeepLens? What does it mean for people? >> So, we're really excited about that one. 
We partnered with Amazon at AWS on that for quite some time. So, just a reminder to everybody, that is the first Deep Learning enabled wireless camera. And what we helped do in that, is it's got an Intel Atom processor inside that actually runs the vision processing workload. We also contributed a Deep Learning toolkit, kind of a software middleware layer, and we've also got the Intel Compute Library for deep neural networks. So basically, a lot of preconfigured algorithms that developers can use. The bigger thing, though, is when I talked about those four technology pillars; the vision pillar, as well as the artificial intelligence pillar, this is a proof point of exactly that. Running an instance of the AWS service on a local device in the home to do this computer vision. >> When will that device be available? And what's the price point? Can we get our hands on one? And how are people going to be getting this? >> Yeah, so what was announced during the keynote today is that there are actually some Deep Learning workshops today, here at re:Invent where they're actually being given away, and then actually as soon as the announcement was made during the keynote today, they're actually available for pre-order on Amazon.com right now. I'm not actually sure on the shipping date on Amazon, but anybody can go and check. >> Jeff Frick, go get one of those quickly. Order it, put my credit card down. >> Miles: Yes, please do. >> Well, that's super exciting and now, where's the impact in that? Because it seems like it could be a great IOT device. It seems like it would be a fun consumer device. Where do you guys see the use cases for these developing? >> So the reason I'm excited about this one, is I fundamentally believe that vision is going to enable some richer use cases.
The only way we're going to get those though, is if you get these brilliant developers getting their hands on the hardware, with someone like Amazon, who's made all of the machine learning, and the cloud and all of the pieces easier. It's now going to make it very easy for thousands, ideally, hundreds of thousands of developers to start working on this, so they can enable these new use cases. >> The pace of innovation that AWS has set, it's palpable here, we hear it, we feel it. This is a relatively new business unit for Intel. You announced this, about a year ago at re:Invent 2016? Are you trying to match the accelerated pace of innovation that AWS has? And what do you see going on in the next 12 months? Where do you think we'll be 12 months from now? >> Yeah, so I think we're definitely trying to be a fantastic technology partner for Amazon. One of the things we have done since last re:Invent is we announced we were going to do some reference designs and developer kits to help get Alexa everywhere. So during this trade show, actually, we are holding, I can't remember the exact number, but many workshops, where we are providing the participants with a Speech Enabling Developer toolkit. And basically, what this is, is it's got an Intel platform, with Intel's dual DSP on it, a mic array, and it's paired with a Raspberry Pi. So basically, this will allow anybody who already makes a product, it will allow them to easily integrate Alexa into that product with Intel inside. Which is perfect for us. >> So obviously, we're super excited, we love the cloud. I'm kind of a fanboy of the cloud, being a developer in my old days, but the resources that you get out of the cloud are amazing. But now when you start looking at these devices like DeepLens, the possibilities are limitless. So it's really interesting. The question I have for you is, you know, we had Tom Siebel on earlier, pioneer, invented the CRM category.
He's now the CEO of C3 IOT, and I asked him, why are you doing a startup, you're a billionaire. You're rich, you don't need to do it. He goes, "I'm a computer guy, I love doing this." He's an entrepreneur at heart. But he said something interesting, he said that the two waves that he surfs, they call him a big time surfer, he's hanging 10 on the waves, is IOT and AI. This is an opportunity for you guys to reimagine the smart home. How important is the IOT trend and the AI trend for really doing it right with smart home, and whatever we're calling it. There's an opportunity there. How are you guys viewing that vision? What progress points have you identified at Intel, to kind of, check? >> Completely agree. For me, AI really is the key turning point here. 'Cause even just talking about connected versus smart, the thing that makes it smart is the ability to learn and think for itself. And the reason we have focused on those technology pillars, is we believe that by adding voice everywhere in the home, and the listening capability, as well as adding the vision capability, you're going to enable all of this rich new data, which you have to have some of these AI tools to make any sense of, and when you get to video, you absolutely have to have some amount of it locally. So, that either for bandwidth reasons, for latency reasons, for privacy reasons, like some of the examples that were given in the keynote today, you just want to keep that stuff locally. >> And having policy and running on it, you know, access points are interesting, it gives you connectivity, but these are computers, so if someone gets malware on the home, they can run a full threaded process on these machines. Sometimes you might not want that. You want to be able to control that. >> Yes, absolutely. We would really believe that the wireless access point in the home is one of the greatest areas where you can add additional security in the home and protect all of the devices. 
>> So you mentioned, I think 120 different devices in your home that are connected. How far away do you think your home is from being, from going from connected to smart? What's that timeline like? >> You know what I think, honestly, I think a lot of the hardware is already there. And the examples I will give is, and I'm not just saying this because I'm here, but I actually do have 15 Echos in my house because I do want to be able to control all of the infrastructure everywhere in the home. I do believe in the future, those devices will be listening for anomalies, like glass breaking, a dog barking, a baby crying, and I believe the hardware we have today is very capable of doing that. Similarly, I think that a lot of the cameras today are trained to, whenever they see motion, to do certain things and to start recording. I think that use case is going to evolve over time as well, so I truly believe that we are probably two years away from really seeing, with some of the existing infrastructure, truly being able to enable some smarter home use cases. >> The renaissance going on, the creativity is going to be amazing. I'm looking at a tweet that Bert Latimar, from our team made, on our last interview with the Washington County Sheriff, customer of Amazon, pays $6 a month for getting all the mugshots. He goes, "I'm gonna use DeepLens for things like "recognizing scars and tattoos." Because now they have to take pictures when someone comes in as a criminal, but now with DeepLens, they can program it to look for tattoos. >> Yeah, absolutely. And if you see things like the Ring Doorbell today, they have that neighborhood application of it so you can actually share within your local neighborhood if somebody had a package stolen, they can post a picture of that person. And even just security cameras, my house, it feels like Fort Knox sometimes, I've got so many security cameras. 
It used to be, every time there was a windstorm, I got 25 alerts on my phone, because a branch was blowing. Now I have security cameras that actually can do facial recognition and say, your son is home, your daughter is home, your wife is home. >> So are all the houses going to have a little sign that says,"Protected by Alexa and Intel and DeepLens" >> Don't you dare, exactly. (laughs) >> Lisa: And no sneaking out for the kids. >> Yes, exactly. >> Alright, so real quick to end the segment, quickly summarize and share, what is the Intel relationship with Amazon Web Services? Talk about the partnership. >> It's a great relationship. We've been partnering with Amazon for over a decade, starting with AWS. Over the last couple of years, we've started working closely with them on their first party products. So, many of you have seen the Echo Show and the Echo Look, that has Intel inside. It also has a RealSense Camera in the Look. We've now enabled the Speech Enabling Developer Kit, which is meant to help get Alexa everywhere, running on Intel. We've now done DeepLens, which is a great example of local artificial intelligence. Partnered with all the work we've done with them in the cloud, so it really is, I would say the partnership expands all the way from the very edge device in the home, all the way to the cloud. >> Miles, thanks for coming, Miles Kingston with Intel, General Manager of the Smart Home Group, new business unit at Intel, really reimagining the future for people's lives. I think in this great case where technology can actually help people, rather than making it any more complicated. Which we all know if we have access points and kids gaming, it can be a problem. It's theCUBE, live here in Las Vegas. 45,000 people here at Amazon re:Invent. Five years ago, our first show, only 7,000. Now what amazing growth. Thanks so much for coming out, Lisa Martin and John Furrier here, reporting from theCUBE. More coverage after this short break. (light music)
SUMMARY :
and our ecosystem of partners. he is the General Manager of the Smart Home Group I'm really happy to be here. All the benefits you get as being an Alumni for the CES, every 10 years, that's smart living room. that will help you make your home smarter. and has captured the imagination of the developers. Yeah and I agree, the voice capability in the home, At the same time, the surface area with these Smart Homes One of the first ones is connectivity, and we actually One of the things that I saw on the website that says, Yeah, so I'm glad that you asked that. and the device that Andy introduced in the home to do this computer vision. I'm not actually sure on the shipping date on Amazon, Jeff Frick, go get one of those quickly. Where do you guys see the use cases for these developing? and all of the pieces easier. And what do you see going on in the next 12 months? One of the things we have since last re:Invent in my old days, but the resources that you get And the reason we have focused on those technology so if someone gets malware on the home, in the home is one of the greatest areas where you How far away do you think your home is from being, and I believe the hardware we have today is very the creativity is going to be amazing. so you can actually share within your local neighborhood Don't you dare, exactly. Talk about the partnership. and the Echo Look, that has Intel inside. General Manager of the Smart Home Group,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Bert Latimar | PERSON | 0.99+ |
Tom Siebel | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
60 devices | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Miles Kingston | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
McAfee | ORGANIZATION | 0.99+ |
Miles | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Siri | TITLE | 0.99+ |
35 | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
Western Europe | LOCATION | 0.99+ |
Lisa | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Amazon Web Services' | ORGANIZATION | 0.99+ |
Andy | PERSON | 0.99+ |
Five years ago | DATE | 0.99+ |
first show | QUANTITY | 0.99+ |
45,000 people | QUANTITY | 0.99+ |
CES | EVENT | 0.99+ |
today | DATE | 0.99+ |
2022 | DATE | 0.99+ |
Smart Home Group | ORGANIZATION | 0.99+ |
10 | QUANTITY | 0.99+ |
Amazon.com | ORGANIZATION | 0.98+ |
One | QUANTITY | 0.98+ |
Echo Show | COMMERCIAL_ITEM | 0.98+ |
Intel Corporation | ORGANIZATION | 0.98+ |
120 different devices | QUANTITY | 0.98+ |
100 devices | QUANTITY | 0.98+ |
four ones | QUANTITY | 0.98+ |
first | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
$6 a month | QUANTITY | 0.97+ |
four technology pillars | QUANTITY | 0.97+ |
55% | QUANTITY | 0.97+ |
7,000 | QUANTITY | 0.96+ |
First time | QUANTITY | 0.96+ |
first ones | QUANTITY | 0.96+ |
Echos | COMMERCIAL_ITEM | 0.96+ |
Alexa | TITLE | 0.96+ |
one place | QUANTITY | 0.95+ |
thousands of consumers' | QUANTITY | 0.95+ |
first party | QUANTITY | 0.95+ |
US | LOCATION | 0.94+ |
12 months | QUANTITY | 0.94+ |
Day Two Wrap Up | Nutanix .NEXT 2017
>> Announcer: Live from Washington D.C., it's theCube, covering .Next conference. Brought to you by Nutanix. >> We're back, this is Dave Vellante with Stu Miniman, and this the wrap of .Next, Nutanix's customer event, #NEXTConf and this is theCube, the leader in the live tech coverage for enterprise technology. Stu, second day. I got to say, Nutanix has always done a good job, innovative venues, they do funky, fun stuff with marketing, we haven't seen the end of it. We have another keynote today, there's a keynote tomorrow morning, big names, Bill McDermott's here, we just saw Peter MacKay, Chad Sakac is here. Who am I missing? >> Stu: Diane Greene >> Diane Gree was up yesterday. >> Y'know, thought leaders, had the CEO of NASDAQ on this morning Dave, y'know really good customers, thought leaders, Nutanix always makes me think a little bit, which I really enjoy. My fourth one of these Dave, usually by the fourth show I've gotten to, it's like I've seen it. Have we made progress, where are we going? >> I thought Sunil Podi's comment was really interesting, he said, "Look, we saw the trends, "we knew that hardware was going down." I mean, they're essentially admitting that they were a hardware oriented company, infrastructure company, we saw what was happening to infrastructure and hyper-converge, and we could just packed it up then, sold the company for a bunch of money, there were rumors floating around, you know they were pre-IPO, they easily could have sold this thing for a billion plus, all could have cashed out and made a buncha dough, and they said, "Y'know what, we're going to do something "different, we're going to go for it." You got to love the ambition, and so many companies today just can't weather that independent storm. I mean, you've seen it over and over and over again. 
The last billion dollar storage company that remained independent was NetApp, that was 14 years ago, now Nutanix isn't a storage company, but look around here, look at the analysts, a buncha storage guys that have grown up, and it's to me, Stu, it's a representation of what's happening in the marketplace. Storage as we know it is going away, and it always has transformed, y'know it used to be spinning disc drives, then it was subsystems, then it was the SAN, now it's evolving, these guys call it invisible infrastructure, call it whatever you want, but it's moving toward infrastructure as code, which is just a stepping stone to cloud. So your thoughts on the event, the ecosystem, and their position in the marketplace. >> Right, they reach a certain point, they've gone public, can they keep innovating? Look at a number of announcements there, we spent a lot of time talking about the new CloudZi service out there. >> Si? >> Zi. >> Zi, zi, sorry, you got it. (chuckles) >> Pronunciation of some of these, "it's Nutanix, right?" >> Nutonix, Nutanicks, (chuckles) >> They made jokes about the company last year, but this year, that's product, we're talking vision. The ink is still drying on the relationship with Google, doesn't mean they haven't been working for a while, but where this deal goes, interesting to see where it is six months from now, a year from now, because also Google, small player, I mean it wasn't to be honest, I was at the Red Hat Summit and they had a video of Andy Jassy saying, "We've extending AWS with OpenShift." And you're like wow. Red Hat has a position in a lot of clouds, but for Andy Jassy to make an appearance, Amazon, the behemoth in the cloud, that's good. Look, getting Diane Greene here, I said number one, it gives Nutanix credibility, number two it really pokes at VMware a little bit, she's like, "Oh, I did this before." And everybody's like, "Well, she's here now at Nutanix." 
Nutanix wants to be, that they've compared themselves to both Amazon, I think we hear it was Sunil or Dheeraj in an analyst session said they "want to be like the A Block." Not the V Block that EMC did, but the Amazon Block for the enterprise, or the next VMware, they talked about the new operating system. It's funny, in a lot of my circles, we've been trying to kill the operating system for a while, I need just enough operating system, I want to serverless and containerize all of these things because we need to modernize, and the old general-purpose processor and general-purpose operating system has come and gone, it's seen its day, but Nutanix has a play there. When I look at some of the things going on, we're talking about microsegmentation Dave, we're talking about multi-cloud and some interesting pieces. I like the ecosystem, I like that balance of how do you keep growing and expand where they can go into, leading the customers, but they're delivering today, they've got real products, they've got real growth, sure they have some challenges as to that competitive back and forth, but you asked Chad Sakac if this reminded him of Dell EMC, and kind of that partnership that they had for years, reminded me a little bit of kind of EMC and VMware too, once EMC bought VMware, VMware, the relationship they had, HP, and IBM, and other companies that they needed to treat as good or better than EMC. 
They're some of those tough relationships, and Dell with Nutanix, their partner, not only do they do Dell XC, but now they're doing like Pivotal on top of it, they can do Hyper-V deployments, Lenovo's another partner, Nutanix is broadening their approach, there's a lot of options out there and a lot of things to dig into, interesting, they keep growing their customers, keep delighting their customers, it reminds me of other shows we go to, Dave, like Amazon re:Invent, customers are super excited, You tell me about the Splunk conference and the ServiceNow conference where those customers are in there, they're excited, and Nutanix is another one of those, that every year you come, there's good solid content, there's a customer base that is growing and exciting and sharing, and that's a fun one to be part of. >> So, I want to ask you about VMware, it's kind of a good reference model. EMC paid out, I don't know, $630 million for VMware, which was the greatest acquisition in enterprise IT history, no question about it in terms of return. A couple questions for you, you were there at the time, you signed the original NDA between EMC and VMware, kind of sniffed em out. Would VMware's ascendancy been as fast and as successful, or even more successful, without EMC? Would VMware have got there on its own? >> I don't think so Dave, because my information that I had, and some of it's piecing together after the fact is VMware was really looking for that company to help them get to the next state. The fundraising was a little bit different back in 2003 than it was later, but rumors were Semantic was going to buy them. Everybody I talked to, you'd know better than me Dave, if Semantic had bought them, they would have integrated into all their pieces, they would have squashed it, the original talent probably would have fled much sooner. 
EMC didn't really know what they had, I had worked on some of the due diligence for some of the product integration, which took years and years to deliver, and it was mostly we're going to buy them. Diane had a bit of a tense relationship with Joe Tucci kind of from day one, and it was like okay, you're out there in Palo Alto, we're on the other coast, you go and do your thing, and you grow, and by the time EMC had gotten into VMware a little bit more, they were much bigger. So I think as you said, they're one of the great success stories, EMC did best in a lot of its acquisitions where it either let it ran a division and go, or let it kind of sit on its own and just funded it more, so I think that was a-- >> Well, and the story was always that Diane was pissed because she sold out at such a low price, but that's sort of ancient history. The reason I brought that up is I want to try to draw the parallel with Nutanix today, and come back to what you were saying about the A Block. When you look at Amazon, we agree, they have a lead, whether that lead is five years, seven years, four years, probably more like five to seven, but whatever, whatever it is, it's a lead, it's substantive. Beyond the infrastructure, the storage and the compute, they're building out just all kinds of services, I mean just look at their website, whether it's messaging, on and on and on, there's database, there's AI, there's their version of VDI, there's all this big data stuff, with things like Kinesis, and on and on and on, so many services that are much, much larger than the entire Nutanix ecosystem. So the reason for all this background is does Nutanix need a bigger, can Nutanix become it's ambition, which is essentially to be the next VMware, without some kind of white knight? 
>> So my answer, Dave, is if you look at Nutanix's ambition, one of the challenges for every infrastructure company today is, if you think, okay, we've talked about True Private Cloud, Dave, what services can I run on that? How can I leverage that? Look at Amazon, y'know, a thousand new services coming every year. Look at Google, they've got TensorFlow, really cool stuff, they've got those brilliant people coming up with the next stuff. How do I get that in my environment? Well, Nutanix's answer coming out of the show was, we're going to partner with Google, we're going to have that partnership, you're going to be able to plug in, and when you want to do your analytics and everything, use GCP. They're great at that, we're not, we know that, and you need to be able to leverage Google services to do that. The Red Hat announcement that I mentioned before is another way I can take OpenShift and bridge from my data center and my environment and get access to those services. The promise of VMware on Amazon is, yeah, we're going to have a similar stack that I can go to there, but I want to be able to access those VMware servers. Now, could it suck them eventually into all of Amazon and leave VMware behind? Absolutely, it's tough to partner with Amazon. So, the thing I've been looking at at almost every show this year is how are you tying into and working with those public clouds. We talked about it at VeeamON, Dave, they had Microsoft up on stage, they have partnerships with the public cloud-- >> David: HPE was up there. >> But the public cloud players... if you're not allowing your customers and the infrastructure that you're building to find ways to leverage and access those public cloud services, which not only are they spending $10 billion a year, each one of the big guys, on infrastructure to get all around the globe, but it's all of those new services ahead, moving up the stack.
To stitch that together in your own environment is going to be really challenging: how many different software pieces, how do I license it, how do I get it on? As opposed to, oh, I'm in the public cloud, it's a checkbox, okay, I want to access that, and I consume it as I need it. That consumption model needs to change, so I think Nutanix understands that's directionally where they want to go. I look at the Calm software that they launched and say, hey, you want to use TensorFlow? Oh, it's just a choice here, absolutely, go. Where is it and how do I use it? Well, some of these details need to be worked out; as Dheeraj said, "it's not like it's one click for every application, any cloud, anywhere." But that's directionally where they're going to make it easy. So all that cool analytic stuff that we cover a lot on theCUBE, a lot of that is now happening in the cloud, and I should be able to access it whether I'm in my private cloud or public cloud, and it's just going to be a consumption model, unless I have certain characteristics that mean I'm going to want to have that infrastructure, whether that's governance or locality. We talked to Scholastic yesterday, and they said, "Well, when you've got manufacturing in books, I need things close to where they're coming off the production line; otherwise there's things that I'm doing in the public cloud." So that's where we see it, when I talk to companies like I do here, or at the Vienna show last year, when I talk to Christian Reilly with Citrix, who had been at Bechtel for many years: there's reasons why things need to live close to what's happening. Y'know, we've talked a lot about Edge, and therefore public cloud doesn't win it all. I know we had one guest on this week that said, "Right, depending on what industry you're in, is it a 30/70 mix or a 70/30 mix?"
There's a lot of nuance to sort this out, and this is a long game, Dave. This change in the way we do things is a journey, and Nutanix has positioned themselves to continue to grow, continue to expand, with some good ambition to expand on, like the five vectors of support that they have, so I've liked what I've heard this week. >> So, in thinking about what we're talking about with VMware: the imperative for virtualization was so high in the early 2000s because we were coming out of the dot-com bust, IT was out of favor, and VMware was really the only game in town. There really wasn't a strong alternative: it had by far the best product, Microsoft Hyper-V was sort of in concept, and KVM and others were just really not there, so there really was no choice. It appealed to 100% of the IT shops, I mean, essentially. So I wonder, though, today, is the imperative for multi-cloud the same? The fundamental is yes, everybody has multiple clouds. But this industry has lived in stovepipes forever, and has figured out how to manage stovepipes; it manages them by fencing things off. So I wonder, is the imperative as high? You could maybe make an argument that it's higher, but I'm still not quite getting it yet, as it was in the early 2000s, where the aspirin of virtualization to soothe the pain of "do more with less" was such an obvious and game-changing paradigm shift. I don't see it as much here. I see people still trying to figure out, okay, what is our cloud strategy? That's number one. Number two is the competition seems to be much more wide open; it's unclear at this time that any one company has a fast track to multi-cloud. >> I think you've got some really good points there, Dave. A thing that I've pointed out a few times is that one of the things that bothered me from the early days with VMware is that, from an application standpoint, it tended to freeze my application. I didn't have a reason to kind of move forward and modernize my application.
Back in 2002 it was like, oh, I'm running Windows NT with a really old application, my operating system is going end of life, well, maybe it's time to uplift. Oh wait, there's this great virtualization stuff, and my hardware's going end of life too. No, shove it in a VM, let's keep it for another five years. Oh my god, that application sucked then, it's going to suck even more in five years, and workforce productivity was way down. So, the vision for Nutanix is they're going to be a platform that's going to be able to help you modernize your environment. And how do we get beyond... is it virtualization, is it containerization, is it a lot of the cloud-native pieces, how does that fit in? We're starting to hear a little bit more of it. A critique I'd have had on HCI about two years ago was that it was the same applications that were in my VMware SAN (not vSAN, but just my traditional storage area network) that were running on Nutanix. We're starting to see more interesting applications going on there. And look, Nutanix has a bullseye on them: there are all the HCI direct replacements, there is the threat of the cloud, and I haven't heard as many SaaS applications living on Nutanix as I do when we talk to all-flash-array companies, Dave. Every single one of them can roll out, here's all these SaaS deployments on our environment, just scalable environments that build for the future. I haven't heard it as much from Nutanix. >> So VMware was aspirin. Nutanix originally started as aspirin, and now they're pivoting to vitamin. Who are they up against? Who do you like? Who are the horses on the track? Let's analyze the race and then wrap.
>> Yeah, so when Nutanix got into this business, it was, well, they're helping VMware environments. It was 100% VMware when they first started, and that relationship with VMware was really tough. They've lowered that too: they've now got what, 28% running AHV, they've got a little bit on Hyper-V, but they've still got about 60% of their customers on VMware. So VMware, y'know, huge challenge. vSAN has more customers than anyone in the hyperconverged infrastructure space, easy, in number of customers, because the virtualization admin has taken that. Microsoft, huge potential threat: Azure Stack's coming this year (it's been coming, it's been coming, it's really close there), and all the server guys are lining up. Microsoft's a huge player. Microsoft owns applications, they're pulling applications into their SaaS offerings, they're pulling applications into Azure. When they launch Azure Stack, even if, with the 1.0, you looked at it on paper and said Nutanix is better, well, Microsoft's a huge threat to both VMware, which uses a lot of Microsoft apps, as well as Nutanix. So those are the two biggest threats. Then of course, there's just the general trend of the push to SaaS and the push to public cloud, where Nutanix is starting to play in the multi-cloud, as we talked about, and Calm and the DR cloud services are good. But can Nutanix continue to stay ahead of their customers? They're ahead of the vast majority of enterprises, but can they convince them to come on board with them, rather than some of these big guys? Nutanix is a public company now, they're doing great, but yeah, it's a big TAM that they're going after, and that means they're going to face attacks from every side of the market. >> I see HCI as one where you've got a leader, and that leader can make some good money. I don't see multi-cloud as a winner-take-all market, because I think IBM's going to have its play in multi-cloud, HPE has its play in multi-cloud, Dell EMC is going to have its play in multi-cloud.
You've got guys coming out of different places, like ServiceNow, who's got an IT operations management practice, built a big business, hundreds of millions of dollars of business there, coming at multi-cloud. So there are a lot of different competitors that are going to be going for it, and some of them with very large service organizations that I think are going to get their fair share. So I would predict, Stu, that this is going to continue to be... multi-cloud is going to be a multi-stovepipe cloud for a long, long time. Now, if Nutanix can come in and solve that control plane problem, and demonstrate substantial business value, and deliver competitive advantage, y'know, that might change the game. It's difficult at this point in 2017 to see that Nutanix, over those other guys that I just mentioned, has an advantage, a clear advantage. Maybe from a product standpoint, maybe. But from a resource standpoint, a distribution channel, services organization, ecosystem, all those other things, they seem to me to be counterbalancing. Alright, I'll give you the last thought. >> Yeah, so it's great to see Nutanix. They're aiming high, they're expanding into a couple of areas, and they keep listening, so I hope they keep listening to their customers and expand their partnerships. SaaS customers would be really interesting; service provider is something that they've gotten into a little bit, but there's plenty more opportunity for them to go there. Dave, personally for me, it's been a company I've watched since the earliest days, and it's been a pleasure to watch. Y'know, I think back, right, VMware, you said, I think it was a hundred-person company when I first started talking to them and Diane Greene, and I look at where VMware went. I've been tracking Nutanix now for five years, and it reminds me a lot of some of those trends. For a company that was 20 people when I first knew them, to hear they're almost 3,000 boggles the mind. I've been to their headquarters a bunch.
So it's been fun to watch the Nutanix army, and we've loved watching it from our angle. >> Well, and these events are very good events, and so there's a lot of passion here, and that's a great fundamental for this company. So I'm a fan. I think it may be undervalued; I think it very well may be undervalued. >> Wall Street definitely doesn't understand this stuff. >> Alright Stu, great working with you this year, (chuckles) this month, this quarter, certainly this show, so great job. I really appreciate it. >> Stu: Thanks, Dave. >> There's a big crew behind what Stu and I, and John Furrier, and Jeff Frick, and others do here. Here today with us: Ava, Patrick, Alex, Jay, you guys have had an awesome spring. Brendan is somewhere; I guess Brendan is doing the keynote right now. So, fantastic job, as always, to Kristen Nicole and her team writing up the articles, Jay Johanson back at the controls, Bert with the crowd shots. Everybody, really appreciate all your support, and thanks for watching, everybody. We'll see you... we've got a little break, I think, in the action, 'cause it's July Fourth, well, it's Canada year, or Canada week-- >> Canada Day and Independence Day next week. >> And Independence Day in the United States, and then we'll be at Infor Inforum, second week of July. I'll be there with Rebecca Knight and the crew, so watch for that. Check out SiliconAngle.com for all the news, Wikibon.com for all the research, and theCube.net to find all these videos. Youtube.com/SiliconAngle, it's everywhere; if you can't find it, you're not on Twitter, you're not on social. Thanks for watching, everybody. This is Dave Vellante with Stu Miniman, we're out. (lo-fi synthesizer music)
Day 3 Wrap Up | ServiceNow Knowledge15
live from Las Vegas, Nevada, it's theCUBE, covering Knowledge15, brought to you by ServiceNow. We're back. This is Dave Vellante with Jeff Frick. This is theCUBE, SiliconANGLE's continuous live production of Knowledge15, ServiceNow's, I have to say, awesome customer conference. 9,000 people. We always say, Jeff, that this is, you know, one of our favorite conferences. Absolutely, it really is. It's just tremendous: the innovation, the excitement, the customer stories. You've never seen so many satisfied, happy, you know, excited customers, a great management story, messaging that matches what's going on in the market, a lot of fun. Cloud, we heard about productivity increases, expanding beyond IT, some really cool new development environments, some new capabilities, mobile, modern technologies that this company is using; the audience loved it. And we heard today about a lot of cloud, high availability, ready for primetime, a lot going on. And always the passionate customers. I mean, I think it's an interesting gauge, for all the shows that we do, to look at the percentage of customers that are on our show and are willing to come on and talk about what they do, versus just executives and partners and kind of the more normal set, and we continue to have just a tremendous representation here at ServiceNow. Now we've been coming for three years, our third year in a row, and we're getting a bunch of new customers that we hadn't had on before. And really, that's the thing that I think is great. I love the kind of completion, full circle, of the vision that Fred talks about when he sits down: he tells the story every year about building the platform that nobody wanted to buy, because it was just a platform, and no one has budget for a platform; they have budget for applications that solve problems. Put the application in play, sell it, be successful, and then slowly that platform play comes back out, as other people jump on and develop new apps, new places to go. And it really seems to kind of be hitting a stride, not that it
wasn't hitting a stride a year ago at Moscone. Y'know, I remember my friend Omer Peres, who was the CIO of Aetna International. When I first met him in the early 2000s, David Floyer and I had a CIO consultancy, and Omer came in and was our sort of, you know, advisor; he worked with us for many years, and we had a lot of fun. And I used to ask him, as a CIO, what's the one thing that you would want out of a software company for your IT operations? And he said, I want the ERP of IT. So this was 2001, 2002. We were like, wow, that's a big task, so not something we were going to build. But that's essentially what ServiceNow has built, right? The ERP of IT; they've used that terminology. You know, that whole notion of, I'm making changes to my infrastructure, and I need a single system of record that can manage those changes and document them, make sure I'm in compliance with those changes, have an audit trail for those changes, and then extend into other business processes. And that's exactly what these guys have built. But the neat thing is, ERP carries with it such a heavy connotation of big implementations, and classic old-school Accenture and SAP coming in. That's not going to sell. Best marketing, right? But now these guys are delivering the function, but using today's modern technologies. It's cloud-based, it's continuous innovation, it's ongoing improvements, you know, they're talking about rolling 30 days, and not having this big monolithic let's-design-it, let's-build-it, let's-deliver-it. Now, as they do that and push out, well, that's the thing they have to worry about, because people know what their platform looks like. And it's like when Maritz talked about the software mainframe, and people said, oh, don't use that term, but essentially that's a pretty powerful concept in the virtualization world, and I think "ERP of IT" is very powerful here. The other interesting thing is we see ServiceNow extending into non-IT domains throughout the organization. We saw there were announcements of Salesforce extending inward, taking, you know,
what is normally sort of their CRM system, and now driving toward HR. And we've been saying all week... two years ago we said, wow, App Creator, Service Creator, that's like a PaaS layer, that's kind of like Salesforce, and it's interesting to see how the opportunities are going to collide down the road. And that's exactly what's happening. You'd expect that for a company like ServiceNow, that has a 40 to 45 billion dollar TAM: they're going to run into a lot of players. And their advantage is they're running into those places with, what Frank Slootman calls, their homies, which are IT people. Why is that an advantage? The reason that's an advantage is because IT touches every aspect of the business. Everybody gets an IT tax, right? Why do I get it? It's like the government, they're everywhere in your life, you can't get away from it. Same thing with IT, it's everywhere: whether it's marketing, finance, sales, logistics, HR, IT technology is the substrate and touches every part of the business. As a result, IT has purview over that entire domain (maybe not the right word), but it's got visibility around the entire set of processes. So it's going to be a really interesting dynamic as this company grows into new spaces. Look at a company like Salesforce: they're coming at it from a sales force angle, right? A very important function within the company, but, you know, does it touch HR directly? Does it touch logistics? It affects, you know, finance, but do they support those processes? No. So that's why I would say that ServiceNow has the advantage. The flip side of that is you get a company like Salesforce, big company, hot company, huge community... a very, very interesting dynamic emerging there. Yeah, and it is kind of the base, the community, from which you grow. And I thought some of the interesting stories that came up over the last couple days were where the IT guy had an efficient process, an effective process, that gets people a new laptop, to onboard new
employees, and the people in the department said, hey, that's pretty cool, and you got that done pretty well, how could we do that for some of our internal processes? So, you know, they almost have IT now as an internal sales force. We hear over and over again about the IT role changing, and really building stores for their services, and really getting entrepreneurial and changing the company. There's just a really good vibe. And, you know, most great tech companies have a really strong leader at the helm, with a personality that helps really define that company. You see it with Oracle, you see it with Apple, you know, with Jobs. And Fred Luddy is a rock star, but he's so, he's such a humble guy, he's so approachable. He walks around and people are running up taking selfies with him, and, you know, he's one, so humble, but then two, don't discount the vision. The guy is super smart. And still one of our favorite interviews we ever did was with Doug Leone two years ago, describing his impression when he first talked to Fred, and listening to that vision. I can't remember the exact quote, but basically, he's a really smart guy, and he can make it really simple, and he knows where he's going. Well, what I like about Fred Luddy... well, first of all, I'm a groupie, I admit it, I tweeted out I'm a Fred Luddy groupie, and I'm with a bunch of our homies here, I guess. I mean, I am one, only because he's just a guy who's got tremendous vision. You can talk to him about virtually any kind of technology subject; obviously he can talk about ServiceNow. I just remember one of our interviews, I think it was last year, or maybe two years ago, we're like, Fred, you know, we know you're super busy, you probably got to run, and he goes, no, I got time, let's keep going. Which I love. I mean, it's just, a lot of times at these conferences the executives are so stressed out, because they're being pulled in a million different directions, and Fred just kind of takes it
all in stride. He loves talking to the people, pressing the flesh. People come up, they want to touch him, right? But, you know, you're the analyst, you study the numbers, you look at this: where do you think the potential headwinds are? Obviously they're growing, and the bigger profile they get, the more targets are going to start coming on their back. Where do you think some of the headwinds are going to come from? Well, I mean, the near-term headwinds obviously are currency related, and that's what sort of, you know, knocked ServiceNow off of the $12 billion market cap peak last Friday. It has recovered. They had the financial analyst session this week, and clearly they communicated the story. In fact, talking to Mike Scarpelli, the CFO, he said, look, when you compare the pre-currency-fluctuation numbers, we blew it out, okay? And I think what the street did... you know, Furrier was saying, well, the street really doesn't understand. I think the street generally understands the opportunity, because they see high growth, they see a big TAM, they see great management, they see happy customers. I mean, what more do you need in an investment, right, beyond the valuation metrics, obviously, and cash flow? I think that what the street does understand is that there is a big opportunity here. So I think that Scarpelli and Slootman communicated in a way that scared the street a little bit, because they were being conservative: they gave a little lighter guidance, right, and the street is used to ServiceNow just blowing away its numbers. I said on Friday, this is really healthy, taking some air out of the bubble. Great, love it, very good. It's a really healthy thing, I like to see this kind of dynamic. You get scared when companies start to, you know, expand beyond their TAM. So to answer your question specifically, and it sounds like a cliche, but I really do see that ServiceNow's headwinds and risks are execution
risks. I think they control their own destiny. It's like a football team that can win out and make the playoffs; that's the situation that ServiceNow is in right now. It's execution. We heard from Jay Anderson, and I think IT scale, internal IT scale, is a risk, and he's got a very, very important job, that's number one. Number two is, I think, you know, we heard from Dan McGee on the availability piece; they are making some very bold claims about availability, and a focus on security, so that obviously is something that they've got to pay attention to, the ability to scale their cloud. But I really do see it as execution risk. I don't see competition right now, even though everybody, you know, has said for the last several years, oh, we've got the ServiceNow killer. We're not seeing the ServiceNow killer emerge, nothing close to it. You talk to customers, and it's very clear they're not just spending on admin seats. And then what do you think in terms of... now we've seen, you know, Amazon kind of lift up the covers on their cloud business, and expose that a little bit more to the street, and start to break those numbers out; what's the impact of that on these cloud-based businesses, and how they continue to grow? I think that's interesting. So Amazon today announced earnings and broke out AWS: 1.56 billion dollars in revenue, 256 million dollars in operating profit, that's about a 17-percent operating margin. I have been saying for two or three years now that AWS is far more profitable than people realize. Everybody calls it a race to zero. The guys who say it's a race to zero are the guys who can't compete with Amazon's cost structure. A seventeen percent operating margin is not a race to zero. Now, what Jeff Bezos and Andy Jassy decide to do with that operating profit is a different story. They'll pour it back into the business, they'll expand their capex, because Amazon is one big lifestyle business for Jeff Bezos. But that's fine. So I have been saying, and I've drawn the curves, that what essentially Amazon
is doing is they're taking the old marginal economics of outsourcing, which was "my mess for less": as you grow scale, as you do more volume, your marginal economics actually get worse; there are diseconomies of scale. That's the opposite of software. In software, we learned from Microsoft in the PC era that the more volume you do, the better your marginal economics, and essentially your marginal costs go to zero. What Amazon is doing is they're taking the outsourcing line, the provisioning of services, you know, technology services, infrastructure services, servers and storage, and they're tracking the software curve. So that means they're driving costs down lower than any IT shop on the planet. I don't care if the big banks think that they can compete with Amazon on cost structure; long term they can't, in my opinion. Now, they can compete in other ways, right, you know, with proprietary sort of value-added IP, but on cost, Amazon, Google, Microsoft are going to have a volume advantage, and we're seeing it now in the numbers. It's not a coincidence that Amazon is at a seventeen percent AWS operating margin: it's because it's not a race to zero, they've got better marginal economics. So what does that have to do with ServiceNow? We've heard a lot about multi-tenant versus multi-instance. I think, on balance, from a pure infrastructure standpoint, Amazon is going to have a better cost structure than ServiceNow. But companies like ServiceNow and Oracle, who have differentiable advantage through software, can sell software subscriptions, or software licenses in the case of Oracle, and can make up that cost disadvantage, in my opinion, in higher-margin software, and that's exactly what you see with ServiceNow. I don't think they'll have the marginal economics of Microsoft, but it's a great business model long term. Yeah, and the other two pieces of it that I think are really important... with Bezos especially, I mean, the guy's a
visionary, and he's making enough money to execute what he wants to do, and people don't believe it, but they haven't believed it for 20 years, and he continues to evolve the business. And the other thing: people have been outsourcing their payroll for how long? Why did it take so long to start to outsource your IT infrastructure, when people have been outsourcing payroll forever? I mean, if you are focused on a particular business, you can out-execute people trying to do the same thing, and that's the other advantage that ServiceNow has: they're very focused. And I think, as some of the guests this week said, the agenda isn't to be a general-purpose cloud: we run our application, and we run our application better than anyone else. And oh, by the way, it just so happens that our application is really a platform, and there's a whole lot of other applications that you can build on it beyond the ones that we did. So I think it's a really good opportunity. I kind of like the data point that we heard this week, I don't know if you picked up on the nuance, but several executives at ServiceNow said that their intelligence says that most customers are saying, we want to place most of our workload, over time, into the public cloud. Now, you could say ServiceNow is biased, okay. EMC and VMware are going to say the exact opposite, right? IBM's going to say, no, most of the world is going to be hybrid, okay? So you've got Andy Jassy on one side saying the whole world's going to the public cloud, and you've got, you know, Joe Tucci on the other end saying most of the world's going to be hybrid. How do you square that circle? I think that the growth workloads are very clearly going into the public cloud; there's no question about that. And, you know, it's just the way the numbers work: if you've got public cloud workloads growing at twenty, thirty, fifty percent a year, and you've got private cloud workloads growing at zero percent a year, or two percent a year, at some point they're going to
catch up, right? So I think the vast majority of work is going to be done, over time, in the public cloud. That's not to say everybody's going to, you know, do a big switch. There's still plenty of applications that are 20 years old that are going to stay, you know, behind the four walls of the data center within a company, but the economics of doing that are not going to be as good, so you have to have other reasons, whether it's, you know, really good business-value reasons, competitive-advantage reasons, security, or compliance. Compliance, I think, is a huge one. Well, I mean, Amazon has great security; the issue with Amazon is they won't do one-offs. ServiceNow, you know, will go belly-to-belly with customers and bend over backwards and do things for enterprise customers that Amazon won't. This is why you saw, when Workday launched its analytics service on AWS, nobody bought it, because they said, well, I just negotiated an SLA and a security, you know, deal with you, and we've agreed on the parameters of that; now you're saying to access my analytics piece I've got to go with Amazon's SLA? That's not cool, I can't get that by my lawyers, forget it, it's too hard, right? So, yeah, I think people really kind of need to think about that. ServiceNow is in an interesting position to be able to do those things for the enterprise that are what Amazon would consider unnatural. Amazon's strategy is any color you want as long as it's black: let's add things over time that everybody can take advantage of. By the way, I think that's a great strategy, and it's a long-term winning strategy. But so, the way you compete with Amazon... it's interesting, somebody tweeted that it's kind of weird to see Dan McGee compare infrastructure-as-a-service from Amazon with ServiceNow. Okay, yes, that's true. On the other hand, you know, from a conceptual standpoint, I'm putting stuff in the cloud, so why not think about it? So what does that mean? How do you compete with Amazon's
ecosystem? The way you compete is you have differentiable advantage, with IP that allows you to capture margins that reflect the value you're delivering. ServiceNow has that, I think, very clearly. Oracle has that; I'd mention Oracle even though they don't have the volume that many of the others have. And there are many, many others that have niches that Amazon doesn't want to try. >> And it's vertical, it's a little more specific, right? It's really a good focus on something. >> Well, I think Salesforce very clearly has that differentiable advantage, and Workday, and many, many companies out there have that. But Workday's winning, ServiceNow is winning, and you're clearly seeing Amazon win. The cloudification, the SaaS-ification of IT is here, it's now, and it's not going to stop. >> No, it's not going to stop. So we've been here for three days; I think we had 45 or so interviews. I'm going to get you with the bumper-sticker question, because we know you've got to fly back to Boston, so it would be a long drive: what's the flag that hangs off the back of the plane, your banner as you leave, after 40-some-odd interviews over three days at our third consecutive ServiceNow Knowledge show? >> So to me, it's attacking the productivity problem within organizations, which, by the way, is a whole other vector of discussion, right? So that's a whole other discussion. I have concerns about that: what are we going to do with all this increased productivity? We'd better put it into innovation, and we'd better educate our young people so that they can create new value. So that's sort of one piece. I think the second, to me, is the innovation on the software platform: the developer focus, the technology behind ServiceNow, the mobile capabilities, and the emphasis on new tech and on real time. Very, very impressive. And then I think the third is the cloud
piece: the DevOps, the developer ecosystem adding value for the enterprise. Big opportunity. And I guess that ecosystem, really, is my big takeaway of ServiceNow Knowledge 15: that ecosystem development, that expansion of the ecosystem. That's where this company, this community, gets its leverage, and I think that's a winning formula. >> Yeah, my take is a slightly different angle, and it really goes back to what our guests kept telling us: people are always chasing innovation internally. How do I get innovation from my own people, not necessarily the ones building our core products, but the ones executing our strategy? And to me, what we've seen at this show specifically is that if you simply enable more people to innovate, and you lower the barriers for them to try to execute ideas, it's just simple math: by having more people contributing, you're going to get more innovation. And the other piece that's really important is that there needs to be a low cost of entry to try, and if it fails, you need to be able to fail fast and get out. So now you've got all these people in all these departments seeing an opportunity to build a new application that saves time, that is a little bit more efficient than what they were doing before. You multiply that by hundreds and thousands of people, and suddenly you're really getting significant improvements in efficiency. And maybe what I think is most exciting about these cloud-based applications, and the software world in which we live, where the barriers to actually develop things keep dropping, is the codeless developer: a really exciting opportunity that will enable companies to expose more innovation within their own workforce. >> Good stuff. All right, I think we wrap. I want to thank ServiceNow, our awesome hosts for this conference, for holding this conference, creating a great event, and having us here now for
the third year in a row. It really is a pleasure for us and theCUBE team to be a part of this. Greg Stewart, shout out, great job. Patrick Leonard, thank you. Matthew, we hear you back there doing the countdown: awesome, awesome job. As always, the entire CUBE team, and John Furrier, my co-host, who is getting everything up on YouTube and on SiliconANGLE TV. Go to SiliconANGLE TV, where all the action is. Go to SiliconANGLE.com, where Kristen Nicole and her team are pumping out content. Bert Lattimore's on the CrowdChat, at crowdchat.net/know15. Great job, thank you for all your help. And check out Wikibon, at premium.wikibon.com; all the research from this show will be summarized there. You know we're always on top of things. We really appreciate everybody watching and sending in your comments and your tweets. Thanks, everybody. Thank you. We will see you next time. Let's see, what's next? >> EMC World. >> Yeah, EMC World, two weeks, back here in Vegas. So again, thanks to everybody in the ServiceNow Knowledge community. That's a wrap. This is Dave Vellante, with Jeff Frick, for John Furrier. We'll see you next time.
SUMMARY :
that are going to stay you know behind
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Omer Peres | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Greg Stewart | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Mike scarpelli | PERSON | 0.99+ |
40 | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Jeff Bezos | PERSON | 0.99+ |
Dan McGee | PERSON | 0.99+ |
Bert Lattimore | PERSON | 0.99+ |
Doug Leone | PERSON | 0.99+ |
Moscow | LOCATION | 0.99+ |
Patrick Leonard | PERSON | 0.99+ |
Matthew | PERSON | 0.99+ |
Fred | PERSON | 0.99+ |
45 | QUANTITY | 0.99+ |
boston | LOCATION | 0.99+ |
three years | QUANTITY | 0.99+ |
jay anderson | PERSON | 0.99+ |
Vegas | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
17-percent | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
dan McGee | PERSON | 0.99+ |
last year | DATE | 0.99+ |
third year | QUANTITY | 0.99+ |
seventeen percent | QUANTITY | 0.99+ |
30 days | QUANTITY | 0.99+ |
9,000 people | QUANTITY | 0.99+ |
1.56 billion | QUANTITY | 0.99+ |
two pieces | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
seventeen percent | QUANTITY | 0.99+ |
20 20 years | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
Omer | PERSON | 0.99+ |
amaz | ORGANIZATION | 0.99+ |
last Friday | DATE | 0.99+ |
two years ago | DATE | 0.99+ |
Dave vellante | PERSON | 0.99+ |
12 billion dollar | QUANTITY | 0.99+ |
John furrier | PERSON | 0.99+ |
three years | QUANTITY | 0.99+ |
microsoft | ORGANIZATION | 0.99+ |
twenty thirty fifty percent a year | QUANTITY | 0.98+ |
Andy | PERSON | 0.98+ |
friday | DATE | 0.98+ |
this week | DATE | 0.98+ |
early 2000s | DATE | 0.98+ |
ORGANIZATION | 0.98+ | |
a year ago | DATE | 0.98+ |
third | QUANTITY | 0.98+ |
Ferrari | ORGANIZATION | 0.98+ |
Jeff | PERSON | 0.98+ |
John furrier | PERSON | 0.98+ |
Apple | ORGANIZATION | 0.97+ |
Beth | PERSON | 0.97+ |
Frank | PERSON | 0.97+ |
Salesforce | TITLE | 0.97+ |
today | DATE | 0.97+ |
Las Vegas Nevada | LOCATION | 0.96+ |
Day 2 Wrap Up w/ Holger Mueller - IBM Impact 2014 - theCUBE
>> theCUBE at IBM Impact 2014 is brought to you by headline sponsor IBM. Here are your hosts, John Furrier and Paul Gillin. >> Hey, welcome back everyone. This is SiliconANGLE's theCUBE, our flagship program. We go out to the events and extract the signal from the noise. We're ending day two of two days of wall-to-wall coverage with myself and Paul Gillin, uh, 10 to 6:30 every day. We'll take in as much as we can just to get the data and share it with you, and extract the signal from the noise. I'm John Furrier, founder of SiliconANGLE, here with Paul Gillin and our special guest, Holger Mueller from Constellation Research, an analyst covering the space. Ray Wang was here earlier; you've been here for the duration. Um, we're going to break down the event and do a wrap-up here. It was a huge Impact event, 9,000 people. Uh, Paul, I want to go to you first and get your take on the past two days. We've seen a lot of Kool-Aid injection attempts, but IBM people were very, very candid. I mean, I didn't find it, uh, very forceful at all from IBM. They're pragmatic. What are your thoughts? >> I think pragmatism is what I take away, John. That's a good word for it. Uh, what I saw was, uh, not a blockbuster. There was not a lot of, uh, hype and overstatement about what the company was doing. I was impressed with Steve Mills in our interview with him yesterday. We asked about blockbuster acquisitions, and he said, basically, why? I mean, why should we take on a big acquisition that is going to create a headache, uh, for us in integrating it into our organization? Let's focus on the spots where we have gaps and fill those. And that's really what they've done: they really have put their money where their mouth is in doing these 150 or more acquisitions over the last, uh, three or four years. Um, the one question that I would have: I don't think there's any doubt about IBM's commitment to cloud as the future, or about their investment in big data analytics. They certainly have put their money where their mouth is; they're over $25 billion invested in big data analytics. One question I have coming out of this conference is about Power, and about the decision to exit the x86 market and really create confusion in a part of their business partners and their customers about how they're going to fill that gap and where they're going to go for their actual needs. And clearly Power8 is the future; it will fill that role in the IBM portfolio, but they've got to act fast. >> Do you think there's a ripple effect, then? Will that move cause a ripple effect in their ecosystem? >> Well, I've talked to two IBM partners today, fairly large IBM partners, and both of them have expressed that their customers are suffering some whiplash right now, because all of a sudden the x86 option from IBM has gone away. And so it's frozen their purchasing process, and some of them are going to HP, some of them are looking at other providers. Um, I don't think IBM really has told a coherent story to the market yet about how to fill it. >> And Power's new, so they've got to prop that up. So what you're saying is, okay, HP is going to get some new sales out of this, things are frozen for IBM, and yet the Power story's probably not clear. Is that what you're hearing? >> I don't think the Power story is clear. I mean, certainly it was news to me that IBM is taking on Intel at this event, and I was surprised by that. >> That it was a surprise? Hold on, I've got to go to you, Holger, because we've been sitting here in theCUBE, we've been having all the execs come here, and we've been getting briefed here in theCUBE and sharing that with the audience. You've been out on the ground. We've bumped into you and all the other analysts, and you've been in the briefings, the private sessions, the rooms; you've been out in the trenches there. What are you finding, what have you been hearing, and what are some of the soundbites that you could share with the audience? What are the differences? >> The IBM executives have cloud credibility now; you can see it in their body language. At Impact one year ago they didn't have SoftLayer; at the time there was nothing immediately actionable, nothing they could do. That's a big difference. Which in itself is fine, but I agree with what you said before: the messaging gap is that they don't tell the customers, here's where we are right now, and take you by the hand from your door onward. >> So it's very interesting. I mean, consider that IBM finalized the acquisition only last July; it's only been nine months since SoftLayer was acquired, and everything is SoftLayer now. It leads me to wonder who acquired whom. Did IBM acquire SoftLayer, or did SoftLayer actually acquire IBM? Because SoftLayer is so strategic to IBM's cloud strategy going forward. >> Very strategic. I think it's probably the most transformative piece of their next-generation agenda, and you've heard me say that since seven or eight weeks ago. It's moving very fast. >> What do you think about the social business? Is that hanging together, that story? It's obviously a relevant direction; it's kind of a smarter-planet positioning. Certainly businesses will be social. Are you seeing any meat on the bone there on the collaboration side? >> It's one of the weakest parts; it has to be built again. They also have an additional offering for HR, which has been repositioned. It's definitely something which can drive change. >> I have to say, John, I was struck by the lack of discussion of social business in the opening keynote in particular. Mobile, big data, I mean, that came across very clear, but I've been accustomed to hearing the social business drumbeat, and it didn't come out of this conference. >> Yeah. My take on that is that I think it's pretty late. I don't think there's a lot of meat on the bone with the social, and I'll tell you why. I think it's like the destination everyone wants to go to, but there's no real engine yet. Right? There's a lot of bicycle riding when they need a car, right? So the infrastructure is just too embryonic, if you will. A lot of manual stuff going on, even in the analytics, and you're seeing it in the leaderboard here on the social media side and in big data analytics. Certainly there are some core engine parts around IBM, but that social engine, I just don't see it happening yet. It requires a new kind of automation, some real-time capability. But I think there are some nice bright spots. I love the streams. I love the zones concept that we heard from Watson Foundations. >> I think that is something that they need to pull out of the war chest and bring front and center. I think the thinking about data as zones is really compelling. And on mobile, they've got all the messaging on that. And to give IBM the benefit of the doubt, I mean, they have a story now, a revenue-generating story, with cloud and with big data. Social was never a revenue-generating story; that's a software story. It's not big dollars. And they've got something now that they really can drive. >> I'll tell you, Chris Kristin from MobileFirst was very impressive, and I'll tell you that social is being worked on. So the people are getting it. I mean, IBM 100% gets social. I think the, it's not a gimmick to them. It's not like, "Oh, we got some social media stuff." I think it's in the DNA of their soul; they come from that background of social. So I give them high marks on that. I just don't see the engine yet. I'm looking for analytics; I'm looking for a couple of eight-cylinder engines. I just don't see it yet. I'm skeptical of Bluemix. I'll tell you why. I'm not skeptical, I shouldn't say that; I'm going to get some flame mail for that. Okay, I'll say it: I'm skeptical of Bluemix because it could be a Wright-brothers situation, the wrong guys building the wrong airplane. So the question is, they might be on the wrong side of history if they don't watch the open-source foundations, because here's the problem I have: Bluemix got rushed to the market. Certainly IBM has got the muscle to put solutions together, no doubt, but betting on Cloud Foundry is really a risk, and although people are pumping it up and it's got some momentum, they don't have a big community; they have a lot of marketing behind it. I know James Watters over there is doing a great job, and Joshua McKenty over there with Piston Cloud is behind it. It has all the elements of open collaboration and open architecture. However, it's not a done deal yet in my mind, so that is a risk factor in my mind. >> We've met a number of amazing new concepts out there; maybe you can help us put these in order. We've got Bluemix, we've got SoftLayer, and we've got the marketplace, and these are all three concepts. Which is a subset of which? What's the hierarchy of these different platforms? >> SoftLayer is definitely at the bottom; that gives you the visibility. You talk about the CIO and the CSO all the time, so security matters at every level. Then Bluemix sits on top, and the marketplace basically names the applications. IBM would have to be the open-source platform-as-a-service. >> Well, it's interesting; it's not even open source, though they're doing a deal with Cloud Foundry. So I think they're going in the middle. But again, the developer story's good, the people are solid, so it's not a fail in my mind; the messaging is great. But, you know, we went to Red Hat Summit; they have a very active community, multiple generations in the data center, in the enterprise, with Linux, and OpenShift is interesting. It's got traction, and it's got legit traction. So that's one area. The other area I liked with Steve Mills was that he was very candid about this turf they're staking out. Clearly the cloud game is hardcore for them, and in the IBM flavor: enterprise cloud. They want to win the enterprise cloud. They clearly see Amazon, and the narrative and rhetoric against Amazon was interesting, saying that there's more Linux on SoftLayer than on Amazon. Now, if you count Linux instances, I think that number is skewed, so there's still a little bit of gamification going on; you have to dig into that. I didn't want to call him out on it, but there's also a hosting business versus a cloud business; you have to parse the numbers. What's your take on the Amazon-versus-SoftLayer comparison? >> It's fundamentally different, right? Amazon shows you everything virtualized; with SoftLayer you can get the machine itself. That's why you see retailers move there: it gives them visibility into the machine. It's the more conservative accommodation, knowing where my machine is, being able to see it, even go and physically touch that machine, instead of dissolving everything into cloud virtualization. >> Paul, I gotta say my favorite interview: Grady Booch sat down with us and talked with us earlier today. An IBM Fellow who practically walks on water inside IBM, a legend in the computer industry. Just a riveting conversation. I mean, it was really just getting started; it felt like we were just reaching cruising altitude, and then he had to walk away. So what's your take on that conversation? >> Well, I mean, certainly the Grady Booch interview gave us the best story of the two days, which is, uh, his being in the hospital for open-heart surgery, looking up, seeing the equipment that was going to be used to go into his chest and open his heart, and knowing that he knows the people who programmed that equipment, and that they programmed it using a methodology that he invented. That's a remarkable story. But I think the fact that a Grady Booch can have a job at a company like IBM is a tribute to IBM, the fact that they can employ people like that who don't have a hard revenue responsibility. He's not a P&L; he's just a genius and a legend, and IBM, to its credit, finds a place for people like that throughout its organization. >> And that's why they never lost their soul, in my opinion. You look at HP and IBM: IBM had a lot of reorganizations, a lot of pivots, so to speak, a lot of battleship turning this way and that, but for the most part they kept their R&D culture. >> And there's an interesting analogy there. Remember how the CASE methodologies each had their own notation, because it was all about the diagrams, and you had different vendors, each with their own methodology? Then, to their credit, they brought it together and did a great service to all of software engineering. Maybe it's the same thing at the end here: they can play a unifying role across the diversity. >> You've got to give IBM credit; that's a great point. Earlier, Steve Mills made a similar reference. It wasn't animosity; it was more of, hey, we helped make Intel a big business with the PC revolution, and what was in it for us? Where's our, you know, help us out, throw us a bone. Or the licensing fees that went to Microsoft and Gates. But this is the point, the unification story, and with Grady here: IBM has some real cultural, industry goodwill. Do you agree? >> True north for IBM is the enterprise customer. They'll do what's right for the money and the budget of the enterprise customers, and those customers most want compatibility. They don't want to have to re-staff; they want investment protection. >> And I thought they did a good job of defining that as their cloud strategy. They're clearly not going head-to-head with Amazon; it's a hybrid cloud strategy. They see the enterprise customers' legacy as an asset, and it's something they want to build on. Of course, the risk of that is that Amazon right now is the pure play. It has all the momentum, it has all the buzz, and being tied to a legacy is not always the greatest thing in this industry. But from a practical, revenue-generating standpoint, it's pretty good. >> Hey guys, let's wrap up here and get your final thoughts on the event. Um, let's just go by the numbers, the key things that IBM was promoting, our scorecard on how they played out, and the new things that popped out of the woodwork that got your attention. The Power systems story was big in their messaging. Um, the big data story continues to be part of it, Bluemix is central to the operations, and the openness: you had a lot of openness in their messaging. And, for the most part, that's pretty much it. Um, well, Watson. Yeah, continue. >> Wow, a lot of news still to come out of Watson. I think in many ways that is their ace in the hole, and that is their diamond. Any other thoughts? >> Well, what I missed, which I think sets IBM apart, is the vision of the API. Everybody else with a pure-play name stops at the platform or says, I'm going to build it for you. That's a clear differentiator on the IBM side, even if they still have to figure out the granularity of the services they surface. That sets them apart. >> Yeah, and I give them an A-plus on messaging. I think they're on all the right fault lines, on the tectonic shifts that we're seeing. I asked every guest interview, what's the game-changing moment, why is it so important? And almost consistently the answers were: we're living in a time of fast change; data; efficiency; adapt or you're going to be left behind. This is the confluence of all these trends, these fault lines, and IBM is sitting on them. Now the question is how fast they can cobble together the tooling from the machinery that they have built over the years, going back to the mainframe anniversary. It's out there; a lot of acquisitions. But so far the story holds together. >> Take the customer by the hand; that's the main challenge I see. This isn't a developer conference like the ones we usually do; it's a customer event, and the 9,000 people here somehow have to be shown: here's where I am, what can I use and leverage, and how do I get to the cloud? >> Guys, really appreciate the commentary. Uh, this is going to be a wrap for us. Let me just do a shout-out to Matt, Greg, and Patrick, doing a great job with the production here with the cube team, and we have another cube team actually doing a simultaneous cube up in San Francisco at ServiceNow. You guys have done a great job here. And also a shout-out to Bert Lattimore, who's been doing a great job of live tweeting and helping moderate the crowd chat, which was really a huge success, a great crowd chat this time. Hopefully we'll get some more influencers and thought leaders in there for the next event. And of course I want to thank Paul Gillin for being an amazing co-host on this trip. Uh, I thought the questions and the cadence were fantastic, and the guests were happy. And Holger, thank you for coming in for our wrap-up. >> Really appreciate it. Constellation Research. >> Uh, this is theCUBE. We are wrapping it up here at the IBM Impact event, live in Las Vegas. It's theCUBE, John Furrier, with Paul Gillin, saying goodbye; see you at our next event. And stay tuned to SiliconANGLE.tv, because we have continuous coverage of ServiceNow, and tomorrow we will be broadcasting and commentating on the Facebook developer conference in San Francisco, with Mark Zuckerberg and all of Facebook's developers and all their developer programs rolling out. So watch SiliconANGLE TV for that as well. Again, theCUBE is growing, with thanks to you watching, and thanks to all of our friends in the industry. Thanks for watching.
SUMMARY :
Impact 2014 is brought to you by headline sponsor. Uh, Paul, I want to go to you first and get your take on just the I don't think there's any doubt about IBM's commitment to cloud as the future about their investment in big data Their purchasing process and some of them are going to HP, some of them are looking at other providers. so frozen the for IBM and yet the power story's probably not clear. I don't think the power story is clear. You've been out on the ground, we've bumped into you guys, all, all the other analysts and all the briefings you've been in, What in itself is fine, but I agree with what you said before is the messaging It leads me to think of who acquired who IBM acquired a software or did soflar actually acquire like the Nexans agenda. On the collaboration side, one of the weakest parts, they have to be built again. I have to say, John, I was struck by the lack of discussion of social business in the opening keynote I don't think there's a lot of meat in the bone with the social, and I'll tell you why. I think that is something that they need to pull out the war chest there and bring that front and center. I just don't see the engine yet. So the question is they might be on the wrong side of history if they don't watch the open source foundations because here's We've got Bloomex the soft player, and we've got the marketplace, That's hopefully, that's definitely at the bottom. You talk about the CIO and CSI all the time. I didn't want to call him out on that, but know there's also a hosting business versus, you know, cloud parse the numbers. is what to entirely use this software, I mean, it felt like we were like, you know, going into cruising altitude and then he just walked away. of the two days, which is, uh, they're being in the hospital for open heart surgery, You look at what HP and IBM, you know, And maybe it's the same thing at the end, can play around diversity. 
but this is the point, the unification story and with grays here, you know IBM has some real good cultural, of the enterprise customers and press most want compatibility. It has all the buzz and and being tied to a legacy is not always the and let's just go by the numbers, kind of the key things that IBM was promoting and then our kind of scorecard is their ACE in the hole and then that is their diamond. Everybody else at that pure name stops the platform or says, I'm going to build like the org, And almost consistently the answers were, you know, It's the customer event and you know, and it's 9,000 people somehow have to do something to just show, for the next event and of course want to thank Paul Gillen for being an amazing cohost on this trip. Again, the cube is growing with thanks to you watching and thanks to all of
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Paul Gillen | PERSON | 0.99+ |
Ray Wang | PERSON | 0.99+ |
Paul Gilliam | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Bert Latta Moore | PERSON | 0.99+ |
Paul Galen | PERSON | 0.99+ |
Holger Mueller | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Josh McKinsey | PERSON | 0.99+ |
Mexico | LOCATION | 0.99+ |
John furrier | PERSON | 0.99+ |
Chris Kristin | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Mark Zuckerberg | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Matt | PERSON | 0.99+ |
two times | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Grant | PERSON | 0.99+ |
150 | QUANTITY | 0.99+ |
over $25 billion | QUANTITY | 0.99+ |
Steve mills | PERSON | 0.99+ |
Patrick | PERSON | 0.99+ |
9,000 people | QUANTITY | 0.99+ |
Kool-Aid | ORGANIZATION | 0.99+ |
two days | QUANTITY | 0.99+ |
nine months | QUANTITY | 0.99+ |
Greg | PERSON | 0.99+ |
one year ago | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Mueller | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Bloomix | ORGANIZATION | 0.98+ |
four years | QUANTITY | 0.98+ |
last July | DATE | 0.98+ |
Bloomex | ORGANIZATION | 0.98+ |
one question | QUANTITY | 0.98+ |
seven | DATE | 0.98+ |
10 | QUANTITY | 0.98+ |