Stephanie Walter, Maia Sisk, & Daniel Berg, IBM | CUBEconversation
(upbeat music) >> Hello everyone and welcome to theCUBE. In this special power panel we're going to dig into and take a peek at the future of cloud. You know, a lot has transpired in the last decade. The cloud itself, we've seen a data explosion. The AI winter turned into machine intelligence going mainstream. We've seen the emergence of As-a-Service models. And as we look forward to the next 10 years we see the whole idea of cloud expanding, new definitions occurring. Yes, the world is hybrid, but the situation is more nuanced than that. You've got remote locations, smaller data centers, clandestine facilities, oil rigs, autonomous vehicles, windmills, you name it. Technology is connecting our world, data is flowing through the pipes like water, and AI is helping us make sense of the noise. All of this, and more, is driving a new digital economy. And with me to talk about these topics are three great guests from IBM. Maia Sisk is the Director of SaaS Offering Management at IBM Data and AI, and she's within the IBM Cloud and Cognitive Software Group. Stephanie Walter is the Program Director for Data and AI Offering Management, same group, IBM Cloud and Cognitive Software. And Daniel Berg is a Distinguished Engineer. He's focused on IBM Cloud Kubernetes Service. He's in the Cloud Organization. And he's going to talk today a lot about IBM Cloud Satellite and of course Containers. Wow, two girls, two boys on a panel, we did it. Folks, welcome to theCUBE. (chuckles) >> Thank you. >> Thank you. >> Glad to be here. >> So Maia, I want to start with you and have some other folks chime in here. I really want to dig into the problem statement and what you're seeing with customers. You know, what are some of the challenges that you're hearing from customers? >> Yeah, I think a big challenge that we face is, (indistinct) talked about it earlier, just data is everywhere. And when we look at opportunities to apply the cloud and apply an As-a-Service model, one of the challenges that we typically face is that the data isn't all nicely, cleanly packaged where you can bring it all together and, you know, run AI models on it, run analytics on it, get it in an easy and clean way. It's messy. And what we're finding is that customers are challenged with the problem of having to bring all of the data together on a single cloud in order to leverage it. So we're now looking, at IBM, at how we flip that paradigm around. And instead of bringing the data to the cloud, bring the cloud to the data, in order to help clients manage that challenge and really harness the value of the data, regardless of where it lives. >> I love that, because data is distributed by its very nature, it's siloed. Daniel, anything you'd add? >> Yeah, I mean, I would definitely echo what Maia was saying, because we're seeing this with a number of customers: they have a certain amount of data that, while they're strategically looking at moving to the cloud, for various reasons they cannot move into the cloud. And in order to reduce latency and get the fastest processing time, they're going to move the processing closer to that data. And that's something that we're looking at providing for our customers as well, with the other services within IBM Cloud, through our notion of IBM Cloud Satellite: how to help teams and organizations get processing power, managed as a service, but closer to where their data may reside. >> And just to play off of that with one other comment. 
The other thing I think we see a lot today is heightened concern about risk, about data security, about data privacy. And you're trying to figure out how to manage that challenge, especially when you start sending data over the wire, wanting to make sure that it is still safe, it is still secure, and it is still resident in the appropriate places. And that need to manage the governance of the data adds an additional layer of complexity. >> Right, if it's not secure, it's a non-starter. Stephanie, let's bring you into the conversation and talk about, you know, some of the waves that you're seeing. Maybe some of the trends. We've certainly seen digital accelerate as a result of the pandemic. It's no longer, I'll get to that someday. It's really become a mandate; you're out of business if you don't have a digital business. What are some of the market shifts that you're seeing? >> Well, I mean, really at the end of the day our clients want to infuse AI into their organizations. And so, you know, really the goal is to achieve ambient AI, AI that's just running in the background, unobtrusively helping our clients make these really important business decisions. They're also really focused on trust. That's a big issue here. They're really focused on, you know, being able to explain how their AI is making these decisions and also being able to feel confident that they're not introducing harmful biases into their decision-making. So I say that because when you think about, you know, a digital organization, going digital, that's what our customers want to focus on. They don't want to focus on managing IT. They don't want to focus on managing software. They don't want to have to focus on, you know, patching and upgrading. And so we're seeing more of a move to managed services, As-a-Service technologies, where the clients can really focus on their business problems and use technologies like AI to help improve their businesses, and not have to worry so much about building them from the ground up. >> So let's stay on that for a minute. And maybe Maia, Daniel, you can comment. Stephanie, you said that customers want to infuse AI and kind of gave some reasons why, but I want to stay on that for a minute. What is really the main outcome that they're looking for? Maybe there are several. They're trying to get to insight. You mentioned trying to be more efficient; it sounds like they're trying to automate governance and compliance. Maia, Daniel, can you sort of add anything to this conversation? >> Yeah, well, I would definitely say that, you know, at the end of the day, customers are looking to use the data that they have to make smarter decisions. And in order to make smarter decisions it's not enough to just have the insight. The insight has to, you know, meet the business person that needs it, you know, in the context, you know, in the application, in the customer interaction. So I think that that's really important. And then everything else becomes like the superstructure that helps power that decision, and the decision being embedded in the business process. So we at IBM talk a lot about a concept we call the Ladder to AI. And the short tagline is, there is no AI without IA. You know, there is no Artificial Intelligence without Information Architecture. It is so critical. You know, my version of this is garbage in, garbage out. You have to have high quality data. 
You have to have that data be well-organized and well-managed so that you're using it appropriately. And all of that, you know, then becomes the fuel that powers your AI. But if you have the AI without having that superstructure, you know, you're going to end up making bad decisions. And ultimately, you know, our customers end up making their customers' experience less than it could and should be. And in a digital world, you know, at the end of the day it's all about digitizing that interaction with whoever the end customer, whoever the end consumer is, and making that experience the best it can be, because that's what fuels innovation and growth. >> Okay. So we've heard Arvind Krishna talk about, he actually made this statement, IBM has to win the architectural battle for cloud. And I'm wondering, maybe Daniel you can comment, on what that architectural framework looks like. I mean, Maia just talked about the Information Architecture. You can't have AI without that foundation, but what does Arvind mean by that? How is IBM thinking about that? >> Yeah, I mean, this is where we're really striving to allow our customers to really focus on their business and focus on the goals that they're trying to achieve, without forcing them to worry as much about the IT and the infrastructure and the platform on which they're going to run. Typically, if you're anchored by your data and the data is not able to move into the cloud, generally we would say that you don't have access to cloud services. You must go and install and run and operate your own software to perform the duties or the processing that you require. And that's a huge burden to push onto a customer because they couldn't move their data to your cloud. Now you're pushing a lot of responsibilities back onto them. So what we're really striving for here is, how can we give them that cloud experience where they can process their data? They can run their run book. They can have all of that managed As-a-Service so that they can focus on their business, but get that closer to where the data actually resides. And that's what we're really striving for as far as the architecture is concerned. So with IBM Cloud Satellite, we're pushing the core platform and the platform services that we support in IBM Cloud outside of our data centers and into locations where it's closer to your data. And all of that is underpinned by containerization: Containers, Kubernetes, and OpenShift are fundamentally the platform we're building upon. >> Okay. So really it's always a data problem, right? You don't want to move data if you don't have to. Right. So Stephanie, should we think about this as a new emergent data architecture? I guess that's what IA is all about. How do you see that evolving? >> Well, I mean, I see it evolving as, first of all, our clients, you know, we know that data is the lifeblood of AI. We know the vast majority of our clients are using more than one cloud. And we know that a client's data may be located in different clouds, and that could be due to costs, that could be due to location. So we have to ask the question, how are our clients supposed to deal with this? These are incredibly complex environments, and there are incredibly complex reasons sometimes for the data to be where it is. It can include anything from costs to laws that our clients have to abide by. 
So what we need to do is adapt to these different environments and provide clients with a consistent experience and lower complexity, to be able to handle data and be able to use AI in these complex environments. And so, you know, we know data, and we also know data science talent is scarce. And if each one of these environments has its own tools that need to be used, depending on where the data is located, that's a huge time sink for these data scientists, and our clients don't want to waste their talent's time on problems like this. So what we're seeing is more of an acceptance and realization that this is what our clients are dealing with. We have to make it easier. We have to do innovative things, like figure out how to bring the AI to the data, how to bring the AI to where the clients need it, and make it much easier and accessible for them to take advantage of. >> And I think there's an additional point to make on this one, which is it's not just easy and accessible, but it's also unified. I mean, one of the challenges that customers face in this multicloud environment, and many customers are multicloud, you know, not necessarily by intent but just because of how, you know, businesses have adopted as a service. But to then have all of that experience be fragmented, and have different tools, not just different pools of data but different pools of catalog, different pools of data science, it's extremely complex to manage. So I think one of the powerful things that we're doing here is we're kind of bringing those multiple clouds together into more of an integrated or a unified, you know, window into the client's data and AI estate. So not only does the end-user not have to worry about, you know, the technologies of dealing with multiple individual clouds, but also, you know, it all comes together in one place. So it can be managed in a more unified way, so that assets can be shared across, and it becomes more of a unified approach. The way I like to think of it is, you know, it's true hybrid multicloud, in that it is all connected, as opposed to multi-cloud that's just pools of multiple clouds, one cloud at a time. >> So can we stay on that for a second, because you're saying it's unified but the data stays where it is. The data is distributed by nature. So it's unified logically, but it's decentralized. Am I getting that right? >> Correct. >> Okay. >> Correct. >> All right. I'm really interested in how you do this. And maybe we can talk about the approach that you take for some of your offerings and get specific on that. So maybe Stephanie, why don't you start. What do you have in your basket? Like Cloud Pak, let's talk about that. >> Yes, so what we have in our basket is Cloud Pak for Data as a Service. This is our premier data and AI platform. It's offered as a service, it's fully managed, and there are roughly 30 integrated services in our services catalog, and growing. So we have services to help you through the entire AI life cycle: from preparing your data, which, as Maia was saying, is very, very important, it's critical to any successful AI project; to building your models; to running the models and then monitoring them to make sure that, as I was saying before, you can trust them. You need to make sure that there's no bias. You need to be able to manage these models through their life cycle and retrain them if needed. So our platform handles all of that. 
It's hosted on IBM Cloud. And what we're doing now, which is really exciting, is we're going to use, and you mentioned it before, IBM Cloud Satellite as a way for us to send our AI to data that perhaps is located on another cloud or another environment. So how this would work is that the services that are integrated with Cloud Pak for Data as a Service will be able to use satellite locations to send their AI workloads to run next to the data. And this means that the data doesn't need to be moved. You don't have to worry about high egress charges. You can reduce latency and see much stronger performance by running these AI workloads where it counts. We're really excited to add this capability to our platform, because, you know, we spent a lot of time earlier talking about all of these challenges that our clients have, and this is going to make a big difference in helping them overcome them. >> Okay. So Daniel, how do Containers fit in? I mean, obviously the Red Hat acquisition was so strategic. We're seeing the ascendancy of OpenShift in particular. Talk about Containers and where it fits into the IBM Cloud Satellite strategy. >> Yeah. So a lot of this builds on top of how we run our cloud business today. Today the vast majority of the services that are available in the IBM Cloud catalog actually run as Containers, run in a Kubernetes-based environment, and run on top of the services that we provide to our customers. So the Container Platform that we provide to our customers is the same one that we're using to run our own cloud services. And those are underpinned with Containers, Kubernetes, and OpenShift. And IBM Cloud Satellite, based on the way that we designed our Container Platform using Kubernetes and Containers and OpenShift, allows us to take that same design and the same principles and extend it outside of our data centers with user-provided infrastructure. And this goes back to what Stephanie was saying, that is a satellite location. So using that same technology, and the fact that we've already containerized many of our services and run them on our own platform, we are now distributing our platform outside of IBM Cloud Data Centers using satellite locations and making those available for our cloud service teams, to make their services available in those locations. >> I see. And Maia, this, it is as a service. It's OPEX. Is that right? >> Absolutely. >> Okay. >> Absolutely. Yeah, there are two different options on how we can run. One is we can leverage IBM Cloud Satellite and reach into a customer's operating environment. They provide the infrastructure, but we provide the As-a-Service experience from the Container on up. The other option that we have is, for some of our capabilities like our data science capability, where, you know, a customer might need something a little bit more turnkey because it's, you know, more of a business person or somebody in the CTO's office consuming the As-a-Service, we'll also offer select workloads in an IBM-owned satellite environment, you know, so that it's kind of soup-to-nuts managed by us. But the key is that, other than, you know, providing the operating environment and then connecting what we do to, you know, their data sources, really the rest is up to us. We're responsible for, you know, everything that you would expect in an As-a-Service environment. That things are running, that they're updated, that they're secure, that they're compliant, that's all part of our responsibility. >> Yeah. 
So a lot of options for customers, and it's kind of the way they want to consume. Let's talk about the business impact. You know, you guys, IBM, very consultative selling, you know, tight relationships with customers. What does the business case look like when you go into a client? What's the conversation like? What's possible? What can you share? Stephanie, can you maybe start things off there? Any examples, use cases, business case, help us understand the metrics. >> Yeah. I mean, so let's talk about a couple of use cases here. So let's say I'm an investment firm, and I'm using data points from all kinds of data sources, right? To use AI to create models to inform my investment decisions. So I may be using data sources, you know, like regulatory filings, newspaper articles, that are pretty standard. I may also be using things like satellite data that monitors parking lots, or maybe even weather data, weather forecast data. And all of this data is coming together, and it needs to be used for models to predict, you know, when to buy, sell, trade. However, due to costs, due to just availability of the data, it may be located on completely different clouds. You know, and we know that especially in capital markets, things are fast, fast, fast. So I need to bring my AI to my data, and I need to do it quickly, so that I can build these models where the data resides, and then be able to make my investment decisions very fast. And these models get updated often, because conditions change, markets change. And this is one way to provide a unified set of AI tools that my data scientists can use. They don't have to be trained on multiple tools depending on what cloud the data is stored on. And they can actually build these models much faster and even cheaper, if you take egress charges into consideration, you know, moving all this data around. Another use case that we're seeing is, you know, something like, let's say, a multinational telecommunications company that has locations in multiple countries, and maybe they want to reduce their customer churn. So they have, say, customer data that's stored in different countries, and different countries may have different regulations, or the company may have policies that that data can't be moved out of those countries. So what can we do? Again, what we can do is we can send our AI to this data. We can make a customer churn prediction model, so that when my customer service representative is on the phone with a customer, they can put in their information and see how likely they are to stop using my service, and tailor my phone interaction and the offers that I, as the customer service representative, would offer them. If there's a high likelihood that they're going to churn, I will probably sweeten the deal. And I can do all that while I'm being fast, right? Because we know that these interactions need to happen quickly, but also while complying with whatever policies or even regulations that are in place for my multinational company. So, you know, if you think back to the use cases that I was just talking about, you know, latency, performance, reducing costs, and also being able to comply with any policy or regulations that our customers might have are really the key pieces of the use cases that we've been seeing. >> Yeah. So Maia, there's a theme here. I bring five megabytes of code to a petabyte of data, kind of thing. And so Stephanie was talking about speed. There's an inherent compliance and governance piece. 
It sounds like it's not a bolt-on, it's not an afterthought, it's fundamental. So maybe you could add to the conversation. I'm just specifically interested in, you know, what should a client expect? I mean, you're putting data in the hands of, you know, domain experts in the line of business. There's a self-serve component here, presumably. So there's cross-selling, is what I heard in some of what Stephanie was just talking about. So there's revenue, there's cost cutting, there's risk reduction; I'm seeing the business case form. What can you add? >> Yeah, absolutely. I think the only other thing I would add is going back to the conversation that we had about, you know, a lot of this being driven by the digitization of business, and, you know, even more so this year. You know, at the end of the day there are a lot of cost benefits to leveraging an As-a-Service model, you know, to leveraging that experience and economies of scale from a service provider, and, you know, leveraging Satellite kind of takes that to the next level of, you know, reducing some other costs. But I always go back to, you know, at the end of the day, this is about customer experience. It's about revenue creation, and it's about, you know, creating enhanced customer satisfaction and loyalty. So there are top-line benefits here, you know, of having the best possible AI, plugging that into the customer experience, the application, where that application resides. So it's not just about where the data resides. You can also put it on the other side and say, you know, we're bringing the AI, we're bringing the machine learning model to the application, so that the experience is excellent, the application is responsive, there's less latency. And that can help clients then leverage AI to create those revenue benefits, you know, of having the satisfied customer and of having, you know, the right decision at the right time in order to, you know, propel them to spend and spend more. >> So Daniel, bring us home. I mean, there's a lot of engineering going on here. There's the technology, the people and the process. If I'm a client, I'm going to say, okay, I'm going to rely on IBM R&D to cut my labor costs, to drive automation, to help me, you know, automate governance and reduce my risks, you know, take care of the technology. You know, I'll focus my efforts on my process, my people, but it's a journey. So how do you see that shaping out in the next, you know, several years or the coming decade? Bring us home. >> Yeah. I mean, what we're seeing here is that there's a realization that customers have highly skilled individuals. And we're not saying that these highly skilled individuals couldn't run and operate these platforms and the software themselves, they absolutely could. In some cases, maybe they can't, but in many cases they could. But we're also talking about highly skilled individuals that are focusing on platform and platform services and not their business. And the realization here is that companies want their best and brightest focused on their business, not the platform, if they can get that platform from another vendor that they rely on and that can provide the necessary compute services in a timely and available fashion. The other aspect of this is, people have grown to appreciate those cloud services. They like that on-demand experience. And they want that in almost every aspect of what they're working on. 
And the problem is, sometimes you have to have that experience in localities that are remote. They're very difficult. There's no cloud in some of these remote parts of the world. You might think that cloud is everywhere, but it's not. It's actually in very specific locations across the world, but there are many remote locations where people want and need these services from the cloud and can't get them. Something like IBM Cloud Satellite, that is what we're pursuing here: being able to bring that cloud experience into these remote locations where you can't get it today. And that's where you can run your AI workloads. You don't have to run it yourself, we will run it, and you can put it in those remote locations. And remote locations don't actually have to be, like, in the middle of a jungle. They could be on your plant floor or within a port that you have across the world, right? It could be in a warehouse. I mean, there are lots of areas where there's data that needs to be processed quickly, and you want to have that cloud experience, that usage-pay model, for that processing. And that's exactly what we're trying to achieve with IBM Cloud Satellite, and what we're trying to achieve with IBM Cloud Pak for Data as a Service as well. Running on Satellite is to give you those cloud experiences, those services managed as a service, in those remote locations where you absolutely need them and want them. >> Well, you guys are making a lot of progress, and the next decade is not going to look like the last decade. I can be pretty confident in that prediction. Guys, thanks so much for coming on theCUBE and sharing your insights, really great conversation. >> Absolutely. Thank you, Dave. >> Thank you. >> You're welcome, and thank you for watching everybody. This is Dave Vellante from theCUBE. We'll see you next time. (upbeat music)
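To ground the Satellite workflow Daniel and Maia describe in something concrete, here is a minimal sketch of standing up a Satellite location on your own infrastructure and then creating a managed cluster in it. The command names follow the IBM Cloud Satellite CLI plug-in, but the plug-in names, flags, region, and host identifiers shown here are assumptions for illustration only; check the current IBM Cloud Satellite documentation before running anything.

```bash
# Install the CLI plug-ins (plug-in names are assumptions)
ibmcloud plugin install container-service
ibmcloud plugin install satellite

# Create a Satellite location that is managed from an IBM Cloud region
# but whose hosts live in your own data center, plant floor, or other cloud.
ibmcloud sat location create --name my-edge-location --managed-from wdc

# Generate the host attach script, run it on each of your hosts,
# then assign the registered hosts to the location's control plane.
ibmcloud sat host attach --location my-edge-location
ibmcloud sat host assign --location my-edge-location --host host-01 --zone zone-1

# Finally, create a managed cluster in that location. IBM operates the
# platform as a service; the workers run on your own infrastructure.
ibmcloud oc cluster create satellite --name my-edge-cluster --location my-edge-location
```

From there, services such as Cloud Pak for Data as a Service can target the location so AI workloads run next to the data, as described in the panel above.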
Daniel Berg, IBM | IBM Think 2019
>> Live from San Francisco, it's theCUBE. Covering IBM Think 2019. Brought to you by IBM. >> Welcome back to San Francisco, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante and I'm with my cohost, Stu Miniman. Lisa Martin is also here. John Furrier'll be up tomorrow. This is day one of IBM Think. Kind of the pregame, Stu. The festivities kick off tomorrow, they're building out the Solutions Center, they've got the Howard Street takeover. We're in Moscone North, stop by and see us. Daniel Berg is here. He's a distinguished engineer with IBM Cloud Kubernetes Service, IBM, of course. Dan, great to see you again. >> Thank you. Thank you very much. >> Thanks for coming on. So everybody's got a Kubernetes story these days. What's IBM's Kubernetes story? >> So, IBM took a big bet on Kubernetes two, two and a half years ago. Never really looked back; it's our primary foundation for our platform services. And we have two key distributions for the Kubernetes service. We have IBM Cloud Private, which is a software distribution for on premises, set up your own private cloud based on Kubernetes, behind your firewall. And then we have a managed service in the public cloud. So you're moving to public cloud, doing cloud native, grab an API, CLI, you get a cluster. >> So a lot of people think Kubernetes, oh, I can move it anywhere, private cloud, public cloud. But there are other benefits of just, say, for instance, a private cloud. Maybe explain those. >> Yeah, I mean, the biggest benefit for us is that we're able to give you the IBM Cloud experience and IBM Cloud content, so IBM content, middleware, things that you've been using for a decade. We've modernized it, put it in containers, install it and manage it on Kubernetes. The nice thing is that content you can bring on premises where it's needed the most, and run it in ICP, IBM Cloud Private, and also take that and run it in our public cloud, as you migrate and move those workloads into the public sector. >> Dan, one of the things we've been watching is, you talk about a hybrid cloud or a multi-cloud world. There's a lot of pieces and it can be complicated. >> Yes. >> Now, Kubernetes itself, not exactly the simplest solution out there, but when you can deliver it as a service, you can take a certain piece of your environment and IBM helps to simplify that. Maybe explain what it simplifies and, you know, what still are some of the hard places that we have to play at in these environments? >> Yeah, definitely. So, I mean, the IBM Cloud Kubernetes Service, anyone that has dealt with Kubernetes knows it's easy to install, pretty easy to set up, and basically easy to get started. It's the day two, it's the operations, it's the long pole. It's doing all the updates, the maintenance, the security patches, the securing it, making it highly available, that's hard. And that's hard over time, and it takes a lot of resources. So IKS is a service where we do that. Let the experts do it, is basically what we tell people. We are experts at managing Kubernetes. We do this as our day job, 24/7, right? Literally, because we manage a 24/7 service. So we operate it 24/7 and we keep it updated. That allows our customers to focus on their business problem. Focus on their app, not building the platform. But there are still some complexities, because you don't have just one cluster. If you only had one cluster, it'd be no big deal. I probably wouldn't have a job. But you have many clusters. 
You've got development clusters, you've got test clusters. But if you're doing a global service, you've got many clusters throughout the world. Highly available clusters. You put clusters in various data centers for keeping your data in one location, right? So you've got many clusters, and it gets complicated to manage all of those clusters. So, with the Kubernetes service we provide all the capabilities to manage and set up and secure your cluster, but then the content, like moving and configuring things across all those clusters, becomes complicated. And that's where we recently released a new product called Multicloud Manager. >> Tell us, you know, tell us more. (laughter) >> I thought you were going to ask a question. (laughs) So, Multicloud Manager, what it basically does is it provides a control plane that allows you to manage, and today it manages resources, Kubernetes resources, across many different clouds, across many different cloud platforms. So it works with our Cloud Private, which runs on premises, but it also works with our public cloud, IKS. And it can work with other cloud providers, it can work with Amazon, it can work with Google, it can work with Azure. And it works with OpenShift as well, obviously. So having that one tool, then, gives you the mechanism to drive consistency of the resources across all of those distributions of Kubernetes clusters that you have. And another big thing that it does, and helps with, is security compliance. So it has the ability to define security postures that you need to have across your clusters, and then apply it and run it in a check mode, to see is that policy provided across all your clusters, and where do you have gaps? And then it also has a setting to do enforcement. So, if it's not there, it'll make it there, it'll make it so. >> So, IBM hides all that complexity from the customer. >> Yes. >> But I'm curious as to what the conversations are like, Dan, with the customer. In other words, you're basically figuring out how to do it. The customer knows what it's doing. Of course, at scale you want consistency and standards. So, do you ever get into a situation where a customer says, well, I'd like you to do it this way, and what's that conversation like? >> Yeah, so that's where it's nice having multiple distributions, right? So in our public cloud with IKS, having variations and unique configurations for each and every customer, we don't do that, right? It's a service. And services scale and provide value through consistency, right? So we consistently set up and manage clusters, thousands, tens of thousands of clusters that way. But if you need something that's highly, highly specific to a given use case, or you have differences in your infrastructure and you need to have more flexibility, that's where IBM Cloud Private comes in. And we do have customers like that, especially on premises, right? On premises, those are unique beasts, right? The infrastructure, the hardware, the network. You've got to have a custom configuration. So coupling our ICP product with the global services team, they can come in and they can customize it to suit any customer's needs. >> So, Dan, you talked about living in multiple environments, whether that be public cloud, your private cloud; you also mentioned Red Hat, I think, in there. Tell us where customers are today with OpenShift, where that fits, and give us a little bit of compare and contrast as to what IBM's doing today. 
>> Yeah, definitely. So, it's interesting watching what's happening in the industry, because there's the whole push to cloud, and everybody knows they want to get there, but trying to get there all in one fell swoop with all the workloads that you have on premises is quite complicated and difficult, and almost impossible to do on day one. So, the story is all about how do I modernize what I have today, on premises? And how does IBM help with that in my journey to move into public cloud? And that's where, I know it's a buzzword, but hybrid cloud comes in. But for me, the hybrid cloud, and what our customers are saying, is that I want to modernize what I have, so give me a platform there. And ICP, IBM Cloud Private, and OpenShift are the two best products in the market, bar none, that provide that experience there. And our ICP runs on top of OpenShift, so for those customers that have already invested in the OpenShift space, you still get the value of IBM's content and integrated monitoring, integrated logging, right there in that product space, on the platform on which they're already standardized. >> How do you define best? What are the attributes of high quality and best? >> So, I guess best is (laughs) kind of difficult to really define. But for us it's all about ensuring that we have a solid platform, a solid strategy and technology set that we're building our offerings from. And we gain a lot of experience from our public cloud. Because we built and standardized on Kubernetes, we provide the Kubernetes service, and we do that at scale, secure, as well as highly available. We take a lot of those same lessons, because we have hundreds of customers running on it at scale. We take those lessons and we help evolve our private cloud offering as well. So we bring those down, and we provide a somewhat customizable but highly tuned environment supporting IBM content. So when I say best, it is definitely the best platform for running IBM content, right? It's tuned for running IBM content, bar none.
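To make the "grab an API or CLI and you get a cluster" experience Dan describes concrete, here is a minimal sketch of provisioning an IKS cluster and pulling its kubeconfig from the IBM Cloud CLI. The zone, flavor, and cluster names are placeholders, and the exact flags have shifted across CLI releases, so treat this as an illustration rather than a copy-paste recipe.

```bash
# Log in and target a region (assumes the container-service plug-in is installed)
ibmcloud login --sso
ibmcloud target -r us-south

# Ask the service for a managed cluster; IBM handles the masters, updates,
# and security patches described in the interview.
ibmcloud ks cluster create classic \
  --name demo-cluster \
  --zone dal10 \
  --flavor b3c.4x16 \
  --workers 3

# Once the cluster is deployed, fetch credentials and use plain kubectl.
ibmcloud ks cluster config --cluster demo-cluster
kubectl get nodes
```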
And also, even just going down the path of saying, I'm only going to stick and use one cloud vendor. That's also somewhat a thing of the past, you don't see that anymore, at least where customers are moving, so within an organization, yes, you still have the lines of businesses, and they might have different tools and they might decide on different tools and how they manage their environments. But the thing that customers do need to look at, and what they do need to standardize across an enterprise, is just some of the core tenets and the core technologies. So, for example, if they're moving the cloud, whether it's one premises or off premises, what's the platform that you're going to build to so you have portability? It's got to be Kubernetes, right? That is a decision that as an organization, as an enterprise, you've got to agree on as you move forward. Because, whether you use the same provider or the same set of tools doesn't matter as much. It'd be nice. But you got to have some agreement on the core technologies and platforms. >> Because ultimately you can get there. It might be a little harder, but still, if you're core Kubernetes, it's not, it's going to be easier than different flavors of UNIOS, for example. (laughs) >> There's path, >> there's at least a path that as they mature and as they simplify and they converge, they can do that seamlessly. >> Dan, back to the cloud monitoring tool that IBM has. Who's the constituency, who uses that? And give us a little bit of color inside, you know, kind of the administrator, developer, you know cloud architect, you know, what do you see? >> Well, yeah, so that's a great one. The cloud monitoring, IBM cloud monitoring provides visibility into your workloads within your environment. And that's not specific to just Kubernetes, either, right? There's Kubernetes, but then there's VMs and bare metal workloads, more traditional workloads that the monitoring service works just fine. The, our developers, have to have a monitoring solution. You can't build a cloud native solution without monitoring, right? Monitoring and log, they, it's like peanut butter and jelly. You got to have 'em. And if you're building a cloud native solution, you're building Kubernetes, you're dealing with multiple clusters, like I said earlier. Hundreds, if not thousands, of workloads. You can't log into each one of 'em. You need, you need a system where you can monitor and log. So the monitoring service is necessary here for simple developers to understand what's happening in their environment. And our partnership STEG provides us with a very rich monitoring solution, which we've done extensive integration in IBM cloud to make it simple for even developers. They don't have to go and install and set up STEG themselves. All they do is a simple I want a new instance. Directly in the IBM cloud catalog they get a new instance of STEG and it gets installed into their cluster and they're off and running. Simple as that. >> And we're talking, we're talking visibility on things like performance management, security? >> Network. >> Problem, change management. >> Yes, yes, absolutely. So you get, and obviously that's all configurable, but what's nice with STEG and one of the reasons I like it, especially as a developer, as soon as you turn it on for one of your clusters, there's so much rich data that's available there, just out of the box. And they support other projects too and provide integration, deep integration, like the Istio project, for example. 
Great little project for service mesh. STEG supports that out of the box as well. Built in polling metrics, dashboards built specifically for Istio, and I don't have to do anything as a developer. I just turn it on, and then I start watching. (laughs) Seeing all the metrics coming. >> So it's kind of day zero here at IBM Think. Dan, what are some of the things that you're hoping to accomplish this week? I know you've got a bunch of customer meetings. Some of the things you're excited about. >> Yeah, definitely, lots of sessions, great sessions. But it is the customer meetings I'm most excited about. I have a large number of 'em. I want to hear what they're doing, right? I want to understand a little bit better what they would like us to do, and moving forward, how can we help them? How can we help accelerate their adoption of cloud? Get on the cloud native, and obviously, I'm here to talk Kubernetes and containers, so the more I get to talk about that, the happier I'm going to be. >> Well, it's a hot space. We're bringing you theCUBE inside of our little container here. Dan Berg, thanks very much for coming on today. >> Thank you. >> All right, Dave Vellante for Stu Miniman. You're watching theCUBE from IBM Think, day one. We'll be right back right after this short break. (light music)
Daniel Berg, IBM Cloud & Norman Hsieh, LogDNA | KubeCon 2018
>> Live from Seattle, Washington it's theCUBE. Covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Hey, welcome back everyone, it's theCUBE live here in Seattle for day three of three of wall-to-wall coverage. We've been analyzing here on theCUBE for three days, talking to all the experts, the CEOs, CTOs, developers, startups. I'm John Furrier with Stu Miniman, with theCUBE coverage here at, not DockerCon, KubeCon and CloudNativeCon. Getting down to the last Con. >> So close, John, so close. >> Lot of Docker containers around here. We'll check it on the Kubernetes. Our next two guests got a startup, a hot startup here. You got Norman Hsieh, head of business development, LogDNA. A new compelling solution on Kubernetes gives them a unique advantage, and of course, Daniel Berg, who's a distinguished engineer at IBM. They have a deal. We're going to talk about the startup and the deal with IBM. The highlights, kind of a new model, a new world's developing. Thanks for joining us. >> Yeah, no problem, thanks for having us. >> May get you on at DockerCon sometimes. (Daniel laughing) Get you DockerCon. Containers have certainly been great; talk about your product first. Let's get your company out there. What do you guys do? You got something new and different. Something needed. What's different about it? >> Yeah, so when we started building this product, one thing we were trying to do was find a logging solution that was built for developers, especially around DevOps. We were running our own multi-tenant SaaS product at the time and we just couldn't find anything great. We tried open source Elastic and it turned out to be a lot to manage; there was a lot of configuration we had to do. We tried a bunch of the other products out there, which were mostly built for log analysis, so you'd analyze logs maybe a week or two after, and there was nothing just realtime that we wanted, and so we decided to build our own. We overcame a lot of challenges where we just felt that we could build something that was easier to use than what was out there today. Our philosophy is for developers, in the sense that we want to make it as simple as possible. We don't want you to have to manage or even think about how logs work today. And so, the whole idea, even if you go down to some of the integrations that we have, our Kubernetes integration's two lines. You essentially hit two kubectl lines, and your entire cluster will get logged, directly, in seconds. That's something we show oftentimes at demos as well. >> Norman, I wonder if you can drill in a little bit more for us. A lot of times the new generation, they've got just new tools to play with and new things to do. What was different, what changes? Just the composability and what a small form factor. I would think that you could just change the order of magnitude in some of the pricing of some of these. Tell us why it's different.
So we're noticing now is if you kind of expand on the world of back in the day we had single machines that people got logs off of, then you went to VMware where you're taking a single machine and splitting up to multiple different things, and now you have containers, and all of a sudden you have Kubernetes, you're talking about thousands and thousands of nodes running and large production service. How do you find logs in those things? And so we really wanted to build for that scale and that usability where, for Kubernetes, we'll automatically tag all your logs coming through. So you might get a single log line, but we'll tag it with all the meta-data you need to find exactly what you want. So if I want to, if my container dies and I no longer know that containers around, how am I going to get the logs off of that, well, you can go to LogDNA, find the container that you're looking for, know exactly where that error's coming from as well. >> So you're basically storing all this data, making it really easy for the integration piece. Where does the IBM relationship fit in? What's the partnership? What are you guys doing together? >> I don't know if Dan wants to-- >> Go ahead, go ahead. >> Yeah, so we're partnering with IBM. We are one of their major partners for login. So if you go into Observability tab under IMB Cloud and click on Login, login is there, you can start the login instance. What we've done is, IBM's brought us a great opportunity where we could take our product and help benefit their own customers and also IBM themselves with a lot of the login that we do. They saw that we are very simplistic way of thinking about logs and it was really geared towards when you think about IBM Cloud and the shift that they're moving towards, which is really developer-focused, it was a really, really good match for us. It brought us the visibility into the upmarket with larger customers and also gives us the ability to kind of deploy globally across IBM Cloud as well. >> I mean, IBMs got a great channel on the sales side too, and you guys got a great relationship. We've seen that playbook before where I think we've interviewed in all the other events with IBM. Startups can really, if they fit in with IBM, it's just massive, but what's the reason? Why the partnership? Explain. >> Well, I mean, first of all we were looking for a solution, a login solution, that fit really well with IKS, our Kubernetes service. And it's cloud-native, high scale, large number of cluster, that's what our customers are building. That's what we want to use internally as well. I mean, we were looking for a very robust cloud-native login service that we could use ourselves, and that's when we ran across these guys. What, about a year ago? >> Yeah, I mean, I think we kind of first got introduced at last year's KubeCon and then it went to Container World, and we just kept seeing each other. >> And we just kept on rolling with it so what we've done with that integration, what's nice about the integration, is it's directly in the catalog. So it's another service in the catalog, you go and select it, and provision it very easily. But what's really cool about it is we wanted to have that integration directly with the Kubernetes services as well, so there's the tab on the Integration tab on the Kubernetes, literally one button, two lines of code that you just have to execute, bam! All your logs are now streaming for the entire cluster with all the index and everything. 
It just makes it a really nice, rich experience to capture your logs. >> This is infrastructure as code, that's what the promise was. >> Absolutely, yes. >> You have very seamless integration and the backend just works. Now talk about the Kubernetes pieces. I think this is fascinating 'cause we've been pontificating and evaluating all the commentary here in theCUBE, and we've come to the conclusion that cloud's great, but there's other new platform-like things emerging. You got Edge and all these things, so there's a whole new set, new things are going to come up, and it's not going to be just called cloud, it's going to be something else. There's Edge, you got cameras, you got data, you got all kinds of stuff going on. Kubernetes seems to fit a lot of these emerging use cases. Where does the Kubernetes fit in? You say you built on Kubernetes, just why is that so important? Explain that one piece. >> Yeah, I mean, I think there's, Kubernetes obviously brought a lot of opportunities for us. The big differentiator for us was because we were built on Kubernetes from the get go, we made that decision a long time ago, we didn't realize we could actually deploy this package anywhere. It didn't have to be, we didn't have to just run as a multi-tenant SaaS product anymore and I think part of that is for IBM, their customers are actually running, when they're talking about an integrated login service, we're actually running on IBM Cloud, so their customers can be sure that the data doesn't actually move anywhere else. It's going to stay in IBM Cloud and-- >> This is really important and because they're on the Kubernetes service, it gives them the opportunity, running on Kubernetes, running automatic service, they're going to be able to put LogDNA in each of the major regions. So customer will be able to keep their logged data in the regions that they want it to stay. >> Great for compliance. >> Absolutely. >> I mean, compliance, dreams-- >> Got to have it. >> Especially with EU. >> How about search and discovery, that's fit in too? Just simple, what's your strategy on that? >> Yeah, so our strategy is if you look at a lot of the login solutions out there today, a lot of times they require you to learn complex query languages and things like that. And so the biggest thing we were hearing was like, man, onboarding is really hard because some of our developers don't look at logs on a daily basis. They look at it every two weeks. >> Jerry Chen from Greylock Ventures said machine learning is the new, ML is the new SQL. >> Yup. (Daniel laughing) >> To your point, this complex querying is going to be automated away. >> Yup. >> Yes. >> And you guys agree with that. >> Oh, yeah. >> You actually, >> Totally agree with that. >> you talked about it on our interview. >> Norman, wonder if you can bring us in a little bit of compliance and what discussions you're having with customers. Obviously GDPR, big discussion point we had. We've got new laws coming from California soon. So how important is this to your customers, and what's the reality kind of out there in your user base? >> Yeah, compliance was, our founders had run a lot of different businesses before. They had two major startups where they worked with eBay, compliance was the big thing, so we made a decision early on to say, hey, look, we're about 50 people right now, let's just do compliance now. I've been at startups where we go, let's just keep growing and growing and we'll worry about compliance later-- >> Yeah, bite you in the ass, big time. 
>> Yeah, we made a decision to say, hey, look, we're smaller, let's just implement all the processes and what's necessary, so. >> Well, the need's there too, that's two things, right? I mean, get it out early. Like security, build it up front and you've got it in. >> Exactly. >> And remember earlier we were talking and I was telling you how, within the Kubernetes service, we like to use our own services to build expertise? It's the same thing here. Not only are they running on top of IKS, we're using LogDNA to manage the logs and everything, across the infrastructure for IKS as well. So we're heavily using it. >> This also highlights, Daniel, the ecosystem dynamic: when you break down these monolithic types of environments into their sets of services, you benefit because you can tap into a startup, and they can tap into IBM's goodness. It's a somewhat simple biz dev deal, other than the rev-share component of the sales, but technically, this is what customers want at the endgame: the right tool, for the right job, the right product. If it comes from a startup, you guys don't have to build it. >> I mean, exactly. Let the experts do it, we'll integrate it. It's a great relationship. And the teams work really well together, which is fantastic. >> What do you guys do with other startups? If a startup watches and says, hey, I want to be like LogDNA. I want to plug into IBM's Cloud. I want to be just like them and make all that cash. What do they got to do? What's the model? >> I mean, we're constantly looking at startups and new business opportunities obviously. We do this all the time. But it's got to be the right fit, alright? And that's important. It's got to be the right fit with the technology, it's got to be the right fit as far as culture, and team dynamics of not only my team but the startup's teams and how we're going to work together, and this is why it worked really great with LogDNA. I mean, everything, it just all fit, it all made sense, and it had a good business model behind it as well. So, yes, there are opportunities for others, but we have to go through and explore all those. >> So, Norman, wonder if you can share, how's your experience been at the show here? We'd love to hear, you've got so many startups here. You've got record-setting attendance for the show. What were your expectations coming in? What are the KPIs you're measuring against, and how has it met what you thought you were going to get? >> No, it's great, I mean, previous to last year's KubeCon we had not really done any events. We're a small company, we didn't want to spend the resources, but we came in last year and I think what was refreshing was people would talk to us and we're like, oh, yeah, we're not an open source technology, we're actually a log vendor and we can, and we'll-- (Stu laughing) So what we said was, hey, we'll brush that into an experience, and people were like, oh, wow, this is actually pretty refreshing. I'm not configuring my fluentd system to tap into another Elasticsearch. There was just not a lot of that. I think this year the expectation was we needed the size doubled. We still wanted to get the message out there. We knew we were hot off the presses with the IBM public launch of our service on IBM Cloud. And I think we were expecting a lot. I mean, we more than doubled what our lead count was, and it's been an amazing conference.
I mean, I think the energy that you get and the quality of folks that come by, it's like, yeah, everybody's running Kubernetes, they know what they're talking about, and it makes that conversation that much easier for us as well. >> You're CUBE alumni now too. It's the booth, look at that. (everyone laughing) Well, guys, thanks for coming on, sharing the insight. Good to see you again. Great commentary, again, having distinguished engineers on, and these kinds of conversations, really helps the community figure out kind of what's out there, so I appreciate that. And if everything's going to be on Kubernetes, then we should put theCUBE on Kubernetes. With these videos, we'll be on it, we'll be out there. >> Hey, yeah, absolutely, that'd be great. >> TheCUBE covers day three. Breaking it down here. I'm John Furrier, Stu Miniman. That's a wrap for us here in Seattle. Thanks for watching and look for us next year, 2019. That's a wrap for 2018, Stu, good job. Thanks for coming on, guys, really appreciate it. >> Thanks. >> Thank you. >> Thanks for watching, see you around. (futuristic instrumental music)
Daniel Berg, IBM | KubeCon 2018
>> Narrator: Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. >> Okay, welcome back everyone, it's live coverage here at theCUBE at KubeCon and CloudNativeCon, here in Seattle for the 2018 event. 8,000 people, up from 4,000 last year. I'm John Furrier with Stuart Miniman, my cohost. Next guest Daniel Berg, distinguished engineer at IBM Cloud Kubernetes Service. Daniel, great to have you on. >> Thank you. >> Thanks for joining us. Good to see you. I'll say you guys know a lot about Kubernetes. You've been using it for a while. >> Yes, very much. >> Bluemix, you guys did a lot of cloud, a lot of open source. What's going on with the service? Take a minute to explain your role, what you guys are doing, how it all fits into the big picture here. >> Yeah, yeah, yeah, so I'm the distinguished engineer over top of the architecture and everything around the Kubernetes service. I'm backed by a crazy wicked awesome team. Right? They are amazing. They're the real wizards behind the curtain, right? I'm basically just the curtain. But we've done a phenomenal amount of work on IKS. We've delivered it. We've delivered some amazing HA capabilities, highly reliable, but what's really great about it is the service that we provide to all of our customers, we're actually running all of IBM Cloud on it, so all of our services, the Watson services, the cloud database services, our Key Protect service, identity management, billing, all of it, it's all running. All of it is moving to containers and Kubernetes, and it's running on our managed service. >> So just to make sure I get it all out there, I know we talked to a lot of other folks at IBM. I want to make sure we table it. You guys are highly contributing to the upstream. >> Daniel: Yes. >> As well as running your workload and other customers' workloads on Kubernetes within the IBM Cloud. >> Unmodified, right? I mean, we're taking upstream and we're packaging it, and the key thing that we're doing is we're providing it as a managed service with our extensions into it. But yeah, we're running, we've hit problems over the last 18 to 20 months, right? There's lots of problems.
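As a rough illustration of the multi-zone, HA footprint Daniel describes, the sketch below uses the Kubernetes Python client to list a cluster's worker nodes and the zone label each one carries. It assumes a local kubeconfig already points at the cluster, and it checks the `failure-domain.beta.kubernetes.io/zone` label that was the common convention in the Kubernetes 1.12 era alongside the newer `topology.kubernetes.io/zone` key.

```python
from collections import Counter

from kubernetes import client, config

ZONE_LABELS = (
    "failure-domain.beta.kubernetes.io/zone",  # common in the 1.12 era
    "topology.kubernetes.io/zone",             # newer clusters
)

def zone_distribution():
    """Count worker nodes per zone to sanity-check a multi-zone cluster."""
    config.load_kube_config()  # assumes kubeconfig targets the cluster
    nodes = client.CoreV1Api().list_node().items
    zones = Counter()
    for node in nodes:
        labels = node.metadata.labels or {}
        zone = next((labels[k] for k in ZONE_LABELS if k in labels), "unknown")
        zones[zone] += 1
    return zones

if __name__ == "__main__":
    for zone, count in sorted(zone_distribution().items()):
        print(f"{zone}: {count} node(s)")
```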
>> One of the things we've been hearing in the interviews here and obviously in the coverage is that the maturation of Kubernetes, great, check, you guys are pushing these pressure points, which is great cause you're actually using it. What are the key visibility points that you're seeing where value's being created, and two what're some of the key learnings that you guys have had? I mean, so you're starting to see some visibility around where people can have value in the stack. Well, or not stack, but in the open source and create value and then learnings that you guys have had. >> Right, right, right. I mean for us the key value here is first of all providing a certified Kubernetes platform, right? I mean, Kubernetes has matured. It has gotten better. It's very mature. You can run production workloads on it no doubt. We've got many many examples of it so providing a certified managed solution around that where customers can focus on their application and not so much the platform, highly valuable right? Because it's certified, they can code to Kubernetes. We always push our teams both internal and external focus on Kubernetes, focus on building a Kube native experience cause that's going to give you the best portability ability moving whether you're using IBM cloud or another cloud provider right? It's a fully certified platform for that. >> Dan, you know, it's one thing if you're building on that platform but what experience do you have of taking big applications and moving it on there? I remember a year or two ago it seemed like it was sexy to talk about lift and shift and most people understand it's like really you just can't take what you had and take advantage of it. You need to be, it might be part of the journey but I'm sure you've got a lot of experiences there. >> Yeah we've got, I mean, we've seen almost every type of workload now cause a lot of people were asking Well, what kind of workloads can you containerize? Can you move to Kubernetes? Based on what we've seen pretty much all of them can move so and we do see a lot of the whole lifT and shift and just put it on Kubernetes but they really don't get the value and we've seen some really crazy uses of Kubernetes where they're on Kubernetes but they're not really, like what I say Kube native. They're not adhering to the Kubernetes principles and practices and therefore they don't get the full value so they're on Kubernetes and they get some of the okay we're doing some health checking but they don't have the proper probes right? They don't have the proper scheduling hints. They don't have the proper quotas. They don't have the proper limits. So they're not properly using Kubernetes so therefore they don't get the full advantage out of it. So what we're seeing a lot though is that customers do that lift and shift, but ultimately they have to, they have to rewrite a lot of what they're doing. To get the most value, and this is true of cloud and cloud native, ultimately at the end of the day if you truly want to get the value of cloud and cloud native you're going to do a rewrite eventually and that will be full cloud native. You're going to take advantage of the APIs and you're going to follow the best practices and the concepts of the platform. 
Containers give you some luxury to play with workloads that you maybe don't have time to migrate over, but this brings up the point of the question that we hear a lot, and I want to get your thoughts on this, because the world's getting educated very fast on cloud native and rearchitecting, replatforming, whatever word you want to use, reimagining their infrastructure. How do you see multicloud driving the investment or architectural thinking with customers? What are some of the things that you see that are important for 2019, as people are saying, you know what? My IT is transforming, we know that, we're going to be a multicloud world. I've got to make investments. >> You definitely have to make those. >> What are those investments architecturally, how should they lay those out? What are your thoughts? >> So my thought there is, ultimately, you've got to focus on a standardized platform that you're going to use across those, because multicloud, it's here. It's here to stay, whether it's just on premises and you're doing off premises, or you're doing on premises and multiple cloud vendors, and that's where everybody's going, and give it another six to 12 months, that's going to be the practice. That's going to be what everybody does. You're not on one cloud provider, you're on multiple. So standardization, community, massive. Do you have a community around that? You can't vendor lock in if you're going to be doing portability across all of these cloud providers. Standardization, governance around the platform, the certification, so with Kubernetes you have a certification process, you certify every version, so you at least know, I'm using a vendor that's certified, right? I have some promise that my application's going to run on that. Now, is that as simple as, well, I picked a certified Kubernetes and therefore I should be able to run my application? Not so simple. >> And operationally, they're running CICD, you've got to run that over the top. >> You've got to have a common, yeah, you've got to have a common observability model across all of that, what you're logging, what you're monitoring, what's your CICD process. You've got to have a common CICD process that's going to go across all of those cloud providers, right, all of your cloud environments. >> Dan, take us inside. How're we doing with security? It's one of those sort of choke points. Go back to containers when they first started through to Kubernetes. Are we doing well on security now, and where do we need to go? >> Are we doing well on it? Yes we are. I think we're doing extremely well on security. Do we have room for improvement? Absolutely, everybody does. I've just spent the last eight months doing compliance and compliance work. That's not necessarily security, but it dips into it quite often, right? Security is a central focus. Anybody doing public cloud, especially providers, we're highly focused on security and you've got to secure your platforms. I think with Kubernetes it's, first of all, providing proper isolation, and customers need to understand, what levels of isolation am I getting? What levels of sharing am I getting? Are those well documented, and do I understand what my provider's providing me? But the community's improving. Things that we're seeing around Kubernetes and what they're doing with secrets and proper encryption, Notary with the image repositories and everything, all of that plays into providing a more secure platform, so we're getting there, things are getting better.
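A small sketch of what "standardize across providers" can look like in practice: iterate over every context in a kubeconfig, whichever clouds they point at, and make sure a common namespace for shared observability tooling exists in each one. The namespace name is an assumption; the point is that one script or pipeline step treats every certified cluster the same way.

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

OBS_NAMESPACE = "observability"  # assumed name for shared logging/monitoring

def ensure_namespace(ctx_name):
    """Create the shared namespace in the cluster behind one kubeconfig context."""
    config.load_kube_config(context=ctx_name)
    core = client.CoreV1Api()
    ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=OBS_NAMESPACE))
    try:
        core.create_namespace(ns)
        return "created"
    except ApiException as err:
        if err.status == 409:  # already exists: fine, the step is idempotent
            return "already present"
        raise

if __name__ == "__main__":
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        print(f"{name}: namespace '{OBS_NAMESPACE}' {ensure_namespace(name)}")
```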
Well, there was a recent vulnerability that just got patched rather fast. >> Daniel: There was. >> It seemed like it moved really quickly. What do we learn from that? >> Well, we've learned that Kubernetes itself is not perfect, right? Actually, I would be a little bit concerned if we didn't find a security hole, because then that means there's not enough adoption, or we just haven't found the problems. Yes, we found a security hole. The thing is, the community addressed it, communicated it, and all of the vendors provided a patch very quickly, and many of them, like with IKS, we rolled out the patch to all of our clusters, all of our customers, they didn't have to do anything, and I believe Google did the same thing. So these are things that the community is improving, we're maturing, and we're handling those security problems. >> Dan, talk about the flexibility that Kubernetes provides. Certainly you mentioned earlier the value that can be extracted if you do it properly. Some people like to roll their own Kubernetes, or they want the managed service because it streamlines things a bit faster. When do I want a managed service? When do I want to roll my own? Is there kind of a feel? Is it more of a staffing thing? Is it more scale? Is it more application, like financial services might want to roll their own? We're starting to maybe see it differ by industry. What's your take on this? >> Well, obviously I'm going to be super biased on this. But my belief there is that, I mean, obviously if you're going to be doing on premises and you need a lot of flexibility, you need flexibility of the kernel, you may need to roll your own, right? Because at that point you can control and drive a lot of the flexibility in there, understanding that you take on the responsibility of deploying and managing and updating your platform, which means generally that's an investment you're going to make that takes away from your critical investment of your developers on your business, so personally I would say first and foremost... >> It's a big investment. >> It's a massive investment. I mean, look at what the vendor, look at IKS. I've got a large team. They live and breathe Kubernetes. Live and breathe every single release, test it, validate it, roll updates. We're experts at updating Kubernetes without any downtime. That's a massive investment. Let the experts do it. Focus on your business. >> John: And that's where the managed piece shines. >> That's where the managed piece absolutely shines. >> Okay, so the question about automation comes up. I want to get your thoughts on the future state of Kubernetes, because, you know, we go down the cloud native DevOps model. We want to automate away things. >> Daniel: Yes. >> Kubernetes is some differentiation there, but I don't want to manage clusters. I don't want to manage it. I want it automated. >> Daniel: Yeah. >> So is it automating faster? Is it going to be automated? What's your take on the automation component? When and where and how? >> Well, I mean, through the managed service, I mean, it's cloud native. It's all API driven, CLIs. You've got one command and you're scaling up a cluster. You get a cluster with one command, you can go across multiple zones with one command. Your cluster needs to be updated, you call one command and you go home. >> John: That sounds automated to me. >> I mean, that's fully automated, and that's the only way that we can scale that. We're talking about thousands of updates on a daily basis. We're talking about tens of thousands of clusters, fully automated.
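One way to see the value of that automated patch rollout from the consumer side is to check what version the API server actually reports and compare it against whatever minimum you consider patched. The sketch below uses the Kubernetes Python client; the baseline version is a placeholder you would set yourself, not a statement about any specific CVE.

```python
import re

from kubernetes import client, config

MIN_PATCHED = (1, 12, 3)  # placeholder: set this to your own patched baseline

def parse_version(git_version):
    """Turn a server version string like 'v1.12.3+IKS' into (1, 12, 3)."""
    match = re.match(r"v(\d+)\.(\d+)\.(\d+)", git_version)
    if not match:
        raise ValueError(f"unexpected version string: {git_version}")
    return tuple(int(part) for part in match.groups())

if __name__ == "__main__":
    config.load_kube_config()              # assumes kubeconfig is set up
    info = client.VersionApi().get_code()  # queries the API server
    current = parse_version(info.git_version)
    status = "OK" if current >= MIN_PATCHED else "NEEDS UPDATE"
    print(f"API server {info.git_version}: {status} (baseline {MIN_PATCHED})")
```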
A lot of people have been talking the past couple of weeks around this notion of, well, all containers might have security boundary issues, so let's put a VM around them. Is that here to stay, or is it maybe just more of a fix? 'Cause why do I want to have a VM, or is it better to just keep it native? Is that a real conversation or is that FUD? >> I mean, it is a real conversation, because people are starting to understand what the proper isolation levels are with my cluster. My personal belief around that is you really only need that level of isolation, those mini VMs, around your containers. Running a single container in a single VM seems overkill to me. However, if you're running a multitenant cluster with untrusted content, you'd better be taking extra precautions. First and foremost I would say don't do it, because you're adding risk, right? But if you're going to do it, yes, you might start looking at those types. But if you're running a cluster, and it's an isolated cluster with full isolation levels all the way down to the hardware, in a trusted environment, trust being it's your organization, it's your code, I think it's overkill then. >> Future of Kubernetes, what happens next? People are hot on this. You've got service meshes, a lot of other goodness. People are really trying to stay with the pace, a lot of change and again a lot of education. But it's not a stack. Like, I hear words like Kubernetes stack, and the CNCF has a stack. So it's not necessarily a stack per se. >> Right, it's not. >> Clarify the language around what we're talking about here. What's a stack? What's not a stack? It's all services. >> Look at it this way. So Kubernetes has done a phenomenal job as a project in the community to state exactly what it's trying to achieve, right? It is a platform. It is a platform for running cloud native applications. That is what it is, and it allows vendors to build on top of it. It allows customers to build on it, and it's not trying to grow larger than that. It's just trying to improve that platform overall, and that's what's fantastic about Kubernetes, because that allows us, and when you see the stack, it's really cloud native: what pieces am I going to add to that awesome platform to make my life even better? Knative, Istio, a service mesh. I'm going to put that on because I'm evolving, I'm doing more microservices. I'm going to build that on top of it. Inside of IBM we did Cloud Foundry Enterprise Environment, CFEE, Cloud Foundry on Kubernetes. Why not, right? It's a perfect combination. It's just going up a level, and it's providing more usability, different prescriptive uses of Kubernetes, but Kubernetes is the platform. >> When I think about the composability of services, it's not a stack. It's Lego blocks. >> Daniel: Yeah, it's pieces. I'm using different pieces here, there, everywhere. >> All right, well, Daniel, thanks for coming on, sharing great insight. Congratulations on your success running major workloads within IBM for you guys and the customers. Again, just the beginning, Kubernetes is just the beginning. Congratulations. Here inside theCUBE we're breaking down all the action. Three days of live coverage. We're at day one at KubeCon and CloudNativeCon. We'll be right back with more coverage after this short break.
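Following up on the isolation discussion earlier in this transcript, here is a hedged sketch of the direction Kubernetes itself took for wrapping a stronger, VM-like sandbox around individual pods: a pod spec that references a RuntimeClass backed by a sandboxed runtime such as gVisor or Kata Containers. RuntimeClass matured after the late-2018 timeframe of this conversation, the class name "sandboxed" is hypothetical, the image is a placeholder, and the runtime must already be installed and registered on the nodes.

```python
from kubernetes import client, config

# Illustrative pod that asks for a sandboxed runtime. It assumes a RuntimeClass
# named "sandboxed" already exists on the cluster and maps to a runtime such as
# gVisor or Kata Containers installed on the nodes.
untrusted_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "untrusted-job"},
    "spec": {
        "runtimeClassName": "sandboxed",  # extra VM-like isolation boundary
        "restartPolicy": "Never",
        "containers": [{
            "name": "job",
            "image": "registry.example.com/tenant-a/job:latest",
            "resources": {
                "requests": {"cpu": "100m", "memory": "128Mi"},
                "limits": {"cpu": "500m", "memory": "256Mi"},
            },
        }],
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # assumes a kubeconfig for the target cluster
    client.CoreV1Api().create_namespaced_pod(namespace="default",
                                             body=untrusted_pod)
    print("Pod 'untrusted-job' scheduled with the sandboxed RuntimeClass.")
```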