Steve Canepa & Jeffrey Hammond | CUBE Conversation, December 2020
(upbeat music) >> From theCUBE Studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is theCUBE Conversation. >> Hi, I'm John Walls. And as we're all aware, technology continues to evolve these days at an incredible pace and it's changing the way industries are doing their business all over the world and that's certainly true in telecommunications. CSPs all around the globe are developing plans on how to leverage the power of 5G technology and their network operations are certainly central to that mission. That is the genesis of the "IBM Cloud for Telecommunications" service. That's a unified open hybrid architecture that was recently launched and was developed to provide telecoms with the solutions they need to meet their very unique network demands and needs. I want us to talk more about that. I'm joined by Steve Canepa, who is the Global GM and Managing Director of the communications sector at IBM. Steve, good to see you today. >> Yeah, you too, John. >> And Jeffrey Hammond. So, he's the Principal Analyst and Vice President at Forrester. Jeffrey, thank you for your time as well today. Good to see you. >> Thanks a lot. It's great to be here. >> Yeah, Steve, let's just jump right in. First off, I mean, to me, the overarching question is, why telecom? I know that IBM has been very focused on providing these kinds of industry-specific services, you've done very well in finance, now you're shifting over to telecom. What was the driver there? >> First, great to be with you today, John, and, you know, if we look at the marketplace, especially in 2020, I think the one thing that everyone can agree with is that the rate and pace of change is just really accelerating and it is a very, very dynamic marketplace. And so, if we look at the way both our personal lives are now guided by connectivity, and the use of multiple devices throughout the day, the same with our professional lives. So, connectivity really sits at the heart of how value and solutions are delivered and for businesses, this is becoming a critical issue. So, as we work with the telecommunication providers around the world, we're helping them transform their business to make it much more agile, to make it open and make them deliver new services much more quickly and to engage digitally with their clients to bring that kind of experience that we all expect now. So, that rate and pace of change, and the need for the telecommunications industry to bring new value, is really driving a tremendous opportunity for us to work with them. >> Jeffrey, what's happening in the telecom space? That, I mean, these aren't just small trends, right? These are tectonic shifts that are going on in terms of their new capabilities and their needs. I'm sure this digital transformation has been driven in some part by COVID, but there are other forces going on here, I would assume too. What do you see from your analyst seat? >> Yeah, I look at it, you know, from a glass half full and a glass half empty approach. From a half empty approach, the shifts to remote work and remote learning, and from traditional retail channels, brick and mortar channels, to digital ones, have really put a strain on the existing networking infrastructure, especially at the Edge, but they've also demonstrated just how critical it is to get that right. You know, as an example, I'm actually talking to you today over my hotspot on my iPhone.
So, I think a lot more about the performance of my local cell tower now than I ever did a year ago. And I want it to be as good as it can possibly be and give me as many capabilities as it can. From a glass half full perspective, the opportunities that a modernized network infrastructure gives us are, I think, more readily apparent than ever. You know, most of my wife's doctor's appointments have shifted to remote appointments and every time she calls up to connect, I kind of cringe in the other room and it's like, are they going to get video working? Are they going to get audio working? Are they actually going to have to shift to an old-style phone call to make this happen? Well, things like 5G really are poised to solve those kinds of challenges. They promise, 5G promises, exponential improvements in connectivity speed, capacity, and reductions in latency that are going to allow us to look at some really interesting workloads, IoT workloads, automation workloads, and a lot of Edge use cases. I think 5G sets the stage for Edge compute. Expanding Edge compute scenarios make it possible to distribute data and services where businesses can best optimize their outcomes, whether it's IoT-enabled assets, whether it's connected environments, whether it's personalization, whether it's rich content, AI, or even extended reality workloads. So, that might seem like it's a little over the horizon, but it's actually not that far away. And as companies gain the ability to manage and analyze and localize their data, and unlock real-time insights in a way that they just haven't had before, it can drive expanded engagement and automation in close proximity to the endpoint devices and customers. And none of that happens without the telco providers and the infrastructure that they own being on board and providing the capabilities for developers like me to take advantage of the infrastructure that they've put in place. So, my perspective on it is, that transformation, that digital transformation, is not going to happen on its own. Someone's got to provision the infrastructure, someone's got to write the code, someone's got to get the services as close to my cell tower or to the Edge as possible and so, that's one of the reasons that when we ask decision makers in the telco space about their priorities from a business perspective, what they tell us is, one of their top three priorities is, we need to improve our ability to innovate, and the other two are, we need to grow our revenue and we need to improve our products and services. What's going on from a software perspective in the telco space is set to make all three of those possible, from my perspective. >> You know, Steve, Jeffrey just unpacked an awful lot there, did a really nice job of that. So, let's talk about, first off, that telco relationship IBM's had, or has. You work with the 10 largest communication service providers in the world, and I'm sure you're on this journey with them, right? They've been telling you about their challenges and you recognize their needs. So, maybe you have some specific examples of that dialogue, that has progressed as your relationship has matured and you provide a different service to them. What are they telling you? What did they tell you? Say, "This is where we have got to get better. We've got to get a little sharper, a little leaner." And then how did IBM respond to that? >> Yeah, I mean, critical to what Jeffrey just shared is under the covers.
You know, 5G is going to take five times the cost that 4G took to deploy. So, if you're a telco, you have to get much more efficient. You have to drive a much more effective TCO into the cost of deploying and managing and running that network architecture. When the network becomes a software defined platform, it opens up the opportunity to use open source, open technology, and to drive a tremendous ecosystem of innovation so you can then capture that value onto that open software network. And as the Edge emerges, and compute and storage and connectivity move out to the Edge as Jeffrey described, then there's the opportunity to deliver B2B use cases, to take advantage of the latency improvements with 5G, take advantage of the bandwidth capabilities that you have moving video and AI out to the Edge, so you can create insights as a service. These are the underlying transformations that the telcos are making right now to capture this value. And in fact, we have an Institute for Business Value on our website. You can see some of the surveys and analysis we've done, but 84% of the telco clients say, you know, "Improving the automation and the intelligence of this network platform becomes critical." So, from our standpoint, we see a tremendous opportunity to create an open architecture to allow the telcos to regain control of their architecture so that they can pick the solutions and services that work best for them to create value for their customers and then allows them to deploy them incredibly quickly. In fact, just this last week, we announced a milestone with Bharti, a project that we're doing in India; they already have over 300 million subscribers. We've taken their ability to deploy their RAN environment, one of the core domains of the network, where you actually do the access over the cell towers. We've improved that from weeks down to a few days. In fact, our objective is to get to a few minutes. Applying that kind of automation dramatically improves the kind of service they can deliver. When we talk about relationships we have with Vodafone, AT&T, Verizon, about working with them on their mobile Edge compute platforms, it will allow them to extend their network. In fact, with our cloud announcement that you highlighted at the top, we announced a capability called IBM Cloud Satellite and what IBM Cloud Satellite does is, it's built with Red Hat, so it's open architecture, it takes advantage of the millions and millions of upstream developers that are developing every single day to build a foundational OpenShift architecture that allows us to deploy these services so quickly, and we can move that capability right now to the Edge. What that means for a telco is they can deploy those services wherever they want to deploy them, on their private infrastructure or on a public cloud, on a customer's premise, that gives them the flexibility. The automation allows them to do it smartly and very quickly and then in partnering with clients, they can create new Edge services, things like, you know, Manufacturing 4.0 you may have heard of, or as you mentioned, advanced healthcare services. Every single industry is going to take advantage of these changes and we're really excited about the opportunity to work in combination with the telcos and speed the pace of innovation in the market. >> Jeffrey, I'd like to go back to Bharti there. I was going to get into it a little bit later but Steve brought it up. This major Indian CSP, as you mentioned, 300 million subs, 400 million around the world.
What does that say to you in terms of its commitment and its, the needs that are being addressed and how it's going to fundamentally change the way it is doing business as far as setting the pace in the telecom industry? >> Well, I think one of the things that it highlights is, you know, this isn't just a U.S. phenomenon or a European phenomenon. Indeed, in some cases we're seeing countries outside the U.S. in advance, moving faster, Switzerland, as an example. We expect 90% of the population in Germany to be covered by 5G by 2025, we expect 90% of the population in South Korea to be covered by 2026, 160 million connections in China as well. So, in some ways, what's happening in the telco world is mirroring what has happened in the public cloud world, which is the world's gone flat. And that's great from a developer perspective because that means that I don't have to learn specialized technologies or specialized services in order to look at these network infrastructure platforms as part of the addressable surface that I have. That's one of the things that I think has always held the larger developer population back and has kept them from taking advantage of the telco networks: they've always been a bit of a black box to the vast majority of developers, you know, IP goes in, IP comes out, but that's about all the control I have, unless I want to go and dig deep into those, you know, industry specific specifications. I was cleaning out my office last week because I'm in the process of moving and I came across my "IMS Explained" handbook from 2006, and I remember going deep into that because, you know, we were told that that's going to make it so that IT infrastructure and telco infrastructure is going to converge, and it did a little bit, but not in a way that all the developers out there could really take advantage of telco infrastructure. And then I remember the next thing was like, well, "Java ME on the front end with mobile clients, that's going to make everything different and we're going to be able to build apps everywhere." What it ended up being was we would write once and test everywhere, across all the different devices that we had to support. And you know, what really drove ubiquity? It was the iPhone and apps that we could build with HTML-like technology or that we could use Java to build, and it exploded. And we got millions of applications on the front end of the network. What I see potentially happening now is the same thing on the backend infrastructure side, because the reality is for any developer that is trying to build modern applications, that's trying to take advantage of cloud native technologies, things start with containers and specifically, OCI compliant containers. That is the basis for how we think about building services and handing them off to operators to run them for us. And with what's going on here, by building on top of OpenShift, you take that, you know, essentially de facto standard of containers as the way that we communicate on the infrastructure side globally, from a software development perspective, and you make that the entry point for developers into the modern telco ecosystem. And so, basically, it means that if I want to push all the way out to the Edge and I want to get as close as I possibly can, as long as I can give you a container to execute that capability, I'm well on the way to making that a reality, that's a game changer in my opinion. >> Yeah, I was on.
>> Just to pick up, just if I could, just to pick up on that because I think Jeffrey made a really important point. So it's kind of like, in a way, the ante to play ball here is this open architecture, because it empowers the entire ecosystem and it allows the telcos to take advantage of enormous innovation that's happening in the marketplace. And that's why, you know, the 35 ecosystem partners that we announced when we announced the IBM Cloud for telco, that's why they're so important, because it allows you to have choice. But the other piece, which he hinted at, I wanted to just underscore, is today, in kind of the first wave of cloud, only about 20% of the applications moved to cloud. They were mostly front-end digital applications. In fact, we moved our front-end digital applications as well into Watson; we have over 1.5 billion customers of telcos today around the world that can access Watson, through our various chatbot and call center or agent assist solutions we've deployed. But the 80% of applications that haven't moved yet, haven't moved because it's tough to move them, because they're mission critical, they need, you know, regulatory controls, they have to have world-class security, they need to be able to provide data sovereignty as you're operating in different countries around the world and you have to make sure that you have the data in places that you need. These are the attributes that kind of open up the opportunity for all these other workloads to move. And those are the exact kind of capabilities that we've built into the IBM Cloud for telco, so that we can enable telcos to move their applications into this environment safely, securely, and do it, as Jeffrey described, on an open architecture that gives them that agility and flexibility. And we're seeing it happen real time, you know, I'll just give you another quick example, Vodafone India, their CTO has said publicly, in moving to this cloud architecture, he sees it as a universal cloud architecture, so they're going to run not just their internal IT workloads, not just their network services, their voice, data and multimedia network services workloads, but also their B2B enterprise workloads, as Jeffrey was starting to describe. Those workloads that are going to move out to the Edge. And by being able to run on a common platform, he's said publicly that they're seeing an 80% improvement in their CapEx, a 50% improvement in their OpEx, and then a 90% improvement in the cost to get products and services deployed. So, the ability to embrace this open architecture and to have the underlying capabilities and attributes in a cloud platform that responds to the specific needs of telco and enterprise workloads, we think is a really powerful combination. >> Steve, the ecosystem, Jeffrey, you brought it up as well. So, I'd like to just give you a moment to talk about that a little bit, not a small point by any means, you have nearly 40 partners lined up in this respect, from hardware vendors, software vendors, SaaS providers. I mean, it's a pretty impressive lineup and what kind of a statement is that, from your perspective, that you're making to the marketplace when you bring that kind of breadth and depth, that kind of bench, basically, to the game? >> From our view, it's exciting, and we're only getting started.
I mean, we literally have not made the announcement, just a matter of a couple of months ago, and every day that passes, we have additional partners that see the power in joining this open architecture approach that we've put in place. The reason that it delivers such value for all the players, you know, one of the hallmarks of a platform approach is that for every player that joins the platform, it brings value to all the players on the platform. So as we build this ecosystem and we leverage the open source community, and we build on the power of OpenShift and containers, as Jeffrey was saying, we're creating momentum in the marketplace and, back to my very first point I made, when the market's moving really quickly, you've got to be agile. And to be agile in today's market, you have to infuse automation at scale, you have to infuse security at scale and you have to infuse intelligence at scale. And that's exactly what we can help the telcos do, and do it in partnership with these enterprise clients. >> Instinctively, one of the values of that is that, you know, we're seeing the larger trend in the cloud native space of folks that used to build packaged software services essentially taking advantage of these architectural capabilities and containerizing their applications as part of their future strategy. I mean, just two weeks ago, Salesforce basically said, we're re-envisioning Salesforce as a set of containerized workloads that we deliver; SAP is going in very much the same direction. So as you think about these business workloads, where you get data coming from the infrastructure and you want to go all the way back to the back office and you want to make sure that data gets updated in your supply chain management system, being able to do that with a consistent architecture makes these integration challenges just an order of magnitude easier. I actually want to drill in on that data point for a minute because I think that that's also key to understanding what's going on here, because, you know, during the early days of the public cloud, and even Web 2.0 before that, one of the things that drove Web 2.0 was the idea that data is the new Intel Inside, and in some ways that was around centralized data, because we had 40 or 50 years to get all the data into the data centers, and then put it in the public cloud. But that's not what is happening today. So much of the new data is actually originating at the Edge and increasingly it needs to stay at the Edge, if for no other reason than to make sure that the folks that are trying to use it, well, aren't running up huge ingestion costs, trying to move it all back to the public cloud providers, analyze it and then push it back out and do that within the realm of the laws of physics. So, you know, one of the big things that's driving the Edge, and the move toward the Edge, and the interest in 5G, is that it allows us to do more with data where the data originates. So, as an example, a manufacturer that I've been working with basically came across exactly that problem; as they stood up more and more connected devices, they were seeing their data ingestion volume spiking and kind of running ahead of their budgets for data ingestion, but they were like, well, we can't just leave this data and discard it at the Edge, because what happens if it turns out to be valuable for the maintenance, preventative maintenance use cases that we want to run, or for the machine wear characteristics that we want to run.
So, we need to find a way to get our models out close to the data so we don't have to bring it all back to the core. In retailing, personalization is something that a lot of folks are looking at right now, and even clienteling, and that's, again, another situation where you want to get the data close to where the customer actually lives from a geographic basis and into the hands of the person that's in the store, but you don't want to necessarily have to go and install a lot of complex hardware in the retail outlet, because then somebody has to manage, you know, those servers and manage all those capabilities. So, you know, in the case of the retailer that I was working with, what they wanted was to get that capability as close as possible to the store, but no closer. And the idea of essentially a virtual back office that they could stand up whenever they opened up a new retail outlet, or even had a franchisee open up an outlet, was an extremely powerful concept, and that's the kind of thing that you can do when you're saying, "Well, it's just a set of containers and if I have a, you know, essentially a control plane that I deploy it to, then I can do that on top of that telco provider that they sign up with to be a strategic services provider." There are lots of other interesting scenarios, tourism, if you think about, you know, the tourist economies that we have around the world and the data that, you know, mobile devices throw off that let us get anonymized information about who's coming, where they're going, what they're spending, how long they're staying, there's a huge set of data there that you can use to grow revenue. You know, other types of use cases, transportation? We see, you know, municipal governments kind of looking at how they can use anonymized data around commute patterns to impact their planning. That's all data that's coming from the telco infrastructure. >> You know, when we're talking about these massive advantages, right, of this hybrid cloud approach, about skill once, build once, easy management, efficient management, all of these things, Steve, I think we almost, we'd be derelict in our duty if we didn't talk about security a little bit. Just ultimately at the end of the day, you've got to provide this, as you pointed out, world-class secure environment. And so, in terms of the hybrid approach, what kind of considerations do you have to make that are special to that and that are being deployed and have been considered? >> You know, that's a great point. One of the benefits to Comms from moving to an open architecture is that you componentize the framework of that architecture, and you have suppliers supplying applications for the various different services that we just talked through. And the ability then to integrate security is essentially a foundational element to the entire architecture. We've stayed very compliant with the Nanci framework architecture and the way that we've worked with the telcos in bringing forth a solution, because we specifically want them to have the choice, but how is that choice being married with the kind of security you just talked about. And to Jeffrey's point, you know, when you move those applications out to the Edge, and that data, you know, many of the analysts are saying now by 2025, as much as 75% of the data created in the world will happen at the Edge. So, this is a massive shift.
And when that shift occurs, you have to have the security to make sure that you're going to take care of that data in the way that it should be, and that meets all regulatory, you know, governance rules and regulations. So, that becomes really critical. The other piece, though, is just the amount of value that gets created. The reason that data is at the Edge is because now you can act on it at the Edge, you can extract insights, and in fact, most of the analysts will say, "In the next three years, we'll see $675 billion of new value created at the Edge with these kinds of applications." And going back to the manufacturing example, I mean, we're already working today with manufacturers and they already have, you know, hundreds of IoT sensors deployed in the factory, and we have an Edge application manager that extends right out to the far Edge, if you will, right out onto that factory floor, to help get intelligence from those devices. But now think about adding to that the AI capabilities, the video capabilities, watching that manufacturing line to make sure every product that comes off that line is absolutely perfect, watching the employees to make sure they're staying in safety zones, you know, watching the actual equipment itself to make sure it is performing the way it's supposed to, maybe using analytics and AI capabilities to predict, you know, issues that might arise before they even happen, so you can take preventative action. This kind of intelligence, you know, makes the business run smarter, faster, more effective. So, that's where we see tremendous service. So, it's not just the fact that data will be created, and it will be higher fidelity data, to include analytics, AI, and unstructured data like video data and image data, audio data, but the ability to then extract insights and value out of it. And this is why we believe the ecosystem we talked about earlier, our partnership with the telcos and the ability to bring in ecosystem partners that can add value, is just tremendous momentum that we're going to build. >> Well, the market opportunity is certainly great. As you pointed out, a lot of additional value yet to be created, significant value and obviously, a lot of money to be spent as well by telcos, by some estimates, a hundred billion plus, just by the year 2022, in getting these new software-defined platforms up and running. So, congratulations to IBM for this launch and we wish you continued success, Steve, in that endeavor and thank you for your time and Jeffrey, thank you as well for your insights from Forrester. >> Always a pleasure. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Steve | PERSON | 0.99+ |
Jeffrey | PERSON | 0.99+ |
Vodafone | ORGANIZATION | 0.99+ |
Steve Canepa | PERSON | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
Germany | LOCATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Jeffrey Hammond | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
India | LOCATION | 0.99+ |
December 2020 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
John | PERSON | 0.99+ |
$675 billion | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
80% | QUANTITY | 0.99+ |
China | LOCATION | 0.99+ |
40 | QUANTITY | 0.99+ |
75% | QUANTITY | 0.99+ |
South Korea | LOCATION | 0.99+ |
90% | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
millions | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
U.S | LOCATION | 0.99+ |
Vodafone India | ORGANIZATION | 0.99+ |
84% | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
2025 | DATE | 0.99+ |
telco | ORGANIZATION | 0.99+ |
50 years | QUANTITY | 0.99+ |
Bharti | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
five times | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
SAS | ORGANIZATION | 0.99+ |
IMS Explained Handbook | TITLE | 0.99+ |
two | QUANTITY | 0.99+ |
AT$T | ORGANIZATION | 0.99+ |
5G | ORGANIZATION | 0.99+ |
300 million | QUANTITY | 0.99+ |
Forrester | ORGANIZATION | 0.99+ |
Switzerland | LOCATION | 0.99+ |
400 million | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
three | QUANTITY | 0.99+ |
Edge | ORGANIZATION | 0.99+ |
two weeks ago | DATE | 0.98+ |
2022 | DATE | 0.98+ |
today | DATE | 0.98+ |
Watson | TITLE | 0.98+ |
35 ecosystem partners | QUANTITY | 0.98+ |
2006 | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
over 300 million subscribers | QUANTITY | 0.98+ |
2026 | DATE | 0.98+ |
telcos | ORGANIZATION | 0.98+ |
over 1.5 billion customers | QUANTITY | 0.98+ |
Archana Kesavan, ThousandEyes | CUBEConversation, September 2019
(upbeat instrumental music) >> Narrator: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hey welcome back everybody, Jeff Frick here with theCUBE. We're in our Palo Alto offices for a CUBE Conversation today. We're going to talk about an interesting topic. You know, as all these applications get more complex and they're all Internet based, I'm sure you know that feeling when you're at home and you lose your Internet, you pretty much can't do much of anything. So what can we do about that? Who are some of the companies that are working on this problem? We're real excited to have an innovator in this space from ThousandEyes. She's Archana Kesavan, Director of Product Marketing for ThousandEyes, welcome. >> Archana: Thank you Jeff, it's good to be here. >> Absolutely, so this is crazy. Give us kind of the run-down on ThousandEyes and what you do and then we'll jump into it. >> Sure, so ThousandEyes is a company that provides and enables enterprises, gives them visibility into how the Internet is impacting end-user experience, right? When you think of what users are, what this user experience is, it could be twofold. One is if you're an enterprise providing a digital service, then they're your customers, right? So that customer experience, we provide visibility into that. Then also if you're an enterprise moving towards using cloud applications or SaaS applications, employees using those applications, we provide visibility into that space as well. Really the thought and the idea behind ThousandEyes, and the reason we are here, is as enterprises are moving to the cloud and relying on this Internet-based delivery infrastructure, they're starting to lose visibility into their critical customer-facing and employee-facing applications. What ThousandEyes does is it gives them back that control by giving them that visibility into that environment. >> Okay so then just to be clear, because there's a ton of kind of monitoring applications, we use Sumo Logic, we do Splunk. So there's a lot of things around operations where they're monitoring these apps, and they're super complex apps. But you guys' main focus, if I understand, is the network. The network piece and the transportation of that app across the wire. >> Right, let me unpack that and explain with an example, right. Let's say you're an enterprise that's moving towards Office 365 and you have a global workforce, right? Your users are connecting remotely and your VP of sales happens to connect from a Starbucks or a Philz because we're in Palo Alto. Can't download emails, can't get to emails. What's the first step this person or this employee's going to take? Call corporate IT and say hey, I can't get to my emails. Now it's up to the corporate IT team to go and troubleshoot that scenario, right? Because if you can't get to your emails or you can't get to these collaboration apps today, it's productivity down the hill. The IT team now starts troubleshooting it and where do they start? Is it the WiFi at the Philz that's a problem? Is it Microsoft that's a problem, because of which I can't get to my email? Or is it that access in between, which is the Internet, right? How do you get from a Philz all the way to Office 365 is through that Internet transport. So where we come in is, irrespective of the application or even the network, right, we're very agnostic to it. And we combine application performance all the way to the network performance.
We take it one step further and we see how the Internet is impacting the services throughout. Because what we see is our customers, be that enterprises consuming SaaS, or enterprises delivering these SaaS services, the production teams and the corporate IT teams, they feel the brunt of this every day. They have people calling and saying hey, I can't get to this, I can't get to that application. They have their own customers complaining that something's wrong. Unfortunately in this world of the Internet and the cloud, while it's enabled convenience and flexibility, they've traded in that for control and visibility. So if you again go back to this Office 365 example that I was just talking about, the enterprise does not own the WiFi, of course. It does not own the Internet. Not one entity owns the Internet. It doesn't own Office 365. So monitoring tools that have existed and that have been in place to understand issues within the four walls of an enterprise flatline when it comes to Internet-based delivery and connectivity, which is where we come in. >> What about VPNs, because isn't kind of the purpose of a VPN on one hand is to be secure, 'cause Lord knows who's sniffing on the Philz WiFi. But does that not put you into kind of a higher grade Internet line back to the server to get to my email? >> Archana: Is anybody using VPN these days? >> I hear the ads all the time on the radio. (laughing) I don't know, that's a good question. You guys are sitting out there, are people not using VPN? Does VPN solve their problem? Or is it something that's in the backside that regardless of whether you're using VPN or not, these are kind of backhaul issues that have to get worked out? >> So VPN, if you think about it, it's kind of an encapsulation over the underlying network. You still have to move packets through this network. So you might be connecting through a VPN, but it's the underlying, if you're going through the Internet, then that can result in performance degradation, too. So irrespective of these techniques that enable, or so-called enable, performance and make performance better, you still need to know how the transport's behaving and how it's influencing performance, just because you don't control it. >> And as I understand, the way you guys are doing this is you have a lot, a lot, a lot of monitoring points all over the place, hence ThousandEyes. Tell us a little bit about kind of how that works, what's the network? How has that been growing over time? >> We've been growing our infrastructure, monitoring infrastructure, over the last few years. The way ThousandEyes gathers its data, which you know is all the way from the application layer to the network, kind of then looking at Internet performance, is our fleet of agents that are distributed, are pre-deployed in about 185 cities around the world. We call them Cloud Agents. Now these agents are actively monitoring the services that might be of interest to an enterprise. You can also take a form of these agents and enterprises can deploy them within their own branch offices and their data centers. You can also use them in cloud providers. We actually have agents pre-deployed in AWS, Azure, Google Cloud, and Alibaba too, which we recently announced. You can use these agents to monitor applications. You can use these agents to monitor your API endpoints, which is another growing area that we see. So, fleet of our agents distributed.
You can use that, a combination of agents that we own and pre-deployed along with agents that enterprises would like to put in their own infrastructure. >> Right, so you've got the ones already out there, you've got the ones in the clouds and then I can put some additional ones into my remote offices or places that are of interest to me. So if there's an issue, because you said for tech support when the person can't get into email there's a whole host of potential things it could be, right? Office 365 could be down, there's all kinds of things. How does your application communicate to this poor person on the end of this service call that hey, it's a network issue between these two points? Or maybe it's a big exchange that's getting attacked like happened on the East Coast a couple of years ago. How do they work that into their triage so they know hey, we've been able to kind of identify that this is the issue, not one of the other 47 things that's impacting that application? >> Right, so we are a SaaS-based product. Our uniqueness and our secret sauce is how we look at all of these different layers that affect performance and we correlate them, visually correlate them in a time sequence. We present it to the corporate IT person or a production IT person who is actually triaging this issue. We help them very quickly pinpoint. It's very visual there. You can see how application performance ebbs and flows. You can look at what does the network path look like? If I'm seeing an outage of the Internet service provider, we're going to call that out. Obviously all of this is tied in with an alerting system, which the platform enables as well. I think one of the most interesting changes that's happening in the industry is, in the past when you found an issue, you could fix an issue because the chances are you owned that entire environment, right? It was a router that failed or a switch was dropping packets. You owned that switch, you owned that router. You could go and make changes to it. But in today's Internet-dependent and cloud-heavy environment, it's more about having the right evidence so you can escalate it to the right person. So knowing which neck to choke is absolutely critical in this distributed environment that enterprises are losing control over slowly. >> So people start to make active changes in the way they route their traffic based on what they find? Is there either consistent good or consistent bad behavior in certain networks or certain public clouds that you can get a better latency performance by switching that? >> Sure, we've seen cases where, usually enterprises have, let's take an example of an Internet service provider having an outage. Usually enterprises, for redundancy, they have two upstream providers, for instance, and they're probably load balancing traffic equally across these providers. Once ThousandEyes detects that one provider is completely down, could be a routing issue, could be a router that failed within their environment, once we alert them, it's up to the enterprise to make that decision saying hey, we want to bypass this route, right? And we've seen that happen in a lot of cases. They do bypass routes if it's possible. It also depends on the severity of the issue, how long the issue lasts and things like that. But that definitely happens. >> You guys talk about a concept called Internet-aware Synthetics. What does that mean? >> Synthetics, it's interesting as a term. What it really means is trying to mimic something that's natural.
Just the term synthetics in layman's language, right? Synthetic monitoring is really just that. While you're trying to understand application performance or how a website performs, synthetic monitoring replicates how a user would interact with that application. You replicate those steps and you periodically repeat them over time. Let's take an example. You're shopping online, you're going to Amazon.com. You're searching for whatever it is you're searching for. You get a list of results. You are interested in one item, you look at a review, you seem happy, you move it to your checkout, pay and move on, right? Those sequence of steps is what synthetic monitoring can actually craft. We keep executing those steps periodically so you can understand if there's any degradation of performance, has it slipped from baseline? So IT operations team can use that to understand if there's any change that's happening or if there is a particular area in the world where users are starting to see degradation and so on. The nice thing about synthetics is it's proactive. There's a lot of monitoring techniques out there that looks at real user interaction with the website. And to typically do that you need to insert a piece of code within the application itself that tracks that user's activity. That's great information. You want to see what your users are really doing and engaging with your website. That's very useful but it fundamentally doesn't tell you if performance is completely degraded or the checkout button's not working, for instance. That's where synthetic comes in. >> So is that the primary way that you maintain kind of this testing of the health of the network? Or are you using more of a passive, waiting for something to be slow and then running something like the synthetics to try to figure out where it is? >> The recommendation is to keep synthetics running constantly because you don't want something to slow down and then react. That's a very reactive approach. Really in today's digital economy you don't want an outage to last too long because customer loyalty is fleeting. You don't want even 10 seconds of wait time, right? The way I see it is every time I try to find a cab through Uber, if Uber makes me wait 30 seconds I'm moving on to Lyft. I don't have the patience to wait that long. You don't want outages to prolong so you definitely don't want to understand performance after they have degraded, right? So synthetics recommendation is to continuously monitor so you can find out what's happening and if there's any drift from required baselines. >> Okay and then are you running that concurrently across a number of geographies for the same customer? Because if this same shopper's sitting in Seattle versus if that same shopper is sitting in Mexico City or they're sitting in London are you running that concurrently to make sure that you're checking all the different potential hiccups? >> Our agents, because they are so pervasive across the globe you can pick an agent in one of those 185 cities and you can execute those same sequence of steps over time to actually run that. Now synthetics as a technology is not new. It really predates the cloud. The action of mimicking a user journey through a website, that really predates the cloud which is why it's fundamentally broken when it comes to these cloud and Internet-heavy environments. 
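To make the classic synthetic monitoring described here concrete, the sketch below scripts a user journey, runs it on a schedule, and flags drift from a baseline. The shop URL, the journey steps, and the thresholds are illustrative assumptions for this sketch only; this is not ThousandEyes product code or its API.

```python
# Minimal sketch of classic synthetic monitoring: replay a scripted user
# journey periodically and alert when a step fails or drifts from baseline.
import time
import requests

JOURNEY = [
    ("home",     "GET",  "https://shop.example.com/"),                    # hypothetical site
    ("search",   "GET",  "https://shop.example.com/search?q=headphones"),
    ("add_cart", "POST", "https://shop.example.com/cart"),
    ("checkout", "POST", "https://shop.example.com/checkout"),
]
BASELINE_SECONDS = {"home": 0.5, "search": 0.8, "add_cart": 0.6, "checkout": 1.0}

def run_journey():
    """Execute each step once and return per-step latency (None on failure)."""
    timings = {}
    with requests.Session() as session:
        for name, method, url in JOURNEY:
            start = time.monotonic()
            try:
                response = session.request(method, url, timeout=10)
                ok = response.status_code < 400
            except requests.RequestException:
                ok = False
            timings[name] = time.monotonic() - start if ok else None
    return timings

def check_against_baseline(timings, tolerance=2.0):
    """Alert when a step fails outright or runs slower than tolerance x baseline."""
    for name, elapsed in timings.items():
        if elapsed is None:
            print(f"ALERT: step '{name}' failed")
        elif elapsed > tolerance * BASELINE_SECONDS[name]:
            print(f"ALERT: step '{name}' took {elapsed:.2f}s (baseline {BASELINE_SECONDS[name]}s)")

if __name__ == "__main__":
    while True:                      # run continuously, as recommended above
        check_against_baseline(run_journey())
        time.sleep(300)              # repeat the journey every five minutes
```

A real synthetic agent would also drive a browser and capture per-step network metrics alongside these timings, which is where the Internet-aware piece discussed next comes in.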
What we introduced, ThousandEyes Internet-aware Synthetics, tries to take this age-old technique and tie that together with how the network and how the underlying Internet performs. So when you're looking at performance, you're not looking at it in a silo. Because that's the other thing we hear all the time from our customers. Like the application team has blinders on. They're wanting to see if anything's gone wrong at the application. The network team has its own blinders on, wanting to see if anything's gone wrong with the network, right? And usually what's happening is, if they figure out it's not an application issue then they punt it over to the network team. The network team says ah, not my problem, you take care of it. So there's this constant finger-pointing that happens in today's environment. This pain has really gotten worse in the era of the cloud and Internet-based deliveries because guess what? Your application is first of all split into these microservices. The number of API calls that you are making has gone up, right? And all of these components don't sit in the same place. You're probably running into a hybrid infrastructure environment where some pieces of your code reside in your data center, the others may be in the cloud. Or you're making API calls which is resulting in a multi-cloud scenario. And what is it that's connecting all of these different environments is the actual network and the Internet. So understanding just hey, my app is down, is not good enough any more. You need to know my app is down, it's down because the Internet is causing problems, for instance, right? So what ThousandEyes Internet-aware or network-aware Synthetics does is we look at performance right from the application layer, look at all those transactions, see if they run correctly or not. We tie them into how the underlying network is performing. And hey, if the Internet is causing issues, we tie that in, in a single correlated pane. So you're looking at one single platform and you're able to pinpoint quickly. You gather the evidence to escalate it to the right person. And at the same time you are bringing the application and the network teams together so it's more collaboration. It's not finger-pointing. That's what we really want to enable and what most of our customers actually do with ThousandEyes. >> Before I let you go I want to dig into the Alibaba announcement a little bit more. China is a special challenge in the Internet space. We've done some work over there and none of the Google services work and we use a lot of Google services. How did that come about? Is this a new growing area for you? I would presume there's all kinds of demand from the customers to try to get a little bit deeper penetration into that marketplace. >> China definitely is an interesting space. I mean because of the great firewall and all of the techniques China implements, performance is known to be relatively suboptimal in that region. Fortunately or unfortunately it's the fastest growing market, too. So enterprises want to invest in China. We're seeing a trend where they are moving their services to Ali Cloud. What does that mean for enterprises? You need to monitor that environment, too. Which means you want to understand how performance is from Ali Cloud to Ali Cloud and so on. What we did recently is we increased our vantage points within Ali Cloud. Now you can look at user experience for users connecting from all around the world into Ali Cloud.
You can look at API performance going from Ali Cloud to GCP or AWS, right? I think the key point to remember is that, not just in China but across the world, not all cloud providers are created equal. We found some very interesting data for traffic between Beijing and Singapore. Ali Cloud performed relatively better, no surprises there. But AWS has relatively high performance. Same user from Beijing to AWS's data center in Singapore, they had a very circuitous route to get to Singapore. They were going from China to Tokyo to Singapore. During peak times, eight a.m. to eight p.m. Beijing time, there was a lot of fluctuation showing some kind of congestion in the network, right? Ali Cloud, we didn't see that. Understanding cloud provider performance is absolutely critical. What we do is, our vantage points enable enterprises to do that. One of the initiatives that we at ThousandEyes have been doing for a couple of years now is a comparison of all these providers, AWS, Azure, and Google Cloud, and Ali Cloud now. Last year we had our first report, it's called the Public Cloud Performance Benchmark report, that compared AWS, GCP, and Azure. This year we're expanding it to Ali Cloud as well. So that's launching in November so it's going to be interesting to see. >> Jeff: A lot of people will want to see that one. >> Yes, it's going to be interesting to see who performed better and where. It's always good information. >> Jeff: I was going to ask you if you could share, but I didn't want you to give away any secrets. But I guess we'll have to wait 'til the report comes out. >> Yes, mid-November it's going to be there. >> All right Archana, we'll look forward to that. I'm sure it will be more variable than what most people expect. >> Archana: We'll see. Thanks for having me, Jeff. >> Thank you very much. All right, she's Archana, I'm Jeff, you're watching theCUBE. We're in our Palo Alto studios having a CUBE Conversation. Thanks for watching, we'll see you next time. (upbeat instrumental music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Archana Kesavan | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Archana | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Tokyo | LOCATION | 0.99+ |
Mexico City | LOCATION | 0.99+ |
September 2019 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Singapore | LOCATION | 0.99+ |
30 seconds | QUANTITY | 0.99+ |
China | LOCATION | 0.99+ |
London | LOCATION | 0.99+ |
Seattle | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
November | DATE | 0.99+ |
ThousandEyes | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
Last year | DATE | 0.99+ |
Beijing | LOCATION | 0.99+ |
eight p.m. | DATE | 0.99+ |
Office 365 | TITLE | 0.99+ |
eight a.m. | DATE | 0.99+ |
This year | DATE | 0.99+ |
47 things | QUANTITY | 0.99+ |
Amazon.com | ORGANIZATION | 0.99+ |
10 seconds | QUANTITY | 0.99+ |
one item | QUANTITY | 0.99+ |
one provider | QUANTITY | 0.99+ |
Ali Cloud | TITLE | 0.99+ |
mid-November | DATE | 0.99+ |
two points | QUANTITY | 0.99+ |
Starbucks | ORGANIZATION | 0.99+ |
Philz | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.98+ | |
185 cities | QUANTITY | 0.98+ |
first report | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
GCP | ORGANIZATION | 0.98+ |
East Coast | LOCATION | 0.97+ |
Azure | ORGANIZATION | 0.97+ |
One | QUANTITY | 0.97+ |
Lyft | ORGANIZATION | 0.97+ |
single | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
about 185 cities | QUANTITY | 0.96+ |
two upstream providers | QUANTITY | 0.94+ |
one single platform | QUANTITY | 0.93+ |
first step | QUANTITY | 0.93+ |
Guru Chahal, Avi Networks | Cisco Live US 2018
(techno music) >> Live from Orlando, Florida, it's theCUBE, covering Cisco Live 2018, brought to you by Cisco, NetApp and theCUBE's ecosystem partners. >> Okay, welcome back everyone, it's theCUBE live here in Orlando, Florida for Cisco Live 2018. I'm John Furrier with theCUBE, my cohost Stu Miniman. So our third day of three days of wall-to-wall coverage, the big story here is the transformation, the power of the network, it's becoming computable, it's a great, great story. Our next guest is Guru Chahal, who is the Vice President of Product, Avi Networks. Welcome back to theCUBE, great to see you. >> Thank you, John. Thanks for having me John and Stuart. It's a pleasure being here again. >> So we were just talking before the camera came on about Istio, 'cause Stu wants to go there right away, but we've got to hold off on that, but service meshes is certainly going to be a great thing with Kubernetes and containers, but the story here is the changing nature and power of the network. Susie, who you came on with at DevNet, was talking about how the success of DevNet has been a combination of great timing, of open-source hitting the network, but making the network programmable, opening up new innovations. This is a really big thing, I want to get your reaction to this because you're tied into this trend big time. What does that mean for people that are watching this? They're trying to grok the new way. What is this intent-based network? What's this programmable network? Is it the iPhone kind of moment for networks, where new apps are coming that we've never seen before? Or is it something different? What's your take? >> That's such a great example John, so just a fundamental transformation that the iPhone had on how we think about telephony in general, we're at that sort of moment in the network. And the reason for that, frankly, is how we deploy applications, how we design applications, and where we deploy applications has fundamentally changed. You know, 20 years ago, you had one choice to deploy an application and it was that server, right over there, in your data center. And today you can do it as a container, or bare-metal server, a virtual machine, on-prem or one of hundreds of data centers, public cloud data centers all over the world. And then architecturally, everything is moving from these monoliths to microservices, or much more tiny and more manageable components, and what that does to the network is fundamentally different from what's been going on in the network for the past couple of decades. It elevates the position of the network from just connectivity, to something that is fundamental to how these services talk to each other. Unlike 100 things that live inside a box and talk to each other, now you have 100 things on the network talking to each other. So think about what that does to you from an availability strategy perspective, from a security strategy perspective, from a surface area of security, from a monitoring perspective. I mean, the reason why you see, I mean walk the show floor here, so much innovation in the network, the reason for that is instead of an enterprise running 1000 applications, within the next few years each enterprise is going to be running 100,000 applications and their budget is not going up 100 times, so you need innovation, you need automation and that's where the intent-based movement comes in. >> So new opportunities are going to be created, new wealth creation, more innovation. What are you guys doing?
Take a minute to explain why you guys are here with your company. What are you contributing, what's your role in the ecosystem, what's your product differentiation? What's the story? >> Yeah, great, so we play in the application services space. If you think about the network, traditionally people have thought about it as connectivity, which is layer two, layer three, and then network services are the services that the network offers to an application, that's load balancing, it's application security, SSL offload, it's web application firewall and so on. So services that are tied to the application, that's basically what our company is about. So we have a fabric-based platform, software only, the fabric can be instantiated on bare-metal appliances, or containers, or virtual machines, all centrally managed, and it's intent-based which means it's policy-driven. So you go to a single place and you say, "please, I need load balancing capabilities for this application, I need SSL and I need to turn on my web application firewall." And no matter where the application is, in Azure, in AWS or on-prem, or a mainframe, the fabric is able to instantiate that service automatically in front of it, without the operator having to worry about where is it, what do I need to do, do I have enough capacity, none of that. >> Guru, in Chuck Robbins' keynote on Monday you talked about kind of the old way, this kind of bespoke, it was silos, it was like, well, oh, you know we have the wiring guys over here doing the physical layer two, layer three, four through seven is over there. Today it's software, up and down the stack, you know, changes a lot, maybe talk a little bit about that dynamic as to how applications, you know intent-based networking really is having, the application doesn't just use, but it's heavily involved with the network. >> So here's the single biggest thing that's driving this change: applications used to be secondary for IT in some sense, certainly infrastructure teams, and infrastructure was primary. And I had my ADCs and load balancers here and my routers and my switches and so on, and this is my infrastructure, now let's figure out how to fit the application on my infrastructure. And that world is gone. That's the old way. You can't hug your load balancers anymore, that's (laughs) if you do that today, those days are, if not gone, they're almost nearing an end. And increasingly the infrastructure is going to live for applications. The center of the world is my need as a business to roll out an application quickly, to understand how people are interacting with that application, to make changes to it in real time, and all of infrastructure is now wrapping itself around that notion. So intent-based networking, in our case, intent-based application services, is all about how can I, in an automated way, quickly deploy load balancing, application security for applications, no matter where they are, how can I monitor the applications in real time. That's really what the movement is about. >> Well, that's a great point. I'd like to just add and get your thoughts on this, and react to another concept, to add to that is that you've got all that happening, okay, that's because of the cloud and great new tech, but then you factor in that the programming models are changing too, so the perfect storm is everything that you've said, but now the expectation of the developer-- >> API. >> With open source--
>> Has to be programmable and it's like the classic, let infrastructure take care of it's business but no one's got to do all this manual work. This is a huge dynamic and I think the DevNet story this year at Cisco Live really puts an exclamation point on the fact that this has got traction. We kind of know, we see open-source but from the networking world it's a whole new, essentially greenfield opportunity. You agree with that? >> Totally, I mean you know there's in most of our largest customers, and by the way we didn't talk about our solar business side, but just to give you a quick flavor for what our customer base looks like we primarily sell to Global 2000, three of the top five banks in the US are our customers, two of the top five banks in ME are our customers, 20% of the Fortune 50 are our customers, we've replaced traditional load balancing solutions and so on. And the primary reason, the number one reason is automation. And by automation, everybody talks about automation, but by automation what our customers mean is infrastructure as API. Simple things. I want to capture all the packets going to that application and I want to do that with a single REST API, I want to talk to an IP endpoint and say here's the REST API, give me all the traffic. Can you do that in your network today? Our customers can. >> What's the alternative, if they don't use APIs? >> Oh yeah, so you've got two choices, one you walk into your data center, turn on the SPAN port take all that traffic, take it to some sort of a monitoring fabric blah, blah, blah, three days later if you're lucky you get traffic. Second approach, call AWS tell them to turn on the SPAN port, and good luck with that. (laughs) So, you know increasingly you frankly don't have much of a choice, you need infrastructure to be-- >> Scale is also a tsunami of data coming in so one time is a massive problem, that's never going to happen, so people are going to give up-- >> Number of events, number of alerts, you know it's speed. Talk about the top three trends that are going on in our customer base, speed, speed, and speed. >> Okay, you've got some great clients. Why are they going with you, and how does someone engage with you guys? What do they do? Do they just call you up and say bring in some software, do I get a box, is it software, how do I configure it, how do they onboard? How do you guys engage with your customers? >> Right, so why do they buy us? Three quick reasons, one amazing automation fabric-approach central management. Two, amazing analytics to your point about great events, we want to help our customers address this deluge of events and things that are happening in the data center and provide great insight, so that's all built in to the product. And three, much more cost effective. I mean these traditional solutions, believe it or not, that have been around for 20 years, they're not just traditional, as in legacy, they're also extremely expensive. Our competitors sell load balancers at 84% gross margins. You know how many of my customers run their businesses at 84% gross margins? Zero. So how can you afford that, right? So those are three big reasons why they buy. How they get engaged with us is they typically have a public cloud project, they'll say alright, like Adobe, "they'll say alright, we need to go to Azure, "move the applications right away." Well that's easy for the CIO to say, in practice, that's a beast, right. 
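The "infrastructure as API" example Guru gives earlier in this exchange, capturing all packets going to an application with a single REST call instead of walking into the data center to turn on a SPAN port, might look something like the snippet below. Again, the endpoint and parameters are hypothetical stand-ins, not a documented product API.

```python
import requests

CONTROLLER = "https://controller.example.com/api/v1"   # placeholder controller
TOKEN = "REPLACE_ME"

# One call: "give me the traffic for this application for the next five minutes."
capture_request = {
    "application": "payments-frontend",
    "duration_seconds": 300,
    "deliver_to": "https://collector.example.com/pcap-uploads",  # illustrative sink
}

resp = requests.post(
    f"{CONTROLLER}/traffic-captures",
    json=capture_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("capture started, id:", resp.json().get("id"))
```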
So they need to get in there, they need to figure out how am I going to meet application SLAs on Azure, how am I going to do application availability, or security, or monitor these, and they could do a Google search or something and get that connected with us. Two, we're a Cisco partner, Cisco resells us, and Cisco is everywhere. So when people approach their trusted vendor, like Cisco, and say, "Cisco, "I've got this public cloud issue, "a network modernization issue "and load balancing is a consistent thorn "in my neck, like, what do we do?" And Cisco goes, "oh we've got a great partner, "we resell their technology, I'd love "to help you understand more, and then "they pull us in, and we close." >> Yeah, that's a great point Guru, one of the things we've been talking to a lot of customers about, is how do I manage and deal with my network when I don't own a lot of the pieces of the network. And that's the story we've been hearing. Cisco talking about multi-cloud. Up on stage, Chuck Robbins brought Diane Greene out and talked a lot about Kubernetes and Istio, we know AVI Networks, I've seen your team at the KubeCon show, John was just at the Copenhagen show, I unfortunately missed that one, I'll be back at the Seattle show. Talk about what your team is doing with Kubernetes and Istio, and how does Cisco fit in to that discussion? >> Yes, we love that space. It's actually, I think at this point, after public cloud, after Azure and AWS in particular, and GCP as well, so after public cloud, it's the fastest growing part of our business today and what we've been shipping for over two years now, is an enterprise-class service mesh targeted at, not just Kubernetes, but Kubernetes, OpenShift, Mesos or Consisto, and the beautiful thing is our fabric is just a fabric it can, the same fabric in one corner of the data center could be serving a traditional bare-metal application and another corner of our data center is serving a containerized, a Kubernetes application and what we do there is, we provide both North-South load balancing capabilities, as well as, the East-West load balancing capabilities for that entire cluster. And to give you a sense for scale, our largest customers, we've got large banks and technology companies running us in production with Kubernetes, at the other, at the highest end we've got customers running eight to ten clusters of somewhere between 50 to 100 nodes each. So we're talking about 500 to 1,000 nodes running in both public cloud and on-prem Kubernetes where we are providing the distributed load balancing capabilities. >> Well that's great. So if you've been doing service mesh for two years, that's pre-Istio? How does that relate to the Istio project? >> Yes, it is, and in some sense it's still pre-Istio right, cause I love Istio, on slides (laughs) but the era of Istio is 2019 and maybe 2020. So it's going to take some time we love it because here's what happens today, this is the problem for solution providers like us, what happens is, we're forced to integrate with Kubernetes, the Kubernetes master service. At some point customers are like, "alright, so you're integrated with Kubernetes, "and this person is integrated, "and this other piece of software integrated." What Istio does is it very cleanly separates the network policy from Kubernetes to Istio. So we have to integrate only with Istio and we are doing that integration right now.
So from our perspective these are northbound orchestration systems and policy systems, once Istio solidifies, and I expect sometime next year, maybe the middle of next year, maybe late next year, we'll be ready for production and then you can continue to use us within the system. >> Yeah Guru, I'm going to have to say you're the hipster service mesh company then, right? You were doing it before it was cool. (Guru, Stu and John laugh) >> Yes and then perhaps we can move-- >> Alright so I got-- >> on to something else >> We love the Istio stuff, it's a total geek conversation but this is super important, I want to get your thoughts on this, I do agree it's definitely got some work to do but there's, it's the number one open-source project within the CNCF, so clearly there's a ton of interest. And a lot of the alpha geeks are going there, they see great, great value there. Containers, check. Containers are great. Kubernetes, check, on a good path. Istio is interesting 'cause service mesh is a concept that kind of ties networking with apps and you guys are in the middle of this. What does that mean for the network engineer out there or for the company, why should they pay attention to this service mesh concept or Istio and the role of microservices? Clearly microservices make sense if you're APIing everything, you want to have more services developing. But what's going on under the hood? Why is Istio getting so much traction in your opinion? >> It's a very simple reason John. So this was my world as a network engineer. I had a few of these applications I would look at them, they're like my little puppy, and I would configure my entire network to support these applications. The world of microservices, and really this new world that we live in, I don't have one of these, I have 100 of these per application, so I have 100,000 of these floating around. I can't do it without using policy. Policy is at the root of all this, intent-based networking, declarative policies, Istio, declarative policies, our platform, declarative policies. So the entire world of networking is moving away from, let me go to one of my 50 switches and configure the CLI, to let me define a set of ten policies that we will then apply to 100,000 applications, 'cause frankly, there's only ten different things I want to do. I don't want to configure 100,000 endpoints. I just want to do ten things, that's something I can do as a human and that's really what's at the root of this. So it's really intent-based networking sort of at different layers. >> So there's been conversation, we've been obviously talking about this on theCUBE since day one here about, we believe the network engineer, the Cisco customer, if you will, or people getting all of these certifications, they're going to be so much more powerful because there's been a conversation in other press and media around the death of the network engineer (Guru laughs) We should, look they're the mainframe guy-- >> Which iteration of that are we on? 'Cause I hear that every five years. >> They better learn how to code so they don't lose their job. When actually, the network is getting more and more powerful, so what you're talking about, we think connects and validates that the network engineer, the one doing Cyber Ops, data center, service provider, industrial IOT, CCNA, CCIEs, these guys are going to take to it like a fish to water when they hear words like policy, dynamic provisioning, these are-- >> Automation, APIs. >> These are concepts they're used to.
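The "ten policies applied to 100,000 applications" idea Guru describes is really just label-driven policy matching. A toy sketch, with made-up policy names and labels, to show why the operator's work stays constant as the number of endpoints grows:

```python
# A handful of declarative policies, keyed by the label an application carries.
POLICIES = {
    "public-web":   {"lb": "round_robin", "tls": True,  "waf": "enforce"},
    "internal-api": {"lb": "least_conn",  "tls": True,  "waf": "detect"},
    "batch":        {"lb": "none",        "tls": False, "waf": "off"},
}

def policy_for(app):
    """Pick an application's policy from its label, not from per-box CLI work."""
    return POLICIES[app["tier"]]

# 100,000 applications, a handful of policies: the human maintains POLICIES, nothing else.
apps = [{"name": f"app-{i}", "tier": ["public-web", "internal-api", "batch"][i % 3]}
        for i in range(100_000)]

assignments = {app["name"]: policy_for(app) for app in apps}
print(assignments["app-0"], len(assignments))
```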
What are your thoughts on that, because this is kind of a new emerging connect point that DevNet's kind of pioneered with DevNet Create and DevNet proper, what are your thoughts? >> Yeah, listen I have tremendous empathy for our customer base, I used to be a customer on the other side a couple of decades ago, and there's this sort of fashion in Silicon Valley to come up with new innovations and then say, "oh, all those people, they're going to be left behind "and my technology is going to be awesome." I don't subscribe to that, the hunger I see in networking teams to continually add value is unparalleled today. The hunger I see for automation, for learning REST APIs, SDKs, Python, Ansible, interacting with DevNet is unparalleled. And in some sense if that wasn't there, why would you have intent-based networking, why would a vendor like Cisco, a vendor like AVI emerge? Why would we build these amazing things if there wasn't a hunger for this? So, I think the network is going to be extremely important and most of the networking teams today will make that transition. I'm not going to discount the fact that there will be some who will want to hug their load balancers for the next 10 years, and I have bad news for them, there was a time when you could ride it out for five or 10 years before the next tech showed up. Those days are gone, man. The new tech shows up today and then you're like, "no, not going to happen for about 12 or 18 months." And then boom! Everything just changes. >> So what's your advice to those networking engineers out there, those folks that are going to be the power players in this new configuration? What should they do? >> Engage. >> Engage, be the person in the organization that brings in a new technology, never in my entire career, two decades now, have I seen individuals in networking teams at banks, at technology companies, at retailers, at grocery store companies, at radiology centers, you know, go out there and ask questions is there a better load balancer, is there a better switching solution, is there a better X, Y, Z, is there a better way to monitor my apps, and then pull in that, play around with that, call the vendor. You know, traditionally it never used to happen. So I'm excited about it. >> Yeah, and it's awesome it's great. It's a great opportunity to be, the timing is perfect. Alright, final question, actually two questions. What's up next for you guys at AVI Networks on the road map, what's coming next? And then your take on the show, what's the vibe, what's it like for the folks who didn't make it to Orlando, what'd they miss? >> So our vision is double down on multi-cloud, it's so real, all our customers, all, almost 100%, are both on-prem and in AWS or Azure and we're continuing to invest in making that easier through the introduction of several sort of initiatives on the platform including SaaS, including increased investments in security. So that's on our vision side. Invest in our partnership with Cisco, as I said Cisco is a reseller and now an investor in our last round of funding, so we're pretty excited about that. And they're excited about being close to a company that frankly, is seeing the kind of traction we're seeing. So that's what we're doing over the next three to five years. Show floor, I've got to say 80% of it sounds like, give me your data and I will provide you insights.
And that's trivializing that a little bit but I think it goes back to the point, John, you made earlier, where things are moving so fast, so much is changing that there's just an increased excitement around technologies which help you automate, which help you provide better insight, which help you just manage this. >> And then final question, one more, it just popped into my head, got to get out there. Programmability, obviously we believe it is happening, APIs are happening, microservices are right around the corner, you guys are first-generation service mesh in production. What are some of those new apps we're going to see? If the programmable network is first-generation, like an iPhone was for telephony, what kinds of network apps, app-networking apps, are we going to see in the new paradigm that DevNet's pioneering? >> So, actually two kinds of apps I'm already seeing in my customer base right now. The first one is self-service and provisioning apps. So as soon as the network becomes programmable the first thing networking teams do, this is a little bit counter intuitive, remember the old world where networking teams were like, "my network, don't touch it." The first thing they're doing now is, they're saying "oh, it's programmable? "Let me build a sandbox for you quickly. "You do it, don't call me. "Don't call me. "Just do your thing, if you hit "the bounds of the sandbox, then "call me and we'll talk about it." So, self-service automation provisioning is the first kind of applications I'm seeing emerging. And the second one is monitoring. You know the age-old problem, I don't know what's going on. So people are building these amazing solutions, I mean our, I thought people would be logging into our CLI or UI and getting insights. No, they're taking my data, right now I counted about 15 upstream solutions from Tetration, to Splunk, to other SIEMs, Datadog, AppDynamics, New Relic, they're exporting this wherever they can. And so those are the two classes. Self-service automation and monitoring. >> And this is all underpinning value for security monitoring, and scripts are right around the corner. Anyway thanks for coming. Okay, AVI Networks' VP of Product here inside theCUBE day three, it's theCUBE coverage here. I'm John Furrier with Stu Miniman at Cisco Live in Orlando. Stay with us, we'll be right back. (techno music)
John Allessio & Margaret Dawson, Red Hat | OpenStack Summit 2018
(ambient Music) >> Announcer: Live from Vancouver, Canada, it's theCUBE. Covering OpenStack Summit North America 2018. Brought to you by Red Hat, The OpenStack Foundation and its ecosystem partners. >> Welcome back, this is theCUBE's coverage of OpenStack Summit 2018 in Vancouver. I'm Stu Miniman, my cohost for the week is John Troyer, happy to welcome back to the program two CUBE alumni, we have Margaret Dawson and John Alessio. Margaret is the vice-president of Portfolio Product Marketing and John is the vice-president of Global Services. Thanks so much for joining us. >> Thank you. >> Thanks for having us. >> Good to be here. >> Alright so, John has gotten the week and a half now of the red hat greatness of being at summit last week, I unfortunately missed Summit, first time in five years I hadn't been at the show, did watch some of the interviews, caught up on it, and of course we talked to a lot of your team but, Margaret, let's start with you >> Margaret: Okay. >> One of the things we were looking at was, really, it's not just a maturation of OpenStack, but it's beyond where we were, how it fits into the greater picture, something we've been observing is when you think about open sourced projects, it's not one massive stack that you just deploy, it's you take what you need, it kind of gets embedded all over the place, and help us frame for us where we are today. >> Wow, that's a big question. So I think there's a couple things, I mean, in talking to customers, I think there's a couple trends that are happening. One is one you've probably talked about a lot and we probably covered at the Red Hat Summit which is just this overall digital transformation, digital leadership, whatever you want to call it, digital disruption tends to be a thing, and open sources definitely playing, really, the critical role of that, right, you will not be able to innovate and disrupt or even manage a disruption if you're not able to get to those technologies and innovations quickly and be able to adapt to it and have it work with other things. So the need for openness, for open APIs, for open technologies, inner-operability allows us to move faster and have that innovation and agility that every enterprise and organization needs world wide. And tied to that is kind of this overall hybrid cloud, so it's not just, OpenStack is a part of a much bigger kind of solution or goal that enterprises have in order to win and transform and be a digital leader. >> Margaret, I love that. Digital transformation, absolutely something we hear time and again from customers. >> Margaret: Yup. >> John, I've got a confession to make. I'm an infrastructure person and sometimes we're always like, why, come on, we spend all our time talking about how all the widgets and doo-dads and things-- >> Margaret: Blinky lights. >> Blinky lights, up on stage we have the-- >> He missed the blinking lights >> He did miss the blinking light. >> They had a similar stack up on stage yesterday. >> Oh, that's right. >> Same fans you could hear in the back of the room. But the whole goal of infrastructure always, of course, is to run the application, the whole reason for applications is to run and transform and do-- >> John: Serve the business >> Yeah, so that's where I'm going with this is we're talking more about not only that foundational layer of OpenStack but everything that goes with it and on it so maybe you could talk about the services-- >> Sure. So I think, Stu, that's exactly what we're seeing. 
So if you think about the last year and what we're seeing with services and projects here on OpenStack, I think the first thing to talk about is the fact that it's been growing quite a bit, in fact, from a 2017 versus 2018 perspective, our number of OpenStack projects has increased 36% year on year globally. So we're seeing a lot of demand, but we're seeing the projects be a lot more comprehensive. So these are OpenStack projects, but they're OpenStack with OpenShift, with CloudForms, with Ceph, as an example, and this combination is, really, a very very powerful combination. In fact, it's been so powerful that we started to see some common patterns of customers building a hybrid cloud solution, using OpenStack as their kind of private cloud infrastructure, but then using OpenShift as their way to kind of deploy applications in containers in that hybrid way, that we created a whole solution, which we announced two weeks ago, when John was at our Red Hat Summit, called Containers on Cloud. And that's taking all of our best practices around combining these products together in a very comprehensive, programmatic approach to deploying those solutions together. >> And I think it's really important, I mean, as you know, I think you and I met when we were both in networking, so coming from that infrastructure background but we really all need to talk about the workload down, starting with the application, starting with the business goal, and then how the infrastructure is almost becoming a services-based abstraction layer where you just need it to be always there. >> John: Yup. >> And whether it's public cloud or private cloud or traditional infrastructure, what developers in the business want is that agility and flexibility and containers provide that. There's other kind of architectural fabrics that allow that consistency and that's when it gets really exciting. >> One thing that's really interesting to me this week at OpenStack, as we've drilled into different customers, and talking to different people, even at lunch, is one, it's real. Everyone I've talked to, stuff in deployment, it went quickly, it's rock solid, it's powering, as we know, actually a lot of that is technical infrastructure that's powering a lot of the world's infrastructure at this point. >> That's right. >> The other thing that was interesting to me is some folks I talked to were saying, "Well, actually we have enough knowledge "that we're actually doing a lot of it ourselves, "we're going upstream." However, so that's great, and that's right for some people, but what I've kind of been interested in, just coming from Red Hat Summit, is both the portfolio, the breadth of the stack, and then all the different offerings that Red Hat, you know, it's not RHEL anymore, it's not just Linux anymore, there's everything that's been built up and around and on top for orchestration and management, and then also the training, the services, the support, and that sort of thing, and I was wondering, that's kind of a two-part question, but maybe you all could tackle that. What does Red Hat bring to the table then? >> So, let me just start with, again, just to kind of position what we do as global services, our number one priority is customer success with Red Hat technology, that's the first and foremost thing we do and second is really around building expertise in the ecosystem so our customers have choice and where to go to get that expertise.
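The pattern John describes, OpenStack as the private cloud underneath with OpenShift (Kubernetes) as the consistent application layer on top of it and of public cloud, is what lets one deployment definition land in both places. A rough sketch using the standard Kubernetes Python client; the kubeconfig context names are assumptions for the example, and OpenShift-specific details such as routes and security context constraints are left out.

```python
from kubernetes import client, config

# A single containerized workload, defined once.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "bookings-api"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "bookings-api"}},
        "template": {
            "metadata": {"labels": {"app": "bookings-api"}},
            "spec": {"containers": [{"name": "bookings-api",
                                     "image": "registry.example.com/bookings-api:1.4"}]},
        },
    },
}

# Two clusters: one OpenShift on the private OpenStack cloud, one in public cloud.
# The context names below are illustrative entries from a local kubeconfig.
for context in ("openshift-on-openstack", "openshift-public-cloud"):
    apps_api = client.AppsV1Api(config.new_client_from_config(context=context))
    apps_api.create_namespaced_deployment(namespace="production", body=deployment)
    print(f"deployed bookings-api to {context}")
```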
So, if you start to look at kind of what's been going on as it relates to OpenStack, and, again, many customers are using Upstream bits, but many customers are using Red Hat bits, we see that and we look at the number of people who are getting trained around our technology. So over the last three years, we've trained, through our fee-based programs, 55,000 people on our OpenStack portfolio and in fact from 2017 to 2018 that was up 50% year on year and so the momentum is super super strong. So, that's the first point. The second is it's not just our customers. So part of my remit is, yes, to run consulting and, yes, to drive customer enablement and training, but it's also to build an ecosystem through our business partners. Our business partners use a program we call OPEN, Online Partner Enablement Network, which actually will just be celebrating five years just like OpenStack will, we'll be celebrating five years for OPEN. And our business partner accreditations on OpenStack specifically are up 49% year on year. So we're seeing the momentum in our regional systems integrators, our global systems integrators, our partners at large, building their solutions and capabilities around OpenStack, which I think is fantastic. >> No and it helps a lot with the verticalization of that, right, 'cause every industry has slightly different things they need. The thing that I would add to that, in terms of do-it-yourself community versus a distro that's supported from someone like Red Hat, is it really comes down to core competency. And so even though OpenStack has become vastly simplified from a day one, day two, ongoing management, it is still a complex project. I mean that's the power of it, it can be highly customizable, right, it is an incredibly powerful infrastructure capability and so for most people their core competency is not that, and they need that support at least initially to get it going. What we have done is a couple things. I've actually talked to customers a lot about doing that training earlier and it's for a couple reasons, one is so that they actually have the people in house that have that competency but, two, you're giving infrastructure folks a chance to be part of that future cool stuff, right? I mean, OpenStack's written in Python and there's other languages that are newer and sexier, I guess, but it's still kind of moving them towards that future and for a lot of guys that have been in the data center and the ops world for a long time, they're looking out there at developers and going, I'm not the cool kid anymore, right? So OpenStack actually is a little bit of a window, not just to help companies go through that digital transformation, but actually help your ops personnel get a taste of that future and be part of that transformation instead of being stuck in just mainframe land or whatever, so training them early in the process is a really powerful way to do a lot of things. You know, skillset, retention, as well as then you can manage more of that yourself. >> And then all the way up the stack, right? I mean, we're talking about containers, and then there's containers but then there's container data storage, container data networking. I mean, you've got the rest of the pieces in that, in OpenShift, in the rest. >> Absolutely. >> That is correct. >> And I think, John, you were at Red Hat Summit, we had a number of different innovation award winners.
So I think one good example of kind of this kind of transformation from a digital transformation perspective, but also kind of leveraging a lot of what our stack has to offer is Cathay Pacific. And so we talked about Cathay, they were one of our innovation award winners and what their challenge really was is how do they create a new modern infrastructure that gave them more flexibility so they could be more responsive to their customers. >> Yeah. >> In the airline industry. And so what they were really looking for was really, truly a hybrid cloud solution. They wanted to be able to have some things run in their infrastructure, have some things run in the public cloud, and we worked with them over the last, little over a year now, Red Hat consulting, Red Hat training, the Red Hat engineering team, in really building a solution that leveraged OpenStack, yes, but also a number of other capabilities in the Red Hat portfolio, OpenShift, so they can deploy these applications, containerized applications now both to the public cloud as well as to the private cloud, but also automation through Ansible, which we're hearing a lot about Ansible and products like Ansible here at the conference-- >> Well the OpenStack and Ansible communities are starting to really work well together, just like Kubernetes, you've got a lot of this collaboration happening at the project level not to mention when we actually productize it and take it to customers. >> Yeah, so it's been super super powerful and I think it's a good one where it really hit on what Margaret was saying, which was giving the guys in infrastructure an opportunity to be a part of this huge transformation that Cathay went through, 'cause they were a very very key part of it. >> Yeah. Well, I think we're seeing that also with the open innovation labs. So this is something, which is really an innovation incubation process, it's agile, scrum, whatever, and in those we're not just talking to the developers, we're actually combining developers, functional lines of business leaders, infrastructure, architects, who all come together in a very typical six week kind of agile methodology and what comes out of that, I don't know, I've seen it a couple times, it's magical is all I can say, but having those different perspectives and having those different people work together to innovate is so powerful and they all feel like they're moving that forward and you come out with pilots, and we've seen things where they come out with two apps at the end of six weeks or eight weeks, it's just incredible when they're all focused on that and you start to understand those different perspectives and to me that's open source culture, right? It's awesome. >> And, Margaret, I'd love to hear your perspective also on that hybrid cloud discussion because so many people look at OpenStack and be like, oh, that's private cloud. >> Margaret: Right. >> And, of course, every customer we talk to, they have a cloud strategy. And they're doing lots of SaaS, they've got public cloud, multiple, Red Hat, I know you play across all of them, big announcement with Microsoft last week, last year was Amazon big partnerships with, so is Kubernetes the story, or is Kubernetes a piece of the story, how do all these play together for customers?
>> I think Kubernetes is one and so, especially when you look at the broader architectural level, OpenStack becomes obviously the private cloud and enables them to start to do things that are more cloud-native even in their own data center, or if it's hosted or management or more traditional infrastructure, but it really has to be fluid. And a lot of customers initially were saying that their strategy was cloud first, and they would say, "Oh, we're going to put "everything in the public cloud." And then you actually start going through the workloads, you start going through the cost, you start going through the data privacy, or whatever the criteria capabilities are, and that's just not practical, frankly. And so this hybrid reality with private cloud, traditional, and public is going to be the reality for a very very long time, if not forever. There's always going to be things that you want to have better control of. And so Kubernetes at the orchestration layer becomes really critical to be able to have that agility across all those environments, but you have other fabrics like that in your architecture too, we talked about Ansible, it allows you to have common automation and do those play books that you can use across all those different infrastructure, KVM, what's your virtualization fabric, and can KVM take you from traditional virtualization all through public cloud? The answer is yes. So we're going to see increasingly these kind of layers of the overall architecture that allows you to have that flexibility, that management that's still the consistency, which is what you need to keep your policies the same, your access controls, you security, your compliance, and your sanity, whereas before it was kind of Ad Hoc. People would be like, oh, we're just going to put this here, go to public cloud. We're going to do this here, and now people are finding standardizing on things like even Red Hat Enterprise Linux, that's my OS layer, and that allows me to easily do Linux containers in a secure way, et cetera, et cetera. So, doing hybrid cloud means both the agility but you got to have some consistency in order to have the security and control that you need. So it's a little bit different than what we were talking about a few years ago, even. >> And I think one of the things that we've learned in the services world is that we started this idea about 18 months ago, we called these journey adoption programs, which were really the fact that some of these transformations are big, they're not about a single project that's going to last four to six weeks, it's a journey that the customer's going to go on and so when we talk about hybrid cloud, we've actually created this adoption program which can really start with the customer in this whole discovery phase, really, what are you trying to accomplish from a business perspective then take them into a design phase, take them into a deployment phase, take them into an enablement phase, and then take them into a sustainment phase. And there's a number of different services that we'll do across consulting, training, even within Marco Bill Peters Organization, which is our customer experience and engagement organization, around what role a technical account manager can play and really help our customer in the operational phases. And so we've learned this from some of the very large deployments, like Verizon, where we've seen some very-- >> And it's cyclical, right? You can do that many times. >> We do. In fact, you absolutely do. 
And so we've created now a program, specifically, around hybrid cloud adoption to try and demystify it. >> Yeah. >> Because it is complex. >> Well, and the reality is, there's somewhere around 30% of organizations still do not actually have a clear cloud strategy. And we see that in our own research, our own experiences, but industry analysts come up with the exact same number. >> And Margaret, by the way, the other 70%, the ink still pretty-- >> Yeah. >> Still wet! (laughing) >> Yes, it is. I'll tell you, I love saying cloud first to people because they kind of giggle. It's like, yeah, that's our strategy but we know we don't really know what that means. >> Which cloud? >> Exactly. >> Exactly. >> All the clouds. >> Exactly. >> Alright, well Margaret and John, want to give you a final word, key takeaways you want to have or anything new to the show that you want to point out? >> I would just say we are still in early days. I think sometimes we forget that we, both in the open source communities, in the industry for a long time, tend to be 10 years ahead of where most people are and so when you hear jokes about, oh, is OpenStack still viable or is everything doing this, it's like right now we only have a very small percentage of actual enterprise workloads in the cloud and so we need to just now get to the point where we're all getting mature in this and really start to help our customers and our partners and our communities take this to the next level and work on inter-operability, and ease of use, and management. We're so mature now in technology, now let's put the polish on it, so that the consumption and the utilization can really go to the next level. >> Yeah, and I'll play off what Margaret said. I think it's very very key. When I look at where we've had the biggest success, as defined by, in that discovery phase, the customer lays out for us, here's what our business objectives were, did we achieve those business objectives, it's all about figuring out how we can create the solution and integrate into their environment today. So Margaret said I think very very well which is we have to integrate into these other solutions and every one of these big customer deployments has some Red Hat software, but it also has some other software that we're integrating into because customers have investments. So it's not about rip and replace, it's about integrate, it's about leverage, it's about time to market, and that's what most of the customers I've talked to, they're very worried about time to value, and so that's what we're trying to focus in, I think as a whole company, around Red Hat. >> Margaret: Agree. >> Absolutely. Summed it up very well. John Alessio, Margaret Dawson, thanks so much for joining us again. >> Thanks again. >> For John Troyer, I'm Stu Miniman, watch more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE.
Roland Cabana, Vault Systems | OpenStack Summit 2018
>> Announcer: Live from Vancouver, Canada it's theCUBE, covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack foundation, and its Ecosystem partners. >> Welcome back, I'm Stu Miniman and my cohost John Troyer and you're watching theCUBE's coverage of OpenStack Summit 2018 here in Vancouver. Happy to welcome first-time guest Roland Cabana who is a DevOps Manager at Vault Systems out of Australia, but you come from a little bit more local. Thanks for joining us Roland. >> Thank you, thanks for having me. Yes, I'm actually born and raised in Vancouver, I moved to Australia a couple years ago. I realized the potential in Australian cloud providers, and I've been there ever since. >> Alright, so one of the big things we talk about here at OpenStack of course is, you know, do people really build clouds with this stuff, where does it fit, how is it doing, so a nice lead-in to what does Vault Systems do for the people who aren't aware. >> Definitely, so yes, we do build cloud, a cloud, or many clouds, actually. And Vault Systems provides cloud services infrastructure service to Australian Government. We do that because we are a certified cloud. We are certified to handle unclassified DLM data, and protected data. And what that means is the sensitive information that is gathered for the Australian citizens, and anything to do with big user-space data is actually secured with certain controls set up by the Australian Government. The Australian Government body around this is called ASD, the Australian Signals Directorate, and they release a document called the ISM. And this document actually outlines 1,088 plus controls that dictate how a cloud should operate, how data should be handled inside of Australia. >> Just to step back for a second, I took a quick look at your website, it's not like you're listed as the government OpenStack cloud there. (Roland laughs) Could you give us, where does OpenStack fit into the overall discussion of the identity of the company, what your ultimate end-users think about how they're doing, help us kind of understand where this fits. >> Yeah, for sure, and I mean the journey started long ago when we, actually our CEO, Rupert Taylor-Price, set out to handle a lot of government information, and tried to find this cloud provider that could handle it in the prescribed way that the Australian Signals Directorate needed to handle. So, he went to different vendors, different cloud platforms, and found out that you couldn't actually meet all the controls in this document using a proprietary cloud or using a proprietary platform to plot out your bare-metal hardware. So, eventually he found OpenStack and saw that there was a great opportunity to massage the code and change it, so that it would comply 100% to the Australian Signals Directorate. >> Alright, so the keynote this morning were talking about people that build, people that operate, you've got DevOps in your title, tell us a little about your role in working with OpenStack, specifically, in broader scope of your-- >> For sure, for sure, so in Vault Systems I'm the DevOps Manager, and so what I do, we run through a lot of tests in terms of our infrastructure. So, complying to those controls I had mentioned earlier, going through the rigmarole of making sure that all the different services that are provided on our platform comply to those specific standards, the specific use cases. So, as a DevOps Manger, I handle a lot of the pipelining in terms of where the code goes. 
I handle a lot of the logistics and operations. And so it actually extends beyond just operation and development, it actually extends into our policies. And so marrying all that stuff together is pretty much my role day-to-day. I have a leg in the infrastructure team with the engineering and I also have a leg in with sort of the solutions architects and how they get feedback from different customers in terms of what we need and how would we architect that so it's safe and secure for government. >> Roland, so since one of the parts of your remit is compliance, would you say that you're DevSecOps? Do you like that one or not? >> Well I guess there's a few more buzzwords, and there's a few more roles I can throw in there but yeah, I guess yes. DevSecOps, there's a strong security posture that Vault holds, and we hold it to a higher standard than a lot of the other incumbents or a lot of platform providers, because we are actually very sensitive about how we handle this information for government. So, security's a big portion of it, and I think the company culture internally is actually centered around how we handle the security. A good example of this is, you know, internally we actually have controls about printing, you know, most modern companies today, they print pages, and you know it's an eco thing. It's an eco thing for us too, but at the same time there are controls around printed documents, and how sensitive those things are. And so, our position in the company is if that control exists because Australian Government decides that that's a sensitive matter, let's adopt that in our entire internal ecosystem. >> There was a lot of talk this morning at the keynote both about upgrades, and I'm blanking on the name of the new feature, but also about Zuul and about upgrading OpenStack. You guys are a full Upstream, OpenStack expert cloud provider. How do you deal with upgrades, and what do you think the state of the OpenStack community is in terms of kind of upgrades, and maintenance, and day two kind of stuff? >> Well I'll tell you the truth, the upgrade path for OpenStack is actually quite difficult. I mean, there's a lot of moving parts, a lot of components that you have to be very specific in terms of how you upgrade to the next level. If you're not keeping in step with the next releases, you may fall behind and you can't upgrade, you know, Keystone from Liberty all the way up to Ocata, right? You're basically stuck there. And so what we do is we try to figure out what the government needs, what are the features that are required. And, you know, it's also a conversation piece with government, because we don't have certain features in this particular release of OpenStack, it doesn't mean we're not going to support it. We're not going to move to the next version just because it's available, right? There's a lot of security involved in infusing our controls inside our distribution of OpenStack. I guess you can call it a distribution, on our build of OpenStack. But it's all based on a conversation that we start with the government. So, you know, if they need VGPUs for some reason, right, with the Queens release that's coming out, that's a conversation we're starting. And we will build into that functionality as we need it. >> So, does that mean that you have different entities with different versions, and if so, how do you manage all of that? >> Well, okay, so yes that's true.
We do have different versions where we have a Liberty release, and we have an Ocata release, which is predominant in our infrastructure. And that's only because we started with the inception of the Liberty release before our certification process. A lot of the things that we work with government for is how do they progress through this cloud maturity model. And, you know, the forklift and shift is actually a problem when you're talking about releases. But when you're talking about containerization, you're talking about Agile Methodologies and things like that, it's less of a reliance on the version because you now have the ability to respawn that same application, migrate the data, and have everything live as you progress through different cloud platforms. And so, as OpenStack matures, this whole idea of the fast forward idea of getting to the next release, because now they have an integration step, or they have a path to the next version even though you're two or three versions behind, because let's face it, most operators will not go to the latest and greatest, because there's a lot of issues you're going to face there. I mean, not that the software is bad, it's just that early adopters will come with early adopter problems. And, you know, you need that userbase. You need those forum conversations to be able to be safe and secure about, you know, whether or not you can handle those kinds of things. And there's no need for our particular users' user space to have those latest and greatest things unless there is an actual request. >> Roland, you are an IaaS provider. How are you handling containers, or requests for containers from your customers? >> Yes, containers is a big topic. There's a lot of maturity happening right now with government, in terms of what a container is, for example, what is orchestration with containers, how does my Legacy application forklift and shift to a container? And so, we're handling it in stages, right, because we're working with government in their maturity. We don't do container services on the platform, but what we do is we open-source a lot of code that allows people to deploy, let's say a Terraform file, that creates a Docker host, you know, and we give them examples. A good segue into what we've just launched last week was our Vault Academy, which we are now training 3,000 government public servants on new cloud technologies. We're not talking about how does an OS work, we're talking about infrastructure as code, we're talking about Kubernetes. We're talking about all these cool, fun things, all the way up to function as a service, right? And those kinds of capabilities are what's going to propel government in Australia moving forward in the future. >> You hit on one of my hot buttons here. So functions as a service, do you have serverless deployed in your environment, or is it an education at this point? >> It's an education at this point. Right now we have customers who would like to have that available as a native service in our cloud, but what we do is we concentrate on the controls and the infrastructure as a service platform first and foremost, just to make sure that it's secure and compliant. Everyone has the ability to deploy functions as a service on their platform, or on their accounts, or on their tenancies, and have that available to them through a different set of APIs. >> Great. There's a whole bunch of open-source versions out there. Is that what they're doing?
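Roland mentions above that Vault open-sources examples, say a Terraform file that creates a Docker host, so agencies can self-serve on the platform. An equivalent sketch in Python, using real openstacksdk calls but with an invented cloud name, image, flavor, and network, would look roughly like this:

```python
import openstack

# "vault-demo" would be an entry in the operator's clouds.yaml -- illustrative only.
conn = openstack.connect(cloud="vault-demo")

# cloud-init user data that turns a plain Ubuntu instance into a Docker host.
userdata = """#cloud-config
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
"""

server = conn.create_server(
    name="docker-host-01",
    image="ubuntu-18.04",        # image and flavor names are placeholders
    flavor="m1.medium",
    network="agency-tenant-net", # assumed tenant network
    userdata=userdata,
    wait=True,
)
print(server.name, server.status)
```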
Do you have any preference toward the OpenWhisk, or FN, or you know, Fission, all the different versions that are out there? >> I guess, you know, you can sort of like, you know, pick your racehorse in that regard. Because it's still early days, and I think open to us is pretty much what I've been looking at recently, and it's just a discovery stage at this point. There are more mature customers who are coming in, some partners who are championing different technologies, so the great thing is that we can make sure our platform is secure and they can build on top of it. >> So you brought up security again, one of the areas I wanted to poke at a little bit is your network. So, it being an IaaS provider, networking's critical, what are you doing from a networking standpoint is micro-segmentation part of your environment? >> Definitely. So natively to build in our cloud, the functions that we build in our cloud are all around security, obviously. Micro-segmentation's a big part of that, training people in terms of how micro-segmentation works from a forklift and shift perspective. And the network connectivity we have with the government is also a part of this whole model, right? And so, we use technologies like Mellanox, 400G fabric. We're BGP internally, so we're routing through the host, or routing to the host, and we have this... Well so in Australia there's this, there's service from the Department of Finance, they create this idea of an ICON network. And what it is, is actually a direct media fiber from the department directly to us. And that means, directly to the edge of our cloud and pipes right through into their tenancy. So essentially what happens is, this is true, true hybrid cloud. I'm not talking about going through gateways and stuff, I'm talking about I spin up an instance in the Vault cloud, and I can ping it from my desktop in my agency. Low latency, submillisecond direct fiber link, up to 100g. >> Do you have certain programmability you're doing in your network? I know lots of service providers, they want to play and get in there, they're using, you know, new operating models. >> Yes, I mean, we're using the... I draw a blank. There's a lot of technologies we're using for network, and the Cumulus networking OS is what we're using. That allows us to bring it into our automation team, and actually use more of a DevOps tool to sort of create the deployment from a code perspective instead of having a lot of engineers hardcoding things right on the actual production systems. Which allows us to gate a lot of the changes, which is part of the security posture as well. So, we were doing a lot of network offloading on the ConnectX-5 cards in the data center, we're using Cumulus Networks for bridging, we're working with Neutron to make sure that we have Neutron routers and making sure that that's secure and it's code reviewed. And, you know, there's a lot of moving parts there as well, and I think from a security standpoint and from a network functionality standpoint, we've come to a happy place in terms of providing the fastest network possible, and also the most secure and safe network as possible. >> Roland, you're working directly with the Upstream OpenStack projects, and it sounds like some others as well. You're not working with a vendor who's packaging it for you or supporting it. So that's a lot of responsibility on you and your team, I'm kind of curious how you work with the OpenStack community, and how you've seen the OpenStack community develop over the years.
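Micro-segmentation of the kind Roland describes is commonly expressed in an OpenStack cloud as tightly scoped Neutron security groups, where only the tier that needs to reach a workload gets a rule. A minimal sketch with openstacksdk; the cloud entry, group name, and the CIDR for the load-balancer tier are assumptions for the example:

```python
import openstack

conn = openstack.connect(cloud="vault-demo")   # illustrative clouds.yaml entry

# A security group for the web tier of one application.
sg = conn.network.create_security_group(
    name="payments-web-sg",
    description="Web tier: only the LB subnet may connect, and only on 443",
)

# Single ingress rule: HTTPS from the load-balancer subnet, nothing else.
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ether_type="IPv4",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="10.10.20.0/24",   # assumed load-balancer tier subnet
)
print("created", sg.name, sg.id)
```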
>> Yeah, so I mean we have a lot of talented people in our company who actually OpenStack as a passion, right? This is what they do, this is what they love. They've come from different companies who worked in OpenStack and have contributed a lot actually, to the community. And actually that segues into how we operate inside culturally in our company. Because if we do work with Upstream code, and it doesn't have anything to do with the security compliance of the Australian Signals Directorate in general, we'd like to Upstream that as much as possible and contribute back the code where it seems fit. Obviously, there's vendor mixes and things we have internally, and that's with the Mellanox and Cumulus stuff, but anything else beyond that is usually contributed up. Our team's actually very supportive of each other, we have network specialists, we have storage specialists. And it's a culture of learning, so there's a lot of synchronizations, a lot of synergies inside the company. And I think that's part to do with the people who make up Vault Systems, and that whole camaraderie is actually propagated through our technology as well. >> One of the big themes of the show this year has been broadening out of what's happening. We talked a little bit about containers already, Edge Computing is a big topic here. Either Edge, or some other areas, what are you looking for next from this ecosystem, or new areas that Vault is looking at poking at? >> Well, I mean, a lot of the exciting things for me personally, I guess, I can't talk to Vault in general, but, 'cause there's a lot of engineers who have their own opinions of what they like to see, but with the Queens release with the VGPUs, something I'd like, that all's great, a long-term release cycle with the OpenStack foundation would be great, or the OpenStack platform would be great. And that's just to keep in step with the next releases to make sure that we have the continuity, even though we're missing one release, there's a jump point. >> Can you actually put a point on that, what that means for you. We talked to Mark Collier a little bit about it this morning but what you're looking and why that's important. >> Well, it comes down to user acceptance, right? So, I mean, let's say you have a new feature or a new project that's integrated through OpenStack. And, you know, some people find out that there's these new functions that are available. There's a lot of testing behind-the-scenes that has to happen before that can be vetted and exposed as part of our infrastructure as a service platform. And so, by the time that you get to the point where you have all the checks and balances, and marrying that next to the Australian controls that we have it's one year, two years, or you know, however it might be. And you know by that time we're at the night of the release and so, you know, you do all that work, you want to make sure that you're not doing that work and refactoring it for the next release when you're ready to go live. And so, having that long-term release is actually what I'm really keen about. Having that point of, that jump point to the latest and greatest. >> Well Roland, I think that's a great point. You know, it used to be we were on the 18 month cycle, OpenStack was more like a six month cycle, so I absolutely understand why this is important that I don't want to be tied to a release when I want to get a new function. >> John: That's right. 
>> Roland Cabana, thank you for the insight into Vault Systems and congrats on all the progress you have made. So for John Troyer, I'm Stu Miniman. Back here with lots more coverage from the OpenStack Summit 2018 in Vancouver, thanks for watching theCUBE. (upbeat music)
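To make the network approach Roland describes a bit more concrete, with segmentation policy living in code, reviewed and gated before it ever reaches production, here is a minimal, hypothetical sketch using the OpenStack SDK against a Neutron-backed cloud. The cloud name, security group, ports, and address ranges are all invented for illustration; this is not Vault Systems' actual tooling.

```python
# A minimal sketch, not Vault Systems' real automation: a micro-segmentation
# rule set defined as code, then applied through the OpenStack SDK.
import openstack

# "agency-cloud" would be an entry in clouds.yaml; purely illustrative.
conn = openstack.connect(cloud="agency-cloud")

# Desired state lives in version control, not on switches or hosts.
WEB_TIER_RULES = [
    {"direction": "ingress", "protocol": "tcp", "port": 443,
     "remote_ip_prefix": "10.20.0.0/16"},   # hypothetical agency range
    {"direction": "ingress", "protocol": "tcp", "port": 22,
     "remote_ip_prefix": "10.20.99.0/24"},  # hypothetical admin bastion
]

def apply_web_tier_policy():
    # Create the security group, then add one rule per declared entry.
    sg = conn.network.create_security_group(
        name="web-tier",
        description="Segmentation policy reviewed and gated in code")
    for rule in WEB_TIER_RULES:
        conn.network.create_security_group_rule(
            security_group_id=sg.id,
            direction=rule["direction"],
            protocol=rule["protocol"],
            port_range_min=rule["port"],
            port_range_max=rule["port"],
            remote_ip_prefix=rule["remote_ip_prefix"],
            ethertype="IPv4",
        )
    return sg

if __name__ == "__main__":
    apply_web_tier_policy()
```

The point of the pattern is that the desired state goes through the same review gates as any other change, rather than being hand-edited on production systems.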
SUMMARY :
Stu Miniman and John Troyer wrap up a conversation with Roland Cabana of Vault Systems at OpenStack Summit 2018 in Vancouver, brought to you by Red Hat and the OpenStack Foundation. Vault runs a security-focused cloud for Australian government agencies on upstream OpenStack, built around the Australian Signals Directorate's controls, with micro-segmentation, BGP routing to the host over Mellanox and Cumulus networking, and direct ICON fiber links from agency desktops into tenancies in its cloud. Cabana describes managing network changes as code through the automation team, contributing non-sensitive work back upstream, early exploration of serverless options, and the value a long-term OpenStack release cycle would bring to government users whose vetting against Australian controls can take a year or two.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Australia | LOCATION | 0.99+ |
Vancouver | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Troyer | PERSON | 0.99+ |
OpenStack | ORGANIZATION | 0.99+ |
one year | QUANTITY | 0.99+ |
Roland Cabana | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Mark Collier | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
Roland | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Vault Systems | ORGANIZATION | 0.99+ |
Alcatel | ORGANIZATION | 0.99+ |
Australian Signals Directorate | ORGANIZATION | 0.99+ |
Rupert Taylor-Price | PERSON | 0.99+ |
Department of Finance | ORGANIZATION | 0.99+ |
18 month | QUANTITY | 0.99+ |
six month | QUANTITY | 0.99+ |
ASD | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
Neutron | ORGANIZATION | 0.99+ |
last week | DATE | 0.99+ |
Mellanox | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Australian Government | ORGANIZATION | 0.99+ |
OpenStack | TITLE | 0.99+ |
Vancouver, Canada | LOCATION | 0.99+ |
Cumulus | ORGANIZATION | 0.99+ |
1,088 plus controls | QUANTITY | 0.99+ |
OpenStack Summit 2018 | EVENT | 0.99+ |
first-time | QUANTITY | 0.98+ |
Vault Academy | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
Vault | ORGANIZATION | 0.97+ |
both | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
Liberty | TITLE | 0.96+ |
three versions | QUANTITY | 0.96+ |
Kubernetes | TITLE | 0.96+ |
theCUBE | ORGANIZATION | 0.95+ |
Zuul | ORGANIZATION | 0.95+ |
one release | QUANTITY | 0.95+ |
DevSecOps | TITLE | 0.93+ |
up to 100g | QUANTITY | 0.93+ |
today | DATE | 0.93+ |
OpenStack Summit North America 2018 | EVENT | 0.91+ |
ConnectX-5 cards | COMMERCIAL_ITEM | 0.9+ |
3,000 government public servants | QUANTITY | 0.9+ |
ISM | ORGANIZATION | 0.9+ |
Upstream | ORGANIZATION | 0.9+ |
this morning | DATE | 0.89+ |
Agile Methodologies | TITLE | 0.88+ |
a second | QUANTITY | 0.87+ |
Queens | ORGANIZATION | 0.87+ |
couple years ago | DATE | 0.87+ |
DevOps | TITLE | 0.86+ |
day two | QUANTITY | 0.86+ |
Liberty | ORGANIZATION | 0.85+ |
Rob Young, Red Hat | VMworld 2017
>> Narrator: Live from Las Vegas. It's The Cube covering VMworld 2017 brought to you by VMware and its ecosystem partners. >> Welcome back to The Cube on day three of our continuing coverage of VMworld 2017. I'm Lisa Martin, our cohost for this segment is John Troyer and we're excited to be joined by Rob Young, who is a Cube alumnus, and the manager of product and strategy at Red Hat. Welcome back to the Cube, Rob. >> Thanks, Lisa, it's great to be here. >> So Red Hat and VMware, you've got a lot of customers in common. I imagine you've been to many, many VMworlds. What are you hearing from some of the folks you were talking to during the show this week? >> So a lot of the interest that we're seeing is how Red Hat can help customers, VMware or otherwise, continue to maintain mode one applications, like Z applications, while planning for mode two, more cloud-based deployments. And we're seeing a large interest in open source technologies and how that model could work for them to lower cost, to innovate more quickly, deliver things in a more agile way. So there's a mixture of messages that we're getting, but we're receiving them loud and clear. >> Excellent. You guys have a big investment in OpenStack. >> Yes we do, and even back in the early days when OpenStack was struggling as a technology, we recognized that it was an enabler for customers, partners, large enterprises that wanted to create and maintain their own private clouds, or even to maintain a hybrid cloud environment where they maintained, managed, and controlled some aspect of it while having some of the workloads on a public cloud environment as well, so Red Hat has invested heavily in OpenStack to this point. We're now in our 11th version of Red Hat OpenStack Platform and we continue to lead that market as far as OpenStack development, innovation, and contributions. >> Rob, we were with the Cube at the last OpenStack summit in Boston, big Red Hat presence there obviously, I was very impressed with the maturity of the OpenStack market and community, I mean we're past the hype cycle now, we're down to real people, real uses, real people using it, a lot of varied people with strong business critical investment in OpenStack in many different use cases. Can you kind of give us a picture of the state of the OpenStack market and the userbase now that we are past that hype cycle. >> So I think what we're witnessing now in the market is a thirst for OpenStack, one because it's a very efficient architecture, it's very extensible, there's a tremendous ecosystem around the Red Hat distribution of OpenStack, and what we're seeing from enterprises, specifically in the telecom industry, is that they see OpenStack as a way to lower their costs, raise their margins in a very competitive environment, so anywhere you see an industry where there's very heavy competition for customers, that type of thing, OpenStack is going to play a role, if it's not already doing so, it's going to be there at some point because of the simplification of what was once complex, but also in the cost savings that can be realized by managing your own cloud within a hybrid cloud environment. >> You mentioned Telco, and specifically OpenStack and the value for companies that need to compete for customers, besides Telco, what other industries are really primed for embracing OpenStack technologies?
>> So we're seeing it across many industries, finance and banking, healthcare, public sector, anywhere where there is an emphasis on the move to open source and to open compute environments, open APIs, we're seeing tremendous growth in traction, and because Red Hat has been a leader in Linux, many of these same customers, who trust Red Hat Enterprise Linux, are now looking to us for the very same reason on the OpenStack platform, because, much like we have done with Enterprise Linux, we have adopted an upstream community-driven project and we have made it safe to use within an environment, in an enterprise way, in a supported way as well, via subscription, so many industries, many verticals, we expect to see more, but primary use cases in FE, in Telco, healthcare, banking, public sector are among the top dogs out there. >> Is there a customer story that sort of stands out in your mind as a hallmark that showcases the success of working with Red Hat and OpenStack? >> Well there are many customers, many partners out there that we work with, if you look at four out of the five large Telcos, Orange, Ericsson, Nokia, others that we've recently done business with, would be really good examples, of not only customer use cases, but how they're using OpenStack to allow their customers to have a better experience with their cell networks, with their billing, with their availability, that type of thing, and we had two press announcements that came out in May, one of them is an educational consortium of very high profile Northeast learning institutions, public institutions that are now standardized on OpenStack and are contributing, and we've also got Oak Ridge, forgive me, it escapes me, but there's a case study out there on the Red Hat website that was posted on May 8th that depicts how they're using our product and how others can do the same. >> Rob, switching over a little bit to talking a little bit more about the tech and how the levers get pulled, we're talking about cloud, another term past the hype cycle, it's a reality, but when you're talking about cloud you're talking about scale, we mentioned Linux and OpenStack and Red Hat, built on a foundation of Linux, super solid, super huge community, super rich, super long history, but can you talk about scale up, scale out, data center, public cloud, private, how are you seeing enterprises of various sizes address the scale problem and using technologies like the Red Hat cloud stack to address that? >> So there's a couple of things, there's many aspects to that question, but what we have seen from OpenStack is, when we first got involved with the project, it was very much bounded by the number of servers that you needed to deploy an OpenStack infrastructure on. What we've done as a company is we've looked at the components and we have unshackled them from each other, so that you can scale individual storage, individual network, individual high availability on the number of servers that best fit your needs, so if you want to have a very large footprint with many nodes of storage, you can do that, if you want to scale that just when peak season hits you can do that as well, but we have led the community efforts to unshackle the dependencies between components, so from that aspect we have scaled the technology, now scaling operational capabilities and skillsets as well, we've also led the effort to create open APIs for management tools, we've created communities around OpenStack and other open source technologies. >> Automation a big part of that.
>> Automation as well. So if you look at Ansible, Red Hat has a major stake in Ansible, and it is predominantly the management scripting language of choice, or the management platform of choice, so we have baked that into our products, we have made it very simple for customers to not only deploy things like OpenStack, but also OpenShift, CloudForms, and other management capabilities that we have, but we've also added APIs to these products, so that if you choose not to use a Red Hat solution, you can easily plug in a third-party solution, or a homegrown solution, into our framework for our stack so that you can use our toolset, a single pane of glass, to manage it all. >> So with that, can you tell us a little bit about the partner ecosystem that Red Hat has, and what you've done to expand that to make your customers successful in OpenStack environments? >> Absolutely, as you're aware, with Red Hat Enterprise Linux we certified most of the hardware, all of the hardware OEMs, on Red Hat Enterprise Linux, and we have a tremendous ecosystem around Enterprise Linux for OpenStack, this is probably one of the most exciting aspects of Red Hat right now, if you look at the ecosystem and the partners that are around OpenStack on its own, we've got an entire catalog of hundreds of partners, some at a deeper level than others, integration wise, business wise, whatever, but the ecosystem is growing and it's not just because of Red Hat's efforts, we have customers and partners that are coming to us, we need a storage solution, we're using NetApp as an example, you need to figure out a way to integrate with these guys, and certify, and make sure that something that we've already invested in is going to work with your product as well as it works with our legacy stuff, so the ecosystem around OpenStack is growing, we're also looking at growing the ecosystem around OpenShift, around Red Hat Virtualization as well, so I think you'll see a tremendous amount of overlap in those ecosystems as well, which is a great thing for us, the synergies are there, and I think it's only going to help us multiply our efforts in the market. >> Go on John. >> So Rob, talking again about partnerships, I've always been intrigued at the role of open source upstream, the open source community, and the people who then take that open source and then package it for customers and do the training and enablement, so can you maybe talk a little bit about some of the open source training partners, and the role of Red Hat in translating all that upstream code into a product that is integrated and has training and is available for consumption for the IT side. >> Sure, so at Red Hat we partner not only with open source community members and providers, but also with proprietary vendors, so I just wanted to make sure everybody understands, we're not exclusive about who we partner with.
Upstream, we look for partners that have the open source spirit in mind, so for everything that they're asking us to either consider as a component within our solution or to integrate with, we want to make sure that they are, to the letter of the law, contributing their code back, and there's no strings attached. Really the value comes in, are they providing value to their customers with the contribution, and also to our combined customers, and what we're seeing in our partnerships is that many of our partners, even proprietary partners such as Microsoft for example, are looking at open source in a different way, and they're providing open source options for their customers and consumption-based models as well, so we hope that we're having a positive impact in that way, because if you look at our industry, it's really headed towards the open source, open API, open model, and the proprietary model still has a time and place I believe, but I think it's going to diminish over time, and open source is going to be the way people do business together. >> One of the things that you were talking about reminded me of one of the things that Michael Dell said yesterday, during the keynote with Pat Gelsinger, and that was about innovation, and how you really get companies to successfully innovate with their customers, and that sounds like that's definitely one of the core elements of what you're doing with customers, he said customers and partners are bringing us together to really drive that innovation. >> Yeah, I couldn't agree more, and it's an honor to be mentioned in the same breath as Michael Dell by the way, but what we see is, because of the open source model, you can release early and often, and you can fail early, and what that does is it encourages innovation, so it's not only corporations like Red Hat that are contributing to upstream projects, OpenStack as an example, or Linux as an example, or KVM as an example, there's also college students, there's people out there who work for Bank of America, people all across the world, and the one thing that unites us is recognizing the value of our contributions to an open source community, and we think that really helps with agile development, agile delivery, and if you look at our project deliveries for OpenStack as an example, OpenStack releases a major version of its product every six months, and because of the contributions that we get from our community, well, contributions come in many forms, and testing is a huge part of that, because of the testing we get from a worldwide community, we're able to release shortly after a major version of upstream OpenStack, because that kind of innovation, in a pure waterfall model, is not even possible, in an open source model, it's just a way of life. >> So as we're kind of wrapping up VMworld day three, what are some of the key takeaways for you personally from the event and that Red Hat has observed in the last couple of days here in Las Vegas?
>> So there's a couple of observations that have been burned into my brain. One is, we believe at Red Hat that virtualization as a model will remain core, not only to legacy applications, mode one, but also to mode two, and the trend that we see in the model for mode two is that virtualization is going to be a commodity feature, people are going to expect it to be baked into the operating system, or into the infrastructure where they're running the operating system their application's on, so we see that trend, and we suspected it, but coming to VMworld this week helped confirm that, and I say that because of the folks I've talked to after sessions, at dinner, in the partner pavilion, so I really see that as a trend. The other thing I see is that there's a tremendous thirst within the VMware customer base to learn more about open source and learn more about how they can leverage this, not only to lower their total cost of ownership, and not to replace VMware, but how they can complement what they've already invested in with faster, more agile-based mode two development, and that's where we see the market from a Red Hat standpoint. >> Thanks Dan, well there's a great TEI study that you guys did recently, Total Economic Impact on virtualization, that you can find on the website, and Rob we thank you for sticking around and sharing some of your insights and innovations that Red Hat is pioneering, and we look forward to having you back on the show. >> It's great to be here, thanks. >> Absolutely, and for my co-host John, I am Lisa Martin, you're watching the Cube continuing coverage, day three of VMworld 2017
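As a rough illustration of the open APIs Rob mentions, a homegrown tool can talk to the same OpenStack endpoints the vendor tooling uses. The sketch below assumes the openstacksdk library and a cloud entry named in clouds.yaml; the cloud name is a placeholder and nothing here represents Red Hat's actual management stack.

```python
# A hedged sketch of a small homegrown "single pane of glass" report
# that reads inventory through the standard OpenStack APIs.
import openstack

conn = openstack.connect(cloud="rhosp-lab")   # hypothetical clouds.yaml entry

def inventory_report():
    # List hypervisors, instances, and volumes for a quick overview.
    print("Hypervisors:")
    for hv in conn.compute.hypervisors():
        print(f"  {hv.name}")
    print("Instances:")
    for server in conn.compute.servers():
        print(f"  {server.name} [{server.status}]")
    print("Volumes:")
    for vol in conn.block_storage.volumes(details=True):
        print(f"  {vol.name} {vol.size}GB [{vol.status}]")

if __name__ == "__main__":
    inventory_report()
```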
SUMMARY :
Lisa Martin and John Troyer talk with Rob Young, manager of product and strategy at Red Hat, on day three of VMworld 2017 in Las Vegas, brought to you by VMware and its ecosystem partners. Young describes Red Hat helping customers maintain mode one applications while planning mode two, cloud-based deployments, its continued investment in OpenStack through the eleventh version of Red Hat OpenStack Platform, and adoption across telco, finance, healthcare, and the public sector. He also covers scaling OpenStack components independently, Ansible-driven management with open APIs, a growing partner ecosystem, the pace of innovation an upstream community enables, and his takeaway that virtualization is becoming a commodity feature while VMware customers look to open source to complement what they have already invested in.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dan | PERSON | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
May 8th | DATE | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
John Troyer | PERSON | 0.99+ |
Rob | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Rob Young | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
Orange | ORGANIZATION | 0.99+ |
May | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Michael Delft | PERSON | 0.99+ |
Redhat | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Enterprise Linux | TITLE | 0.99+ |
four | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
hundreds of partners | QUANTITY | 0.99+ |
Cube | ORGANIZATION | 0.99+ |
Openstack | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
RedHat | ORGANIZATION | 0.99+ |
Oakridge | ORGANIZATION | 0.99+ |
two press announcements | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
this week | DATE | 0.98+ |
Opensource | ORGANIZATION | 0.98+ |
openstack | TITLE | 0.98+ |
Bank of America | ORGANIZATION | 0.97+ |
Openstack | TITLE | 0.97+ |
five large | QUANTITY | 0.97+ |
RedHat | TITLE | 0.96+ |
VMworld 2017 | EVENT | 0.96+ |
VM World | EVENT | 0.96+ |
Opensource Upstream | ORGANIZATION | 0.96+ |
Anserable | ORGANIZATION | 0.96+ |
OpenStack | TITLE | 0.95+ |
Red Hat | ORGANIZATION | 0.93+ |
Redhat Enterprise Linux | TITLE | 0.93+ |
day three | QUANTITY | 0.93+ |
many partners | QUANTITY | 0.92+ |
Netapp | TITLE | 0.92+ |
every six months | QUANTITY | 0.91+ |
Z | TITLE | 0.91+ |
VM | ORGANIZATION | 0.9+ |
Vmworlds | ORGANIZATION | 0.9+ |
many customers | QUANTITY | 0.9+ |
opensource | ORGANIZATION | 0.89+ |
Upstream | ORGANIZATION | 0.87+ |
Chip Childers, Cloud Foundry Foundation - Cloud Foundry Summit 2017 - #CloudFoundry - #theCUBE
>> Narrator: Live, from Santa Clara in the heart of Silicon Valley, it's theCUBE. Covering Cloud Foundry Summit 2017. Brought to you by the Cloud Foundry Foundation and Pivotal. >> Hi, this is Stu Miniman, joined with my cohost, John Troyer. Happy to welcome to the program a first-time guest, Chip Childers, who's the CTO of the Cloud Foundry Foundation. Chip, fresh off the keynote stage, >> Yep. >> how's everything going? >> It's going great. We're really happy with the turnout of the conference. We are really happy with the number of large enterprises that are here to share their story. The really active vendor ecosystem around the project. It's great. It's a wonderful event so far. >> Yeah, I was looking back, I think the last time I came to the Cloud Foundry Show, it was before the Foundation existed, we were in the Hilton in San Francisco, it was obviously a way smaller group. Tell us kind of the goals of the Foundation, doing the event, bringing the community in. >> Yeah, you can think about our goals as being, of course, we're the stewards of the intellectual property, the actual software that the vendors distribute. We see our role in the ecosystem as being really two key things. One: we're focused on supporting the users, the customers, and the direct uses of the open source software. That's first and foremost. Second though, we want to make sure there is a really robust market ecosystem that is wrapped around this project, right. Both in terms of the distribution, the regional providers that offer Cloud Foundry based services, but also large system integrators that are helping those customers go through digital transformation. Re-platform applications, you know, really figure out their way through this process. So, it's all about supporting the users and then supporting the market around it. >> Yeah, as we go to a lot of these events, you know, there are certain themes that emerge. There were two big ones, and both of them showed up in what you did in the keynote. Number one is multicloud, number two is you've got all of these various open source pieces, >> Chip: Yep. you know, what fits together, what interlocks together, you know, which ones sit side by side. Why don't we start with kind of the open source piece first? Because you're heavily involved in a lot of those. Cloud Foundry, you know, what are the new pieces that are bolting on, or sitting on top, or digging into it, and what's going on there? >> You know, I think first I want to start with a basic philosophy of our upstream community. There are billions of dollars that rely on this platform today. And that continues to grow. Right, because we're showing up in the Fortune 500, Global 2000, as well as lots of small start-ups, that are using Cloud Foundry to get code shipped faster. So our community that builds the upstream software spends a lot of time being very thoughtful about their technical decisions. So what we release, and what gets productized by the downstreams, is a complete system. From the operating system all the way up to and including the various programming languages and frameworks and everything in between. And because we release a complete platform, at a really high velocity, and so many people rely on its quality, we're very thoughtful about when is the right time to build our own, when should we adopt and embrace and continue to support another open source project, so we spend a lot of time really thinking about that.
And the areas today that I highlight around specific collaborations include the Open Service Broker API, which we actually spun out of being just a Cloud Foundry implementation. And we embraced other communities, and found a way to share the governance of that. So we move forward as a big industry together. >> Stu: Yeah, and speaking on that a little bit more. Very interesting to see. I saw Red Hat for instance speaking with OpenShift, Kubernetes is there. So, how should customers think about this? Are the PaaS wars over? Now you can choose all the pieces that you want? Or, it's probably oversimplifying it. >> I think it's oversimplifying it, it depends. You can go try to build your own platform if you want, through a number of serious components, or you can just use something like Cloud Foundry, that has solved for that. But the important thing is that we have specifically designed Cloud Foundry to allow for the backing services to come from anywhere. And so, it's both a differentiator for the various distributions of Cloud Foundry, but also an opportunity for cloud providers, and even more importantly, it's an opportunity for the enterprise users that live in complex worlds, right? They're going to have multiple platforms, there are going to be multiple levels of abstraction, from VMs to containers, you know, to the PaaS abstraction, even event-driven frameworks. We want that all to work really well together. Regardless of the choices you make, because that's what's most valuable to the customers. >> Okay, the other piece, networking, you talked about. Why don't you share? >> Yeah, yeah so, besides the Service Broker API, we've added support for what's called Container to Container Networking. I don't necessarily need to dig into the details there, but let's just say that when you're building microservices, the application that the user is experiencing is actually a combination of a lot of different applications. They all talk to each other and rely on each other. So we want to make sure there's a policy-based framework for describing how the web tier is going to talk to the authentication service, or is going to talk to the booking service, or the inventory service. They all need to have rules about how they communicate with each other. And we want to do that in the most efficient way possible. So we've adopted the Container Networking Interface as the standard plugin; that is now at CNCF, the Cloud Native Computing Foundation. We think it's the right abstraction, we think it's great. It gives us access to all the fascinating work that is going on around software networking, overlay networking, an industry standard API plugged in to our policy-driven framework. >> Along the same theme, Kubo, a big new news project, also kind of an integration of some Cloud Foundry concepts with a broader ecosystem, in this case another CNCF project, Kubernetes. Could you speak a little bit to that? >> The Kubernetes community is doing a great job creating a great container-driven experience. You know, that abstraction is all about the container. It's not about, you know, the code. So it's different than Cloud Foundry. There are workloads that make sense to run in one or the other. And we want to make sure that they run really well. Right, so the problem that we're solving with the Kubo project is, what deploys Kubernetes? What supports Kubernetes if there is an infrastructure outage and a node goes offline?
Right, because it does a great job of restarting containers, but if you have ten nodes in a cluster, and now you're down to nine, that's a problem. So what Bosh does is it takes care of solving the node outage level problem. You can also do rolling upgrades that are seamless, no downtime for the Kubernetes cluster. It brings a level of operational maturity to Kubernetes users that they may not have had otherwise. >> Chip, can you bring us inside a little bit on the creation of Kubo, is that something that the market and customers drove towards you? I talked to a couple other Cloud Foundry ecosystem members that were doing some other ways of integrating in Kubernetes. So what led to this way of deploying it with Bosh? >> Yeah, absolutely so, it came out of a direct collaboration between Pivotal and Google. And it was driven based on Pivotal customer demand. It also, if you speak with people from Google that are involved in the project, they also see it as a need for the Kubernetes ecosystem. So it's driven based on real-world large financial services companies that wanted to have the multiple abstractions available, and they wanted to do it with a common operational platform that is proven, mature, and that they've already adopted. And then as that collaboration bore fruit, the project was announced by Pivotal and Google several months back, and they realized that they needed to move it to a vendor-neutral location so that we can continue to expand the community that can work on it, that can build up the story. >> The other topic I raised at the beginning of the interview was the multicloud. So in a panel, Microsoft, Google, MTC for Amazon was there. All of the cloud guys are going to tell you we have the best platform and can do the best things for you. >> Of course they do. >> How do you balance the "We want to live in a multicloud world" and be able to go there, versus "Oh, I'm going to take standard plus and get in a little bit deeper to make sure that we're stickier with the customers there." What role does Cloud Foundry play? What have you seen in the marketplace for that? >> Well, the public cloud providers, if you look at the services that they offer, you can roughly categorize them with two things. One are the infrastructure building blocks. Two are the higher level services, like their database capabilities, their analytics capabilities, log aggregation, you know, and they all have a portfolio that varies, some have specific things that are very similar. So when we talk about multicloud, we talk about Cloud Foundry as a way to make use of those common capabilities, now they're going to differentiate based on speeds and feeds, availability, whatever they choose to, but you can then as a user have choice. And then secondarily, that Open Service Broker initiative is really about saying "great, there's also all these really valuable additional capabilities, that, as a user, I may choose to integrate with a Google machine learning service, or I may choose to integrate with a wonderful Microsoft capability, or an Amazon capability." And we just want to make that easy for a developer to make that choice. >> Chip, Cloud Foundry was very early in terms of a concept of a platform of services, let's not call it platform as a service right now. But you know, this platform that's going to make developers' lives easier, multi-target, multicloud we call it now, from your laptop to anywhere.
And it's been a really interesting discussion over the last couple years as this parallel container thread has come up with Kubernetes and Mesosphere and all the orchestration tools, and the focus has been on orchestration tools. And I've always thought Cloud Foundry was kind of way ahead of the game in saying "wait a minute, there's a set of services that you're going to have for full life-cycles, day two operation, at scale, that you all are going to have to pull together from components." As we're doing this interview here, and this year at Cloud Foundry Summit, is there anything that you think people don't kind of realize, that over and over again people who are using Cloud Foundry go, "Wow, I'm really glad I had logging or identity management," or what are some of the frameworks that people sometimes don't realize are in there that actually are a huge time-saver? >> Yeah, there are a lot of operational capabilities in the Cloud Foundry platform. When you include both our Bosh layer, as well as the elastic runtime, which is the developer-centric experience-- >> John: Anything that people don't often realize is in there? >> Well, I think that the right way to think of it is, it's all the things you need in one application, right? So we've been doing this for years as developers. As application operations teams, we've been doing it. We've just been doing it via a bunch of tickets, we've been doing it via a bunch of scripts. What Cloud Foundry does is it takes all of those capabilities you need to really trust a platform to operate something on your behalf, and give you the right view into it, right? The appropriate telemetry, log aggregation, and knowing that there's going to be health monitoring there. It makes it really easy. Right, so we were talking earlier about the haiku that Onsi Fakhouri from Pivotal had authored, it's appropriate. It's a promise that a platform makes. And the platform is designed to let a user trust that the declarative nature of asking a platform to do X, Y, or Z will be delivered. >> Chip, we've been hearing Pivotal talk a lot about Spring, when Cloud Foundry's involved. Is it so much so that the Foundation needs to be behind that, or support that? How does that interact and work?
So, we like to support all of them, we're big fans of any that work really well with the platform and maybe integrate deeper. But it's a polyglot platform. >> We want to give you the final word. People take away from Cloud Foundry Summit 2017, what would you want them to take away? >> Yeah the simple takeaway that I can give you is that this is an absolutely enterprise grade open source ecosystem. And you don't hear that often, right? Because normally we talk about products, being enterprise great. >> Did somebody say in the keynote enterprise great mean that there's a huge salesforce that's going to try sell you stuff? (Chip laughs) Well that's coming from the buying side of the market for years. And you know, it was a bit of a joke. What is "enterprise great?" Well, it means that there's a piece of paper that says, this product will cost x dollars and the salesperson is offering it to you. So of course it's going to be enterprise great. But really, we see it as four key things, right? It's about security, it's about being well-integrated, it's about being able to scale to the needs of even the largest enterprises, and it's also about that great developer experience. So, Cloud Foundry is an ecosystem and all of our downstream distributions get the advantage of this really robust and mature technical community that is producing this software. >> Chip, really appreciate you sharing all the updates with us, and appreciate the foundation's support to bring theCUBE here. We'll be back with lots more coverage here from The Cloud Foundry Summit 2017, you're watching theCUBE. (techno music)
SUMMARY :
Stu Miniman and John Troyer talk with Chip Childers, CTO of the Cloud Foundry Foundation, at Cloud Foundry Summit 2017 in Santa Clara, brought to you by the Cloud Foundry Foundation and Pivotal. Childers describes the Foundation's role supporting users and the commercial ecosystem around the project, the upstream community's care in deciding when to build versus adopt other open source technology, the Open Service Broker API spun out for shared governance, container-to-container networking built on the Container Networking Interface, Kubo bringing Bosh-managed Kubernetes deployments, multicloud portability, Spring and the other language communities, and his closing point that Cloud Foundry is an enterprise-grade open source ecosystem built on security, integration, scale, and developer experience.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Troyer | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Chip Childers | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Cloud Foundry Foundation | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Cloud Native Computing Foundation | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Bosh | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Pivotal | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
Cloud Foundry | TITLE | 0.99+ |
Two | QUANTITY | 0.99+ |
Ruby on Rails | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
Cloud Foundry Show | EVENT | 0.99+ |
Hilton | LOCATION | 0.99+ |
Kubo | ORGANIZATION | 0.98+ |
Santa Clara | LOCATION | 0.98+ |
Chip | PERSON | 0.98+ |
Ruby | TITLE | 0.98+ |
Stu | PERSON | 0.98+ |
MTC | ORGANIZATION | 0.98+ |
Spring Boot | TITLE | 0.98+ |
one application | QUANTITY | 0.98+ |
two things | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
first-time | QUANTITY | 0.97+ |
billions of dollars | QUANTITY | 0.97+ |
nine | QUANTITY | 0.97+ |
two key things | QUANTITY | 0.97+ |
ten nodes | QUANTITY | 0.96+ |
One | QUANTITY | 0.96+ |
Spring Cloud | TITLE | 0.96+ |
Narrator: Live | TITLE | 0.96+ |
Cloud Foundry Summit | EVENT | 0.95+ |
Global 2000 | ORGANIZATION | 0.94+ |
Cloud Foundry Summit 2017 | EVENT | 0.94+ |
Windows | TITLE | 0.94+ |
this year | DATE | 0.93+ |
four key | QUANTITY | 0.92+ |
today | DATE | 0.92+ |
Spring | TITLE | 0.92+ |
#theCUBE | ORGANIZATION | 0.91+ |
Linux | TITLE | 0.91+ |
Kubernetes | ORGANIZATION | 0.9+ |
Fortune 500 | ORGANIZATION | 0.9+ |
several months back | DATE | 0.9+ |
Dot Net | ORGANIZATION | 0.88+ |
Multicloud | ORGANIZATION | 0.86+ |
Onsi Fakhouri | PERSON | 0.86+ |
theCUBE | ORGANIZATION | 0.86+ |
Kuber | ORGANIZATION | 0.83+ |
Open Shift | TITLE | 0.82+ |
Kendall Nelson, OpenStack Foundation & John Griffith, NetApp - OpenStack Summit 2017 - #theCUBE
>> Narrator: Live from Boston, Massachusetts, it's theCUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. (techno music) >> And we're back. I'm Stu Miniman, joined by my co-host, John Troyer. Happy to welcome to the program two of the keynote speakers this morning, who worked on some of the container activity: Kendall Nelson, who's an Upstream Developer Advocate with the OpenStack Foundation. >> Yep. >> And John Griffith, who's a Principal Engineer from NetApp, excuse me, through the SolidFire acquisition. Thank you so much both for joining. >> Kendall Nelson: Yeah. Thank you. >> John Griffith: Thanks for havin' us. >> Stu Miniman: So you see-- >> Yeah. >> When we have any slip-ups when we're live, we just run through it. >> Run through it. >> Kendall, you ever heard of something like that happening? >> Kendall Nelson: Yeah. Yeah. That might've happened this morning a little bit. (laughs) >> So, you know, let's start with the keynote this morning. I tell ya, we're pretty impressed with the demos. Sometimes the demo gods don't always live up to expectations. >> Kendall Nelson: Yeah. >> But maybe share with our audience just a little bit about kind of the goals, what you were looking to accomplish. >> Yeah. Sure. So basically what we set out to do was, once the Ironic nodes were spun up, we wanted to set up a standalone Cinder service and use Docker Compose to do that, so that we could do an example of creating a volume and then attaching it to a local instance, and kind of showing the multiple backend capabilities of Cinder, so... >> Yeah, so the idea was to show how easy it is to deploy Cinder. Right? So and then plug that into that Kubernetes deployment using a flex volume plugin and-- >> Stu Miniman: Yeah. >> Voila. >> It was funny. I saw some comments on Twitter that were like, "Well, maybe we're showing Management that it's not, you know, a wizard that you just click, click, click-- >> John Griffith: Right. >> Kendall Nelson: Yeah. >> "And everything's done." There is some complexity here. You do want to have some people that know what they're doing 'cause things can break. >> Kendall Nelson: Yeah. >> I love that the container stuff was called Ironic. The bare metal was ironic because-- >> Kendall Nelson: Yeah. >> Right. When you think OpenStack at first, it was like, "Oh. This is virtualized infrastructure." And therefore when containers first came out, it was like, "Wait. It's shifting. It's going away from virtualization." John, you've been on Cinder. You helped start Cinder. >> Right. >> So maybe you could give us a little bit of the historical view as to where that came from and where it's goin'. Yeah. >> Yeah. It's kind of interesting, 'cause it... You're absolutely right. There was a point where, in the beginning, where virtualization was everything. Right? Ironic actually, I think it really started more as a means to an end to figure out a better way to deploy OpenStack. And then what happened was, as people started to realize, "Oh, hey. Wait." You know, "This whole bare metal thing and running these cloud services on bare metal and bare metal clouds, this is a really cool thing. There's a lot of merit here." So then it kind of grew and took on its own thing after that. So it's pretty cool. There's a lot of options, a lot of choices, a lot of different ways to run a cloud now, so...
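For readers who want to see the volume workflow from the keynote demo Kendall describes, here is a simplified sketch using the OpenStack SDK: create a Cinder volume, wait for it to become available, then attach it to an instance. The cloud, server, and volume names are hypothetical, and the demo's standalone Cinder deployment via Docker Compose and the flex volume plugin are not reproduced here.

```python
# A simplified sketch of the create-and-attach flow described above.
# Names are hypothetical; this does not reproduce the keynote setup.
import openstack

conn = openstack.connect(cloud="summit-demo")    # hypothetical clouds.yaml entry

# Create a small volume on whichever backend Cinder schedules it to.
volume = conn.block_storage.create_volume(size=1, name="demo-volume")
conn.block_storage.wait_for_status(volume, status="available")

# Attach it to an existing instance (hypothetical server name).
server = conn.compute.find_server("demo-instance")
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```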
Just there are definitely tons of ways you can run a cloud and open infrastructure is really interesting and growing. >> That has been one thing that we've noticed here at the show. So my first summit, so it was really interesting to me as an outsider, right, trying to perceive the shape of OpenStack. Right? Here the message has actually been very clear. We're no longer having to have a one winner... You know, one-size-fits-all kind of cloud world. Like we had that fight a couple of years ago. It's clear there's going to be multiple clouds, multiple places, multiple form factors, and it was very nice people... An acknowledgement of the ecosystem, that there's a whole open source ecosystem of containers and of other open source projects that have grown up all around OpenStack, so... But I want to talk a little bit about the... And the fact that containers and Kubernetes and that app layer is actually... Doesn't concern itself with the infrastructure so much so actually is a great fit for sitting on top of or... And adjacent to OpenStack. Can you all talk a little bit about the perception here that you see with the end users and cloud builders that are here at the show and how are they starting to use containers. Do they understand the way these two things fit together? >> Yeah. I think that we had a lot of talks submitted that were focused on containers, and I was just standing outside the room trying to get into a Women of OpenStack event, and the number of people that came pouring out that were interested in the container stack was amazing. And I definitely think people are getting more into that and using it with OpenStack is a growing direction in the community. There are couple new projects that are growing that are containers-focused, like... One just came into the projects, OpenStack Helm. And that's a AT&T effort to use... I think it's Kubernetes with OpenStack. So yeah, tons. >> So yeah, it's interesting. I think the last couple of years there's been a huge uptick in the interest of containers, and not just in containers of course, but actually bringing those together with OpenStack and actually running containers on OpenStack as the infrastructure. 'Cause to your point, what everybody wants to see, basically, is commoditized, automated and generic infrastructure. Right? And OpenStack does a really good job of that. And as people start to kind of realize that OpenStack isn't as hard and scary as it used to be... You know, 'cause for a few years there it was pretty difficult and scary. It's gotten a lot better. So deployment, maintaining, stuff like that, it's not so bad, so it's actually a really good solution to build containers on. >> Well, in fact, I mean, OpenStack has that history, right? So you've been solving a lot of problems. Right now the container world, both on the docker side and Kubernetes as well, you're dealing with storage drivers-- >> John Griffith: Yeah. >> Networking overlays-- >> Right. >> Multi-tenancy security, all those things that previous generations of technology have had to solve. And in fact, I mean, you know, right now, I'd say storage and storage interfaces actually are one of the interesting challenges that docker and Kubernetes and all that level of containers and container orchestration and spacing... I mean, it seems like... Has OpenStack already solved, in some way, it's already solved some of these problems with things like Cinder? >> Abso... Yeah. >> John Troyer: And possibly is there an application to containers directly? >> Absolutely. 
>> I mean, I think the thing about all of this... And there's a number of us from the OpenStack community on the Cinder side as well as the networking side, too-- >> Yeah. >> Because that's another one of those problem spaces. That are actually taking active roles and participating in the Kubernetes communities and the Docker communities to try and kind of help with solving the problems over on that side, right? And moving forward. The fact is, storage is kind of boring, but it's hard. Everybody thinks-- >> John Troyer: It's not boring. >> Yeah. >> It's really awesomely hard. Yeah. >> Everybody thinks it's, "Oh, I'll just do my own." It's actually a hard thing to get right, and we've learned a lot over the last seven years of OpenStack. >> Yeah. >> We've learned a lot in production, and I think there's a lot to be learned from what we've done and how things could be going forward with other projects and new technologies, to kind of learn from those lessons and make 'em better, so... >> Yeah. >> In terms of the multicloud, hybrid cloud world that we're seeing, right? What do you see as the role of OpenStack in those kinds of multicloud deployments now? >> OpenStack can be used in a lot of different ways. It can be on top of containers or in containers. You can orchestrate containers with OpenStack. That's like the... Depending on the use case, you can plug and play a lot of different parts of it. On all the projects, we're trying to move to standalone sorts of services, so that you can use them more easily with other technologies. >> Well, and part of your demo this morning, you were pulling out of a containerized repo somehow. So is that kind of a path forward for the mainline OpenStack core? >> So personally, I think it would be a pretty cool way to go forward, right? It would make things a lot easier, a lot simpler. And kind of to your point about hybrid cloud, the thing that's interesting is people have been talking about hybrid cloud for a long time. What's most interesting these days, though, is containers and things like Kubernetes and stuff, they're actually making hybrid cloud something that's really feasible and possible, right? Because now, if I'm running on a cloud provider, whether it's OpenStack, Amazon, Google, DigitalOcean, it doesn't matter anymore, right? Because all of that stuff in my app is encapsulated in the container. So hybrid cloud might actually become a reality, right? The one thing that's missing still (John Troyer laughs) is data, right? (Kendall Nelson laughs) Data gravity and that whole thing. So if we can figure that out, we've actually got somethin', I think. >> Interesting comment. You know, hybrid cloud a reality. I mean, we know the public cloud here, it's real. >> Yeah. >> With the Kubernetes piece, doesn't that kind of pull together some... Really enable some of that hybrid strategy for OpenStack, which I felt like two or three years ago it was like, "No, no, no. Don't do public cloud. >> John Griffith: Yeah. >> "It's expensive and (laughter) hard or something. "And yeah, infrastructure's easy and free, right?" (laughter) Wait, no. I think I missed that somewhere. (laughter) But yeah, it feels like you're right at the space that enables some of those hybrid and multicloud capabilities.
I did a demo about a year ago of using Amazon and using OpenStack, right? And running the exact same workloads the exact same way with the exact same tools, all from Docker machine and Swarm. It was fantastic, and now you can do that with Kubernetes. I mean, now that's just... There's nothing impressive. It's just normal, right? (Kendall Nelson laughs) That's what you do. (laughs) >> I love the demos this morning because they actually were, they were CLI. They were command-line driven, right? >> Kendall Nelson: Yeah. >> I felt at some conferences, you see kind of wizards and GUIs and things like that, but here they-- >> Yeah. >> They blew up the terminal and you were typing. It looked like you were actually typing. >> Kendall Nelson: Oh, yeah. (laughter) >> John Griffith: She was. >> And I actually like the other demo that went on this morning too, where they... The interop demo, right? >> Mm-hmm. >> John Troyer: They spun up 15 different OpenStack clouds-- >> Yeah. >> From different providers on the fly, right there, and then hooked up a CockroachDB, a huge cluster with all of them, right? >> Kendall Nelson: Yeah. >> Can you maybe talk... I just described it, but can you maybe talk a little bit about... That seemed actually super cool and surprising that that would happen that... You could script all that that it could real-time on stage. >> Yeah. I don't know if you, like, noticed, but after our little flub-up (laughs) some of the people during the interop challenge, they would raise their hand like, "Oh, yeah. I'm ready." And then there were some people that didn't raise their hands. Like, I'm sure things went wrong (John Troyer laughs) and with other people, too. So it was kind of interesting to see that it's really happening. There are people succeeding and not quite gettin' there and it definitely is all on the fly, for sure. >> Well, we talked yesterday to CTO Red Hat, and he was talking same thing. No, it's simpler, but you're still making a complicated distributed computing system. >> Kendall Nelson: Oh, definitely. >> Right? There are a lot of... This is not a... There are a lot of moving parts here. >> Kendall Nelson: Yeah. >> Yeah. >> Well, it's funny, 'cause I've been around for a while, right? So I remember what it was like to actually build these things on your own. (laughs) Right? And this is way better, (laughter) so-- >> So it gets your seal of approval? We have reached a point of-- >> Yeah. >> Of usability and maintainability? >> Yeah, and it's just going to keep gettin' better, right? You know, like the interop challenge, the thing that's awesome there is, so they use Ansible, and they talk to 20 different clouds and-- >> Kendall Nelson: Yeah. >> And it works. I mean, it's awesome. It's great. >> Kendall Nelson: Yeah. >> So I guess I'm hearing containers didn't kill OpenStack, as a matter of fact, it might enable the next generation-- >> Kendall Nelson: Yeah. >> Of what's going on, so-- >> John Griffith: Yeah. >> How about serverless? When do we get to see that in here? I actually was lookin' real quick. There's a Functions as a Service session that somebody's doing, but any commentary as to where that fits into OpenStack? >> Go ahead. (laughs) >> So I'm kind of mixed on the serverless stuff, especially in a... In a public cloud, I get it, 'cause then I just call it somebody else's server, right? >> Stu Miniman: Yeah. >> In a private context, it's something that I haven't really quite wrapped my head around yet. I think it's going to happen. 
I mean, there's no doubt about it. >> Kendall Nelson: Yeah. >> I just don't know exactly what that looks like for me. I'm more interested right now in figuring out how to do awesome storage in things like Kubernetes and stuff like that, and then once we get past that, then I'll start thinking about serverless. >> Yeah. >> Yeah. >> 'Cause where I guess I see is... At like an IoT edge use case where I'm leveraging a container architecture that's serverless driven, that's where-- >> Yeah. >> It kind of fits, and sometimes that seems to be an extension of the public cloud, rather than... To the edge of the public cloud rather than the data center driven-- >> John Griffith: Yeah. >> But yeah. >> Well, that's kind of interesting, actually, because in that context, I do have some experience with some folks that are deploying that model now, and what they're doing is they're doing a mini OpenStack deployment on the edge-- >> Stu Miniman: Yep. >> And using Cinder and Instance and everything else, and then pushing, and as soon as they push that out to the public, they destroy what they had, and they start over, right? And so it's really... It's actually really interesting. And the economics, depending on the scale and everything else, you start adding it up, it's phenomenal, so... >> Well, you two are both plugged into the user community, the hands-on community. What's the mood of the community this year? Like I said, my first year, everybody seems engaged. I've just run in randomly to people that are spinning up their first clouds right now in 2017. So it seems like there's a lot of people here for the first time excited to get started. What do you think the mood of the user community is like? >> I think it's pretty good. I actually... So at the beginning of the week, I helped to run the OpenStack Upstream Institute, which is teaching people how to contribute to the Upstream Community. And there were a fair amount of users there. There are normally a lot of operators and then just a set of devs, and it seemed like there were a lot more operators and users looking that weren't originally interested in contributing Upstream that are now looking into those things. And at our... We had a presence at DockerCon, actually. We had a booth there, and there were a ton of users that were coming and talking to us, and like, "How can I use OpenStack with containers?" So it's, like, getting more interest with every day and growing rapidly, so... >> That's great. >> Yeah. >> All right. Well, want to thank both of you for joining us. I think this went flawless on the interview. (laughter) And yeah, thanks so much. >> Yeah. >> All these things happen... Live is forgiving, as we say on theCUBE and absolutely going forward. So thanks so much for joining us. >> John Griffith: Thank you. John and I will be back with more coverage here from the OpenStack Summit in Boston. You're watching theCUBE. (funky techno music)