
Search Results for Cumulus:

JR Rivers, Cumulus Network | OpenStack Summit 2018


 

(bright music) >> Hi, I'm Peter Burris. Welcome to another CUBE Conversation from our beautiful studios here in Palo Alto, California. As we do with every CUBE Conversation, we come up with a great topic and we find someone who really understands it so they can talk about it. We capture them for you so you can learn something about some of the new trends and changes in the industry, and we're doing that today too. The topic that we're talking about is, how do you do a better job of mapping the costs that are being generated by the cloud, the information that's coming out of cloud suppliers about what you're using, to the actual business activities that generate the differential capabilities that customers are looking for. That's a tough, tough challenge, and to understand that better, we're talking with J.R. Storment, who's a co-founder of Cloudability. J.R., welcome to the CUBE. >> Thanks Peter, good to be here. >> So let's talk about... First, who are you? >> Yeah, so I'm co-founder of Cloudability, and Cloudability is focused around improving the unit economics of cloud spend, so our customers tend to be those who are spending large amounts in AWS or Azure or GCP. And we take their billing data, their utilization data, various metadata about their business and do machine learning and data science on top of it to help them get better visibility into where that spend is going, how they're using it, but more importantly to give them some controls around how they want to optimize. And optimize doesn't necessarily mean save money in a cloud world, because most companies who are moving into cloud very heavily are doing that for the innovation, for the speed, so they can deliver better data faster. But it's really about fine-tuning the conversation. Say, "Okay, here we want to save money. "Here we want to move faster. "Here we want to focus on quality." And really providing a way for the various groups that aren't normally talking, the finance teams with the engineering teams with the procurement teams, all these groups to come together, and be able to take executive input to say, "Okay, how do we want to operate? "And how do we want to improve those unit economics as we go?" >> Well, I want to start with just a quick comment on this notion of unit economics, because when people historically hear the notion of unit economics, they think of increasing scale so the average cost per unit goes down. But I think you're talking about more than that, right? Aren't you really also talking about a mapping of what spend is generating to the business activities that actually generate value, and ensuring that you get the differential or the optimized unit economics or unit cost? >> Yeah, so the mapping is actually really interestingly challenging in cloud. It's hard enough in traditional IT. If you look at somebody like AWS, they have 200,000 SKUs, different products you can buy. And they now bill at a second-level resolution. So what this means is you've got all these engineers out there using cloud in a very good way to move quickly, innovate, include more features. And they kind of have an unlimited credit card that they can go spend on as quickly as they need. And they never see the statements. They never see the bills. And on the other side, you've got finance teams, procurement teams who've sort of lost control of, traditionally, the power of the PO that they had to rein that in. And they're struggling just to understand what is the spend. 
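To make the "unit economics" framing above concrete, here is a deliberately simple sketch. It is illustrative only, not Cloudability's product: the daily figures and field names are invented, and a real pipeline would pull spend from the provider's billing export and the business metric from the application teams.

```python
# Illustrative only: compute a simple unit-economics metric (cost per transaction)
# from daily cloud spend and a business driver. All numbers and names are invented.

daily_spend = {          # total cloud cost per day, in dollars
    "2018-05-21": 41250.0,
    "2018-05-22": 43980.0,
    "2018-05-23": 40110.0,
}
daily_transactions = {   # business metric reported by the application teams
    "2018-05-21": 1_650_000,
    "2018-05-22": 1_720_000,
    "2018-05-23": 1_540_000,
}

for day, cost in sorted(daily_spend.items()):
    txns = daily_transactions.get(day)
    if not txns:
        continue
    unit_cost = cost / txns
    print(f"{day}: ${cost:,.0f} / {txns:,} transactions = ${unit_cost:.4f} per transaction")
```

Watching that ratio over time, rather than the raw bill, is what lets a team distinguish spend that grows because the business grew from spend that grows because efficiency slipped.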
And then to the mapping question: how do I allocate these hundreds of millions of charges that I have this month into cost centers and business units, and get that sorted in a world where engineers are focused on moving fast? They're not tagging things based on cost center, typically. So once you get that sort of mapping aspect sorted, the next point you brought up is then bringing in the business value. So how do we start to relate that back? There's a concept that, you know, IT has been a cost center, and now it's an actual driver of value in a world where businesses are increasingly delivering their value through software. So we need to start tying the spending, mapping it to the business, and then tying that to the value delivered. A great example of this: I was sitting last week with one of the largest cloud spenders in the world. And they're up in, you know, nine figures with their primary vendor. And in the conversation with the executives, we realized that nobody was looking at both sides of that equation. You had the finance people who were saying, "Hey, we're tracking the costs, "and we're figuring out what's happening there." And then you have the revenue generators looking at the money coming in, you know, the cloud people with that. But there wasn't this centralized view to say, "Alright, we want to have a conversation about what value are we getting out of this spend." And the question that always comes up with that is, are we spending the right amount? I don't know. >> Let me build on that, because IT historically, and this is one of the things that we've been doing over the last few years, IT has historically done things at a project level. Alright, so we had waterfall development. We tried to change that with Agile. We had buy the hardware upfront and then deploy the application on it; cloud changes that. So this project orientation has led to a set of decisions about finance at the moment that the business decides to do it. We've changed the practices that we use at a development level. We've changed the practices that we use at an asset level. Is it now time to change the practices that we use at a finance level? Is that really kind of what's going on here? >> It is, the project analogy is good. Because what we're seeing is they're shifting from a project basis to a product basis, and products that deliver value. Increasingly, if you think about the change that's happened with DevOps on the scene and cloud, companies are delivering more of their value through software, and they're not just using IT for internal projects, right. It's actually the driver of business. It's how we interact with airlines and banks and all these things. So that's the shift to say, okay, now we've gotten good at DevOps, moving fast, and we've gotten good at deploying and building better data stores. Now we need to bring in this new discipline. And the discipline is what the market is calling FinOps, which is essentially combining finance and operations. You're essentially combining-- >> Applied to a technology world. >> Applied specifically to a cloud world. And it can only really happen in cloud. It can't happen in data centers. Because data centers have fixed spending, right? You have to wait to get resources. Once you make the investment, it's a sunk cost. There's months of lead time. Cloud introduced the removal of constraints, which means you can get whatever you want as quickly as you want. And DevOps meant it's all automated. 
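The allocation problem J.R. describes at the top of this passage, turning hundreds of millions of line items into cost-center totals, usually comes down to grouping billing records by a tag. A minimal sketch of that idea follows; the file and column names are simplified stand-ins for whatever the real billing export uses, and untagged spend is surfaced rather than hidden.

```python
import csv
from collections import defaultdict

# Illustrative only: allocate billing line items to cost centers using a resource tag.
# "billing_export.csv", "unblended_cost", and "tag_cost_center" are hypothetical names.

totals = defaultdict(float)

with open("billing_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        cost = float(row.get("unblended_cost", 0) or 0)
        cost_center = (row.get("tag_cost_center") or "").strip() or "UNTAGGED"
        totals[cost_center] += cost

for cost_center, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{cost_center:<20} ${cost:,.2f}")
```

In practice the size of the "UNTAGGED" bucket is itself a useful metric, since it measures how far the engineering teams are from the tagging discipline the finance side needs.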
So instead of your collection of 60 servers, you've got thousands that are coming up and down all the time. So what you now have to do is bring in all these groups. Engineers have to think about cost as a new efficiency metric. They have to think about the impact on their business that this code, this CloudFormation template they just wrote, is going to have. And the finance teams have to shift from this mode of "I'm going to report retroactively at a quarterly granularity, "60 days after it happened and block investment" to be "I'm going to partner with these teams. "Report in a real-time fashion. "Give them the visibility and help forecast. "Actually bring them together and make better business decisions about the cloud spend." >> So cloud has altered development practices. I mean, Agile has been around for a long time, it pre-dates the cloud, but it became practical and almost demanded as a consequence of what you could do with cloud. So cloud changed development through Agile. It changed infrastructure management through DevOps, where now you're deploying software infrastructure as code. And what you're saying is the third leg of that stool: cloud is now changing how you do financial management of technology, financial management of IT. And we're calling that FinOps. >> You can't really have FinOps without cloud or without DevOps, and if you have the two together, you ultimately need this new set of, it's a new operating model. The reason this has sort of come to a head of late is, if you look at going to the Amazon re:Invent conferences a few years back, it was like, well, how much is cloud going to be a thing? And then, okay, cloud's going to be a thing, when's it going to happen? Now it's about the how, and how do we do this better. Cloud is hitting sort of material spend levels now at big organizations. You always see the cloud projections of where it's going; I think it's now 360 billion in the next few years. And we're seeing CFOs at public companies look to say, "Okay, it's not my biggest line item yet. "But it's the most variable and fastest growing "COGS expense, so it's actually "starting to affect our margins. "We need a new set of processes to actually manage this." So one of the things that's coming to market is this new group called the FinOps Foundation, which is a non-profit trade association that initially has a few dozen of some of the largest cloud spenders in the world. There's the Spotifys, the Atlassians, the Nationwides, the Autodesks. And they've all come together as a set of best practice practitioners to start to clarify this into something that can be scaled out in organizations. So that group is going to be putting out a user conference around this area. There's a new O'Reilly book that's coming out at the end of the year that's going to be sort of the treatise on all this stuff, pulled together. Because what we found, and you know, me as in Cloudability over the last eight years, we bring in technology and a platform to show the recommendations and visibility and how to do this, but the real challenge companies run into is they don't have the internal expertise. Their finance teams understand what they need to. The engineers don't. And so they came to us last year saying, "Can you help figure out the processes? "Can you educate us?" And that's really where this FinOps Foundation has grown out of, of bringing together those people to define those processes. 
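The point above about an engineer thinking through the cost impact of a template before it ships can be pictured as a pre-deployment guardrail. The sketch below is only an illustration of that idea, not any vendor's tooling: the resource list, the price table, and the budget are all invented for the example.

```python
# Illustrative only: a pre-deployment budget guardrail an engineer might run against
# an infrastructure template. Prices, resources, and the budget figure are invented.

HOURS_PER_MONTH = 730

hourly_price = {                     # hypothetical price table
    "m5.xlarge": 0.192,
    "m5.4xlarge": 0.768,
    "gp2_gb": 0.10 / HOURS_PER_MONTH,  # storage priced per GB-month, expressed hourly
}

planned_resources = [                # what the template would create
    {"type": "m5.4xlarge", "count": 6},
    {"type": "m5.xlarge", "count": 12},
    {"type": "gp2_gb", "count": 4000},
]

monthly = sum(hourly_price[r["type"]] * r["count"] * HOURS_PER_MONTH
              for r in planned_resources)

BUDGET = 8000.0
print(f"Estimated monthly cost: ${monthly:,.2f} (budget ${BUDGET:,.2f})")
if monthly > BUDGET:
    raise SystemExit("Estimated cost exceeds budget -- flag for review before deploy")
```

The value is less in the arithmetic than in when it runs: at review time, before the spend exists, rather than in a finance report sixty days later.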
>> So the impact of cloud on each of these different groups, the development group, on the infrastructure team, and now on the finance team. The developer groups, some of them resisted it. But generally speaking, it's gone okay. And eventually tooling from a variety of different players came along that made it easy to enact best practices in software development through an Agile mechanism. In the last few years, after significant battles within infrastructure teams about whether or not they were going to treat infrastructure as code, we've seen new products, new tooling, that have facilitated the adoption of those practices. What kind of tooling are we going to see introduced that facilitates FinOps, so that finance teams, procurement teams move from a project orientation to a strategic management of a resource orientation? >> I mean, I think the first is, on the engineering side, seeing cost become a first-class citizen, an efficiency metric that they need to look at. So, you know, in their build processes, baked into the CI/CD, looking to see: am I properly sizing my compute request for the workload that it needs? There's some research that just came out showing that, I think it's 80% of the market is not using the best discounting options that cloud providers offer. You hear these horror stories. Cloud's too expensive, we overspend. That's not actually a problem with the cloud provider. That's a problem with the enterprises not using the tools that offer the discounts, the reserved instances, the infrequent access tiers. >> Caveat emptor. >> Exactly, so I think at the end of the day, the first step in this is getting those checks in place to say, "Are we using the things that help drive the right cost for our needs?" And on the other side of that, the finance teams really changing the way that they are interacting with the technology teams. Becoming partners, becoming advocates in this, versus a passive, retroactive reporter down the line. And this enables these sorts of micro-optimization discussions that can happen: in the data center world, we bought it at some cost and it's sitting there; in the cloud world, we can make decisions today that impact the business tomorrow. >> So let me make sure I got this. So I have a client who I was having a conversation with. They told me that their Amazon, their AWS bill, is 87 gigabytes monthly. Not 87 pages. That's 87 gigabytes. So we bring this 87 gigabytes in, and it's a story about what I consume out of Amazon. It's not a story of what my business utilizes to achieve its objectives. So we're now entering into a world where we're trying to introduce that financial visibility into how that spend can be mapped to what the business does. So the finance group can look at a common notion of truth. And the IT group can look at a common notion of truth. Application owners can look at a common notion of truth. And that's what FinOps is providing. Have I got that right? >> Absolutely, and the 87 gigabytes example is the exact reason why it is FinOps, and not just cloud financial management. You can't have a person with a spreadsheet looking at data and trying to make decisions about it, right? It has to be automated. It's IT finance as code. It's got to be baked into the processes. We've seen organizations that have hundreds of millions of individual charges hitting them in a consumption-based manner. The other thing that's come in with FinOps as a core tenet is we're now seeing a decentralization of accountability for that spend. 
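Picking up the point above about baking sizing checks into CI/CD: the simplest useful check is comparing requested capacity with observed utilization. The sketch below is a hedged illustration, not a real product integration; the instance records and the 40% threshold are made up.

```python
# Illustrative only: a simple right-sizing check of the kind that could run in a pipeline.
# Instance data and the utilization threshold are invented for the example.

instances = [
    {"id": "web-1",   "vcpus": 16, "avg_cpu_pct": 12.0},
    {"id": "web-2",   "vcpus": 16, "avg_cpu_pct": 58.0},
    {"id": "batch-7", "vcpus": 64, "avg_cpu_pct": 9.5},
]

UTILIZATION_FLOOR = 40.0  # flag anything averaging below this

oversized = [i for i in instances if i["avg_cpu_pct"] < UTILIZATION_FLOOR]

for inst in oversized:
    print(f"{inst['id']}: {inst['vcpus']} vCPUs but only "
          f"{inst['avg_cpu_pct']:.1f}% average CPU -- consider a smaller size")

# In a pipeline, a non-empty list could fail the build or open a ticket.
if oversized:
    raise SystemExit(1)
```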
So if you look at the big cloud spenders out there, who are maybe spending tens or hundreds of millions a year, some of them have thousands of cloud environments. Gone are the days of having a centralized group that gets to say, "We're going to turn this off, turn that off." We want to give each of those teams the ability to see just their portion of that bill in the right mapped way, as you said, and to be able to take actions on the back of that. So that's changed into the, you know, you run it, you maintain it, you understand what to shut down. What has sort of come back to the old centralized model is this notion, and this is where procurement's job has shifted to largely, that we do still want to centralize the rate reduction. So engineers, you go use less, right? Essentially, finance teams and procurement work together with the cloud vendors to get the best possible rates, through reserved instances, committed-use discounts, you know, volume discounts, negotiated rates, whatever it is. And they become sort of strategic sourcing. To say, you're going to use whatever you're going to use, and you're going to watch that to make sure you're using the right amount with target thresholds. We're going to make sure we get the best rate for it. And that's sort of the two sides of the coin. >> Well, very important, procurement has always been organized on episodic purchases, where the whole point is to bring the price point down. And now we're talking about a continuous service, where you are literally basing your business on capabilities provided by a third party. And that is a very, very, very different relationship. >> It's just-in-time purchasing. And it's a new supply-chain management process, where you have so many SKU options, and you are making these purchase decisions, sometimes thousands a day, and that impacts everything down the road. >> Excellent. J.R. Storment, co-founder of Cloudability, talking about FinOps and Cloudability's role in helping businesses map their cloud spend to their business activities for better, more optimal views of how they get what they need out of their cloud expenditures. J.R., thank you very much for being on the CUBE. >> Thanks, Peter. >> And once again, I'm Peter Burris. And thanks for listening to this CUBE Conversation. Until next time.
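As a final concrete illustration of the centralized rate-reduction role described above: the team responsible for "getting the best rate" typically tracks what share of usage actually ran on a committed or reserved rate. The numbers and the 75% target below are invented; this is a sketch of the metric, not a vendor report.

```python
# Illustrative only: a centralized "rate" check -- what share of compute hours ran on a
# committed/reserved rate versus on demand. Figures and the target are invented.

usage_hours = {
    "reserved":  182_000,   # hours billed at a committed rate this month
    "on_demand":  64_000,   # hours billed at the on-demand rate
}

total = sum(usage_hours.values())
coverage = usage_hours["reserved"] / total * 100

print(f"Commitment coverage: {coverage:.1f}% of {total:,} compute hours")
if coverage < 75:
    print("Below target threshold -- procurement may want to buy more commitments")
```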

Published Date : May 24 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Peter | PERSON | 0.99+
J.R | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
tens | QUANTITY | 0.99+
J.R. Storment | PERSON | 0.99+
80% | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
60 servers | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
Cloudability | ORGANIZATION | 0.99+
J.R. | PERSON | 0.99+
FinOps Foundation | ORGANIZATION | 0.99+
last week | DATE | 0.99+
360 billion | QUANTITY | 0.99+
Laciens | ORGANIZATION | 0.99+
200,000 SKUs | QUANTITY | 0.99+
87 gigabytes | QUANTITY | 0.99+
last year | DATE | 0.99+
first step | QUANTITY | 0.99+
Autodesks | ORGANIZATION | 0.99+
thousands | QUANTITY | 0.99+
First | QUANTITY | 0.99+
DevOps | TITLE | 0.99+
two sides | QUANTITY | 0.99+
Nationwides | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
87 pages | QUANTITY | 0.99+
Spotifys | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
each | QUANTITY | 0.99+
third leg | QUANTITY | 0.98+
tomorrow | DATE | 0.98+
60 days | QUANTITY | 0.98+
second level | QUANTITY | 0.98+
hundreds of millions | QUANTITY | 0.97+
today | DATE | 0.97+
one | QUANTITY | 0.97+
CUBE | ORGANIZATION | 0.96+
both side | QUANTITY | 0.95+
thousands a day | QUANTITY | 0.95+
O'Reilly | ORGANIZATION | 0.95+
last eight years | DATE | 0.94+
hundreds of millions a year | QUANTITY | 0.93+
Agile | TITLE | 0.93+
FinOps | ORGANIZATION | 0.92+
OpenStack Summit 2018 | EVENT | 0.89+
FinOps | TITLE | 0.89+
nine figures | QUANTITY | 0.88+
Amazon re:Invent | EVENT | 0.88+
Azure | TITLE | 0.86+
this month | DATE | 0.85+
cloud | QUANTITY | 0.83+
next few years | DATE | 0.83+
Cumulus Network | ORGANIZATION | 0.82+
CUBE | TITLE | 0.81+
Conversation | EVENT | 0.8+
CUBE Conversation | EVENT | 0.79+
few years back | DATE | 0.75+
last few years | DATE | 0.74+
GCP | TITLE | 0.74+
dozen | QUANTITY | 0.7+
last | DATE | 0.68+
Agile | ORGANIZATION | 0.61+
JR Rivers | ORGANIZATION | 0.6+
millions | QUANTITY | 0.56+
ne | QUANTITY | 0.53+
end | DATE | 0.51+

Kevin Deierling, NVIDIA and Scott Tease, Lenovo | CUBE Conversation, September 2020


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. (smooth music) >> Hi, I'm Stu Miniman, and welcome to a CUBE Conversation. I'm coming to you from our Boston Area studio. And we're going to be digging into some interesting news regarding networking. Some important use cases these days, in 2020, of course, AI is a big piece of it. So happy to welcome to the program, first of all, one of our CUBE alumni, Kevin Deierling. He's the Senior Vice President of Marketing with Nvidia, part of the networking team there. And joining him is Scott Tease, someone we've known for a while, but first time on the program, who's the General Manager of HPC and AI for the Lenovo Data Center Group. Scott and Kevin, thanks so much for joining us. >> It's great to be here Stu. >> Yeah, thank you. >> Alright, so Kevin, as I said, you've been on the program a number of times, first when it was just Mellanox, now of course the networking team, there's some other acquisitions that have come in. If you could just set us up with the relationship between Nvidia and Lenovo. And there's some news today that we're here to talk about too. So let's start getting into that. And then Scott, you'll jump in after Kevin. >> Yeah, so we've been a long time partner with Lenovo on our high performance computing. And so that's the InfiniBand piece of our business. And more and more, we're seeing that AI workloads are very, very similar to HPC workloads. And so that's been a great partnership that we've had for many, many years. And now we're expanding that, and we're launching an OEM relationship with Lenovo for our Ethernet switches. And again, with our Ethernet switches, we really take that heritage of low latency, high performance networking that we built over many years in HPC, and we bring that to Ethernet. And of course that can be with HPC, because frequently in an HPC supercomputing environment, or in an AI supercomputing environment, you'll also have an Ethernet network, either for management, or sometimes for storage. And now we can offer that together with Lenovo. So it's a great partnership. We talked about it briefly last month, and now we're coming to market, and we'll be able to offer this to the market. >> Yeah, yeah, Kevin, we're super excited about it here at Lenovo as well. We've had a great relationship over the years with Mellanox, with Nvidia Mellanox. And this is just the next step. We've shown in HPC that the days of just taking an Ethernet card, or an InfiniBand card, plugging it in the system, and having it work properly are gone. You really need a system that's engineered for whatever task the customer is going to use it for. And we've known that in HPC for a long time. As we move into workloads like artificial intelligence, where networking is a critical aspect of getting these systems to communicate with one another and work properly together, we love, from an HPC perspective, to use InfiniBand, but most enterprise clients are using Ethernet. So where do we go? We go to a partner that we've trusted for a very long time. And we selected the Nvidia Mellanox Ethernet switch family. And we're really excited to be able to bring that end-to-end solution to our enterprise clients, just like we've been doing for HPC for a while. >> Yeah, well Scott, maybe if you could, I'd love to hear a little bit more about that customer demand, those usages there. 
So, traditionally of course in supercomputing, as you both talked about, that move from InfiniBand to leveraging Ethernet is something that's been talked about for quite a while now in the industry. But maybe for AI specifically, could you talk about what the networking requirements are, how similar is it? Is it 95% of the same architecture as what you see in HPC environments? And also, I guess the big question there is, how fast are customers adopting and rolling out those AI solutions? And what kind of scale are they getting them to today? >> So yeah, there's a lot there of good things we can talk about. So I'd say in HPC, the thing that we've learned is that you've got to have a fabric that's up to the task. When you're testing an HPC solution, you're not looking at a single node, you're looking at a combination of servers, and storage, management, all these things have to come together, and they come together over an InfiniBand fabric. So we've got this nearly purpose-built fabric that's been fantastic for the HPC community for a long time. As we start to do some of that same type of workload, but in an enterprise environment, many of those customers are not used to InfiniBand, they're used to an Ethernet fabric, something that they've got all throughout their data center. And what we wanted to find a way to do was bring a lot of that rock-solid interoperability and pre-tested capability to our enterprise clients for these AI workloads. Anything with high-performance GPUs has lots of internode communication, worries about traffic and congestion, abnormalities in the network that you need to spot. Those things happen quite often when you're doing these enterprise AI solutions. You need a fabric that's able to keep up with that. And the Nvidia networking is definitely going to be able to do that for us. >> Yeah well, Kevin, I heard Scott mention GPUs here. So this kind of highlights one of the reasons why we've seen Nvidia expand its networking capabilities. Could you talk a little bit about that kind of expansion, the portfolio, and how these use cases really are going to highlight what Nvidia helped bring to the market? >> Yeah, we like to really focus on accelerated computing applications, and whether those are HPC applications, or now they're becoming much more broadly adopted in the enterprise. And one of the things we've done is tight integration at a product level between GPUs and the networking components in our business, whether that's the adapters, or the DPU, the data processing unit, which we've talked about before. And now even with the switches here, with our friends at Lenovo, and really bringing that all together. But most importantly, is at a platform level. And by that I mean the software. And the enterprise here has all kinds of different verticals that they're going after. And we invest heavily in the software ecosystem that's built on top of the GPU and the networking. And by integrating all of that together on a platform, we can really accelerate the time to market for enterprises that want to leverage these modern workloads, sort of cloud native workloads. >> Yeah, please Scott, if you have some follow up there. >> Yeah, if you don't mind Stu, I'd just like to say, five years ago, the roadmap that we followed was the processor roadmap. We all could tell you to the week when the next Xeon processor was going to come out. And that's what drove all of our roadmaps. 
Since that time, what we found is that the items that are making the radical, the revolutionary improvements in performance, they're attached to the processor, but they're not the processor itself. It's things like the GPU. It's things like the networking adapters. So trying to design a platform that's solely based on a CPU and then jamming these other items on top of it no longer works. You have to design these systems in a holistic manner, where you're designing for the GPU, you're designing for the network. And that's the beauty of having a deep partnership like we share with Nvidia, on both the GPU side and on the networking side, is we can do all that upfront engineering to make sure that the platform, the systems, the solution as a whole works exactly how the customer is going to expect it to. >> Kevin, you mentioned that a big piece of this is software now. I'm curious, there's an interesting piece that your networking team has picked up relatively recently, the Cumulus Linux piece, so help us understand how that fits into the Ethernet portfolio. And would it show up in these kind of applications that we're talking about? >> Yeah, that's a great question. So you're absolutely right, Cumulus is integral to what we're doing here with Lenovo. If you looked at the heritage that Mellanox had, and Cumulus, it's all about open networking. And what we mean by that is we really decouple the hardware and the software. So we support multiple network operating systems on top of our hardware. And so if it's, for example, Sonic, or if it's our Onyx, or DENT, which is based on switchdev. But Cumulus, who we just recently acquired, has also been on that same axis of open networking. And so they really support multiple platforms. Now we've added a new platform with our friends at Lenovo. And really they've adopted Cumulus. So it is very much centered on the enterprise, and really a cloud-like experience in the enterprise, where it's Linux, but it's highly automated. Everything is operationalized and automated. And so as a result of that, you get sort of the experience of the cloud, but with the economics that you get in the enterprise. So it's kind of the best of both worlds in terms of network analytics, and all of the ability to do things that the cloud guys are doing, but fully automated, and for an enterprise environment. >> Yeah, so Kevin, I mean, I just want to say a few things about this. We're really excited about the Cumulus acquisition here. When we started our negotiations with Mellanox, we were still planning to use Onyx. We love Onyx, it's been our IB NOS of choice. Our users love it, our architects love it. But we were trying to lean towards a more open, kind of futuristic NOS as we got started with this. And Cumulus is really perfect. I mean, it's a Linux, open-source-based system. We love open source in HPC. The great thing about it is, we're going to be able to take all the great learnings that we've had with Onyx over the years, and now be able to consolidate those inside of Cumulus. We think it's the perfect way to start this relationship with Nvidia networking. >> Well Scott, help us understand a little more what this expansion of the partnership means, if you're talking about really the full solutions that Lenovo offers in the ThinkAgile brand, as well as the hybrid and cloud solutions. 
Is this something then that is just baked into the solution, is it a reseller, what should customers and your channel partners understand about this? >> Yeah, so any of the Lenovo solutions that require a switch to perform the functionality needed across the solution are going to show up with the networking from Nvidia inside of them. A couple of reasons for that. One is, even something as simple as solution management for HPC, the switch is so integral to how we do all that, how we push all those functions down, how we deploy systems. So you've got to have a switch, and a connectivity methodology, that ensures that we know how to deploy these systems. And no matter what scale they are, from a few systems up to literally thousands of systems, we've got something that we know how to do. Then when we're selling these solutions, like an SAP solution, for instance, the customer is not buying a server anymore, they're buying a solution, they're buying a functionality. And we want to be able to test that in our labs to ensure that that system, that rack, leaves our factory ready to do exactly what the customer is looking for. So any of the systems that are going to be coming from us, pre-configured, pre-tested, are all going to have Nvidia networking inside of them. >> Yeah, and I think that's, you mentioned the hybrid cloud, I think that's really important. That's really where we cut our teeth, first in InfiniBand, but also with our Ethernet solutions. And so today, we're really driving a bunch of the big hyperscalers, as well as the big clouds. And as you see things like SAP or Azure, it's really important now that you're seeing Azure Stack coming into a hybrid environment, that you have the known commodity here. So we're built into many of those different platforms, with our Spectrum ASIC, as well as our adapters. And so now the ability, with Nvidia and Lenovo together, to bring that to enterprise customers is really important. I think it's a proven set of components that together forms a solution. And that's the real key, as Scott said, is delivering a solution, not just piece parts; we have a platform, that software, hardware, all of it integrated. >> Well, it's great to see you. We've had an existing partnership for a while. I want to give you both the opportunity, anything specific you've been hearing in terms of customer demand leading up to this? Is it people that might be transitioning from InfiniBand to Ethernet? Or is it just general market adoption of new solutions that you have out there? (speakers talk over each other) >> You go ahead and start. >> Okay, so I think that there's different networks for different workloads, is what we've seen. And InfiniBand certainly is going to continue to be the best platform out there for HPC, and often for AI. But as Scott said, the enterprise frequently is not familiar with that, and for various reasons would like to leverage Ethernet. So I think we'll see two different cases, one where there's Ethernet with an InfiniBand network. And the other is for new enterprise workloads that are coming, that are very AI-centric, modern workloads, sort of cloud-native workloads. You have all of the infrastructure in place with our Spectrum ASICs and our ConnectX adapters, and now integrated with GPUs, so that we'll be able to deliver solutions rather than just components. And that's the key. 
>> Yeah, I think, Stu, a great example of where you need that networking, like we've been used to in HPC, is when you start looking at deep learning training, scale-out training. A lot of companies have been stuck on a single workstation, because they haven't been able to figure out how to spread that workload out and chop it up, like we've been doing in HPC, because they've been running into networking issues. They can't run over an unoptimized network. With this new technology, we're hoping to be able to do a lot of the same things that HPC customers take for granted every day, about workload management, distribution of workload, chopping jobs up into smaller portions, and feeding them out to a cluster. We're hoping that we're going to be able to do those exact same things for our enterprise clients. And it's going to look magical to them, but it's the same kind of thing we've been doing forever. With Mellanox in the past, now Nvidia networking, we're just going to take that to the enterprise. I'm really excited about it. (A concrete sketch of this scale-out training pattern appears after the transcript below.) >> Well, it's so much flexibility. We used to look at, it would take a decade to roll out some new generations. Kevin, if you could just give us the latest speeds and feeds. If I look at Ethernet, did I see that this runs from 10 gig all the way up to 400 gig? I think I lose track a little bit of some of the pieces. I know the industry as a whole is driving it. But where are we with the general customer adoption of some of the speeds today? >> Yeah indeed, we're coming up on the 40th anniversary of the first specification of Ethernet. And we're about 40,000 times faster now, at 400 gigabits versus 10 megabits. So yeah, we're shipping today at the adapter level 100 gig, and even 200 gig. And then at the switch level, 400 gig. And people sort of ask, "Do we really need all that performance?" The answer is absolutely. So the amount of data that the GPU can crunch, and these AI workloads, these giant neural networks, it needs massive amounts of data. And then as you're scaling out, as Scott was talking about, much along the lines of InfiniBand, Ethernet needs that same level of performance, throughput, latency, and offloads, and we're able to deliver. >> Yeah, so Kevin, thank you so much. Scott, I want to give you a final word here. Anything else you want your customers to understand regarding this partnership? >> Yeah, just a quick one Stu, quick one. So we've been really fortunate in working really closely with Mellanox over the years, and with Nvidia. And now the two together, we're just excited about what the future holds. We've done some really neat things in HPC, with being one of the first to watercool an InfiniBand card. We're one of the first companies to deploy Dragonfly topology. We've done some unique things where we can share a single IB adapter across multiple users. We're looking forward to doing a lot of that same exact kind of innovation inside of our systems as we look to Ethernet. We often think that as speeds of Ethernet continue to go higher, we may see more and more people move from InfiniBand to Ethernet. I think that now having both of these offerings inside of our lineup is going to make it really easy for customers to choose what's best for them over time. So I'm excited about the future. >> Alright, well Kevin and Scott, thank you so much. Deep integration and customer choice, important stuff. Thank you so much for joining us. >> Thank you Stu. >> Thanks Stu. >> Alright, I'm Stu Miniman, and thank you. 
Thanks for watching theCUBE. (upbeat music)
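The scale-out training Scott describes in the conversation above, chopping a training job across many GPU servers over a fast fabric, most commonly takes the form of data-parallel training. The sketch below is a generic, hedged illustration of that pattern using PyTorch DistributedDataParallel; it is not the specific Lenovo or NVIDIA software stack, and it assumes a launcher such as torchrun or a Slurm script supplies the rank and rendezvous environment variables.

```python
# Hedged sketch of data-parallel "scale-out training": one process per GPU, with
# gradients all-reduced over the fabric (Ethernet/RoCE or InfiniBand) every step.
# Requires CUDA GPUs and a launcher (e.g. torchrun) that sets the distributed env vars.

import torch
import torch.distributed as dist
import torch.nn as nn

def main():
    dist.init_process_group(backend="nccl")            # NCCL handles GPU-to-GPU transfers
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).cuda()
    model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        # Each rank trains on its own shard of the data; the gradient all-reduce across
        # nodes each step is why network bandwidth and latency dominate scaling behavior.
        x = torch.randn(64, 1024, device="cuda")
        y = torch.randint(0, 10, (64,), device="cuda")
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```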

Published Date : Sep 15 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Scott | PERSON | 0.99+
Lenovo | ORGANIZATION | 0.99+
Kevin | PERSON | 0.99+
Kevin Deierling | PERSON | 0.99+
Nvidia | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
40,000 times | QUANTITY | 0.99+
Onyx | ORGANIZATION | 0.99+
Palo Alto | LOCATION | 0.99+
Lenovo Data Center Group | ORGANIZATION | 0.99+
100 gig | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
10 megabits | QUANTITY | 0.99+
95% | QUANTITY | 0.99+
400 gig | QUANTITY | 0.99+
NVIDIA | ORGANIZATION | 0.99+
September 2020 | DATE | 0.99+
200 gig | QUANTITY | 0.99+
Mellanox | ORGANIZATION | 0.99+
400 gigabits | QUANTITY | 0.99+
Scott Tease | PERSON | 0.99+
Cumulus | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
Linux | TITLE | 0.99+
both | QUANTITY | 0.99+
Stu | PERSON | 0.99+
HPC | ORGANIZATION | 0.99+
one | QUANTITY | 0.98+
two | QUANTITY | 0.98+
CUBE | ORGANIZATION | 0.98+
today | DATE | 0.98+
five years ago | DATE | 0.98+
last month | DATE | 0.98+
InfiniBand | ORGANIZATION | 0.98+
two different cases | QUANTITY | 0.98+
Boston | LOCATION | 0.97+
first time | QUANTITY | 0.97+

Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020


 

>> Narrator: From around the globe, it's theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, I'm Stu Miniman, and this is theCUBE's coverage of HPE Discover, the virtual experience, for 2020, getting to talk to HPE executives, their partners, the ecosystem, where they are around the globe. This session we're going to be digging in on artificial intelligence, obviously a super important topic these days. And to help me do that, I've got two guests from Nvidia sitting in the window next to me. We have Paresh Kharya, he's director of product marketing, and sitting next to him in the virtual environment is Kevin Deierling, who is the senior vice president of marketing, as I mentioned, both with Nvidia. Thank you both so much for joining us. >> Thank you, so great to be here. >> Great to be here. >> All right, so Paresh, can you set the stage for us? AI, obviously, one of those mega trends to talk about, but just give us the stage: where Nvidia sits, where the market is, and where your customers are today as they think about AI. >> Yeah, so we are basically witnessing massive changes that are happening across every industry. And it's basically the confluence of three things. One is of course AI, the second is 5G and IoT, and the third is the ability to process all of the data that we have, that's now possible. For AI we are now seeing really advanced models, from computer vision, to understanding natural language, to the ability to speak in conversational terms. In terms of IoT and 5G, there are billions of devices that are sensing and inferring information. And now we have the ability to act, make decisions in various industries. And finally, with all of the processing capabilities that we have today, at the data center and in the cloud, as well as at the edge, with the GPUs as well as advanced networking that's available, we can now make sense of all of this data to help industrial transformation. >> Yeah, Kevin, you know it's interesting when you look at some of these waves of technology and we say, "Okay, there's a lot of new pieces here." You talk about 5G, it's the next generation, but architecturally some of these things remind us of the past. So when I look at some of these architectures, I think about what we've done for high performance computing for a long time, obviously, you know, Mellanox, where you came from through NVIDIA's acquisition, a strong play in that environment. So maybe give us a little bit of compare and contrast, what's the same and what's different about this highly distributed, edge compute, AI, IoT environment, and what's the same with what we were doing with HPC in the past. >> Yeah, so we've--Mellanox has now been a part of Nvidia for a little over a month, and it's great to be part of that. We were both focused on accelerated computing and high performance computing. And to do that, what it means is the scale and the type of problems that we're trying to solve are just simply too large to fit into a single computer. So if that's the case, then you connect a lot of computers. And Jensen talked about this recently at the GTC keynote, where he said that the new unit of computing is really the data center. So it's no longer the box that sits on your desk or even in a rack, it's the entire data center, because that's the scale of the types of problems that we're solving. And so with the notion of scale up and scale out, the network becomes really, really critical. And we've been doing high-performance networking for a long time. 
When you move to the edge, instead of having a single data center with 10,000 computers, you have 10,000 data centers, each of which has a small number of servers that is processing all of that information that's coming in. But in a sense, the problems are very, very similar, whether you're at the edge or you're doing massive HPC, scientific computing, or cloud computing. And so we're excited to be part of bringing together the AI and the networking, because we are really optimizing at the data center scale, across the entire stack. >> All right, so it's interesting. You mentioned Nvidia CEO Jensen. I believe, if I saw right in there, he actually coined a term which I had not run across, the data processing unit, or DPU, in that data center, as you talked about. Help us wrap our heads around this a little bit. I know my CPU; when I think about GPUs, I obviously think of Nvidia; TPUs in the cloud and everything we're doing. So, what are DPUs? Is this just some new AI thing, or is this kind of a new architectural model? >> Yeah. I think what Jensen highlighted is that there's three key elements of this accelerated, disaggregated infrastructure that the data center is becoming. And so that's the CPU, which is doing traditional single-threaded workloads, but for all of the accelerated workloads, you need the GPU. And that does massive parallelism and deals with massive amounts of data, but to get that data into the GPU and also into the CPU, you need really intelligent data processing, because the scale and scope of GPUs and CPUs today, these are not single-core entities. These are hundreds or even thousands of cores in a big system. And you need to steer the traffic exactly to the right place. You need to do it securely. You need to do it virtualized. You need to do it with containers, and to do all of that, you need a programmable data processing unit. So we have something called our BlueField, which combines our latest, greatest 100 gig and 200 gig network connectivity with Arm processors and a whole bunch of accelerators for security, for virtualization, for storage. And all of those things then feed these giant parallel engines, which are the GPUs, and of course the CPU, which is really running the workload at the application layer for non-accelerated workloads. >> Great, so Paresh, Kevin talked about needing similar types of services wherever the data is. I was wondering if you could really help expand for us a little bit the implications of AI at the edge. >> Sure, yeah, so AI is basically not just one workload. 
AI is many different types of models, and AI also means training as well as inference, which are very different workloads. For AI training, for example, we are seeing the models growing exponentially. Think of any AI model like a brain of a computer, a brain solving a particular use case. For simple models like computer vision, we have models that are smaller, but advanced models, like natural language processing, require larger brains, or larger models. So on one hand we are seeing the size of the AI models increasing tremendously, and in order to train these models you need to look at computing at the scale of the data center, many processors, many different servers working together to train a single model. On the other hand, because these AI models are so accurate today, from understanding languages to speaking languages, to providing the right recommendations, whether it's for products or for content that you may want to consume or advertisements and so on, these models are so effective and efficient that applications are being powered by AI today. And each application requires a small amount of acceleration, so you need the ability to scale out and support many different applications. So with our newly launched Ampere architecture, which Jensen announced in the virtual keynote just a couple of weeks ago, for the first time we are now able to provide both scale up and scale out, both training and data analytics as well as inference, on a single architecture, and that's very exciting. >> Yeah, so look at that. The other thing that's interesting is you're talking about at the edge, and scale out versus scale up, and the networking is critical for both of those. And there's a lot of different workloads. And as Paresh was describing, you've got different workloads that require different amounts of GPU or storage or networking. And so part of that vision of this data center as the computer is that the DPU lets you scale everything independently. So you can compose, you disaggregate into DPUs and storage and CPUs, and then you compose exactly the computer that you need on the fly, right, to solve the problem that you're solving right now. So this new way of programming is programming the entire data center at once, and you'll go grab all of it, and it'll run for a few hundred milliseconds even, and then it'll come back down and recompose itself. And to do that, you need this very highly efficient networking infrastructure. And the good news is we're here at HPE Discover. We've got a great partner with HPE. You know, they have our M series switches that use the Mellanox hundred gig, and now even 200 and 400 gig, Ethernet switches; we have all of our adapters, and they have great platforms. The Apollo platform, for example, is great for HPC, and they have other great platforms that we're looking at with the new telco work that we're doing for 5G, and accelerating that. >> Yeah, and on the edge computing side, there's the Edgeline set of products, which are very interesting. The other sort of aspect that I wanted to touch upon is the whole software stack that's needed for the edge. So edge is different in the sense that it's not centrally managed; the edge computing devices are distributed in remote locations. And so managing the workflow of running and updating software on them is important, and needs to be done in a very secure manner. 
The second thing that's very different, again, for the edge is that these devices are going to require connectivity. As Kevin was pointing out, the importance of networking: we also announced, a couple of weeks ago at our GTC, our EGX product that combines the Mellanox NIC and our GPUs into a single processor. The Mellanox NIC provides fast connectivity, security, as well as encryption and decryption capabilities; the GPUs provide acceleration to run the advanced AI models that are required for applications at the edge. >> Okay, and if I understood that right, so you've got these throughout the HPE product line. HPE's got a long history of making flexible configurations. I remember when they first came out with a Blade server, it was different form factors, different connectivity options; they pushed heavily into composable infrastructure. So it sounds like this is just kind of extending, you know, what HP has been doing for a couple of decades. >> Yeah, I think HP is a great partner there, and these new platforms, the EGX, for example, that was just announced, a great workload there is 5G telco. So we'll be working with our friends at HPE to take that to market as well. And, you know, really, there's a lot of different workloads, and they've got a great portfolio of products across the spectrum, from regular servers in 1U, 2U, and then all the way up to their big Apollo platform. >> Well, I'm glad you brought up telco. I'm curious, are there any specific applications or workloads that are the low-hanging fruit, the kind of first targets, for AI acceleration? >> Yeah, so you know, the 5G workload is just awesome. We introduced with the EGX a new platform called Aerial, which is a programming framework, and there were lots of partners there that were part of that, including folks like Ericsson. And the idea there is that you have a software-defined, hardware-accelerated radio access network, so a cloud RAN, and it really has all of the right attributes of the cloud. And what's nice there is now you can change, on the fly, the algorithms that you're using for the baseband codecs without having to go climb a radio tower and change the actual physical infrastructure. So that's a critical part. Our role in that, on the networking side: we introduced the technology that's part of EGX with our ConnectX-6 Dx adapter; it's called 5T for 5G. And one of the things that happens is you need this time-triggered transmission technology for telco. That's the 5T for 5G. And the reason is because you're doing distributed baseband, distributed radio processing, and the timing between each of those server nodes needs to be super precise, 20 nanoseconds. It's something that simply can't be done in software. And so we did that in hardware, so instead of having an expensive FPGA to try to synchronize all of these boxes together, we put it into our NIC, and now we put that into industry standard servers; HP has some fantastic servers. And then with the EGX platform, with that we can build a really scale-out, software-defined cloud RAN. >> Awesome. Paresh, anything else on the application side you'd like to add, just about what Kevin spoke about? >> Oh yeah, so from an application perspective, every industry has applications that touch on edge. If you take a look at retail, for example, there is, you know, everything from supply chain to inventory management, to keeping the right stock units on the shelves, making sure there is no slippage or shrinkage. 
From telecom, to healthcare, where we are looking at constantly monitoring patients and taking actions for the best outcomes, to manufacturing, where we are looking to automate production, detecting failures much earlier in the production cycle, and so on. Every industry has different applications, but they all use AI. They can all leverage the computing capabilities and high-speed networking at the edge to transform their business processes. >> All right, well, it's interesting, almost every time we've talked about AI, networking has come up. So, you know, Kevin, I think that probably explains a little bit why Nvidia spent around $7 billion for the acquisition of Mellanox. And not only was it the Mellanox acquisition; Cumulus Networks, very well known in the network space for a software-defined, really, operating system for networking. But give us, strategically, does this change the direction of Nvidia? How should we be thinking about Nvidia in the overall network? >> Yeah, I think the way to think about it is going back to that data center as the computer. And if you're thinking about the data center as a computer, then networking becomes the backplane, if you will, of that data center computer, and having a high performance network is really critical. And Mellanox has been a leader in that for 20 years now, with our InfiniBand and our Ethernet products. But beyond that, you need a programmatic interface, because one of the things that's really important in the cloud is that everything is software defined, and it's containerized now, and there is no better company in the world than Cumulus, really the pioneer in building Cumulus Linux, taking the Linux operating system and running that on multiple platforms. So not just hardware from Mellanox, but hardware from other people as well. And so that whole notion of an open networking platform, we're committed to, you need to support that, and now you have a programmatic interface that you can drop containers on top of. Cumulus has been the leader in the Linux FRR, it's Free Range Routing, which is the core routing algorithm. And that really is at the heart of other open source network operating systems like Sonic and DENT, so we see a lot of synergy here, all the analytics that Cumulus is bringing to bear with NetQ. So it's really great that they're going to be part here of the Nvidia team. >> Excellent, well, thank you both very much. Want to give you the final word: what should HPE customers and their ecosystem know about the Nvidia and HPE partnership? >> Yeah, so I'll start. You know, I think HPE has been a longtime partner and a customer of ours. If you have accelerated workloads, you need to connect those together, and the HPE server portfolio is an ideal place. We can combine some of the work we're doing with our new Ampere GPUs and existing GPUs, and then also connect those together with the M series, which is their Ethernet switches that are based on our Spectrum switch platforms, and then all of the HPC-related activities on InfiniBand; they're a great partner there. And so all of that, pulling it together, and now at the edge, as edge becomes more and more important, security becomes more and more important, and you have to go to this zero trust model. If you plug in a camera that somebody has at the edge, even if it's on a car, you can't trust it. So everything has to become validated, authenticated; all the data needs to be encrypted. And so they're going to be a great partner, because they've been a leader in building the most secure platforms in the world. 
>> Yeah, and on the data center server portfolio side, we really work very closely with HP on various different lines of products, really fantastic servers, from the Apollo line of scale-up servers, to the Synergy and ProLiant lines, as well as the Edgeline for the edge, and on the supercomputing side with the Cray side of things. So we really work with the full spectrum of solutions with HP. We also work on the software side, where a lot of these servers are also certified to run a full stack under a program that we call NGC-Ready, so customers get phenomenal value right off the bat; they're guaranteed to have accelerated workloads work well when they choose these servers. >> Awesome, well, thank you both for giving us the updates, lots happening, obviously, in the AI space. Appreciate all the updates. >> Thanks Stu, great to talk to you, stay well. >> Thanks Stu, take care. >> All right, stay with us for lots more from HPE Discover Virtual Experience 2020. I'm Stu Miniman, and thank you for watching theCUBE. (bright upbeat music)
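As an aside on the "compose exactly the computer that you need on the fly" idea Kevin describes earlier in this conversation: the concept can be pictured with a toy resource-pool sketch. This is purely conceptual, not NVIDIA's or HPE's actual composition software; the pool sizes and the job shape are invented.

```python
# Toy illustration of disaggregated, composable infrastructure: carve a job-sized slice
# out of a shared pool of CPUs/GPUs/DPUs, run the job, then return the resources.
# Conceptual only -- real composition is done by the infrastructure software stack.

pool = {"cpus": 512, "gpus": 64, "dpus": 32}

def compose(job, pool):
    """Reserve resources for a job if the pool can satisfy it; return the allocation."""
    if all(pool[k] >= v for k, v in job.items()):
        for k, v in job.items():
            pool[k] -= v
        return dict(job)          # the "virtual computer" handed to the workload
    return None

def release(allocation, pool):
    """Return a finished job's resources to the pool."""
    for k, v in allocation.items():
        pool[k] += v

job = {"cpus": 32, "gpus": 8, "dpus": 4}   # e.g. a short-lived accelerated task
alloc = compose(job, pool)
if alloc:
    print("composed:", alloc, "remaining pool:", pool)
    release(alloc, pool)
    print("released, pool restored:", pool)
```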

Published Date : Jun 24 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Kevin Deierling | PERSON | 0.99+
Kevin | PERSON | 0.99+
Paresh Kharya | PERSON | 0.99+
Nvidia | ORGANIZATION | 0.99+
200 gig | QUANTITY | 0.99+
HP | ORGANIZATION | 0.99+
100 gig | QUANTITY | 0.99+
hundreds | QUANTITY | 0.99+
10,000 computers | QUANTITY | 0.99+
Mellanox | ORGANIZATION | 0.99+
200 | QUANTITY | 0.99+
NVIDIA | ORGANIZATION | 0.99+
Paresh | PERSON | 0.99+
Cumulus | ORGANIZATION | 0.99+
Cumulus Networks | ORGANIZATION | 0.99+
Iraq | LOCATION | 0.99+
20 years | QUANTITY | 0.99+
HPE | ORGANIZATION | 0.99+
Ericsson | ORGANIZATION | 0.99+
2020 | DATE | 0.99+
two guests | QUANTITY | 0.99+
One | QUANTITY | 0.99+
third | QUANTITY | 0.99+
Stu | PERSON | 0.99+
first time | QUANTITY | 0.99+
around $7 billion | QUANTITY | 0.99+
telco | ORGANIZATION | 0.99+
each application | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
second | QUANTITY | 0.99+
20 nanosecond | QUANTITY | 0.99+
Linux | TITLE | 0.99+
both | QUANTITY | 0.99+
NetQ | ORGANIZATION | 0.99+
400 gig | QUANTITY | 0.99+
each | QUANTITY | 0.99+
10,000 data centers | QUANTITY | 0.98+
second thing | QUANTITY | 0.98+
three key elements | QUANTITY | 0.98+
one | QUANTITY | 0.98+
thousands of cores | QUANTITY | 0.98+
three things | QUANTITY | 0.97+
Jensen | PERSON | 0.97+
Apollo | ORGANIZATION | 0.97+
Jensen | ORGANIZATION | 0.96+
single computer | QUANTITY | 0.96+
HPE Discover | ORGANIZATION | 0.95+
single model | QUANTITY | 0.95+
first | QUANTITY | 0.95+
hundred gig | QUANTITY | 0.94+
InfiniBand | ORGANIZATION | 0.94+
DENT | ORGANIZATION | 0.93+
GTC | EVENT | 0.93+

Scott Raynovich, Futuriom | Future Proof Your Enterprise 2020


 

>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. (smooth music) >> Hi, I'm Stu Miniman, and welcome to this special exclusive presentation from theCUBE. We're digging into Pensando and their Future Proof Your Enterprise event. To help kick things off, welcoming in a friend of the program, Scott Raynovich. He is the principal analyst at Futuriom coming to us from Montana. I believe first time we've had a guest on the program in the state of Montana, so Scott, thanks so much for joining us. >> Thanks, Stu, happy to be here. >> All right, so we're going to dig a lot into Pensando. They've got their announcement with Hewlett Packard Enterprise. Might help if we give a little bit of background, and definitely I want Scott and I to talk a little bit about where things are in the industry, especially what's happening in networking, and how some of the startups are helping to impact what's happening on the market. So for those that aren't familiar with Pensando, if you followed networking I'm sure you are familiar with the team that started them, so they are known, for those of us that watch the industry, as MPLS, which are four people, not to be confused with the protocol MPLS, but they had very successfully done multiple spin-ins for Cisco, Andiamo, Nuova and Insieme, which created Fibre Channel switches, the Cisco UCS, and the ACI product line, so multiple generations to the Nexus, and Pensando is their company. They talk about Future Proof Your Enterprise is the proof point that they have today talking about the new edge. John Chambers, the former CEO of Cisco, is the chairman of Pensando. Hewlett Packard Enterprise is not only an investor, but also a customer in OEM piece of this solution, and so very interesting piece, and Scott, I want to pull you into the discussion. The waves of technology, I think, the last 10, 15 years in networking, a lot it has been can Cisco be disrupted? So software-defined networking was let's get away from hardware and drive towards more software. Lots of things happening. So I'd love your commentary. Just some of the macro trends you're seeing, Cisco's position in the marketplace, how the startups are impacting them. >> Sure, Stu. I think it's very exciting times right now in networking, because we're just at the point where we kind of have this long battle of software-defined networking, like you said, really pushed by the startups, and there's been a lot of skepticism along the way, but you're starting to see some success, and the way I describe it is we're really on the third generation of software-defined networking. You have the first generation, which was really one company, Nicira, which VMware bought and turned into their successful NSX product, which is a virtualized networking solution, if you will, and then you had another round of startups, people like Big Switch and Cumulus Networks, all of which were acquired in the last year. Big Switch went to Arista, and Cumulus just got purchased by... Who were they purchased by, Stu? >> Purchased by Nvidia, who interestingly enough, they just picked up Mellanox, so watching Nvidia build out their stack. >> Sorry, I was having a senior moment. It happens to us analysts. (chuckling) But yeah, so Nvidia's kind of rolling up these data center and networking plays, which is interesting because Nvidia is not a traditional networking hardware vendor. It's a chip company. 
So what you're seeing is kind of this vision of what they call in the industry disaggregation. Having the different components sold separately, and then of course Cisco announced the plan to roll out their own chip, and so that disaggregated from the network as well. When Cisco did that, they acknowledged that this is successful, basically. They acknowledged that disaggregation is happening. It was originally driven by the large public cloud providers like Microsoft Azure and Amazon, which started the whole disaggregation trend by acquiring different components and then melding it all together with software. So it's definitely the future, and so there's a lot of startups in this area to watch. I'm watching many of them. They include ArcOS, which is a exciting new routing vendor. DriveNets, which is another virtualized routing vendor. This company Alkira, which is going to do routing fully in the cloud, multi-cloud networking. Aviatrix, which is doing multi-cloud networking. All these are basically software companies. They're not pitching hardware as part of their value add, or their integrated package, if you will. So it's a different business model, and it's going to be super interesting to watch, because I think the third generation is the one that's really going to break this all apart. >> Yeah, you brought up a lot of really interesting points there, Scott. That disaggregation, and some of the changing landscape. Of course that more than $1 billion acquisition of Nicira by VMware caused a lot of tension between VMware and Cisco. Interesting. I think back when to Cisco created the UCS platform it created a ripple effect in the networking world also. HP was a huge partner of Cisco's before UCS launched, and not long after UCS launched HP stopped selling Cisco gear. They got heavier into the networking component, and then here many years later we see who does the MPLS team partner with when they're no longer part of Cisco, and Chambers is no longer the CEO? Well, it's HPE front and center there. You're going to see John Chambers at HPE Discover, so it was a long relationship and change. And from the chip companies, Intel, of course, has built a sizeable networking business. We talked a bit about Mellanox and the acquisitions they've done. One you didn't mention but caused a huge impact in the industry, and something that Pensando's responding to is Amazon, but Annapurna Labs, and Annapurna Labs, a small Israeli company, and really driving a lot of the innovation when it comes to compute and networking at Amazon. The Graviton, Compute, and Nitro is what powers their Outposts solutions, so if you look at Amazon, they buy lots of pieces. It's that mixture of hardware and software. In early days people thought that they just bought kind of off-the-shelf white boxes and did it cheap, but really we see Amazon really hyper optimizes what they're doing. So Scott, let's talk a little bit about Pensando if we can. Amazon with the Nitro solutions built to Outposts, which is their hybrid solution, so the same stack that they put in Amazon they can now put in customers' data center. What Pensando's positioning is well, other cloud providers and enterprise, rather than having to buy something from Amazon, we're going to enable that. So what do you think about what you've seen and heard from Pensando, and what's that need in the market for these type of solutions? >> Yes, okay. So I'm glad you brought up Outposts, because I should've mentioned this next trend. 
We have, if you will, the disaggregated open software-based networking which is going on. It started in the public cloud, but then you have another trend taking hold, which is the so-called edge of the network, which is going to be driven by the emergence of 5G, and the technology called CBRS, and different wireless technologies that are emerging at the so-called edge of the network, and the purpose of the edge, remember, is to get closer to the customer, get larger bandwidth, and compute, and storage closer to the customer, and there's a lot of people excited about this, including the public cloud providers, Amazon's building out their Outposts, Microsoft has an Edge stack, the Azure Edge Stack that they've built. They've acquired a couple companies for $1 billion. They acquired Metaswitch, they acquired Affirmed Networks, and so all these public cloud providers are pushing their cloud out to the edge with this infrastructure, a combination of software and hardware, and that's the opportunity that Pensando is going after with this Outposts theme, and it's very interesting, Stu, because the coopetition is very tenuous. A lot of players are trying to occupy this edge. If you think about what Amazon did with public cloud, they sucked up all of this IT compute power and services applications, and everything moved from these enterprise private clouds to the public cloud, and Amazon's market cap exploded, right, because they were basically sucking up all the money for IT spending. So now if this moves to the edge, we have this arms race of people that want to be on the edge. The way to visualize it is a mini cloud. Whether this mini cloud is at the edge of Costco, so that when Stu's shopping at Costco there's AI that follows you in the store, knows everything you're going to do, and predicts you're going to buy this cereal and "We're going to give you a deal today. "Here's a coupon." This kind of big brother-ish AI tracking thing, which is happening whether you like it or not. Or autonomous vehicles that need to connect to the edge, and have self-driving, and have very low latency services very close to them, whether that's on the edge of the highway or wherever you're going in the car. You might not have time to go back to the public cloud to get the data, so it's about pushing these compute and data services closer to the customers at the edge, and having very low latency, and having lots of resources there, compute, storage, and networking. And that's the opportunity that Pensando's going after, and of course HPE is going after that, too, and HPE, as we know, is competing with its other big mega competitors, primarily Dell, the Dell/VMware combo, and the Cisco... The Cisco machine. At the same time, the service providers are interested as well. By the way, they have infrastructure. They have central offices all over the world, so they are thinking that can be an edge. Then you have the data center people, the Equinixes of the world, who also own real estate and data centers that are closer to the customers in the metro areas, so you really have this very interesting dynamic of all these big players going after this opportunity, putting in money, resources, and trying to acquire the right technology. Pensando is right in the middle of this. They're going after this opportunity using the P4 networking language, and a specialized ASIC, and a NIC that they think is going to accelerate processing and networking of the edge. >> Yeah, you've laid out a lot of really good pieces there, Scott. 
As you said, the first incarnation of this, it's a NIC, and boy, I think back to years ago. It's like, well, we tried to make the NIC really simple, or do we build intelligence in it? How much? The hardware versus software discussion. What I found interesting is if you look at this team, they were really good, they made a chip. It's a switch, it's an ASIC, it became compute, and if you look at the technology available now, they're building a lot of your networking just in a really small form factor. You talked about P4. It's highly programmable, so the theme of Future Proof Your Enterprise. With anything you say, "Ah, what is it?" It's a piece of hardware. Well, it's highly programmable, so today they position it for security, telemetry, observability, but if there's other services that I need to get to edge, so you laid out really well a couple of those edge use cases and if something comes up and I need that in the future, well, just like we've been talking about for years with software-defined networking, and network function virtualization, I don't want a dedicated appliance. It's going to be in software, and a form factor like Pensando does, I can put that in lots of places. They're positioning they have a cloud business, which they sell direct, and expect to have a couple of the cloud providers using this solution here in 2020, and then the enterprise business, and obviously a huge opportunity with HPE's position in the marketplace to take that to a broad customer base. So interesting opportunity, so many different pieces. Flexibility of software, as you relayed, Scott. It's a complicated coopetition out there, so I guess what would you want to see from the market, and what is success from Pensando and HPE, if they make this generally available this month, it's available on ProLiant, it's available on GreenLake. What would you want to be hearing from customers or from the market for you to say further down the road that this has been highly successful? >> Well, I want to see that it works, and I want to see that people are buying it. So it's not that complicated. I mean I'm being a little superficial there. It's hard sometimes to look in these technologies. They're very sophisticated, and sometimes it comes down to whether they perform, they deliver on the expectation, but I think there are also questions about the edge, the pace of investment. We're obviously in a recession, and we're in a very strange environment with the pandemic, which has accelerated spending in some areas, but also throttled back spending in other areas, and 5G is one of the areas that it appears to have been throttled back a little bit, this big explosion of technology at the edge. Nobody's quite sure how it's going to play out, when it's going to play out. Also who's going to buy this stuff? Personally, I think it's going to be big enterprises. It's going to start with the big box retailers, the Walmarts, the Costcos of the world. By the way, Walmart's in a big competition with Amazon, and I think one of the news items you've seen in the pandemic is all these online digital ecommerce sales have skyrocketed, obviously, because people are staying at home more. They need that intelligence at the edge. They need that infrastructure. And one of the things that I've heard is the thing that's held it back so far is the price. They don't know how much it's going to cost. We actually ran a survey recently targeting enterprises buying 5G, and that was one of the number one concerns. 
How much does this infrastructure cost? So I don't actually know how much Pensando costs, but they're going to have to deliver the right ROI. If it's a very expensive proprietary NIC, who pays for that, and does it deliver the ROI that they need? So we're going to have to see that in the marketplace, and by the way, Cisco's going to have the same challenge, and Dell's going to have the same challenge. They're all racing to supply this edge stack, if you will, packaged with hardware, but it's going to come down to how is it priced, what's the ROI, and are these customers going to justify the investment is the trick. >> Absolutely, Scott. Really good points there, too. Of course the HPE announcement, big move for Pensando. Doesn't mean that they can't work with the other server vendors. They absolutely are talking to all of them, and we will see if there are alternatives to Pensando that come up, or if they end up singing with them. All right, so what we have here is I've actually got quite a few interviews with the Pensando team, starting with I talked about MPLS. We have Prem, Jane, and Sony Giandoni, who are the P and the S in MPLS as part of it. Both co-founders, Prem is the CEO. We have Silvano Guy who, anybody that followed this group, you know writes the book on it. If you watched all the way this far and want to learn even more about it, I actually have a few copies of Silvano's book, so if you reach out to me, easiest way is on Twitter. Just hit me up at @Stu. I've got a few copies of the book about Pensando, which you can go through all those details about how it works, the programmability, what changes and everything like that. We've also, of course, got Hewlett Packard Enterprise, and while we don't have any customers for this segment, Scott mentioned many of the retail ones. Goldman Sachs is kind of the marquee early customer, so did talk with them. I have Randy Pond, who's the CFO, talking about they've actually seen an increase beyond what they expected at this point of being out of stealth, only a little over six months, even more, which is important considering that it's tough times for many startups coming out in the middle of a pandemic. So watch those interviews. Please hit us up with any other questions. Scott Raynovich, thank you so much for joining us to help talk about the industry, and this Pensando partnership extending with HPE. >> Thanks, Stu. Always a pleasure to join theCUBE team. >> All right, check out thecube.net for all the upcoming, as well as if you just search "Pensando" on there, you can see everything we had on there. I'm Stu Miniman, and thank you for watching theCUBE. (smooth music)
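Earlier in this conversation Scott and Stu mention that Pensando builds its edge stack around the P4 networking language and a programmable NIC. P4 itself is a domain-specific language compiled onto the NIC or ASIC, but the core idea it expresses, match-action tables applied to packet headers, can be sketched in a few lines of Python. The snippet below is purely a teaching toy under that assumption; the table entries, field names, and actions are made up for the example and say nothing about Pensando's actual pipeline.

```python
# Toy match-action table in the spirit of P4: match on a header field,
# then apply the associated action. Real P4 compiles such tables into hardware.
from typing import Callable, Dict

def drop(pkt: dict) -> None:
    print(f"drop    {pkt}")

def forward(port: int) -> Callable[[dict], None]:
    def action(pkt: dict) -> None:
        print(f"fwd->{port} {pkt}")
    return action

# Hypothetical exact-match table keyed on destination IP.
table: Dict[str, Callable[[dict], None]] = {
    "10.0.0.1": forward(1),
    "10.0.0.2": forward(2),
}

def pipeline(pkt: dict) -> None:
    # A table miss falls through to the default action, here a drop.
    table.get(pkt["dst_ip"], drop)(pkt)

pipeline({"dst_ip": "10.0.0.1", "len": 128})
pipeline({"dst_ip": "192.0.2.9", "len": 64})
```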

Published Date : Jun 17 2020


Joseph Jacks, OSS Capital | CUBEConversation, October 2018


 

(bright symphony music) >> Hello, I'm John Furrier, the founder of SiliconANGLE Media and co-host of theCUBE. We're here in Paulo Alto at our studio here. I'm joining with Joseph Jacks, the founder and general partner of OSS Capital. Open Source Software Capital, is what OSS stands for. He's also the founder of KubeCon which now is part of the CNCF. It's a huge conference around Kubernetes. He's a cloud guy. He knows open source. Very well respected in the industry and also a great guest and friend of theCUBE, CUBE alumni. Joseph, great to see you. Also known as JJ. JJ, good to see you. >> Thank you for having me on again, John. >> Hey, great to have you come on. I know we've talked many times on theCUBE, but you've got some exciting news. You got a new firm, OSS Capital. Open Source Software, not operational support like a telco, but this is an investment opportunity where you're making investments. Congratulations. >> Thank you. >> So I know you can't talk about some of the specifics on the funds size, but you are actually going to go out, talk to entrepreneurs, make some equity investments. Around open source software. What's the thesis? How did you get here, why did you do it? What's motivating you, and what's the thesis? >> A lot of questions in there. Yeah, I mean this is a really profoundly huge year for open source software. On a bunch of different levels. I think the biggest kind of thing everyone anchors towards is GitHub being acquired by Microsoft. Just a couple of weeks ago, we had the two huge hadoop vendors join forces. That, I think, surprised a lot of people. MuleSoft, which is a big opensource middleware company, getting acquired by Salesforce just a year after going public. Just a huge outcome. I think one observation, just to sort of like summarize the year 2018, is actually, starting in January, almost on sort of like a monthly basis, we've observed a major sort of opensource software company outcome. And sort of kicking off the year, we had CoreOS getting acquired by Red Hat. Brandon and Alex, the founders over there, built a really interesting company in the Kubernetes ecosystem. And I think in February, Al Fresco, which is an open source content portal taking privatization outcome from a private equity firm, I believe in March we had Magento getting acquired by Adobe, which an open source based CMS. PHP CMS. So just a lot of activity for significant outcomes. Multibillion dollar outcomes of commercial open source companies. And open source software is something like 20 years old. 20 years in the making. And this year in particular, I've just seen just a huge amount of large scale outcomes that have been many years in the making from companies that have taken lots of venture funding. And in a lot of cases, sort of partially focused funding from different investors that have an affinity for open source software and sort of understand the uniqueness of the open source model when it's applied to business, when it's applied to company building. But more sort of opportunistic and sort of affinity oriented, as opposed to a pure focus. So that's kind of been part of the motivation. I'd say the more authentically compelling motivation for doing this is that it just needs to exist. This is sort of a model that is happening by necessity. We're seeing more and more software companies be open source software companies. So open source first. They're built in a distributed way. They're leveraging engineers and talent around the world. 
They're just part of this open source kind of philosophy. And they are fundamentally kind of commercial open source software companies. We felt that if you had a firm basically designed in a way to exclusively focus on those kinds of companies, and where the firm were actually backed and supported by the founders of the largest commercial open source companies in the world over sort of the last decade, that could actually deliver a lot of value. So we've been sort of blogging a little bit about this. >> And you wrote a great post on it. I read about open source monetization. But I think one of the things I'm seeing as well that supports your thesis, and I'd like to get your reaction to it because I think this is something that's not really talked about, is that open source is still young. I mean, you go back. I remember the days when we used to have to hide in the shadows to get licenses and pirate stuff and do all that crazy stuff. But now, it's only a couple decades away. The leaders that were investing were usually entrepreneurs that have been successful. The Rob Bearns, the Amar Wadhwa, the guy that did Spring. All these different open source. Linux, obviously, a great success story. But there hasn't been any institutional... Yeah, you've got Benchmark and others that have done some investments. A discipline around open source. Where open source is now table stakes in all software development. Cloud is scaling, scaling out globally. There's no real foc- There's never been a firm that's been focused on just open source from a commercial standpoint, while maintaining the purity and ethos of open source. I mean, is that- >> You agree? >> That's true. >> 100%, yeah. That's been the big part of creating the firm: aligning and solving for a pure, focused structure. And I think what I'll say abstractly is this sort of venture capital, venture style approach to funding enterprise technology companies, software companies in general, has been to kind of find great entrepreneurs, in an abstract way, that can build great technology companies. Can bring them to market, can sell them, and can scale them, and so on. And either create categories, or dominate existing categories, and disrupt incumbents, and so on. And I think while that has worked for quite a while, in the venture industry overall, in the 50, 60 years of the venture industry, lots of successful firms, I think what we're starting to see is a necessary shift toward accounting for the fundamental differences of open source software as it relates to new technology getting created and new software companies kind of coming into market. So we actually fundamentally believe that commercial open source software companies are fundamentally different, functionally, in almost every way, as compared to proprietary closed source software companies of the last 30 years. And the way we've sort of designed our firm, we'll be about ten people pretty soon. We're just about a month in. We're growing the team quickly, but we're sort of a small, focused team. >> Ten's not that small for a focused team, I mean, I know venture firms that have two billion under management that don't have more than 20 people. >> Well, we have portfolio partners that are focused in different functional areas where commercial open source software companies have really fundamental differences. If you were to sort of stack rank, by function, where commercial open source software companies are really fundamentally different, sort of top to bottom, legal would be, probably, the very top of the list.
Right, in terms of license compliance management, structuring all the sort of protections and provisions around how intellectual property is actually shipped to and sold to customers. The legal licensing aspects. The commercial software licensing. This is quite a polarizing hot topic these days. The second big functional area where we have a portfolio partner focused on this is finance. Finance is another area where commercial open source software companies have to sort of behaviorally orient and apply that function very, very differently as compared to proprietary software companies. So we're crazy honored and excited to have world experts and very respected leaders in those different areas sort of helping to provide sort of different pillars of wisdom to our portfolio companies, our portfolio founders, in those different functional areas. And we provide a really focused kind of structure for them. >> Well I want to ask you the kind of question that kind of bridges the old way and new way, 'cause I definitely see you guys definitely being new and different, which is good. Or as Andy Jassy would say, you can be misunderstood for a while, but as you become successful, people will start understanding what you do. And that's a great example of Amazon. The pattern with success is traditionally the same. If we kind of encapsulate the difference between open source old and new, and that is you have something of value, and you're disrupting the market and collecting rents from it. Or revenue, or profit. So that's commercial, that's how businesses run. How are you guys going to disrupt with open source software the next generation value creation? We know how value's created, certainly in software that opensource has shown a path on how to create value in writing software if code is value and functionality's value. But to commercialize and create revenue, which is people paying something for something. That's a little bit different kind of value extraction from the value creation. So open source software can create value in functionality and value product. Now you bring it to the market, you get paid for it, you have to disrupt somebody, you have to create something. How are you looking at that? What's the vision of the creation, the extraction of value, who's disrupted, is it greenfield new opportunities? What's your vision? >> A lot of nuance and complexity in that question. What I would say is- >> Well, open source is creating products. >> Well, open source is the basis for creating products in a different kind of way. I'll go back to your question around let's just sort of maybe simplify it as the value creation and the value capture dynamics, right? We've sort of written a few posts about this, and it's subtle, but it's easy to understand if you look at it from a fundamental kind of perspective. We actually believe, and we'll be publishing research on this, and maybe even sort of more principled scientific, perhaps, even ways of looking at it. And then blog posts and research. We believe that open source software will always generate or create orders of magnitude more value than any constituent can capture. Right, and that's a fundamental way of looking at it. So if you see how cloud providers are capturing value that open source creates, whether it's Elasticsearch, or Postgres, or MySQL or Hadoop. 
And then commercial open source software companies that capture value that open source software creates, whether it's companies like Confluent around Kafka, or Cloudera around Hadoop, or Databricks around Apache Spark. Or whether it's the creators of those projects. The creators of Spark and Hadoop and Elasticsearch, sometimes many of them are the founders of those companies I mentioned, and sometimes they're not. We just believe regardless of how that sort of value is captured by the cloud providers, the commercial vendors, or the creators, the value created relative to the value captured will always be orders and orders of magnitude greater. And this is expressed in another way, which this may be easier to understand, it's a sort of reinforcing this kind of assertion that there's orders of magnitude value created far greater than what can be captured. If you were to do a survey, which we're currently in the process of doing, and I'm happy to sort of say that publicly for the first time here, of all the commercial open source software companies that have projects with large significant adoption, whether, say for example, it's Docker, with millions of users, or Apache Hadoop. How many Hadoop deployments there are. How many customers' companies are there running Hadoop deployments. Or it may be even MySQL. How many MySQL installations are there. And then you were to sort of survey those companies and see how many end users are there relative to how many customers are paying for the usage of the project. It would probably be something like if there were a million users of a given project, the company behind that project or the cloud provider, or say the end user, the developer behind the project, is unlikely to capture more than, say, 1% or a couple percent of those end users to companies, to paying companies, to paying customers. And many times, that's high. Many times, 1% to 2% is very high. Often, what we've seen actually anecdotally, and we're doing principled research around this, and we'll have data here across a large number of companies, many times it's a fraction of 1%. Which is just sort of maybe sometimes 10% of 1%, or even smaller. >> So the practitioners will be making more money than the actual vendors? >> Absolutely right. End users and practitioners always stand to benefit far greater because of the fundamental nature of open source. It's permissionless, it's disaggregated, the value creation dynamics are untethered, and it is fundamentally freely available to use, freely available to contribute to, with different constraints based on the license. However, all those things are sort of like disaggregating the creating of technology into sort of an unbounded network. And that's really, really incredible. >> Okay, so first of all, I agree with your premise 100%. We've seen it with CUBE, where videos are free. >> And that's a good thing. All those things are good. >> And Dave Vellante says this all the time on theCUBE. And we actually pointed this out and called this in the Hadoop ecosystem in 2012. In fact, we actually said that on theCUBE, and it turned out to be true, 'cause look at Hortonworks and Cloudera had to merge because, again, the market changed very quickly >> Value Creation. >> Because value >> Was created around them in the immediate cloud, etc. So the question is, that changes the valuation mechanisms. So if this true, which we believe it is. Just say it is. 
Then the traditional net present value cash flow metric of the value of the firm, not your firm, but, like, if I'm an open source firm, I'm only one portion of the extraction. I'm a supplier, and I'm an enabler, the valuation on cash flow might not be as great as the real impact. So the question I have for you, have you thought about the valuation? 'Cause now you're thinking about bigger construct community network effects. These are new dynamics. I don't think anyone's actually crunched a valuation model around this. So if someone knew that, say for example, an open source project created all this value, and they weren't necessarily harvesting it from a cash flow perspective, there might be other ways to monetize it. Have you though about that, and what's your reaction to that concept? 'Cause capitalism would kind of shake down the system. 'Cause why would someone be motivated to participate if they're not capturing any value? So if the value shifts, are they still going to be able to participate? You follow the logic I'm trying to- >> I definitely do. I think what I would say to that is we expect and we encourage and we will absolutely heavily invest in more business model innovation in the area of open source. So what I mean by that is, and it's important to sort of qualify a few things there. There's a huge amount of polarization and lack of consensus, lack of industry consensus on what it actually means to have or implement an open source based business model. In fact there's a lot of people who just sort of point blankedly assert that an opensource business model does not exist. We believe that many business models for monetizing and commercializing open source exist. We've blogged and written about a few of them. Their services and training and support. There's open core, which is very effective in sort of a spectrum of ways to implement open core. Around the core, you can have a thin crust or a thick crust. There's SAS. There are hardware based distribution models, things like Sourcefire, and Cumulus Networks. And there are also network based approaches. For example, project called Storj or Stor-J. Being developed and run now by Ben Golub, who's the former CEO of Docker. >> CUBE alumni. >> Ben's really great open source veteran. This is a network, kind of decentralized network based approach of sort of right sizing the production and consumption of the resource of a storage based open source project in a decentralized network. So those are sort of four or five ways to commercializing value, however, four or five ways of commercializing value, however what we believe is that there will be more business model innovation. There will be more developments around how you can better capture more, or in different ways, the value that open source creates. However, what I will say though, is it is unrealistic to expect two things. It is unrealistic and, in fact, unfair to expect that any of those constituents will contribute back to open source proportional to the value that they received from it, or the benefit, and I'm actually paraphrasing Doug Cutting there, who tweeted this a couple of years ago. Very profoundly deep, wise tweet, which I very strongly agree with. And it is also unrealistic to expect a second thing, which is that any of those constituents can capture a material portion of the value that open source creates, which I would assert is many trillions of dollars, perhaps tens of trillions of dollars. It's really hard to quantify that. 
And it's not just dollars in economic sense, it's dollars in productivity time saved, new markets, new areas, and so on. >> Yeah, I think this is interesting, and I think that we'll be an open book at that. But I will say that what I've observed in looking through all these CUBE interviews, I think that business model innovation absolutely is something that is an IP. >> We need it. Well, it's now intellectual property, the business model isn't, hey I went to business school, learned this at Babson or Harvard, I learned this business model. We're going to do SAS premium. Okay, I get that. There's going to be very interesting new innovations coming, and I think that's the new IP. 'Cause open source, if it's community based, there's going to be formulas. So that's going to be really inter- Okay, so now let's get back to actual funding itself. You guys are doing early stage. Can you take us through the approach? >> We're very focused on early stage, investing, and backing teams that are, just sort of welcoming the idea of a commercial entity around their open source project. Or building a business fundamentally dependent on an open source project or maybe even more than one. The reason for that is this is really where there's a lot of structural inefficiency in supporting and backing those types of founders. >> I think one of the things with ... is with that acquisition. They were pure on the open source side, doing a great job, didn't want to push the business model too hard because the open source, let's face it, you got people like, eh, I don't want to get caught on the business side, and get revenue, perverse incentives might come up, or fear of incentives that might be different or not aligned. Was a great a value. >> I think so. >> So Red Hat got a steal on that one. But as you go forward, there's going to be certainly a lot more stuff. We're seeing a lot of it now in CNCF, for instance. I want to get your thoughts on this because, being the co founder of KubeCon, and donating it to the CNCF, Kubernetes is the hottest thing on the planet, as we talked about many years ago. What's your take on that, now? I see exciting things happening. What is the impact of Kubernetes, in your opinion, to the world, and where do you see that evolving rapidly, and where is the focus here as the people should be paying attention to? >> I think that Kubernetes replaces EC2. Kubernetes is a disaggregated API for distributed computing anywhere. And it happens to be portable and able to run on any kind of computer infrastructure, which sort of makes it like a liquid disaggregated EC2-like API. Which a lot of people have been sort of chasing and trying to implement for many years with things like OpenStack or Eucalyptus. But interestingly, Kubernetes is sort of the right abstraction for distributed computing, because it meets people where they are architecturally. It's sort of aligned with this current movement around distributed systems first designs. Microservices, packaging things in small compartmentalized units. >> Good for integrating of existing stuff. >> Absolutely, and it's very composable, un-opinionated architecturally. So you can sort of take an application and structure it in any given way, and as long as it has this sort of isolation boundary of a container, you can run it on Kubernetes without needing to sort of retrofit the architecture, which is really awesome. 
I think Kubernetes is a foundational part of the next kind of computing paradigm in the same way that Linux was foundational to the computing paradigm that gave rise to the internet. We had commodity hardware meeting open source based sort of cost reduction and efficiency, which really Linux enabled, and the movement toward scale out data center infrastructure that supported the Internet's sort of maturity and infrastructure. I think we're starting to see the same type of repeat effect thanks to Kubernetes basically being really well received by engineers, by the cloud providers. It's now the universal sort of standard for running container based applications on the different cloud providers. >> And I think having that un-opinionated posture, as you said, architectural posture, allows it to be compatible with a new kind of heterogeneity. >> Heterogeneity is critical. >> Heterogeneity is key, 'cause it's not just within the environment, it's also within each vendor, and each customer has more heterogeneity. So, okay, now that's key. So multi-cloud, I want to get your thoughts on multi-cloud, because now this goes into some of the things that might get built on top if Kubernetes continues to go down the road that you say it does. Then the next question is, stateful applications, service meshes. >> A lot of buzzwords. A lot of buzzwords in there. Stateful applications are real because at a certain point in time, you have a maturity curve with critical infrastructure that starts to become appealing for stateful mission critical storage systems, which is typically where you have all the crown jewels of a given company's infrastructure, whether it's a transactional system, or reading and writing core customer or financial service information, or whatever it is. So Kubernetes is starting to hit this maturity curve where people are migrating really serious mission critical storage workloads onto that platform. And obviously we're going to start to see even more critical workloads. We're starting to see edge workloads, because Kubernetes is a pretty low footprint system, so you can run it on edge devices, you can even run it on microcontrollers. We're sort of past the experimental, you know, the fun and games with Raspberry Pi towers, and people are actually legitimately doing real world edge kind of deployments with Kubernetes. We're absolutely starting to see multi-geo, multi-replication, multi-cloud sort of style architectures becoming real as well, because Kubernetes is this API that the industry's agreeing upon sufficiently. We actually have agreement around this sort of surface area for distributed system style computing that, if cloud providers can actually standardize on it in a way that lets application specific vendors or new types of application deployment models innovate further, then we can really unlock this sort of tight coupling of proprietary services inside cloud providers and disaggregate it. Which is really exciting, and it brings to mind the Netscape, Jim Barksdale line: bundling, un-bundling. We're starting to see the un-bundling of proprietary cloud computing service APIs. Things like Kinesis, and ALB and ELB, and proprietary storage services, and these other sticky services get un-bundled because of two big things. Open source, obviously, we have open source alternative data paths. And then we have Kubernetes, which allows us to sort of disaggregate things out pretty easily.
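As a small, concrete illustration of the "disaggregated, EC2-like API that runs anywhere" idea Joseph describes, the sketch below uses the official Kubernetes Python client; the same few lines work against any conformant cluster, whether it lives in a public cloud, an on-prem data center, or an edge device. It assumes you already have access to a cluster via a local kubeconfig, which is an assumption of this example rather than anything stated in the interview.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config; inside a pod you would call
# config.load_incluster_config() instead.
config.load_kube_config()

v1 = client.CoreV1Api()
# List every pod the cluster is running, regardless of which provider or
# hardware sits underneath -- the API surface is the same everywhere.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```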
>> I want to hear your thoughts, one final concept, before we break, 'cause I was having a private conversation with three people besides myself. A big time CIO of a company that if I said the name everyone would go, oh my god, that guy is huge, he's seen it all going back many, many ways. Currently done a lot of innovation. A hardcore network chip guy who knows networking, old school infrastructure. And then a cloud native application founder who knows a lot about software development and is state-of-the-art cloud native. So cloud native, all experienced, old-school, kind of about my age, a cloud native app developer, a big time CIO, and a chip networking kind of infrastructure guy. And we're talking, and one thing that came out, I want to get you thoughts on this, he says, so what's going on with DevOps, how do you see this service mesh, is a stay for (mumbles) on top of the stack, no stacks, horizontally scalable. And the comment that came out was storage and networking have had this relationship with everything since day one. Network moves a packet from point A to point B, and nothing happens in between, maybe some inspection. And storage goes from here now to the then, because you store it. He goes, that premise moves up the stacks, so then the cloud native guy goes, well that's what's happening up at the top, there's a lot of moving things around, workloads and or services, provisioning services, and then from now to then state. In real time. And what dawned on the next conversation the CIO goes, well this is exactly our challenge. We have under the hood infrastructure being programmable, >> We're having some trouble with the connection. Please try again. >> My phone's calling me. >> Programmable connections. >> So you got the programmable on the top of the stack too, so the CIO said, that's exactly the problem we're trying to solve. We're trying to solve some of these network storage concepts now at an application level. Your thoughts to that. >> Well, I think if I could tease apart everything you just said, which is profound synthesis of a lot of different things, I think we've started to see application logic leak out of application code itself into dedicated layers that are really good at doing one specific thing. So traditionally we had some crud style kind of behavioral semantics implemented around business logic. And then, inside of that, you also had libraries for doing connectivity and lookups and service discovery and locking and key management and encryption and coordination with other types of applications. And all that stuff was sort of shoved into the single big application binary. And now, we're starting to see all those language runtime specific parts of application code sort of crack or leak out into these dedicated, highly scalable, Unix philosophy oriented sort of like layers. So things like Envoy are really just built for the sort of nervous system layer of application communication fabric up and down the layer two through layer seven sort of protocol transport stack, which is really profound. We're seeing things like Vault from Hashicorp handle secure key storage persistence of application dedication, authorization, metadata and information to sort of access different systems and end points. And that's a dedicated sort of stateful layer that you can sort of fragment out and delegate sort of application specific functionality to, which is really great for scalability reasons. And on, and on, and on. 
So we've started to see that, and I think one way of looking at that is it's a cycle. It's the sort of bundling and un-bundling aspect. >> One of the granny level services are getting a really low level- >> Yeah, it's a sort of like bundling and un-bundling and so we've got all this un-bundling happening out of application code to these dedicated layers. The bundling back may happen. I've actually seen a few Bay Area companies go like, we're going back to the monolith 'cause it actually gives us lots of efficiencies in things that we though were trade offs before. We're actually comfortable with a big monorepo, and one or two core languages, and we're going to build everything into these big binaries, and everyone's going to sort of live in the same source code repository and break things out through folders or whatever. There's a lot of really interesting things. I don't want to say we're sort of clear on where this bundling, un-bundling is happening, but I do think that there's a lot of un-bundling happening right now. And there's a lot of opportunity there. >> And the open source, obviously, driving it. So final question for you, how many deals have you done? Can you talk a little bit about the firm? And exciting things and plans that you have going forward. >> Yeah, we're going to be making a lot of announcements over the next few months, and we're, I guess, extremely thrilled. I don't want to say overwhelmed, 'cause we're able to handle all of the volume and inquiries and inbound interest. We're really honored and thrilled by the reception over the last couple weeks from announcing the firm on the first of October, sort of before the Hortonworks Cloudera merger. The JFrog funding announcement that week. The Elastic IPO. Just a lot of really awesome things happened that week. This is obviously before Microsoft open sourced all their patents. We'll be announcing more investments that we've made. We announced our first one on the first of October as well with the announcement of the firm. We've made a good number of investments. We're not able to talk to much about our first initiative, but you'll hear more about that in the near future. >> Well, we're excited. I think it's the timing's perfect. I know you've been working on this kind of vision for a while, and I think it's really great timing. Congratulations, JJ >> Thank you so much. Thanks for having me on. >> Joesph Jacks, also known as JJ, founder and general partner of OSS Capital, Open Source Software Capital, co founder of KubeCon, which is now part of the CNCF. A real great player in the community and the ecosystem, great to have him on theCUBE, thanks for coming in. I'm John Furrier, thanks for watching. >> Thanks, John. (bright symphony music)
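Joseph's example above of Vault from HashiCorp absorbing secure key storage and application auth out of application code is easy to see in practice. The sketch below uses hvac, the community Python client for Vault; the server address, token, and secret path are placeholders, and it assumes a KV version 2 secrets engine mounted at the default location, so treat it as an illustrative sketch rather than a reference implementation.

```python
import hvac

# Placeholder address and token -- point these at your own Vault deployment.
client = hvac.Client(url="https://vault.example.com:8200", token="s.REPLACE_ME")

# Read a key/value secret at request time instead of baking credentials
# into the application binary -- the "dedicated layer" pattern described above.
resp = client.secrets.kv.v2.read_secret_version(path="myapp/db")
print(resp["data"]["data"])   # e.g. {'username': '...', 'password': '...'}
```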

Published Date : Oct 18 2018


Roland Cabana, Vault Systems | OpenStack Summit 2018


 

>> Announcer: Live from Vancouver, Canada it's theCUBE, covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack foundation, and its Ecosystem partners. >> Welcome back, I'm Stu Miniman and my cohost John Troyer and you're watching theCUBE's coverage of OpenStack Summit 2018 here in Vancouver. Happy to welcome first-time guest Roland Cabana who is a DevOps Manager at Vault Systems out of Australia, but you come from a little bit more local. Thanks for joining us Roland. >> Thank you, thanks for having me. Yes, I'm actually born and raised in Vancouver, I moved to Australia a couple years ago. I realized the potential in Australian cloud providers, and I've been there ever since. >> Alright, so one of the big things we talk about here at OpenStack of course is, you know, do people really build clouds with this stuff, where does it fit, how is it doing, so a nice lead-in to what does Vault Systems do for the people who aren't aware. >> Definitely, so yes, we do build cloud, a cloud, or many clouds, actually. And Vault Systems provides cloud services infrastructure service to Australian Government. We do that because we are a certified cloud. We are certified to handle unclassified DLM data, and protected data. And what that means is the sensitive information that is gathered for the Australian citizens, and anything to do with big user-space data is actually secured with certain controls set up by the Australian Government. The Australian Government body around this is called ASD, the Australian Signals Directorate, and they release a document called the ISM. And this document actually outlines 1,088 plus controls that dictate how a cloud should operate, how data should be handled inside of Australia. >> Just to step back for a second, I took a quick look at your website, it's not like you're listed as the government OpenStack cloud there. (Roland laughs) Could you give us, where does OpenStack fit into the overall discussion of the identity of the company, what your ultimate end-users think about how they're doing, help us kind of understand where this fits. >> Yeah, for sure, and I mean the journey started long ago when we, actually our CEO, Rupert Taylor-Price, set out to handle a lot of government information, and tried to find this cloud provider that could handle it in the prescribed way that the Australian Signals Directorate needed to handle. So, he went to different vendors, different cloud platforms, and found out that you couldn't actually meet all the controls in this document using a proprietary cloud or using a proprietary platform to plot out your bare-metal hardware. So, eventually he found OpenStack and saw that there was a great opportunity to massage the code and change it, so that it would comply 100% to the Australian Signals Directorate. >> Alright, so the keynote this morning were talking about people that build, people that operate, you've got DevOps in your title, tell us a little about your role in working with OpenStack, specifically, in broader scope of your-- >> For sure, for sure, so in Vault Systems I'm the DevOps Manager, and so what I do, we run through a lot of tests in terms of our infrastructure. So, complying to those controls I had mentioned earlier, going through the rigmarole of making sure that all the different services that are provided on our platform comply to those specific standards, the specific use cases. So, as a DevOps Manger, I handle a lot of the pipelining in terms of where the code goes. 
I handle a lot of the logistics and operations. And so it actually extends beyond just operations and development, it actually extends into our policies. And so marrying all that stuff together is pretty much my role day-to-day. I have a leg in the infrastructure team with the engineering, and I also have a leg in with sort of the solutions architects and how they get feedback from different customers in terms of what we need and how we would architect that so it's safe and secure for government. >> Roland, so since one part of your remit is compliance, would you say that you're DevSecOps? Do you like that one or not? >> Well I guess there's a few more buzzwords, and there's a few more roles I can throw in there, but yeah, I guess yes. DevSecOps, there's a strong security posture that Vault holds, and we hold it to a higher standard than a lot of the other incumbents or a lot of platform providers, because we are actually very sensitive about how we handle this information for government. So, security's a big portion of it, and I think the company culture internally is actually centered around how we handle the security. A good example of this is, you know, internally we actually have controls about printing, you know, most modern companies today, they print pages, and you know it's an eco thing. It's an eco thing for us too, but at the same time there are controls around printed documents, and how sensitive those things are. And so, our position in the company is if that control exists because the Australian Government decides that that's a sensitive matter, let's adopt that in our entire internal ecosystem. >> There was a lot of talk this morning at the keynote both about upgrades, and I'm blanking on the name of the new feature, but also about Zuul and about upgrading OpenStack. You guys are a full upstream, OpenStack expert cloud provider. How do you deal with upgrades, and what do you think the state of the OpenStack community is in terms of kind of upgrades, and maintenance, and day two kind of stuff? >> Well I'll tell you the truth, the upgrade path for OpenStack is actually quite difficult. I mean, there's a lot of moving parts, a lot of components that you have to be very specific about in terms of how you upgrade to the next level. If you're not keeping in step with the next releases, you may fall behind and you can't upgrade, you know, Keystone from Liberty all the way up to Ocata, right? You're basically stuck there. And so what we do is we try to figure out what the government needs, what are the features that are required. And, you know, it's also a conversation piece with government, because if we don't have certain features in this particular release of OpenStack, it doesn't mean we're not going to support them. We're not going to move to the next version just because it's available, right? There's a lot of security involved in fusing our controls inside our distribution of OpenStack. I guess you can call it a distribution, or our build of OpenStack. But it's all based on a conversation that we start with the government. So, you know, if they need vGPUs for some reason, right, with the Queens release that's coming out, that's a conversation we're starting. And we will build in that functionality as we need it. >> So, does that mean that you have different entities with different versions, and if so, how do you manage all of that? >> Well, okay, so yes that's true.
We do have different versions where we have a Liberty release, and we have an Ocata release, which is predominant in our infrastructure. And that's only because we started with the inception of the Liberty release before our certification process. A lot of the things that we work with government for is how do they progress through this cloud maturity model. And, you know, the forklift and shift is actually a problem when you're talking about releases. But when you're talking about containerization, you're talking about Agile methodologies and things like that, it's less of a reliance on the version, because you now have the ability to respawn that same application, migrate the data, and have everything live as you progress through different cloud platforms. And so, as OpenStack matures, this whole fast-forward idea of getting to the next release helps, because now they have an integration step, or they have a path to the next version even though you're two or three versions behind, because let's face it, most operators will not go to the latest and greatest, because there's a lot of issues you're going to face there. I mean, not that the software is bad, it's just that early adopters will come with early adopter problems. And, you know, you need that userbase. You need those forum conversations to be able to be safe and secure about, you know, whether or not you can handle those kinds of things. And there's no need for our particular users' user space to have those latest and greatest things unless there is an actual request. >> Roland, you are an IaaS provider. How are you handling containers, or requests for containers from your customers? >> Yes, containers is a big topic. There's a lot of maturity happening right now with government, in terms of what a container is, for example, what is orchestration with containers, how does my legacy application forklift and shift to a container? And so, we're handling it in stages, right, because we're working with government in their maturity. We don't do container services on the platform, but what we do is we open-source a lot of code that allows people to deploy, let's say, a Terraform file that creates a Docker host, you know, and we give them examples. A good segue into what we've just launched last week was our Vault Academy, where we are now training 3,000 government public servants on new cloud technologies. We're not talking about how does an OS work, we're talking about infrastructure as code, we're talking about Kubernetes. We're talking about all these cool, fun things, all the way up to function as a service, right? And those kinds of capabilities are what's going to propel government in Australia moving forward in the future. >> You hit on one of my hot buttons here. So functions as a service, do you have serverless deployed in your environment, or is it an education at this point? >> It's an education at this point. Right now we have customers who would like to have that available as a native service in our cloud, but what we do is we concentrate on the controls and the infrastructure as a service platform first and foremost, just to make sure that it's secure and compliant. Everyone has the ability to deploy functions as a service on their platform, or on their accounts, or on their tenancies, and have that available to them through a different set of APIs. >> Great. There's a whole bunch of open-source versions out there. Is that what they're doing?
Do you have any preference toward OpenWhisk, or Fn, or you know, Fission, all the different versions that are out there? >> I guess, you know, you can sort of like, you know, pick your racehorse in that regard. Because it's still early days, and I think OpenFaaS is pretty much what I've been looking at recently, and it's just a discovery stage at this point. There are more mature customers who are coming in, some partners who are championing different technologies, so the great thing is that we can make sure our platform is secure and they can build on top of it. >> So you brought up security again, one of the areas I wanted to poke at a little bit is your network. So, it being an IaaS provider, networking's critical, what are you doing from a networking standpoint, is micro-segmentation part of your environment? >> Definitely. So natively, the functions that we build in our cloud are all around security, obviously. Micro-segmentation's a big part of that, training people in terms of how micro-segmentation works from a forklift and shift perspective. And the network connectivity we have with the government is also a part of this whole model, right? And so, we use technologies like Mellanox, 400G fabric. We're BGP internally, so we're routing through the host, or routing to the host, and we have this... Well so in Australia there's this, there's a service from the Department of Finance, they create this idea of the ICON network. And what it is, is actually a direct media fiber from the department directly to us. And that means, directly to the edge of our cloud and pipes right through into their tenancy. So essentially what happens is, this is true, true hybrid cloud. I'm not talking about going through gateways and stuff, I'm talking about I spin up an instance in the Vault cloud, and I can ping it from my desktop in my agency. Low latency, submillisecond direct fiber link, up to 100G. >> Do you have certain programmability you're doing in your network? I know lots of service providers, they want to play and get in there, they're using, you know, new operating models. >> Yes, I mean, we're using the... I draw a blank. There's a lot of technologies we're using for the network, and the Cumulus networking OS is what we're using. That allows us to bring it in to our automation team, and actually use more of a DevOps tool to sort of create the deployment from a code perspective instead of having a lot of engineers hardcoding things right on the actual production systems. Which allows us to gate a lot of the changes, which is part of the security posture as well. So, we were doing a lot of network offloading on the ConnectX-5 cards in the data center, we're using Cumulus Networks for bridging, we're working with Neutron to make sure that we have Neutron routers and making sure that that's secure and it's code reviewed. And, you know, there's a lot of moving parts there as well, and I think from a security standpoint and from a network functionality standpoint, we've come to a happy place in terms of providing the fastest network possible, and also the most secure and safe network possible. >> Roland, you're working directly with the upstream OpenStack projects, and it sounds like some others as well. You're not working with a vendor who's packaging it for you or supporting it. So that's a lot of responsibility on you and your team, I'm kind of curious how you work with the OpenStack community, and how you've seen the OpenStack community develop over the years.
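To make the "deployment from a code perspective" point concrete, a gated change to a Cumulus Linux leaf can be expressed as a small Ansible playbook against Cumulus's NCLU command layer rather than typed by hand on a production switch. This is only an illustrative sketch: the inventory group, AS number and swp port range are assumptions, not Vault Systems' actual design.

# Hypothetical playbook: a version-controlled, reviewable BGP change for Cumulus leaves.
# The group name, AS number and port range below are made up for the example.
- name: Configure BGP unnumbered towards the hosts on Cumulus Linux leaves
  hosts: cumulus_leaves
  become: true
  tasks:
    - name: Declare the BGP AS and unnumbered peerings to the hosts
      community.network.nclu:
        commands:
          - add bgp autonomous-system 65101
          - add bgp neighbor swp1-4 interface remote-as external
          - add bgp ipv4 unicast redistribute connected
        commit: true
        description: "BGP-to-the-host change pushed from the automation pipeline"

Run through the same review and CI gates as any other code change, a playbook like this replaces hand-edited switch configuration with something the team can diff, test and roll back.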
>> Yeah, so I mean we have a lot of talented people in our company who actually have OpenStack as a passion, right? This is what they do, this is what they love. They've come from different companies who worked in OpenStack and have contributed a lot actually, to the community. And actually that segues into how we operate culturally inside our company. Because if we do work with upstream code, and it doesn't have anything to do with the security compliance of the Australian Signals Directorate in general, we'd like to upstream that as much as possible and contribute back the code where it seems fit. Obviously, there's vendor mixes and things we have internally, and that's with the Mellanox and Cumulus stuff, but anything else beyond that is usually contributed up. Our team's actually very supportive of each other, we have network specialists, we have storage specialists. And it's a culture of learning, so there's a lot of synchronization, a lot of synergies inside the company. And I think that's partly to do with the people who make up Vault Systems, and that whole camaraderie is actually propagated through our technology as well. >> One of the big themes of the show this year has been broadening out of what's happening. We talked a little bit about containers already, Edge Computing is a big topic here. Either Edge, or some other areas, what are you looking for next from this ecosystem, or new areas that Vault is looking at poking at? >> Well, I mean, a lot of the exciting things for me personally, I guess, I can't talk to Vault in general, 'cause there's a lot of engineers who have their own opinions of what they'd like to see, but with the Queens release with the vGPUs, something I'd like, that's all great, a long-term release cycle with the OpenStack Foundation would be great, or the OpenStack platform would be great. And that's just to keep in step with the next releases to make sure that we have the continuity, even though we're missing one release, there's a jump point. >> Can you actually put a point on that, what that means for you. We talked to Mark Collier a little bit about it this morning, but what you're looking for and why that's important. >> Well, it comes down to user acceptance, right? So, I mean, let's say you have a new feature or a new project that's integrated through OpenStack. And, you know, some people find out that there's these new functions that are available. There's a lot of testing behind the scenes that has to happen before that can be vetted and exposed as part of our infrastructure as a service platform. And so, by the time that you get to the point where you have all the checks and balances, and marrying that next to the Australian controls that we have, it's one year, two years, or you know, however long it might be. And you know by that time we're at the end of the release, and so, you know, you do all that work, you want to make sure that you're not doing that work and refactoring it for the next release when you're ready to go live. And so, having that long-term release is actually what I'm really keen about. Having that point of, that jump point to the latest and greatest. >> Well Roland, I think that's a great point. You know, it used to be we were on the 18-month cycle, OpenStack was more like a six-month cycle, so I absolutely understand why this is important, that I don't want to be tied to a release when I want to get a new function. >> John: That's right.
>> Roland Cabana, thank you for the insight into Vault Systems and congrats on all the progress you have made. So for John Troyer, I'm Stu Miniman. Back here with lots more coverage from the OpenStack Summit 2018 in Vancouver, thanks for watching theCUBE. (upbeat music)
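Cabana mentions above that Vault open-sources examples such as a Terraform file that creates a Docker host for agencies experimenting with containers on their own tenancies. A minimal sketch of that kind of file, written against the standard OpenStack Terraform provider, could look like the following; the image, flavor, key pair and network names are placeholders, not Vault's published examples.

# Illustrative Terraform only: boot one instance in an OpenStack tenancy and let
# cloud-init turn it into a Docker host. All names below are assumptions.
resource "openstack_compute_instance_v2" "docker_host" {
  name        = "docker-host-01"
  image_name  = "ubuntu-18.04"
  flavor_name = "m1.medium"
  key_pair    = "ops-key"

  network {
    name = "agency-tenant-net"
  }

  # cloud-init installs and enables Docker on first boot
  user_data = <<-EOT
    #cloud-config
    packages:
      - docker.io
    runcmd:
      - systemctl enable --now docker
  EOT
}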

Published Date : May 21 2018


Tom Burns, Dell EMC | Dell Technologies World 2018


 

>> Announcer: Live from Las Vegas, it's the Cube. Covering Dell Technologies World 2018. Brought to you by Dell EMC, and its ecosystem partners. >> Welcome back to SiliconANGLE media's coverage of Dell Technologies World 2018. I'm Stu Miniman here with my cohost Keith Townsend, happy to welcome back to the program Tom Burns, who's the SVP of Networking and Solutions at Dell EMC. Tom, great to see ya. >> Great to see you guys as well. Good to see you again. >> All right, so I feel like one of those CNBC guys. It's like, Tom, I remember back when Force10 was acquired by Dell and all the various pieces that have gone on and converged in infrastructure, but of course with the merger, you've gotten some new pieces to your toy chest. >> Tom: That's correct. >> So maybe give us the update first as to what's under your purview. >> Right, right, so I continue to support and manage the entire global networking business on behalf of Dell EMC, and then recently I picked up what we called our converged infrastructure business or the VxBlock, Vscale business. And I continue also to manage what we call Enterprise Infrastructure, which is basically any time our customers want to extend the life of their infrastructure around memory, storage, optics, and so forth. We support them with Dell EMC certified parts, and then we add to that some third-party componentry around rack power and cooling, software, Cumulus, Big Switch, things like that. Riverbed, Silver Peak, others. And so with that particular portfolio we also cover what we call the Dell EMC Ready Solutions, both for the service provider, but then also for traditional enterprises as well. >> Yeah, well luckily there's no change in any of those environments. >> Tom: No, no. >> Networking's been static for decades. I mean they threw a product line that I mean last I checked was somewhere in the three to four billion dollar range. With the VxBlock under what you're talking there. >> Yeah it's a so, yeah-- >> Maybe you could talk, what does this mean? 'Cause if I give you your networking guy. >> Right. >> Keith and I are networking guys by background, obviously networking's a piece of this, but give us a little bit of how the sausage is made inside to-- >> Tom: Sure. >> Get to this stuff. >> Well I think when you talk about all these solutions, Cloud, Hybrid Cloud, Public Cloud, when you think about software-defined X, the network is still pretty darn important, right? I often say that if the network's not working, it's going to be a pretty cloudy day. It's not going to connect. And so the fabric continues to remain one of the most critical parts of the solution. So the thought around the VxBlock and moving that in towards the networking team is the importance of the fabric and the capability to scale out and scale up with our customers' workloads and applications. So that's probably the reason primarily the reason. And then we can also look at how we can work very closely with our storage division 'cause that's the key IP component coming from Dell EMC on the block side. And see how we can continue to help our customers solve their problems when it comes to this not your do-it-yourself but do-it-for-me environment. >> All right, I know Keith wants to jump in, but one just kind of high-level question for you. I look at networking, we've really been talking about disaggregation of what's going on. It's really about disaggregated systems. And then you've got convergence, and there's other parts of the group that have hyper convergence. 
How do you square the circle on those two trends and how do those go together? >> Well, I think it's pretty similar on whether you go hyperconverged, converged, or do-it-yourself, you build your own block so to speak. There's a set of buyers that want everything to be done for them. They want to buy the entire stack, they want it pre-tested, they want it certified, they want it supported. And then there's a set of customers that want to do it themselves. And that's where we see this opportunity around disaggregation. So we see it primarily in hyperscale and cloud, but we're seeing it more and more in large enterprise, medium enterprise, particular verticals where customers are in essence looking for some level of agility or capability to interchange their solutions by a particular vendor, or solutions that are coming from the same vendor but might be a different IP, as an example. And I'm really proud of the fact that Dell EMC really kicked off this disaggregation of the hardware and software in networking some 4 1/2 years ago. Now you see some of the, let's say, larger industry players starting to follow suit. And they're starting to disaggregate their software as well. >> Yeah, I would have said just the commonality between those two seemingly opposed trends is scale. >> Right. >> It's how do customers really help scale these environments? >> Exactly, exactly. It depends a lot on the customer environment and what kind of skill sets they have. Are they willing to go through some of that do-it-yourself type of process? Obviously Dell EMC services is there to help them in those particular cases. But we kind of have this buying conundrum of build versus buy. I think my old friend, Chad Sakac, used to say, there's different types of customers that want a VxRail or build-it-themselves, or they want a VxBlock. We see the same thing happen in networking. There's those customers that want disaggregated hardware and software, and in some cases even disaggregated software. Putting those protocols and features on the switch that they actually use in the data center. Rather than buying a full proprietary stack, well we continue to build the full stack for a select number of customers as well, because that's important to that particular sector. >> So again, Tom, two very different ends of the spectrum. I was at ONS a couple of months ago, talked to the team. Dell is a huge sponsor of the Open Source community. And I don't think many people know that. Can you talk about the Open Source relationship or the relationship that Dell Networking has with the Open Source community? >> Absolutely, we first made our venture in Open Source actually with Microsoft in their SONiC work. So they're creating their own network operating software, and we made a joint contribution around the switch abstraction interface, or SAI. So that was put into the Open Compute Project probably around 3 1/2, maybe four years ago. And that's right after we announced this disaggregation. We then built basically an entire layer of what we call our OS10 base, or what's known in the Linux Foundation as OPX.
And we contributed that to OPX, or to the Linux Foundation, where basically that gives the customer the capability through the software that takes care of all the hardware, creates this switch abstraction interface to gather the intelligence from the ASIC and the silicon, and brings it to a control plane, which allows APIs to be connected for all your north-bound applications or your general analysis that you want to use, or a disaggregated analysis, what you want to do. So we've been very active in Linux. We've been very active in OCP as well. We're seeing more and more embracing of this opportunity. You've probably seen recently AT&T announced a rather large endeavor to replace tens of thousands of routers with basically white box switches and Open Source software. We really think that this trend is moving, and I'm pretty proud that Dell EMC was a part of getting that all started. >> So that was an awful lot of provider talk. You covered both the provider space and the enterprise space. Talk to us about where the two kind of meet. You know the provider space, they're creating software, they're embracing OpenStack, they're creating plug-ins for disaggregated networking. And then there's the enterprise. There's opportunity there. Where do you see the enterprise leveraging disaggregation versus the service provider? >> Well, I think it's this move towards software-defined. If you heard in Michael's keynote today, and you'll hear more tomorrow from Jeff Clarke, the whole world is moving to software-defined. It's no longer if, it's when. And I think the opportunity for enterprises that are kind of in that transformation stage, and moving from traditional software-defined, or excuse me, traditional data centers to the software-defined, they could look at disaggregation as an opportunity to give them that agility and capability. In a manner in which they can kind of continue to manage the old world, but move forward into the new world of disaggregation and software-defined with the same infrastructure. You know it's not well-known that Dell EMC, we've made our switching now capable of running five different operating softwares. That's dependent upon workloads and use cases, and the customer environment. So, traditional enterprise, they want to look at traditional protocols, traditional features. We give them that capability through our own OS. We can reduce that with OS partners, software coming from some of our OS partners, giving them just the protocols and features that they need for the data center or even out to the edge. And it gives them that flexibility and change. So I think it really comes to this point of when they're going to move from traditional networking to the next generation of networking. And I'm very happy, I think Dell Technologies is leading the way. >> So I'm wondering if you could expand a little bit about that. When I think about Dell and this show, I mean it is a huge ecosystem. We're sitting right near the Solutions Expo, which will be opening in a little bit, but on the networking side, you've got everything from all the SD-WAN pieces, to all the network operating systems that can sit on top. Maybe, give us kind of the update on the overview, the ecosystem, where Dell wins. >> Yeah, yeah I mean, if you think about 30-something years ago when Michael started the company and Dell started, what was it about? It was really about transforming personal computing, right?
It was about taking something that was kind of a traditional proprietary architecture and commoditizing it, making sure it's scalable and supportable. You think of the changes that have occurred now between the mainframe and x86. This is what we think's happening in networking. And at Dell Technologies in the networking area, whether it's Dell EMC or VMware, we're really geared towards this SDX type of market. Virtualization, Layer 2, Layer 3 disaggregated switching in the data center. Now SD-WAN with the acquisition of VeloCloud by VMware. We're really hoping customers transform the way networking is being managed, operated, supported, to give them much more flexibility and agility in a software-defined market. That being said, we continue to support a multitude of other partners. We have Cumulus, Big Switch, IP Infusion, and Pluribus as network operating software alternatives. We have our own, and then we have them as partners. In the SD-WAN area, while we lead with VeloCloud, we have Silver Peak and we also have Versa Technology, which is getting a lot of uptick in the area. Both in the service provider and in the enterprise space. Huge area of opportunity for enterprises to really lower their cost of connectivity in their branch offices. So, again, we at Dell, we want to have an opinion. We have some leading technologies that we own, but we also partner with some very good, best-of-breed solutions. But being that we're open, and we're disaggregated, and we have an incredible scaling and service organization, we have this capability to bring it together for our customers and support them as they go through their IT transformation.
It doesn't lock 'em in to a single solution. It doesn't get 'em into that long life cycle of when you're going to do changes and upgrades and so forth. This gives them a lot more flexibility and capability. >> Tom, sometimes we have the tendency to get down in the weeds on these products. Especially in the networking space. One of my complaints was, the whole SDN wave didn't seem to connect necessarily to some of the big businesses' challenges. Heard in the keynote this morning a lot of talk about digital transformation. Bring us up to speed as to how networking plays into that overall story. What you're hearing from customers, and if you have any examples we'd love to hear. >> Yeah, no, so I think networking plays a critical part in the IT transformation. I think if you think of the first move in virtualization around compute, then you have the software-defined storage, the networking component was kind of the laggard. It was kind of holding back. And in fact today, I think some analysts say that even when certain software-defined storage implementations occur, interruptions or issues happen in the network. Because the network hasn't been built and architected for that type of environment. So the companies end up going back and re-looking at how that's done. And companies overall, I think, are frustrated with this. They're frustrated with the fact that the network is holding them back from enabling new services, new capabilities, new workloads, moving towards a software-defined environment. And so I think this area again, of disaggregation, of software-defined, of offering choice around software, I think it's doing well, and it's really starting to see an uptick. And the customer experience is as follows. One is, open networking where it's based upon standard commodity-based hardware. It's simply less expensive than proprietary hardware. So they're going to have a little bit of savings from the CapEx standpoint. But because they moved towards this disaggregated model where perhaps they're using one of our third-party software partners that happens to be based in Linux, or even our own OS10 which is now based in Linux, look at that, the tools around configuration and automation are the same as compute. And the same as storage. And so therefore I'm saving on this configuration and automation and so forth. So we have examples such as Verizon that literally not only saves about 30% on their CapEx, they're saving anywhere between 40 and 50% on their Opex. Why? They can roll out applications much faster. They can make changes to their network much faster. I mean that's the benefit of virtualization and NSX as well, right? Instead of having these decisions of sending a network engineer to a closet to do CLI, down in the dirt as you would say, and reconfigure the switch, a lot of that now has been abstracted to a software level, and getting the company much more capability to make the changes across the fabric, or to segregate it using NSX micro-segmentation to make the changes to those users or to that particular environment that needs those changes. So, just the incredible amount of flexibility. I think SDN, let's say six, seven years ago, everyone thought it was going to be CapEx. You know, cheaper hardware, cheaper ASICs, et cetera. It's all about Opex. It's around flexibility, agility, common tool sets, better configuration, faster automation.
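As a rough illustration of that Opex argument — the same automation tool chain reaching from servers down to switches — a change that once meant sending an engineer to a closet can instead be a small, reviewable playbook. The module, inventory group and VLAN below are assumptions for the sketch, not Verizon's or Dell EMC's actual tooling.

# Hypothetical Ansible example: declare a VLAN on a Dell EMC OS10 leaf with the
# same workflow used for servers. Group name and VLAN ID are made up.
- name: Push a gated VLAN change to data-center leaf switches
  hosts: os10_leaves
  gather_facts: false
  connection: ansible.netcommon.network_cli
  vars:
    ansible_network_os: dellemc.os10.os10
  tasks:
    - name: Ensure the application VLAN exists with a description
      dellemc.os10.os10_config:
        lines:
          - description app-tier
        parents:
          - interface vlan 120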
>> So we all have this nirvana idea that we can take our traditional stacks, whether it's pre-packaged CI configurations that are pre-engineered, HCI, SDN, disaggregated networking. Add to that a software layer, this magical automation. Can you unpack that for us a little bit? What are you seeing practically, whether it's from the service provider perspective or on the enterprise side? What are those crucial relationships that Dell EMC is forming with the software industry to bring forth that automation? >> Well obviously we have a very strong relationship with VMware. >> Keith: Right. >> And so you have vRealize and vROps and so forth, and in fact in the new VxBlock 1000, you're going to see a lot of us gearing a lot of our development towards the vRealize suite, so that helps those customers that are in a VMware environment. We also have a very strong relationship with Red Hat and OpenStack, where we've seen very successful implementations in the service provider space. Those that want to go a little bit more, a little bit more disaggregated, a little bit more open, even from the storage participation like SAP and so forth. But then obviously we're doing a lot of work with Ansible, Chef, and Puppet, for those that are looking for more of a common open source set of tools across server, compute, networking, storage and so forth. So I think the real benefit is kind of looking at it at that 25,000-foot view on how we want to automate. Do you want to go towards containers, do you want to go traditional? What are the tool sets that you've been using in your compute environment, and can those be brought down to the entire stack? >> All right, well Tom Burns, really appreciate catching up with you. I know Keith will be spending a little time at Interop this week too. I know, I'm excited that we have a lot more networking here at this end of the strip also this week. >> Appreciate it. Listen to Pat's talk this afternoon. I think we're going to be hearing even more about Dell Technologies' networking. >> All right. Tom Burns, SVP of Networking and Solutions at Dell EMC. I'm Stu Miniman and this is Keith Townsend. Thanks for watching The Cube. (upbeat music)

Published Date : Apr 30 2018


Lee Doyle | OpenStack Summit 2017


 

>> Live from Boston, Massachusetts, it's The Cube covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> Welcome back, I'm Stu Miniman joined by my cohost this week, John Troyer, here at the OpenStack Summit in Boston, Massachusetts. Happy to welcome back to the program, Lee Doyle, who is Principal Analyst with Doyle Research. Lee, nice to see you. >> Nice to see you. Thanks for having me. >> Alright, so networking's your main space. >> Lee: Absolutely. >> We've talked about networking for a bunch of years here at the show. Last year: telecommunication, NFV. This year, it seems like half the people on the main stage worked for, you know, some big Telco, and NFV, buzz on the edge. Before we get into some of the initial pieces, what's your take on the OpenStack community, in general, and the show? We're gettin' towards the end so what's your take been this week? >> Always great to have the show in Boston, my hometown. OpenStack and telecom have been going together hand in hand since the beginning of OpenStack, really, and a lot of contributions and use to the big service providers who are here, AT&T, Verizon, some others. So OpenStack's really becoming a good platform for their NFV and virtualization modernization efforts. >> Before we get into some of the cool, new stuff. Core networking, I mean, Neutron's one of those things we've been banging on for years. It seems like it's matured a bit, but always the one, I mean, networking's never done, right? We're always cranking on it, doing new things. What do you hear about the stability? What the community hears? Is the networking thriving good? Any feedback you've had. >> Sure, no, it was a good question and always a question that I ask folks. I think we've seen significant maturity in Neutron. It's stable, it performs, it does a lot of things we expect networks to do, but there still are third-party network solutions. If you look at Big Switch or Cumulus or others, say, you don't want to use Neutron or you want to enhance it, feel free to work with us to provide even better networking. >> In a broad trend, companies you mentioned, they're software companies. >> Lee: Absolutely. >> Networking is like boxes and cabling and things like that. How does that software-eating-the-world stack up when it comes to the network space? >> I think the majority of the value in networking, as in IT, is in software, right? The majority of the revenue is in boxes, which are hardware and software integrated. So, from a technology standpoint, it's very software driven. From a market standpoint, it's still box driven. We're in between those two and that's what makes this a very interesting point in time. >> Maybe you could tease apart for us a little bit, for people on the enterprise side, they're used to hearing the letters SDN, right? >> Lee: Right. >> Here, if you're talking to telecom NFV, slightly different takes on some similar problems about service, management, and delivery. >> Lee: Right. >> In OpenStack, are the same bits, is Neutron used by the enterprise for SDN in the same way it's used at the network core by the service providers, or are these really two different planes that are developing? >> Right, and it's a bit of a complex question. At Doyle Research, what I've done to simplify is talking about software-based networking. So that includes SDN, that includes NFV. Those things overlap and we'll get very hung up, like, what does SDN mean? It's separation of control and data plane.
What does network function virtualization mean? It's an ETSI telecom standard for taking boxes in the telecom network and turning them into software. So, I try to get away from that and move towards: okay, what is it we're trying to accomplish? Well, with OpenStack, we're trying to deliver networking. It's going to be in software. There still might be, and probably is, some form of Ethernet switch or other box that's moving the bits, right? So, the way I think about it is some of the SDN products that I mentioned, like Cumulus or Big Switch, would be enhancements to something that's a core function of OpenStack, which I wouldn't traditionally call SDN, but that's my view. >> Lee, speak to us, what have you heard about Edge? It was one of those things we heard, the buzz coming in. There's a couple different definitions. The telecommunication people have a very, you know: that's the edge of our network. When I talked to enterprise people, it's IoT and sensors. So what are you hearing about Edge? How's the network play across all those? >> Right, well, Edge is very much how you define it or which environment you're talking about, right? Traditionally, in the telecom world, you've got your core of your network and you've got your edge of the network and how that's defined in between, because you have network capabilities all throughout the environment. SD-WAN has by far been the hottest technology, not just in terms of buzz, but in terms of actual deployment, both in enterprise and service provider. In the service provider space, that sort of blurs into what the vCPE offerings are. So you hear: Verizon, Telefonica just made an announcement, went with Nuage on that. So you can go through all the major service providers. Either they're incorporating SD-WAN functionality into their vCPE or they're announcing SD-WAN functionality separately. >> Is there any connection between the SD-WAN stuff and OpenStack I hadn't heard or talked about? Of course, hot technology. We covered Riverbed's announcements. Last year, Viptela, been on The Cube a number of times, just acquired by Cisco. Where do you see SD-WAN playing out? Is this the year that it just becomes a feature? Does it still stay as a distinct market segment? >> On the OpenStack question. OpenStack's traditionally sort of a cloud-based, the bigger data center thing. There are elements you can use and leverage from OpenStack at the edge. In terms of SD-WAN, we're at the hockey-stick phase. The market's going straight up, starting to see wide-scale deployments across a large number of verticals. Usually, the verticals that have lots of branches. So you look at financial services, you look at retail, but you can extend to government, and healthcare, and anywhere where you're trying to do a lot of connectivity between distributed environments. And the real change is that, previously, you do a hub-and-spoke network. You get MPLS, you take the information from the branch and you move it to your corporate data center or data centers. Well now, cloud, SaaS. The information doesn't need to go to the data center. In fact, if it goes to the data center, you add a lot of latency. So SD-WAN is adding the intelligence, the traffic-steering, the ability to manage multiple networks and to move away from MPLS and towards more cost-effective internet connectivity. So, there's still 25. Viptela was the biggest company taken out recently but there's still 24 other solutions and probably more being announced over the next six months. >> Stu: Wow, 24, huh?
>> I'm curious, we talk about hybrid-cloud and multi-cloud and networking's one of the things that sort of tie all of that together. How do thing like Kubernetes, and the public-to-private piece, how's that shaking out in the network space? >> Well, networks have to support multi-cloud environments. They need to support what's happening privately, publicly, VMware, Red Hat, OpenStack obviously, and soon to be containers. Each of those are little bit different. So can you have a network solution that spans all of that? One of the things that VMware is very public about talking about, at this show, is their ability to do the hybrid public-private. Red Hat talks about that and I spent a lot of time last week on that topic as well. >> As you're talking with network engineers, both in service providers and out at the enterprise. We've talked about all this change, we've hyped the cloud, we're now switching from a hardware-centric model to a more software defined, literally. Are you seeing new skillsets needed for these network engineers? Automation, you know, does the job change as we go forward? >> Absolutely, it changes. When you look at a traditional CCIE, which is Cisco certified, that's about Cisco APIs, Cisco boxes, in a world where there's a lot of other software elements and you've got to tie to different orchestration, different management, public-private cloud. There absolutely is different skillsets and there needs to be an evolution and it's on of the challenges of the networking industry because there simply aren't enough people who are familiar with building the new style, software-driven networks as there need to be. >> John: With all this exhilaration and change, how are you seeing people say at the management layer, the management layer of people, the CxO layer, how are they dealing with all this change? You know, new technologies, emerging technologies. Things are not slowing down. >> No and so AT&T has a large-scale, public training program that tries to get its people skilled up to the new technologies. I know a lot of the other Telcos, who have been less public about it, are doing the same. If you go to large network user groups like ONUG, they're talking about new skillsets and how to train there. There's also the organizations. Do you blend compute, storage, application, and networking folks all in the same team. And I know you guys have talked about that previously. How quickly do organizations do that or do they remain relatively traditional. The CIOs are thinking about that, they're reorganizing, but it's not going to be just snap your fingers and hey, everyone's ready for the new software-driven world. >> Yeah, it's a fascinating thing, of course. Networking industry tends to move a little-bit slow. Especially enterprise and we've been talking about fast and agile for a lot of things but that does not characterize that. That being said, feels like things do move faster. What's the general attitude you hear from customers? Are they still reticent to move forward? Others slow to move those processes? You kind of hear, things like security, tend to realize I need to update more, I need to move forward. What do you hear when you're talking to customers, today versus, lets say, only five years ago? >> Sure, we're five years in on NFV and Etsy and I think we're making significant progress. You hear a lot about us at the shows where the Telcos are wanting NFV, but it's still in the initial phases. 
We've been talking about SDN and the enterprise for about the same amount of time and, you know, mainstream enterprises. The hyper-scale guys, you know: Google, Amazon, Facebook. Yeah, they're already there and they're very innovative and people are following their example and leveraging that. But I just think we're still early in the truly software-driven networking game. >> One of the questions I always have is: What size company you are and what capability do you have? What do you do internally? Versus, do you just adopt a platform that's going to do all that stuff for you? You and I talked about this years ago about network-fabric type of topologies, all the different pieces that went out. There's certain sized organizations, you're going to just go to someone else that can do that. I hear some pieces, Kubernetes might be the same kind of things. Do you see that? People just saying it's not outsourcing anymore, but I'm going to be more strategic, focus on my business, my applications, and let somebody else handle the underlying stuff. >> If IT, or the network, or branch operations is not central to what you do, I think outsourcing makes perfect sense. And that may be outsourcing it to a reseller, or someone to manage it for you, it may still be on-prem. But more and more the workloads are going to the clouds. >> And the reason I move away from outsourcing, the old outsourcing was: my mess for less and this is a more strategic: what piece of the stack do I own or what do I run versus someone else. It's not: I told you this is the exact configuration in something you run. It's: I'm buying x-bandwidth, x-performance, things like that and it's something that's updated a little more frequently. They manage that piece and it's further down the stack than I care to look at. >> Lee: Sure, there's new, managed service providers who look at your WAN and networks, so that comes into play. The leading Telcos would certainly want to play a role here beyond just providing the pipe. They want to take care of your networking challenges for you. So there's a lot of new options for folks who don't want to build and buy and sweat there. >> Do you see a difference between what's going on inside the U.S. and then in the rest of the world in terms of the Telcos, and services they're rolling out, ambitions, and where they want to play? >> There are clearly geographic differences when you get into telecom but it's not as simple as saying: x-geography is doing. You almost have to go operator by operator, there. >> Anything that you've seen here at the show. This is your first summit. You've been following, obviously, the space for a very long time. Anything you've seen here, either sessions, or vendors, or users doing interesting things, or anything that's excited you recently in areas that you're following and are interested? >> Yeah, the passion here for OpenStack is undeniable. You've got a lot of people who are committed to the community, they're aware of the networking challenges, and the significant strides we've made with OpenStack networking, but also where we need to go in the future. So, it's exciting to be here and fun to see everyone. >> Last thing I want to ask, Lee. Is there anything that, advice you want to give the community? Things that you heard of from users or you observed where we should mature over the next iteration of the solution set? 
>> I think, as a technology-driven community, it's always incumbent on the community to really explain the business benefits and talk about how this technology is really solving real-world problems. And it is, but it's just making that translation, sometimes, is challenging. >> Alright, Lee Doyle, great to catch up with you and, like yourself, thrilled to be here in Boston for a technology show. Hope to have more of these here, as always. It's our second week, back-to-back, here in Boston amongst all the other shows we've been doing at SiliconANGLE Media so, stay tuned. John and I have a few more interviews left as we get to wrap up three days of programming here from the OpenStack summit. Thanks for watching The Cube. (electronic music)

Published Date : May 10 2017


Andrius Benokraitis, Red Hat - Red Hat Summit 2017


 

>> Red Hat OpenShift Container Platform. >> Announcer: Live from Boston, Massachusetts, it's theCube, covering Red Hat Summit 2017. Brought to you by Red Hat. >> Welcome back to theCube's coverage, I'm Rebecca Knight, your host, here with Stu Miniman. Our guest now is Andrius Benokraitis, he is the Principal Product Manager for Ansible Network Automation at Red Hat, thanks so much Andrius. >> Thanks for having me, I appreciate it. >> This is your first time on the program. >> Andrius: First time. >> We're nice, we don't bite. >> Really nervous, so, okay. >> Start a little bit with, you're new to the company, relatively, >> Andrius: Relatively. >> a networking guy by background, can you give us a little bit about your background? >> Sure. I actually started at Red Hat in 2003, and then did about four or five jobs there over about 11 years. And then I jumped, went to a startup named Cumulus Networks for about two years, great crew, and now I'm at Ansible, been there since about December, working on the network automation use case for Ansible. >> Alright, so networking has a little bit of coverage here. I remember, you know, something like the OpenDaylight stuff, and actually there are a couple of Red Hatters that I interviewed at one show who ended up forming a company that got bought by Docker, so you know, there's definitely networking people. But maybe give us a broad view of where networking fits into this stuff that you're working on specifically. >> Yeah, sure thing. I think it's interesting to point out that as everything started on the compute side, and everything started to get disaggregated, the networking side has come along for the ride, per se. It's been a little bit behind. When we talk about networking, a lot of people just automatically think SDN. And we're actually trying to think a little bit lower level, so layer one, layer two, layer three, so switching, routing, firewalls, load balancers, all those things are still required in the data center. And when people started using Ansible, it started five years ago on the compute side, a lot of the people started saying, I need to run the whole rack, and I'm not a CCIE, and I don't really know what to do there, but I've been thrown in to do something. I'm a cloud admin, the new title, right? I have to run the network, so what do I do? I don't know anything about networking, I'm just trying to be good enough. Well, I know Ansible, so why don't I just treat switches like servers, and just treat them like what I know, they just have a lot more interfaces, but treat them that way. So a lot of the expertise came from the ground up with the open source model and said, this is the new use case. >> Well, JR Rivers, the founder of Cumulus, it's like, networking will just be a Linux operating model, you know, extended to the network, which always sounds like, hey, a company like Red Hat should be doing that kind of stuff. >> Exactly, it's interesting to see a Bash prompt in the networking world, right, it's familiar to a lot of people in the devops space, absolutely. >> So it's a very rapidly changing time, as we know, in this digital computing age. The theme of this conference is the power of the individual, celebrating that individual, the developer, empowering the developers to take risks, be able to fail, make changes, modify. You're not a developer, but you manage developers, you lead developers; how do you work on creating that context that Jim Whitehurst talked about today?
>> I think it starts with the true empowerment. The majority of the networking platforms are still proprietary and walled off, walled-off gardens; they're black boxes you can't really do much with, but you still have the ability to SSH into them, and you have familiar terms and concepts from the server side on the networking side. So as long as you have SSH into the box and you know your CLI commands to make changes, you can utilize that as part of Ansible to generate larger abstractions, to use the playbooks in order to build out your data center with the terms and the lexicon of YAML, the language of Ansible, things that you already know, and utilize that and go further. >> Can you speak to us a little bit about customers, you know, what's holding them back, how are you guys moving them forward to the more agile development space? >> Our customers are mostly brownfield, they're trying to extend what they already have. They have all their gear, they have everything they need, but they're trying to do things better. >> You don't find greenfield customers when it comes to the network side of the house, I mean, we've all got what we have, and we know that IT's always additive, so that's got to be a challenge. >> It's a huge challenge. >> Something you can help with, right? >> It's a huge challenge, and I think from the network operators and network engineers, a lot of them are saying, again, they're looking at their friends on the compute side who can spin up VMs and provision hardware instantaneously, but why does it have to take four to six weeks to provision a VLAN or get a VLAN added to a network switch? That sounds ridiculous. So a lot of the network engineers and operators are saying, well, I think I can be as agile as you, so we can actually work together, using a common framework, a common language with Ansible, and we can get things done, and we can get all of this stuff we hate doing off our plates, and we don't have to do that anymore; we can worry about more important things in our network, like designing the next big thing. If you want to do BGP, design your BGP infrastructure; if you want to, move from a layer two to a layer three or an SDN solution. >> I love that you talk about everybody, kind of the software wave and breaking down silos; network and storage people are like, oh my God, you're taking my job away. >> Exactly, completely, no, we're not taking your job. We are augmenting what you already have. We're giving you more tools in your tool belt to do better at your job, and that's truly it. People can be smarter, so if you want to add a VLAN, that can be a code snippet created by the sysadmin, it can be in Git, and then the network engineer can say, oh yeah, that looks good, and then I just say, submit. What we see today with some of the customers is, yeah, I want to automate, I really want to automate, and you say, great, let's automate. But then you peel back the onion, and you start seeing, well, how are you managing your inventory, how are you managing your endpoints? And they're like, I have a spreadsheet? And you're like, as a networking guy, I guess you... (excited clamoring) >> Networking is scary for a lot of people. >> It's super scary, yeah. >> So how do you break that down? >> You do what you can, you do it in small pieces; we're not trying to change the world, we're not trying to say you're going to go 100% devops in the network.
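To make that VLAN-as-a-code-snippet workflow concrete, a minimal Ansible playbook along those lines might look something like the sketch below. The group name, VLAN number, platform module, and connection settings are illustrative assumptions rather than anything stated in the interview, and device credentials are assumed to live in the inventory or group variables.

---
# Illustrative playbook: ensure a VLAN exists on a group of switches,
# pushing the same CLI lines an engineer would otherwise type by hand.
- name: Ensure VLAN 120 exists on the access switches
  hosts: access_switches          # assumed inventory group of network devices
  gather_facts: no
  connection: network_cli         # older Ansible releases used 'connection: local' with provider dicts
  tasks:
    - name: Push the VLAN definition as CLI lines
      ios_config:                 # cisco.ios.ios_config in collection-based releases
        parents: vlan 120
        lines:
          - name web-tier

# A matching INI-style inventory entry might look like:
# [access_switches]
# leaf01.example.com  ansible_network_os=ios  ansible_user=netops

A snippet like this can sit in Git, where the sysadmin proposes it and the network engineer reviews and approves it before anyone runs it against a switch, which is the workflow described above.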
Start small, start with something that, again, you really hate doing; if you want to change something, pick something really low risk, things you really hate doing. Just start small, with low-risk things. And then you can propagate that, and as you start getting confidence, and you start getting the knowledge, and the teams... everyone has to be bought in, by the way. This is not something where you just go in and say, go do it. You have to have everyone on board, the entire organization; it can't be bottom up, it can't be top down, everyone has to be on board. >> And Andrius, when I talk to people in the networking space, risk is the number one thing they're worried about. They buy on risk, they build on risk, and the problem we have with the networks is there are too many things that are manual. So if I'm typing in some, you know, 16-digit hexadecimal code... >> From Notepad, manually, you're copying and pasting. >> ...from like a spreadsheet. Copying and pasting, or gosh, so things like that, the room for error is too high. So those are the things that we need to be able to automate, so that we don't have somebody who's tired or just, wait, was that a one or an L or an I? I don't know. So we understand that it actually should be able to reduce risk, increase security, all the things that the business is telling you. >> All these network vendors have virtual instances. You can do all your testing and deployment, all your testing of your infrastructure, and you can do everything in Jenkins and have all your networking switches virtually; you can have your whole data center in a virtual environment if you want. So if you talk about lower risk, instead of just copying and pasting, and oh, was that a slash 24 or a slash 16, oops, I mean that looked right, but it was wrong, but did it go through test? It probably didn't. And then someone's going to get paged at three in the morning, and a router's down, an edge router's down, and you're toast. So enabling the full devops cycle of continuous integration, bringing in the same concepts that you have on the compute side, testing, changes, in a full cycle, and then doing that. >> You talked about the importance of buy-in and also the difficulties of getting buy-in. How much of that is an impediment to the innovation process? One of the things we've been talking about is, can big companies innovate? What are the challenges that you see, and how do you overcome them? >> That is the number one, that is the biggest issue right now in the network space, getting buy-in. Whether it's someone who has done it on their own, someone can just install Ansible and do something, and then deploy a switch, but if they leave the company and there's no remediation, if it's not in the MOP, if it's not in the Method of Procedure, no one knows about it. So it has to be part of your... you want to keep all the things you have, all the good things you have today with your checks and balances in the networking, and the CIOs and the people at the top have to understand, you can keep all that stuff, but you have to buy in to the automation framework, and everyone has to be on board to understand how it fits in, in order to go from where you are today to where you want to be. >> At the show here, what's exciting your customers? You know, give us a little bit of a viewpoint for people that are checking out your stuff, what to expect. >> Well, I think the one thing is they're not used to seeing it; they think it's black magic, they think it's just magic.
They're like, I can use the same things for everything? I say, yeah, you can. The development processes, the innovation in the community, you know, for example, if you want, say, a Cisco ACI module, it's in GitHub, it's in Cisco's GitHub, you can just go ahead and use that. Now we're starting to migrate those things into core. So the more that we get innovation in the community, and the more that we have the vendors and the partners driving it, and you're seeing that today, you know, we have F5 here, we have Cisco, we have Juniper, we have Avi, all those people, you know, they have certified platforms with Ansible, with Ansible Core, which is going to be integrated with Ansible Tower; we have full buy-in from them. They want to meet with us and say, how can we do better? How can we innovate with you to drive the next-gen data centers with our products? >> You talked about yourself as a boomerang employee; what is the value in that, and are you seeing a lot of colleagues who are bouncing around and then coming back from... >> Absolutely. I think pre-acquisition Ansible, the vast majority of the people, I believe, were ex-Red Hatters that went to Ansible. So it's really nice to come back home, and the people that left and came back already understand what the... >> And people feel that way, it's a coming home? >> Yeah, it's a coming home, it really is. They understand, you know, they came back, they understood the values of open source and the culture. Again, I started at Red Hat in 2003, and I see the great things, I see new people getting hired, and I see the same things I saw back then, 2003, 2004, with all the great things that people are doing, and the culture. You know, Jim's done a great job at keeping the culture how it is, even from way back then, when there were only 400 people when I started. >> Andrius, extend that culture. I think about the network community and open source, and you know, you talk about there being risk there, and you think about, I grew up with kind of an enterprise infrastructure mentality; it's like, don't touch it, don't play with it. We always joked, I've got everything there, really don't walk by it, and definitely, you know, some zip tie or duct tape's going to come apart. Are we getting better, is networking embracing this? >> Yes, for sure. I think the nice thing is you start seeing these communities pop up. You're starting to see network operators and engineers... they've historically been, if they don't know the answer, they won't go find it. They kind of may be shy, shy to ask for help, per se. >> If it wasn't on their certification, >> Exactly. >> they weren't going to do it. >> If it wasn't there, I'm not going to go... We're bringing them in, so we have, whether it's a Slack instance, there are networking communities, network automation communities, just for network automation. And there's one, there's an Ansible channel on the Network to Code Slack, that has almost 800 people on it. So they're coming, and now they have a place, they have a safe place to ask questions. They don't have to kind of guess or say, you know what, I'm not going to do that. And now they have a safe place for network engineers to get into the NetDevOps space. >> Another one of the sort of sub-themes of this summit is people's data strategy, and customers and vendors, how they're dealing with the massive amounts of data that their customers are generating. What is your data strategy, and how are you using data? >> So there are two aspects here.
The data can be the actual playbooks themselves, the golden master images, so you can pull configs from switches, and you can store them and use them for continuous compliance. You can say, you know, a rogue engineer might make a change, configuration drift happens, but you need to be able to make those comparisons to the other versions. So we're utilizing things like Git, so your data strategy can be in the cloud, or it can similarly be on your side, you can do Stash locally. For part of the operations piece, you can use that. A second piece is log aggregation, which is a big piece of Ansible. So when you actually want to make sure that a change happened, that it's been successful, and you want to ensure continuous compliance, all that data has to go somewhere, right? So you can utilize Ansible Tower as an aggregator, and you can go off using integrations like Splunk and some other log aggregation connectors with Ansible Tower to help drive your data strategy with the partners that are really driving it, the people that know data and data structures, so we can use them. >> And one of the other issues is building the confidence to make decisions with all the data; are you working on that too with your team? >> Yes, we are working with that, and that's part of the larger Tower organization, so it goes beyond networking. So whatever networking gets, everyone else gets. When we started developing Ansible Core and the community and Ansible Tower in-house, we think about networking and we think about Windows, there's a huge opportunity there, you know, we're talking about AWS in the cloud. So cloud instances, these are all endpoints that Ansible can manage, and it's not just networking, so we have to make sure that all of the pieces, all of the endpoints, can be managed directly. Everyone benefits from that. >> Andrius, thank you so much for your time, we appreciate it. >> Thanks again for having me. >> I'm Rebecca Knight for Stu Miniman, thank you very much for joining us. We'll be back after this.
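As a rough illustration of the config-as-data idea Andrius describes, the sketch below pulls the running configuration from each device so the copies can be kept under version control and compared when configuration drift happens. The group name and platform module carry over from the earlier sketch and are assumptions, not details from the interview; the Git and CI steps are left as comments rather than prescribed commands.

---
# Illustrative playbook: capture running configs so they can be versioned in Git
# (or Stash) and diffed later when configuration drift happens.
- name: Back up running configurations for compliance checks
  hosts: access_switches          # same assumed inventory group as the earlier sketch
  gather_facts: no
  connection: network_cli
  tasks:
    - name: Save a copy of the running config
      ios_config:
        backup: yes               # writes a timestamped copy under ./backup/ next to the playbook

# A scheduled job could then commit ./backup/ to version control, and the same
# playbooks can be run with 'ansible-playbook --check --diff' in a Jenkins job
# to preview changes before anything is pushed to production.

Run on a schedule, and exercised first against the vendors' virtual instances in CI as described above, backups like these provide the point-in-time copies needed to spot a rogue change before it pages anyone at three in the morning.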

Published Date : May 3 2017

SUMMARY :

Brought to you by Red Hat. Rebecca Knight and Stu Miniman talk with Andrius Benokraitis, Principal Product Manager for Ansible network automation at Red Hat, at Red Hat Summit 2017 in Boston. Andrius, a Red Hat boomerang who spent about two years at Cumulus Networks, explains how Ansible lets cloud admins treat switches like servers: as long as a device offers SSH and familiar CLI commands, playbooks written in YAML can build out the data center. Most customers are brownfield, so the advice is to get the whole organization bought in, start small with low-risk changes, and replace error-prone copy-and-paste work with automation that can be tested against virtual network instances in Jenkins. He points to vendor-certified modules from Cisco, Juniper, F5 and Avi, community spaces like the Ansible channel on the Network to Code Slack, and a data strategy built on storing device configs in Git for continuous compliance and aggregating logs through Ansible Tower.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jay Rivers | PERSON | 0.99+
Rebecca Knight | PERSON | 0.99+
Andrius Benokraitis | PERSON | 0.99+
2003 | DATE | 0.99+
Cisco | ORGANIZATION | 0.99+
Jim | PERSON | 0.99+
Jim Whitehurst | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
100% | QUANTITY | 0.99+
Cumulus Networks | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Ansible | ORGANIZATION | 0.99+
2004 | DATE | 0.99+
two aspects | QUANTITY | 0.99+
four | QUANTITY | 0.99+
Cumulus | ORGANIZATION | 0.99+
Boston, Massachusetts | LOCATION | 0.99+
first time | QUANTITY | 0.99+
one | QUANTITY | 0.99+
second piece | QUANTITY | 0.99+
today | DATE | 0.99+
Red Hatters | ORGANIZATION | 0.98+
16 digit | QUANTITY | 0.98+
six weeks | QUANTITY | 0.98+
Ansible Red Hat Network Automation | ORGANIZATION | 0.98+
Ansible Tower | ORGANIZATION | 0.98+
five years ago | DATE | 0.98+
Jenkins | TITLE | 0.98+
First time | QUANTITY | 0.98+
about 11 years | QUANTITY | 0.98+
Andrius | PERSON | 0.98+
Juniper | ORGANIZATION | 0.97+
400 people | QUANTITY | 0.97+
about two years | QUANTITY | 0.97+
Dockers | ORGANIZATION | 0.97+
Linux | TITLE | 0.96+
Windows | TITLE | 0.96+
Ansible Core | ORGANIZATION | 0.95+
Red Hat Summit 2017 | EVENT | 0.95+
Git | TITLE | 0.93+
about four five jobs | QUANTITY | 0.93+
Andrius | TITLE | 0.9+
almost 800 people | QUANTITY | 0.89+
three | DATE | 0.87+
YAML | TITLE | 0.86+
layer one | QUANTITY | 0.85+
GitHub | TITLE | 0.85+
theCube | ORGANIZATION | 0.84+
Avi | ORGANIZATION | 0.84+
one show | QUANTITY | 0.82+
layer three | QUANTITY | 0.77+
Hat | ORGANIZATION | 0.71+
layer two | QUANTITY | 0.7+
Stash | TITLE | 0.68+
F5 | ORGANIZATION | 0.68+
layer | QUANTITY | 0.67+
one thing | QUANTITY | 0.65+
Splunk | ORGANIZATION | 0.65+
about | DATE | 0.62+
OpenShift Container Platform | TITLE | 0.62+
Red | TITLE | 0.6+
three | OTHER | 0.59+