Sujal Das, Netronome - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE
>> Announcer: Live from Boston, Massachusetts, it's theCUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> And we're back. I'm Stu Miniman with my cohost, John Troyer, getting to the end of day two of three days of coverage here at the OpenStack Summit in Boston. Happy to welcome to the program Sujal Das, who is the chief marketing and strategy officer at Netronome. Thanks so much for joining us. >> Thank you. >> Alright, so we're getting through it, you know, really John and I have been digging into, you know, really where OpenStack is, talking to real people, deploying real clouds, where it fits into the multi-cloud world. You know, networking is one of those things that took a little while to kind of shake out. Seems like every year we talk about Neutron and all the pieces that are there. But talk to us, Netronome, we know you guys make SmartNICs. You've got obviously some hardware involved when I hear a NIC, and you've got software. What's your involvement in OpenStack and what sort of things are you doing here at the show? >> Absolutely, thanks, Stu. So, we do SmartNIC platforms, so that includes both hardware and software that can be used in commercial off-the-shelf servers. So with respect to OpenStack, I think the whole idea of SDN with OpenStack is centered around the data plane that runs on the server, things such as the Open vSwitch, or Virtual Router, and there are new data planes evolving and coming into the market. So we offload and accelerate the data plane in our SmartNICs, and because the SmartNICs are programmable, we can evolve the feature set very quickly. So in fact, we have software releases that come out every six months that keep up to speed with OpenStack and Open vSwitch releases. So that's what we do in terms of providing a higher-performance OpenStack environment, so to say.
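Sujal's point about offloading the Open vSwitch data plane comes down to keeping a match-action flow cache on the NIC, with misses punted back to the software slow path. A minimal, purely illustrative sketch in Python (the class and action names are invented here, not Netronome's API):

```python
# Minimal sketch of an OVS-style offloaded flow cache: exact-match
# 5-tuple lookup in the "NIC", with misses punted to the slow path.
class FlowCache:
    def __init__(self):
        self.table = {}   # 5-tuple -> action
        self.hits = 0
        self.misses = 0

    def install(self, five_tuple, action):
        # The slow path installs an entry after classifying the first packet.
        self.table[five_tuple] = action

    def lookup(self, five_tuple):
        if five_tuple in self.table:
            self.hits += 1
            return self.table[five_tuple]
        self.misses += 1
        return "punt_to_slow_path"   # kernel/userspace OVS handles it

cache = FlowCache()
flow = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)  # src, dst, proto, sport, dport
print(cache.lookup(flow))     # miss: punted to software
cache.install(flow, "forward_vm3")
print(cache.lookup(flow))     # hit: handled on the NIC
```

The "keep up to speed with OpenStack releases" point maps to the fact that only the installed actions change release to release; the hit/miss cache structure stays the same.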
>> Yeah, so I spent a good part of my career working on that part of the stack, if you will, and the balance is always like, right, what do you build into the hardware? Do I have accelerators? Is this something the software does? You know, usually in the short term hardware can take care of it, but in the long term, if you follow development cycles, software tends to win. So, where are we with where the functionality is, and what differentiates what you offer compared to others in the market? >> Absolutely. So we see a significant trend in terms of the role of a coprocessor to the x86 or evolving ARM-based servers, right, and the workloads are shifting rapidly. You know, with the need for higher performance and more efficiency in the server, you need coprocessors. So we make, essentially, coprocessors that accelerate networking. And that sits next to an x86 on a SmartNIC. The important differentiation we have is that we are able to pack a lot of cores on a very small form factor hardware device. As many as 120 cores that are optimized for networking. And by being able to do that, we're able to deliver very high performance at the lowest cost and power. >> Can you speak to us, just, you know, what's the use case for that? You know, we talk about scale and performance. Who are your primary customers for this? Is this kind of broad spectrum, or, you know, certain industries or use cases that pop out? >> Sure, so we have three core market segments that we go after, right? One is the NFV infrastructure market, where we see a lot of OpenStack use, for example. We also have the traditional cloud data center providers who are looking at acceleration with SmartNICs. And lastly the security market; that's kind of been our legacy market that we have grown up with. With security kind of moving away from appliances to more distributed security, those are our key three market segments that we go after.
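One hedged sketch of why many small cores suit networking: packets are hashed per flow onto cores, so every packet of a flow lands on the same core and per-flow state stays local, an RSS-style distribution. Only the 120-core figure comes from the interview; the function below is hypothetical:

```python
import zlib

NUM_CORES = 120  # the core count quoted in the interview

def core_for_flow(five_tuple):
    # Hash the flow key so all packets of one flow map to one core,
    # keeping that flow's state local to a single processing core.
    key = "|".join(map(str, five_tuple)).encode()
    return zlib.crc32(key) % NUM_CORES

flow = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)
print(core_for_flow(flow) == core_for_flow(flow))  # stable mapping: True
```

The design choice is throughput through parallelism: many modest cores each handling a share of flows, rather than a few large general-purpose cores.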
>> The irony is, in this world of cloud, hardware still matters, right? Not only does hardware matter because you're packing a huge number of cores into a NIC, but one of the reasons it matters now is because of the rise of this latest generation of solid-state storage, right? People are driving more and more IO. What are the trends that you're seeing in terms of storage IO, and IO in general, in the data center? >> Absolutely. So I think the large data centers of the world showed the way in terms of how to do storage, especially with SSDs, what they call disaggregated storage, essentially being able to use the storage on each server and aggregate those together into a pool of storage resources, and it's being called hyperconverged. I think companies like Nutanix have found a lot of success in that market. What I believe is going to happen in the next phase is hyperconvergence 2.0, where we're going to go beyond storage, which essentially addressed TCO and being able to do more with less; the next level would be hyperconvergence around security, where you'd have distributed security in all servers, and also telemetry. So basically your storage appliance is going away with hyperconvergence 1.0, but with the next generation of hyperconvergence we'd see the security appliances and the monitoring appliances sort of going away and becoming all integrated in the server infrastructure, to allow for better service levels and scalability. >> So what's the relationship between distributed security and then the need for more bandwidth at the back plane? >> Absolutely. So when you move security into the server, the processing requirements in the server go up. And typically with all security processing, it's a lot of what's called flow processing or match-action processing.
And those are typically not suitable for a general-purpose server like the ARM or the x86; that's where you need specialized coprocessors, kind of like the world of GPUs doing well in artificial intelligence applications. I think it's the same story here. When you have security, telemetry, et cetera being done in each server, you need special-purpose processing to do that at the lowest cost and power. >> Sujal, you mentioned that you've got solutions in the public cloud. Are those the big hyperscale guys? Is it service providers? I'm curious if you could give a little color there. >> Yes, so these are both tier one and tier two service providers in the cloud market, as well as the telco service providers, more on the NFV side. But we see a common theme here in terms of wanting to do security and things like telemetry. Telemetry is becoming a hot topic. Something called in-band telemetry is what we are actually demonstrating at our booth and also speaking about with some of our partners at the show, such as Mirantis, Red Hat, and Juniper. Doing all of these on each server is becoming a requirement. >> When I hear you talk, I think about here at OpenStack, we're talking about the hybrid or multi-cloud world, and especially with something like security and telemetry, I need to handle my data center, I need to handle the public cloud, and even when I start to get into that IoT edge environment, we know that the surface area for attack just gets orders of magnitude larger, therefore we need security that can span across those. Are you touching all of those pieces? Maybe give us a little bit of a dive into it. >> Absolutely, I think a great example is DDoS, right, distributed denial-of-service attacks. And today, you know, you have these kinds of attacks happening from computers, right. Look at the environment where you have IoT, right, you have tons and tons of small devices that can be hacked and could flood attacks into the data center.
Look at the autonomous car or self-driving car phenomenon, where each car is equivalent to about 2,500 Internet users. So the number of users is going to scale so rapidly, and the amount of attacks that could be proliferated from these kinds of devices is going to be so high, that people are looking at moving DDoS from the perimeter of the network to each server. And that's a great example that we're working on with a large service provider. >> I'm kind of curious how the systems take advantage of your technology. I can see some of it being transparent: if you just want to jam more bits through the system, then that should be pretty transparent to the app, and maybe even to the data plane and the virtual switches. But I'm guessing there are probably some API or other software-driven ways of doing it, like saying, hey, not only do I want you to jam more bits through there, but I want to do some packet inspection, or I want to do some massaging or some QoS, or I'm not sure what all these SmartNICs do. So is my model correct? Are those the different ways of interacting with your technology? >> You're hitting a great point. A great question, by the way, thank you. So the world has evolved from very custom, proprietary ways of doing things to more standard ways of doing things. And one thing that has kind of standardized, so to say, the data plane that does all of these functions that you mention, things like security or ACL rules or virtualization, is Open vSwitch; it's a great example of a data plane that has kind of standardized how you do things. And there are a lot of new open source projects happening in the Linux Foundation, such as VPP, for example. So each of these standardizes the way you do it, and then it becomes easier for vendors like us to implement a standard data plane and then work with the Linux kernel community in getting all of those things upstream, which we are working on.
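Returning to Sujal's DDoS example: distributed, per-server screening can be sketched as a per-source packet-rate check over a time window. The threshold and window below are made up purely for illustration:

```python
from collections import Counter

# Sketch of per-server DDoS screening: count packets per source over a
# window and flag sources exceeding a threshold. In a real deployment
# the window and threshold would be tuned, and the flagged sources
# would feed drop rules into the data plane.
THRESHOLD = 3  # illustrative packets-per-window limit

def flag_flooders(packet_sources, threshold=THRESHOLD):
    counts = Counter(packet_sources)
    return {src for src, n in counts.items() if n > threshold}

window = ["1.1.1.1"] * 5 + ["2.2.2.2"] * 2
print(flag_flooders(window))  # {'1.1.1.1'}
```

Doing this on every server, rather than at one perimeter box, is the "distributed security" shift the conversation keeps coming back to.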
And then having the Red Hats of the world actually incorporate those into their distributions, so that the deployment model becomes much easier, right. And one of the topics of discussion with Red Hat that we presented today was exactly that: how do you make this kind of scalability for security and telemetry more easily accessible to users through a Red Hat distribution, for example. >> Sujal, can you give us a little bit of an overview of the sessions that Netronome has here at the show, and what are the challenges that people are coming with that they're excited to meet with your company about? >> Absolutely, so we presented one session with Mirantis. Mirantis, as you know, is a huge OpenStack player. With Mirantis, we presented exactly the problem statement that I was talking about. So when you try to do security with OpenStack, whether it's stateless or stateful, your performance kind of tanks when you apply a lot of security policies, for example, on a per-server basis, which you can do with OpenStack. So when you use a SmartNIC, you essentially return a lot of the CPU cores to the revenue-generating applications, so essentially operators are able to make more money per server. That's a sense of what the value is, so that was the topic with Mirantis, who actually uses the Open Contrail virtual router data plane in their solution. We also have presented with Juniper, which is also-- >> Stu: Speaking of Open Contrail. >> Yeah, so Juniper's is another version of Contrail. So we're presenting a very similar product, but that's with the commercial product from Juniper. And then yesterday we presented with Red Hat. And that's based on Red Hat's OpenStack and their Open vSwitch-based products, where of course we are upstreaming a lot of the code bits that I talked about.
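The "return CPU cores to the revenue-generating applications" argument is back-of-envelope arithmetic. A sketch with purely hypothetical core counts (nothing here is a Netronome benchmark):

```python
# Illustrative numbers only: suppose a software datapath plus security
# policies consume 8 of a server's 24 cores; offloading that work to a
# SmartNIC returns most of those cores to tenant workloads.
total_cores = 24
cores_for_networking = 8   # hypothetical software datapath cost
cores_after_offload = 1    # hypothetical residual control-plane work

before = total_cores - cores_for_networking   # 16 sellable cores
after = total_cores - cores_after_offload     # 23 sellable cores
gain = (after - before) / before
print(f"{gain:.0%} more sellable cores per server")
```

The real ratio depends entirely on the policy load; the point is only that every core freed from packet processing is a core an operator can rent out.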
>> But the value proposition is uniform across all of these vendors, which is: when you do storage, sorry, security and telemetry and virtualization, et cetera, in a distributed way across all of your servers and get rid of all of your appliances, you get better scale. But to achieve the efficiencies in the server, you need a SmartNIC such as ours. >> I'm curious, is the technology usually applied at the per-server level, or is there a rack-scale component too that needs to be there? >> It's on a per-server basis, so it's used like any other traditional NIC. So it looks and feels like any other NIC, except that there are more processing cores in the hardware and there's more software involved. But again, all of the software gets tightly integrated into the OS vendor's operating system and then the OpenStack environment. >> Got you. Well, I guess you can never be too rich, too thin, or have too much bandwidth. >> That's right, yeah. >> Sujal, share with our audience any interesting conversations you had, or other takeaways you want people to have from the OpenStack Summit. >> Absolutely, so without naming specific customer names, we had one large data center service provider in Europe come in, and their big pain point was latency. Latency going from the VM on one server to another server. And that's a huge pain point, and their request was to be able to reduce that by 10x at least. And we're able to do that, so that's one use case that we have seen. The other again relates to telemetry, you know, how... This is a telco service provider, so as they go into 5G and they have to service many different applications, they have what they call network slices. One slice servicing the autonomous car applications. Another slice managing the video distribution, let's say, with something like Netflix, video streaming. Another one servicing the cellphone, something like a phone like this, where the data requirements are not as high as some TV sitting in your home.
So they need different kinds of SLAs for each of these services. How do they slice and dice the network, and how are they able to actually assess the rogue VM, so to say, that might cause performance to go down and affect SLAs? Telemetry, or what is called in-band telemetry, is a huge requirement for those applications. So I'm giving you two: one is a data center operator, you know, infrastructure as a service, that just wants lower latency. And the other one is interested in telemetry. >> So, Sujal, final question I have for you. Look forward a little bit for us. You've got your strategy hat on. Netronome, OpenStack in general, what do you expect to see as we look throughout the year? Maybe if we're, you know, sitting down with you in Vancouver a year from now, what would you hope that we as an industry and as a company have accomplished? >> Absolutely, I think, you know, you'd see a lot of these products, so to say, that enable seamless integration of SmartNICs become available on a broad basis. I think that's one thing I would see happening in the next one year. The other big event is the whole notion of hyperconvergence that I talked about, right. I would see the notion of hyperconvergence move away from one of just storage focus to security and telemetry, with OpenStack kind of addressing that from a cloud orchestration perspective. And also, with each of those requirements, software-defined networking, which is being able to evolve your networking data plane rapidly at run time. These are all going to become mainstream. >> Sujal Das, pleasure catching up with you. John and I will be back to do the wrap-up for day two. Thanks so much for watching theCUBE. (techno beat)
Niel Viljoen, Netronome & Nick McKeown, Barefoot Networks - #MWC17 - #theCUBE
(lively techno music) >> Hello, everyone, I'm John Furrier with theCUBE. We are here in Palo Alto to showcase a brand new relationship and technology partnership. We're here with Niel Viljoen, who's the CEO of Netronome. Did I get that right? (Niel mumbles) I almost think that I will let you say it. And Nick McKeown, who's Chief Scientist and Chairman and the co-founder of Barefoot Networks. Guys, welcome to the conversation. Obviously, a lot going on in the industry. We're seeing massive change in the industry. Certainly, digital transformation is the buzzword the analysts all use, but, really, what that means is the entire end-to-end digital space, with networks all the way to the applications, is completely transforming. Network transformation is not just moving packets around; it's wireless, it's content, it's everything in between that makes it all work. So let's talk about that, and let's talk about your companies. Niel, talk about your company, what you guys do at Netronome, and Nick, same for you, for Barefoot. Start with you guys. >> So at Netronome, our core focus lies around SmartNICs. What we mean by that is, these are elements that go into the network servers, which in this sort of cloud and NFV world get used for a lot of network services, and that's our area of focus. >> Barefoot is trying to make switches that were previously fixed function, turning them into something that those who own and operate networks can program for themselves, to customize them or add new features or protocols that they need to support.
>> And Barefoot, you're walking in the park, you don't want to step in any glass and get a cut. I like that, love the name of the company, but it brings out the real issue of this I/O world. When I hear NIC, it throws back to the old-school mindset of just network cards in servers, but if you take that out on the Internet now, that is the I/O channel engine, real time. It's certainly a big part of the edge device, whether that's a human or a device, IoT to mobile, and then moving it across the network, and by the way, there are multiple networks. So is this kind of where you guys are showcasing your capabilities? >> So, fundamentally, you need both sides of the line, if I could put it that way. So we are on the server side, and specifically, also giving visibility from virtual machine to virtual machine, also called VNF to VNF, in a service chaining mechanism, which is what a lot of the NFV customers are deploying today. >> Really, as the entire infrastructure upon which these services are delivered moves into software, more of it is created by those who own and operate these services for themselves; they either create it, commission it, buy it, download it, and then modify it to best meet their needs. That's true whether it's in the network interface portion or in the switch, and we've seen it happen in the control plane, and now it's moving down, so that they can define all the way down to how packets are processed in the NIC and in the switches. And when they do that, they can then add in their ability to see what's going on in ways that they've never been able to do before, so we really think of ourselves as providing that programmability and that flexibility down, all the way to the way that the packets are processed. >> And what's the impact, Nick? Talk about the impact, then take us through like an example.
You guys are showcasing your capabilities to the world, and so what's the impact? Give us an example of what the benefit would be. I mean, what goes on with this instrumentation? Certainly, everyone wants to instrument everything. >> Niel: Yes. >> Nick: Yeah. >> But what's the practical benefit? I mean, who wins from this and what's the real impact? >> Well, you know, in days gone by, if you're a service provider providing services to your customers, then you would typically do this out of vertically integrated pieces of equipment that you get from equipment vendors. It's closed, it's proprietary; they have their own sort of NetFlow, sFlow, whatever the mechanism that they have for measuring what's going on, and you had to learn to live with the constraints of what they had. As this all gets kind of disaggregated and broken apart, and the owner of the infrastructure gets to define the behavior in software, they can now chain together the modules and the pieces that they need in order to deliver the service. That's great, but now they've lost that proprietary measurement, so now they need to introduce the measurement so that they can get greater visibility. This actually has created a tremendous opportunity, and this is what we're demonstrating: if you can come up with a uniform way of doing this, so that you can see, for example, the path that every packet takes, the delay that it encounters along the way, the rules that it encounters that determine the path that it takes, and if it encounters congestion, who else contributed to that congestion, so we know who to go blame, then by giving them that flexibility, they can go and debug systems much more quickly, and change them and modify them. >> It's interesting, it's almost like the aspirin, right? The headache now is, I have good proprietary technology for point measurement and solutions, but yet I need to manage multiple components.
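The visibility Nick describes, seeing each packet's path, per-hop delay, and matched rules, can be sketched as telemetry records appended to the packet at every hop it crosses. This is a hedged illustration of the idea only; the field names are invented, not Barefoot's actual in-band telemetry format:

```python
# Sketch of in-band telemetry: each element on the path appends
# (hop id, queue delay, matched rule) to metadata carried with the
# packet, so the path and delays can be read back at the end
# instead of guessed at from separate logs.
def traverse(packet, hops):
    for hop_id, delay_us, rule in hops:
        packet["telemetry"].append(
            {"hop": hop_id, "delay_us": delay_us, "rule": rule})
    return packet

pkt = {"payload": b"...", "telemetry": []}
path = [("nic0", 3, "acl-42"), ("vm1", 120, "fwd"), ("switch2", 9, "fwd")]
pkt = traverse(pkt, path)

print([h["hop"] for h in pkt["telemetry"]])   # ['nic0', 'vm1', 'switch2']
print(sum(h["delay_us"] for h in pkt["telemetry"]), "us end to end")
```

The operational win is that the blame question ("where did the delay come from?") becomes a lookup on data the packet itself carried, rather than a cross-vendor log correlation exercise.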
>> I think there's an add-on to what Nick said, which is the whole key point here: the programmability. Because there's data, and then there's information. Gathering lots and lots of telemetry data is easy. (John chuckles) The problem is you need to have it at all points, which is Nick's key point, but the programmability allows the DevOps person, in other words, the operational people within the cloud or carrier infrastructure, to actually write code that identifies and isolates the information, rather than the data, that they need. >> So is this customer base for you guys the carriers, the service providers? Who's your target audience? >> Yep, I think it's service providers who are applying the NFV technologies, in other words, the cloud-like technologies. I always say the real big story here is the cloud technologies rather than just the cloud. >> Yeah, yeah. >> And how that's-- >> And same for you guys, you guys have this joint, same target customer? >> Yeah, I don't think there's any disagreement. >> Okay. (laughs) Well, I want to keep drilling into the whole aspirin analogy, 'cause it's one of the things that you brought up with the programmability, because NFV has been that, you know, saving grace, it's been the Holy Grail for how many years now, and you're starting to see the tide shifting now towards where NFV is not a silver bullet, so to speak, but it is actually accelerating some of the change, and I always like to ask people, "Hey, are you an aspirin or are you a vitamin?" One guest told me, "I'm a steroid. We make things grow faster." I'm like, "Okay," but in a way, the aspirin solves a problem, like immediate headaches, so it sounds like a lot of the things that you mentioned.
That's an immediate benefit right there with the instrumentation, done in an open, multi-component, multi-vendor way, the benefits of proprietary but open. But the point about programmability gives a lot of headroom around that vitamin, that steroid piece, where it's going to allow for automation, which brings up an interesting thing: that's customizable automation, meaning you can apply software policy to it. Can you tease that out? Is that an area that you guys are talking about? >> I think the first thing that we should mention is probably the new language called P4. Nick will be too modest to state it, but I think Nick has been a key player, along with his team and many other people, in the definition and the creation of this language, which allows the programmability of all these elements. >> Yeah, just drill down, I mean, toot your own horn here, let's get into it, because what is it, what's the benefit, what is the real value, what's the upshot of P4? >> Yeah, the way that hardware that processes packets, whether it's in network interface cards or in switching, has been defined in the past has been by chip designers. At the time that they defined the behavior, they're writing Verilog or VHDL, and as we know, people that design chips don't operate big networks, so they don't really know what capabilities to put in-- >> They're good at logic in a vacuum but not necessarily in the real world, right? Is that what you (laughs). >> So what we-- >> Not to insult chip designers, they're great, right?
>> So what we've all wanted to do for some time is to come up with a uniform language, a domain-specific language, that allows you to define how packets will be processed in interfaces, in switches, in hypervisor switches inside the virtual machine environments, in a uniform way, so that someone who's proficient in that language can then describe a behavior that can operate in different parts of the chained services, so that they can get the same behavior, a uniform behavior, so that they can see the network-wide, the service-wide behavior in a uniform way. The P4 language is merely a way to describe that behavior, and then both Netronome and Barefoot, we each have our own compilers for compiling that down to the specific processing element that operates in the interfaces and in the switches. >> So you're bridging the chip layer with some sort of abstraction layer to give people the ability to do policy programming. So all the heavy lifting stuff in the old network days was configuration management, I mean, that was like hard stuff, and now you've got dynamic networks. It even gets harder. Is this kind of where the problem goes away? And this is where automation comes in. >> Exactly, and the key point is the programmability versus configurability. >> John: Yeah. >> In a configurable environment, you're always trying to pre-guess what your customer's going to try to look at. >> (chuckles) Guessing's not good in the networking area. That's not good for five nines.
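The match-action abstraction that P4 describes, tables of header-field matches mapped to actions, can be approximated in Python for intuition. This is a hedged illustration of the table/action model only, not P4 syntax and not either vendor's compiler output:

```python
# Sketch of a P4-style match-action table: entries pair a match on
# header fields with an action, applied in priority order; an empty
# match acts as the default (catch-all) entry.
class Table:
    def __init__(self):
        self.entries = []   # (priority, match dict, action fn)

    def add(self, priority, match, action):
        self.entries.append((priority, match, action))
        self.entries.sort(key=lambda e: -e[0])   # highest priority first

    def apply(self, pkt):
        for _, match, action in self.entries:
            if all(pkt.get(k) == v for k, v in match.items()):
                return action(pkt)
        return pkt                                # no-op on table miss

def set_egress(port):
    def action(pkt):
        pkt["egress_port"] = port
        return pkt
    return action

forwarding = Table()
forwarding.add(10, {"dst_ip": "10.0.0.2"}, set_egress(2))
forwarding.add(1, {}, set_egress(0))   # default entry

out = forwarding.apply({"dst_ip": "10.0.0.2"})
print(out["egress_port"])  # 2
```

The operator programs the tables and actions; the vendor compiler's job, as Nick describes it, is to map exactly this kind of description onto the NIC or switch pipeline.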
In the new world that we're in now, the customer actually wants to define exactly what the information is they want to extract-- >> John: I wanted to get-- >> Which is your whole question around the rules and-- >> So let me see if I can connect the dots here. In the showcase, you guys are going to show this programmability, this kind of efficiency at the layer of bringing instrumentation and then using that information and/or data, depending on how it's sliced and diced via the policy and programmability, but this becomes cloud-like, right? So when you start thinking about cloud, where service providers are under a lot of pressure to go cloud because Over-The-Top right now is booming, you're seeing a huge content and application market that's super ripe for these kinds of services. They need that ability to have the infrastructure be like software, so infrastructure as code, as the term goes in our DevOps world, but that has been more data center kind of language, with developers. Is it going the same trajectory in the service provider world? Because you have networks, I mean, they're bigger, higher scale. What are some of those DevOps dynamics in your world? Can you talk about that and share some color on that? >> I mean, the way in which large service providers are starting to deliver those services is out of something that looks very much like the cloud platform. In fact, it could be exactly the same technology. The same servers, the same switches, same operating systems, a lot of the same techniques. The problem they're trying to solve is slightly different. They're chaining together the means to process a sequence of operations.
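"Chaining together the means to process a sequence of operations" can be sketched as plain function composition, with each VNF as a function over a packet. The firewall and NAT behaviors below are invented examples, not any particular vendor's functions:

```python
# Sketch of VNF service chaining: each function takes and returns a
# packet (a dict here); the chain is just ordered composition.
def firewall(pkt):
    pkt["passed"] = pkt.get("dst_port") != 23   # e.g., drop telnet
    return pkt

def nat(pkt):
    if pkt["passed"]:
        pkt["src_ip"] = "203.0.113.7"           # illustrative public IP
    return pkt

def run_chain(pkt, chain):
    for vnf in chain:
        pkt = vnf(pkt)
    return pkt

out = run_chain({"src_ip": "10.0.0.5", "dst_port": 80}, [firewall, nat])
print(out["src_ip"])  # 203.0.113.7
```

The composition is simple; the operational hell Nick goes on to describe is that each function may come from a different author, so when the chain misbehaves, no single vendor's tooling can see the whole path.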
A little bit like how the cloud operators are moving towards microservices that get chained together, so there are a lot of similarities here, and the problems they face are very similar, but think about the hell that this potentially creates for them. It means that we're giving them so much rope to hang themselves, because everything now has got to be put together in a way that's coming from different sources, written and authored by different people with different intent, or from different places across the Internet, and so being able to see and observe exactly how this is working is even more critical than-- >> So I love that rope to hang yourself analogy, because a lot of people will end up breaking stuff. As Mark Zuckerberg's famous quote goes, "Move fast, break stuff," and then, by the way, when they hit 100 million users, the slogan went to "Move fast, be reliable," so he got on the five nines bandwagon pretty quick. But it's more than just the instrumentation. The key that you're talking about here is that they have to run those networks in really high reliability environments. >> Nick: Correct. >> And so that begs the challenge of, okay, it's not just as easy as throwing a Docker container at something. I mean, that's what people are doing now, like, hey, I'm going to just use microservices, that's the answer. They've still got stuff under the hood, underneath the microservices. You have orchestration challenges, and this kind of looks and feels like the old configuration management problems but moved up the stack, so is that a concern in your market as well? >> So I think that's a very, very good point that you make, because the carriers, as you say, tend to be almost more dependent on absolute reliability and, very importantly, performance. In other words, they need to know that this is going to be 100 gigs, because that's what they've signed up the SLA with their customer for.
(John chuckles) It's not going to be almost 100 gigs 'cause then they're going to end up paying a lot of penalties. >> Yeah, they can't afford breakage. They're OpsDev, not DevOps. Which comes first in their world? >> Yes, so the critical point here is just that this is where the demo that we're doing comes in, which shows the ability to capture all this information at line rate, at very high speeds in the switches. (mumbles) >> So let's talk about this demo you're doing, this showcase that you guys are providing and demonstrating to the marketplace, what's the pitch, I mean what is it, what's the essence of the insight of this demo, what's it proving? >> So I think it's good to think about a scenario in which you would need this, and then this leads into what the demo would be. Very common in an environment like the VNF kind of environment, where something goes wrong, they're trying to figure out very quickly, who's to blame, which part of the infrastructure was the problem? Could it be congestion, could it be a misconfiguration? (John laughs) >> Niel: Who's flow-- >> Everyone pointing fingers at the other guy. >> Nick: The typical way-- >> Two days later, what happened, really? >> Typical way that they do this, is they'll bring the people that are responsible for the compute, the networking, and the storage quickly into one room, and say, "Go figure it out." The people that are doing the compute, they'll be modifying and changing and customizing, running experiments, isolating the problem. So are the people that are doing storage. They can program their environment. In the past, the networking people had ping and traceroute. Those are the same tools that they had 20 years ago. 
(John chuckles) What we're doing is changing that by introducing the means where they can program and configure, run different experiments, run different probes, so that they can look and see the things that they need to see, and in the demo in particular, you'll be able to see the packets coming in through a switch, through a NIC, through a couple of VMs, back out through a switch, and then you can look at that packet afterwards, and you can ask questions of the packet itself, something you've never been able to-- >> It's the ultimate debugger. Basically, it's the ultimate debugger. >> Nick: That's right. Go to the packet, say-- >> Niel: Programmable debugger. >> "Which path did you take? "How long did you wait at each NIC, "at each VM, at each switch port as you went through? "What are the rules that you followed "that led you to be here, and if you encountered "some congestion, whose fault was it? "Who did you share that queue with?" so we can go back and apportion the blame-- >> So you get a multiple dimension of path information coming in, not just the standard stovepiped tools-- >> Nick: That's right. >> And then, everyone compares logs and then there's all these holes in it, people don't know what the hell happened. >> And through the programmability, you can isolate the piece of the information-- >> So the agile experimentation is the key there, I think, is that what you're getting at? You can say, you can really get down and dirty into a duplication environment and also run these really fast experiments versus kind of in theory or in-- >> Exactly, which is what, as Nick said, is exactly what people on the server side and on the storage side have been able to do in the past. >> Okay so for people watching that are kind of getting into this and people who aren't, just walk me through the impact and the consequences of not taking this approach, vis-a-vis today's available techniques. 
>> If you wanted to try and figure out who it was that you were sharing a queue with inside an interface or inside a switch, you have no way to do that today, right? No means to do that, and so if you wanted to be able to say it's that aggressive flow over there, that malfunctioning service over there, you've got no means to do it. As a consequence, the networking people always get the blame because they can't show that it wasn't them. But if you can say, I can see, in this queue, there were four flows going through or 4,000 flows, and one of them was really badly behaved, and it was that one over there and I can tell you exactly why its packets were ending up here, then you can immediately go in and shut that one down. They have no way that they go and randomly shut-- >> Can I get this for my family, I need this for my household. I mean, I'm going to use this for my kids. I mean I know exactly the bad behavior, I need to prove it. No, but this is what the point is, is this is fast. I mean you're talking speed, too, as another aspect-- >> Niel: It's all about the-- >> What's the speed lag on approach versus taking the old, current approach versus this joint approach you guys are taking? What's the, give me an estimate on just ballpark numbers-- >> Well there's two aspects to the speed. One is the speed at which it's operating, so this is going to be in the demo, it's running at 40 gigabits per second, but this can easily run, for example, in the Barefoot switch, it'll run at 6 terabits per second. The interesting thing here is that in this entire environment, this measurement capability does not generate a single extra packet. All of it is self-contained in the packets that are already flowing. >> So there are no latency issues on running this in production. >> If you then wanted to change the behavior, and needed to go and modify what was happening in the NIC, modify what was happening in the switch, you can do that in minutes. 
So that you can say-- >> Now the time it takes for a user now to do this, let's go to that time series. What does that look like? So current method is get everyone in a room, do these things, are we talking, you know. >> I think that today, it's just simply not possible. >> Not possible. >> So it's, yes, new capability. >> I think is the key issue. >> So this is a new capability. >> This is a new capability and exactly as Nick said, it's getting the network to the same level of ability that you always had inside the-- >> So I got to ask you guys, as founders of your companies because this is one of those things that's a great success story, entrepreneurs, you got, it's not just a better mousetrap, it's revolutionary in the sense that no one's ever had the capability before, so when you go to events like Mobile World Congress, you're out in the field, are you shaking people like, "You need me! "I need to cut the line and tell you what's going on." I mean, you must have a sense of urgency that, is it resonating with the folks you're talking to? I mean, what are some of the conversations you're having with folks? They must be pretty excited. Can you share any anecdotal stories? >> Well, yup, I mean we're finding, across the industry, not only in the service providers, the data center companies, Wall Street, the OEM box vendors, everybody is saying, "I need," and have been saying for a long time, "I need the ability to probe into the behavior "of individual packets, and I need whoever is owning "and operating the network to be able to customize "and change that." They've never been able to do that. The name of the technique that we use is called In-band Network Telemetry or INT, and everybody is asking for it now. Actually, whether it's with the two of us, or whether they're asking for it more generally, this is, this is-- >> Game changer. >> You'll see this everywhere. >> John: It's a game changer, right? >> That's right. >> Great, all right, awesome. 
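The queue-sharing diagnosis described above, spotting the one badly behaved flow among 4,000 in a queue, boils down to aggregating per-packet telemetry by queue and flow. A minimal sketch in Python, with hypothetical record fields (a real deployment would read these from switch telemetry rather than an in-memory list):

```python
from collections import defaultdict

def queue_aggressors(samples):
    """samples: iterable of (queue_id, flow_id, bytes_observed) telemetry
    records. Returns, per queue, the flow contributing the most bytes,
    i.e. the likely aggressor to rate-limit or shut down."""
    totals = defaultdict(lambda: defaultdict(int))
    for queue, flow, nbytes in samples:
        totals[queue][flow] += nbytes
    return {q: max(flows, key=flows.get) for q, flows in totals.items()}

# Two flows share queue 1; flowA dominates it by bytes.
samples = [
    (1, "flowA", 1500), (1, "flowB", 64),
    (1, "flowA", 1500), (2, "flowC", 512),
]
print(queue_aggressors(samples))  # flowA named for queue 1, flowC for queue 2
```

The per-queue aggregation is the point: once each packet carries the ID of the queue it sat in, blame becomes a grouping problem instead of a guessing game.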
Well, final question is, what are the business benefits for them because I can imagine you get this nailed down with the proper, the ability to test new apps because obviously, we're in a Wild West environment, tsunami of apps coming, there's always going to be some tripwires in new apps, certainly with microservices and APIs. >> I think the general issue that we're addressing here is absolutely crucial to the successful rollout of NFV infrastructures. In other words, the ability to rapidly change, monitor, and adapt is critical. It goes wider than just this particular demo, but I think-- >> It's all apps on the service provider. >> The ability to handle all the VNFs-- >> Well, in the old days, it was simply network spikes, tons of traffic, I mean, now you have, apps could throw off anomalies anywhere, right? You'd have no idea what the downstream triggers could be. >> And that's the whole notion of the programmable network, which is critical. >> Well guys, any information where people can get some more information on this awesome opportunity? You guys' sites, want to share quick web addresses and places people get whitepapers or information? >> For the general P4 movement, there's P4.org. P, the number four, .org. Nice and easy. They'll find lots of information about the programmability that's possible by programming the forwarding plane, which is what both of us are doing. In-band Network Telemetry, you'll find descriptions there, P4 programs, and whitepapers describing that, and of course, on the two company websites, Netronome and Barefoot. >> Right. Nick and Niel, thanks for spending some time sharing the insights and congratulations. We'll keep an eye out for it, and we'll be talking to you soon. >> Thank you. >> Thank you very much. >> This is theCUBE here in Palo Alto. I'm John Furrier, thanks for watching. (lively techno music)
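The In-band Network Telemetry technique named in the interview works by having each switch append a small metadata record to the packet as it passes, which is what lets you "ask questions of the packet" afterwards. A rough Python sketch of reading such a per-hop stack back out, assuming an illustrative fixed field layout (the actual INT wire format is specified at P4.org and differs):

```python
import struct

# Illustrative per-hop record: switch ID, ingress port, hop latency (ns),
# and queue depth seen at egress. The real INT header layout differs.
HOP_FMT = "!IIII"
HOP_SIZE = struct.calcsize(HOP_FMT)  # 16 bytes per hop

def decode_int_hops(metadata: bytes):
    """Decode the stack of per-hop records a packet accumulated in flight."""
    hops = []
    for off in range(0, len(metadata), HOP_SIZE):
        switch_id, port, latency_ns, qdepth = struct.unpack_from(
            HOP_FMT, metadata, off)
        hops.append({"switch": switch_id, "ingress_port": port,
                     "hop_latency_ns": latency_ns, "queue_depth": qdepth})
    return hops

# A packet that crossed two hops, waiting longest (and queuing) at switch 1.
blob = struct.pack(HOP_FMT, 1, 3, 1200, 7) + struct.pack(HOP_FMT, 2, 9, 450, 0)
slowest = max(decode_int_hops(blob), key=lambda h: h["hop_latency_ns"])
print(slowest["switch"])  # the hop where the packet waited longest
```

Because the measurements ride inside packets that are already flowing, no extra probe traffic is generated, which matches the "not a single extra packet" claim in the discussion.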
Day 2 Wrap - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE
>> Announcer: Live from Boston, Massachusetts, it's the CUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> Welcome back, I'm Stu Miniman. And if I'm sitting on this side of the table with the long hallways behind me, it means we're here for the wrap of the second day. John Troyer's here, day two of three days, theCUBE here at OpenStack Summit. John, I feel like you're building energy as the show goes on, kind of like the show itself. >> Yeah, yeah, getting my footing here. Again, my first summit. It was a good second day, Stu, I think we made it through. We had some fascinating stuff. >> Yeah, fascinating stuff. Before we jump into some of the analysis here, I do want to say you know, first and foremost, big thanks to the foundation. Foundations themselves tend to get, they get beat up some, they get loved some, without the OpenStack Foundation, we would not be here. Their support for a number of years, our fifth year here at the show, as well as the ecosystem here, really interesting and diverse and ever-changing ecosystem, and that fits into our sponsors too. So Red Hat's our headline sponsor here. We had Red Hat Summit last week and two weeks, lots of Red Haters, and now lots of Stackers here. Additional support brought to us by Cisco, by Netronome, and by Canonical. By the way, no secret, we try to be transparent as to how we make our money. If it's a sponsored segment, it lists "sponsored by" that guest here, and otherwise it is editorial. Day three actually has a lot of editorial, it means we have a lot of endusers on the program. We do have vendors, cool startups, interesting people, people like Brian Stevens from Google. When I can get access to them, love to have it here. So big shout out as always. Content, we put it out there, the community, try to have it. Back to the wrap. 
John, you know we've kind of looked at some of the pieces here, the maturity, you know where it fits in the hybrid and multi cloud world. What jumped out at you as you've been chewing on day two? >> Well, my favorite thing from today, and we talked about it a couple times just in passing it keep coming up, is OpenStack on the edge. So the concept of, that the economics works today, that you can have a device, a box, maybe it's in your closet somewhere, maybe it's bolted to a lamppost or something, but in the old days it would have run on some sort of proprietary chip, maybe an embedded Linux. You can put a whole OpenStack distribution on there, and when you do that, it becomes controllable, it becomes a service layer, you can upgrade it, you can launch more services from there, all from a central location. That kind of blew my mind. So that's my favorite thing from today. I finally got my arms around that I think. >> Okay, great, and we saw Beth Cohen from Verizon was in the day one keynote. We're actually going to have her on our program for the third day. And right, teasing out that edge, most of it, telecommunications is a big discussion point here. I understand why. Telcos spend a lot of money, they are at large scale, and that NFV use case has driven a lot of adoption. So Deutsche Telekom is a headline sponsor of the OpenStack Foundation, did a big keynote this morning. AT&T's up on the main stage, Verizon's up on the main stage, you know Red Hat and Canonical all talk about their customers that are using it. You know, we just talked to Netronome about telecommunications. Everybody here, if you're doing OpenStack, you probably have a telco place because that's where the early money is and it tends to be, there's the network edge, then there's the IoT edge, and some of the devices there. So it was was one of the buzzy things going in and definitely is one of the big takeaways from the show so far. 
>> Well, Stu, I also think it's a major proof point for OpenStack, right. Bandwidth needs are not going down, that's pretty clear, with all the things you mentioned. Throughput is going to have to go up, services are going to have to be more powerful, and so all these different connected devices and qualities of service and streaming video to your car. So if OpenStack can build a backplane, a data plane for OpenStack that can do that, which it looks like they are doing, right, that's a huge proof point downstream from the needs of a telco, so I think that's super important for OpenStack that it's usable enough and robust enough to do that and that's one of the reasons I think it gets talked about so much. The nice thing is that this year, compared to previous years of OpenStack Summit, telco is not the only game in town, right. Enterprise also got a lot of play and there are a lot of use cases there too. 
>> Yeah, I mean, even in our Canonical discussion with the product manager for their OpenStack distribution, right, containers are all over that, right, containers are just a way of packaging, there are some really interesting development pipelines that are now very popular and being talked about and built on in the container space. But containerization actually can come into play multiple points in the stack. Like you said, the Canonical distribution gets containerized and pushed out, it's a great way of compartmentalizing and upgrading, that's what the demo on stage today was about. Also, just with a couple of very short scripts, containerizing and pulling down components. So I think again, my second favorite thing after the edge today was just showing that actually containers and OpenStack mix pretty well. They're really not two separate things. >> Right, and I think containerization is one of those things that enables that multi cloud world. We talked in a number of segments today, everything from Kubernetes with Brian Stevens as to how that enables that. Reminds me at Red Hat Summit last week we talked a lot about OpenShift. OpenShift's that layer on top of OpenStack and sits at that application level layer to allow you to span between public or private clouds, and we need that kind of thing to be able to enable some real multi or hybrid cloud environments. >> Yeah I mean, containers and in fact that Kubernetes layer may end up being the thing that drives more OpenStack adoption. >> Yeah, and the other thing that's been interesting, just hallway conversations, bumping into people we know, you know trying to walk around the show a little bit, as to people that are finally getting their arms around, okay, OpenStack from a technology standpoint has matured and you know they either need it to clean up what was their internal cloud or building something out, so real deployments. We talked about it yesterday in the close though. 
They're real customers doing real deployments. It's heartening to hear. >> Yeah I mean, one of those conversations, I ran into somebody at a hyperscale company, a friend of mine, and you know they are building out, internal OpenStack clouds to use for real stuff, right. >> But wait, hyperscale, come on, John, can we give it away? Is this something we have on our phone or something we'll buy and use? >> One of those big folks. 
With everything that's happening with the OpenStack Days, the Kubernetes, Cloud Foundry, Ceph, other open source projects, how those all fit together. It feels like a more robust, full position, as opposed to just building a software version of what we were doing in the data center before. >> My impression was the conversation at times had been a little more internally focused, right, it's a world unto its own. Here at this summit, they're definitely acknowledging there's an ecosystem, there's a landscape, it all has to interoperate. Usability's a part of that, and then interoperability and componentization are a part of that as well. >> The changing world of applications. We understand the whole reason we have infrastructure is to run those applications, so if we're not getting ready for that, what are we doing? >> I don't want to put words in their mouth, but I think the OpenStack community as a whole, one of their goals, you know, OpenStack needs to be as easy to run as a public cloud. The infrastructure needs to be boring. We heard the word boring a lot actually today. >> Yeah and what we say is, first of all, the public cloud is the bar that you're measured against. Whether it is easier or cheaper, your mileage may vary, because public cloud was supposed to be simple. They're adding like a thousand new features every year, and it seems to get more complicated over time. It's wonderful if we could architect everything and make it simple. Unfortunately, you know, that's why we have technology. I know every time I go home and have some interaction with a financial institution or a healthcare institution, boy, you wish we could make everything simpler, but the world's a complicated place and that's why we need really smart people like we've gotten to interview here at the show. So any final comments, John? >> No, I think that sums it up. Those are my favorite things for today. I'm looking forward to talking to a lot of customers tomorrow. 
>> Yeah, I'm really excited about that. John, appreciate your help here. So there's a big party here at the show. They're taking everyone to Fenway Park for the Stacker party. Last year it was an epic party in Austin. Boston's fun, Fenway's a great venue. Looks like the rain's going to hold off, which is good, but it'll be a little chillier than normal, but we will be back here with a third day of programming as John and I talked about. Got a lot of users on the program. Really great lineup, two days in the bag. Check out all the videos, go to SiliconANGLE.tv to check it all out. Big shout out to the rest of the team that's at the Dell EMC World and ServiceNOW shows, be able to check those out and all our upcoming shows. And thank you, everyone, for watching theCUBE. (technical beat)