Niel Viljoen, Netronome & Nick McKeown, Barefoot Networks - #MWC17 - #theCUBE


 

(lively techno music) >> Hello, everyone, I'm John Furrier with theCUBE. We are here in Palo Alto to showcase a brand new relationship and technology partnership and technology showcase. We're here with Niel Viljoen, who's the CEO of Netronome. Did I get that right? (Niel mumbles) I almost think that I will let you say it, and Nick McKeown, who's Chief Scientist and Chairman and co-founder of Barefoot Networks. Guys, welcome to the conversation. Obviously, a lot going on in the industry. We're seeing massive change in the industry. Certainly, digital transformation, the buzzword the analysts all use, but, really, what that means is the entire end-to-end digital space, with networks all the way to the applications are completely transforming. Network transformation is not just moving packets around, it's wireless, it's content, it's everything in between that makes it all work. So let's talk about that, and let's talk about your companies. Niel, talk about your company, what you guys do, Netronome and Nick, same for you, for Barefoot. Start with you guys. >> So as Netronome, our core focus lies around SmartNICs. What we mean by that, these are elements that go into the network servers, which in this sort of cloud and NFV world, get used for a lot of network services, and that's our area of focus. >> Barefoot is trying to make switches that were previously fixed function, turning them into something that those who own and operate networks can program for themselves to customize them or add new features or protocols that they need to support. >> And Barefoot, you're walking in the park, you don't want to step in any glass, and get a cut, and I like that, love the name of the company, but it brings out the real issue of this I/O world. If it were just NICs, it throws back to the old school mindset of just network cards and servers, but if you take that out on the Internet now, that is the I/O channel engine, real time, it's certainly a big part of the edge device, whether that's a human or device, IoT to mobile, and then moving it across the network, and by the way, there's multiple networks, so is this kind of where you guys are showcasing your capabilities? >> So, fundamentally, you need both sides of the line, if I could put it that way, so we, on the server side, and specifically, also giving visibility from virtual machines to virtual machines, also called VNFs to VNFs, in a service chaining mechanism, which is what a lot of the NFV customers are deploying today. >> Really, as the entire infrastructure upon which these services are delivered, as that moves into software, and more of it is created by those who own and operate these services for themselves, they either create it, commission it, buy it, download it, and then modify it to best meet their needs. That's true whether it's in the network interface portion, whether it's in the switch, and they've seen it happen in the control plane, and now it's moving down so that they can define all the way down to how packets are processed in the NIC and in the switches, and when they do that, they can then add in their ability to see what's going on in ways that they've never been able to do before, so we really think of ourselves as providing that programmability and that flexibility down, all the way to the way that the packets are processed. >> And what's the impact, Nick, talk about the impact then take us through like an example.
You guys are showcasing your capabilities to the world, and so what's the impact and give us an example of what the benefit would be. I mean, what goes on like this instrumentation, certainly, everyone wants to instrument everything. >> Niel: Yes. >> Nick: Yeah. >> But what's the practical benefit. I mean who wins from this and what's the real impact? >> Well, you know, in days gone by, if you're a service provider providing services to your customers, then you would typically do this out of vertically integrated pieces of equipment that you get from equipment vendors. It's closed, it's proprietary, they have their own sort of NetFlow, sFlow, whatever the mechanism that they have for measuring what's going on, and you had to learn to live with the constraints of what they had. As this all gets kind of disaggregated and broken apart, and the owner of the infrastructure gets to define the behavior in software, they can now chain together the modules and the pieces that they need in order to deliver the service. That's great, but now they've lost that proprietary measurement, so now they need to introduce the measurement so that they can get greater visibility. This actually has created a tremendous opportunity and this is what we're demonstrating, is if you can come up with a uniform way of doing this, so that you can see, for example, the path that every packet takes, the delay that it encounters along the way, the rules that it encounters that determine the path that it takes, if it encounters congestion, who else contributed to that congestion, so we know who to go blame, then by giving them that flexibility, they can go and debug systems much more quickly, and change them and modify them. >> It's interesting, it's almost like the aspirin, right? You need, the headache now is, I have good proprietary technology for point measurement and solutions, but yet I need to manage multiple components. >> I think there's an add-on to what Nick said, which is the whole key point here which is the programmability, because there's data, and then there's information. Gathering lots and lots of telemetry data is easy. (John chuckles) The problem is you need to have it at all points, which is Nick's key point, but the programmability allows the DevOps person, in other words, the operational people within the cloud or carrier infrastructure, to actually write code that identifies and isolates the data, the information rather than the data that they need. >> So is this customer-based for you guys, the carriers, the service providers, who's your target audience? >> Yep, I think it's service providers who are applying the NFV technologies, in other words, the cloud-like technologies. I always say the real big story here is the cloud technologies rather than just the cloud. >> Yeah, yeah. >> And how that's-- >> And same for you guys, you guys have this, this joint, same target customer. >> Yeah, I don't think there's any disagreement. >> Okay. (laughs) Well, I want to get drilling into the whole aspirin analogy 'cause it's one of the things that you brought up with the programmability because NFV has been that, you know, saving grace, it's been the Holy Grail for how many years now, and you're starting to see the tides shifting now towards where NFV is not a silver bullet, so to speak, but it is actually accelerating some of the change, and I always like to ask people, "Hey, are you an aspirin or are you a vitamin?" One guest told me, "I'm a steroid. "We make things grow faster."
I'm like, "Okay," but in a way, the aspirin solves a problem, like immediate headaches, so it sounds like a lot of the things that you mentioned. That's an immediate benefit right there on the instrumentation, in an open way, multi-component, multi-vendor kind of, benefits of proprietary but open, but the point about programmability gives a lot of headroom around kind of that vitamin, that steroid piece where it's going to allow for automation, which brings an interesting thing, that's customizable automation, meaning, you can apply software policy to it. Is that kind of like, can you tease that out, is that an area that you guys talking about? >> I think the first thing that we should mention is probably the new language called P4. I think Nick will be too modest to state that but I think Nick has been a key player in, along with his team and many other people, in the definition and the creation of this language, which allows the programmability of all these elements. >> Yeah, just drill down, I mean, toot your own horn here, let's get into it because what is it and what's the benefit and what is the real value, what's the upshot of P4? >> Yeah, the way that hardware that processes packets, whether it's in network interface cards, or in switching, the way that that's been defined in the past, has been by chip designers. At the time that they defined the behavior, they're writing Verilog or VHDL, and as we know, people that design chips, don't operate big networks, so they really know what capabilities to put in-- >> They're good at logic in a vacuum but not necessarily in the real world, right? Is that what you (laughs). >> So what we-- >> Not to insult chip designers, they're great, right? >> So what we've all wanted to do for some time is to come up with a uniform language, a domain-specific language that allows you to define how packets will be processed in interfaces, in switches, in hypervisor switches inside the virtual machine environments, in a uniform way so that someone who's proficient in that language can then describe a behavior that can then operate in different paths of the chained services, so that they can get the same behavior, a uniform behavior, so that they can see the network-wide, the service-wide behavior in a uniform way. The P4 language is merely a way to describe that behavior, and then both Netronome and Barefoot, we each have our own compilers for compiling that down to the specific processing element that operates in the interfaces and in the switches. >> So you're bridging the chip layer with some sort of abstraction layer to give people the ability to do policy programming, so all the heavy lifting stuff in the old network days was configuration management, I mean all the, I mean that was like hard stuff and then, now you got dynamic networks. It even gets harder. Is this kind of where the problem goes away? And this is where automation. >> Exactly, and the key point is the programmability versus configurability. >> John: Yeah. >> In a configurable environment, you're always trying to pre-guess what your customer's going to try to look at. >> (chuckles) Guessing's not good in the networking area. That's not good for five nines. 
>> In the new world that we're in now, the customer actually wants to define exactly what the information is they want to extract-- >> John: I wanted to get-- >> Which is your whole question around the rules and-- >> So let me see if I can connect the dots here, just kind of connect this for, and so, in the showcase, you guys are going to show this programmability, this kind of efficiency at the layer of bringing instrumentation then using that information, and/or data depending on how it's sliced and diced via the policy and programmability, but this becomes cloud-like, right? So when you start moving, thinking about cloud where service providers are under a lot of pressure to go cloud because Over-The-Top right now is booming, you're seeing a huge content and application market that's super ripe for kind of the, these kinds of services. They need that ability to have the infrastructure be like software, so infrastructure as code, is the DevOps term that we talk about in our DevOps world, but that has been more data-center kind of language, with developers. Is it going the same trajectory in the service provider world because you have networks, I mean they're bigger, higher scale. What are some of those DevOps dynamics in your world? Can you talk about that and share some color on that? >> I mean, the way in which large service providers are starting to deliver those services is out of something that looks very much like the cloud platform. In fact, it could in fact be exactly the same technology. The same servers, the same switches, same operating systems, a lot of the same techniques. The problem they're trying to solve is slightly different. They're chaining together the means to process a sequence of operations. A little bit like how the cloud operators are moving towards microservices that get chained together, so there are a lot of similarities here and the problems they face are very similar, but think about the hell that this potentially creates for them. It means that we're giving them so much rope to hang themselves because everything has now got to be put together in a way that's coming from different sources, written and authored by different people with different intent, or from different places across the Internet, and so, being able to see and observe exactly how this is working is even more critical than-- >> So I love that rope to hang yourself analogy because a lot of people will end up breaking stuff. As Mark Zuckerberg's famous quote goes, "Move fast, break stuff," and then by the way, when they got to 100 million users, the slogan went to, "Move fast, be reliable," so he got on the five nines bandwagon pretty quick, but it's more than just the instrumentation. The key that you're talking about here is that they have to run those networks in really high reliability environments. >> Nick: Correct. >> And so that begs the challenge of, okay, it's not just as easy as throwing a docker container at something. I mean that's what people are doing now, like hey, I'm going to just use microservices, that's the answer. They still got stuff under the hood, but underneath microservices. You have orchestration challenges and this kind of looks and feels like the old configuration management problems but moved up the stack, so is that a concern in your market as well?
>> So I think that's a very, very good point that you make because the carriers, as you say, tend to be more dependent, almost, on absolute reliability, and very importantly, performance, but in other words, they need to know that this is going to be 100 gigs because that's what they've signed up the SLA with their customer for. (John chuckles) It's not going to be almost 100 gigs 'cause then they're going to end up paying a lot of penalties. >> Yeah, they can't afford breakage. They're OpsDev, not DevOps. Which comes first in their world? >> Yes, so the critical point here is just that this is where the demo that we're doing, which shows the ability to capture all this information at line rate, at very high speeds in the switches. (mumbles) >> So let's talk about this demo you're doing, this showcase that you guys are providing and demonstrating to the marketplace, what's the pitch, I mean what is it, what's the essence of the insight of this demo, what's it proving? >> So I think that the, it's good to think about a scenario in which you would need this, and then this leads into what the demo would be. Very common in an environment like the VNF kind of environment, where something goes wrong, they're trying to figure out very quickly, who's to blame, which part of the infrastructure was the problem? Could it be congestion, could it be a misconfiguration? (John laughs) >> Niel: Whose flow-- >> Everyone pointing the finger at the other guy. >> Nick: The typical way-- >> Two days later, what happened, really? >> Typical way that they do this, is they'll bring the people that are responsible for the compute, the networking, and the storage quickly into one room, and say, "Go figure it out." The people that are doing the compute, they'll be modifying and changing and customizing, running experiments, isolating the problem. So are the people that are doing storage. They can program their environment. In the past, the networking people had ping and traceroute. That's the same tools that they had 20 years ago. (John chuckles) What we're doing is changing that by introducing the means where they can program and configure, run different experiments, run different probes, so that they can look and see the things that they need to see, and in the demo in particular, you'll be able to see the packets coming in through a switch, through a NIC, through a couple of VMs, back out through a switch, and then you can look at that packet afterwards, and you can ask questions of the packet itself, something you've never been able to-- >> It's the ultimate debugger. Basically, it's the ultimate debugger. >> Nick: That's right. Go to the packet, say-- >> Niel: Programmable debugger. >> "Which path did you take? "How long did you wait at each NIC, "at each VM, at each switch port as you went through? "What are the rules that you followed "that led you to be here, and if you encountered "some congestion, whose fault was it? "Who did you share that queue with?" so we can go back and apportion the blame-- >> So you get a multiple dimension of path information coming in, not just the standard stovepiped tools-- >> Nick: That's right. >> And then, everyone compares logs and then there's all these holes in it, people don't know what the hell happened. >> And through the programmability, you can isolate the piece of the information-- >> So the experimentation agility is where, I think, is that what you're getting at?
You can say, you can really get down and dirty into a duplication environment and also run these really fast experiments versus kind of in theory or in-- >> Exactly, which is what, as Nick said, is exactly what people on the server side and on the storage side have been able to do in the past. >> Okay so for people watching that are kind of getting into this and people who aren't, just give me, maybe in order, the impact and the consequences of not taking this approach, vis-a-vis today's available techniques. >> If you wanted to try and figure out who it was that you were sharing a queue with inside an interface or inside a switch, you have no way to do that today, right? No means to do that, and so if you wanted to be able to say it's that aggressive flow over there, that malfunctioning service over there, you've got no means to do it. As a consequence, the networking people always get the blame because they can't show that it wasn't them. But if you can say, I can see, in this queue, there were four flows going through or 4,000 flows, and one of them was really badly behaved, and it was that one over there and I can tell you exactly why its packets were ending up here, then you can immediately go in and shut that one down. They have no way that they go and randomly shut-- >> Can I get this for my family, I need this for my household. I mean, I'm going to use this for my kids. I mean I know exactly the bad behavior, I need to prove it. No, but this is what the point is, is this is fast. I mean you're talking speed, too, as another aspect-- >> Niel: It's all about the-- >> What's the speed lag on approach versus taking the old, current approach versus this joint approach you guys are taking? What's the, give me an estimate on just ballpark numbers-- >> Well there's two aspects to the speed. One is the speed at which it's operating, so this is going to be in the demo, it's running at 40 gigabits per second, but this can easily run, for example, in the Barefoot switch, it'll run at 6 terabits per second. The interesting thing here is that in this entire environment, this measurement capability does not generate a single extra packet. All of it is self-contained in the packets that are already flowing. >> So there's no latency issues on running this in production. >> If you wanted to then change the behavior, you needed to go and modify what was happening in the NIC, modify what was happening in the switch, you can do that in minutes. So that you can say-- >> Now the time it takes for a user now to do this, let's go to that time series. What does that look like? So current method is get everyone in a room, do these things, are we talking, you know. >> I think that today, it's just simply not possible. >> Not possible. >> So it's, yes, new capability. >> I think is the key issue. >> So this is a new capability. >> This is a new capability and exactly as Nick said, it's getting the network to the same level of ability that you always had inside the-- >> So I got to ask you guys, as founders of your companies because this is one of those things that's a great success story, entrepreneurs, you got, it's not just a better mousetrap, it's revolutionary in the sense that no one's ever had the capability before, so when you go to events like Mobile World Congress, you're out in the field, are you shaking people like, "You need me! "I need to cut the line and tell you what's going on."
I mean, you must have a sense of urgency that, is it resonating with the folks you're talking to? I mean, what are some of the conversations you're having with folks? They must be pretty excited. Can you share any anecdotal stories? >> Well, yup, I mean we're finding, across the industry, not only in the service providers, the data center companies, Wall Street, the OEM box vendors, everybody is saying, "I need," and have been saying for a long time, "I need the ability to probe into the behavior "of individual packets, and I need whoever is owning "and operating the network to be able to customize "and change that." They've never been able to do that. The name of the technique that we use is called In-band Network Telemetry or INT, and everybody is asking for it now. Actually, whether it's with the two of us, or whether they're asking for it more generally, this is, this is-- >> Game changer. >> You'll see this everywhere. >> John: It's a game changer, right? >> That's right. >> Great, all right, awesome. Well, final question is, what are the business benefits for them, because I can imagine you get this nailed down with the proper, the ability to test new apps because obviously, we're in a Wild West environment, tsunami of apps coming, there's always going to be some tripwires in new apps, certainly with microservices and APIs. >> I think the general issue that we're addressing here is absolutely crucial to the successful rollout of NFV infrastructures. In other words, the ability to rapidly change, monitor, and adapt is critical. It goes wider than just this particular demo, but I think-- >> It's all apps on the service provider. >> The ability to handle all the VNFs-- >> Well, in the old days, it was simply network spikes, tons of traffic, I mean, now you have, apps could throw off anomalies anywhere, right? You'd have no idea what the downstream triggers could be. >> And that's the whole notion of the programmable network, which is critical. >> Well guys, any information where people can get some more information on this awesome opportunity? You guys' sites, want to share quick web addresses and places people can get whitepapers or information? >> For the general P4 movement, there's P4.org. P, the number four, .org. Nice and easy. They'll find lots of information about the programmability that's possible by programming the forwarding, which is what both of us are doing. In-band Network Telemetry, you'll find descriptions there, P4 programs, and whitepapers describing that, and of course, on the two company websites, Netronome and Barefoot. >> Right. Nick and Niel, thanks for spending some time sharing the insights and congratulations. We'll keep an eye out for it, and we'll be talking to you soon. >> Thank you. >> Thank you very much. >> This is theCUBE here in Palo Alto. I'm John Furrier, thanks for watching. (lively techno music)
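For readers who want to see the shape of what is being described, here is a deliberately simplified sketch of the In-band Network Telemetry idea from the interview: each hop appends its own metadata inside a packet it is already forwarding, and a collector at the end walks that stack to recover the path, per-hop queueing and timing, with no extra measurement packets generated. The field layout below is invented for illustration and is far simpler than the actual INT specification.

```python
# Simplified illustration of the INT idea: per-hop metadata rides inside
# the packet itself. Hypothetical layout, not the real INT encoding.

import struct

HOP_FMT = "!IIQ"                  # switch_id, queue_depth, timestamp_ns
HOP_SIZE = struct.calcsize(HOP_FMT)

def append_hop_metadata(payload: bytes, switch_id: int,
                        queue_depth: int, timestamp_ns: int) -> bytes:
    """What a switch or NIC data plane would do at each hop."""
    return payload + struct.pack(HOP_FMT, switch_id, queue_depth, timestamp_ns)

def parse_hop_metadata(payload: bytes, original_len: int):
    """What the collector does at the end: walk the stacked metadata."""
    hops = []
    offset = original_len
    while offset + HOP_SIZE <= len(payload):
        switch_id, queue_depth, ts = struct.unpack_from(HOP_FMT, payload, offset)
        hops.append({"switch": switch_id, "queue_depth": queue_depth, "ts_ns": ts})
        offset += HOP_SIZE
    return hops

# Simulate a packet crossing NIC -> switch -> switch.
orig = b"original application payload"
pkt = append_hop_metadata(orig, switch_id=1, queue_depth=3,  timestamp_ns=1_000)
pkt = append_hop_metadata(pkt,  switch_id=7, queue_depth=42, timestamp_ns=1_850)
pkt = append_hop_metadata(pkt,  switch_id=9, queue_depth=0,  timestamp_ns=2_100)

for hop in parse_hop_metadata(pkt, original_len=len(orig)):
    print(hop)   # per-hop path, queueing and timing recovered from the packet itself
```

The real mechanism makes the same trade as the toy: the answers to "which path, which queue, how long" travel with the packet, so the questions can be asked of the packet itself afterwards.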

Published Date : Mar 13 2017



Silvano Gai, Pensando | Future Proof Your Enterprise 2020


 

>> Narrator: From the Cube Studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, and welcome to this CUBE conversation, I'm Stu Miniman and I'm coming to you from our Boston area studio, we've been digging in with the Pensando team, understand how they're fitting into the cloud, multi-cloud, edge discussion, really thrilled to welcome to the program, first time guest, Silvano Gai, he's a fellow with Pensando. Silvano, really nice to see you again, thanks so much for joining us on theCUBE. >> Stuart, it's so nice to see you, we used to work together many years ago and that was really good and is really nice to come to you from Oregon, from Bend, Oregon. A beautiful town in the high desert of Oregon. >> I do love the Pacific North West, I miss the planes and the hotels, I should say, I don't miss the planes and the hotels, but going to see some of the beautiful places is something I do miss and getting to see people in the industry I do like. As you mentioned, you and I crossed paths back through some of the spin-ins, back when I was working for a very large storage company, you were working for Cisco, you were known for writing the book, you were a professor in Italy, many of the people that worked on some of those technologies were your students. But Silvano, my understanding is you retired so, maybe share for our audience, what brought you out of that retirement and into working once again with some of your former colleagues and on the Pensando opportunity. >> I did retire for a while, I retired in 2011 from Cisco if I remember correctly. But at the end of 2016, beginning of 2017, some old friends that you may remember and know called me to discuss some interesting ideas, which were basically the seed idea that is behind the Pensando product, and their ideas were interesting, what we built, of course, is not exactly the original idea because you know products evolve over time, but I think we have something interesting that is adequate and probably superb for the new way to design the data center network, both for enterprise and cloud. >> All right, and Silvano, I mentioned that you've written a number of books, really the authoritative look when some new products had been released before. So, you've got a new book, "Building a Future-Proof Cloud Infrastructure," and look at you, you've got the physical copy, I've only gotten the soft version. The title, really interesting. Help us understand how Pensando's platform is meeting that future-proof cloud infrastructure that you discuss. >> Well, networks have evolved dramatically in the data center and in the cloud. You know, now the speed of a classical server in the enterprise is probably 25 gigabits, in the cloud we are talking of 100 gigabits of speed for a server, going to 200 gigabits. Now, the backbones are ridiculously fast. We no longer use Spanning Tree and all that stuff, we no longer use access-aggregation-core designs. We switched to Clos networks, and with Clos networks, we have a huge, enormous amount of bandwidth, and that is good but it also implies that it is not easy to do services in a centralized fashion. If you want to do a service in a centralized fashion, what you end up doing is creating a giant bottleneck. You basically, there is this word that is being used, that is trombone or tromboning. You try to funnel all this traffic through the bottleneck and this is not really going to work.
The only place that you can really do services is at the edge, and this is not an invention, I mean, even all the principles of cloud is move everything to the edge and maintain the network as simple as possible. So, we approach services with the same general philosophy. We try to move services to the edge, as close as possible to the server and basically at the border between the server and the network. And when I mean services I mean three main categories of services. The networking services of course, there is the basic layer two, layer three stuff, plus the bonding, you know VAMlog and what is needed to connect a server to a network. But then there is the overlay, overlays like VXLAN or Geneve, very very important, basically to build a cloud infrastructure, and those are basically the network services. We can have others but that, sort of, is the core of a network service. Some people want to run BGP, some people don't want to run BGP. There may be a VPN or kind of things like that but that is the core of a network service. Then of course, and we go back to the time we worked together, there are storage services. At that time, we were discussing mostly about Fibre Channel, now the bus world is clearly NVMe, but it's not just the bus, it's really a new way of doing storage, and it is very very interesting. So, NVMe kinds of services are very important and NVMe has a version that is called NVMe-oF, over fabrics. Which is basically, sort of a remote version of NVMe. And then the third, last but not least, most important category probably, is security. And when I say that security is very very important, you know, the fact that security is very important is clear to everybody in our day, and I think security has two main branches in terms of services. There is the classical firewall and micro-segmentation, in which you basically try to enforce the fact that only who is allowed to access something can access something. But you don't, at that point, care too much about the privacy of the data. Then there is the other branch that is encryption, in which you are not trying to enforce or decide who can access or not access the resource, but you are basically caring about the privacy of the data, encrypting the data so that if it is hijacked, snooped or whatever, it cannot be decoded. >> Excellent, so Silvano, absolutely the edge is a huge opportunity. When someone looks at the overall solution and says you're putting something at the edge, you know, they could just say, "This really looks like a NIC." You talked about some of the previous engagements we'd worked on, host bus adapters, smart NICs and the like. There were some things we could build in but there were limits that we had, so, what differentiates the Pensando solution from what we would traditionally think of as an adapter card in the past? >> Well, the Pensando solution has multiple pieces, but in terms of hardware, it has two main pieces, there is an ASIC that we call Capri internally. That ASIC is not strictly related to be used only in an adapter form, you can deploy it also in other form factors in another part of the network, in other embodiments, et cetera. And then there is a card, the card has a PCIe interface and sits in a PCIe slot. So yes, in that sense, somebody can call it a NIC and since it's a pretty good NIC, somebody can call it a smart NIC.
We don't really like those two terms, we prefer to call it DSC, domain specific card, but the real term that I like to use is domain specific hardware, and I like to use domain specific hardware because it's the same term that Hennessy and Patterson use in a beautiful piece of literature that is the Turing Award lecture. It's on the internet, it's public, I really ask everybody to go and try to find it and listen to that beautiful piece of literature, modern literature on computer architecture. The Turing Award lecture of Hennessy and Patterson. And they have introduced the concept of domain specific hardware, and they explain also the justification for why now is important to look at domain specific hardware. And the justification is basically, in a nutshell, and we can go more deep if you're interested, but in a nutshell is that SPECint, that is the single-thread performance measurement of a CPU, is not growing fast at all, it is only growing nowadays like a few percent a year, maybe 4% per year. And with this slow growth of SPECint performance of a core, you know the cores need to be really used for user applications, for customer applications, and all the rest can be moved to some domain specific hardware that can do that in a much better fashion, and by no means do I imply that the DSC is the best example of domain specific hardware. The best example of domain specific hardware is in front of all of us, and they are GPUs. And not GPUs for graphic processing, which are also important, but GPUs used basically for artificial intelligence, machine learning inference. You know, that is a piece of hardware that has shown that something can be done with performance that the general-purpose processor cannot match. >> Yeah, it's interesting right. If you turn back the clock 10 or 15 years ago, I used to be in arguments, and you say, "Do you build an offload, "or do you let it happen in software." And I was always like, "Oh, well Moore's law will mean that, "you know, the software solution will always win, "because if you bake it in hardware, it's too slow." It's a very different world today, you talk about how fast things speed up. From your customer standpoint though, often some of those architectural things are something that I've looked for my suppliers to take care of. Speak to the use case, what does this all mean from a customer standpoint, what are some of those early use cases that you're looking at? >> Well, as always, you get a bit surprised by the use cases, in the sense that you start to design a product thinking that some of the most cool things will be the dominant use cases, and then you discover that something that you had never really thought about has the most interesting use case. One that we have thought about since day one, but that is really becoming super interesting, is telemetry. Basically, measuring everything in the network, and understanding what is happening in the network. I was speaking with a friend the other day, and the friend was asking me, "Oh, but we have had SNMP for many many years, "what is the difference between SNMP and telemetry?" And the difference to me, the real difference, is that in SNMP, or in many of these management protocols, you involve a management plane, you involve a control plane, and then you go to read something that is in the data plane. But the process is so inefficient that you cannot really get a huge volume of data, and you cannot get it practically enough, with enough performance.
Doing telemetry means thinking about a data path, building a data path that is capable of not only measuring everything in realtime, but also sending out that measurement without involving anything else, without involving the control path and the management path, so that the measurement becomes really very efficient and the data that you stream out becomes really usable data, actionable data in realtime. So telemetry is clearly the first one, it is important. One that, honestly, we had built but we weren't thinking was going to have so much success is what we call Bidirectional ERSPAN. And basically, it is just the capability of copying data, and sending the data that the card sees to a station. And that is very very useful for replacing what are called TAP networks, which is just a network that many customers put in parallel to the real network just to observe the real network and to be able to troubleshoot and diagnose problems in the real network. So, these two features, telemetry and ERSPAN, which are basically troubleshooting features, are the two features that at the beginning are getting more traction. >> You're talking about realtime things like telemetry. You know, the applications and the integrations that you need to deal with are so important, back in some of the previous start-ups that you'd done it was getting ready for, say, how do we optimize for virtualization, today you talk cloud-native architectures, streaming, very popular, very modular, often container based solutions and things change constantly. You look at some of these architectures, it's not a single thing that goes on for a long period of time, but it's lots of things that happen over shorter periods of time. So, what integrations do you need to do, and what architecturally, how do you build things to make them, as you talk, future-proof for these kind of cloud architectures? >> Yeah, what I mentioned were just the two low hanging fruit, if you want the first two low hanging fruit of this architecture. But basically, the two that come immediately after, and where there is a huge amount of demand, are the distributed stateful firewall, with micro-segmentation support. That is a huge topic in itself. So important nowadays that it is absolutely fundamental to be able to build a cloud. That is very important, and the second one is wire rate encryption. There is so much demand for privacy, and so much demand to encrypt the data. Not only between data centers but now also inside the data center. And when you look at a large bank for example. A large bank is no longer a single organization. A large bank is multiple organizations that are compartmentalized by law. That need to keep things separate by law, by regulation, by FCC regulation. And if you don't have encryption, and if you don't have a distributed firewall, it is really very difficult to achieve that. And then you know, there are other applications, we mentioned storage NVMe, and that is a very nice application, and then we have even more, if you go to look at load balancing between servers, doing compression for storage and other possible applications. But I sort of lost your real question. >> So, just part of the pieces, when you look at integrations that Pensando needs to do, for maybe some of the applications that you would tie into, any of those that come to mind? >> Yeah, well for sure. It depends, I see two main branches again. One is the cloud provider, and one is the enterprise.
In the cloud provider, basically these cloud providers have a huge management infrastructure that is already built, and they want just the card to adapt to this, to be controllable by this huge management infrastructure. They already know which rules they want to send to the card, they already know which features they want to enable on the card. They already have all that, they just want the card to provide the data plane performance for that particular feature. So they're going to build something particular that is specific for that particular cloud provider, that adapts to that cloud provider's architecture. We want the flexibility of having an API on the card that is like a REST API or gRPC, with which they can easily program, monitor and control that card. When you look at the enterprise, the situation is different. Enterprises are looking at two things. Two or three things. The first thing is a complete solution. They don't have the management infrastructure that they have built like a cloud provider. They want a complete solution that has the card and the management station and all that is required to make, from day one, a working solution, which is absolutely correct in an enterprise environment. They also want integration, and integration with the tools that they already have. If you look at the mainstream enterprise, one dominant presence is clearly VMware virtualization, in terms of ESX and vSphere and NSX. And so most of the customers are asking us to integrate with VMware, which is a very reasonable demand. And then of course, there are other players, not so much in the virtualization space, but for example, in the data collection space, and the data analysis space, and for sure Pensando doesn't want to reinvent the wheel there, doesn't want to build a data collector or data analysis engine and whatever, there is a lot of work, and there are a lot out there, so integration with things like Splunk for example is kind of natural for Pensando. >> Excellent, so wait, you talked about some of the places where Pensando doesn't need to reinvent the wheel, you talked through a lot of the different technology pieces. If I had to have you pull out one, what would you say is the biggest innovation that Pensando has built into the platform? >> Well, the biggest innovation is this P4 architecture. And the P4 architecture was a sort of gift that was given to us, in the sense that it was not invented for what we use it for. P4 was basically invented to have programmable switches. The first big P4 company was clearly Barefoot, that then was acquired by Intel, and Barefoot built a programmable switch. But if you look at the reality of today, the network, most of the people want the network to be super easy. They don't want to program anything into the network. They want to program everything at the edge, they want to put all the intelligence and the programmability at the edge, so we borrowed the P4 architecture, which is a fantastic programmable architecture, and we implemented that at the edge. It's also easier because the bandwidth is clearly more limited at the edge compared to being in the core of a network. And that P4 architecture gives us a huge advantage. If you, tomorrow, come up with the Stuart Encapsulation Super Duper Technology, I can implement in Capri the Stuart, whatever it was called, Super Duper Encapsulation Technology, even though when I designed the ASIC I didn't know that encapsulation existed.
It's the data plane programmability, the capability to program the data plane, and programming the data plane while maintaining wire-speed performance, which I think is the biggest benefit of Pensando. >> All right, well Silvano, thank you so much for sharing your journey with Pensando so far, really interesting to dig into it and absolutely look forward to following progress as it goes. >> Stuart, it's been really a pleasure to talk with you, I hope to talk with you again in the near future. Thank you so much. >> All right, and thank you for watching theCUBE, I'm Stu Miniman, thanks for watching. (upbeat music)
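To ground the REST/gRPC point Silvano makes about cloud providers programming the card from their own management systems, here is a minimal sketch of what that kind of policy push could look like. It is purely illustrative: the endpoint, object names and fields are hypothetical and are not Pensando's actual API; only the general pattern, the operator's software posting policy objects to the card's management endpoint, reflects the discussion above.

```python
# Illustrative sketch of REST-style control of a domain-specific card.
# The endpoint, paths and fields are hypothetical, NOT Pensando's API.

import requests

DSC_API = "https://dsc-01.example.net/api/v1"   # hypothetical management endpoint

def push_flow_export_policy(collector_ip: str, interval_ms: int) -> None:
    """Ask the card to stream flow/telemetry records to a collector."""
    policy = {
        "name": "telemetry-to-collector",
        "collector": {"ip": collector_ip, "port": 2055, "protocol": "udp"},
        "interval-ms": interval_ms,
    }
    resp = requests.post(f"{DSC_API}/flow-export-policies", json=policy, timeout=5)
    resp.raise_for_status()

def push_segmentation_rule(src_vlan: int, dst_vlan: int, action: str) -> None:
    """Push one micro-segmentation rule (allow/deny between two segments)."""
    rule = {"src-vlan": src_vlan, "dst-vlan": dst_vlan, "action": action}
    resp = requests.post(f"{DSC_API}/security-policies", json=rule, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    push_flow_export_policy("10.1.1.50", interval_ms=100)
    push_segmentation_rule(src_vlan=100, dst_vlan=200, action="deny")
```

As the interview notes, an enterprise deployment would more likely drive the same kinds of objects through the bundled management station or a VMware integration rather than calling each card directly.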

Published Date : Jun 17 2020

