Scott Raynovich, Futuriom | Future Proof Your Enterprise 2020
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. (smooth music) >> Hi, I'm Stu Miniman, and welcome to this special exclusive presentation from theCUBE. We're digging into Pensando and their Future Proof Your Enterprise event. To help kick things off, welcoming in a friend of the program, Scott Raynovich. He is the principal analyst at Futuriom, coming to us from Montana. I believe it's the first time we've had a guest on the program from the state of Montana, so Scott, thanks so much for joining us. >> Thanks, Stu, happy to be here. >> All right, so we're going to dig a lot into Pensando. They've got their announcement with Hewlett Packard Enterprise. It might help if we give a little bit of background, and definitely I want Scott and me to talk a little bit about where things are in the industry, especially what's happening in networking, and how some of the startups are helping to impact what's happening in the market. So for those that aren't familiar with Pensando: if you've followed networking, I'm sure you're familiar with the team that started them. They are known, for those of us that watch the industry, as MPLS (four people, not to be confused with the protocol MPLS), and they had very successfully done multiple spin-ins for Cisco, Andiamo, Nuova and Insieme, which created Fibre Channel switches, the Cisco UCS, and the ACI product line, so multiple generations of the Nexus, and Pensando is their company. Their Future Proof Your Enterprise event is the proof point they have today, talking about the new edge. John Chambers, the former CEO of Cisco, is the chairman of Pensando. Hewlett Packard Enterprise is not only an investor, but also a customer and OEM of this solution, so it's a very interesting piece, and Scott, I want to pull you into the discussion. With the waves of technology over the last 10, 15 years in networking, a lot of it has been: can Cisco be disrupted?
So software-defined networking was let's get away from hardware and drive towards more software. Lots of things happening. So I'd love your commentary. Just some of the macro trends you're seeing, Cisco's position in the marketplace, how the startups are impacting them. >> Sure, Stu. I think it's very exciting times right now in networking, because we're just at the point where we kind of have this long battle of software-defined networking, like you said, really pushed by the startups, and there's been a lot of skepticism along the way, but you're starting to see some success, and the way I describe it is we're really on the third generation of software-defined networking. You have the first generation, which was really one company, Nicira, which VMware bought and turned into their successful NSX product, which is a virtualized networking solution, if you will, and then you had another round of startups, people like Big Switch and Cumulus Networks, all of which were acquired in the last year. Big Switch went to Arista, and Cumulus just got purchased by... Who were they purchased by, Stu? >> Purchased by Nvidia, who interestingly enough, they just picked up Mellanox, so watching Nvidia build out their stack. >> Sorry, I was having a senior moment. It happens to us analysts. (chuckling) But yeah, so Nvidia's kind of rolling up these data center and networking plays, which is interesting because Nvidia is not a traditional networking hardware vendor. It's a chip company. So what you're seeing is kind of this vision of what they call in the industry disaggregation. Having the different components sold separately, and then of course Cisco announced the plan to roll out their own chip, and so that disaggregated from the network as well. When Cisco did that, they acknowledged that this is successful, basically. They acknowledged that disaggregation is happening. 
It was originally driven by the large public cloud providers like Microsoft Azure and Amazon, which started the whole disaggregation trend by acquiring different components and then melding it all together with software. So it's definitely the future, and there's a lot of startups in this area to watch. I'm watching many of them. They include Arrcus, which is an exciting new routing vendor. DriveNets, which is another virtualized routing vendor. This company Alkira, which is going to do routing fully in the cloud, multi-cloud networking. Aviatrix, which is doing multi-cloud networking. All of these are basically software companies. They're not pitching hardware as part of their value add, or their integrated package, if you will. So it's a different business model, and it's going to be super interesting to watch, because I think the third generation is the one that's really going to break this all apart. >> Yeah, you brought up a lot of really interesting points there, Scott. That disaggregation, and some of the changing landscape. Of course that more than $1 billion acquisition of Nicira by VMware caused a lot of tension between VMware and Cisco. Interesting. I think back to when Cisco created the UCS platform, it created a ripple effect in the networking world also. HP was a huge partner of Cisco's before UCS launched, and not long after UCS launched, HP stopped selling Cisco gear. They got heavier into the networking component, and then here many years later we see who the MPLS team partners with when they're no longer part of Cisco, and Chambers is no longer the CEO? Well, it's HPE front and center there. You're going to see John Chambers at HPE Discover, so it was a long relationship and change. And from the chip companies, Intel, of course, has built a sizeable networking business. We talked a bit about Mellanox and the acquisitions they've done.
One you didn't mention, but that caused a huge impact in the industry, and something that Pensando's responding to, is Amazon with Annapurna Labs. Annapurna Labs, a small Israeli company that Amazon acquired, is really driving a lot of the innovation when it comes to compute and networking at Amazon. Graviton compute and Nitro are what power their Outposts solutions, so if you look at Amazon, they buy lots of pieces. It's that mixture of hardware and software. In the early days people thought that they just bought off-the-shelf white boxes and did it cheap, but really we see Amazon hyper-optimizes what they're doing. So Scott, let's talk a little bit about Pensando if we can. Amazon has the Nitro solution built into Outposts, which is their hybrid solution, so the same stack that they put in Amazon they can now put in customers' data centers. What Pensando's positioning is: well, for other cloud providers and enterprises, rather than having to buy something from Amazon, we're going to enable that. So what do you think about what you've seen and heard from Pensando, and what's the need in the market for these types of solutions? >> Yes, okay. So I'm glad you brought up Outposts, because I should've mentioned this next trend. We have, if you will, the disaggregated, open, software-based networking which is going on. It started in the public cloud, but then you have another trend taking hold, which is the so-called edge of the network, which is going to be driven by the emergence of 5G, the technology called CBRS, and different wireless technologies that are emerging at the so-called edge of the network. And the purpose of the edge, remember, is to get closer to the customer, to get larger bandwidth, and compute, and storage closer to the customer, and there's a lot of people excited about this, including the public cloud providers. Amazon's building out their Outposts, Microsoft has an edge stack, the Azure Edge Stack that they've built.
They've acquired a couple companies for $1 billion. They acquired Metaswitch, they acquired Affirmed Networks, and so all these public cloud providers are pushing their cloud out to the edge with this infrastructure, a combination of software and hardware, and that's the opportunity that Pensando is going after with this Outposts theme, and it's very interesting, Stu, because the coopetition is very tenuous. A lot of players are trying to occupy this edge. If you think about what Amazon did with public cloud, they sucked up all of this IT compute power and services applications, and everything moved from these enterprise private clouds to the public cloud, and Amazon's market cap exploded, right, because they were basically sucking up all the money for IT spending. So now if this moves to the edge, we have this arms race of people that want to be on the edge. The way to visualize it is a mini cloud. Whether this mini cloud is at the edge of Costco, so that when Stu's shopping at Costco there's AI that follows you in the store, knows everything you're going to do, and predicts you're going to buy this cereal and "We're going to give you a deal today. "Here's a coupon." This kind of big brother-ish AI tracking thing, which is happening whether you like it or not. Or autonomous vehicles that need to connect to the edge, and have self-driving, and have very low latency services very close to them, whether that's on the edge of the highway or wherever you're going in the car. You might not have time to go back to the public cloud to get the data, so it's about pushing these compute and data services closer to the customers at the edge, and having very low latency, and having lots of resources there, compute, storage, and networking. And that's the opportunity that Pensando's going after, and of course HPE is going after that, too, and HPE, as we know, is competing with its other big mega competitors, primarily Dell, the Dell/VMware combo, and the Cisco... 
The Cisco machine. At the same time, the service providers are interested as well. By the way, they have infrastructure. They have central offices all over the world, so they are thinking that can be an edge. Then you have the data center people, the Equinixes of the world, who also own real estate and data centers that are closer to the customers in the metro areas, so you really have this very interesting dynamic of all these big players going after this opportunity, putting in money, resources, and trying to acquire the right technology. Pensando is right in the middle of this. They're going after this opportunity using the P4 networking language, and a specialized ASIC, and a NIC that they think is going to accelerate processing and networking at the edge. >> Yeah, you've laid out a lot of really good pieces there, Scott. As you said, the first incarnation of this is a NIC, and boy, I think back to years ago. It's like, well, do we make the NIC really simple, or do we build intelligence into it? How much? The hardware versus software discussion. What I found interesting is, if you look at this team, they were really good, they made a chip. It's a switch, it's an ASIC, it became compute, and if you look at the technology available now, they're building a lot of your networking in a really small form factor. You talked about P4. It's highly programmable, so the theme of Future Proof Your Enterprise. With anything you say, "Ah, what is it?" It's a piece of hardware. Well, it's highly programmable, so today they position it for security, telemetry, observability, but there may be other services that I need to get to the edge (you laid out a couple of those edge use cases really well), and if something comes up and I need that in the future, well, just like we've been talking about for years with software-defined networking and network function virtualization, I don't want a dedicated appliance.
It's going to be in software, and in a form factor like Pensando's, I can put that in lots of places. Their positioning is they have a cloud business, which they sell direct, and they expect to have a couple of the cloud providers using this solution here in 2020, and then the enterprise business, with obviously a huge opportunity with HPE's position in the marketplace to take that to a broad customer base. So an interesting opportunity, so many different pieces. Flexibility of software, as you relayed, Scott. It's a complicated coopetition out there, so I guess what would you want to see from the market, and what is success for Pensando and HPE? They make this generally available this month; it's available on ProLiant, it's available on GreenLake. What would you want to be hearing from customers or from the market for you to say, further down the road, that this has been highly successful? >> Well, I want to see that it works, and I want to see that people are buying it. So it's not that complicated. I mean, I'm being a little superficial there. It's hard sometimes to look into these technologies. They're very sophisticated, and sometimes it comes down to whether they perform, whether they deliver on the expectation, but I think there are also questions about the edge, the pace of investment. We're obviously in a recession, and we're in a very strange environment with the pandemic, which has accelerated spending in some areas, but also throttled back spending in other areas, and 5G is one of the areas that appears to have been throttled back a little bit, this big explosion of technology at the edge. Nobody's quite sure how it's going to play out, or when it's going to play out. Also, who's going to buy this stuff? Personally, I think it's going to be big enterprises. It's going to start with the big box retailers, the Walmarts, the Costcos of the world.
By the way, Walmart's in a big competition with Amazon, and I think one of the news items you've seen in the pandemic is all these online digital ecommerce sales have skyrocketed, obviously, because people are staying at home more. They need that intelligence at the edge. They need that infrastructure. And one of the things that I've heard is the thing that's held it back so far is the price. They don't know how much it's going to cost. We actually ran a survey recently targeting enterprises buying 5G, and that was one of the number one concerns. How much does this infrastructure cost? So I don't actually know how much Pensando costs, but they're going to have to deliver the right ROI. If it's a very expensive proprietary NIC, who pays for that, and does it deliver the ROI that they need? So we're going to have to see that in the marketplace, and by the way, Cisco's going to have the same challenge, and Dell's going to have the same challenge. They're all racing to supply this edge stack, if you will, packaged with hardware, but it's going to come down to how is it priced, what's the ROI, and whether these customers can justify the investment is the trick. >> Absolutely, Scott. Really good points there, too. Of course the HPE announcement is a big move for Pensando. It doesn't mean that they can't work with the other server vendors. They absolutely are talking to all of them, and we will see if there are alternatives to Pensando that come up, or if they end up signing with them. All right, so what we have here is I've actually got quite a few interviews with the Pensando team, starting with, as I talked about, MPLS. We have Prem Jain and Soni Jiandani, who are the P and the S in MPLS as part of it. Both co-founders, and Prem is the CEO. We have Silvano Gai who, for anybody that's followed this group, you know writes the book on it.
If you've watched all the way this far and want to learn even more about it, I actually have a few copies of Silvano's book, so reach out to me; the easiest way is on Twitter. Just hit me up at @Stu. I've got a few copies of the book about Pensando, which goes through all the details about how it works, the programmability, what changes, and everything like that. We've also, of course, got Hewlett Packard Enterprise, and while we don't have any customers for this segment, Scott mentioned many of the retail ones. Goldman Sachs is kind of the marquee early customer, so we did talk with them. I have Randy Pond, who's the CFO, talking about how they've actually seen an increase beyond what they expected at this point, being out of stealth only a little over six months, which is important considering that it's tough times for many startups coming out in the middle of a pandemic. So watch those interviews. Please hit us up with any other questions. Scott Raynovich, thank you so much for joining us to help talk about the industry, and this Pensando partnership extending with HPE. >> Thanks, Stu. Always a pleasure to join theCUBE team. >> All right, check out thecube.net for all the upcoming events, and if you just search "Pensando" on there, you can see everything we have on them. I'm Stu Miniman, and thank you for watching theCUBE. (smooth music)
Krishna Doddapaneni and Pirabhu Raman, Pensando | Future Proof Your Enterprise 2020
(upbeat music) >> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, I'm Stu Miniman, and welcome to this CUBE conversation. We're digging in with Pensando, talking about the technologies that they're using. And happy to welcome to the program two of Pensando's technical leaders. We have Krishna Doddapaneni, he's the Vice President of Software. And we have here Pirabhu Raman, he's a Principal Engineer, both with Pensando. Thank you so much for joining us. >> Thank you Stu. >> All right. >> Thank you for having us here. >> Krishna, you run the Software Team. So let's start there and talk about the mission, and bring us through a little bit of architecturally what Pensando is doing. >> To get started, at Pensando we are building a platform which can automate and manage the network, storage and security services. So when we talk about software here, it's layers of software, starting all the way from the bootloader up to the microservices controller. Fundamentally, the company is building a domain specific processor called a DSP, which goes on a card called the DSC. And that card goes into a server in a PCIe slot. Since we go into a server and we act as a NIC, we have to do drivers for all the OSes: Windows, Linux, ESX and FreeBSD. And on the card itself, there are two fundamental pieces of the chip. One is the P4 pipelines, where we run all our applications, if you think of things like the firewalls, the virtualization, all the security applications. And then there's the Arm SoC, with which we bring up the platform, and where we run the control plane and management plane, so that's one piece of the software. The other big piece of software is called the PSM. If you think about it, in a data center you don't want to manage one DSC at a time or one server at a time.
We want to manage thousands of servers using a single management and control point, and that's where the need for the PSM comes from. >> Yeah, excellent. You talked about a pretty complex solution there. One of the big discussion points in the networking world, and I think in general, has been the role of software. I think we all know it got a little overblown. The discussion of software does not mean that hardware goes away. I wrote a piece many years ago: if you look at how hyperscalers do things, they hyper-optimize. They don't just buy the cheapest, most generic thing. They tend to configure things and then roll it out at massive scale. So your team is well known, really from a chip standpoint; I think about the three Cisco spin-ins. If you dug underneath the covers, yes there was software, but there was an ASIC there. So, when I look at what you're doing at Pensando, you've got software, and there is a chip at the end of the day. The first form factor of this looks like a network card, the NIC that fits in there. So give us some of the challenges of software, given there's so much diversity in hardware these days, everything getting ready for AI and GPUs, and you listed off a bunch of pieces when you were talking about the architecture. So give us that software/hardware dynamic, if you would. >> I mean, if you look at where the industry has been going, Moore's law has been ending, and Dennard scaling as well. So if you want to run all the network and security services on x86, you will be wasting a bunch of x86 cycles. The customer, why does he buy x86? He buys x86 to run his application, not to run IO, or do security for IO, or policies for IO. So where we come in is basically, we do this domain specific processor, which will take away all the IO part of it, and just the compute of the application is left for x86.
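The P4 pipelines Krishna mentions are match-action processors: packet-handling behavior is expressed as entries in programmable tables rather than baked into silicon. Here is a toy Python model of that concept only; the table, keys, and actions are made up for illustration and are not Pensando's actual P4 programs:

```python
# Toy model of a P4-style match-action table of the kind run on the
# card's packet pipelines. Policy lives in table entries, so behavior
# changes by reprogramming the table, not by respinning the chip.

class MatchActionTable:
    def __init__(self, default_action):
        self.entries = {}                # match key -> action name
        self.default_action = default_action

    def add_entry(self, key, action):
        self.entries[key] = action

    def apply(self, key):
        # Exact-match lookup; fall back to the table's default action.
        return self.entries.get(key, self.default_action)

# A hypothetical "firewall" table keyed on (src, dst, dport).
fw = MatchActionTable(default_action="drop")
fw.add_entry(("10.0.0.1", "10.0.0.2", 443), "allow")

print(fw.apply(("10.0.0.1", "10.0.0.2", 443)))  # allow
print(fw.apply(("10.0.0.9", "10.0.0.2", 22)))   # drop (default)
```

Real P4 adds parsing, multiple pipeline stages, and ternary/longest-prefix matching, but the reprogram-the-table idea is the same.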
The rest is all offloaded to what we call Pensando. So the NIC is kind of one part of what we do. The NIC is how we connect to the server. But what we do inside the card is firewalls, all the networking functions: SDN, load balancing, all the storage functions, NVMe virtualization, and encryption of data at rest and data in motion. All these services are what we do in this card. And you know, yes, it's an ASIC. But if you look at what we do inside, it's not a fixed ASIC. We did work on the previous spin-ins, as you said, with ASICs, but there's a fundamental difference between those ASICs and this ASIC. In those ASICs, for example, there's a hard coded routing table or a hard coded ACL table. This ASIC is completely programmable. It's more like programmable software: we have a domain specific language called P4, and we use that P4 to program the ASIC. So the way I look at it, it's an ASIC, but it's mostly software driven. From the controllers all the way to what programs you run on the chip, it's completely software driven. >> Excellent. Pirabhu, of course, the big announcement here: HPE. You've now got the product. It's becoming generally available this month. We'd watched from the launch of Pensando, obviously, having HPE as not only an investor, but they're an OEM of the product. They've got a huge customer base. Maybe help explain, from the enterprise standpoint: if I'm buying ProLiant, where now am I going to be thinking about Pensando? What are the specific use cases? How does this translate to the general enterprise IT buyer? >> We cover a whole breadth of use cases. At the very basic level, if your use cases or your company are not ready for all the different features, you could buy it as a basic NIC and start provisioning it, and you will get all the basic network functions. But at the same time, in addition to the standard network functions, you will get always-on telemetry.
You will get a rich set of metrics, and you will get packet capture capabilities, which help very much in troubleshooting issues when they happen, or you can leave them always on as well. So you can do some of these tap kinds of functionality, which financial services do. And all these things you get without any impact on workload performance. The customers' applications don't see any performance impact when any of these capabilities are turned on. So that's as a standard network function, but beyond this, when you are ready for enforcing policies at the edge, or you're ready for enforcing stateful firewalls, distributed firewalling capabilities, connection tracking, and some of the other things Krishna touched upon like NVMe virtualization, there are all sorts of other features you can add on top. >> Okay, so it sounds like we're really democratizing some of those cloud services, or cloud-like services, for the network, down to the end device, if I have this right. >> Exactly. >> Maybe if you could: in networking, we know, with our friends in networking, we tend to get very acronym driven, with overlays and underlays and various layers of the stack there. When we talk about innovation, I'd love to hear from both of you: what are some of those key innovations, if you were to highlight just one or two? Pirabhu, maybe you can go first, and then Krishna, we would love your follow up on that. >> Sure, there are many innovations, but just to highlight a few of them: Krishna touched upon P4, and P4 is very much focused on manipulating packets, packets in and packets out, but we enhanced it so that we can also handle memory in-packet out and packet in-memory out. Those kinds of capabilities, so that we can interface with the host memory. So those innovations we are taking to the standard, and they are in the process of getting standardized as well.
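The flow-level telemetry Pirabhu describes can be pictured with a tiny model: per-flow packet and byte counters accumulated as traffic passes, then exported as NetFlow-style records. This is an illustrative Python sketch only; the flow keys, field names, and traffic here are invented, not the product's actual record format:

```python
# Minimal sketch of flow-based telemetry: accumulate per-flow
# packet/byte counters keyed by (src, dst, proto), then export them
# as records, NetFlow-style. No agent on the host is modeled here.

from collections import defaultdict

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

def observe(src, dst, proto, length):
    rec = flows[(src, dst, proto)]
    rec["packets"] += 1
    rec["bytes"] += length

# Simulate a few observed packets.
observe("10.0.0.1", "10.0.0.2", "tcp", 1500)
observe("10.0.0.1", "10.0.0.2", "tcp", 900)
observe("10.0.0.3", "10.0.0.2", "udp", 300)

for key, rec in sorted(flows.items()):
    print(key, rec["packets"], rec["bytes"])
```

The point of doing this on the card is that the counters are maintained in the data path, which is why the host workload sees no overhead.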
In addition to this, in our software stack, we touched upon the always-on telemetry capabilities. You can do flow based packet captures, NetFlow; you can get a lot of visibility and troubleshooting information. The management plane itself has some state of the art capabilities. It's distributed, highly available, and it makes it very easy for you to manage thousands of these servers. Krishna, do you want to add something more? >> Yes, the biggest thing about the platform is that when we did underlays and overlays, as you said there, everything was fixed. So tomorrow you wake up and come up with a new protocol, or you come up with a new way to do storage, right? Normally, in the hardware world, what happens is, oh, I have to sell you this new chip. That is not what we are doing. I mean, here, whatever we ship on this ASIC, you can continue to evolve and continue to innovate, irrespective of changing standards. If NVMe goes from one dot two to one dot three, or you come up with a new encapsulation beyond VXLAN, you do whatever encapsulations, whatever TLVs you want to; you don't need to change the hardware. It's more about downloading new firmware, and upgrading to the new firmware, and you get the new feature. That is one of the key innovations. That's why most of the cloud providers like us: we are not tied to hardware. It's more of a software programmable processor, so we can keep on adding features in the future. >> So one way to look at it is you get the best of both worlds, kind of a thing. You get the power and performance of an ASIC, but at the same time you get flexibility closer to that of a general purpose processor. >> Yeah, so Krishna, since you own the software piece of this, help us understand architecturally how you can deploy something today but be ready for whatever comes in the future.
That's always been the challenge: gee, maybe if I wait another six months, there'll be another generation of something, and I want to make sure that I don't miss some window of opportunity. >> Yeah, so it's a very good question. I mean, basically you can keep enhancing your features with the same performance and power and latency and throughput. But the other important thing is how you upgrade the software. Today, whenever you have an ASIC and you change the ASIC, obviously you have to pull the card out and put the new card in. Here, when you're talking about upgrading software, we can upgrade software while traffic is going through, with very minimal disruption, on the order of sub-second. Right, so you can change your protocol, for example; tomorrow we change from VXLAN to your own innovative protocol, and you can upgrade that without disrupting any existing network or storage IO. I mean, that's where the power of the platform is very useful. And if you look at where the cloud providers are going, there are customers who are using that server and deploying their applications, and they don't want to disturb that application just because you decided to do some new innovative feature. The platform capability is that you can upgrade it, and you can change your mind sometime in the future, but whatever existing traffic is there, the traffic will continue to flow and not disrupt your app. >> All right, great. Well, you're talking about clouds; one of the things we look at is multi cloud and multi vendor. Pirabhu, we've got the announcement with HPE now, ProLiant and some of their other platforms. Tell us how much work it will be for you to support things like Dell servers, or, I think your team's quite familiar with the Cisco UCS platform. Two pieces on that. Number one: how easy or hard is it to do that integration? And from an architectural design?
Does a customer need to be homogeneous in their environment, or is whatever cloud or server platform they're on independent, and you should be able to work across those? >> Yeah, first off, I should start by thanking HPE. They have been a great partner, and they have been quick to recognize the synergy and the potential of the synergy. And they have been very helpful towards this integration journey. And the way we see it, a lot of the work has already been done in terms of finding out the integration issues with HPE. And we will build upon the integration work that has been done so that we can quickly integrate with other manufacturers like Dell and Cisco. We definitely want to integrate with other server manufacturers as well, because that is in the interest of our customers, who want to consume Pensando in a heterogeneous fashion, not just from one server manufacturer. >> Just want to add one thing to what Pirabhu's saying. Basically, the way we think about it is that there's x86, and then there's all the IO, the infrastructure services, right. So for us, as long as you get power from the server, and you can get packets and IO across the PCIe bus, we want to make it a uniform layer. So Pensando, if you think about it, is a layer that can work across servers, and could work inside the public cloud; one of our customers is using this in a hybrid cloud. So we want to be the base where we can do all the storage, network and security services, irrespective of the server and where the server is placed. Whether it's placed in a colo, in the enterprise data center, or in the public cloud. >> All right, so I guess Krishna, you said first x86. Down the road, is there opportunity to go beyond Intel processors? >> Yes. I mean, we already support AMD, which is another form of x86. But the architecture doesn't prevent us from supporting any server.
As long as you follow the PCIe standard, we should; it's more of a testing matrix issue. It's not about support of any other OS, we should be able to support it. And initially, we also tested once on PowerPC. So any kind of CPU architecture, we should be able to support. >> Okay, so walk me up the application stack a little bit though. Things like virtualization, containerization. There's the question of does it work, but does it optimize? Any of us who lived through those waves remember, oh, okay, well it kind of worked, but then there was a lot of time to make things like networking work well in virtualization and then in containerization. So how about your solution? >> I mean, a good example to look at is AWS, like what AWS does with Nitro. So on Nitro, you do EBS, you do security, and you do VPC. All of those services, effectively, the way we think about it, can be encapsulated in one DSC card. And obviously, when it comes to this kind of implementation on one card, right, the first question you would ask is what happens to the noisy neighbor? So we have the right QoS mechanisms to make sure all the services go through the same card, at the same time giving guarantees to the customer that (mumbles) especially in the multi-tenant environment, whatever you're doing on one VPC will not affect the other VPC. And the advantage of the platform we have is that it is very highly scalable and highly performing. Scale will not be the issue. I mean, if you look at existing platforms, even if you look at the cloud, because when you're doing this product, obviously, we do benchmarking with the cloud and enterprises. With respect to scale, performance and latency, we did the measurements and we are an order of magnitude better compared to (sneezes) the existing clouds and whatever enterprise customers currently have.
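To make the noisy-neighbor point concrete, the isolation guarantee comes down to per-tenant rate limiting inside the card. Here is a minimal sketch of the idea, a toy token bucket per VPC; this illustrates the general QoS technique, not Pensando's actual mechanism:

```python
class TokenBucket:
    """Per-tenant rate limiter: one bucket per VPC, so a noisy neighbor
    cannot starve other tenants sharing the same card."""

    def __init__(self, rate, burst):
        self.rate = rate      # tokens refilled per tick
        self.burst = burst    # bucket capacity
        self.tokens = burst   # start full

    def tick(self):
        """Refill the bucket, capped at its capacity."""
        self.tokens = min(self.burst, self.tokens + self.rate)

    def allow(self, cost=1):
        """Admit a packet if the tenant still has budget."""
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

buckets = {"vpc-a": TokenBucket(rate=1, burst=2),
           "vpc-b": TokenBucket(rate=1, burst=2)}

# vpc-a tries to blast 5 packets in one interval; vpc-b sends 1.
sent_a = sum(buckets["vpc-a"].allow() for _ in range(5))
sent_b = sum(buckets["vpc-b"].allow() for _ in range(1))
```

Because each VPC draws from its own bucket, vpc-a exhausting its burst allowance has no effect on vpc-b's traffic in the same interval.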
>> Excellent, so Pirabhu, I'm curious, from the enterprise standpoint, are there certain applications, I think, like, from an analytics standpoint, Splunk is so heavily involved in data that it might be a natural fit, or other things that might not be fully tested out yet, anything in that ISV world that we need to think about? >> So if we're talking in terms of partner ecosystems, our enterprise customers do use many of the other products as well, and we are trying to integrate with other products so that we can get the maximum value. So if you look at it, you could get rich metrics and visualization capabilities from our product, which can be very helpful for the partner products, because they don't have to install an agent and they can get the same capability across bare metal and virtual stacks as well as containers. So we are integrating with various partners, including some CMDB, configuration management database, products, as well as data analytics or network traffic analytics products. Krishna, do you want to add anything? >> Yeah, so I think it's not just the analytics products. We're also integrating with VMware, because right now VMware is the compute orchestrator and we want to be the network policy orchestrator. In the future, we want to integrate with Kubernetes and OpenShift. So we want to add integrations so that our platform capability can be easily consumable irrespective of what kind of workload you use, what kind of traffic analytics tool you use, or what kind of data link you use in your enterprise data center. >> Excellent, I think that's a good view forward as to where some of the work is going on the future integration. Krishna and Pirabhu, thank you so much for joining us. Great to catch up. >> Thank you Stu. >> Thanks for having us. >> All right. I'm Stu Miniman. Thank you for watching theCUBE. (gentle music)
Mario Baldi, Pensando | Future Proof Your Enterprise 2020
(bright music) >> Announcer: From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a Cube conversation. >> Hi, I'm Stu Miniman, and welcome to a Cube conversation. I'm coming to you from our Boston area studio. And we're going to be digging into P4, which is the Programming Protocol-independent Packet Processors. And to help me with that, first time guest on the program, Mario Baldi, he is a distinguished technologist with Pensando. Mario, so nice to see you. Thanks for joining us. >> Thank you. Thank you for inviting me. >> Alright, so Mario, you have a very, you know, robust technical career, a lot of patents, you've worked on, you know, many technologies, you know, deep in the networking and developer world, but give our audience a little bit of your background and what brought you to Pensando. >> Yeah, yes, absolutely. So I started my professional life in academia, actually, I worked for many years in academia, about 15 years exclusively in academia, and I was focusing both my teaching and research on computer networking. And then I also worked in a number of startups and established companies, in the last eight years or so almost exclusively in the industry. And before joining Pensando, I worked for a couple of years at Cisco on a P4 programmable switch, and that's where I got in touch with P4, actually. For the occasion I wore a T-shirt from one of the P4 workshops. Which reminds me a bit of those people who, when you ask them whether they do any sports, tell you they have a membership at the gym. So I don't just have the membership, I didn't just show up at the workshop; I've really been involved in the community. And so when I learned what Pensando was doing, I immediately got very excited, because the ASIC that Pensando has developed is really extremely powerful and flexible: it's fully programmable, partly programmable with P4, partly programmable differently.
And Pensando is starting to deploy this ASIC at the edge, in hosts. And I think such a powerful and flexible device at the edge of the network really opens incredible opportunities to, on the one hand, implement what we have been doing in a different way, and on the other hand, implement completely different solutions. So, you know, I've been working most of my career in innovation, and when I saw this, I immediately got very excited and I realized that Pensando was really the right place for me to be. >> Excellent. Yeah, interesting, you know, many people in the industry talk about innovation coming out of the universities, you know, Stanford often gets mentioned, but the university that you, you know, attended and also were an associate professor at in Italy, a lot of the networking team, the MPLS, you know, team at Pensando, many of them came from there. Silvano Gai, you know, has written many books; there are, you know, very storied careers in that environment. P4, maybe step back for a second, you know, you're deep in this group, help us understand what that is, how long it's been around, you know, and who participates in P4? >> Yeah, yeah. So as you were saying before, you're one of the few from whom I've heard what P4 stands for, because everyone calls it P4 and nobody says what it really means. So, Programming Protocol-independent Packet Processors. So it's a programming language for packet processors. And it's protocol independent, so it doesn't start from assuming that we want to use certain protocols. So P4 first of all allows you to specify what packets look like, so what the headers look like, and how they can be parsed. And secondly, because P4 is specifically designed for packet processing, it's based on the idea that you want to look up values in tables. So it allows you to define tables, and keys that are being used to look up those tables and find an entry in the table.
And when you find an entry, that entry contains an action and parameters to be used for that action. So the idea is that the packet descriptions that you have in the program define how the packet should be processed: header fields are parsed, values extracted from them, and those values are used as keys to look up into tables. And when the appropriate entry in the table is found, an action is executed, and that action is going to modify those header fields. And this happens a number of times; the program specifies a sequence of tables that are being looked up, header fields being modified. In the end, those modified header fields are used to construct new packets that are being sent out of the device. So this is the basic idea of a P4 program: you specify a bunch of tables that are being looked up using values extracted from packets. So this is very powerful for a number of reasons. First of all, it's simple, which is always good as we know, especially in networking, and then it maps very well on what we need to do when we do packet processing. So writing a packet processing program is relatively easy and fast. It could be difficult to write a generic program in P4, you could not really, but a packet processing program is easy to write. And last but not least, P4 really maps well on hardware that was designed specifically to process packets, what we call domain specific processors, right. And those processors are in fact designed to quickly look up tables that might have TCAMs inside; they might have processors that are specialized in building keys, performing table lookups, and modifying those header fields. So when you have those processors, usually organized in pipelines to achieve a good throughput, then you can very efficiently take a P4 program and compile it to execute at very high speed on those processors.
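The parse, table lookup, and action sequence described above can be sketched in plain Python. The table contents and field names below are invented for illustration; a real P4 program declares these tables and actions in P4 itself and compiles them onto the hardware pipelines just described:

```python
def parse(packet):
    """Parser stage: extract header fields from a toy packet format."""
    return dict(field.split("=") for field in packet.split(","))

# A match-action table: key -> (action, parameters).
ipv4_table = {
    "10.0.0.1": ("forward", {"port": 1}),
    "10.0.0.2": ("forward", {"port": 2}),
}

def apply_table(fields):
    """Look up a key built from the headers, then run the matched action,
    which modifies the header fields (here, setting the egress port)."""
    action, params = ipv4_table.get(fields["dst"], ("drop", {}))
    if action == "forward":
        fields["egress_port"] = params["port"]
    return action, fields

action, fields = apply_table(parse("dst=10.0.0.2,ttl=64"))
```

A deparser stage would then rebuild the outgoing packet from the modified fields; packets that match no entry fall through to the default drop action.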
And this way, you get the same performance as a fixed function ASIC, but it's fully programmable, nothing is fixed. Which means that you can develop your features much faster, you can add features and fix bugs, you know, with a very short cycle, not with the four or five year cycle of baking a new ASIC. And this is extremely powerful. This is the strong value proposition of P4. >> Yeah, absolutely. I think that resonates, Mario. You know, I used to do presentations about the networking industry and you would draw timelines out there in decades, because from the standard getting defined, to, you know, the hardware getting baked, to the customers doing the adoption, things take a really long time. You brought up, you know, edge computing, which obviously, you know, is really exciting, but it is changing really fast, and there's a lot of different, you know, capabilities out there. So if you could, help us, you know, connect the dots between what P4 does and what the customers need. You know, we talked about multi-cloud and edge. What is it that, you know, P4 in general, and what Pensando is doing with P4 specifically, enables in this next generation architecture? >> Yeah, sure. So, Pensando has developed this card, which we call the DSC, distributed services card, that is built around an ASIC that has a very, very versatile architecture. It's fully programmable, and it's programmable at various levels, and one of them is in fact P4. Now, this card has a PCIe interface, so it can be installed in hosts. And by the way, this is not the only way this powerful ASIC can be deployed; it's the first way Pensando has decided to use it. And so we have this card, it can be plugged into a host, it has two network interfaces. So it can be used as a network adapter. But in reality, because the card is fully programmable and it has several processors inside, it can be used to implement very sophisticated services.
Things that you wouldn't even dream of doing with a typical network adapter, with a typical NIC. So in particular, this card, this ASIC, contains a sizable amount of memory. Right now we have two sizes, four and eight gig, but we are going to have versions of the card with even larger memory. Then it has some specialized hardware for specific functions like cryptographic functions, compression, computation of CRCs, and a sophisticated queuing system with a packet buffer to handle the packets that have to go out to the interfaces or that are coming in from the interfaces. Then it has several types of processors. It has generic processors, specifically ARM processors, that can be programmed with general purpose languages. And then a set of processors that are specific for packet processing, organized in a pipeline. Those are designed to be programmed with P4. We can very easily map a P4 program on those pipelines of processors. So that's where Pensando is leveraging P4: as the language for programming those processors that allow us to process packets at the line rate of the 200 gigabit interfaces that we have in the card. >> Great. So Mario, what about from a customer viewpoint? Do they need to understand, you know, how to program in P4, or is this transparent to them? What's the customer interaction with it? >> Oh yeah, not at all. Pensando is offering a platform that is a completely turnkey solution. First of all, the platform has a controller with which the user interacts; the user can configure policies on this controller. So using an intent based paradigm, the user defines policies, and the controller is going to push those policies to the cards. So in your hosts, in your data center, you can deploy thousands of those cards. Those cards implement distributed services. Let's say, just to give a very simple example, a distributed stateful firewall implemented on all of those cards.
The user writes a security policy, says this particular application can talk to this other particular application, and then it is translated into configuration for those cards. It's transparently deployed on the cards, which start enforcing the policies. So the user can use this system at this very high level. However, if the user has more specific needs, the platform offers several interfaces and several APIs to program it through those interfaces. The one at the highest level is a REST API to the controller. So if the customer has an orchestrator, they can use that orchestrator to automatically send policies to the controller. Or if customers already have their own controller, they can interact directly with the DSCs, with the cards in the hosts, through another API that's fully open, based on gRPC. And in this way, they can control the cards directly. If they need something even more specific, if they need a functionality that Pensando doesn't offer on those cards, hasn't already written software for, then customers can program the cards themselves, and the first level at which they can program them is the ARM processors. We have ARM processors running a version of Linux, so customers can program them by writing C code or Python. And if they have very specific needs, when they write software for the ARM processors they can leverage the P4 code that we have already written for the card, for those specialized packet processors. So they can leverage all of the protocols that our P4 program already supports. And by the way, because that's software, they can pick and choose from a library of many different protocols and features we support, and decide which to deploy and then integrate them in their software running on the ARM processor.
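The intent-based flow just described, where a user states who may talk to whom and the controller translates that into card configuration, can be sketched as follows. The field names and rule shape here are hypothetical, invented for illustration; the real controller API is different:

```python
import json

# A hypothetical intent: "the web app may talk to the db app on TCP 5432."
# All field names below are invented for this sketch.
policy = {
    "kind": "SecurityPolicy",
    "rules": [
        {
            "from": {"app": "web"},
            "to": {"app": "db"},
            "ports": ["tcp/5432"],
            "action": "allow",
        }
    ],
}

# What an orchestrator would send to the controller's REST API,
# and what the controller would then translate and push to each card.
body = json.dumps(policy)
restored = json.loads(body)
```

An orchestrator would POST a body like this to the controller; direct gRPC access to the cards would carry an equivalent, lower-level representation of the same intent.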
However, if they want to add their own proprietary protocols, if they need to execute some functionality at very high performance, that's when they can write P4 code. And even in that case, we are going to make it very simple for them, because they don't have to write everything from scratch. They don't have to worry about how to process IP packets or how to terminate TCP; we have already written that P4 code for them. They can focus just on their own feature. And we are going to give them a development environment that allows them to focus on their own little feature and integrate it with the rest of our P4 program. Which, by the way, is something that P4 is not designed for: P4 is not designed for having different programmers write different pieces of the program and put them together. But we have the means to enable this. >> Okay, interesting. So, you know, maybe bring us inside a little bit, you know, the P4 community, you're very active in it. When I look online, there's a large language consortium; many of, you know, all the hardware and software companies that I would expect in the networking space are on that list. So what's Pensando's participation in the community? And you were just teasing through, you know, what does P4 do, and then what does Pensando maybe enable, you know, above and beyond what, you know, P4 just does on its own? >> Yeah, so yes, Pensando is very much involved in the community. There has recently been an online event that substituted for the yearly P4 workshop. It was called the P4 Expert Roundtable Series, and Pensando had very strong participation. Our CTO, Vipin Jain, had the keynote speech, talking about how P4 can be extended beyond packet processing. P4, as we said, has been designed for packet processing, but today there are many applications that require message processing, which is more sophisticated. And he gave a speech on how we can go towards that direction.
Then we had a talk, resulting from a submission that was reviewed and accepted, on, in fact, the architecture of our ASIC, and how it can be used to implement many interesting use cases. And finally, we participated in a panel in which we discussed how to use P4 in NICs, in SmartNICs, at the edge of the network. And there we argued, with some use cases and examples and code, how P4 needs to be extended a little bit, because NICs have different needs and open up different opportunities compared to switches. Now, P4 was never really meant only for switches, but if we look at what happened, the community has worked mostly on switches. For example, it has defined what is called the PSA, the Portable Switch Architecture. And we see that NICs, as edge devices, have a little bit different requirements. So one of the things we are doing within the community is working within one of the working groups, called the architecture working group, to create the definition of a PNA, a Portable NIC Architecture. Now, we didn't start this activity; this activity had already started in 2018, but it did slow down significantly, mostly because there wasn't so much of a push. So now Pensando coming to the market with this new architecture really gave new life to this activity. And we are contributing actively; we have proposed a candidate for a new architecture which has been discussed within the community. And, you know, just to give you an example of why we need a new architecture: if you think of a switch, there are several reasons but one is very intuitive. If you think of a switch, you have packets coming in, they get processed, and packets go out. As we said before, the PSA architecture is meant for these kinds of operations.
If you think of a NIC, it's a little bit different, because yes, you have packets coming in, and yes, if you have multiple interfaces like our card, you might take those packets and send them out. But most likely what you want to do is process those packets and then not give the packets to the host, because otherwise the host CPU will have to process them again, to parse them again. You want to give some artifacts to the host, some pre-processed information. So you want to, I don't know, take those packets, for example, assemble many TCP segments, and provide the stream of bytes coming out of this TCP connection. Now, this requires a completely different architecture: packets come in, something else goes out. And it goes out, for example, through a PCIe bus. So you need a somewhat different architecture, and then you will need, in the P4 language, different constructs to deal with the fact that you are modifying memory, you are moving data from the card to the host and vice versa. So again, back to your question, how are we involved in the workgroups? We are involved in the architecture workgroup right now to define the PNA, the Portable NIC Architecture. And also, I believe in the future we will be involved in the language group to propose some extensions to the language. >> Excellent. Well, Mario, thank you so much for giving us a deep dive into P4, where it is, and, you know, some of the potential futures for where it will go. Thanks so much for joining us. >> Thank you. >> Alright. I'm Stu Miniman, thank you so much for watching the Cube. (gentle music)
Silvano Gai, Pensando | Future Proof Your Enterprise 2020
>> Narrator: From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, and welcome to this CUBE conversation, I'm Stu Miniman and I'm coming to you from our Boston area studio. We've been digging in with the Pensando team to understand how they're fitting into the cloud, multi-cloud, and edge discussion. Really thrilled to welcome to the program first-time guest Silvano Gai, he's a fellow with Pensando. Silvano, really nice to see you again, thanks so much for joining us on theCUBE. >> Stuart, it's so nice to see you, we used to work together many years ago, and it is really nice to come to you from Oregon, from Bend, Oregon. A beautiful town in the high desert of Oregon. >> I do love the Pacific North West. I miss the planes and the hotels, I should say, I don't miss the planes and the hotels, but going to see some of the beautiful places is something I do miss, and getting to see people in the industry I do like. As you mentioned, you and I crossed paths back through some of the spin-ins, back when I was working for a very large storage company and you were working for Cisco. You were known for writing the books, you were a professor in Italy, and many of the people that worked on some of those technologies were your students. But Silvano, my understanding is you retired, so maybe share for our audience what brought you out of that retirement and into working once again with some of your former colleagues on the Pensando opportunity. >> I did retire for a while, I retired in 2011 from Cisco, if I remember correctly.
But at the end of 2016, beginning of 2017, some old friends that you may remember and know called me to discuss some interesting ideas, which were basically the seed idea behind the Pensando product. And the ideas were interesting; what we built, of course, is not exactly the original idea, because, you know, products evolve over time, but I think we have something interesting that is adequate and probably superb for the new way to design the data center network, both for enterprise and cloud.
The only place that you can really do services is at the edge, and this is not an invention, I mean, even all the principles of cloud is move everything to the edge and maintain the network as simple as possible. So, we approach services with the same general philosophy. We try to move services to the edge, as close as possible to the server and basically at the border between the sever and the network. And when I mean services I mean three main categories of services. The networking services of course, there is the basic layer, two-layer, three stuff, plus the bonding, you know VAMlog and what is needed to connect a server to a network. But then there is the overlay, overlay like the xLAN or Geneva, very very important, basically to build a cloud infrastructure, and that are basically the network service. We can have others but that, sort of is the core of a network service. Some people want to run BGP layers, some people don't want to run BGP. There may be a VPN or kind of things like that but that is the core of a network service. Then of course, and we go back to the time we worked together, there are storage services. At that time, we were discussing mostly about fiber tunnel, now the BUS world is clearly NVMe, but it's not just the BUS world, it's really a new way of doing storage, and is very very interesting. So, NVMe kind of service are very important and NVMe as a version that is called NVMeOF, over fiber. Which is basically, sort of remote version of NVMe. And then the third, least but not last, most important category probably, is security. And when I say that security is very very important, you know, the fact that security is very important is clear to everybody in our day, and I think security has two main branches in terms of services. There is the classical firewall and micro-segmentation, in which you basically try to enforce the fact that only who is allowed to access something can access something. 
But you don't, at that point, care too much about the privacy of the data. Then there is the other branch, that is encryption, in which you are not trying to decide who can or cannot access the resource, but you are basically caring about the privacy of the data, encrypting the data so that if it is hijacked, snooped or whatever, it cannot be decoded. >> Excellent, so Silvano, absolutely the edge is a huge opportunity. When someone looks at the overall solution and says you're putting something at the edge, you know, they could just say, "This really looks like a NIC." You talked about some of the previous engagements we'd worked on, host bus adapters, smart NICs and the like. There were some things we could build in, but there were limits that we had, so what differentiates the Pensando solution from what we would traditionally think of as an adapter card in the past? >> Well, the Pensando solution has multiple pieces, but in terms of hardware it has two main pieces. There is an ASIC, which we call Capri internally. That ASIC is not strictly tied to being used only in an adapter form; you can deploy it also in other form factors, in other parts of the network, in other embodiments, et cetera. And then there is a card; the card has a PCIe interface and sits in a PCIe slot. So yes, in that sense, somebody can call it a NIC, and since it's a pretty good NIC, somebody can call it a smart NIC. We don't really like those two terms; we prefer to call it a DSC, distributed services card, but the real term that I like to use is domain specific hardware, and I like to use domain specific hardware because it's the same term that Hennessy and Patterson use in a beautiful piece of literature that is their Turing Award lecture. It's on the internet, it's public, and I really ask everybody to go and find it and listen to that beautiful piece of modern literature on computer architecture, the Turing Award lecture of Hennessy and Patterson.
And they have introduced the concept of domain specific hardware, and they also explain the justification for why it is important to look at domain specific hardware now. And the justification, basically in a nutshell, and we can go deeper if you're interested, but in a nutshell it is that SPECint, that is, the single-threaded performance measurement of a CPU, is not growing fast at all; it is only growing nowadays by a few percent a year, maybe 4% per year. And with this slow growth of the SPECint performance of a core, you know, the core needs to be really used for user applications, for customer applications, and everything that is ancillary can be moved to some domain specific hardware that can do it in a much better fashion. And by no means do I imply that the DSC is the best example of domain specific hardware. The best example of domain specific hardware is in front of all of us, and that is GPUs. And not GPUs for graphics processing, which are also important, but GPUs used basically for artificial intelligence, machine learning inference. You know, that is a piece of hardware that has shown that something can be done with a performance that a general purpose processor cannot match. >> Yeah, it's interesting, right. If you turn back the clock 10 or 15 years ago, I used to be in arguments, and you'd say, "Do you build an offload, "or do you let it happen in software?" And I was always like, "Oh, well, Moore's law will mean that, "you know, the software solution will always win, "because if you bake it in hardware, it's too slow." It's a very different world today; you talk about how fast things speed up. From your customer standpoint though, often some of those architectural things are something that I've looked for my suppliers to take care of. Speak to the use case. What does this all mean from a customer standpoint? What are some of those early use cases that you're looking at?
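As a rough arithmetic check on the growth rates discussed above: at a fixed annual improvement, doubling time is log(2)/log(1+r). The 4% figure is from the interview; the 52% figure is an assumed stand-in for the historical Moore's-law-era pace Hennessy and Patterson describe, used here only for contrast.

```python
import math

def years_to_double(annual_growth: float) -> float:
    """Years for performance to double at a fixed annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

slow = years_to_double(0.04)  # ~4%/year single-thread CPU gains today
fast = years_to_double(0.52)  # assumed ~52%/year historical pace, for contrast
```

At 4% a year a core takes well over a decade and a half to double, versus under two years historically, which is the whole case for moving ancillary work off the CPU.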
>> Well, as always, you get a bit surprised by the use cases, in the sense that you start to design a product thinking that some of the coolest things will be the dominant use cases, and then you discover that something you had never really thought about has the most interesting use case. One that we have thought about since day one, but that is really becoming super interesting, is telemetry. Basically, measuring everything in the network, and understanding what is happening in the network. I was speaking with a friend the other day, and the friend was asking me, "Oh, but we have had SNMP for many, many years; "what is the difference between SNMP and telemetry?" And the difference to me, the real difference, is that in SNMP, or in many of these management protocols, you involve a management plane, you involve a control plane, and then you go to read something that is in the data plane. But the process is so inefficient that you cannot really get a huge volume of data, and you cannot get it frequently enough, with enough performance. Doing telemetry means designing a data path, building a data path that is capable of not only measuring everything in realtime, but also sending out that measurement without involving anything else, without involving the control path and the management path, so that the measurement becomes really very efficient and the data that you stream out becomes really usable, actionable data in realtime. So telemetry is clearly the first one; it is important. Another one that, honestly, we had built but we weren't thinking was going to have so much success is what we call bidirectional ERSPAN. Basically, it is just the capability of copying data, and sending the data that the card sees to a station. And that is very, very useful for replacing what are called TAP networks, which are networks that many customers put in parallel to the real network just to observe the real network and be able to troubleshoot and diagnose problems in the real network.
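A hedged sketch of the distinction being drawn: a streaming-telemetry record is framed and pushed out directly, with no SNMP-style poll through the control and management planes. The record format, field names, and collector address below are invented for illustration; real data-path telemetry would frame and emit records in hardware at wire rate.

```python
import json
import socket
import time

def make_telemetry_record(flow_id: str, packets: int, bytes_: int) -> bytes:
    """Serialize one flow measurement as a compact JSON record.

    The point is that the record is emitted directly from the measuring
    path, rather than being polled through a management protocol.
    """
    record = {
        "ts": time.time(),
        "flow": flow_id,
        "packets": packets,
        "bytes": bytes_,
    }
    return json.dumps(record).encode()

def stream_record(record: bytes, collector=("203.0.113.10", 9999)) -> None:
    """Push one record to a collector over UDP, fire-and-forget."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(record, collector)
    sock.close()

rec = make_telemetry_record("10.0.0.1->10.0.0.2:443", packets=120, bytes_=98304)
```

Because nothing has to traverse the control or management plane per measurement, records like this can be streamed continuously for every flow, which is exactly what a polled protocol cannot sustain.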
So these two features, telemetry and ERSPAN, which are basically troubleshooting features, are the two that are getting the most traction in the beginning. >> You're talking about realtime things like telemetry. You know, the applications and the integrations that you need to deal with are so important. Back in some of the previous start-ups that you'd done, it was getting ready for, say, how do we optimize for virtualization; today you talk cloud-native architectures, streaming, very popular, very modular, often container-based solutions, and things change constantly. You look at some of these architectures, and it's not a single thing that goes on for a long period of time, but lots of things that happen over shorter periods of time. So, what integrations do you need to do, and architecturally, how do you build things to make them, as you say, future-proof for these kinds of cloud architectures? >> Yeah, what I mentioned were just the two low-hanging fruit, if you want, the first two low-hanging fruit of this architecture. But basically, the two that come immediately after, and where there is a huge amount of value, are the distributed stateful firewall, with micro-segmentation support. That is a huge topic in itself, so important nowadays that it is absolutely fundamental to being able to build a cloud. That is very important. And the second one is wire-rate encryption. There is so much demand for privacy, and so much demand to encrypt the data, not only between data centers but now also inside the data center. And when you look at a large bank, for example: a large bank is no longer a single organization. A large bank is multiple organizations that are compartmentalized by law, that need to keep things separate by law, by regulation, by SEC regulation. And if you don't have encryption, and if you don't have a distributed firewall, it is really very difficult to achieve that.
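A minimal sketch of the micro-segmentation idea described above: an explicit allow-list evaluated at the edge of every server, with default deny between segments. The segments, ports, and function names below are hypothetical; a distributed firewall enforces rules like these in the data path rather than in Python.

```python
from ipaddress import ip_address, ip_network

# Hypothetical rules: which segments may talk to which, and on which port.
# Enforced at every server edge rather than at one central chokepoint.
ALLOWED = [
    (ip_network("10.1.0.0/24"), ip_network("10.2.0.0/24"), 5432),  # app -> db
    (ip_network("10.0.0.0/24"), ip_network("10.1.0.0/24"), 8443),  # web -> app
]

def permitted(src: str, dst: str, port: int) -> bool:
    """Return True only if a flow matches an explicit allow rule (default deny)."""
    s, d = ip_address(src), ip_address(dst)
    return any(s in sn and d in dn and port == p for sn, dn, p in ALLOWED)
```

The default-deny posture is what delivers the legal compartmentalization described for the bank example: two business units on the same physical network simply have no rule permitting traffic between them.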
And then, you know, there are other applications. We mentioned storage, NVMe, which is a very nice application, and then there is even more, if you go look at load balancing between servers, doing compression for storage, and other possible applications. But I sort of lost your real question. >> So, just part of the pieces: when you look at integrations that Pensando needs to do, for maybe some of the applications that you would tie into, any of those that come to mind? >> Yeah, well, for sure. It depends; I see two main branches again. One is the cloud providers, and one is the enterprises. The cloud providers basically have a huge management infrastructure that is already built, and they want just the card to adapt to this, to be controllable by this huge management infrastructure. They already know which rules they want to send to the card, they already know which features they want to enable on the card. They already have all that; they just want the card to provide the data plane performance for that particular feature. So they're going to build something particular, something specific for that cloud provider, that adapts to that cloud provider's architecture. They want the flexibility of having an API on the card, like a REST API or gRPC, with which they can easily program, monitor and control the card. When you look at the enterprise, the situation is different. Enterprises are looking at two things. Two or three things. The first thing is a complete solution. They don't have the management infrastructure that a cloud provider has built, so they want a complete solution that has the card and the management station and all that is required to make, from day one, a working solution, which is absolutely correct in an enterprise environment. They also want integration, and integration with the tools that they already have.
If you look at mainstream enterprises, one dominant presence is clearly VMware virtualization, in terms of ESX and vSphere and NSX. And so most of the customers are asking us to integrate with VMware, which is a very reasonable demand. And then of course, there are other players, not so much in the virtualization space, but, for example, in the data collection space and the data analysis space, and for sure Pensando doesn't want to reinvent the wheel there, doesn't want to build a data collector or data analysis engine or whatever; that is a lot of work, and there are a lot out there, so integrations with things like Splunk, for example, are kind of natural for Pensando. >> Excellent. So, you talked about some of the places where Pensando doesn't need to reinvent the wheel, and you talked through a lot of the different technology pieces. If I had to have you pull out one, what would you say is the biggest innovation that Pensando has built into the platform? >> Well, the biggest innovation is this P4 architecture. And the P4 architecture was a sort of gift that was given to us, in the sense that it was not invented for what we use it for. P4 was basically invented to have programmable switches. The first big P4 company was clearly Barefoot, which was then acquired by Intel, and Barefoot built a programmable switch. But if you look at the reality of today, most of the people want the network to be super easy. They don't want to program anything into the network. They want to program everything at the edge; they want to put all the intelligence and the programmability at the edge. So we borrowed the P4 architecture, which is a fantastic programmable architecture, and we implemented it at the edge. It's also easier because the bandwidth is clearly more limited at the edge compared to the core of a network. And that P4 architecture gives us a huge advantage.
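To make the match-action idea behind P4 concrete, here is a toy Python model of a single table; every name below is invented for illustration, real P4 programs compile to the ASIC's pipeline rather than running as software, and this sketch uses exact match where real IP forwarding would use longest-prefix match.

```python
def set_next_hop(pkt, port):
    """Action: forward the packet out a given port."""
    pkt["egress_port"] = port
    return pkt

def drop(pkt, _arg=None):
    """Default action: mark the packet dropped."""
    pkt["dropped"] = True
    return pkt

class MatchActionTable:
    """Toy model of a P4 match-action table: the control plane installs
    entries, and the data plane applies the matching action per packet."""

    def __init__(self, key_field, default_action=drop):
        self.key_field = key_field
        self.entries = {}  # match value -> (action, argument)
        self.default = default_action

    def add_entry(self, value, action, arg=None):
        self.entries[value] = (action, arg)

    def apply(self, pkt):
        action, arg = self.entries.get(pkt[self.key_field], (self.default, None))
        return action(pkt, arg)

# "Control plane" populates the table; "data plane" applies it per packet.
ipv4_fwd = MatchActionTable(key_field="dst_ip")
ipv4_fwd.add_entry("10.0.0.1", set_next_hop, 3)

out = ipv4_fwd.apply({"dst_ip": "10.0.0.1"})
```

The design point Silvano describes is that the parser and the actions are programmable after the silicon ships, so a brand-new encapsulation only needs a new program, not a new ASIC.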
If you, tomorrow, come up with the Stuart Super Duper Encapsulation Technology, I can implement in Capri the Stuart, whatever it was called, Super Duper Encapsulation Technology, even though when I designed the ASIC I didn't know that encapsulation existed. It's the data plane programmability, the capability to program the data plane, and to program the data plane while maintaining wire-speed performance, which I think is the biggest benefit of Pensando. >> All right, well, Silvano, thank you so much for sharing your journey with Pensando so far. Really interesting to dig into it, and I absolutely look forward to following the progress as it goes. >> Stuart, it's been really a pleasure to talk with you. I hope to talk with you again in the near future. Thank you so much. >> All right, and thank you for watching theCUBE, I'm Stu Miniman, thanks for watching. (upbeat music)
Stephen Herzig, University of Arkansas and Andrew McDaniel, Dell EMC | Dell Technologies World 2018
>> Announcer: Live from Las Vegas It's theCube covering Dell Technologies World 2018 brought to you by Dell EMC and its ecosystem partners. >> Welcome back to theCube's live coverage of the Inaugural Dell Technologies World 2018 here in Las Vegas. Getting to the end of three days of wall-to-wall live coverage from two sets I'm Stu Miniman, joined by my co-host John Troyer, and for those of you that haven't attended one of these shows, sometimes like "Oh, you're going to Vegas, this is some boondoggle," but I'm really happy, I've got a customer, one of the Dell EMC employees, here. A lot of stuff goes on. There's learning, there's lotsa meetings, there's, you know, you come here, you kind of, you know, get as much out of it as you can. So, first, Stephen Herzig, who's the Director of Enterprise Systems at the University of Arkansas, >> Correct, yes. >> Stu: You had a busy week so far. >> I have. >> Thank you for joining us >> You bet. >> Stu: Also, Andrew McDaniel, who's the Senior Director of Ready Solutions for VDI with Dell EMC, thank you for joining us-- >> Thanks guys >> Alright, so, Stephen, first of all, give us a little bit about your background and University of Arkansas, I think most people know the Razorbacks-- >> Stephen: That's right, the Razorbacks! >> Talk about your org and your role there. >> Yeah, I'm Director of Enterprise Systems, as you mentioned. We're an R1 University, we have about 27,000 students, about 5,000 faculty and staff in the university. And, so my organization is responsible for maintaining, as I said, all the enterprise systems, essentially everything in the data center on the floor to support all the educational activities. Now there is some distributed or commonly known as shadow IT organizations throughout the university and we work quite closely with them, too. >> Okay, you stamp out all that shadow IT stuff and pull it all back in, right? >> Stephen: (laughs) No, no! No, absolutely not. 
>> We'll get to that. Andrew, before we get into more about the university, tell us a little bit about your role and your org inside Dell EMC. >> So my organization basically develops the end-to-end VDI solutions that Dell EMC sells globally. So, we work with partners such as VMware and Citrix to put together the industry-leading solutions for VDI. Tested, validated, engineered, to give really good confidence in the solution the customer's going to buy. >> Okay, John and I spent many years looking at these, you know, themes in the industry, all that, you know, but, Stephen, before we get into the VDI piece, give us, what are some of the challenges that you're facing at the university? We've had, you know, from an IT standpoint, we know the technology requirements are greater than ever. While tuitions go up, budgets are always a challenge. So, when you're talking to your peers, what are the things you're all commiserating about or, you know, working at? >> Yeah, like any IT organization, it's a challenge to do more with less. We're constantly being required to support more systems, more technology, and technology is becoming more and more an integral part of the educational process. We also have students coming from very diverse backgrounds, and so with the kinds of computing devices that they're able to bring to the university with them, some can afford high-end, some not, it's a challenge for us to deliver the applications to them, no matter what kind of device they happen to bring. >> Alright, so, sounds like VDI is something that fits there-- >> Yes. >> Before we get into the actual solution, tell us, what was the struggle you were facing, what led to that, was there a mandate?
How did you get to the solution that you were-- >> Well, really, we were struggling with those challenges. We're a very small IT team, and as those things grew, we knew we had to find a way to reduce the number of resources that we were supporting: all the end points, all the machines in the labs, all the machines on faculty and staff desks, and again, like I said, the students bring their own devices, which we had to support as well. >> Alright, so, you ended up choosing a Dell solution. Maybe give us a little bit about that process, and walk us through the project some. >> Yeah, we really needed a solution. We could not go out and assemble pieces and parts from a lot of different vendors, and we needed a solution that was tailored to our needs, that fit. VDI is complex by its nature, but some vendors made it really complex. So, we had to find one that was right for our environment, for what we were trying to achieve, and of course, at the right price point. Higher education, we're not flush with cash. >> That's always been really hard, I think that's been the hard thing about VDI, right? It's always been kind of complicated and hard to do, at least back in the day, and then when you did it, half the things didn't work, and the things that didn't work were really weird, and the user was very confused. "This application works, but this one doesn't." And, "Where's my cursor?" and "Everything went wonky all of a sudden and I can't log in at 9am." I mean, I'm kind of curious, what is necessary, maybe, from eye-level, in a modern VDI solution stack that makes it easy? You know, is it the hypervisor, the end clients? >> I think, John, you know, we've seen such great advances in the software side of it, right? So, if you look at Horizon as a broker, VMware Horizon, the advances that they've made in things like protocols, right, so Blast Extreme, for example. One of the big challenges that we've always had is things like Lync or Skype in a VDI environment.
It was, it made a disaster for many customers, right? So, that has been solved by VMware and the advances that they made, above and beyond what was possible with PC over IP. So, that's one of the things. From a hardware perspective, you know, one of the challenges we frequently had in VDI was poor user experience, right? And it was typically because the graphics requirement for the application could not be delivered by the CPU alone. So GPUs, the Nvidia K1 and K2's, then the M10 and M60's, and moving forward into the P4 and P40's, have really helped us to improve that user experience, and it's starting to get to a point where GPUs are a standard delivery within any VDI deployment. So you get a really good experience moving forward. And as you know, if you can't deliver a good user experience, the project is dead before it even starts. Alright, so that's a big challenge. >> Stephen, do you have any commentary on some of the challenges that we faced before? What was your experience like? >> Yeah, that's exactly right. We made the decision early on to include a GPU in every session that we served up. And we weren't quite sure, 'cause it is an additional expense, but it was one of the best decisions that we've made. It really does make all the difference. >> Was there something specific from the application or user base, and how they were using it, that led you to that? >> Well, we are all Windows 10, and Windows 10 just looks better, it runs better: the video, scrolling through a Word document, the text. Some of it is very nuanced, but it makes a big difference in the user experience. And of course, we have higher-end users using CAD programs, things like that, you know, in the School of Engineering; they needed the GPU for what they were doing. >> Andrew, wondering if you could give us a little bit of an update on the stack. So, I think back to, on the EMC side, I watched everything from flash on the converged side.
On the Dell side, there was the Wyse acquisition of course, then EMC and VMware coming together, so, a long journey. But even the first year we did theCube, you know, Dell had some big customers doing large-scale, cost-effective VDI, because they had that, to use some of the marketing terms I've heard here, end to end, with the devices all the way through. So, bring us up to 2018. >> Yeah, so, I guess, you know, one of the challenges that Stephen spoke about is, previously, the hassle of having to go and buy each of the individual components from multiple different vendors. So, you're buying your storage from one vendor, compute from another, GPUs from another, hypervisor from another, broker from another, and so on. So, it gets very complicated to manage all of that. And so, we had lots of customers who had run into scenarios where, say, a BIOS firmware and a driver revision were not compatible, and so we'd run into those kinds of problems that we were talking about earlier on, right? So, I think, you know, bringing all of that together, in Dell Technologies, we can now deliver every single aspect of what you need for a VDI deployment. So, we created a bundle called VDI Complete. It uses vSAN ReadyNodes or VxRail, right? So, hyperconverged, massive from a VDI perspective, and I'll come back to that in a second. It then pairs Horizon Advanced or Horizon Enterprise with those base platforms, and the Dell Wyse thin clients. So, every aspect, true end to end, is delivered by Dell Technologies, and there's simply no other vendor in the market who can do that. So, what that basically does is give the customer confidence that everything that has been tested can be owned, from a support perspective, by Dell Technologies. Alright, so, if you've got a problem, we're not going to hand you off to another company to go solve that issue, or lay blame with somebody else. It's fully our stack, and as a result, we take full responsibility for it.
And that's one of the benefits that we have with customers like the University of Arkansas. >> And that was important to us. That single point of contact for support was really important to us. >> Stephen, I wonder if you could talk about it from an operational standpoint. You said you've got a small team. One of the challenges, at least years ago, was like, "Oh, wait! I have the guy that walked around "and did the desktops, now I centralized it, "who owns it, you know, how do we sort through this? "You know, we've got a full stack there. "Simplicity's one of the big messages of HCI," but what was the reality for your team and the roles, how did you change? >> Well, one of the first areas, or actually, the first area that we implemented VDI in was the labs. Hundreds of end points across the campus. And, before VDI, you would walk into the lab, and a certain percentage of the machines would always be down. They needed updating, there was a virus, somebody spilled a coffee on the machine, you know, that kind of thing. After VDI, when you walked into the lab, 100% of the end points were always up, and there was no noise in the lab, except when somebody printed. So, the maintenance required, the resources for my team and these distributed IT teams, were reduced drastically. As a matter of fact, some of the distributed teams had 50% of their resources freed up. They could then go and do more high-value projects and deliver high-value services to their colleges.
We wanted that same high-quality experience in VDI that they had with a laptop or a desktop. >> The monitors are an important thing to consider, right, 'cause a lot of customers will think about the data center side of VDI, right, so, get lots of compute, good, high-performing storage, good network, and then they put a really poorly designed thin client or an old desktop PC, or something like that, on the end, and wonder why they're not getting good performance, right? So, we just launched yesterday the Dell Wyse 5070. It's the first thin client in the market that can have six monitors attached to it; four of those can be 4K, and two 2K, right? So, it's immense from a display perspective, and this is what our customers are demanding. Especially in financial services, for example, or in automotive design, you know, in CAD labs, for example, you need three or four really good, high-quality screens attached. >> Well, I'll date myself, I wish I had that when I was playing Doom when I was in college in the labs. >> That too! >> That does bring into question your upgrade scenarios, moving on to the future, right? You used to have all those janky old PCs that you'd kind of, maybe they'd slide out the back door, maybe they'd get recycled, or whatever, but now it's a different refresh cycle, and maybe even different use cases. >> Yeah, the lifespan of the endpoints is much longer with the VDI solution. >> John: It's got to be good, yeah. I was curious, you mentioned the converged infrastructure, too, Andrew. I mean, how does that play into it? (muffled) >> Yeah, so I mean, you know, traditionally, a SAN infrastructure was used in VDI, alright? So, for us, that would have been EqualLogic and Compellent, historically. Now, we're seeing that VDI market almost totally transition to hyperconverged. Alright, so vSAN has really revolutionized VDI, okay? I'd say, you know, a good 30, 35% of all VxRail and vSAN deployments that we do are in the VDI space.
So, it's really, and I would say about 90, 95% of our VDI deployments are on hyperconverged rather than a traditional SAN infrastructure. That's really where VDI has moved now, 'cause it gives customers the ability to scale on demand. Instead of having to go and buy another half-million dollar storage array to add another thousand users, you can simply add in a couple more compute nodes with the storage built in. For us, hybrid works very well. So, a hybrid-disk configuration is working very well in most VDI deployments. Some customers require all-flash; it depends on the applications and the kind of performance that they want to get from it. But for a majority of customers, hyperconverged with the hybrid configuration works brilliantly. >> So, Stephen, I want to give you the final word. Sounds like everything went really well, but one of the things we always like to understand, when you're talking with your peers, is "Hey, what did you learn? "What would you do a little different, "either internally, or configuration-wise, or roll-out?" What would you tell your peers? >> Well, when we implemented VDI, it was just before VDI Complete came out. So, the work that's done in the VDI Complete solution, we didn't have. So, as we look to the future, and we want to expand and grow our environment, VDI Complete will be a huge help. Without it, it took us about four months to stand it up, which, considering what we accomplished, was a very short time, but if we had had VDI Complete, that time would've been even more compressed. So, looking to the future, we're looking to expand using VDI Complete. >> Just to tie a bow on this, Andrew: it sounds like, if I've got VDI, I don't have to start brand new; it can fit with existing environments. How does that all work? >> Absolutely, I mean, we've got lots of customers who've already done Citrix or VMware deployments, right? Ideally, you want to connect with one broker.
So you want to stick with one broker. But, we can bring in a hyperconverged VDI solution into your existing user estate, and merge into that. So, that's pretty common. >> Alright, well, Andrew and Stephen, thank you so much for sharing the story. Really great to always get the customer stories. We're getting towards the end of three days of live coverage here at the Sands Convention Center in Las Vegas, at Dell Technologies World 2018. For John Troyer, I'm Stu Miniman, thanks for watching theCube. (techno music)
Niel Viljoen, Netronome & Nick McKeown, Barefoot Networks - #MWC17 - #theCUBE
(lively techno music) >> Hello, everyone, I'm John Furrier with theCUBE. We are here in Palo Alto to showcase a brand new relationship and technology partnership and technology showcase. We're here with Niel Viljoen, who's the CEO of Netronome. Did I get that right? (Niel mumbles) Almost think that I will let you say it, and Nick McKeown, who's Chief Scientist and Chairman and the co-founder of Barefoot Networks. Guys, welcome to the conversation. Obviously, a lot going on in the industry. We're seeing massive change in the industry. Certainly, digital transformation, the buzzword the analysts all use, but, really, what that means is the entire end-to-end digital space, with networks all the way to the applications are completely transforming. Network transformation is not just moving packets around, it's wireless, it's content, it's everything in between that makes it all work. So let's talk about that, and let's talk about your companies. Niel, talk about your company, what you guys do, Netronome and Nick, same for you, for Barefoot. Start with you guys. >> So as Netronome, our core focus lies around SmartNICs. What we mean by that, these are elements that go into the network servers, which in this sort of cloud and NFV world, gets used for a lot of network services, and that's our area of focus. >> Barefoot is trying to make switches that were previously fixed function, turning them into something that those who own and operate networks can program them for themselves to customize them or add new features or protocols that they need to support. 
>> And Barefoot, you're walking in the park, you don't want to step in any glass, and get a cut, and I like that, love the name of the company, but brings out the real issue of getting this I/O world if there were NICs, it throws back the old school mindset of just network cards and servers, but if you take that out on the Internet now, that is the I/O channel engine, real time, it's certainly a big part of the edge device, whether that's a human or device, IoT to mobile, and then moving it across the network, and by the way, there's multiple networks, so is this kind of where you guys are showcasing your capabilities? >> So, fundamentally, you need both sides of the line, if I could put it that way, so we, on the server side, and specifically, also giving visibility between virtual machines to virtual machines, also called VNFs to VNFs in a service chaining mechanism, which has what a lot of the NFV customers are deploying today. >> Really, as the entire infrastructure upon which these services are delivered, as that moves into software, and more of it is created by those who own and operate these services for themselves, they either create it, commission it, buy it, download it, and then modify it to best meet their needs. That's true whether it's in the network interface portion, whether it's in the switch, and they've seen it happen in the control plane, and now it's moving down so that they can define all the way down to how packets are processed in the NIC and in the switches, and when they do that, they can then add in their ability to see what's going on in ways that they've never been able to do before, so we really think of ourselves as providing that programmability and that flexibility down, all the way to the way that the packets are processed. >> And what's the impact, Nick, talk about the impact then take us through like an example. 
You guys are showcasing your capabilities to the world, and so what's the impact and give us an example of what the benefit would be. I mean, what goes on like this instrumentation, certainly, everyone wants to instrument everything. >> Niel: Yes. >> Nick: Yeah. >> But what's the practical benefit. I mean who wins from this and what's the real impact? >> Well, you know, in days gone by, if you're a service provider providing services to your customers, then you would typically do this out of vertically integrated pieces of equipment that you get from equipment vendors. It's closed, it's proprietary, they have their own sort of NetFlow, sFlow, whatever the mechanism that they have for measuring what's going on, and you had to learn to live with the constraints of what they had. As this all gets kind of disaggregated and broken apart, and that the owner of the infrastructure gets to define the behavior in software, they can now chain together the modules and the pieces that they need in order to deliver the service. That's great, but now they've lost that proprietary measurement, so now they need to introduce the measurement that they can get greater visibility. This actually has created a tremendous opportunity and this is what we're demonstrating, is if you can come up with a uniform way of doing this, so that you can see, for example, the path that every packet takes, the delay that it encounters along the way, the rules that it encounters that determines the path that it gets, if it encounters congestion, who else contributed to that congestion, so we know who to go blame, then by giving them that flexibility, they can go and debug systems much more quickly, and change them and modify them. >> It's interesting, it's almost like the aspirin, right? You need, the headache now is, I have good proprietary technology for point measurement and solutions, but yet I need to manage multiple components. 
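[Editor's note] The visibility Nick describes — the path every packet takes, the delay it encounters at each hop, and who shared the congested queue — can be sketched in a few lines. This is a hedged illustration in plain Python, not the actual implementation: the device names and microsecond delays are invented, and the real systems stamp this metadata into packet headers at line rate.

```python
# Sketch of in-band telemetry: every device on the path appends its own
# record (device id, queue delay) to metadata carried with the packet,
# so the packet itself can later be asked which path it took and how
# long it waited at each hop. No extra measurement packets are created.

class Device:
    def __init__(self, name, queue_delay_us):
        self.name = name
        self.queue_delay_us = queue_delay_us

    def process(self, pkt):
        pkt.setdefault("telemetry", []).append(
            {"hop": self.name, "queue_delay_us": self.queue_delay_us}
        )
        return pkt

# Hypothetical service chain: switch -> NIC -> VM -> switch.
path = [Device("switch-1", 3), Device("nic-a", 1),
        Device("vm-firewall", 12), Device("switch-2", 2)]

pkt = {"payload": b"hello"}
for dev in path:
    pkt = dev.process(pkt)

# Debugging becomes asking the packet: where did you go, where did you wait?
route = [rec["hop"] for rec in pkt["telemetry"]]
slowest = max(pkt["telemetry"], key=lambda rec: rec["queue_delay_us"])
print(route)           # -> ['switch-1', 'nic-a', 'vm-firewall', 'switch-2']
print(slowest["hop"])  # -> vm-firewall
```

With the full per-hop record on the packet, the operator can see at a glance that the firewall VM, not the network, contributed most of the delay — the "who to go blame" question Nick raises.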
>> I think there's an add-on to what Nick said, which is the whole key point here which is the programmability, because there's data, and then there's information. Gathering lots and lots of telemetry data is easy. (John chuckles) The problem is you need to have it at all points, which is Nick's key point, but the programmability allows the DevOps person, in other words, the operational people within the cloud or carrier infrastructure, to actually write code that identifies and isolates the data, the information rather than the data that they need. >> So is this customer-based for you guys, the carriers, the service providers, who's your target audience? >> Yep, I think it's service providers who are applying the NFV technologies, in other words, the cloud-like technologies. I always say the real big story here is the cloud technologies rather than just the cloud. >> Yeah, yeah. >> And how that's-- >> And same for you guys, you guys have this, this joint, same target customer. >> Yeah, I don't think there's any disagreement. >> Okay. (laughs) Well, I want to get drilling to the whole aspirin analogy 'cause it's of the things that you brought up with the programmability because NFV has been that, you know, saving grace, it's been the Holy Grail for how many years now, and you're starting to see the tides shifting now towards where NFV is not a silver bullet, so to speak, but it is actually accelerating some of the change, and I always like to ask people, "Hey, are you an aspirin or you a vitamin?" One guest told me, "I'm a steroid. "We make things grow faster." I'm like, "Okay," but in a way, the aspirin solves a problem, like immediate headaches, so it sounds like a lot of the things that you mentioned. 
That's an immediate benefit right there on the instrumentation, in an open way, multi-component, multi-vendor kind of, benefits of proprietary but open, but the point about programmability gives a lot of headroom around kind of that vitamin, that steroid piece where it's going to allow for automation, which brings an interesting thing, that's customizable automation, meaning, you can apply software policy to it. Is that kind of like, can you tease that out, is that an area that you guys talking about? >> I think the first thing that we should mention is probably the new language called P4. I think Nick will be too modest to state that but I think Nick has been a key player in, along with his team and many other people, in the definition and the creation of this language, which allows the programmability of all these elements. >> Yeah, just drill down, I mean, toot your own horn here, let's get into it because what is it and what's the benefit and what is the real value, what's the upshot of P4? >> Yeah, the way that hardware that processes packets, whether it's in network interface cards, or in switching, the way that that's been defined in the past, has been by chip designers. At the time that they defined the behavior, they're writing Verilog or VHDL, and as we know, people that design chips, don't operate big networks, so they really know what capabilities to put in-- >> They're good at logic in a vacuum but not necessarily in the real world, right? Is that what you (laughs). >> So what we-- >> Not to insult chip designers, they're great, right? 
>> So what we've all wanted to do for some time is to come up with a uniform language, a domain-specific language that allows you to define how packets will be processed in interfaces, in switches, in hypervisor switches inside the virtual machine environments, in a uniform way so that someone who's proficient in that language can then describe a behavior that can then operate in different paths of the chained services, so that they can get the same behavior, a uniform behavior, so that they can see the network-wide, the service-wide behavior in a uniform way. The P4 language is merely a way to describe that behavior, and then both Netronome and Barefoot, we each have our own compilers for compiling that down to the specific processing element that operates in the interfaces and in the switches. >> So you're bridging the chip layer with some sort of abstraction layer to give people the ability to do policy programming, so all the heavy lifting stuff in the old network days was configuration management, I mean all the, I mean that was like hard stuff and then, now you got dynamic networks. It even gets harder. Is this kind of where the problem goes away? And this is where automation. >> Exactly, and the key point is the programmability versus configurability. >> John: Yeah. >> In a configurable environment, you're always trying to pre-guess what your customer's going to try to look at. >> (chuckles) Guessing's not good in the networking area. That's not good for five nines. 
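[Editor's note] The match-action model Nick describes is the core of P4: match on header fields, apply an action such as forwarding or dropping, and let each vendor's compiler map the tables onto its own silicon. The following is a hedged conceptual sketch in plain Python, not P4 itself; the table entries, addresses, and port numbers are invented for illustration.

```python
# Toy match-action table in the spirit of P4: a packet's header field is
# matched against table entries and the winning entry's action runs.
# Real P4 programs compile such tables down to NIC or switch hardware.

def forward(pkt, port):
    pkt["egress_port"] = port
    return pkt

def drop(pkt):
    pkt["egress_port"] = None  # no egress: packet is discarded
    return pkt

def apply_table(entries, default, pkt):
    """Match on dst_ip; fall back to the default action on a miss."""
    action, args = entries.get(pkt["dst_ip"], default)
    return action(pkt, *args)

# Hypothetical forwarding table (exact-match for simplicity; real
# tables also support longest-prefix and ternary matching).
ipv4_table = {
    "10.0.0.1": (forward, (1,)),
    "10.0.0.2": (forward, (2,)),
}

out = apply_table(ipv4_table, (drop, ()), {"dst_ip": "10.0.0.2"})
print(out["egress_port"])  # -> 2
```

The point of the programmability-versus-configurability distinction: here the operator authors the tables and actions themselves, rather than choosing among behaviors a chip designer guessed at in advance.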
>> In the new world that we're in now, the customer actually wants to define exactly what the information is they want to extract-- >> John: I wanted to get-- >> Which is your whole question around the rules and-- >> So let me see if I can connect the dots here, just kind of connect this for, and so, in the showcase, you guys are going to show this programmability, this kind of efficiency at the layer of bringing instrumentation then using that information, and/or data depending on how it's sliced and diced via the policy and programmability, but this becomes cloud-like, right? So when you start moving, thinking about cloud where service providers are under a lot of pressure to go cloud because Over-The-Top right now is booming, you're seeing a huge content and application market that's super ripe for kind of the, these kinds of services. They need that ability to have the infrastructure be like software, so infrastructure is code, is the DevOps term that we talk about in our DevOps world, but that has been more data-centered kind of language, with developers. Is it going the same trajectory in the service provider world because you have networks, I mean they're bigger, higher scale. What are some of those DevOps dynamics in your world? Can you talk about that and share some color on that? >> I mean, the way in which large service providers are starting to deliver those services is out of something that looks very much like the cloud platform. In fact, it could in fact be exactly the same technology. The same servers, the same switches, same operating systems, a lot of the same techniques. The problem they're trying to solve is slightly different. They're chaining together the means to process a sequence of operations. 
A little bit like, though the cloud operators are moving towards microservices that get chained together, so there are a lot of similarities here and the problems they face are very similar, but think about the hell that this potentially creates for them. It means that we're giving them so much rope to hang themselves because everything is now got to be put together in a way that's coming from different sources, written and authored by different people with different intent, or from different places across the Internet, and so, being able to see and observe exactly how this is working is even more critical than-- >> So I love that rope to hang yourself analogy because a lot of people will end up breaking stuff as Mark Zuckerberg's famous quote is, "Move fast, break stuff," and then by the way, when they 100 million users and moved, slogan went for, "Move fast, be reliable," so he got on the five nines bandwagon pretty quick, but it's more than just the instrumentation. The key that you're talking about here is that they have to run those networks in really high reliability environments. >> Nick: Correct. >> And so that begs the challenge of, okay, it's not just easy as throwing a docker container at something. I mean that's what people are doing now, like hey, I'm going to just use microservices, that's the answer. They still got stuff under the hood, but underneath microservices. You have orchestration challenges and this kind of looks and feels like the old configuration management problems but moved up the stack, so is that a concern in your market as well? >> So I think that's a very, very good point that you make because the carriers, as you say, tend to be more dependent, almost, on absolute reliability, and very importantly, performance, but in other words, they need to know that this is going to be 100 gigs because that's what they've signed up the SLA with their customer for. 
(John chuckles) It's not going to be almost 100 gigs 'cause then they're going to end up paying a lot of penalties. >> Yeah, they can't afford breakage. They're OpsDev, not DevOps. Which comes first in their world? >> Yes, so the critical point here is just that this is where the demo that we're doing which shows the ability to capture all this information at line rate, at very high speeds in the switches. (mumbles) >> So let's about this demo you're doing, this showcase that you guys are providing and demonstrating to the marketplace, what's the pitch, I mean what is it, what's the essence of the insight of this demo, what's it proving? >> So I think that the, it's good to think about a scenario in which you would need this, and then this leads into what the demo would be. Very common in an environment like the VNF kind of environment, where something goes wrong, they're trying to figure out very quickly, who's to blame, which part of the infrastructure was the problem? Could it be congestion, could it be a misconfiguration? (John laughs) >> Niel: Who's flow-- >> Everyone pointing finger at the other guy. >> Nick: The typical way-- >> Two days later, what happened, really? >> Typical way that they do this, is they'll bring the people that are responsible for the compute, the networking, and the storage quickly into one room, and say, "Go figure it out." The people that are doing the compute, they'll be modifying and changing and customizing, running experiments, isolating the problem. So are the people that are doing storage. They can program their environment. In the past, the networking people had ping and traceroute. That's the same tools that they had 20 years ago. 
(John chuckles) What we're doing is changing that by introducing the means where they can program and configure, run different experiments, run different probes, so that they can look and see the things that they need to see, and in the demo in particular, you'll be able to see the packets coming in through a switch, through a NIC, through a couple of VMs, back out through a switch, and then you can look at that packet afterwards, and you can ask questions of the packet itself, something you've never been able to-- >> It's the ultimate debugger. Basically, it's the ultimate debugger. >> Nick: That's right. Go to the packet, say-- >> Niel: Programmable debugger. >> "Which path did you take? "How long did you wait at each NIC, "at each VM, at each switch port as you went through? "What are the rules that you followed "that led you to be here, and if you encountered "some congestion, whose fault was it? "Who did you share that queue with?" so we can go back and apportion the blame-- >> So you get a multiple dimension of path information coming in, not just the standard stovepiped tools-- >> Nick: That's right. >> And then, everyone compares logs and then there's all these holes in it, people don't know what the hell happened. >> And through the programmability, you can isolate the piece of the information-- >> So the experimentation agile is where I think, is that what you're getting at? You can say, you can really get down and dirty into a duplication environment and also run these really fast experiments versus kind of in theory or in-- >> Exactly, which is what, as Nick said, is exactly what people on the server side and on the storage side have been able to do in the past. >> Okay so for people watching that are kind of getting into this and people who aren't, just give me in order maybe through of the impact and the consequences of not taking this approach, vis-a-vis the available, today's available techniques. 
>> If you wanted to try and figure out who it was that you were sharing a queue with inside an interface or inside a switch, you have no way to do that today, right? No means to do that, and so if you wanted to be able to say it's that aggressive flow over there, that malfunction in service over there, you've got no means to do it. As a consequence, the networking people always get the blame because they can't show that it wasn't them. But if you can say, I can see, in this queue, there were four flows going through or 4,000 flows, and one of them was really badly behaved, and it was that one over there and I can tell you exactly why its packets were ending up here, then you can immediately go in and shut that one down. They have no way that they go and randomly shut-- >> Can I get this for my family, I need this for my household. I mean, I'm going to use this for my kids. I mean I know exactly the bad behavior, I need to prove it. No, but this is what the point is, is this is fast. I mean you're talking speed, too, as another aspect-- >> Niel: It's all about the-- >> What's the speed lag on approach versus taking the old, current approach versus this joint approach you guys are taking? What's the, give me an estimate on just ballpark numbers-- >> Well there's two aspects to the speed. One is the speed at which it's operating, so this is going to be in the demo, it's running at 40 gigabits per seconds, but this can easily run, for example, in the Barefoot switch, it'll run at 6 terabits per second. The interesting thing here is that in this entire environment, this measurement capability does not generate a single extra packet. All of it is self-contained in the packets that are already flowing. >> So there's no latency issues on running this in production. >> If you wanted then change the behavior, you needed to go and modify what was happening in the NIC, modify what was happening in the switch, you can do that in minutes. 
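[Editor's note] Nick's four-flows-in-a-queue example reduces to simple attribution once per-packet queue records exist: tally how much of the shared queue each flow occupied and name the worst offender. A minimal, purely illustrative sketch — the flow IDs and sample counts are invented:

```python
# Congestion blame: given the flow IDs observed sharing a queue with our
# traffic, rank flows by how much of the queue they occupied, so the
# badly behaved one can be identified and shut down instead of guessed at.

from collections import Counter

# Hypothetical queue-occupancy samples recorded while our packets waited.
queue_samples = ["flow-7"] * 80 + ["flow-3"] * 12 + ["flow-9"] * 8

occupancy = Counter(queue_samples)
culprit, count = occupancy.most_common(1)[0]
print(culprit, round(count / len(queue_samples), 2))  # -> flow-7 0.8
```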
So that you can say-- >> Now the time it takes for a user now to do this, let's go to that time series. What does that look like? So current method is get everyone in a room, do these things, are we talking, you know. >> I think that today, it's just simply not possible. >> Not possible. >> So it's, yes, new capability. >> I think is the key issue. >> So this is a new capability. >> This is a new capability and exactly as Nick said, it's getting the network to the same level of ability that you always had inside the-- >> So I got to ask you guys, as founders of your companies because this is one of those things that's a great success story, entrepreneurs, you got, it's not just a better mousetrap, it's revolutionary in the sense that no one's ever had the capability before, so when you go to events like Mobile World Congress, you're out in the field, are you shaking people like, "You need me! "I need to cut the line and tell you what's going on." I mean, you must have a sense of urgency that, is it resonating with the folks you're talking to? I mean, what are some of the conversations you're having with folks? They must be pretty excited. Can you share any anecdotal stories? >> Well, yup, I mean we're finding, across the industry, not only in the service providers, the data center companies, Wall Street, the OEM box vendors, everybody is saying, "I need," and have been saying for a long time, "I need the ability to probe into the behavior "of individual packets, and I need whoever is owning "and operating the network to be able to customize "and change that." They've never been able to do that. The name of the technique that we use is called In-band Network Telemetry or INT, and everybody is asking for it now. Actually, whether it's with the two of us, or whether they're asking for it more generally, this is, this is-- >> Game changer. >> You'll see this everywhere. >> John: It's a game changer, right? >> That's right. >> Great, all right, awesome. 
Well, final question is, is that, what's the business benefits for them because I can imagine you get this nailed down with the proper, the ability to test new apps because obviously, we're in a Wild West environment, tsunami of apps coming, there's always going to be some tripwires in new apps, certainly with microservices and APIs. >> I think the general issues that we're addressing here is absolutely crucial to the successful rollout of NFV infrastructures. In other words, the ability to rapidly change, monitor, and adapt is critical. It goes wider than just this particular demo, but I think-- >> It's all apps on the service provider. >> The ability to handle all the VNFs-- >> Well, in the old days, it was simply network spikes, tons of traffic, I mean, now you have, apps could throw off anomalies anywhere, right? You'd have no idea what the downstream triggers could be. >> And that's the whole notion of the programmable network, which is critical. >> Well guys, any information where people can get some more information on this awesome opportunity? You guys' sites, want to share quick web addresses and places people get whitepapers or information? >> For the general P4 movement, there's P4.org. P, the number four, .org. Nice and easy. They'll find lots of information about the programmability that's possible by programming the, the forwarding being what both of us are doing. In-band Network Telemetry, you'll find descriptions there, P4 programs, and whitepapers describing that, and of course, on the two company websites, Netronome and Barefoot. >> Right. Nick and Niel, thanks for spending some time sharing the insights and congratulations. We'll keep an eye for it, and we'll be talking to you soon. >> Thank you. >> Thank you very much. >> This is theCUBE here in Palo Alto. I'm John Furrier, thanks for watching. (lively techno music)
SUMMARY :
and the co-founder Barefoot Networks. that go into the network servers, that they need to support. So, fundamentally, you need both sides of the line, and in the switches, and when they do that, talk about the impact then take us through like an example. I mean who wins from this and what's the real impact? and broken apart, and that the owner It's interesting, it's almost like the aspirin, right? that identifies and isolates the data, is the cloud technologies rather than just the cloud. And same for you guys, you guys have this, 'cause it's of the things that you brought up in the definition and the creation of this language, in the past, has been by chip designers. Is that what you (laughs). that operates in the interfaces and in the switches. so all the heavy lifting stuff in the old network days Exactly, and the key point is the programmability what your customer's going to try to look at. (chuckles) Guessing's not good in the networking area. in the showcase, you guys are going to show and the problems they face are very similar, is that they have to run those networks And so that begs the challenge of, okay, because the carriers, as you say, Which comes first in their world? in the switches. Very common in an environment like the VNF and see the things that they need to see, Basically, it's the ultimate debugger. Go to the packet, say-- and then there's all these holes in it, and on the storage side have been able to do in the past. of the impact and the consequences always get the blame because they can't show I mean I know exactly the bad behavior, I need to prove it. 
One is the speed at which it's operating, So there's no latency issues on running this in the NIC, modify what was happening in the switch, Now the time it takes for a user now to do this, that no one's ever had the capability before, "I need the ability to probe into the behavior because I can imagine you get this nailed down is absolutely crucial to the successful rollout Well, in the old days, it was simply network spikes, And that's the whole notion of the programmable network, and of course, on the two company websites, sharing the insights and congratulations. This is theCUBE here in Palo Alto.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nick McKeown | PERSON | 0.99+ |
Niel Viljoen | PERSON | 0.99+ |
Niel | PERSON | 0.99+ |
Nick | PERSON | 0.99+ |
John | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
100 gigs | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Barefoot Networks | ORGANIZATION | 0.99+ |
Netronome | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Mark Zuckerberg | PERSON | 0.99+ |
Barefoot | ORGANIZATION | 0.99+ |
two aspects | QUANTITY | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
both | QUANTITY | 0.99+ |
#MWC17 | EVENT | 0.99+ |
two company | QUANTITY | 0.98+ |
each VM | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
100 million users | QUANTITY | 0.98+ |
each switch | QUANTITY | 0.98+ |
Two days later | DATE | 0.98+ |
20 years ago | DATE | 0.98+ |
four | QUANTITY | 0.97+ |
one room | QUANTITY | 0.96+ |
first thing | QUANTITY | 0.96+ |
both sides | QUANTITY | 0.96+ |
each | QUANTITY | 0.96+ |
each NIC | QUANTITY | 0.96+ |
One guest | QUANTITY | 0.95+ |
.org. | OTHER | 0.95+ |
first | QUANTITY | 0.94+ |
6 terabits per second | QUANTITY | 0.94+ |
single extra packet | QUANTITY | 0.91+ |
4,000 flows | QUANTITY | 0.88+ |
P4 | TITLE | 0.88+ |
40 gigabits per seconds | QUANTITY | 0.85+ |
five nines bandwagon | QUANTITY | 0.84+ |
five nines | QUANTITY | 0.84+ |
theCUBE | ORGANIZATION | 0.76+ |
almost 100 gigs | QUANTITY | 0.76+ |
DevOps | TITLE | 0.75+ |
#theCUBE | ORGANIZATION | 0.69+ |
Verilog | TITLE | 0.67+ |
NetFlow | ORGANIZATION | 0.66+ |
OpsDev | ORGANIZATION | 0.64+ |
VNFs | TITLE | 0.62+ |
P4 | OTHER | 0.61+ |
agile | TITLE | 0.59+ |
P4 | ORGANIZATION | 0.58+ |
Wall Street | ORGANIZATION | 0.56+ |
P4.org | TITLE | 0.5+ |