
Francis Matus, Pensando | Future Proof Your Enterprise 2020


 

>> From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation.

>> Hi, I'm Stu Miniman, and welcome to a Cube Conversation. I'm coming to you from our Boston area studio. Happy to welcome to the program a first-time guest, Francis Matus. He is the Vice President of Engineering at Pensando. Francis, thanks so much for joining us.

>> Thank you. Good to be here.

>> All right. So, Francis, you and I actually overlapped at some of the companies we've worked with. If anybody is familiar with Pensando, you have worked with some of the MPLS team over the years through some of those spin-ins. But for our audience, give us a little bit about your background, and what brought you to help found and be part of the team that started Pensando.

>> Sure, yeah. So I started my career with Advanced Micro Devices in the mid-nineties. I got out of school really wanting to build microprocessors, and with AMD being in Austin, Texas, and me going to LSU for undergrad, it was a perfect sort of alignment. So I went to AMD in Austin, built the K5, worked on that team, and then worked with the K7 team. Then I came out to California to help with the next K-series part, and that brought me to California. Then we got into the dot-com era, and being at AMD fighting Intel, so to speak, seemed like a hard battle. With the dot-com era coming, I saw this perfect opportunity to jump onto the Internet, and that's how we got into building Internet and data communications equipment. I went to Nishan Systems, which we talked a little bit about earlier, and that got me into storage. From there I went to a company called Andiamo, which was building Fibre Channel SAN equipment. I built chips there, and I got to know the MPLS team there. I always say they hired me off the street, and from that point on, well, we've been together since June 2001, so 19 years. I've been building silicon and systems with them for almost 20 years now. So we've had quite a journey. Yeah, it's been fun.

>> Great stuff. Going back to Nishan and talking about iSCSI, the networking world is a bit of a dark art for most people; understanding the networking protocols and all the various pieces, the three- and four-letter acronyms, isn't something most people are familiar with. Networking in general gets summed up as "I work on Internet stuff, and we're the tubes that things go through." So I'm curious, when you describe Pensando, how do you explain it to people who maybe aren't deep into east-west, north-south, overlay and underlay protocols?

>> Yeah, absolutely. For me, Pensando was kind of the culmination of all the things I've done in my career: being able to build compute engines that are programmable, starting with microprocessors; doing storage and storage networking with Andiamo; then, with Nuova, building a computer and the virtualization layers around the Ethernet interfaces in the adapter, with what was really our first smart NIC, in the 2006-2007 timeframe; and then, with SDN at Insieme, all of these elements came together, these multiple different layers in the infrastructure stack, if you will. And so for me, with Pensando,
what was interesting was the explosion of scale in both space and time, with the advent of, let's say, 25 gig, 50 gig, 100 gig to the server, the notion of very dense computing in each rack, and the need for very high scale. After doing all of these technologies and seeing where silicon started to fall into place at 16 nanometer, it seemed that bringing this kind of technology to the edge, at very low power, with an end-to-end security architecture, an end-to-end policy engine architecture, and distributed services, as we're doing, all naturally fit into place. And the cloud was already proving this model. When I say the cloud, I mean the hyperscalers like Amazon and Microsoft; they were already building these platforms. So it dawned on me that I didn't think this was possible unless you built the entire platform, built the entire system. If you build any one piece, the market transition takes a lot longer. And I think this is true in technology; history tends to repeat itself, starting with mainframes, when IBM built the entire computer and HP built theirs. These kinds of things are important if you want to really push a market transition. And so Pensando became this opportunity to take all of the things I've done in my past life and bring them together in a way that gives a complete stack for the purposes of what I call the new computer, which is basically the data center. When my mom asks me what it is I'm doing, I say: imagine the computer you have right now, multiply it by thousands and thousands stacked in racks, and anyone can use it at any one time. We provide the infrastructure and the mechanisms to orchestrate and control that at very, very high speed. So I don't know if that was a long answer.

>> No, no, it's fascinating stuff. When I look at the industry, cloud, of course, is that mega wave that changed the way a lot of people look at this, the way we architect things. There was this belief for a number of years: I'm going to go from this complicated mess that I had in my own data centers, and cloud is going to be inexpensive and easy. I don't think anybody thinks about inexpensive and easy when they look at cloud computing these days, and then you add edge into these environments. So I guess what I'm asking is, in today's environment, we know IT is always additive, so I have various pieces that I need to put together. You talked about building platforms and how it can be a complete stack. Companies like Oracle for many years said, we can do everything from the silicon all the way up through your application. Amazon in many ways does the same thing; you can build everything on Amazon, but they built out their ecosystem. So how does Pensando fit into this multi-cloud, multi-dimensional, multi-vendor world?

>> Yeah, that's a good question. One of the things we wanted to do is to be able to bring a systematic management layer to heterogeneous computing. What I mean by that is, in any modern enterprise data center you're going to have multiple types of computing: you're going to have virtual machines, you're going to have bare metal, and, at least in the last three or four years, chances are you'll have some containers or are moving there.
And so what we wanted to do was bring an infrastructure, a management mechanism, where all of these heterogeneous types of computing could be managed the same way with respect to policy. What I mean by policy is this declarative or intent-based model: I have declared what I'd like to see, whether that be network policy or security of data in motion, and I can apply it in a distributed manner across these different types of heterogeneous elements. The cloud has the advantage that it's homogeneous for the most part; they own the entire infrastructure and can control everything on their own. Now, our systems will obviously manage homogeneous systems as well, and in many ways that's easier. But bringing together this notion of heterogeneity, these types of computing, with one management plane, one type of interface for the operator, specifically the networking services operator, was fundamental. The second thing is being able to bring the scale and speed to the edge. A top-of-rack switch, or something in the middle of the network, is obviously very dense in terms of its I/O capability, so the silicon area you spend building a high-speed switch is really spent for the most part on the I/O; typically 30 to 40% of the area will be I/O, and the rest will be very much hardwired control protocols. We know that as we go to SDN services, and we want, let's say, software-defined mechanisms in terms of what the policy looks like and what the protocols look like, the ability to change over the lifespan of the computer, which is 3 to 5 years, you want that to be programmable. That is very difficult to do at very dense scale in the core of the network. So it was an obvious move to bring that to the edge, where we could plug it into the server, effectively just like we did, really, in the UCS system, the Nuova system.
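To make the intent-based model described above concrete, here is a minimal sketch in Python of a declared policy being pushed uniformly to hosts running VMs, bare metal, or containers. The types, field names, and the distribute() function are hypothetical illustrations for this discussion, not Pensando's actual management API.

```python
# Minimal sketch (hypothetical names, not Pensando's actual API) of an
# intent-based policy: the operator declares *what* should be allowed,
# and a controller pushes the same declaration to every host, whether it
# runs VMs, bare metal, or containers. Enforcement happens at each edge.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class PolicyRule:
    source: str        # workload label or CIDR, e.g. "app=web" or "10.0.0.0/24"
    destination: str   # workload label or CIDR
    port: int          # L4 destination port
    action: str        # "allow" or "deny"

@dataclass
class Host:
    name: str
    workload_type: str                       # "vm", "bare-metal", or "container"
    rules: List[PolicyRule] = field(default_factory=list)

def distribute(intent: List[PolicyRule], hosts: List[Host]) -> None:
    """Give every edge device the full declarative policy, regardless of
    what kind of compute sits behind it; enforcement is local."""
    for host in hosts:
        host.rules = list(intent)

if __name__ == "__main__":
    intent = [
        PolicyRule("app=web", "app=db", 5432, "allow"),
        PolicyRule("0.0.0.0/0", "app=db", 5432, "deny"),
    ]
    fleet = [Host("esx-01", "vm"), Host("metal-07", "bare-metal"), Host("k8s-node-3", "container")]
    distribute(intent, fleet)
    for h in fleet:
        print(h.name, h.workload_type, "->", len(h.rules), "rules")
```

The design point the sketch tries to capture is that the operator states the desired end state once, and every edge device receives and enforces the same declaration locally, which is what makes one management plane workable across heterogeneous compute.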
>> Yeah, some really tough engineering challenges. For the longest time the networking world was very predictable: you go from one gig to 10 gig, and there was a little discussion about how we'd take the next step, whether 25, 50, 40, or 100 gig. But you talk about containerized architectures, you talk about distributed systems with edge; things change at a much more granular level and change much more frequently. So what are some of the design principles and challenges that make sure you're ready for what's happening today, while also knowing that technology changes are always coming and you need to be able to handle that next thing?

>> Yeah, that's right. I think the biggest challenges we have are around power, with respect to design power, and then the usefulness of each transistor. You have a sort of scale of flexibility: CPUs are the most flexible, obviously, but have probably the least performance; FPGAs are pretty useful in terms of flexibility, but not very dense in terms of logic capability; and then you have hardwired ASICs, which are extremely dense, very much purpose-built logic, but completely inflexible. So the design challenge put in front of us was: how do we find that sweet spot of extremely programmable, extremely flexible, but still with a cost profile that doesn't look like an FPGA and that gets us the benefits of the CPU? That's where this notion of domain-specific processing came in, which is: okay, if we're going to solve a few problems, we're going to solve them well. Those few problems are: we're going to bring PCIe services, networking services, storage services, and security services to the edge of the computer, so that we can offload, or let's say correctly partition, the computing problem in a data center. To do that, we knew a sea of CPU cores wasn't going to do the job; that's basically borrowing from one guy to pay the other, right? So we wanted to bring this notion of domain-specific processing, and that's where our design challenges came in: now that we build around this language called P4, what is the most optimal way to pack the most threads, or processing elements, into the silicon while managing the memory bandwidth? Packet processing, it has been said, is embarrassingly parallel, which is true; however, the memory bandwidth is insane. So how do we build a system that ensures memory is not the bottleneck? Obviously we're producing, or rather computing on, a lot of data. So those were some of our design challenges, all within a power envelope where this device could sit at the edge, inside a computer, within the typical power profile of a PCIe-attached card in a modern server. So that was a huge design challenge for us.
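As a rough illustration of the match-action style of processing that P4 describes, here is a toy Python sketch of a single match-action table acting as a trivial L4 allow stage. The class and field names are made up for illustration; a real P4 target compiles tables like this into pipeline stages in hardware rather than interpreting them in software as done here.

```python
# Toy match-action pipeline stage: tables match on header fields and invoke
# small actions, and the program (tables + actions), not hardwired logic,
# defines the behavior. Illustrative only; not Pensando's implementation.
from typing import Callable, Dict, Tuple

Packet = Dict[str, object]          # parsed header fields, e.g. {"dst_ip": ..., "dport": ...}
Action = Callable[[Packet], None]

class MatchActionTable:
    def __init__(self, key_fields: Tuple[str, ...], default: Action):
        self.key_fields = key_fields
        self.entries: Dict[Tuple, Action] = {}
        self.default = default

    def add_entry(self, key: Tuple, action: Action) -> None:
        self.entries[key] = action

    def apply(self, pkt: Packet) -> None:
        key = tuple(pkt.get(f) for f in self.key_fields)
        self.entries.get(key, self.default)(pkt)

def set_egress(port: int) -> Action:
    def act(pkt: Packet) -> None:
        pkt["egress_port"] = port
    return act

def drop(pkt: Packet) -> None:
    pkt["egress_port"] = None   # mark as dropped

if __name__ == "__main__":
    l4_allow = MatchActionTable(("dst_ip", "dport"), default=drop)
    l4_allow.add_entry(("10.0.0.5", 443), set_egress(7))   # allow HTTPS to 10.0.0.5
    pkt = {"dst_ip": "10.0.0.5", "dport": 443}
    l4_allow.apply(pkt)
    print(pkt)   # {'dst_ip': '10.0.0.5', 'dport': 443, 'egress_port': 7}
```

For a sense of why memory bandwidth dominates the design: at 100 Gbps with minimum-size 64-byte Ethernet frames (84 bytes on the wire including preamble and inter-frame gap), that is roughly 148 million packets per second, a budget on the order of 6.7 nanoseconds per packet for every table lookup and state update.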
>> Yeah, I'd love to hear about that; it was a multi-year journey to a solution. I think of the old world, which was very much hardware-centric: 18 to 24 months for design and all the tape-outs you need to do. It sounds like obviously there is still hardware, but it's more software-driven than it would have been 10 years ago. So give us some of the ups and downs in that journey; I'd love to hear any stories you can share.

>> Well, yeah, good question. There are always ups and downs in anything you do, especially in a startup. I think one of the biggest challenges we've faced is the exact hardware-software boundary: what is it that you want in hardware, and what is it that you want in software? One of the greatest assets at our company, Pensando, is the people. We have amazing software and hardware architects who work extremely well together, because most of us have been together for so long, and that always helps when you start to partition the problem. We spent the first year of Pensando, which was basically 2017, really thinking through this problem for all the problems we wanted to solve, the goals that were given to us. Take security: I want to be able to terminate TCP and initiate TLS connections; what's the right architecture for that? I want to be able to do storage offload and provide encryption of data at rest and data in motion, I want to be able to do compression, these kinds of things; what's the right hardware-software boundary for that? What do we hardwire in silicon versus make programmable, in silicon obviously, but still through a computing engine? So we spent the first year of the company really thinking through those different partitioning problems, and that was definitely a challenge. We spent a lot of time in conference rooms and at whiteboards figuring that out. Then in 2018 the challenge was taking this architecture, this technology substrate if you will, that we had built, and executing on it, making sure it was actually going to yield what we hoped, that we would be able to provide the services. When we talk about an L4 firewall at line rate that's completely programmable, did we achieve that? Can we do load balancing? Can we do all of it with this P4 processing engine? The innovations we brought to P4 satisfied all of the requirements we put before us. So 2018 was really about execution, and there you always have challenges; things are going to go wrong. It's not if, it's when, and then how do you deal with it? Again, I would say the biggest challenge in execution is containing the changes. It's so easy for things to change, especially when you're trying to build a software platform, because it's always easy to kick the can and say we'll deal with that later in software. But we knew that, given what we're trying to do, which is build a system that is highly performant, you can't do that; you have to deal with it when it comes in. So we spent a lot of time doing performance analysis, making sure all these applications we were building were going to yield the right performance. That was quite a challenge. Then 2019 was kind of the year of shaping the product, lots of product design: now that we have this technology and it does the pieces we wanted it to do, these pieces meaning services, what are all the different ways we can shape this product? That came after talking to customers for months and months; Soni is very much customer-driven, customer-centric, so we were fortunate enough to spend a lot of time with customers, and that brings its own challenges, right? Every customer has unique problems, so how do we form this product around a solution that solves quite a few problems and really brings value? Those were the challenges in 2019, which we overcame. Now we have several releases that we've already come out with; we've got the ASIC and the chips, and it's all there now. So now it's 2020. Unfortunately COVID is here, but this is the year of growth. This is the year we really bring it out into the world with our partners and our customers and show how this technology has been developed and how it will benefit customers over the next year to two years.

>> Francis, I really appreciate the insight there. That discussion of hardware versus software brings back memories for me, lots of heated debates. One of the lines we've used on the Cube many times is: software will eventually work, hardware will eventually break. So those trade-offs...

>> Somebody taught me something a long time ago. He said hardware is hard to change, and software is hard to stop changing.

>> That's a great one too. All right, so you walked us through the last three years of the journey. Give us a little bit of a look at the next three years and where you expect Pensando to be going.

>> Sure.
Where I see Pensando in the next three years, as we go through this market transition, is as both a market leader and a thought leader in the next wave of data center edge computing, whether it be in the service provider space, the enterprise space, or the cloud space, the hyperscaler space. As I was mentioning in the beginning when we were talking about the journey, market transitions of this magnitude really require understanding the entire stack. If you provide a piece and someone else provides a piece, you will eventually get there, but it's a matter of when, and by the time you get there, there's probably something new. So time in and of itself is an innovation in this area, especially when you're dealing with a market transition like this. We've been fortunate enough to be building the entire system; we go from the transistors to the REST APIs, and we have the entire stack. So where I see us in three years is not only being a market leader in this space, but also being a thought leader in terms of what domain-specific processing looks like at the edge. What are the tools? What are the techniques for really, as they say, democratizing the cloud and bringing this technology to everyone?

>> Excellent. Well, hey, Francis, it has been a pleasure to talk with you. Thank you so much. Congratulations on the journey so far, and I can't wait to see how things go going forward.

>> Yeah, we're excited, and I appreciate it. Thank you for your time too.

>> All right, check out thecube.net; we've got lots of back catalog with Pensando. I'm Stu Miniman, and thank you for watching theCUBE.

Published Date: Jun 17, 2020

SUMMARY:

Stu Miniman talks with Francis Matus, VP of Engineering at Pensando, about his path from AMD microprocessors through Nishan, Andiamo, Nuova, and Insieme to founding Pensando with the MPLS team. Matus explains how Pensando brings distributed services to the server edge: one intent-based policy and management plane across virtual machines, bare metal, and containers, built on a programmable, P4-based, domain-specific processor that delivers networking, storage, and security services at line rate within the power envelope of a PCIe card. He also walks through the company's journey, with architecture and hardware-software partitioning in 2017, execution in 2018, product shaping with customers in 2019, and growth in 2020, and where he sees Pensando over the next three years.
