Dom Wilde and Glenn Sullivan, SnapRoute | CUBEConversation, July 2019
(upbeat jazz music) >> Narrator: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Everyone, welcome to this Cube Conversation here in Palo Alto, California. I'm John Furrier, host of the Cube, here in the Cube Studios. We have Dom Wilde, the CEO of SnapRoute, and Glenn Sullivan, co-founder of SnapRoute, hot startup. You guys are out there. Great to see you again, thanks for coming on. >> Good to see you. >> Appreciate it. >> Thanks. >> You're famous, what you got done at Apple, we talked about it last time. You guys were in buildup mode, bringing your product to market. What is the update? You guys are now out there with traction. Dom, give us the update. What's going on with the company? Quick update. >> Yeah, so if you remember, we've built the sort of new generation of networking, targeted at the next generation of cloud around distributed compute networking. We have built a Cloud Native microservices architecture from the ground up to reinvent networking. We now have the product out. We released the product back at the end of February of this year, 2019. So we're out with our initial POCs, we've got a couple of initial deals already done, and a couple customers of record, and we're deployed, up and running, with a lot of interest coming in. I think it's kind of one of the topics we want to talk about here, is where is the interest coming from, and where is this sort of new build out of networking, new build out of cloud happening. >> Yeah, I want to get the detail on that traction, but real quick, what is the main motivator for some of these interest points? Obviously you got traction. What are the main traction points? >> So a couple things. Number one, people need to be able to deploy apps faster. The network has always traditionally gotten in the way. It's been an inhibitor to the speed of business. 
So, number one, we enable people to deploy applications much faster, because we're sort of integrating networking with the rest of the infrastructure operational model. We're also solving some of the problems around, or in fact, all of the problems around how do you keep your network compliant and security patched, and make it easier for operations teams to do those things and get security updates done really, really quickly. So there's a whole bunch of operational problems that we're solving, and then we're also looking at some of the issues around how do we have both a technology revolution in networking but also an economic revolution. Networking is just too expensive and always has been. So we've got quite a revolutionary model there in terms of bringing the cost of networking down significantly. >> Glenn, as the co-founder, as the baby starts to get out there and grow up, what's your perspective? Are you happy with things right now, or how are things going on your end? >> Absolutely. The thing that I'm proudest of is the innovation that the team has been able to drive based on having folks that are real experts in Kubernetes, DevOps, and networking, all sitting in one room solving this problem of how you manage a distributed cloud using tool sets that are Cloud Native. That's really what I'm proudest of, is the technology that we've been able to build and demonstrate to folks. Because nobody else can really do what we're doing with this mix of DevOps and Kubernetes, and Cloud Native engineering, alongside general network protocol and systems people. >> You know, it's always fun to interview the founders, and being an entrepreneur myself, sometimes where you get is not always where you thought you'd end up. But you guys always had a good line of sight on this Cloud Native shift in the modern infrastructure. >> Glenn: Right. >> You did work at Apple, we talked about it in our last conversation. 
Really, with Apple obviously leading the way, they had pressure from the marketplace, being a trillion dollar valuation company. But that was an early indicator. You guys had clear line of sight on this new modern architecture, kind of the cloud 2.0 we were talking about before we came on camera. This is now developing, right? So you guys are now in the market, you're riding that wave. It's a good wave to be on, because certainly app developers are talking about microservices, or you're talking about Kubernetes, talking about service meshes, stateful data. All these things are now part of the conversation, but it's not siloed organizations doing it. So I want to dig into this topic of what is cloud 2.0. How do you guys define this cloud 2.0, and what is cloud 1.0? And then let's talk about cloud 2.0. >> Yeah, so cloud 1.0, huge success. The growth of the hyperscale vendors. You've got the success of Amazon, or Microsoft, Azure, and all of these guys. And that was all about the hyper-centralization of data, bringing all the disparate data centers that enterprises used to run, and all that infrastructure, into relatively few locations. A few geographic locations, and hyper-centralizing everything to support SaaS applications. Massively successful, because really what cloud 1.0 did was it made infrastructure invisible. You could be an application developer and you didn't have to manage or understand infrastructure, you could just go and deploy your applications. So, the rise of SaaS with cloud 1.0. Cloud 2.0 is actually an evolution in our mind. It's not an alternative, it's actually an evolution that complements what those vendors did with cloud 1.0. But it's actually... It's actually distributing data. So we pulled everything to the center, and now what we're seeing is that the applications themselves are developing such that we have new use cases. Things like enhanced reality and retail. We have massive sensor networks that are generating enormous amounts of data. 
We have self-driving cars that, you know, need rapid response for safety things. And so what happens is you have to put compute closer to the devices that are generating that data. So you have to geographically now disperse, and have edge compute and obviously the network that goes with that to support that. And you have to push that out into thousands of locations geographically. And so cloud 2.0 is this move of, we've got this whole new class of cloud service providers, and some regional telcos and things who are reinventing themselves, and saying, "Hey we can actually provide "the colos, we can provide the smaller locations "to host these edge compute capabilities." But what that creates is a huge networking problem. Distributed networking in massively distributed cases is a really big problem. What it does is it amplifies all of the problems that we coped with in networking for many years. I mean, Glenn, you can talk about this right? When you were at Apple one of the first realtime apps was Siri. >> Yeah, and I know it. Let's get back to the huge networking problem, but I want to stay on the thread of cloud 2.0. Glenn, you were talking about that before we came on camera. He referenced that you worked for a time at Apple. Kind of a peek into the future around what cloud 2.0 was. Can you elaborate on this notion of realtime, latency, as an extension to the success of cloud 1.0? >> Right, so we saw this when we were deploying Siri. Siri was originally just a centralized application, just like every other centralized application. You know, iTunes. You buy a song, it doesn't really have to have that much data about you when you're buying that song. You go and you download it via the CDN and it gets it to you very quickly, and you're happy and everything's great. But Siri kind of changed that, because now it has to know my voice, it has to know what questions I ask, it has to know things about me that are very personal. And it's also very latency sensitive, right? 
The quicker that it gets me a response, the more likely I am to use it; the more data it gets about me, the better the answers get. Everything about it drives that the data has to be close to the edge. So that means the network has to be a lot bigger than it was before. >> And this changes the architectural view. So just to summarize what you said, iTunes needs to know a lot about the songs that it needs to deliver. >> Glenn: Right. >> The network delivers it, okay, easy. >> Glenn: Right. >> If you're clicking. But with the voice piece, that kind of changed the paradigm a little bit, because it had to be optimized and tuned for realtime, low latency, accuracy. Different problem set than, say, the iTunes. >> Glenn: Exactly. >> So they're networked together. >> Language specific, right? So, where is the user, what language are they speaking, how much data do we have to have for that language? It's all very, very specific to the user. >> So cloud 2.0, if I can piece this together: cloud 1.0, we get it, Amazon showcased there. It's kind of data, it's a data problem too. It's like AI, you've seen the growth of AI validate that. It's about data personalization, Siri is a great example. Edge, where you have data (chuckles) that needs to integrate into another application. So if cloud 1.0 is about making the infrastructure invisible, what is cloud 2.0 about? What's the main value proposition? >> To me it's about extracting the value from the data and personalizing it. It's about being able to provide more realtime services and applications while maintaining that infrastructure invisibility paradigm. That is still the big value of any cloud, any public cloud offering, is that I don't want to own the infrastructure, I don't want to know about it, I want to be able to use it and deploy applications. But it's the types of applications now, and the value that the applications are delivering, that has changed. 
It's not just a standard SaaS application like Workday, for instance, that is still a very static application-- >> John: It's a monolithic application, yeah. >> These are realtime apps, they're operating realtime. If you take an autonomous car, right? If I'm about to crash my car and the sensors are all going off, and it needs to brake and it needs to send information back and get a response, I want all that to happen in realtime, I don't want to sort of like have-- >> In any abstraction layer, any layer of innovation, 1.0, 2.0, as you're implying advancement. It's still an application developer opportunity, Glenn, right? >> Absolutely. >> Because at the end of the day the user expectations changed because of the experience that they're getting-- >> Yeah, and it only gets worse, right? Because the more network that I have, the more distributed the network is, the harder it is to manage it. So if you take that network OS, the really, really boring, not very exciting thing, and treat it the same way you always have, and try to take what you learned in the data center and apply it to the edge, you lose the ability to really take advantage of all the things that we've learned from the Cloud Native era, from public cloud 1.0, right? I mean, just look at containers for instance, containers have taken over. But you still see this situation where most of the applications that are infrastructure based aren't actually containerized themselves. So how can they build upon what we've learned from public cloud 1.0 and take it to that next level, unless you start replacing the parts of the infrastructure with things that are containerized. >> This is just a side note, just going through my head right now. It's going to be a huge conflict between who leads the innovation in the future. >> Glenn: Absolutely. >> On premises or cloud. 
And that's going to be an interesting dynamic, because you could argue that containerization in networking is a trend that's intended to be Cloud Native, but now you've got it on premises. It's going to be a dynamic we're going to have to watch. But you mentioned, Dom, about this huge networking problem that evolves out of cloud 2.0. >> Dom: Absolutely. >> What is that networking problem? And what specifically is a directionally correct solution for that problem? >> So I think the biggest problem is an operational one. In the cloud 1.0 era, and even prior to that when we were in hosted enterprise data centers, we've always built data centers, and the applications running within them, with the assumption that there are physically expert resources there. That if something goes wrong, they can hands-on do something about it. With cloud 2.0, because it's so distributed, you can't have people everywhere. And one of the challenges that has always existed with networking technology and architecture is it is a very static thing. We set it, we forget it, we walk away, and try not to touch it again, because it's pretty brittle. 'Cause we know that if we do touch it, it probably breaks and something goes wrong. And we see today a ton of outages; we were talking about a survey the other day that says the second biggest cause of outages in the cloud age is still the network. It's an operational problem whereby I want to be able to go and now touch these thousands of devices for... Usually I'm fixing a bug or I want to add a feature, but more and more it's about security. It's more about security compliance, and I want to make sure that all my security updates are done. With a traditional network operating system, we call it The Monolith, all of the features are in a big blob. You can turn them off but you can't remove them. So it's a big blob, and all of those features are interdependent. 
When you have to do a security patch in a traditional model, what happens is that you actually are going to replace the blob. And so you're going to remove that and put a new blob in place. It's a rip and replace. >> And that's a hard enough operational problem all on its own, because when you do that you sort of down things and up things. So consequently-- >> And anyone who's done any location shifting on hardware knows it's a multi-day/week operation. >> It is, but, you know, what people do is they overbuild the network, so they have two of everything. So when they down one, the other one stays up. When you're in thousands of geographic locations, that's really expensive, to have two of everything. >> So the problem statement is essentially how do you have a functional, robust network that can handle these kinds of apps and IoT. Is that-- >> Yeah, it is, absolutely, but as I said it's important to understand that you have this Monolith that is getting in the way of this robust network. What we've done is we've said, 'We'll apply Cloud Native technology and thinking.' Containerize the actual network operating system itself, not just the protocols, but the actual infrastructure services of the operating system. So if you have to security patch something or you have to fix something, you can replace an individual container and you don't touch anything else. So you maintain a known state for your network device that is probably going to be way more reliable, and you don't have to interrupt any kind of service. So rather than downing and upping the thing, you're just replacing a container. >> You guys built a service on top of the networks to make it manageable, make it more functional, is that-- >> We actually didn't build it. This is the beautiful part. If we built it then I would just be another network vendor that says, "Hey trust my proprietary not-open solution. "I can do it better than everyone else." 
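To make the contrast concrete, here is a minimal sketch in Python of the two patch models being described. The service names and version numbers are invented for illustration; this is not SnapRoute's actual code or API.

```python
# Containerized model: each protocol/service runs in its own container
# with its own version, so a security patch touches exactly one of them.
nos_services = {
    "bgp": "1.4.2",
    "ospf": "2.0.1",
    "dhcp-relay": "1.1.0",
    "telemetry": "3.2.5",
}

def patch_service(services, name, new_version):
    """Replace a single container; every other service keeps running
    untouched, so the device stays in a known state."""
    patched = dict(services)  # everything else is preserved as-is
    patched[name] = new_version
    return patched

# Monolithic model: one version string covers the whole blob, so fixing
# one bug means re-qualifying and re-deploying the entire image.
def patch_monolith(blob_version, new_version):
    return new_version  # rip and replace

after = patch_service(nos_services, "dhcp-relay", "1.1.1")
changed = [name for name in after if after[name] != nos_services[name]]
print(changed)  # only the patched service differs
```

The point of the sketch is the blast radius: in the containerized model the diff between before and after is one service, while the monolithic patch invalidates the tested state of everything at once.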
Trusting a proprietary solution is what traditional vendors did with stuff like ISSU and things like that. We've actually just used Kubernetes to do that. So you already trust Kubernetes, it came out of Google, everybody's adding to it, it's the best community project ever for distributed systems. So you don't have to trust that we've built the solution, you just trust in Kubernetes. So what we've done is we made the network native to that, and then used that paradigm to do these updates and keep everything current. >> And the reason why you're getting traction is you're attractive to a network environment because you're not there to sell them more networking (laughs). >> Right. >> You're there to give them more network capability with Kubernetes. >> Yeah, well I mean-- Yeah, we're attractive to a business for two reasons. We're attractive to the business because we enable you to move your business faster. You can deploy applications faster, more reliably, you can keep them up and running. So from a business perspective, we've taken away the pain of the network interrupting the business. From an operations perspective, from an IT operations, network operations perspective, what we've done is we've made the network manageable. We've now, as you said, we've taken this paradigm and said what would've taken months of pretesting, and planning, and troubleshooting at two o'clock in the morning has now become a matter of seconds in order to replace a container. And it has eased the burden operationally. And now those operational teams can do worthwhile work that is more meaningful than just testing a bunch of vendor fixes. >> Yeah, even though cloud 1.0 had networking in there with compute and storage, I think cloud 1.0 was really about compute and storage. Cloud 2.0 is really about the network, and all the data that's going around, to help the app developers scale up their capability. >> Dom: Yeah, that's a great way to think about it. >> Let's talk about the use cases. 
I think the next track that I'd love to dig in with you guys on is, as you guys are pioneering this new modern approach, some of the use cases that you touch are probably also pretty modern. What specific use cases are you guys getting into, or your customers talking about? What are some of these cloud 2.0 use cases that you're seeing? >> Yeah, so one we already touched on, sort of horizontal and general, was the security one. I mean, security is everybody's business today. And it's a very, very difficult networking problem, you know, keeping things compliant. If you take for instance, recently Cisco announced that there were vulnerabilities in their mainstream Nexus products. And that's not a terrible thing, it's normal course of business. And they put out the patches and the fixes and said, "Hey, here it is." But now think about the burden on any IT team. That comes out of the blue, they hadn't planned for it. Now they have to take the time to take a step back, and what they have to do is say, well, I've got this new code. I don't know what else was fixed or changed in there. So I now have to retest everything and retest all of my use cases, and I have to spend considerable time to do that to understand what else has changed. And then I have to have a plan to go out and deploy this. That's a hard enough problem in a centralized data center. Doing that across hundreds, if not thousands, of geographically dispersed sites is a nightmare. But it's just, you know, the new world we live in, this is going to happen more and more and more. And so being able to change that operational model to say, actually, this is trivial. And actually what you should be doing is doing these updates every day to keep yourself compliant. >> Do the use cases, Glenn, have certain characteristics? I mean, we're talking about latency and bandwidth, that's a traditional networking kind of philosophy. Are there certain characteristics that these new use cases have? 
Is it latency and bandwidth, is there anything else? >> No, it's mostly about bringing properties like CI/CD to networking, right? So the biggest thing we're seeing now is, as people start to investigate disaggregated networking and new ways of doing things, they're not getting this free pass that they used to get for the network, because the network isn't just an appliance anymore. When you had something that was from one of the three vendors you'd say, "Okay, that thing runs some version of Linux on it. "I don't know what it is. "Maybe it runs FreeBSD in Juniper's case. "I don't understand what kernel it is, "I don't care, just keep that thing up to date." But now it's like, "Oh, I'm starting to "add more services to my network devices." Say in the remote sites I want to kickstart some servers with these network devices I install first; well, that means that I have to start treating this thing like it's another server in my environment for my provisioning. That means that everything on that box has to be compliant, just like it is in everything else. Let's not even get into personal credit card information, personal identifying information. Scrutiny of everything is becoming more and more heightened. >> It's a surface area device, I mean, it's part of the surface area. >> And if it's not inside a data center then it's even worse, because you can't guarantee the physical security of that device as much as you could if it was inside a regular data center. >> So this is a new dynamic that's going on with the advent of security, regulatory issues, and also obviously the perimeter being dismantled because of cloud. >> Glenn: Absolutely. >> Yeah, you also got specific use cases. There are multiple verticals and industries that are having these challenges. Retail is a good example, point-of-sale. 
Anywhere you have the sort of branch problem or mentality, where you're running sophisticated applications. And by the way, people think of point-of-sale as not terribly sophisticated. It's incredibly sophisticated these days. Incredibly sophisticated. And there are thousands of these devices, hundreds of stores, thousands of devices; similar with healthcare. You know, again, distributed hospitals, medical centers, doctor's offices, etcetera. All of them running private, mission critical data. I think one of the ones that we see coming is this kind of autonomous car thing, as we get IoT sensor networks, large amounts of data being aggregated from those. So there's lots of different use cases. We're getting a lot of interest. And quite frankly, the challenge for us as a startup is keeping focused on just a few things today, but the number of things we're being asked to look at is just enormous. >> Well, those are tailwinds for you guys in terms of momentum, you have this cloud 2.0 trend, which we talked about. But hybrid cloud and multi-cloud is essentially distributed cloud on the edge, if you think about it. >> Yeah, yeah. >> And that's what most companies are going to do, they're going to keep their own premises and they're going to treat it as either on their platform or an external remote location that's going to be everywhere, big surface area. So with that, what are some of the under the hood benefits of the OS? Can you go into more detail on that, because I find that to be much more interesting to, say, the network architect or someone who's saying, "Hey, you know what? "I got hybrid cloud right now. "I got Amazon, I know the future's coming on "to my front door step really fast. "I got to start architecting, I got to start hiring, "I got to start planning for distributed cloud "and distributed edge deployments." If not already doing it. So technical debt becomes a huge issue. I might try some things with my old gear or old stuff. 
They're in this mode, you know, a lot of people are in that mode. I'll take on a little technical debt to learn, but ultimately I've got to build out this capability. What do you guys do for that? >> So the critical thing for us is that you have to standardize on an open, non-proprietary orchestration layer, right? You can talk about containers and microservices all day long. We hear those terms all the time, but what people really need to make sure that they focus on is that the orchestrator that's managing those containers is open and non-proprietary. If you pull that from one of the current vendors, it's going to be something that is network centric, and it's going to be something that was developed by them for their use. They're basically saying, here's another silo, keep feeding into it. Sure, we give you an API, sure, we give you a way to programmatically configure the network, but you're still doing it specifically to me. One of the smartest decisions we made, besides just using Kubernetes as core infrastructure: we've also completely adopted their API structure. So if you already speak Kubernetes, if you understand how to configure network paradigms in Kubernetes, we just extend that. So now you can take somebody who, off the street, might be a Cloud Native Kubernetes expert and say, here's a little bit of networking, go deploy the network, right? You just have to take the barrier down of what you have to teach them, from this CLI and this API structure that's specific to this vendor, and then this CLI and this API structure. But the cool thing about what we're doing is we also don't leave the network engineers out in the cold. We give them a fully Cloud Native network CLI that is just like everything else they're used to, but it's doing all this Cloud Native Kubernetes microservices containers stuff underneath, to hide all that from them. 
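The Kubernetes paradigm being extended here is declarative: you state the desired configuration and a control loop reconciles the device toward it. A minimal Python sketch of that reconciliation idea, with a VLAN-shaped resource that is invented purely for illustration (not SnapRoute's actual resource model):

```python
# Desired state: what the operator declares, Kubernetes-style.
desired = {
    "vlan-10": {"subnet": "10.0.10.0/24"},
    "vlan-20": {"subnet": "10.0.20.0/24"},
}

# Actual state: what is currently configured on the device.
actual = {
    "vlan-10": {"subnet": "10.0.10.0/24"},
    "vlan-30": {"subnet": "10.0.30.0/24"},  # stale config to tear down
}

def reconcile(desired, actual):
    """Return the operations that drive actual state toward desired
    state: the same control loop Kubernetes controllers run for pods."""
    ops = []
    for name, spec in desired.items():
        if name not in actual:
            ops.append(("create", name))
        elif actual[name] != spec:
            ops.append(("update", name))
    for name in actual:
        if name not in desired:
            ops.append(("delete", name))
    return ops

print(sorted(reconcile(desired, actual)))
# creates vlan-20, deletes the stale vlan-30, leaves vlan-10 alone
```

The design point is that the operator never issues imperative device commands; the controller computes the minimal set of changes, which is what lets a Kubernetes-fluent engineer operate a network they have never hand-configured.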
So they don't have to learn all of that underneath, and that's powerful, because we recognize, because of our Ops experience, there's a lot of different people touching these boxes. Whether you put it in an ivory tower or not, you've got NOCs that have to log in and check 'em, you've got junior network admins, senior network engineers, architects. You've got Cloud Native folks, Kubernetes folks; everybody has to look at these boxes, so they all have to have a way into the switch, into the routers, that is native to what they understand. So it's very critical to present data that makes sense to the audience. >> And also give them comfort with what they're used to, like you said before. If they got whatever's running Linux on there, as long as it's operationally running, water's flowing through the pipes, your packets are moving through, they're happy. >> Glenn: Right. >> But they got to have this new capability to please the people who need to touch the boxes and work with the network, and it gives them some more capabilities. >> Right, it prevents you from building those silos, which is really critical in the Cloud Native world. And that's what public cloud 1.0 taught us, right? Stop building these silos, these infrastructure silos. You look at AWS right now. There's AWS certified engineers; they're not network experts, they're not storage experts, they're not compute experts, they're AWS experts. And you're going to see the same thing happen with Cloud Native. >> Cloud 3.0 is decimating the silos, basically, 'cause if this goes to that next level, that's why horizontally scalable networks are the way to go, right? That's kind of what you were talking about with the use case. >> Yeah, I think all revolutionary ideas are actually more transformational. Revolutions begin by taking something that is familiar and presenting it in a new way, and enabling somebody to do something different. 
So I think it's important, as we approach this, to not just come in and go, 'Oh, what you're doing is stupid, we have to replace it.' The answer is, what you're doing is obviously the right thing, but you've not been given the tools that enable you to take full advantage and achieve the full potential of the network as it relates to your business. >> And you guys know as well as we do that for the networking folks, it's a high bar, because you mentioned the security and the lockdown nature of networking. It's always been, you don't F with it; anyone who touches it, they need to be reviewed. So they're a hard customer to sell to. You got to align with their Ops mindset. >> I think the network operators have been, and Glenn and our other co-founder have waxed theoretical about this. (laughter) But network operators have been forced to live in a world of no. Anytime the business comes to them and says, "Hey, we need you to do X," the answer is no, because I know that if I touch my stuff it's going to break, or I'm limited in what I can do, or I can't achieve the timeframe that you're looking for. So the network has always been an inhibitor, but the heroes of the moment are actually the network operations team, because nobody knows that the network was an inhibitor. >> Well, this is an interesting agile conversation we've been having here in our Cube Studios, yesterday, amongst our own team, because we love agile content. Agile's different, agile is about getting to yes, because iteration in a sense is about learning, right? So you have to say no, but you have to say no with the idea of getting to yes. Because the whole microservices thing is about figuring out, through iteration and ultimately automation, what to tear down and what to keep. So I would see a trend where it's not the no Ops kind of guys, as they say, "No, no, no." It's no, don't mess with the current operational plumbing. >> Glenn: Right. 
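The iterate-and-roll-back discipline this conversation keeps circling can be sketched very simply: apply one small change, verify it, and automatically restore the known-good state on failure. The config shape and the health check below are hypothetical, purely illustrative:

```python
def apply_with_rollback(config, change, healthy):
    """Apply a single change; if the health check fails, return the
    previous known-good config instead of troubleshooting live."""
    candidate = dict(config)
    candidate.update(change)
    if healthy(candidate):
        return candidate, "applied"
    return config, "rolled back"

running = {"mtu": 1500, "bgp_enabled": True}

# A change that passes validation sticks...
running, status = apply_with_rollback(
    running, {"mtu": 9000}, healthy=lambda c: c["mtu"] <= 9216)
print(status)  # applied

# ...and one that fails validation is undone automatically.
running, status = apply_with_rollback(
    running, {"mtu": 65000}, healthy=lambda c: c["mtu"] <= 9216)
print(status, running["mtu"])  # rolled back 9000
```

Doing this once a day builds exactly the muscle described next: small batches with cheap rollback, instead of 50 changes stacked into a quarterly maintenance window.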
>> But we got to get to yes for the new capabilities. So there's a shift in the Cloud Native. Your thoughts and reaction to that, Glenn. >> Yeah, so it's basically like I set myself up so that I'm doing a whole forklift swap with everything in there, like a crated replacement. Networking has always been this way. I'm not saying no to you, I'm just saying not right now. I do my maintenance three times a year, on the third Sunday of the second month when the moon's in the right place, and I make sure that I've got 50 or 60 changes. I've got 20 engineers on call, we do everything in order. We've got a rollback plan if something breaks. This is the problem. Network engineers don't do enough changes to build a muscle like the agile developers have seen, or CI/CD developers have seen, where it's, I do a little bit of changes every day, and if something breaks, I roll it back. I do a little bit of changes every day, and if something breaks, I roll it back. That's what we enable, because you can do things without breaking the entire system, you can just replace a container, you can move on. In classic networking, you're stacking up so many changes and so many new things that everything has to be a greenfield deployment. How many times have you heard that? Like, "Oh, this thing would be perfect "for our Greenfield Data Center. "We're going to do everything different "in this Greenfield Data Center." And that doesn't work. >> You don't get a mulligan in networking, and you realize, as they say, look, this is a good point, great conversation. I think that is a very good follow up topic, because developing those muscles is an operational practice, as well as understanding what you're building. You got to know what the outcome looks like. This is where we're starting to get into more of these agile apps. And you guys are at the front end of it, and I think this is a sea change, cloud 2.0. >> Yeah, it is. >> Quick plug for the company. 
Take the last minute to explain what you guys are up to, hiring, funding. What are you guys looking for? Give a quick plug for the company. >> Yeah, I mean, we're doing great. Always hiring, everybody always is if you're a cutting edge startup. We're always looking for great new talent. Yeah, we're moving forward with our next round of funding plans. We're looking at expanding the growth of the company, our go to market. Doubling down on our engineering. We're just delivering now our Kubernetes fabric capabilities, so that's the next big functional release, that we've actually already delivered the beta of. So taking Kubernetes and actually using it as a distributed fabric. So a lot of exciting things happening technology wise. A lot of customer engagements happening. So yeah, it's great. >> Glenn, what are you excited about now? Obviously Kubernetes, we know you're excited about. >> Oh yeah. >> But what's getting you excited? >> So, the dual process that we have, where we're doing stuff in Kubernetes that nobody else is doing, because we have a version that runs on the switch. And it manages all the containers locally, and then it also talks to a big controller. It's fixing that SDN issue, right? Where you have this SDN controller that manages everything in the data plane, and it controls my devices, and it uses OpenFlow to do this. And it has a headless operation in case the controllers go away. Oh, and if I need another controller, here's another one, so now I've got two controllers. It gets really messy, you got to buy a lot of gear to manage it. Now we're saying, 'Okay, you've got 'Kubernetes running local. 'You don't want to have a Kubernetes cluster? Don't bother.' It just runs autonomously. 'You want to manage it as a fabric like Dom says? 'Now you can use the Kubernetes fabric 'that you've already built. 'There are Kubernetes masters that 'you've already built for the applications.' 
And now we can start to really embed some really neat operational stuff in there. Things that, as a network engineer, took me years of breaking stuff and then fixing it to learn, we can start putting that operational intelligence in the operating system itself, to make it react to problems in the network and solve things before waking people up at three a.m. >> This takes policy to a whole nother level. >> Absolutely. >> It's a whole nother intelligence layer. >> Yeah, if this is broken, do this: cut off the arm to save the rest of the animal. And don't wake people up to troubleshoot stuff; troubleshoot stuff during the day when everybody's there and happy and awake. >> Guys, congratulations. SnapRoute, hot startup. Networking is the real area for cloud 2.0. You've got realtime, you've got data, you've got to move packets from A to B, you've got to store them, you've got to move compute around, you need to (laughs) move stuff around the cloud to distributed networks. Thanks for coming in. >> Thanks. >> Thank you. >> Appreciate it. >> Thanks for having us. >> I'm John Furrier for Cube Conversation here in Palo Alto with SnapRoute, thanks for watching. (upbeat jazz music)
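The "cut off the arm to save the rest of the animal" remediation Glenn describes can be sketched as a simple reconciliation step: quarantine a misbehaving port automatically and leave the investigation for business hours instead of paging someone at three a.m. This is a hypothetical illustration, not SnapRoute's code; the port names, flap counts, and threshold are invented for the example.

```python
# Sketch of automated remediation: shut down a flapping port rather than
# waking an engineer. A real version would run inside the switch OS's
# control loop; all names and thresholds here are illustrative.

FLAP_THRESHOLD = 5  # link up/down transitions tolerated per poll interval

def reconcile(ports, flap_counts):
    """Return (still_up, quarantined) given observed flap counts per port."""
    quarantined = [p for p in ports if flap_counts.get(p, 0) > FLAP_THRESHOLD]
    still_up = [p for p in ports if p not in quarantined]
    return still_up, quarantined
```

The point of the pattern is that the system takes the safe, reversible action on its own, and humans review the quarantine list during the day when everybody's awake.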