Kevin Deierling, NVIDIA and Scott Tease, Lenovo | CUBE Conversation, September 2020
>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation.

>> Hi, I'm Stu Miniman, and welcome to a CUBE conversation. I'm coming to you from our Boston area studio, and we're going to be digging into some interesting news regarding networking, some important use cases these days in 2020, and of course AI is a big piece of it. So I'm happy to welcome to the program, first of all, one of our CUBE alumni, Kevin Deierling. He's the Senior Vice President of Marketing with Nvidia, part of the networking team there. And joining him is Scott Tease, someone we've known for a while but who's on the program for the first time, the General Manager of HPC and AI for the Lenovo Data Center Group. Scott and Kevin, thanks so much for joining us.

>> It's great to be here, Stu.

>> Yeah, thank you.

>> Alright, so Kevin, as I said, you've been on the program a number of times, first when it was just Mellanox, now of course the networking team, and there are some other acquisitions that have come in. If you could just set us up with the relationship between Nvidia and Lenovo. And there's some news today that we're here to talk about too, so let's start getting into that, and then Scott, you'll jump in after Kevin.

>> Yeah, so we've been a long-time partner with Lenovo on our high performance computing, and so that's the InfiniBand piece of our business. And more and more, we're seeing that AI workloads are very, very similar to HPC workloads, so that's been a great partnership that we've had for many, many years. And now we're expanding that, and we're launching an OEM relationship with Lenovo for our Ethernet switches. And again, with our Ethernet switches, we really take that heritage of low latency, high performance networking that we built over many years in HPC, and we bring that to Ethernet. And of course that can be with HPC, because frequently in an HPC supercomputing environment, or in an AI supercomputing environment, you'll also have an Ethernet network, either for management or sometimes for storage. And now we can offer that together with Lenovo. So it's a great partnership. We talked about it briefly last month, and now we're coming to market and we'll be able to offer this to the market.

>> Yeah, Kevin, we're super excited about it here at Lenovo as well. We've had a great relationship over the years with Mellanox, with Nvidia Mellanox, and this is just the next step. We've shown in HPC that the days of just taking an Ethernet card or an InfiniBand card, plugging it in the system, and having it work properly are gone. You really need a system that's engineered for whatever task the customer is going to use. And we've known that in HPC for a long time. As we move into workloads like artificial intelligence, networking is a critical aspect of getting these systems to communicate with one another and work properly together. We love, from an HPC perspective, to use InfiniBand, but most enterprise clients are using Ethernet. So where do we go? We go to a partner that we've trusted for a very long time, and we selected the Nvidia Mellanox Ethernet switch family. And we're really excited to be able to bring that end-to-end solution to our enterprise clients, just like we've been doing for HPC for a while.

>> Yeah, well Scott, maybe if you could, I'd love to hear a little bit more about that customer demand and those usages.
So if you think traditionally, of course, of supercomputing, as you both talked about, that move from InfiniBand to leveraging Ethernet is something that's been talked about for quite a while now in the industry. But for AI specifically, could you talk about what the networking requirements are, and how similar is it? Is it 95% the same architecture as what you see in HPC environments? And also, I guess the big question there is, how fast are customers adopting and rolling out those AI solutions, and what kind of scale are they getting them to today?

>> So yeah, there's a lot of good things there we can talk about. I'd say in HPC, the thing that we've learned is that you've got to have a fabric that's up to the task. When you're testing an HPC solution, you're not looking at a single node, you're looking at a combination of servers and storage and management; all these things have to come together, and they come together over an InfiniBand fabric. So we've got this nearly purpose-built fabric that's been fantastic for the HPC community for a long time. As we start to do some of that same type of workload, but in an enterprise environment, many of those customers are not used to InfiniBand; they're used to an Ethernet fabric, something that they've got all throughout their data center. And what we wanted to find a way to do was bring a lot of that rock-solid interoperability and pre-tested capability to our enterprise clients for these AI workloads. Think high-performance GPUs, lots of internode communication, worries about traffic and congestion, abnormalities in the network that you need to spot. Those things happen quite often when you're doing these enterprise AI solutions. You need a fabric that's able to keep up with that, and the Nvidia networking is definitely going to be able to do that for us.

>> Yeah, well Kevin, I heard Scott mention GPUs here, so this kind of highlights one of the reasons why we've seen Nvidia expand its networking capabilities. Could you talk a little bit about that expansion, the portfolio, and how these use cases really are going to highlight what Nvidia helps bring to the market?

>> Yeah, we like to really focus on accelerated computing applications, whether those are HPC applications or, now, applications becoming much more broadly adopted in the enterprise. And one of the things we've done is tight integration at a product level between GPUs and the networking components in our business, whether that's the adapters or the DPU, the data processing unit, which we've talked about before, and now even the switches here with our friends at Lenovo, really bringing that all together. But most important is the platform level, and by that I mean the software. The enterprise has all kinds of different verticals that they're going after, and we invest heavily in the software ecosystem that's built on top of the GPU and the networking. By integrating all of that together on a platform, we can really accelerate the time to market for enterprises that want to leverage these modern workloads, sort of cloud native workloads.

>> Yeah, please Scott, if you have some follow-up there.

>> Yeah, if you don't mind Stu, I'd just like to say, five years ago the roadmap that we followed was the processor roadmap. We all could tell you to the week when the next Xeon processor was going to come out, and that's what drove all of our roadmaps.
Since that time, what we've found is that the items that are making the radical, the revolutionary improvements in performance are attached to the processor, but they're not the processor itself. It's things like the GPU. It's things like the networking adapters especially. So trying to design a platform that's solely based on a CPU and then jam these other items on top of it no longer works. You have to design these systems in a holistic manner, where you're designing for the GPU, you're designing for the network. And that's the beauty of having a deep partnership like we share with Nvidia, on both the GPU side and on the networking side: we can do all that upfront engineering to make sure that the platform, the systems, the solution as a whole works exactly how the customer is going to expect it to.

>> Kevin, you mentioned that a big piece of this is software now. I'm curious, there's an interesting piece that your networking team has picked up relatively recently, Cumulus Linux. So help us understand how that fits into the Ethernet portfolio, and would it show up in these kinds of applications that we're talking about?

>> Yeah, that's a great question. So you're absolutely right, Cumulus is integral to what we're doing here with Lenovo. If you look at the heritage that Mellanox had, and Cumulus, it's all about open networking. And what we mean by that is we really decouple the hardware and the software, so we support multiple network operating systems on top of our hardware, whether that's, for example, SONiC, or our Onyx, or Dent, which is based on SwitchDev. Cumulus, who we just recently acquired, has been on that same axis of open networking, and so they really support multiple platforms. Now we've added a new platform with our friends at Lenovo, and really they've adopted Cumulus. So it is very much centered on the enterprise, and really a cloud-like experience in the enterprise, where it's Linux, but it's highly automated. Everything is operationalized and automated, and so as a result of that, you get sort of the experience of the cloud, but with the economics that you get in the enterprise. So it's kind of the best of both worlds in terms of network analytics and all of the ability to do the things that the cloud guys are doing, but fully automated, and for an enterprise environment.

>> Yeah, so Kevin, I mean, I just want to say a few things about this. We're really excited about the Cumulus acquisition here. When we started our negotiations with Mellanox, we were still planning to use Onyx. We love Onyx, it's been our NOS of choice. Our users love it, our architects love it. But we were trying to lean towards a more open, kind of futuristic NOS as we got started with this, and Cumulus is really perfect. I mean, it's a Linux, open source based system, and we love open source in HPC. The great thing about it is we're going to be able to take all the great learnings that we've had with Onyx over the years and now be able to consolidate those inside of Cumulus. We think it's the perfect way to start this relationship with Nvidia networking.

>> Well Scott, help us understand a little more, what does this expansion of the partnership mean, if you're talking about really the full solutions that Lenovo offers in the ThinkAgile brand, as well as the hybrid and cloud solutions?
Is this something that's just baked into the solution, is it a resell, what should customers and your channel partners understand about this?

>> Yeah, so any of the Lenovo solutions that require a switch to perform the functionality needed across the solution are going to show up with the networking from Nvidia inside of them. Reasons for that, a couple of reasons. One is, even something as simple as solution management for HPC, the switch is so integral to how we do all that, how we push all those functions down, how we deploy systems. So you've got to have a switch and a connectivity methodology that ensures that we know how to deploy these systems, and no matter what scale they are, from a few systems up to literally thousands of systems, we've got something that we know how to do. Then when we're selling these solutions, like an SAP solution for instance, the customer is not buying a server anymore, they're buying a solution, they're buying a functionality. And we want to be able to test that in our labs to ensure that that system, that rack, leaves our factory ready to do exactly what the customer is looking for. So any of the systems that are going to be coming from us pre-configured, pre-tested are all going to have Nvidia networking inside of them.

>> Yeah, and I think, you mentioned the hybrid cloud, I think that's really important. That's really where we cut our teeth, first in InfiniBand, but also with our Ethernet solutions. And so today we're really driving a bunch of the big hyperscalers, as well as the big clouds. And as you see things like SAP or Azure, it's really important now that you're seeing Azure Stack coming into a hybrid environment, that you have a known commodity there. So we're built into many of those different platforms, with our Spectrum ASIC as well as our adapters. And so now the ability, with Nvidia and Lenovo together, to bring that to enterprise customers is really important. I think it's a proven set of components that together form a solution. And that's the real key, as Scott said: delivering a solution, not just piece parts. We have a platform, with software and hardware all of it integrated.

>> Well, it's great to see you've had an existing partnership for a while. I want to give you both the opportunity: anything specific you've been hearing in the customer demand leading up to this? Is it people that might be transitioning from InfiniBand to Ethernet, or is it just general market adoption of new solutions that you have out there? (speakers talk over each other)

>> You go ahead and start.

>> Okay, so I think that there are different networks for different workloads, is what we've seen. And InfiniBand certainly is going to continue to be the best platform out there for HPC, and often for AI. But as Scott said, the enterprise frequently is not familiar with that, and for various reasons would like to leverage Ethernet. So I think we'll see two different cases: one where there's Ethernet alongside an InfiniBand network, and the other is for new enterprise workloads that are coming that are very AI-centric, modern workloads, sort of cloud native workloads. You have all of the infrastructure in place with our Spectrum ASICs and our ConnectX adapters, now integrated with GPUs, so that we'll be able to deliver solutions rather than just components. And that's the key.
>> Yeah, I think Stu, a great example of where you need that networking like we've been used to in HPC is when you start looking at deep learning training, scale-out training. A lot of companies have been stuck on a single workstation because they haven't been able to figure out how to spread that workload out and chop it up like we've been doing in HPC, because they've been running into networking issues. They can't run over an unoptimized network. With this new technology, we're hoping to be able to do a lot of the same things that HPC customers take for granted every day, around workload management, distribution of workload, chopping jobs up into smaller portions and feeding them out to a cluster. We're hoping that we're going to be able to do those exact same things for our enterprise clients. And it's going to look magical to them, but it's the same kind of thing we've been doing forever with Mellanox in the past, now Nvidia networking. We're just going to take that to the enterprise. I'm really excited about it.

>> Well, there's so much flexibility. We used to look at it taking a decade to roll out some new generations. Kevin, if you could just give us the latest speeds and feeds. If I look at Ethernet, did I see that this goes all the way up to 400 gig? I think I lose track a little bit of some of the pieces. I know the industry as a whole is driving it, but where are we with the general customer adoption of some of the speeds today?

>> Yeah, indeed, we're coming up on the 40th anniversary of the first specification of Ethernet, and we're about 40,000 times faster now, at 400 gigabits versus 10 megabits. So yeah, we're shipping today at the adapter level 100 gig and even 200 gig, and then at the switch level, 400 gig. And people sort of ask, "Do we really need all that performance?" The answer is absolutely. The amount of data that the GPU can crunch, and these AI workloads, these giant neural networks, need massive amounts of data. And then as you're scaling out, as Scott was talking about, much along the lines of InfiniBand, Ethernet needs that same level of performance, throughput, latency, and offloads, and we're able to deliver.

>> Yeah, so Kevin, thank you so much. Scott, I want to give you a final word here. Anything else you want your customers to understand regarding this partnership?

>> Yeah, just a quick one, Stu. So we've been really fortunate in working really closely with Mellanox over the years, and with Nvidia, and now the two together. We're just excited about what the future holds. We've done some really neat things in HPC, being one of the first to water-cool an InfiniBand card. We're one of the first companies to deploy a Dragonfly topology. We've done some unique things where we can share a single adapter across multiple users. We're looking forward to doing a lot of that same exact kind of innovation inside of our systems as we look to Ethernet. We often think that as speeds of Ethernet continue to go higher, we may see more and more people move from InfiniBand to Ethernet. I think that now, having both of these offerings inside of our lineup is going to make it really easy for customers to choose what's best for them over time. So I'm excited about the future.

>> Alright, well Kevin and Scott, thank you so much. Deep integration and customer choice, important stuff. Thank you so much for joining us.

>> Thank you, Stu.

>> Thanks, Stu.

>> Alright, I'm Stu Miniman, and thank you.
Thanks for watching theCUBE. (upbeat music)
Jesse Rothstein, ExtraHop | AWS re:Inforce 2019
>> Live from Boston, Massachusetts, it's theCUBE, covering AWS re:Inforce 2019, brought to you by Amazon Web Services and its ecosystem partners.

>> Welcome back, everyone, to live coverage of AWS re:Inforce, their first conference, theCUBE here in Boston, Massachusetts. I'm John Furrier with my co-host Dave Vellante. Our next guest is Jesse Rothstein, CTO and co-founder of ExtraHop, a Cube alumni. Great to see you again. VMworld, re:Invent, and now the new conference, re:Inforce. Not a Summit, a branded event around cloud security. This is in your wheelhouse.

>> Thank you for having me. Yeah, it's a spectacular event. Unbelievable turnout. I think there's 8,000 people here, maybe more. I know that's what they were expecting for an event that was conceived of, or at least announced, barely six months ago. The turnout's just...

>> We've had many conversations in the past on theCUBE and elsewhere about cloud security now having its own conference. It's not like a security conference like Black Hat or DEF CON, which is broader security. This is really focused on cloud security and the nuances involved for on-premises and cloud as it's evolving. There's certainly a lot more change coming, and this kind of spins right into your direction and what you were talking about this year at the front end.

>> It absolutely does. First, it speaks to market demand. Clearly there was demand for a cloud security focused conference, and that's why this exists. Every survey that I've seen lists security extremely high on the list of anxieties, or even causes for delay, for shifting workloads to the cloud. So Amazon takes security extremely seriously. And then my own personal view is that cloud security has been somewhat nascent and immature, and we're seeing, you know, hopefully some rapid maturing and a lot of motivation in that market. Certainly a lot of motivated people want to see it go faster, and they're stepping in and building that out.

>> So I've got to ask you...

>> Before you get off the show, let me actually say something if I may. I mean, it's been a long time coming. To your point, Jesse, there was a real need for it, and I think Amazon deserves a lot of credit for that. But at the same time, I think for Amazon there's a little criticism there. I mean, I think that the message at re:Invent has always been, we've got the best security, we've got the most features, so come on in. And the whole theme here of the shared responsibility model, which I'd love to get into, I think was somewhat misunderstood amid some of that high-level messaging. So I just wanted to put that out there as a topic that we might touch on.

>> Great. Let's talk about it.

>> Okay, so I do think it was misunderstood, the shared responsibility model. I think the messaging was, hey, the cloud is more secure than your existing data centers, come on in. And I think a lot of people naively entered the waters and then realized, oh, wait a minute, there's a lot that we still have to secure. We can't just set it and forget it. I mean, do you agree with that?

>> I think that's a controversial topic. I do agree with it. I think it continues to be misunderstood. The shared responsibility model, in some ways, is Amazon saying, we're going to secure the infrastructure and we're going to give you the tools, but organizations are still expected to follow best practices, certainly, and implement their own, hopefully best-in-class, security operations.

>> It's highly nuanced. You can say sharing data increases visibility into threats and also makes for quality alerts.
>> But I think it's a little bit biased, Dave, for Amazon to say shared responsibility, because they essentially want to share in the security posture. They're saying, we'll do this, you do that, it's inherently shared. So why wouldn't they say that?

>> Well, what are they going to say, we want to own everything? Well, I guess my point with this show is that I really like their focus on that. I think they shone a light on it, for the goodness of the industry and the community. But it is a bit...

>> Nuanced, and they've said some controversial, perhaps even contradictory statements. In the keynote yesterday, I was amused to hear that security is everyone's job, which is something I wholeheartedly believe in. But at the same time, you know, Dave, they said, Stephen Schmidt rather said, that he didn't believe in DevSecOps, and that seemed a little bit at odds, because...

>> Stephen Schmidt, the CISO of AWS.

>> But at the same time, there was a narrative around security as code. So yes, there were some contradictions in the messaging. Some of them were small ones, they were nuanced, but there remains some confusion, and that's why people look to the ecosystem to help out. And this goes back to my earlier point: I believe that cloud security is really quite nascent. When we look at the landscape of vendors, we see a number of vendors that really are kind of on-prem security solutions that they're trying to shoehorn into the cloud. We see a lot of essentially vulnerability scanning and static image scanning, but we don't see, in my opinion, that many really best-in-class security solutions. And I think until relatively recently, it was very hard to enable some of them. And that's why I'd love to talk about the VPC traffic mirroring announcement, because I think that was actually the most impactful announcement.

>> I want to get to it. So this is a new one. By the way, the other feedback we've had on theCUBE is that the sessions here have been so good, because you can dig deeper than what you can get at re:Invent, given its size. This is a good example. Explain that story, because this has been one of the most important stories, the traffic mirroring.

>> Well, unlike re:Invent, I think this show is more about education than it is about announcements. Now, Amazon announced a few new services going into GA, but these were services, for the most part, that we already knew were coming, like Control Tower and Security Hub. But the VPC traffic mirroring was really the announcement of this show. And gosh, it's been a long time in coming. One closely held belief I've had for a long time is that, in the fullness of time, there's really nothing of value that you can do on-prem that you wouldn't eventually be able to do in the cloud. And it's just been a head-scratcher for me why, for so many years, we've been unable to get any sort of view, mirror, or tap of the traffic for diagnostic or analytic purposes, something you could do on-prem so easily with a SPAN port or a network tap. In the cloud, we've been having to do kind of backflips and workarounds with software taps and things like that. But with this announcement, it's finally here. It's native.

>> Explain VPC traffic mirroring. For the folks watching who might not know it, what is it, and why is it important?

>> So VPC traffic mirroring is a network tap that is built into EC2 networking.
What it means is that you can configure a VPC traffic mirror for individual EC2 instances, actually down to the ENI level. You can configure filters, and you can send that to a target for analysis purposes. And this analysis could be for diagnostics, but I think much more important is for security. ExtraHop really began as a network analytics platform; we do network detection and response. So this ability to analyze the traffic in real time, to run predictive models against it, to detect in real time suspicious behaviors and potential threats, I think is absolutely game-changing for someone's security posture.

>> And you guys have been on the doorstep of this day in, day out, so this is like a great benefit to you guys as a company, I can see that. That's a great thing for you guys. What's the impact for the customers? What is the good news that comes out of the traffic mirroring for them? What's the impact on their environment?

>> Well, it's all about friction. First, I want to clarify that we've been running in AWS for over six years, six or seven years, so we've had that solution. But it's required some friction in the deployment process, because our customers had to install some sort of software tap, which was usually an agent, that was really gathering the packets in some sort of promiscuous mode and then sending them to us in a tunnel. Whereas now, this is built into the service, into the infrastructure. There's no performance penalty at all. You can configure it, you have IAM roles and policies to secure it, and all of the friction goes away. I think, for kind of the first time in cloud history, you can now get extremely high quality network security analytics with practically the flip of a switch.

>> So it's not another thing to manage. It's, like you say, inherent to the network. John and I have heard this week at this event from practitioners that they want to see less just incremental security products and more step function, and what they mean by that is, we want products that actually take action, or give us a script that we can implement, or actually fix the problem for us. Will this announcement, and others that you guys were involved in, take that next step toward more proactive security?

>> So, a couple of thoughts on that. First, the answer is yes, it can, and you're absolutely right. Remediation is extremely important, especially for attacks that are fast and destructive. When you think about attack patterns, there are attacks that are low and slow, there are attacks that are advanced and persistent, but the attacks that are fast and destructive move at a speed that is really beyond the ability of humans to respond. And for those sorts of attacks, I think you absolutely need some sort of automated remediation. The most common solutions are some form of blocking the traffic, quarantining the traffic, or maybe locking the accounts. Blocking, quarantining, and locking are my top three, and then various forms of auditing and forensics go along with those. Amazon actually has a very good toolbox for that already, and there are security orchestration products that can help. And for products like ExtraHop, the ability to feed a detection into an action is actually a trivial form of integration that we offer out of the box. So the answer is yes. But let me go back to the incrementalist approach as well that you mentioned.
I kind of think about the space in really, really broad strokes. Organizations, for the last 10 years or so, have really highly invested in prevention and protection. So a lot of this is your perimeter defense and endpoint protection, and the technologies have gotten better. Firewalls have turned into next-generation firewalls, and antivirus agents have turned into next-generation antivirus, or endpoint detection and response. But I strongly believe that network security has, in some ways, just kind of lagged behind, and it's really ripe for innovation. And that's what we've really spent the last decade building.

>> And that's why you're excited about the VPC traffic mirroring, because it allows for parallel analytics, and so more real time.

>> More real time. But the network has great properties that nothing else has. When you think about network security, the network itself is as close to ground truth as you can get. It's very hard to tamper with, and it's impossible to turn off. Those are great properties for cybersecurity. And you can't say that about something like logs, which are from time to time disabled and scrubbed. You certainly can't say that about endpoint agents, which are often worked around and in some cases even used as a vector for attack.

>> I'm going to ask you, okay, on that point, I get that. So the next question that would come to my mind is, okay, with the surface here, with IoT expanding and with cloud, you have a sprawling surface area. So the surface area is growing just by default, by natural evolution, connecting to the cloud, people backhauling their data into the cloud. All this is good stuff.

>> Absolutely. Call it the attack surface, and it is absolutely growing, perhaps at an exponential rate.

>> Talk about that dynamic, that sprawling attack surface, because that's just the environment now. And what's the best practice to kind of figure out security posture?

>> Great, great question. People talk a lot about the dissolution of the perimeter, and I think that's a bit of a debate. But regardless of your views on that, we can all believe that the perimeter is changing, that workloads are moving around, and that users are becoming more mobile. But I think an extremely important point is that just about every enterprise is hybrid. So we actually need protection for a hybrid attack surface. And that's an area where I believe ExtraHop offers a great solution, because we have a solution that runs on premises, in physical data centers or on campuses, which, no matter how much work you move to the cloud, you still have some sort of user on some sort of laptop or some sort of workstation in some sort of campus environment. We work in private cloud environments that are virtualized. And then, of course, we work in public cloud environments, and another announcement that we just made at this show, which I also think is game-changing, is our Reveal(x) Cloud offering. So this is a SaaS, a SaaS-based network detection and response solution. I talked about removing friction by mirroring the traffic, but in this case all you have to do is mirror the traffic, point it to our SaaS, and we'll do all of the management.

>> So is that in the streets for you, that is, in the marketplace?

>> We launched it yesterday.
>> And I think I think solutions like ours are absolutely best practices and required to secure this hybrid attacks in the >> marketplace. What was that experience like, you know, Amazon >> was actually great to work with. I don't mean to say that with disbelief. You work with you work with such a large company. You kind of have certain expectations, and they exceeded all of my expectations in terms of their responsiveness. They worked with us extremely closely to get into the marketplace. They made recommendations with partners who could help accelerate our efforts. But >> in addition to the >> marketplace, we actually worked with them closely on the VPC traffic marrying feature. There was something we began talking with them about a SW far back, as I think last December, even before reinvent, they were extremely responsive to our feedback. They move very, very quickly. They've actually just >> been a delight to work. There's a question about you talking about the nana mutability of logs, and they go off line sometimes. And yet the same time there's been tens of $1,000,000,000 of value creation from that industry. Are there things that our magic there or things that you can learn from the analytics of analyzing logs that you could bring over to sort of what you're positioning is a more modern and cloud like approach? Or is there some kind of barrier to entry doing that? Can you shed some light on Jesse? That's >> a great question, and this is where I'll say it's a genius of the end situation, not a tyranny of the or so I'm not telling people. Don't collect your logs or analyze them. Of course you should do that, you know that's the best practice. But chances are that that space, you know, the log analysis and the, you know, the SIM market has become so mature. Chances are you're already doing that. And I'm not gonna tell organizations that they shouldn't have some sort of point protection. Of course you should. But what I am saying is that the network itself is a very fundamental data source that has all of those properties that are really good for cyber security and the ability that analyze what's going on in your environment in real time. Understand which users air involved? Which resource is air accessed? And are these behavioral patterns of suspicious and do they represent potential threats? I think that's very powerful. I have a I have a whole threat research team that we've built that just runs attacks, simulations and they run attack tools so that we can take behavioral profiles and understand what these look like in the environment. We build predictive models around how we expect you re sources and users and end points to behave. And when they deviate from those models, that's how we know something suspicious is going on. So this is definitely a a genius of the end situation. John >> reminds me of your you like you're very fond of saying, Hey, what got you here is not likely to move you forward. And that's kind of the takeaway for practitioners is >> yeah. I mean, you gotta build on your success. I mean, having economies of scale is about not having Disick onyx of scale, meaning you always constantly reinventing your product, not building on the success. And then you're gonna have more success if you can't trajectory if you it's just basic competitive strategy product strategy. But the thing that's interesting here is is that as you get more successful and you continue to raise the bar, which is an Amazon term, they work with you better. 
>> So if you're raising the bar and you're doing your own network security, they're probably like, okay, now we get parallel traffic mirroring, so that...

>> That's true. But I think we've also heard that Amazon is, I think they call it, maniacally customer focused, right? And so I think that this traffic mirroring capability really is due to customer demand. In fact, if you were at the keynote when they made the announcement, that was the announcement where I feel like every phone in the whole auditorium went up. That's the announcement where I think there's a lot of excitement, and for security practitioners in particular, and SecOps teams, I think this really reduces some anxiety they have, because cloud workloads really tend to be quite opaque. You have logs, you have audit logs, but it's very difficult to know what's actually going on there and who is actually accessing that environment. And, even more important, where is my data going? This is where we can have all sorts of things, everything from a supply chain attack to a data exfiltration. It's extremely important to be able to have that visibility into these clouds.

>> We agree. We've been saying on theCUBE for many, many years now that the network is the last bottleneck, really, where that script gets flipped upside down, where workloads are dictating DevOps. Now the network piece is here, so I think this is going to create a lot of innovation. That's our belief. I'd love to follow up more in Palo Alto when we get back on this hybrid cloud piece. I think that's a huge opportunity, and I think it creates a blind spot for companies, because that's where the attackers will go; they'll know that hybrid is rolling out, and that'll be a vulnerability area.

>> It's one where, you know, it's an arms race. Network security is not new, it's been around for decades. But the attackers and the attacks have become more sophisticated, and as a result, you know, the defenders need to raise their game as well. This is why, on the one hand, there's so much hype, and I think machine learning in some ways is oversold, but in other ways it is a great tool in our arsenal. You know, the machine learning, the predictive models, the behavioral models, they really do work. And it really is the next evolution for defensive capabilities.

>> Thanks for coming on. Great insight. One last question: the beer. ExtraHop beer, you guys did that in the past.

>> It's been a while since we've done that, but it comes from the early days. When I founded the company, people would ask about the name ExtraHop: oh, are you guys an online brewery? And we were joking, we said no, but then we embraced it, and we actually worked with a local brewer that has since been acquired by a major beverage brand.

>> I didn't know that, I just heard it.

>> We built our own label, and it was the ExtraHop Wired IPA. It was extremely well received. Every time we'd visit a customer, they'd ask us to bring beer.

>> That's pretty good, you've got to go back to a proven formula. Thanks for the insights. Let's follow up when we get back to Palo Alto in our studio on this hybrid piece, it's a compelling conversation. Network security, network analytics, innovation, areas where all the action's happening, here in Boston at AWS re:Inforce. theCUBE coverage continues; we'll be right back.
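(Editor's note: for readers who want to see what the VPC traffic mirroring setup Jesse describes might look like in practice, here is a minimal sketch using Python and boto3. It is not from the conversation; the region, the eni- resource IDs, and the filter choices are hypothetical placeholders, and a real deployment would point the mirror target at whatever sensor or collector you actually run.)

```python
# Minimal sketch of the workflow described above: mirror an individual EC2
# instance's ENI, apply a filter, and send the copied packets to a collector.
# All resource IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. The target that receives mirrored packets, here another ENI (for example,
#    on a monitoring appliance); a Network Load Balancer ARN also works.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0aaa1111bbbb22222",
    Description="Mirror target for the NDR sensor",
)

# 2. A filter describing which traffic to copy, here all inbound TCP.
mirror_filter = ec2.create_traffic_mirror_filter(
    Description="Mirror inbound TCP only",
)
filter_id = mirror_filter["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,  # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# 3. The session ties the monitored instance's ENI to the target and filter.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0ccc3333dddd44444",  # ENI being monitored
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
    Description="Mirror app server traffic to the sensor",
)
```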
Thomas Wyatt, AppDynamics & Ben Nye, Turbonomic | Cisco Live US 2019
>> Live from San Diego, California, it's theCUBE, covering Cisco Live US 2019, brought to you by Cisco and its ecosystem partners.

>> Welcome back. We're here at the San Diego Convention Center for Cisco Live 2019, the 30th year of the show, 28,000 in attendance. I'm Stu Miniman, and we're actually at the midpoint of three days of live, wall-to-wall coverage here, and happy to bring back to the program two Cube alumni. First, to my right, is Ben Nye, who is the CEO of Turbonomic, and sitting next to him is Thomas Wyatt, who's the chief marketing and strategy officer of AppDynamics, or AppD as everybody calls them here at the show. Gentlemen, thanks so much for joining us.

>> Thank you.

>> Alright, so Thomas, first of all, we had you on at re:Invent, soon after the acquisition of AppD by Cisco. It's been about two years, and I believe it's been about two years that Turbonomic's been partnering with Cisco. So let's start with you, and, you know, what's changed in those two years?

>> Yeah, it's been amazing. Two years ago we were on the doorstep of an IPO, and it's been a rocket-ship ride ever since for AppDynamics. Over the last two years, the business has more than doubled, team size has more than doubled, and today we're really happy to be the largest and fastest growing provider of application performance monitoring in the market. But the reason why that is, is because our customers are embarking on their digital transformations, and the application has really become the foundation of their modern-day business. That's the way brands are engaging with their users, now more than ever. And the application landscape has gotten way more complex, with microservices and multiple clouds and all of the threats that go on in the infrastructure. And so what AppDynamics has been doing is really providing that real-time business and application performance monitoring that our customers need to ensure business outcomes. We think of ourselves as the MRI for the application and the infrastructure.

>> That's awesome. So, you know, it's been interesting to watch the networking space the last few years. For the most part, applications used to be just this thing that ran through the pipes: every once in a while I need to, you know, think about performance, I need to make sure I've got buffer credits, or, you know, it's now going east-west rather than north-south, and the like. But it was solutions like Turbonomic that sat on top of it and helped people understand and manage their applications. Of course, AppD is pulling that story together even tighter. So, you know, give us the latest. We've talked to you at Cisco Live before about an important partnership. What's the latest in your world?

>> Well, so one of the things we're doing is we're building an actual bundle together with AppD. And if you think about APM, you're getting the application topology as well as response time and user response time, which is critical to maintaining the brand in the digital economy that we're talking about. When you look at every one of those hops in the application, there's an entire application stack that sits on an underlying resourcing stack, and what we're doing is bringing in ARM, which is application resource management, with AI, so that we're automatically and continuously adjusting the resources at all times in order to support the performance needs that AppD is showing us. When you put together APM plus ARM,
you have total application performance, and customers are really keying to that, so much so that we've actually decided to put this bundle officially together in the marketplace. We just became the first AppD resell software product, and now we're taking that to market as CWOM plus AppD.

>> Well, congratulations on that partnership. Thomas, bring us inside the customers a little bit. What does this mean for them? You know, that journey we've talked about for, you know, the last 10, 15 years: you've got to break down those silos. It's not just the networking team, you know, tossing over some bandwidth and latency, and then them coming back saying, I need some more, and hearing, no, no, we're not going to give you any service level agreement or anything like that, because that's not our job, we'll just set this up and you use what you've got. So what's happening?

>> The trend that we're seeing is a move toward this concept of AIOps, which is really the consolidation of bringing end-user, application, network, and infrastructure monitoring closer together, and tying that together with AI-based insights to drive automation and action, very similar to what Turbonomic specializes in here. And so what we're seeing is, you know, the combination of Cisco plus AppDynamics, plus companies like Turbonomic, beginning to build that self-healing, self-learning environment, where developers and environments are able to drive automation. Automation ultimately gets to more innovation, when you can reduce the mundane tasks that really take a lot of our developers' time. And so we're really excited about some of the work we're doing together, when you think about the ability to take real-time business insights from the application and reprogram the network based on the needs of the app, or change out the workloads and move them around on different servers depending on the needs of the app. These are all things that the combination of Turbonomic, Cisco, and AppDynamics are doing together.

>> Yeah, actually, I did a whole show down in D.C. a couple of months ago with Cisco partners, focused on AIOps. And, you know, we understand customers have a lot of tools that they have to deal with. We need to simplify this environment, allow them to, you know, focus on their business, not on managing this complex environment of all these tools. That whole concept of AIOps and, you know, automating this environment and managing my workloads: what are you seeing with your customers?

>> I think all the customers are saying, look, there are too many tools today. They don't need another resource monitor, et cetera. What they need is to understand, through the lens of the application, all the resource dependencies. So instead of looking at a field of servers and saying, I have five-nines availability on my servers or storage or whatever, what they really want to see is whichever servers and storage and resources this specific app that runs the bank, or the CPG company, or the manufacturer depends on, and can I make sure that those resources are supporting the performance of the application? And that is this total application performance concept, much more so than whether I have five-nines availability on all my other hosts and assets.

>> Yeah, absolutely. Did you have a comment on that?

>> I was going to say, we're seeing so many different customers in different verticals, whether it's retail, hospitality, automakers, they're all benefiting from the cloud migration.
And now that they have the cloud migration, the ability to have that elasticity of their workloads, they're scaling in and out based on the application demands, this is becoming critical. This is no longer a luxury for only the most cloud-native companies in the world. Enterprises with mission-critical systems are all becoming dependent on these more modern technologies, and I think they need partners like ours more than ever.

>> Yeah. One of the questions we've had is, you talk to customers today and they are multi-cloud, but that multi-cloud, hybrid cloud, is a bunch of pieces, and one of the premises we ask about from a research standpoint is, how can some of those pieces be more valuable than just the independent pieces alone, you know, kind of one plus one with an extra factor? Talk a little bit about the customers, and also, you know, what does this combination do that I couldn't get by just, you know, grabbing these pieces together and kind of making it work in my portfolio with those, you know, dozens of tools that I have?

>> Glad to. I think the customers are one of the things that drove this, because it's needed. We literally announced this partnership publicly two weeks ago and have already closed the first two deals, just out of momentum, because folks are realizing the need to be able to say: look, I can host my applications on-prem with a number of different vendors, and I can host my applications off-prem with a number of different vendors, but the real question is, where am I going to get the most performance? Where can I do it in a compliant way with all my policies, and how can I make sure that I'm doing it cost effectively? And when there's a multiplicity of tradeoffs where I can choose, then it's incumbent upon each of those vendors, strategic as they are, to be able to offer the best service, the best performance, the best compliance and resourcing, and that's what we're bringing to them. And I think that's why you're seeing that a pipeline has built to several double-digit millions, and already deals are closing.

>> The thing I'd add to that is that, you know, going back to the point around AIOps and the evolution of a lot of these monitoring and automation technologies, a lot of our customers have a hybrid environment of different tools and providers that they leverage. And so one of the things that we're really focused on is an open ecosystem, where you'd be able to ingest data sources from various different players. Some of them can be Cisco, Turbonomic, and AppD, but some of them can be other providers that also have very good products in very specific domains. I think the key is being able to bring that data together, drive cross-domain correlation in a more automated way than ever before, leveraging some of the more modern AI capabilities, which drives the actioning that people really need. And that automation step is really where customers start to see the benefits. But the better and more valuable the data that you have, the better automation you can do, because the predictability of your algorithms is much better at that point.

>> All right, Ben, for your customers that have rolled out this solution, and I know the joint solution is brand new, what are the key metrics? How do they define success? How do they know that they've reached that success?

>> So, first and foremost, the line of business, who is the customer of central IT, whether it's hosted or not, they care the most that performance does not degrade and is always improving.
Okay, but when they do that and they can show that, then all the decisions that the rest of central IT takes, from the container layer to the pods, to the virtual layer, to the cloud, IaaS, on-prem and off, those become acceptable choices for central IT to make, because fundamentally the line of business is saying, yep, we're good, right? So that's where we're seeing the value of being able to see the response time and bridge the application performance to the application resourcing, which frankly hasn't ever been solved in five decades of IT. And I think it goes back to what Thomas was just saying: it's the quality of the analytics that comes from AIOps. I don't think people need more tools to capture more data; there's a lot of data out there. The question is, can you make it actionable? And are your analytics correct? And, frankly, are they the best? I think that's been a big part of what we've done during the two years. Cisco's told us on multiple occasions it's the fastest software OEM they've had, bringing it through, starting with the data center team and growing up through traditional Cisco, and then with their purchase of AppD two years ago. That combination makes a ton of sense, and now you've got the top all the way to the bottom. And that's a pretty special spot, I think, unreplicated by any other strategic today.

>> Yeah, the other thing I'd just add to that is the importance of being able to monitor the business in real time as well. A lot of what we've talked about are the technology analytics, the operational analytics that we run our business on, but being able to correlate the business transactions running through the application, so users, what their journey looks like, their, you know, abandonment rates, revenues, you know, the ability to engage with the users, and tying that back to the specific infrastructure, used to be a bit of a black box before. Now that all comes to life with the combination of these technologies.

>> So Thomas, big trends we see at this show: Cisco's transformation towards a software company, and the world of multi-cloud, and AppD plays a pretty important piece of that discussion. Talk a little bit about where you are, where you see Cisco moving along that journey, and then, you know, help tie in where Turbonomic fits.

>> Yeah. So I think it really goes back to the fact that as our customers are making this digital transformation, they're really looking at a variety of infrastructures and, you know, cloud providers to be able to offer these applications. And what AppDynamics has done is really created this monitoring fabric that sits across any infrastructure and tightly ties to the business value of the application. So if you combine that with a lot of what Cisco's doing around connectivity, securing the clouds, securing the infrastructure around it, and tying that to where we're strong in networking, and bringing all that together, I think fundamentally we've got a lot of the pieces of the puzzle to truly enable AIOps, but we don't have them all. And I think that's what's important about partnering with people like Ben, because it brings together a set of automation capabilities around application resourcing that we don't have, and our customers are better suited working with Ben and team on that. So how do we integrate those things in a frictionless way and make that part of our sales process? That's really what this partnership's all about.
>> All right, Ben, where do we see the partnership going down the road? >> I think it's going to get more exciting. So right now we're pulling unidirectionally from AppD. I think we're going to go right back the other way, to what Thomas referred to, which is one of my favorite parts of AppD, the Business iQ. It's where you say, what is the cost of the latency in any one hop, and where do the abandonment rates happen from consumers on that application? Right now, we can price for the first time what the cost of the latency is in that one tier and across the application overall. And then, more importantly, what do we do about it? Well, that's the resourcing, and the congestion is being resolved in real time. And so I think the ability to look at the resiliency of applications, both across and up and down, the APM plus the ARM, and being able to guarantee or assure performance, total application performance, that's a big message. >> All right, I'll give you both just a final word here, you know, about halfway through the conference here in San Diego. Thomas? >> I would just say that the energy that we're seeing, the feedback we're getting from customers in the Business Insights part of the World of Solutions, has been phenomenal. I think there are so many more developer-oriented, application-developer-oriented individuals at this Cisco Live than ever before. And I think that serves both of our businesses quite well. >> Look, I think this has been a great show, but one of the things you're going to see is all of these vendors who have had global presence for, in this case, 30 years, Cisco Live 30 years long, now being able to think through, how do I become that much more application relevant? If you think about it, transformation of the application is going to come top down, not bottom up. And so, while we have all the evolution and, frankly, disruption happening, digital disruption happening across IT, the way to know which of the ones are going to stick is that they're going to come top down. And I think the moves that they're making, all the way through buying AppD, all the way through partnering with Turbonomic, have been emblematic of what that opportunity is in the marketplace, and the realization that customers care about their applications; their applications run their business. And you've got to look at the topology, and you've got to look at response time, and you've got to look at the resourcing. But that's a really fun spot for us to be in together. >> Ben and Thomas, congratulations on the expanded partnership and thanks again for joining us on theCUBE. >> Thank you. >> All right, we're here in the DevNet Zone. Three days, wall-to-wall coverage. I'm Stu Miniman, Dave Vellante's in the house, Lisa Martin's here too, and we'll be back with lots more coverage. Thanks for watching theCUBE.
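The "cost of latency" and "actioning" ideas discussed in this segment can be made concrete with a small worked example. The sketch below is purely illustrative Python: it is not AppDynamics Business iQ or any Turbonomic API, and every number in it (baseline latency, abandonment rate, revenue per session, traffic) is an assumed placeholder. It simply prices excess latency per tier and picks the tier where a resourcing action would recover the most revenue.

```python
# Illustrative only: a toy model of "pricing" latency per tier and choosing a
# resourcing action. The numbers and names are hypothetical, not any vendor API.

BASELINE_MS = 200          # assumed acceptable response time per tier
ABANDON_PER_100MS = 0.02   # assumed 2% of sessions abandon per extra 100 ms
REVENUE_PER_SESSION = 1.50 # assumed average revenue per user session, in dollars
SESSIONS_PER_HOUR = 40_000 # assumed traffic

def latency_cost_per_hour(tier_latency_ms: float) -> float:
    """Estimate hourly revenue lost to latency in a single tier."""
    excess_ms = max(0.0, tier_latency_ms - BASELINE_MS)
    abandon_rate = min(1.0, (excess_ms / 100.0) * ABANDON_PER_100MS)
    return abandon_rate * SESSIONS_PER_HOUR * REVENUE_PER_SESSION

def recommend_action(tier_latencies: dict[str, float]) -> tuple[str, float]:
    """Pick the tier where resolving congestion would recover the most revenue."""
    costs = {tier: latency_cost_per_hour(ms) for tier, ms in tier_latencies.items()}
    worst = max(costs, key=costs.get)
    return worst, costs[worst]

if __name__ == "__main__":
    observed = {"web": 230.0, "api": 410.0, "payments": 260.0}
    tier, cost = recommend_action(observed)
    print(f"Scale out '{tier}' first: ~${cost:,.0f}/hour of revenue at risk")
```

In this toy model the "api" tier carries the largest priced latency penalty, so it becomes the first target for a scaling action; a real APM-plus-resourcing integration would replace the assumed constants with measured business transactions.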
SUMMARY :
Live from San Diego, California It's the queue covering And you know what? That's the way brands are engaging with their users. I need to, you know, think about performance. the performance needs that Abdi is showing us when you put together a PM plus a r m. You know what that journey we talk about, you know, for, And so what we're seeing is, you know, We need to simplify this environment, allow them tow, you know, company of the manufacturer. Did you have a comment on other Guy's And now that they have the cloud migration, the ability to have that elasticity of their workloads, Yeah, One of the questions we've had is you talk to customers today and they are multi cloud. And I think that's why you're seeing that a pipeline is built to several double digit millions add to that Is that, you know, going back to the point around a ops in the evolution of a lot And that is really the automation step is where customers start to see the you know that they they've reached that success. that the rest of central takes down in fromthe container layer to the pods that a virtual to the cloud I just added, That is the importance of being able to monitor the business in real time as well. moving along that journey and then, you know, help tie in where turban Ah, Mick Fitz. And I think that's what's important, that we partner with people like Ben because I think it's going to get more exciting. All right, what would I give you both? And I think that serves both of our business is quite well. And I think the moves that they're making all the way through buying happy all the way through partnering with Bennett Thomas Congratulations on the expanded partnership and thanks again for joining us on the Cube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Cisco | ORGANIZATION | 0.99+ |
Thomas | PERSON | 0.99+ |
Turman | ORGANIZATION | 0.99+ |
San Diego | LOCATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
D. C | LOCATION | 0.99+ |
Mick Fitz | PERSON | 0.99+ |
Ben I | PERSON | 0.99+ |
Turban | ORGANIZATION | 0.99+ |
AP Dynamics | ORGANIZATION | 0.99+ |
Thie Marie | PERSON | 0.99+ |
Walter Wall | PERSON | 0.99+ |
eight | QUANTITY | 0.99+ |
Onyx | ORGANIZATION | 0.99+ |
Prem | ORGANIZATION | 0.99+ |
APD | ORGANIZATION | 0.99+ |
five decades | QUANTITY | 0.99+ |
30 years | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
28,000 | QUANTITY | 0.99+ |
Ben Nye | PERSON | 0.99+ |
two weeks ago | DATE | 0.99+ |
Three days | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Mick | PERSON | 0.99+ |
Hap Dynamics | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
AppDynamics | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
two years ago | DATE | 0.98+ |
San Diego Convention Center | LOCATION | 0.98+ |
Turbonomic | ORGANIZATION | 0.98+ |
Two years ago | DATE | 0.98+ |
Abdi | PERSON | 0.98+ |
first time | QUANTITY | 0.98+ |
David Long | PERSON | 0.98+ |
Turbo | ORGANIZATION | 0.98+ |
three days | QUANTITY | 0.97+ |
about two years | QUANTITY | 0.97+ |
two years | QUANTITY | 0.97+ |
Ben | PERSON | 0.97+ |
Abdi | ORGANIZATION | 0.97+ |
Minuteman | PERSON | 0.97+ |
Thomas Wyatt | PERSON | 0.97+ |
AP | ORGANIZATION | 0.97+ |
dozens | QUANTITY | 0.97+ |
Cube | ORGANIZATION | 0.96+ |
Bennett Thomas | PERSON | 0.96+ |
Thomas wide | PERSON | 0.95+ |
Bandon | ORGANIZATION | 0.93+ |
Appdynamics | ORGANIZATION | 0.9+ |
Abdi | LOCATION | 0.88+ |
couple months ago | DATE | 0.87+ |
Leighton | LOCATION | 0.87+ |
Sisqo Live | EVENT | 0.86+ |
30th year | QUANTITY | 0.86+ |
1st 2 | QUANTITY | 0.86+ |
I P O | COMMERCIAL_ITEM | 0.84+ |
last two years | DATE | 0.83+ |
Jonathan Ballon, Intel | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their Ecosystem partners. >> Oh, welcome back to theCUBE. Continuing coverage here from AWS re:Invent, as we start to wind down our coverage here on the second day. We'll be here tomorrow as well, live on theCUBE, bringing you interviews from Hall D at the Sands Expo. Along with Justin Warren, I'm John Walls, and we're joined by Jonathan Ballon, who's the Vice President of the Internet of Things at Intel. Jonathan, thank you for being with us today. Good to see you. >> Thanks for having me guys. >> All right, interesting announcement today, and last year it was all about DeepLens. This year it's about DeepRacer. Tell us about that. >> What we're really trying to do is make AI accessible to developers and democratize various AI tools. Last year it was about computer vision. The DeepLens camera was a way for developers to very inexpensively get a hold of a camera, the first camera that was a deep-learning enabled, cloud connected camera, so that they could start experimenting and see what they could do with that type of device. This year we took the camera and we put it in a car, and we thought, what could they do if we add mobility to the equation, and specifically, wanted to introduce a relatively obscure form of AI called reinforcement learning. Historically this has been an area of AI that hasn't really been accessible to most developers, because they haven't had the compute resources at their disposal, or the scale to do it. And so now, what we've done is we've built a car, and a set of tools that help the car run. >> And it's a little miniature car, right? I mean it's a scale. >> It's 1/18th scale, it's an RC car. It's four-wheel drive, four-wheel steering. It's got GPS, it's got two batteries. One that runs the car itself, one that runs the compute platform and the camera. It's got expansion capabilities. We've got plans for next year of how we can turbo-charge the car. >> I love it. >> Right now it's baby steps, so to speak, and basically giving the developer the chance to write a reinforcement learning model, an algorithm that helps them to determine what is the optimum way that this car can move around a track, but you're not telling the car what the optimum way is, you're letting the car figure it out on its own. And that's really the key to reinforcement learning: you don't need a large, pre-labeled dataset to begin with. You're actually letting, in this case, a device figure it out for itself, and this becomes very powerful as a tool when you think about it being applied to various industries, or various use-cases, where we don't know the answer today, but we can allow vast amounts of computing resources to run a reinforcement model over and over, perhaps millions of times, until they find the optimum solution. >> So how do you, I mean that's a lot of input, right? That's a lot, that's a crazy number of variables. So, how do you do that? How do you, like in this case, provide a car with all the multiple variables that will come into play, how fast it goes, and which direction it goes, and all that, on different axes and all those things, to make its own determinations, and how will that then translate to a real, specific case in the workplace? >> Well, I mean the obvious parallel is of course autonomous driving.
AWS had Formula One on stage today during Andy Jassy's keynote, that's also an Intel customer, and what Formula One does is they have the fastest cars in the world, and they have over 120 sensors on that car that are bringing in over a million pieces of data per second. Being able to process that vast amount of data that quickly, which includes a variety of data, it's not just one kind, it's also audio data, it's visual data, and being able to use that to inform decisions in close to real time, requires very powerful compute resources, and those resources exist both in the cloud as well as close to the source of the data itself at the edge, in the physical environment. >> So, tell us a bit about the software that's involved here, 'cause people think of Intel, you know that some people don't know about the software heritage that Intel has. It's not just about the hardware chips that are there, there's a lot of software that goes into this. So, what's the Intel angle here on the software that powers this kind of distributed learning? >> Absolutely, software is a very important part of any AI architecture, and for us there's been a tremendous amount of investment. It's almost, perhaps, equal investment in software as in hardware. In the case of what we announced today with DeepRacer and AWS, there are some toolkits that allow developers to better harness the compute resources on the car itself. Two things specifically: one is we have a tool called RL Coach, or Reinforcement Learning Coach, that is integrated into SageMaker, AWS' machine learning toolkit, that allows them to get better performance in the cloud from the data that's coming off their model and into their cloud. And then we also have a toolkit called OpenVINO. It's not about drinking wine. >> Oh darn. >> Alright. >> Open means it's an open source contribution that we made to the industry. Vino, V-I-N-O, is Visual Inference and Neural Network Optimization, and this is a powerful tool, because so much of AI is about harnessing compute resources efficiently, and as more and more of the data that we bring into our compute environments is actually taking place in the physical world, it's really important to be able to do that in a cost-effective and power-efficient way. OpenVINO allows developers to actually isolate individual cores or an integrated GPU on a CPU without knowing anything about hardware architecture, and it allows them then to apply different applications, or different algorithms, or inference workloads very efficiently onto that compute architecture, but it's abstracted away from any knowledge of that. So, it's really designed for an application developer, who maybe is working with a data scientist that's built a neural network in a framework like TensorFlow, or ONNX, or PyTorch, any tool that they're already comfortable with, to abstract away from the silicon and optimize their model onto this hardware platform, so it performs at orders of magnitude better performance than what you would get from a more traditional GPU approach. >> Yeah, and that kind of decision making about understanding chip architectures to be able to optimize how that works, that's some deep magic really.
The amount of understanding that you would need to have to do that as a human is enormous, but as a developer, I don't know anything about chip architectures, so it sounds like, and it's a thing that we've been hearing over the last couple of days, these tools allow developers to have essentially superpowers, so you become an augmented intelligence yourself. Rather than just giving everything to an artificial intelligence, these tools actually augment the human intelligence and allow you to do things that you wouldn't otherwise be able to do. >> And that's, I think, the key to getting mass market adoption of some of these AI implementations. So, for the last four or five years, since ImageNet solved the image recognition problem, and now we have greater accuracy from computer models than we do from our own human eyes, really AI was limited to academia, or large IT tech companies, or proof-of-concepts. It didn't really scale into these production environments, but what we've seen over the last couple of years is really a democratization of AI by companies like AWS and Intel that are making tools available to developers, so they don't need to know how to code in Python to optimize a compute module, or they don't need to, in many cases, understand the fundamental underlying architectures. They can focus on whatever business problem they're tryin' to solve, or whatever AI use-case it is that they're working on. >> I know you talked about DeepLens last year, and now we've got DeepRacer this year, and you've got the contest going on throughout this coming year with DeepRacer, and we're going to have a big race at AWS re:Invent 2019. So what's next? I mean, what are you thinking about conceptually to, I guess, build on what you've already started there? >> Well, I can't reveal what next year's, >> Well that I understand >> Project will be. >> But generally speaking. >> But what I can tell you, what I can tell you is what's available today in these DeepRacer cars is a level playing field. Everyone's getting the same car and they have essentially the same tool sets, but I've got a couple of pro-tips for your viewers if they want to win at some of these AWS Summits that are going to be around the world in 2019. Two pro-tips: one is they can leverage the OpenVINO toolkit to get much higher inference performance from what's already on that car. So, I encourage them to work with OpenVINO. It's integrated into SageMaker, so that they have easy access to it if they're an AWS developer. But also we're going to allow an expansion of, almost an accelerator of the car itself, by being able to plug in an Intel Neural Compute Stick. We just released the second version of this stick. It's a USB form factor. It's got a Movidius Myriad X vision processing unit inside. This year's version is eight times more powerful than last year's version, and when they plug it into the car, all of that inference workload, all of those images and information that's coming off those sensors, will be put onto the VPU, allowing all the CPU and GPU resources to be used for other activities. It's going to allow that car to go at turbo speed. >> To really cook. >> Yeah. (laughing) >> Alright, so now you know, you have no excuse, right? I mean Jonathan has shared the secret sauce, although I still think when you said OpenVINO you got Justin really excited. >> It is vino time. >> It is five o'clock actually. >> Alright, thank you for being with us. >> Thanks for having me guys. >> And good luck with DeepRacer for the coming year.
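For readers who want to try the OpenVINO pro-tip described above, here is a minimal sketch of what targeting a Movidius VPU looks like from Python. It assumes the IECore-era OpenVINO Python bindings; exact module, method, and attribute names vary by OpenVINO release, and the model files named here are hypothetical placeholders rather than anything shipped with DeepRacer.

```python
# Illustrative OpenVINO inference sketch. API names follow the IECore-era
# Python bindings (read_network / load_network / infer); they differ across
# OpenVINO releases, and the model files below are placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Hypothetical IR model files produced by the OpenVINO model optimizer.
net = ie.read_network(model="track_model.xml", weights="track_model.bin")
input_name = next(iter(net.input_info))

# "MYRIAD" targets a Movidius VPU such as the Neural Compute Stick;
# "CPU" or "GPU" select other devices without changing application code.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # stand-in for a camera frame
result = exec_net.infer(inputs={input_name: frame})
print({name: out.shape for name, out in result.items()})
```

The design point is that swapping the device_name string retargets the same model to a CPU core, integrated GPU, or VPU, which is the hardware abstraction described in the interview.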
>> Thank you. >> It looks like a really, really fun project. We're back with more, here at AWS re:Invent on theCUBE, live in Las Vegas. (rhythmic digital music)
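As a closing illustration of the "reinforcement learning model" developers write for DeepRacer, here is a minimal reward-function sketch in Python. The parameter keys used (all_wheels_on_track, track_width, distance_from_center) follow the commonly documented DeepRacer interface, but treat the exact names as an assumption; the banding thresholds are arbitrary choices for illustration, not a recommended racing strategy.

```python
# A minimal, illustrative DeepRacer-style reward function.
# The training service calls this at each step; the car is never told the
# racing line, it only learns which behaviors accumulate reward.

def reward_function(params):
    """Reward staying near the center line; penalize leaving the track."""
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward when the car leaves the track

    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line, from tight to generous.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3
```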
SUMMARY :
Brought to you by Amazon Web Services, Intel, Good to see you, and last year it was all about DeepLens. that hasn't really been accessible to most developers, And it's a little miniature car, right? One that runs the car itself, And that's really the key to reinforcement learning to a real specific case in the workplace? and being able to use that to inform decisions It's not just about, the Intel inside that allows them to access better performance in the cloud and as more and more of the data that we bring Yeah, and that kind of decision making about And that's I think the key to getting mass market adoption I mean, or what are you thinking about conceptually to, so that they have easy access to it I mean Jonathan has shared the secret sauce, on theCUBE, live in Las Vegas.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Justin Warren | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Jonathan Ballon | PERSON | 0.99+ |
Jonathan | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
AWS' | ORGANIZATION | 0.99+ |
Last year | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
next year | DATE | 0.99+ |
Justin | PERSON | 0.99+ |
two batteries | QUANTITY | 0.99+ |
first camera | QUANTITY | 0.99+ |
This year | DATE | 0.99+ |
second version | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
eight times | QUANTITY | 0.99+ |
five o'clock | DATE | 0.99+ |
Two things | QUANTITY | 0.99+ |
this year | DATE | 0.98+ |
Two pro-tips | QUANTITY | 0.98+ |
over a million pieces | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
over 120 sensors | QUANTITY | 0.98+ |
OpenVINO | TITLE | 0.98+ |
One | QUANTITY | 0.98+ |
four-wheel | QUANTITY | 0.97+ |
Sands Expo | EVENT | 0.97+ |
DeepRacer | ORGANIZATION | 0.97+ |
SageMaker | TITLE | 0.96+ |
Myriad X Vision | COMMERCIAL_ITEM | 0.95+ |
DeepLens | COMMERCIAL_ITEM | 0.95+ |
V-I-N-O | TITLE | 0.94+ |
second day | QUANTITY | 0.94+ |
TensorFlow | TITLE | 0.94+ |
both | QUANTITY | 0.94+ |
millions of times | QUANTITY | 0.92+ |
Pytorch | TITLE | 0.92+ |
Onyx | TITLE | 0.91+ |
Neural Compute Stick | COMMERCIAL_ITEM | 0.91+ |
RL Coach | TITLE | 0.91+ |
Movidius | ORGANIZATION | 0.89+ |
Invent 2018 | EVENT | 0.86+ |
coming year | DATE | 0.86+ |
Reinforcement Learning Coach | TITLE | 0.85+ |
this coming year | DATE | 0.82+ |
ImageNet | ORGANIZATION | 0.82+ |
theCUBE | ORGANIZATION | 0.82+ |
five years | QUANTITY | 0.81+ |
re:Invent 2019 | EVENT | 0.8+ |
Vino | TITLE | 0.78+ |
last couple of days | DATE | 0.77+ |
Formula One | TITLE | 0.75+ |
AWS re:Invent 2018 | EVENT | 0.72+ |
Hall D | LOCATION | 0.71+ |
couple of years | QUANTITY | 0.71+ |
four | QUANTITY | 0.71+ |
data per second | QUANTITY | 0.69+ |
re: | EVENT | 0.67+ |
1/118th scale | QUANTITY | 0.67+ |
DeepRacer | COMMERCIAL_ITEM | 0.67+ |
Formula | ORGANIZATION | 0.67+ |
DeepRacer | TITLE | 0.65+ |
Adrian Cockcroft, AWS | KubeCon 2017
>> Announcer: Live from Austin, Texas, It's The Cube. Covering KubeCon 2017 and CloudNativeCon 2017. Brought to you by Red Hat, The Linux Foundation, and The Cube's ecosystem partners. >> Okay, welcome back everyone. Live here in Austin, Texas, this is The Cube's exclusive coverage of the CNCF CloudNativeCon, which was yesterday, and today is KubeCon, the Kubernetes conference, and a little bit tomorrow as well, some sessions. Our next guest is Adrian Cockcroft, VP of Cloud Architecture Strategy at AWS, Amazon Web Services, and my co-host Stu Miniman. Obviously, Adrian, an industry legend on Twitter and in the industry, formerly with Netflix, knows a lot about AWS, now VP of Cloud Architecture, thanks for joining us. Appreciate it. >> Thanks very much. >> This is your first time as an AWS employee on The Cube. You've been verified. >> I've been on The Cube before. >> Many times. You've been verified. What's going on now with you guys, obviously coming off a hugely successful re:Invent, there's a ton of video of me ranting and raving about how you guys are winning, and there's no second place in the rear-view mirror, certainly Amazon's doing great. But CloudNative's got the formula here. This is a cultural shift. What is going on here that's similar to what you guys are doing architecturally, why are you guys here, are you evangelizing, are you recruiting, are you proposing anything? What's the story? >> Yeah, it's really all of those things. We've been doing CloudNative for a long time, and the key thing with AWS, we always listen to our customers, and go wherever they take us. That's a big piece of the way we've always managed to keep on top of everything. And in this case, the whole container industry, there's a whole market there, there's a lot of different pieces, we've been working on that for a long time, and we found more and more people interested in CNCF and Kubernetes, and really started to engage. Part of my role is to host the open source team that does outbound engagement with all the different open source communities. So I've hired a few people, I hired Arun Gupta, who's very active in CNCF, earlier this year, and internally we were looking at, we need to join CNCF at some point. We've got to do that eventually, so let's go make it happen. So last summer we just did all the internal paperwork, and running around talking to people, and got everyone on the same page. And then in August we announced, hey, we're joining. So we got that done. I'm on the board of CNCF, Arun's my alternate for the board and technical, running around, and really deeply involved in as much of the technology and everything. And then that was largely so that we could kind of get our contributions from engineering on a clear footing. We were starting to contribute to Kubernetes, like as an outsider to the whole thing. So that's why we're, what's going on here? So getting that in place was like the basis for getting the contributions in place, we start hiring, we get the teams in place, and then getting our ducks in a row, if you like. And then last week at re:Invent, we announced EKS, the EC2 Kubernetes Service. And this week, we all had to be here. Like last week after re:Invent, everyone at AWS wants to go and sleep for a week. But no, we're going to go to Austin, we're going to do this. So we have about 20 people here, we came in, I did a little keynote yesterday.
I could talk through the different topics there, but fundamentally we wanted to be here where we've got the engineering teams here, we've got the engineering managers, they're in full-on hiring mode, because we've got the basic teams in place, but there's a lot more we want to do, and we're just going out and engaging, really getting to know the customers in detail. So that's really what drives it. Customer interactions, a little bit of hiring, and just being present in this community. >> Adrian, you're very well known in the open source community, everything that you've done. Netflix, when you were on the VC side, you evangelized a bunch of it, if I can use the term. Amazon, many of us from the outside looked in, trying to understand. Obviously Amazon used lots of open source, Amazon's participated in a number of open source projects. MXNet got a lot of attention, joining the CNCF is something, I know this community, it's been very positively received, everybody's been waiting for it. What can you tell us about how Amazon, how do they think about open source? Is that something that fits into the strategy, or is it a tactic? Obviously, you're building out your teams, that sends certain signals to market, but can you help clarify for those of us that are watching what Amazon thinks about when it comes to this space? >> I think we've been, so, we didn't really have a team focused on outbound communication of what we were doing in open source until I started building this team a year ago. I think that was the missing link. We were actually doing a lot more than most people realized. I'd summarize it as saying we were doing more than most people expected, but less than we probably could have been, given the scale of what we are, the scale that AWS is at. So part of what we're doing is unlocking some internal demand where engineering teams were going, we'd like to open source something, we don't know how to engage with the communities. We're trying to build trust with these communities, and I've hired a team, I've got several people now, who are mostly from the open source community, and we were also kind of interviewing people like crazy. That was our sourcing for this team. So we get these people in and then we kind of say, all right, we have somebody that understands how to build these communities, how to respond, how to engage with the open source community. It's a little different to a standard customer, enterprise, start up, those are different entities that you'd want to relate to. But from a customer point of view, being customer-obsessed as AWS is, how do we get AWS to listen to an open source community and work with them, and meet all their concerns? So we've been, I think, doing a better job of that now that we've pretty much got the team in place. >> That's your point, is customer focus is the ethos there. The communities are your customers in this case. So you're formalizing, you're formalizing that for Amazon, which has been so busy building out, and contributing here and there, so it sounds like there was a lot of activity going on within AWS, it was just kind of like contributing, but so much work on building out cloud ... >> Well there's a lot going on, but if no one was out there telling the story, you didn't know about it. Actually one of the best analogies we have for EKS is actually our EMR, our Hadoop service, which launched in 2010 or something, 2009, we've had it forever. But for the first few years when we did EMR, it was actually in a fork.
We kept just sort of building our own version of it to do things, but about three or four years ago we started upstreaming everything, and it's a completely clean, upstreamed version of all of Hadoop and all the related projects. But you make one API call, a cluster appears. Hey, give me a Hadoop cluster. Voom, and I want Spark and I want all these other things on it. And we're basically taking Kubernetes, it's very similar, we're going to reduce that to a single API call, a cluster appears, and it's a fully upstreamed experience. So that's, in terms of an engineering relationship to open source, we've already got a pretty good success story that nobody really knew about. And we're following a very similar path. >> Adrian, can you help us kind of unpack the Amazon Kubernetes stack a little bit? One of the announcements had a lot of attention, definitely got our attention, Fargate, kind of sits underneath what Kubernetes is doing, my understanding. Where are you sitting with the service meshes, kind of bring us through the Amazon stack. What does Amazon do on its own versus the open source, and how do those all fit together? >> Yeah, so everyone knows Amazon is a place where you can get virtual machines. It's easy to get me a virtual machine, from ten years ago, everyone gets that, right? And then about three years ago, I think it was three years ago, we announced Lambda, was that two or three years ago? I lose track of how many re:Invents ago it was. But with Lambda it's like, well, just give me a function. But as a first class entity, there's a, give me a function, here's the code I want you to run. We've now added two new ways that you can deploy to, two things you can deploy to. One of them's bare metal, which is already announced, one of the many, many, many announcements last week that might have slipped by without you noticing, but Bare Metal is a service. People go, 'those machines are really big'. Yes, of course they're really big! You get the whole machine and you're able to bring your own virtualization or run whatever you want. But you could launch, you could run Kubernetes on that if you wanted, but we don't really care what you run it on. So we had Bare Metal, and then we have container. So Fargate is container as a first class entity that you deploy to. So here's my container registry, point you at it, and run one of these for me. And you don't have to think about deploying the underlying machines it's running on, you don't have to think about what version of Linux it is, you don't have to build an AMI, all of the agents and fussing around, and you can get it in much smaller chunks. So you can say you get a CPU and half a gig of RAM, and have that as just a small container. So it becomes much more granular, and you can get a broader range of mixes. A lot of our instances are sort of powers of two of a ratio of CPU to memory, and with Fargate you can ask for a much broader ratio. So you can have more CPU, less memory, and go back the other way, as well. 'Cause we can mix it up more easily at the container level. So it gives you a lot more flexibility, and if you buy into this, basically you'll get to do a lot of cost reduction for the sort of smaller scale things that you're running. Maybe test environments, you could shrink them down to just the containers and not have a lot of wasted space where you're trying to, you have too many instances running that you want to put it in. So it's partly the finer grain giving you more ability to say -- >> John: Or consumption choice. >> Yeah, and the other thing that we did recently was move to per-second billing; after the first minute, it's per-second. So the granularity of Cloud is now getting to be extremely fine-grained, and Lambda is per hundred milliseconds, so it's just a little bit -- >> $4.03 for your bill, I mean this is the key thing. You guys have simplified the consumption experience. Bare Metal, VMs, containers, and functions. I mean, pick one. >> Or pick all of them, it's fine. And when you look at the way Fargate's deployed in ECS, it's a mixture. It's not all one or all the other; you deploy a number of instances with your containers on them, plus Fargate to deploy some additional containers that maybe didn't fit those instances. Maybe you've got a fleet of GPU-enhanced machines, but you want to run a bit of logic around it, some other containers in the same execution environment, but these don't need to be on the GPU. That kind of thing, you can mix it up. The other part of the question was, so how does this play into Kubernetes, and the discussions are just that we had to release the thing first, and then we can start talking, okay, how does this fit. Parts of the model fit into Kubernetes, parts don't. So we have to expose some more functionality in Fargate for this to make sense, 'cause we've got a really minimal initial release right now; we're going to expose it and add some more features. And then we possibly have to look at ways that we mutate Kubernetes a little bit for it to fit. So the initial EKS release won't include Fargate, because we're just trying to get it out based on what everyone knows today, we'd rather get that out earlier. But we'll be doing development work in the meantime, so in a subsequent release we'll have done the integration work, which will all happen in public, in discussion with the community, and we'll have a debate about, okay, these are the features Fargate needs to properly integrate into Kubernetes, and there are other similar services from other top providers that want to integrate to the same API. So it's all going to be done as a public development, how we architect this. >> I saw a tweet here, I want to hear your comments on, it's from your keynote, someone retweeted, "managing over 100,000 clusters on ECS, hashtag Fargate," integrated into ECS, your hashtag, open, ADM's open. What is that hundred thousand number? Is that the total number, is that an example? On Elastic Container Service, what does that mean? >> So ECS is a very large scale, multi-tenant container orchestration service that we've had for several years. It's in production, and if you compare it to Kubernetes, it's running much larger clusters, and it's been running at production-grade for longer. So it's a little bit more robust and secure and all those kinds of things. So I think it's missing some Kubernetes features, and there's a few places where we want to bring in capabilities from Kubernetes and make ECS a better experience for people. Think of Kubernetes as somewhat optimized for the developer experience, and ECS for more the operations experience, and we're trying to bring all this together. It is operating over a hundred thousand clusters of containers, over a hundred thousand clusters. And I think the other number was hundreds of millions of new containers are launched every week, or something like that. I think it was hundreds of millions a week. So, it's a very large scale system that is already deployed, and we're running some extremely large customers on it, like Expedia and Mapbox.
Some of these people are running tens of thousands of containers in production as a single cluster; we have single clusters in the tens of thousands range. So it's a different beast, right? And it meets a certain need, and we're going to evolve it forwards, and Kubernetes is serving a very different purpose. If you look at our data science space, if you want exactly the same Hadoop thing you can get on-prem, you can run EMR. But we have Athena and Redshift and all these other ways that are more native to the way we think, where we can go iterate and build something very specific to AWS, so you blend these two together and it depends on what you're trying to achieve. >> Well Adrian, congratulations on a great opportunity, I think the world is excited to have you in your role. If you could clarify and just put the narrative around what's actually happening in AWS, what's been happening, and what you guys are going to do going forward. I'll give you the last minute to let folks know what your job is, what your objective is, what you're looking to hire, and your philosophy on open source for AWS. >> I think there's a couple of other projects, and we've talked, this is really all about containers. The other two key project areas that we've been looking at are deep learning frameworks, since all of the deep learning frameworks are open source. A lot of Kubernetes people are using it to run GPUs and do that kind of stuff. So Apache MXNet is another focus of my team. It went into the incubation phase last January, we're walking it through, helping it on its way. It's something where 30, 40% of that project is AWS contribution. So we're not dominating it, but we're one of its main sponsors, and we're working with other companies. There's joint work with, there's lots of open source projects around here. We're working with Microsoft on Gluon, we're working with Facebook and Microsoft on ONNX, which is an open neural network exchange. There's a whole lot of things going on here. And I have somebody on my team who hasn't started yet, can't tell you who it is, but they're starting pretty soon, who's going to be focusing on that open source, deep learning AI space. And the final area I think is interesting is IoT, serverless, Edge, that whole space. One announcement recently is FreeRTOS. So again, we sort of acquired the founder of this thing, this free real-time operating system. Everything you have, you probably personally own hundreds of instances of this without knowing it, it's in everything. Just about every little thing that sits there, that runs itself, every light bulb, probably, in your house that has a processor in it, those are all FreeRTOS. So it's incredibly pervasive, and we did an open source announcement last week where we switched its license to be a pure MIT license, to be more friendly for the community, and announced an Amazon version of it with better Amazon integration, but also some upgrades to the open source version. So, again, we're pushing an open source platform strategy in the embedded and IoT space as well. >> And enabling people to build great software, take the software engineering hassles out for the application developers, while giving the software engineers more engineering opportunities to create some good stuff. Thanks for coming on The Cube and congratulations on your continued success, and looking forward to following up on the Amazon Web Services open source collaboration, contribution, and of course, innovation. The Cube doing its part here with its open source content, three days of coverage of CloudNativeCon and KubeCon. It's our second day, I'm John Furrier, with Stu Miniman, and we'll be back with more live coverage in Austin, Texas, after this short break. >> Offscreen: Thank you.
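To ground the "one API call, a cluster appears" point from this segment, here is a hedged boto3 sketch showing the equivalent single calls for EMR and EKS. The cluster names, IAM role ARNs, subnet IDs, and release label are placeholders; real calls require valid roles and networking in your account, and EKS availability depends on region and launch timing.

```python
# Illustrative boto3 sketch of "one API call, a cluster appears" for EMR and EKS.
# All identifiers below are placeholders, not working resources.
import boto3

emr = boto3.client("emr")
emr_response = emr.run_job_flow(          # one call: a Hadoop/Spark cluster
    Name="example-hadoop-cluster",
    ReleaseLabel="emr-5.20.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "InstanceCount": 3,
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("EMR cluster:", emr_response["JobFlowId"])

eks = boto3.client("eks")
eks_response = eks.create_cluster(        # one call: a managed Kubernetes control plane
    name="example-eks-cluster",
    roleArn="arn:aws:iam::123456789012:role/example-eks-service-role",
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
)
print("EKS cluster status:", eks_response["cluster"]["status"])
```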
SUMMARY :
Brought to you by Red Hat, The Lennox Foundation, exclusive coverage of the CNCF CloudNativeCon This is your first time as an AWS employee on The Cube. What's going on now with you guys, and got everyone on the same page. Is that something that fits into the strategy, So we get these people in and then we kind of say, and there, so it sounds like there was a lot of activity telling the story, you didn't know about it. One of the announcements had a lot of attention, So it's partly the finer grain giving you more Yeah, and the other thing that we did recently was move to You guys have simplified the consumption experience. It's not all one or all the other, you deploy Is that the total number, is that an example? that are more native to the way we think, and what you guys are going to do forward. So it's incredibly pervasive, and we did an open source And enabling people to build great software,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Adrian | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Adrian Cockcroft | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
last week | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
August | DATE | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
second day | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
2010 | DATE | 0.99+ |
this week | DATE | 0.99+ |
AltOS | TITLE | 0.99+ |
Austin, Texas | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
first minute | QUANTITY | 0.99+ |
Austin | LOCATION | 0.99+ |
last summer | DATE | 0.99+ |
Arun Gupta | PERSON | 0.99+ |
tens of thousands | QUANTITY | 0.99+ |
KubeCon | EVENT | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
MXNet | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
Macbook | COMMERCIAL_ITEM | 0.99+ |
2009 | DATE | 0.99+ |
John | PERSON | 0.99+ |
three years ago | DATE | 0.99+ |
a year ago | DATE | 0.99+ |
hundreds of millions a week | QUANTITY | 0.99+ |
two | DATE | 0.98+ |
last January | DATE | 0.98+ |
The Cube | ORGANIZATION | 0.98+ |
ten years ago | DATE | 0.98+ |
two things | QUANTITY | 0.98+ |
three days | QUANTITY | 0.98+ |
over a hundred thousand clusters | QUANTITY | 0.98+ |
KubeCon 2017 | EVENT | 0.98+ |
over 100,000 clusters | QUANTITY | 0.98+ |
$4.03 | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
hundred thousand | QUANTITY | 0.97+ |
two new ways | QUANTITY | 0.97+ |
Fargate | ORGANIZATION | 0.97+ |
Lambda | TITLE | 0.97+ |
CloudNativeCon 2017 | EVENT | 0.97+ |
The Lennox Foundation | ORGANIZATION | 0.97+ |
half a gig | QUANTITY | 0.97+ |