Innovation Happens Best in Open Collaboration Panel | DockerCon Live 2020
>> Announcer: From around the globe, it's theCUBE with digital coverage of DockerCon Live 2020. Brought to you by Docker and its ecosystem partners. >> Welcome, welcome, welcome to DockerCon 2020. We've got over 50,000 people registered, so there's clearly a ton of interest in the world of Docker and Kubernetes, or "Dockernetes" as I like to call it. And we've assembled a power panel of Open Source and cloud native experts to talk about where things stand in 2020 and where we're headed. I'm Shaun Connolly, I'll be the moderator for today's panel. I'm also a proud alum of JBoss, Red Hat, SpringSource, VMware and Hortonworks, and I'm broadcasting from my hometown of Philly. Our panelists include: Michelle Noorali, Senior Software Engineer at Microsoft, joining us from Atlanta, Georgia; Kelsey Hightower, Principal Developer Advocate at Google Cloud, joining us from Washington State; and Chris Aniszczyk, CTO of the CNCF, joining us from Austin, Texas. So I think we have the country pretty well covered. Thank you all for spending time with us on this power panel. Chris, I'm going to start with you, let's dive right in. You've been in the middle of the Docker and Kubernetes wave since the beginning, with a clear focus on building a better world through open collaboration. What are your thoughts on how the Open Source landscape has evolved over the past few years? Where are we in 2020? And where are we headed, from both a community and a tech perspective? Just curious to get things sized up. >> Sure, when CNCF started a little over four years ago, the technology mostly focused on just the things around Kubernetes, monitoring Kubernetes with technology like Prometheus, and I think in 2020 and the future, we definitely want to move up the stack. So there's a lot of tools being built on the periphery now. So there's a lot of tools that handle running different types of workloads on Kubernetes. 
So things like KubeVirt run VMs on Kubernetes, which is crazy, not just containers. You have folks at Microsoft experimenting with a project called Krustlet, which is trying to run WebAssembly workloads natively on Kubernetes. So I think what we've seen now is more and more tools built around the periphery, while the core of Kubernetes has stabilized. So different technologies and spaces such as security, and different ways to run different types of workloads. At least that's what I've seen. >> So do you have a fair amount of vendors as well as end users still submitting projects in, is there still a pretty high volume? >> Yeah, we have 48 total projects in CNCF right now, and Michelle could speak a little bit more to this, being on the TOC; the pipeline for new projects is quite extensive and it covers all sorts of spaces, from service meshes to security projects and so on. So it's ever expanding and filling in gaps in that cloud native landscape that we have. >> Awesome. Michelle, let's head to you. But before we actually dive in, let's talk a little glory days. Rumor has it that you were the Fifth Grade Kickball Championship team captain. (Michelle laughs) Are the rumors true? >> They are, my speech at the end of the year was the first talk I ever gave. But yeah, it was really fun. I wasn't captain 'cause I was really great at anything else, apart from constantly cheering on the team. >> A little better than my eighth grade Spelling Champ Award, so I think I'd rather have the kickball. But you've definitely spent a lot of time leading in Open Source, you've been across many projects for many years. So how do the art and science of collaboration, inclusivity and teamwork vary? 'Cause you're involved in a variety of efforts, both in the CNCF and even outside of that. And then what are some tips for expanding the tent of Open Source projects? >> That's a good question. I think it's about transparency. 
Just come in and tell people what you really need to do; the more clearly you articulate your problem and why you can't solve it with any other solution, the more people are going to understand what you're trying to do and be able to collaborate with you better. What I love about Open Source is that where I've seen it succeed is where incentives of different perspectives and parties align and you're just transparent about what you want. So you can collaborate where it makes sense, even if you compete as a company with another company in the same area. So I really like that, but I just feel like transparency and honesty is what it comes down to, and clearly communicating those objectives. >> Yeah, and in the various foundations, I think one of the things that I've seen, particularly at the Apache Software Foundation and others, is the notion of checking your badge at the door. Because the competition might be between companies, but in many respects, you have engineers across many companies that are just kicking butt with the tech they contribute. Claiming victory in one way or the other might make for interesting marketing drama, but I think that's a little bit of the challenge. In some of the standards-based work you're doing, I know with CNI and some other things, are they similar, are they different? How would you compare and contrast them with something a little more structured like the CNCF? >> Yeah, so most of what I do is in the CNCF, but there's specs and there's projects. I think what the CNCF does a great job at is just iterating to make it an easier place for developers to collaborate. You can ask the CNCF for basically whatever you need, and they'll try their best to figure out how to make it happen. And we just continue to work on making the processes clearer and more transparent. And I think in terms of specs and projects, those are such different collaboration environments. 
Because if you're in a project, you have to say, "Okay, I want this feature or I want this bug fixed." But when you're in a spec environment, you have to think a little outside of the box: what framework do you want to work in? You have to think a little farther ahead in terms of, is this solution or this decision we're going to make going to last for the next however many years? You have to get more of a buy-in from all of the key stakeholders and maintainers. So it's a little bit of a longer process, I think. But what's so beautiful is that you have this really solid standard or interface that opens up an ecosystem and allows people to build things that you could never have even imagined or dreamed of, so-- >> Gotcha. So Kelsey, we'll head over to you, as your focus is on developer advocacy and you've been in the cloud native front lines for many years. Today developers are faced with a ton of moving parts, spanning containers, functions, cloud service primitives, including container services, serverless platforms, lots more, right? I mean, there's just a ton of choice. How do you help developers maintain a minimalist mantra in the face of such a wealth of choice? I hear you talk about minimalism periodically, I know you're a fan of that. How do you pass that on in your developer advocacy, in your day to day work? >> Yeah, I think for most developers, most of this is not really top of mind for them. It's something you may see in a post on Hacker News, and you might double-click into it. Maybe someone on your team brought one of these tools in and maybe it leaks up into your workflow, so you're forced to think about it. But for most developers, they just really want to continue writing code like they've been doing. And the best of these projects they'll never see. They just work, they get out of the way, they help them with logging, they help them run their application. But for most people, this isn't the core of the job for them. 
For people in operations, on the other hand, maybe these components fill a gap. So they look at a lot of this stuff that you see in the CNCF and Open Source space as, number one, various companies or teams sharing the way that they do things, right? So these are ideas that are put into the Open Source; some of them will turn into products, some of them will just stay as projects that had mutual benefit for multiple people. But for the most part, it's like walking through an aisle at Home Depot. You pick the tools that you need, you can safely ignore the ones you don't need, and maybe something looks interesting and maybe you study it to see if it solves a problem you have. And for most people, if you don't have the problem that that tool solves, you should be happy. No one needs every project, and I think that's where the foundation for confusion is. So my main job is to help people not get stuck and confused in this landscape, and just be pragmatic and just use the tools that work for 'em. >> Yeah, and you've spent the last little while in the serverless space really diving into that area. Compare and contrast, I guess, what you found there with the minimalist approach; who are you speaking to from a serverless perspective versus that of the broader CNCF? >> The thing that really pushed me over: I was teaching my daughter how to make a website. So she's on her Chromebook, making a website, and she's hitting 127.0.0.1, and it looks like GeoCities from the 90s, but look, she's making a website. And she wanted her friends to take a look. So she copied and pasted 127.0.0.1 from her browser and none of her friends could pull it up. So this is the point where every parent has to cross that line and say, "Hey, do I really need to sit down and teach my daughter about Linux and Docker and Kubernetes?" That isn't her main goal; her goal was to just launch her website in a way that someone else can see it. So we got Firebase installed on her laptop, she ran one command, firebase deploy. 
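(For readers following along, that one-command workflow looks roughly like this; a sketch of a CLI session, where the hosting setup steps and project name are illustrative:)

```
# One-time setup: install the Firebase CLI and initialize hosting
npm install -g firebase-tools
firebase login
firebase init hosting        # point it at the folder containing index.html

# The single deploy step Kelsey describes; prints a public URL when done
firebase deploy
```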
And our site was up in a few minutes, and she sent it over to her friend, and there you go, she was off and running. The whole serverless movement has that philosophy as one of its stated goals: that needs to be the workflow. So I think serverless is starting to get closer and closer; you start to see us talk about, and Chris mentioned this earlier, moving up the stack. As we move up the stack, the North Star there is a world where you get to focus on what you're doing, and not necessarily how to do it underneath. And I think serverless is not quite there yet for every type of workload: stateless web apps, check; event-driven workflows, check; but not necessarily for things like machine learning and some other workloads that more traditional enterprises want to run, so there's still work to do there. So serverless, for me, serves as the North Star for why all these projects exist, for people that may have to roll their own platform to provide that experience. >> So, Chris, on a related note, with what we were just talking about with Kelsey, what's your perspective on the explosion of the cloud native landscape? There's a ton of individual projects; each can be used separately, but in many cases, they're like Lego blocks and used together. So things like the Service Mesh Interface, standardizing interfaces so things can snap together more easily, I think, are some of the approaches. But are you doing anything specifically to encourage this cross-fertilization, collaboration and pluggability? Because there's just a ton of projects, not only in the CNCF but outside the CNCF, that need to plug in. >> Yeah, I mean, a lot of this happens organically. CNCF really provides the neutral home where companies, competitors, could trust each other to build interesting technology. We don't force integration or collaboration, it happens on its own. We essentially allow the market to decide what a successful project is long term, or what an integration is. 
We have a great Technical Oversight Committee that helps shepherd the overall technical vision for the organization and sometimes steps in and tries to do the right thing when it comes to potentially integrating a project. Previously, we had this issue where there was a project called OpenTracing, and an effort called OpenCensus, which were basically trying to standardize how you're going to deal with metrics, telemetry and so on in a cloud native world, and that were essentially competing with each other. The CNCF TOC and community came together and merged those projects into one parent effort called OpenTelemetry, and so that to me is a case study of how our committee helps bridge things. But we don't force things; we essentially want our community of end users and vendors to decide which technology is best in the long term, and we'll support that. >> Okay, awesome. And, Michelle, you've been focused on making distributed systems digestible, which to me is about simplifying things. And so back when Docker arrived on the scene, some people referred to it as developer dopamine, which I love that term, because it simplified a bunch of crufty stuff for developers and actually helped them focus on doing their job: writing code, delivering code. What's happening in the community to help developers wire together multi-part modern apps in a way that's elegant, digestible, feels like a dopamine rush? >> Yeah, one of the goals of the (mumbles) project was to make it easier to deploy an application on Kubernetes so that you could see what the finished product looks like, and then dig into all of the things that that application is composed of, all the resources. So we've been really passionate about this kind of stuff for a while now. And I love seeing projects that come into the space that have this same goal and just iterate and make things easier. 
I think we have a ways to go still; I think a lot of the iOS developers and JS developers I get to talk to don't really care that much about Kubernetes. They just want to, like Kelsey said, just focus on their code. So one of the projects that I really like working with is Tilt; it gives you this dashboard in your CLI, aggregates all your logs from your applications, and it kind of watches your application for changes and redeploys those changes to Kubernetes so you can see what's going on, and it'll catch errors. Anything with a dashboard, I love these days. So Kiali is a metrics dashboard that's integrated with Istio; it gives you a service graph of your service mesh and lets you see the metrics running there. I love that, I love that dashboard so much. Linkerd has some really good service graph images, too. So anything that helps me as an end user, which I'm not technically an end user, but me as a person who's just trying to get stuff up and running and working, see the state of the world easily and digest it, has been really exciting to see. And I'm seeing more and more dashboards come to light, and I'm very excited about that. >> Yeah, as part of DockerCon, just as a person who will be attending some of the sessions, I'm really looking forward to seeing where Docker Compose is going; I know they opened up the spec to broader input. I think your point, a good one, is there's a bit more work to really embrace the wealth of application artifacts that compose a larger application. So there's definitely work the broader community needs to lean in on, I think. >> I'm glad you brought that up, actually. Compose is something that I should have mentioned, and I'm glad you bring that up. I want to see programming language libraries integrate with the Compose spec. I really want to see what happens with that; I think it's great that they opened that up and made it a spec, because obviously people really like using Compose. >> Excellent. 
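(For context, the Compose spec being discussed describes a multi-container application declaratively; a minimal file looks something like this config fragment, where the service names and images are illustrative:)

```yaml
# docker-compose.yml - a two-service app (names and images are illustrative)
version: "3.8"
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - redis
  redis:
    image: redis:6-alpine
```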
So Kelsey, I'd be remiss if I didn't touch on your January post on Changelog entitled, "Monoliths are the Future." Your post actually really resonated with me. My son works for a software company in Austin, Texas, so your hometown there, Chris. >> Yeah. >> Shout out to Will and the chorus team. His development work focuses on adding modern features via microservices as extensions to the core monolith that the company was founded on. So just share some thoughts on monoliths and microservices, and also what delivers dopamine, from your perspective. More broadly, people usually phrase it as monoliths versus microservices, but I get the sense you don't believe it's either-or. >> Yeah, I think for most companies the argument is one of pragmatism. Most companies have trouble designing any app: a monolith, a single deployable, or a microservices architecture. And then these things evolve over time. Unless you're really careful, it's really hard to know how to slice these things. So taking an idea or a problem and just knowing how to perfectly compartmentalize it into individual deployable components, that's hard for even the best people to do, and doubly so without knowing the actual solution to the particular problem. A lot of problems people are solving, they're solving for the first time. It's really interesting; in our industry in general, a lot of people who work in it are solving the particular problem they're trying to solve for the first time. So that's interesting. The other part there is that most of these tools that are here to help are really only at the infrastructure layer. We're talking freeways and bridges and toll bridges, but there's nothing that happens in the actual developer space right there in memory. So the libraries that interface to structured logging, the libraries that deal with rate limiting, the libraries that deal with authorization, can this person make this query with this user ID? 
A lot of those things are still left for developers to figure out on their own. So while we have things like Kubernetes and Fluentd, we have all of these tools to deploy apps into those targets, most developers still have the problem of everything you do above that line. And to be honest, the majority of the complexity has to be resolved right there in the app. That's the thing that's taking requests directly from the user. And this is where maybe, as an industry, we're over-correcting. So, you said you come from the JBoss world; I started a lot of my career in systems administration, and there's where we focused a little bit more on the actual application needs. But now what we're seeing is things like Spring Boot start to offer a little bit more integration points in the application space itself. So I think the biggest parts that are missing now are: what are the frameworks people will use for authorization? So you have projects like OPA, Open Policy Agent for those that are new to that; it gives you this very low-level framework, but you still have to understand the concepts around what it means to allow someone to do something, and with one missed configuration, all your security goes out of the window. So I think for most developers this is where the next set of challenges lie, if not actually the original challenge. So for some people, they were able to solve most of these problems with virtualization: run some scripts, virtualize everything and be fine. And monoliths were okay for that. For some reason, we've thrown pragmatism out of the window and some people are saying the only way to solve these problems is by breaking the app into 1000 pieces. Forget the fact that you had trouble managing one piece; you're going to somehow find the ability to manage 1000 pieces with these tools underneath, while still not solving the actual developer problems. 
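(The authorization question Kelsey raises, "can this person make this query with this user ID?", is the kind of check a policy engine like OPA generalizes. A minimal sketch of the allow/deny idea in Python; the roles and rules here are hypothetical and this is not OPA's actual API, which evaluates policies written in its own language, Rego:)

```python
# A toy policy: map roles to the set of actions they may perform.
# This only illustrates the allow/deny concept Kelsey describes; engines
# like OPA express the same idea in a dedicated policy language evaluated
# outside the application, so one missed rule denies rather than allows.
POLICY = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions are rejected."""
    return action in POLICY.get(role, set())

if __name__ == "__main__":
    print(is_allowed("editor", "write"))   # True
    print(is_allowed("viewer", "delete"))  # False
```

The deny-by-default shape matters: an unconfigured role gets nothing, which is the safer failure mode than the "one missed configuration" scenario above.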
So this is where you've seen it already with a couple of popular blog posts from other companies: they cut too deep. They're going from 2,000, 3,000 microservices back to maybe 100 or 200. So in my world, it's going to be not just one monolith; you may end up having 10 or 20 monoliths that reflect the organization that you have, versus the architectural pattern that you're after. >> I view it as like a constellation of stars and planets, et cetera, where you might have a star, which is a monolith, and you have a variety of planetary microservices that float around it. But that's reality, that's the reality of modern applications, particularly if you're not starting from a clean slate. I mean, your point's a good one: in many respects, I think the infrastructure-as-code movement has helped automate a bit of the deployment of the platform. I've been personally focused on app development, JBoss as well as SpringSource; the Spring team, I know that tech pretty well over the years 'cause I was involved with that. So I find that James Governor's discussion of progressive delivery really resonates with me as a developer, not so much as an infrastructure deployer. So continuous delivery is more of an infrastructure notion; progressive delivery, feature flags, those types of things are app-level concepts: minimizing the blast radius of the new features you're deploying, that type of stuff, I think, begins to speak to the pain of application delivery. So I guess I'll put this up. Michelle, I might aim it to you, and then we'll go around the horn: what are your thoughts on the progressive delivery area? How could that potentially begin to impact cloud native over 2020? I'm looking for some rallying cries that move up the stack and give a set of best practices, if you will. And I think James Governor of RedMonk is onto something that's pretty important. 
>> Yeah, I think it's all about automating all that stuff that you don't really know about. Like Flagger is an awesome progressive delivery tool; you can just deploy something, and people have been asking for so many years, ever since I've been in this space, "How do I do A/B deployment?" "How do I do canary?" "How do I execute these different deployment strategies?" And Flagger is a really good example; it's a really good way to execute these deployment strategies, but then make sure that everything's happening correctly via observing metrics, and roll back if you need to, so you don't just throw off your whole system. I think it solves the problem and allows you to take risks, but also keeps you safe, in that you can be confident as you roll out your changes that it all works; it's metrics-driven. So I'm just really looking forward to seeing more tools like that, and dashboards that enable that kind of functionality. >> Chris, what are your thoughts in that progressive delivery area? >> I mean, CNCF alone has a lot of projects in that space, things like Argo that are tackling it. But I want to go back a little bit to your point around developer dopamine, as someone that probably spent about a decade of his career focused on developer tooling, and in fact the Eclipse IDE, if you remember that whole integrated experience. I was blown away recently by a demo from GitHub. They have something called Codespaces. A long time ago, I was trying to build development environments where, essentially, if you were an engineer that joined a team recently, you could basically get an environment quickly started with everything configured, source code checked out, environment properly set up. And that was a very hard problem. 
This was like before container days and so on, and to see something like Codespaces, where you go to a repo or project, open it up, and behind the scenes they have a container that is set up for the environment that you need to build, and you just have a VS Code integrated IDE experience, to me is completely magical. It hits like developer dopamine immediately for me, 'cause a lot of the problem when you're going to contribute to a project is that whole initial bootstrap of, "Oh, you need to make sure you have this library, this install," which is so incredibly painful on top of just setting up your developer environment. So as we continue to move up the stack, I think you're going to see an incredible amount of improvements around the developer tooling and developer experience that people have, powered by a lot of this cloud native technology behind the scenes that people may not know about. >> Yeah, 'cause I've been talking with the team over at Docker about the work they're doing with Docker Desktop to enable the local environment, to make sure it matches as closely as possible the deployed environments that you might be targeting. These are some of the pains that I see. It's hard for developers to get bootstrapped up; it might take them a day or two to actually just set up their local laptop and development environment, particularly if they change teams. So really corralling that complexity down and not necessarily being overly prescriptive as to what tool you use: if you're in Visual Studio Code, great, it should feel integrated into that environment; if you use a different environment or feel more comfortable at the command line, you should be able to opt into that. That's some of the stuff I get excited to potentially see over 2020 as things progress up the stack, as you said. So, Michelle, just from an innovation train perspective, and we've covered a little bit, what's the best way for people to get started? 
I think Kelsey covered a little bit of that, being very pragmatic, but all this innovation is pretty intimidating; you can get mowed over by the train, so to speak. So what's your advice for how people get started, how they get involved, et cetera? >> Yeah, it really depends on what you're looking for and what you want to learn. So if you're someone who's new to the space, honestly, check out the case studies on cncf.io; those are incredible. You might find environments that are similar to your organization's environments, and read about what worked for them, how they set things up, any hiccups they crossed. It'll give you a broad overview of the challenges that people are trying to solve with the technology in this space. And you can use that to drill into the areas that you want to learn more about, just depending on where you're coming from. I find myself watching old KubeCon talks on the Cloud Native Computing Foundation's YouTube channel; they have playlists for all of the conferences and the special interest groups in CNCF. And I really enjoy watching, excuse me, older talks, just because they explain why things were done the way they were done, and that helps me build the tools I build. And if you're looking to get involved, if you're building projects or tools or specs and want to contribute, we have special interest groups in the CNCF. So you can find those in the CNCF Technical Oversight Committee, the TOC GitHub repo. And so for that, if you want to get involved there, choose a vertical. Do you want to learn about observability? Do you want to drill into networking? Do you care about how to deliver your app? So we have a SIG called App Delivery; there's a SIG for each major vertical, and you can go there to see what is happening on the edge. Really, these are conversations about, okay, what's working, what's not working, and what are the changes we want to see in the next months. 
So if you want that kind of granularity and discussion on what's happening, then definitely join those meetings. Check out those meeting notes and recordings. >> Gotcha. So Kelsey, as you look at 2020 and beyond, I know you've been really involved in some of the earlier emerging tech spaces, what gets you excited when you look forward? What gets your own level of dopamine up versus the broader community? What do you see coming that we should start thinking about now? >> I don't think any of the raw technology pieces get me super excited anymore. I've seen this circle go around three or four times: in five years, there's going to be a new thing, there might be a new foundation, there'll be a new set of conferences, and we'll all rally up and probably do this again. So what's interesting now is what people are actually using the technology for. Some people are launching new things that maybe weren't possible because infrastructure costs were too high. People are able to jump into new business segments. You start to see these channels on YouTube where everyone can buy a mic and a webcam and have their own podcast and be broadcast to the globe, just for a few bucks, if not for free. Those revolutionary things are the big deal, and they're hard to come by. So I think we've done a good job democratizing these ideas: distributed systems, one company got really good at packaging applications to share with each other, I think that's great, and that's never going to reset again. And now what's going to be interesting is, what will people build with this stuff? If we end up building the same things we were building before, then we'll be talking about another digital transformation 10 years from now, and it's going to be funny, but Kubernetes will be the new legacy. 
It's going to be the thing where, "Oh, man, I got stuck in this Kubernetes thing," and there'll be some governor on TV looking for old-school Kubernetes engineers to migrate them to some new thing; that's going to happen. You've got to know that. So at some point the merry-go-round will stop, and we're going to be focused on what you do with this. So the internet is there; most people have no idea of the complexities of underwater sea cables. It's beyond one or two people, or even one or two companies, to comprehend. You're at the point now where most people that jump on the internet are talking about what you do with the internet. You can have Netflix, you can do meetings like this one; it's about what you do with it. So that's going to be interesting. And we're just not there yet with tech; tech is still so much infrastructure stuff. We're so in the weeds that most people almost burn out just getting to the point where you can start to look at what you do with this stuff. So that's what I keep my eye on: when do we get to the point when people just ship things and build things? And I think the closest I've seen so far is in the mobile space. If you're an iOS developer or Android developer, you use the SDK that they gave you; every year there's some new device that enables some new thing, speech to text, VR, AR, and you import an SDK, and it just works. And you can put it in one place and 100 million people can download it at the same time with no DevOps team; that's amazing. When can we do that for server-side applications? That's going to be something I'm going to find really innovative. >> Excellent. Yeah, I mean, I can definitely relate. I was at Hortonworks in 2011, so Hadoop, in many respects, was sort of the precursor to the Kubernetes era, in that it was, as I like to refer to it, a bunch of animals in the zoo; it wasn't just the yellow elephant. 
And when things mature beyond that, it's basically talking about what kind of analytics they're driving, what type of machine learning algorithms and applications they're delivering. That's when things tip over into a real solution space. So I definitely see that. I think the other cool thing, even just outside of the container space, is there's just such a wealth of data-related services. And I think about how those two worlds come together. You brought up the fact that, in many respects, serverless is great, it's stateless, but there's just a ton of stateful patterns out there that I think also need to be addressed as these become richer applications, from a data processing and actionable insights perspective. >> I also want to be clear on one thing, because some people confuse two things here. What Michelle said earlier: for the first time, a whole group of people get to learn about distributed systems and things that were reserved for white papers and PhDs; this stuff is now super accessible. You go to the CNCF site, and all the things that we used to read about, you can actually download, see how they're implemented and actually change how they work. That is something we should never say is a waste of time. Learning is always good, because someone has to build these types of systems, and whether they sell it under the guise of serverless or not, this will always be important. Now the other side of this is that there are people who are not looking to learn that stuff; the majority of the world isn't looking. And in parallel, we should also make this accessible, which would enable people that don't need to learn all of that before they can be productive. So those are two sides of the argument that can be true at the same time; a lot of people get caught up thinking either that everything should just be serverless, or that everyone learning about distributed systems, and contributing and collaborating, is wasting time. 
We can't have a world where there's only one or two companies providing all infrastructure for everyone else, and then it's a black box. We don't need that. So we need to do both of these things in parallel, so I just want to make sure I'm clear that it's not one of these or the other. >> Yeah, makes sense, makes sense. So we'll just hit the final topic. Chris, I think I'll ask you to help close this out. COVID-19 clearly has changed how people work and collaborate. I figured we'd end on how you see this playing out: DockerCon has gone virtual, and inherently the Open Source community is distributed and is used to collaborating without being face to face. But there's a lot of value that comes from assembling a tent where people can meet. How do you see things playing out? What's the best way for this to evolve in the face of the new normal? >> I think in the short term, you're definitely going to see a lot of virtual events cropping up all over the place, different themes, verticals. I've already attended a handful of virtual events the last few weeks, from Red Hat Summit to Open Compute Summit to Cloud Native Summit, and you'll see more and more of these. I think, in the long term, once the world either gets past COVID or there's a vaccine or something, the innate desire for people to get together, meet face to face, and enjoy all the serendipitous activity you see at a conference will come back, but I think virtual events will augment these things in the short term. One benefit we've seen, like you mentioned before: DockerCon can have 50,000 people at it. I don't remember what the last physical DockerCon had, but that's definitely an order of magnitude more.
So being able to do these virtual events to augment the physical events of the future, so you can build a more inclusive community, so people who cannot travel to your event or weren't lucky enough to win a scholarship could still somehow interact during the course of the event, to me is awesome. And I hope that's something we take away from all these virtual events: when we get back to physical events, we find a way to ensure that these things are inclusive for everyone, and not just folks who can physically make it there. So those are my thoughts on the topic. And I wish you the best of luck planning DockerCon and so on. So I'm excited to see how it turns out. 50,000 is a lot of people, and that just terrifies me from a KubeCon CloudNativeCon point of view, because we'll probably be somewhere. >> Yeah, get ready. Excellent, all right. So that is a wrap on the DockerCon 2020 Open Source Power Panel. I think we covered a ton of ground. I'd like to thank Chris, Kelsey, and Michelle for sharing their perspectives on this continuing wave of Docker and cloud native innovation. I'd like to thank the DockerCon attendees for tuning in. And I hope everybody enjoys the rest of the conference. (upbeat music)
Sumit Puri, Liqid | CUBEConversation, March 2019
(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're at our Palo Alto studios having a CUBE Conversation. We're just about ready for the madness of the conference season to start in a few months, so it's nice to have some time when things are a little calmer in the studio, and we're excited to have a new company, I guess they're not that new, but they're relatively new, they've been working on a really interesting technology around infrastructure, and we welcome to the studio, first time, I think, Sumit Puri, CEO and co-founder of Liqid, welcome. >> Thank you guys, very, very happy to be here. >> And joined by our big brain, David Floyer, of course the CTO and co-founder of Wikibon, who knows all things infrastructure. Dave, always good to see you. >> It's so good to see you. >> All right, so let's jump into this. Sumit, give us the basic overview of Liqid, what are you guys all about, a little bit of the company background, how long you've been around. >> No, absolutely, absolutely. Liqid is a software-defined infrastructure company. The technology that we've developed is referred to as composable infrastructure, think dynamic infrastructure, and what we do is we go and we turn data center resources from statically-configured boxes to dynamic, agile infrastructure. Our core technology is two-part. Number 1, we have a fabric layer that allows you to interconnect off-the-shelf hardware, but more importantly, we have a software layer that allows you to orchestrate, or dynamically configure, servers at the bare metal.
Rather than building servers by plugging devices into the sockets of the motherboard, with composability it's all about pools, or trays, of resources. A tray of CPUs, a tray of SSDs, a tray of GPUs, a tray of networking devices; instead of plugging those into a motherboard, we connect those into a fabric switch, and then we come in with our software and we orchestrate, or recompose, at the bare metal. Grab this CPU, grab those four SSDs, these eight GPUs, and build me a server, just like you were plugging devices into the motherboard, except you're defining it in software. On the other side, you're getting delivered infrastructure of any size, shape, or ratio that you want, except that infrastructure is dynamic. When we need another GPU in our server, we don't send a guy with a cart to plug the device in; we reprogram the fabric and add or remove devices as required by the application. We give you all the flexibility that you would get from public cloud, on the infrastructure that you are forced to own. And now, to answer your question of where we find a natural fit for our solution, one primary area is obviously cloud. If you're building a cloud environment, whether you're providing cloud as a service or whether you're providing cloud to your internal customers, building a more dynamic, agile cloud is what we enable. >> So, is the use case more just to use your available resources and reconfigure them to set up something that basically runs that way for a while, or are customers more using it to dynamically reconfigure those resources based on, say, a temporary workload, the kind of classic cloud example where you need a bunch of something now, but not necessarily forever? >> Sure. The way we look at the world is very much around resource utilization. I'm buying this very expensive hardware, I'm deploying it into my data center, and typical resource utilization is very low, below 20%, right?
So what we enable is the ability to get better resource utilization out of the hardware that you're deploying inside your data center. If we can take a resource that's utilized 20% of the time because it's deployed as a static element inside of a box and we can raise the utilization to 40%, does that mean we are buying less hardware inside of our data center? Our argument is yes, if we can take rack scale efficiency from 20% to 40%, our belief is we can do the same amount of work with less hardware. >> So it's a fairly simple business case, then. To do that. So who are your competition in this area? Is it people like HP or Intel, or, >> That's a great question, I think both of those are interesting companies, I think HPE is the 800-pound gorilla in this term called composability and we find ourselves a slightly different approach than the way that those guys take it, I think first and foremost, the way that we're different is because we're disaggregated, right? When we sell you trays of resources, we'll sell you a tray of SSD or a tray of GPUs, where HP takes a converged solution, right? Every time I'm buying resources for my composable rack, I'm paying for CPUs, SSDs, GPUs, all of those devices as a converged resource, so they are converged, we are disaggregated. We are bare metal, we have a PCIe-based fabric up and down the rack, they are an ethernet-based fabric, there are no ethernet SSDs, there are no ethernet GPUs, at least today, so by using ethernet as your fabric, they're forced to do virtualization protocol translation, so they are not truly bare metal. We are bare metal, we view of them more as a virtualized solution. We're an open ecosystem, we're hardware-agnostic, right? We allow our customers to use whatever hardware that they're using in their environment today. Once you've kind of gone down that HP route, it's very much a closed environment. >> So what about some of the customers that you've got? 
Which sort of industries, which sort of customers? I presume this is for the larger types of customers, in general, but say a little bit about where you're making a difference. >> No, absolutely, right? So, obviously at scale, composability has even more benefit than in smaller deployments. I'll give you just a couple of use case examples. Number one, we're working with a transportation company, and what happens with them at 5 p.m. is actually very different than what happens at 2 a.m., and the model that they have today is a bunch of static boxes, and they're playing a game of workload matching. If the workload that comes in fits the appropriate box, then the world is good. If the workload that comes in ends up on a machine that's oversized, then resources are being wasted, and what they said was, "We want to take a new approach. We want to study the workload as it comes in, dynamically spin up small, medium, large, depending on what that workload requires, and as soon as that workload is done, free the resources back into the general pool." Right, so that's one customer; by taking a dynamic approach, they're changing the TCO argument inside of their environment. And for them, it's not a matter of am I going dynamic or am I going static; everyone knows dynamic infrastructure is better, no one says, "Give me the static stuff." For them, it's am I going public cloud, or am I going on prem. That's really the question. Public cloud is very easy, but when you start thinking about next-generation workloads, things that leverage GPUs and FPGAs, those instantiations on public cloud are just not very cheap. So we give you all of that flexibility that you're getting on public cloud, but we save you money by giving you that capability on prem. So that's use case number one. Another use case is very exciting for us: we're working with a studio down in southern California, and they leverage these NVIDIA V100 GPUs.
During the daytime, they give those GPUs to their AI engineers; when the AI engineers go home at night, they reprogram the fabric and use those same GPUs for rendering workloads. They've taken $50,000 worth of hardware and they've doubled the utilization of that hardware. >> The other use case we talked about before we turned the cameras on there was pretty interesting: kind of multiple workloads against the same data set over a period of time, where you want to apply different resources. I wonder if you can unpack that a little bit, because I think that's a really interesting one that we don't hear a lot about. >> So, we would say about 60-plus to 70% of our deployments in one way or another touch the realm of AI. AI is actually not an event, AI is a workflow. What do we do? First we ingest data, that's very networking-centric. Then we scrub and we clean the data, that's actually CPU-centric. Then we're running inference, and then we're running training, that's GPU-centric. Data has gravity, right? It's very difficult to move petabytes of data around, so what we enable is the composable AI platform: leave data at the center of the universe, and reorchestrate your compute, networking, and GPU resources around the data. That's the way that we believe AI should be approached.
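The compose-and-release pattern behind these use cases can be sketched in a few lines of Python. This is an illustrative model of the concept only, not Liqid's actual software or API; every class, method, and device name here is hypothetical.

```python
# Illustrative sketch of composable infrastructure as described above:
# trays of devices on a fabric, servers composed and released in software.
# Hypothetical API, not Liqid's actual product.

class Fabric:
    def __init__(self, **trays):
        # e.g. trays = {"cpu": [...], "ssd": [...], "gpu": [...]}
        self.free = {kind: list(devices) for kind, devices in trays.items()}

    def compose(self, **request):
        """Allocate devices from the trays into a bare-metal server."""
        server = {}
        for kind, count in request.items():
            if len(self.free[kind]) < count:
                raise RuntimeError(f"not enough free {kind} devices")
            server[kind] = [self.free[kind].pop() for _ in range(count)]
        return server

    def release(self, server):
        """Return a server's devices to the free pool for reuse."""
        for kind, devices in server.items():
            self.free[kind].extend(devices)

fabric = Fabric(cpu=["cpu0", "cpu1"],
                ssd=[f"ssd{i}" for i in range(4)],
                gpu=[f"gpu{i}" for i in range(8)])

# Daytime: compose a GPU box for the AI engineers.
ai_box = fabric.compose(cpu=1, gpu=8)
# Nighttime: free the GPUs and recompose the same devices for rendering.
fabric.release(ai_box)
render_box = fabric.compose(cpu=1, gpu=8, ssd=4)
```

The same eight GPUs serve both workloads, which is the utilization argument in miniature: a device that is busy 40% of the time instead of 20% means roughly half as much hardware for the same amount of work.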
So that's why we've chosen to build our fabric on this common interconnect, because that's how we enable bare metal orchestration without translation and virtualization, right? Today, it's PCIe Gen 3; as the industry moves forward, Gen 4 is coming. Gen 4 is here. We've actually announced our first PCIe Gen 4 products already, and by the end of this year, Gen 4 will become extremely relevant in the market. Our software has been architected from the beginning to be physical-layer-agnostic, so whether we're talking PCIe Gen 3, PCIe Gen 4, or in the future something referred to as Gen-Z, (laughing) it doesn't matter for us, we will support all of those physical layers. For us it's about the software orchestration. >> I would imagine, too, like TPUs and other physical units that are going to be introduced in the system, too, you're architected to be able to take those, new-- >> Today, today we're doing CPUs, GPUs, NVMe devices and we're doing NICs. We just made an announcement that we're now orchestrating Optane memory with Intel. We've made an announcement with Xilinx where we're orchestrating FPGAs with Xilinx. So this will continue; we'll continue to find more and more resources that we'll be able to orchestrate, for a very simple reason: everything has a common interconnect, and that common interconnect is PCIe. >> So this is an exciting time in your existence. Where are you? I mean, how far along are you to becoming the standard in this industry? >> Yeah, no, that's a great question, and I think we get asked a lot what company we're most similar to, or most like, at the early stage. And what we say is, a lot of the time, we compare ourselves to VMware, right? VMware is the hypervisor for the virtualization layer. We view ourselves as that physical hypervisor, right? We do for physical infrastructure what VMware is doing for virtualized environments.
And just like VMware has enabled many of the market players to get virtualized, our hope is we're going to enable many of the market players to become composable. We're very excited about our partnership with Inspur; they're the number three server vendor in the world, and just recently we've announced an AI-centric rack, which leverages the servers and the storage solutions from Inspur tied to our fabric to deliver a composable AI platform. >> That's great. >> Yeah, and it seems like the market for cloud service providers, 'cause we always talk about the big ones, but there's a lot of them, all over the world, is a perfect use case for you, because now they can actually offer the benefits of cloud flexibility by leveraging your infrastructure to get more miles out of the investments in their backend. >> Absolutely, cloud, cloud service providers, and private cloud, that's a big market and opportunity for us, and we're not necessarily chasing the big seven hyperscalers, right? We'd love to partner with them, but for us, there's 300 other companies out there that can benefit from our technology. They don't necessarily have the R&D dollars available that some of the big guys have, so we come in with our technology and we enable those cloud service providers to be more agile, to be more competitive. >> All right, Sumit, before we let you go, conference season's coming up, we were just at RSA yesterday, big shows comin' up in May. Where are you guys going to be? Are we going to cross paths over the next several weeks or months?
>> No, absolutely, we've got a handful of shows coming up, very exciting season for us. We're going to be at OCP, the Open Compute Project conference, actually next week, and then right after that, we're going to be at the NVIDIA GPU Technology Conference; we're going to have a booth at both of those shows, and we're going to be doing live demos of our composable platform. And then at the end of April, we're going to be at the Dell Technologies World conference in Las Vegas, where we're going to have a large booth and we're going to be doing some very exciting demos with the Dell team. >> Sumit, thanks for taking a few minutes out of your day to tell us a story, it's pretty exciting stuff, 'cause this whole flexibility is such an important piece of the whole cloud value proposition, and you guys are delivering it all over the place. >> Well, thank you guys for making the time today, I was excited to be here, thank you. >> All right, David, always good to see you. >> Good to see you. >> Smart man, alright. I'm Jeff Frick, you're watching theCUBE from theCUBE studios in Palo Alto, thanks for watching, we'll see you next time. (upbeat music)
Clement Pang, Wavefront by VMware | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Welcome back everyone to theCUBE's live coverage of AWS re:Invent, here at the Venetian in Las Vegas. I'm your host, Rebecca Knight, along with my co-host John Furrier. We're joined by Clement Pang. He is the co-founder of Wavefront by VMware. Welcome. >> Thank you. Thank you so much. >> It's great to have you on the show. So, I want you to tell our viewers a little bit about Wavefront. You were just purchased by VMware in May. >> Right. >> What do you do, what is Wavefront all about? >> Sure, we were actually purchased last year in May by VMware, yeah. We are an operational analytics company, so monitoring, I think, is what you could say we do. And the way that I always introduce Wavefront is as kind of an untold secret of Silicon Valley. The reason I say that is because, well, just look at the floor. You know, there's so many monitoring companies doing logs, APM, metrics monitoring. And if you really want to look at what the companies in the Valley really use, right? I'm talking about companies such as Workday, Watts, Groupon, Intuit, DoorDash, Lyft; they're all companies that are customers of Wavefront today. So they've obviously looked at all the tools that are available on the market, on the show floor, and they've decided to be with Wavefront, and they were with us before the acquisition, and they're still with us today, so. >> And they're the scale-up guys, they have large scale. >> That's right, yeah: containers, infrastructure, running clouds, hybrid clouds. Some of them are still on-prem data centers, and so we just gobble up all that data. We are a platform; we're not really opinionated about how you get the data. >> You call them hardcore devops. >> Yes, hardcore devops is the right word, yeah. >> Pushing the envelope, lot of new stuff. >> That's right.
>> Doing their own innovation. >> So even serverless and all the ML stuff that's been talked about. They're very pioneering. >> Alright, so VMware, they're very acquisitive when it comes to technology, very technical buyers. Take a minute to explain the tech under the covers. What's going on? >> Sure, so Wavefront is an at-scale time series database with an analytics engine on top of it. We have actually since expanded beyond just time series data. It could be distributed histograms, it could be tracing, it includes things like events. So anything that you could gather up from your operations stack, application metrics, business metrics, we'll take that data. Again, I just said that we are unopinionated, so any data that you have. Like sometimes it could be from a script, it could be from your serverless functions. We'll take that data, we'll store it, we'll render it and visualize it, and of course we don't have people looking at charts all day long; we'll alert you if something bad is going on. So we just really give teams the ability to explore the data, to figure out trends and correlations, and to have a platform that scales and runs reliably. >> So you're Switzerland. >> Yeah, basically. I think that's the reason why VMware is very interested, is 'cause we work with AWS, work with Azure, work with GCP, and soon to be AliCloud and IBM, right. >> Talk about why time series data is now more on board. We've had this conversation with Smug, we saw the new announcement by Amazon. 'Cause if you're doing real-time, time matters and is super important. Why is it important now, why are people coming to the realization, as the early adopters, the pioneers?
That's right. I used to work at Google, and I think Google, very early on, realized that time series is a way to understand complex systems, especially if you have ephemeral workloads, and so I think what companies have realized is that logs are just very voluminous, very difficult to wield, and then traditional APM products tend to just show you what they want to show you, like what are the important pain points that you should be monitoring. With Wavefront, it's just a tool that understands time series data, and if you think about it, most of the data that you gather out of your operational environment is time series data. CPU, memory, network, how many people are logging in, how many errors, how many people are signing up. We certainly have customers like Lyft. You know, how many users are getting rides, how many credit cards are authorized. All of that information drives decisions like, should we page someone because in a certain city nobody is getting picked up, and that's kind of the dimension that you want to be monitoring on, not on the individual box's network, even though we monitor those too, of course. >> You know, Clement, I've got to talk to you about the supporting point, because we've been covering real time, we've been covering IoT, we've been doing a ton of stuff around looking at the importance of data and having data be addressable in real time. And the database is part of the problem, and also the overall architecture of the holistic operating environment. So to have an actual understanding of time series is one thing. Then you actually have to operationalize it. Talk about how customers are implementing and getting value out of time series data, and how they differentiate that with data lakes that they might spin up, as well as the Hadoop data in it. Some might not be valuable. All this is like all now coming together. How do people do that? >> So I think there are a couple of dimensions to that. Scalability is a big piece.
So you have to be able to take in an enormous amount of data; (mumbles) data lakes can do that. It has to be real-time, so our latency from ingestion to materialization on a chart is under a second. So if you're a devops team, you're spinning up containers, you can't go blind for even 10 seconds, or else you don't know what's going on with your new service that you just launched. So real-time is super important, and then there's analytics. You can see all the data in real-time, but if it's like millions of time series coming in, it's like the Matrix; you need to have some way to actually gather some insights out of that data. So I think that's what we are good at. >> You know, a couple of years ago we were doing Open Compute, a summit that Facebook puts on, and you obviously worked at Google, so, you know, we're talking about the cutting-edge tech companies. There's so much data going on at that scale, you need AI, you've got to have machines do some of the processing; you can't have this manual process or even scripts, you've got to have machines that take care of it. Talk about the at-scale component, because as the tsunami of data continues to grow, I mean, Amazon's got a satellite deal with Lockheed Martin, that's going to light up edge computing, autonomous vehicles, petabytes moving to the cloud, time series matters. How do people start thinking about machine learning and AI, what do you guys do? >> So I think post-acquisition, I would say we really doubled down on looking at AI and machine learning in our system. Because we don't downsample any of the data that we collect, we actually have the raw data coming in from weather sensors, from machines, from infrastructure, from cloud, and we're just able to learn on that, because we understand incidents, we understand anomalies.
So we can take all of that data and punch it through different kinds of algorithms and figure out, maybe we could just have the computer look at the incoming time series data and tell you if it's anomalous, right? The holy grail for VMware, I think, is to have a self-driving data center, and what that means is you have systems that understand; well, yesterday there was a reinforcement learning announcement by Amazon. How do we actually apply those techniques so that we have the observability piece, and then we have some way to effect change against the environment, and then, you know, we just let the computer do it.
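The idea of having the computer flag anomalous points in an incoming series can be illustrated with a toy example. Real systems, Wavefront included, use far more sophisticated methods than this rolling z-score, so treat it purely as a sketch of the concept:

```python
# Toy illustration of machine-flagged anomalies on a time series:
# a point is flagged when it deviates too far from the trailing window.
# Not Wavefront's actual algorithm; illustrative only.
from statistics import mean, stdev

def anomalies(series, window=5, threshold=3.0):
    """Return indices of points > threshold sigmas from the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

cpu = [50, 51, 49, 50, 52, 51, 50, 95, 50, 51]   # a spike at index 7
print(anomalies(cpu))  # [7]
```

The appeal of this kind of automation is exactly the point made above: a machine can run a check like this over every incoming series, at full resolution, without a human ever opening a chart.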
So I think with machine learning, it actually helps with just the voluminous nature of the data that we're gathering, and figuring out what is the signal from the noise. >> It's a hard problem. So talk about the two functionalities you guys just launched. What's the news? What are you doing here at AWS? >> So the most exciting thing that we launched is our distributed tracing offering. We call it three-dimensional microservice observability. We're the only platform that marries metrics, histograms and distributed tracing in a single platform offering. So it's certainly at scale. As I said, it's reliable, it has all the analytical capabilities on top of it, but we basically give you a way to quickly dive down into a problem, realize what the root cause is, and actually see the actual request in its context. Whether it's troubleshooting, root cause analysis, or performance optimization, it's a single-shop kind of experience. You put in our SDK, it goes ahead and figures out, okay, you're running Java, you're running Jersey or Dropwizard or Spring Boot, and then it figures out, okay, these are the key metrics you should be looking at. If there are any violations, we show you the actual request, including the multiple services that are involved in that request, and just give you an out-of-the-box, turnkey way to understand, in at-scale microservice deployments, where the pain points are, where latency is coming from, where the errors are coming from. So that's kind of our first offering that we're launching. Same pricing model, all that. >> So how are companies going to use this? What kind of business problem is this solving? >> So as the world transitions to a deployment architecture that mostly consists of microservices, it's no longer a monolithic app, it's no longer an n-tier application. There are a lot of different heterogeneous languages and frameworks involved, or even AWS.
Cloud services, SaaS services are involved, and you just have to have some way to understand what is going on. The classic example I have is you could even trace things like an actual order and how it goes through the entire pipeline. Someone places the order, and a couple days later the order actually gets shipped, and then it gets delivered. You know, that's technically a trace. It could be that too. You could send that trace to us, but you want to understand what the different pieces were that were involved. It could be code, or it could be a vendor, or it could even be a human process. All of that is a distributed tracing atom, and you could actually send it to Wavefront and we just help you stitch that picture together so you could understand what's really going on. >> What's next for you guys? Now you're part of VMware. What's the investment area? What are you guys looking at building? What's the next horizon? >> So I think, obviously with the (mumbles) tracing, we still have a lot to work on, just to help teams figure out what they want to see, kind of instantly, from the data that we've gathered. Again, we've gathered data for so long, for so many years, and at full resolution, so what insights can we develop out of it? And then, as I said, we're working on AI and ML, so that's kind of the second launch offering that we have here, where, you know, people have been telling us, it's great to have all the analytics, but if I don't have any statistical background or anything like that, can you just tell me? Like, I have a chart, a whole bunch of lines, tell me just what I should be focusing on.
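The order example above is a good mental model for what a trace actually is: a set of timed spans (service calls, a shipment, even a human step) sharing a trace id, which a backend stitches into one picture. A minimal sketch, with field names invented for illustration rather than taken from any real tracing SDK:

```python
class Span:
    """One timed step in a distributed trace."""

    def __init__(self, trace_id, name, start, end, parent=None):
        self.trace_id = trace_id
        self.name = name
        self.start = start   # e.g. seconds since the trace began
        self.end = end
        self.parent = parent  # name of the span that caused this one

    def duration(self):
        return self.end - self.start

def total_duration(spans):
    """Wall-clock time covered by the whole trace."""
    return max(s.end for s in spans) - min(s.start for s in spans)

def slowest_span(spans):
    """The single step contributing the most latency."""
    return max(spans, key=lambda s: s.duration())
```

With the order pipeline as spans (place at 0-1, ship at 10-40, deliver at 40-55), the trace covers 55 units and shipping is the slow step, exactly the "where is latency coming from" question a tracing product answers.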
So that's what we call the AI Genie, and so you just apply it, call it a genie I guess, and then you would basically just have the chart show you what is going wrong, the machines that are going wrong, or maybe a particular service that's going wrong, a particular KPI that's in violation, and you could just go there and figure out what's-- >> Yeah, the genie in the bottle. >> That's right. (crosstalk) >> So final question before we go. What's it like working for VMware? Start-up culture, you raised a lot of money according to the Crunchbase reports, VMware's cutting edge, they're partnered with Amazon, a big turnaround there. What's it like there? >> It's a very large company obviously, and, as with everything, there's always some good points and bad points. I'll focus on the good. So the good things are there are just a lot of very smart people at VMware. They've worked on the problem of virtualization, which, as a computer scientist, I just thought, that's just so hard. How do you run it like the matrix, right? And a lot of very smart people there. A lot of the stuff that we're actually launching includes components that were built inside VMware based on their expertise over the years, and we're just able to pull it in. As I said, a lot of fun toys, and how do we connect all of that together and just do an even better job than what we could have done when we were independent. >> Well congratulations on the acquisition. VMware's got the Radio event we've covered. We were there; you've got a lot of engineers, a lot of great scientists, so congratulations. >> Thank you so much. >> Great, Clement, thanks so much for coming on theCUBE. >> Thank you so much, Rebecca. >> I'm Rebecca Knight for John Furrier. We will have more from AWS re:Invent coming up in just a little bit. (light electronic music)
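One more sketch to close the loop on the "metrics, histograms and distributed tracing" pairing from earlier in the segment: a histogram is what lets a platform answer percentile questions (p50, p99) without storing every latency sample. The bucket format below is invented for illustration, not any particular product's wire format:

```python
def percentile_from_histogram(buckets, q):
    """Estimate the q-th quantile (0 < q <= 1) from a latency
    histogram given as (upper_bound_ms, count) pairs in ascending
    order of upper bound.  Returns the bucket upper bound, a
    standard coarse estimate."""
    total = sum(count for _, count in buckets)
    if total == 0:
        return None
    target = q * total
    seen = 0
    for upper, count in buckets:
        seen += count
        if seen >= target:
            return upper
    return buckets[-1][0]
```

With 90 requests under 10 ms, 9 under 100 ms and 1 under 1000 ms, the p50 reads as 10 ms while the p99 reads as 100 ms; the long tail stays visible without keeping any raw samples.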
Wikibon Action Item Quick Take | David Floyer | OCP Summit, March 2018
>> Hi, I'm Peter Burris, and welcome once again to another Wikibon Action Item Quick Take. David Floyer, you were at OCP, the Open Compute Project summit, this week, wandered the floor, talked to a lot of people, and one company in particular stood out, Nimbus Data. What'd you hear? >> Well, they had a very interesting announcement of their 100 terabyte three-and-a-half-inch SSD, called the ExaDrive. That's a lot of storage in a very small space. High capacity SSDs, in my opinion, are going to be very important. They are denser, much less power, much less space, not as much performance, but they fit in very nicely between the lowest level of storage, hard disk storage, and the upper level. So they are going to be very useful in lower tier-two applications. Very low friction for adoption there. They're going to be useful in tier three, but they're not a direct replacement for disk; they work in a slightly different way, so the friction is going to be a little bit higher there. And then in tier four, it's again very interesting to take all of the metadata about large amounts of data and put that metadata on high capacity SSD, to enable much faster access at a tier-four level. So the action item for me is: have a look at my research, and have a look at the general pricing; it's about half of what a standard SSD is. >> Excellent. So this is once again a Wikibon Action Item Quick Take, David Floyer talking about Nimbus Data and their new high-capacity, slightly lower performance, cost-effective SSD. (upbeat music)
Day 3 Open | Red Hat Summit 2017
>> (upbeat music) Live from Boston Massachusetts. It's theCUBE! Covering Red Hat Summit 2017. Brought to you by Red Hat. >> It is day three of the Red Hat Summit, here in Boston Massachusetts. I'm Rebecca Knight, along with Stu Miniman. We are wrapping up this conference, Stu. We just had the final keynote of the morning. Before the cameras were rolling, you were teasing me a little bit that you have more scoop on the AWS deal. I'm interested to hear what you learned. >> (Stu) Yeah, Rebecca. First of all, may the fourth be with you. >> (Rebecca) Well, thank you. Of course, yes. And also with you. >> (Stu) Always. >> Yeah. (giggles) >> (Stu) So, day three of the keynote. They started out with a little bit of fun. They gave out some "May The Fourth Be With You" t-shirts. They had a little Star Wars duel that I was Periscoping this morning. So, love their geeking out. I've got my Millennium Falcon cuff links on. >> (Rebecca) You're into it. >> I saw a bunch of guys wearing t-shirts >> (Rebecca) Princess Leia was walking around! >> Princess Leia was walking around. There were storm troopers there. >> (Rebecca) Which is a little sad to see, but yes. >> (Stu) Uh, yeah. Carrie Fisher. >> Yes. >> Absolutely, but the Amazon stuff. Sure, I think this is the biggest news coming out of the show. I've said this a number of times. And we're still kind of teasing out exactly what it is. Cause, partially, this is still being built out. It's not going to be shipping until later this year. So things like how pricing works, we're still going to get there. But there's some people that were like, "Oh wait!" "OpenShift can be in AWS, that's great!" "But then I can do AWS services on premises." Well, what that doesn't mean, of course, is that I get everything that Amazon does packaged up into a nice little container. We understand how cloud computing works. And even with open-source, and how we can make things serverless.
And it's not like I can take everything that everybody says and shove it in my data center. It's just not feasible. What that means, though, is it is the same applications that I can run. It's running in OpenShift. And really, there's the hooks and the APIs to make sure that I can leverage services that are used in AWS. Of course, from my standpoint I'm like, "OK!" So, tell me a little bit about how much latency there's going to be between those services. But it will be well understood, as we build these, what it's going to be used for. Certain use cases. We already talked to Optum. I was really excited about how they could do this for their environment. So, it's something we expect to be talking about throughout the rest of the year. And by the time we get to AWS re:Invent the week after Thanksgiving, I expect we'll have a lot more detail. So, looking forward to that. >> (Rebecca) And it will be rolled out too. So we'll have a really good sense of how it's working in the marketplace. >> (Stu) Absolutely. >> So other thoughts on the keynote. I mean, one of the things that really struck me was talking about open-source. The history of open-source. It started because of a need to license existing technologies in a cheaper way. But then, really, the point that was made is that open-source taught tech how to collaborate. And then tech taught the world how to collaborate. Because it really was the model for what we're seeing with crowdsourcing solutions to problems facing education, climate change, the developing world. So I think that that is really something that Red Hat has done really well. In terms of highlighting how open-source is attacking many of the world's most pressing problems. >> (Stu) Yeah, Rebecca, I agree. We talked with Jim Whitehurst and watched him in the keynotes in previous days. And talked about communities and innovation and how that works. And in a lot of tech conferences it's like, "Okay, what are the business outcomes?"
And here it's, "Well, how are we helping the greater good?" "How are we helping education?" It was great to see kids that are coding and doing some cool things. And they're like, "Oh yeah, I've done Java and all these other things." And the Red Hat guys were like, "Hey >> (Rebecca) We're hiring. Yeah. (giggles) >> can we go hire this seventh grader?" Had the open-source hardware initiative that they were talking about. And how they can do that. Everything from healthcare to get a device that used to be $10,000 to be able to put together the genome. Is I can buy it on Amazon for What was it? Like six seven hundred dollars and put it together myself. So, open-source and hardware are something we've been keeping an eye on. We've been at the Open Compute Project event. Which Facebook launched. But, these other initiatives. They had.... It was funny, she said like, "There's the internet of things." And they have the thing called "The Thing" that you can tie into other pieces. There was another one that weaved this into fabric. And we can sensor and do that. We know healthcare, of course. Lot's of open-source initiatives. So, lots of places where open-source communities and projects are helping proliferate and make greater good and make the world a greater place. Flattening the world in many cases too. So, it was exciting to see. >> And the woman from the Open-Source Association. She made this great point. And she wasn't trying to be flip. But she said one of our questions is: Are you emotionally ready to be part of this community? And I thought that that was so interesting because it is such a different perspective. Particularly from the product side. Where, "This is my IP. This is our idea. This is our lifeblood. And this is how we're going to make money." But this idea of, No. You need to be willing to share. You need to be willing to be copied. And this is about how we build ideas and build the next great things. 
>> (Stu) Yeah, if you look at the history of the internet, there was always. Right, is this something I have to share information? Or do we build collaboration? You know, back to the old bulletin board days. Through the homebrew computing clubs. Some of the great progress that we've made in technology and then technology enabling beyond have been because we can work in a group. We can work... Build on what everyone else has done. And that's always how science is done. And open-source is just trying to take us to the next level. >> Right. Right. Right. And in terms of one of the last... One of the last things that they featured in the keynote was what's going on at the MIT media lab. Changing the face of agriculture. And how they are coding climate. And how they are coding plant nutrition. And really this is just going to have such a big change in how we consume food and where food is grown. The nutrients we derive from fruit. I was really blown away by the fact that the average apple we eat in the grocery store has been around for 14 months. Ew, ew! (laughs) So, I mean, I'm just exciting what they're doing. >> Yeah, absolutely right. If we can help make sure people get clean water. Make sure people have availability of food. Shorten those cycles. >> (Rebecca) Right, right. Exactly. >> The amount of information, data. The whole Farm to Table Initiative. A lot of times data is involved in that. >> (Rebecca) Yeah. It's not necessarily just the stuff that you know, grown on the roof next door. Or in the farm a block away. I looked at a local food chain that's everywhere is like Chipotle. You know? >> (Rebecca) Right. >> They use data to be able to work with local farmers. Get what they can. Try to help change some of the culture pieces to bring that in. And then they ended up the keynote talking more about innovation award winners. You and I have had the chance to interview a bunch of them. It's a program I really like. 
And talking to some of the Red Hatters there actually was some focus to work with... Talk to governments. Talk to a lot of internationals. Because when they started the program a few years ago. It started out very U.S.-centric. So, they said "Yeah." It was a little bit coincidence that this year it's all international. Except for RackSpace. But, we should be blind when we think about who has great ideas and good innovation. And at this conference, I bumped into a lot of people internationally. Talked to a few people coming back from the Red Sox game. And it was like, "How was it?" And they were like, "Well, I got a hotdog and I understood this. But that whole ball and thing flying around, I don't get it." And things like that. >> So, they're learning about code but also baseball. So this is >> (Stu) Yeah, what's your take on the global community that you've seen at the show this week? >> (Rebecca) Well, as you've said, there are representatives from 70 countries here. So this really does feel like the United Nations of open-source. I think what is fascinating is that we're here in the states. And so we think about these hotbeds of technological innovation. We're here in Boston. Of course there's Silicon Valley. Then there are North Carolina, where Red Hat's based. Atlanta, Austin, Seattle, of course. So all these places where we see so much innovation and technological progress taking place here in the states. And so, it can be easy to forget that there are also pockets all over Europe. All over South America. In Africa, doing cool things with technology. And I think that that is also ... When we get back to one of the sub themes of this conference... I mean, it's not a sub theme. It is the theme. About how we work today. How we share ideas. How we collaborate. And how we manage and inspire people to do their best work. I think that that is what I'd like to dig into a little today. If we can. And see how it is different in these various countries. 
>> Yeah, and this show, what I like is when its 13th year of the show, it started out going to a few locations. Now it's very stable. Next year, they'll be back in San Francisco. The year after, they'll be back here in Boston. They've go the new Boston office opening up within walking distance of where we are. Here GE is opening up their big building. I just heard there's lots of startups when I've been walking around the area. Every time I come down to the Sea Port District. It's like, "Wow, look at all the tech." It's like, Log Me In is right down the road. There's this hot little storage company called Wasabi. That's like two blocks away. Really excited but, one last thing back on the international piece. Next week's OpenStack Summit. I'll be here, doing theCube. And some of the feedback I've been getting this week It's like, "Look, the misperception on an OpenStack." One of the reasons why people are like, "Oh, the project's floundering. And it's not doing great, is because the two big use case. One, the telecommunication space. Which is a small segment of the global population. And two, it's gaining a lot of traction in Europe and in Asia. Whereas, in North America public cloud has kind of pushed it aside a little bit. So, unfortunately the global tech press tends to be very much, "Oh wait, if it's seventy-five percent adoption in North America, that's what we expect. If its seventy-five percent overseas, it's not happening. So (giggles) it's kind of interesting. >> (Rebecca) Right. And that myopia is really a problem because these are the trends that are shaping our future. >> (Stu) Yeah, yeah. >> So today, I'm also going to be talking to the Women In Tech winners. That very exciting. One of the women was talking about how she got her idea. Or really, her idea became more formulated, more crystallized, at the Grace Hopper Conference. We, of course, have a great partnership with the Grace Hopper Conference. 
So, I'm excited to talk to her more about that today too. >> (Stu) Yeah, good lineup. We have few more partners. Another customer EasiER AG who did the keynote yesterday. Looking forward to digging in. Kind of wrapping up all of this. And Rebecca it's been fun doing it with you this week. >> And I'm with you. And may the force... May the fourth be with you. >> And with you. >> (giggles) Thank you, we'll have more today later. From the Red Hat Summit. Here in Boston, I'm Rebecca Knight for Stu Miniman. (upbeat music)
Ihab Tarazi, Equinix - Open Networking Summit 2017 - #ONS2017 - #theCUBE
>> Narrator: Live from Santa Clara, California, it's theCUBE. Covering Open Networking Summit 2017. Brought to you by the Linux Foundation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're in Santa Clara at the Open Networking Summit 2017. We haven't been here for a couple years. Obviously Open is everywhere. It's in hardware, it's in compute, it's in storage, and it's certainly in networking as well. And we're excited to be joined first off by Scott Raynovich, who will be co-hosting for the next couple of days. Good to see you again, Scott. >> Good to see you. >> And our next guest is Ihab Tarazi. He's the EVP and CTO of Equinix. Last time we saw Ihab was at the Open Compute Project summit last year, so great to see you again. >> Yeah, thank you very much, good to be here. I really enjoyed the interview last year, so thanks for having me again. >> Now you've set a high bar, so hopefully we can pull it off again. >> We can do it. >> So first off, for folks that aren't familiar with Equinix, give them kind of an overview. Because you don't have quite the profile of Amazon and Google and the other cloud providers, but you're a pretty important piece of the infrastructure. >> Ihab: Yeah, absolutely. While we're nowhere close to the size of those players, the place we play in the universe is very significant. We are the edge of the cloud, I would say. We enable all these players; they're all our biggest customers. As well, all the networks are our biggest customers. We have over 2,000 clouds in our data centers and over 1,400 networks. We have one of the largest global data center networks. We have 150 data centers and four eMarkets around the world. And that number is going to get a little bigger. Now we've announced the acquisition of Verizon's data center assets, so we'll have more data centers and a few more markets. >> I heard about the Verizon acquisition, so congratulations, just adding more infrastructure. But let's unpack it a little bit.
Two things I want to dig into. One is you said you have clouds in your data centers. So what do you mean by that? >> Yeah, the way the cloud architecture is deployed is that the big cloud providers will have these big data centers, where they build them themselves and host the applications. And then they work with an edge for the cloud. Either a caching edge, or a compute edge, or even a network edge in data centers like ours, where they connect to all their enterprise customers and all the networks. So we have a significant number of edges; we have 21 markets around the world. We have just about the biggest list of names, edges, that you can connect to automatically. AWS, Google, Microsoft, Salesforce.com, Oracle, anybody else you can think of. >> So this is kind of an extension of what we heard back a long time ago with you guys and Amazon specifically on direct connect. So you are the edge between somebody else's data center and these giant cloud providers. >> Absolutely. And since the last time we talked, we've added a lot more density. More edge nodes, more markets, and more new cloud providers. Everywhere from the SaaS providers to the infrastructure-as-a-service providers. >> And why should customers care? What's the benefit to your customers? >> Yeah, the benefit is really significant. These guys want direct access to the cloud for high performance and security. So everybody wants to build the hybrid cloud. Now it's very clear the hybrid cloud is the architecture of choice. If you want to build a hybrid cloud, then you want to deploy in a data center and connect to the cloud. And the second thing that's happening is nobody's using just one cloud. Everybody's doing multi-cloud. So if you want 40, 50 clouds, like most companies do, most CIOs, then you're going to want to be in a data center that has as many as possible.
If you're going to go global, connect to multi-cloud, and have that proximity, you're going to have a hard time finding somebody like Equinix out there. >> Yeah, but I've got a question. You mentioned the Verizon deal. There was a trend for a while where all these big service providers were buying data centers, including AT&T and CenturyLink, and now the trend appears to have reversed. Now they're selling the data centers that they bought. I'd love your insight on that. Why was that just not their core competency? Why are they selling them back to people like Equinix? >> Yeah, that's a good question. What's happened over time, as the cloud materialized, is that data centers are much more valuable if they're neutral. If you can come in and connect to all the clouds and all the networks, customers are much more likely to come in. And therefore, if a data center is owned by a single network, customers are not as likely to want to use it, because they want to use all the networks and all the clouds. And our model of neutrality, how we set up exchanges, how we provide interconnection, and the whole way we do customer service, are the kinds of things people are looking for. >> So you're the Switzerland of the cloud. >> And so the same assets become much more valuable in this new model. >> And I don't know if people understand quite how much direct connection and peer-to-peer, and how much of that's going on, especially in a business-to-business context, to provide a much better experience. Versus, you know, the wild woolly internet of days of old, where you're hopping all over the place, Lord knows how many hops you're taking. A lot of that's really been locked down. >> I think the most important stat people can think about is that by 2020, 90% of all the internet, or at least 80 to 90 percent, will be homed to the top 10 clouds. Therefore, while the days of the wild internet continue to be significant, the cloud access and interconnection is very critical, and continues to be even bigger.
>> Go ahead. >> So tell us what the logistics are of managing the growth, like how many data centers a year are you opening, and how much equipment are you moving into these data centers? >> We spend over a billion dollars a year on upgrading, adding capacity, and building new data centers. We usually announce five, six new ones a year. We usually have 20 plus projects, if not more, active at any time. So we have a very focused process and people across the globe manage this thing. We don't want to go dark in any of our key markets like Washington DC, the D.C. market, or let's say the San Jose, Silicon Valley, etc. Because customers want to come in and continue to add and continue to bring people. And that means not only expanding the existing data centers, but buying land and building more data centers beside them, and continuing to expand where we need to. And then every year or so we go into one or two more emerging markets. We went into Dubai a while ago and we continue to develop it. And those become long term investments to continue to build our global infrastructure. The last few years we've made massive acquisitions between Telecity in Europe, Bit-isle in Japan, and now the Verizon assets that expanded our footprint significantly into new markets, Eastern Europe, gave us bigger markets in places like Tokyo, which helped us get to where we are today. >> One of the themes in networking and cloud in general is that the speed of light is just too damn slow. At the end of the day, stuff's got to travel and it actually takes longer than you would think. So does having all these, increased presence, increased edges, increased physical locations, help you address some of that? Because you've got so many more points kind of into this private network if you will. >> Oh yeah absolutely. The content has become more and more localized by market. And the more you have things like IoT and devices pulling in more data, not all the data needs to go all over the globe.
And also there are now jurisdictions and laws that require some of the content to stay. So the market approach that we have is becoming the center of mass for where the data resides. And once the data gets into our data center, the value of the data is how you exchange it with other pieces of information, and increasingly how you make immediate decisions on it, you know with automation and machine learning. So when you go to that environment you need massive capacity, very low latency, to many data warehouses or data lakes, and you want to connect that to the software that can make decisions. So that's how we see the world is evolving now. One thing we see though is that complementing that will be a new edge that will form. A lot of people in this conference were talking about that. A lot of the discussion about the open networks here is how we support the 5G, all the explosion of devices, and what we see is that connecting to that dense market approach that we have where the data is housed. >> That's interesting, you just mentioned all the devices, which was going to be my next question. So the internet of things, how will this change the data center edge, as you refer to it? >> Yeah that's the biggest question in the industry, especially for networks. And the same discussion happened at Mobile World Congress here a little while ago. People now believe that there'll be this compute edge, that the network will be a compute edge. Because you want to be able to put compute, keep pushing it out all the way to the edge. And that edge needs to support today's technologies but also all the open wireless spectrum, all the low powered networks, open R, one of the millimeter wave frequencies, and also the 5G as you know. So when you add all that up you're going to need this edge to support.
So all the different wireless options plus some amount of compute, and that problem is very hard to solve without an open source model, which is where a lot of people are here looking for solutions. >> It's interesting because your definition of the edge feels like it's kind of closer to the cloud, whereas there's a lot of conversation, we do a lot of stuff with GE about the edge, which is you know right out there on the device and the sensor. Because as you said, depending on the application, depending on the optimization, depending on what you're trying to do, there's some level of compute and store that's going to be done locally on the device, and some of it will go upstream and get processed and come downstream. But you're talking about a different edge. Or do you see you guys extending all the way down to that edge? >> We don't see ourselves extending at this time but definitely it's something we're spending a lot of time analyzing to see what happens. I would say a couple of big stats is that today our edge is maybe 100 milliseconds from devices in a market, or a lot less in some cases. The new technology will make that even shorter. So with the new technology, like you said, you can't beat the speed of light, but with more direct connections you'll get to 40, 50 milliseconds, which is fantastic for the vast majority of applications people want. There'll be very few applications that need much lower latency, all the way down to the sub-10 millisecond. For those, somebody like a network would need to put compute at the edge to do some of it. So that world of both types will continue. But even the ones that need the very low latency, for some of the data it still needs to be compared to other sources of data and connect to clouds and networks, but some of the data will still come back to our data centers. So I think this is how we see the world evolving but it's early days and a lot of brain power will be spent on that.
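The latency figures quoted above follow directly from propagation physics: light in optical fiber travels at roughly two thirds of its speed in vacuum, about 200,000 km/s. A back-of-the-envelope sketch (illustrative Python, not from the interview; the distances are hypothetical) shows why sub-10 millisecond round trips force compute toward the edge:

```python
# Best-case round-trip propagation delay over fiber, ignoring
# routing, queuing, and processing overhead entirely.
C_FIBER_KM_PER_S = 200_000  # roughly 2/3 of the speed of light in vacuum

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time to an endpoint distance_km away."""
    return 2 * 1000 * distance_km / C_FIBER_KM_PER_S

for km in (100, 1_000, 4_000):
    print(f"{km:>5} km -> {min_rtt_ms(km):.1f} ms minimum RTT")
```

Even over a perfectly direct connection, an endpoint 1,000 km away can never see a round trip under 10 ms, which is why the sub-10 millisecond applications discussed here need compute placed at the network edge rather than in a distant data center, while the 40-50 ms figure is consistent with a few thousand kilometers of directly connected fiber.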
>> So as you look forward to 2017, what are some of the big items on your plate that you're trying to take down for this calendar year? >> The biggest thing on our list is that we have an explosion of the software model. Everybody who was a software company now has a software platform. When we were at OCP for example you saw NetApp, they showed their software as open source. Every single company from security to storage, even networking, is now making their platform available as software. Well those platforms have no place to go today. They have no deployment model. So one of the things we are working on is how we create a deployment model for this as-a-service model. And most of them are open source, so it needs decoupling of software and hardware. So we are really actively working with all of these to create an ecosystem for open source software, and software in general, plus this whole open source hardware. >> So do you guys have a pretty aggressive software division inside Equinix, especially in these open source projects? Or how do you kind of interact with them? >> Our model is to enable the industry. So we have some of our own tools, but mostly for enabling customers and customer service, as well as some of the basic interconnection we do. The vast majority of all the stuff is our partners, and these are our customers. So our model is to enable them and to connect them to everybody else they need in the ecosystem to succeed, and help them set up an as-a-service model. And as the enterprise customers come to our data center, how do they connect to them? So I would say that's one of the most sought after missions when we go to conferences like this. Everybody who announced today is talking to us about how they enable the announcements they make, and given our place in the universe, we would be a very key player in enabling that ecosystem. >> Do you have like a special lab where you test these new technologies? Or how do you do that? >> Yeah that's the plan.
And we connect this effort to also what we're doing with OCP and the Telecom Infra Project, where we have a leadership position and are highly engaged. We are creating a lab environment where people can come in and test not only the hardware from TIP and OCP, but also the software from open networking, and many other open source software projects in general under the Linux Foundation or others. In our situation not only can they test it against each other, but they can test the performance against the entire world. How does this work with the internet, the cloud? And that leads us to the deployment and go-to-market models that people are looking for. >> Alright, sounds pretty exciting. Equinix, a company that probably handles more of your internet traffic than you ever thought. >> Ihab: That's very true. >> Well thanks again for stopping by. We'll look for you at our next open source show. >> Thank you very much. >> Ihab Tarazi from Equinix. He's Scott Raynovich, I'm Jeff Frick, you're watching theCube from Open Networking Summit 2017, see you next time after this short break. (techno music)
Raejeanne Skillern | Google Cloud Next 2017
>> Hey welcome back everybody. Jeff Frick here with theCUBE, we are on the ground in downtown San Francisco at the Google Next 17 Conference. It's this crazy conference week, and arguably this is the center of all the action. Cloud is big, Google Cloud Platform is really coming out with a major enterprise shift and focus, which they've always had, but now they're really getting behind it. And I think this conference is over 14,000 people, has grown quite a bit from a few years back, and we're really excited to have one of the powerhouse partners with Google, who's driving to the enterprise, and that's Intel, and I'm really excited to be joined by Raejeanne Skillern, she's the VP and GM of the Cloud Platform Group, Raejeanne, great to see you. >> Thank you, thanks for having me. >> Yeah absolutely. So when we got this scheduled, I was thinking, wow, last time I saw you was at the Open Compute Project 2015, and we were just down there yesterday. >> Yesterday. And we missed each other yesterday, but here we are today. >> So it's interesting, there's kind of the guts of the cloud, because cloud is somebody else's computer that they're running, but there is actually a computer back there. Here, it's really kind of the front end and the business delivery to people to have the elastic capability of the cloud, the dynamic flexibility of cloud, and you guys are a big part of this. So first off, give us a quick update, I'm sure you had some good announcements here at the show, what's going on with Intel and Google Cloud Platform? >> We did, and we love it all, from the silicon ingredients up to the services and solutions, this is where we invest, so it's great to be a part of yesterday and today. I was on stage earlier today with Urs Holzle talking about the Google and Intel Strategic Alliance, we actually announced this alliance last November, between Diane Greene and Diane Bryant of Intel.
And we had a history, a decade-plus, of collaborating on CPU-level optimization and technology optimization for Google's infrastructure. We've actually expanded that collaboration to cover hybrid cloud orchestration, security, IoT edge to cloud, and of course, artificial intelligence, machine learning, and deep learning. So we still do a lot of custom work with Google, making sure our technologies run their infrastructure the best, and we're working beyond the infrastructure to the software and solutions with them to make sure that those software and solutions run best on our architecture. >> Right cause it's a very interesting play, with Google and Facebook and a lot of the big cloud providers, they custom built their solutions based on their application needs and so I would presume that the microprocessor needs are very specific versus say, a typical PC microprocessor, which has a more kind of generic across the board type of demand. So what are some of the special demands that cloud demands from the microprocessor specifically? >> So what we've seen, right now, about half the volume we ship in the public cloud segment is customized in some way. And really the driving force is always performance per dollar TCO improvement. How to get the best performance and the lowest cost to pay for that performance. And what we've found is that by working with the top, not just the Super Seven, we call them, but the Top 100, closely, understanding their infrastructure at scale, is that they benefit from more powerful servers, with performance efficiency, more capability, more richly configured platforms. So a lot of what we've done, these cloud service providers have actually in some cases pushed us off of our roadmap in terms of what we can provide in terms of performance and scalability and agility in their infrastructure. So we do a lot of tweaks around that.
And then of course, as I mentioned, it's not just the CPU ingredients, we have to optimize at the software level, so we do a lot of co-engineering work to make sure that every ounce of performance and efficiency is seen in their infrastructure. And that's how they, their data center is their cost of sales, they can't afford to have anything inefficient. So we really try to partner to make sure that it is completely tailor-optimized for that environment. >> Right, and the hyperscale, like you said, the infrastructure there is so different than kind of classic enterprise infrastructure, and then you have other things like energy consumption, which, again, at scale, itty bitty little improvements >> It's expensive. >> Make a huge impact. And then application far beyond the cloud service providers, so many of the applications that we interact with now today on a day to day basis are cloud-based applications, whether it is the G Suite for documents or this or that, or whether it's Salesforce, or whether we just put in Asana for task tracking, and Slack, and so many of these things are now cloud-based applications, which is really the way we work more and more and more on our desktops. >> Absolutely. And one of the things we look at is, applications really have kind of a gravity. Some applications are going to have a high affinity to public cloud. You see test and dev, you see email and office collaboration already moving into the public cloud. There are some legacy applications, complex, some of the heavier modeling and simulation type apps, or big huge super computers that might stay on premise, and then you have this middle ground of applications, that, for various reasons, performance, security, data governance, data gravity, business need or IP, could go between the public cloud or stay on premise. And that's why we think it's so important that the world recognizes that this really is about a hybrid cloud.
And it's really nice to partner with Google because they see that hybrid cloud as the end state, or they call it the Multi Cloud. And their Kubernetes Orchestration Platform is really designed to help that, to seamlessly move those apps from a customer's premises into the Google environment and have that flow. So it's a very dynamic environment, we expect to see a lot of workloads kind of continue to be invested and move into the public cloud, and people really optimizing end-to-end. >> So you've been in the data center space, we talked a little bit before we went live, you've been in the data center space for a long, long time. >> Long time. >> We won't tell you how long. (laughing) >> Both: Long time. >> So it must be really exciting for you to see this shift in computing. There's still a lot of computing power at the edge, and there's still a lot of computing power now in our mobile devices and our PCs, but so much more of the heavy lift in the application infrastructure itself is now contained in the data center, so much more than just your typical old-school corporate data centers that we used to see. Really fun evolution of the industry, for you. >> Absolutely, and the public cloud is now one of the fastest growing segments in the enterprise space, in the data center space, I should say. We still have a very strong enterprise business. But what I love is it's not just about the fact that the public cloud is growing, this hybrid really connects our two segments, so I'm really learning a lot. It's also, I've been at Intel 23 years, most of it in the data center, and last year, we reorganized our company, we completely restructured Intel to be a cloud and IoT company. And from a company that for multiple decades was a PC or consumer-based client device company, it is just amazing to have data center be so front and center and so core to the type of infrastructure and capability expansion that we're going to see across the industry.
We were talking about, there isn't going to be an industry left untouched by technology. Whether it's agriculture, or industrial, or healthcare, or retail, or logistics. Technology is going to transform them, and it all comes back to a data center and a cloud-based infrastructure that can handle the data and the scale and the processing. >> So one of the new themes that's really coming on board, next week will be Big Data SV, which has grown out of Hadoop and the old big data conversation. But it's really now morphing into the next stage of that, which is machine learning, deep learning, artificial intelligence, augmented reality, virtual reality, so this whole 'nother round that's going to eat up a whole bunch of CPU capacity. But those are really good cloud-based applications that are now delivering a completely new level of value and application sophistication that's driven by power back at the data center. >> Right. We see, artificial intelligence has been a topic since the 50s. But the reality is, the technology is there today to both capture and create the data, and compute on the data. And that's really unlocking these capabilities. And from us as a company, we see it as really something that is going to not just transform us as a business but transform the many use cases and industries we talked about. Today, you or I generate about a gig and a half of data, through our devices and our PC and tablet. A smart factory or smart plane or smart car, autonomous car, is going to generate terabytes of data. Right, and that is going to need to be stored. Today it's estimated only about 5% of the data captured is used for business insight. The rest just sits. We need to capture the data, store the data efficiently, use the data for insights, and then drive that back into the continuous learning.
And that's why these technologies are so amazing, what they're going to be able to do, because we have the technology and the opportunity in the business space, whether it's AI for play or for good or for business, AI is going to transform the industry. >> It's interesting, Moore's Law comes up all the time. People, is Moore's Law done, is Moore's Law done? And you know, Moore's Law is so much more than the physics of what he was describing when he first said it in the first place, about the number of transistors on a chip. It's really about an attitude, about this unbelievable drive to continue to innovate and iterate and get these order-of-magnitude increases. We talked to David Floyer at OCP yesterday, and he's talking about how it's not only the microprocessors and the compute power, but it's the IO, it's the networking, it's storage, it's flash storage, it's the interconnect, it's the cabling, it's all these things. And he was really excited that we're getting to this massive tipping point, of course in five years we'll look back and think it's archaic, of these things really coming together to deliver low latency, almost magical capabilities because of this combination of factors across all those different, kind of the three horsemen of computing, if you will, to deliver these really magical, new applications, like autonomous vehicles. >> Absolutely. And we, you'll hear Intel talk about Jevons Paradox, which is really about, if you take something and make it cheaper and easier to consume, people will consume more of it. We saw that with virtualization. People predicted oh everything's going to slow down cause you're going to get higher utilization rates. Actually it just unlocked new capabilities and the market grew because of it. We see the same thing with data. Our CEO will talk about, data is the new oil.
It is going to transform, it's going to unlock business opportunity, revenue growth, cost savings in the environment, and that will cause people to create more services, build new businesses, reach more people in the industry, transform traditional brick and mortar businesses to the digital economy. So we think we're just on the cusp of this transformation, and the next five to 10 years are going to be amazing. >> So before we let you go, again, you've been doing this for 20 plus years, I wasn't going to say anything, she said it, I didn't say it, and I worked at Intel the same time, so that's good. As you look forward, what are some of your priorities for 2017, what are some of the things that you're working on, that if we get together, hopefully not in a couple years at OCP, but next year, that you'll be able to report back that this is what we worked on and these are some of the new accomplishments that are important to you? >> So I'm really, there's a number of things we're doing. You heard me mention artificial intelligence many, many times. In 2016, Intel made a number of significant acquisitions and investments to really ensure we have the right technology road map for artificial intelligence. Machine learning, deep learning, training and inference. And we've really shored up that product portfolio, and you're going to see these products come to market and you're going to see user adoption, not just in my segment, but transforming multiple segments. So I'm really excited about those capabilities. And a lot of what we'll do, too, will be very vertical-based. So you're going to see the power of the technology, solving the health care problem, solving the retail problem, solving manufacturing, logistics, industrial problems. So I like that, I like to see tangible results from our technology. The other thing is the cloud is just growing. Everybody predicted, can it continue to grow? It does.
Companies like Google and our other partners, they keep growing and we grow with them, and I love to help figure out where they're going to be two or three years from now, and get our products ready for that challenge. >> Alright, well I look forward to our next visit. Raejeanne, thanks for taking a few minutes out of your time and speaking to us. >> It was nice to see you again. >> You too. Alright, she's Raejeanne Skillern and I'm Jeff Frick, you're watching theCUBE, we're at the Google Cloud Next Show 2017, thanks for watching. (electronic sounds)
Lisa Spelman, Intel - Google Next 2017 - #GoogleNext17 - #theCUBE
(bright music) >> Narrator: Live from Silicon Valley. It's theCUBE, covering Google Cloud Next 17. >> Okay, welcome back, everyone. We're live in Palo Alto for theCUBE special two day coverage here in Palo Alto. We have reporters, we have analysts on the ground in San Francisco, analyzing what's going on with Google Next, we have all the great action. Of course, we also have reporters at Open Compute Summit, which is also happening in San Jose, and Intel's at both places, and we have an Intel senior leader on the line here, on the phone, Lisa Spelman, vice president and general manager of the Xeon product line, with product management responsibility as well as marketing across the data center. Lisa, welcome to theCUBE, and thanks for calling in and dissecting Google Next, as well as teasing out maybe a little bit of OCP around the Xeon processor, thanks for calling. >> Lisa: Well, thank you for having me, and it's hard to be in many places at once, so it's a busy week and we're all over, so that's that. You know, we'll do this on the phone, and next time we'll do it in person. >> I'd love to. Well, more big news is obviously Intel has a big presence with Google Next, and tomorrow there's going to be some activity with some of the big name executives at Google. Talking about your relationship with Google, aka Alphabet, what are some of the key things that you guys are doing with Google that people should know about, because this is a very turbulent time in the ecosystem of the tech business. You saw Mobile World Congress last week, we've seen the evolution of 5G, we have network transformation going on. Data centers are moving to a hybrid cloud, in some cases, cloud native's exploding. So a whole new kind of computing environment is taking shape. What is Intel doing here at Google Next that's a proof point to the trajectory of the business?
>> Lisa: Yeah, you know, I'd like to think it's not too much of a surprise that we're there, arm in arm with Google, given all of the work that we've done together over the last several years in that tight engineering and technical partnership that we have. One of the big things that we've been working with Google on is, as they move from delivering cloud services for their own usage and for their own applications that they provide out to others, but now as they transition into being a cloud service provider for enterprises and other IT shops as well, so they've recently launched their Google Cloud platform, just in the last week or so. Did a nice announcement about the partnership that we have together, and how the Google Cloud platform is now available and running and open for business on our latest next generation Intel Xeon product, and that's codenamed Skylake, but that's something that we've been working on with them since the inception of the design of the product, so it's really nice to have it out there and in the market, and available for customers, and we very much value partnerships, like the one we have with Google, where we have that deep technical engagement to really get to the heart of the workload that they need to provide, and then can design product and solution around that. So you don't just look at it as a one off project or a one time investment, it's an ongoing continuation and evolution of new product, new features, new capabilities to continue to improve their total cost of ownership and their customer experience. >> Well, Lisa, this is your baby, the Xeon, codename Skylake, which I love that name. Intel always has great codenames, by the way, we love that, but it's real technology. 
Can you share some specific features of what's different around these new workloads because, you know, we've been teasing out over the past day and we're going to be talking tomorrow as well about these new use cases, because you're looking at a plethora of use cases, from IoT edge all the way down into cloud native applications. What specific things is Xeon doing that's next generation that you could highlight, that points to this new cloud operating system, the cloud service providers, whether it's managed services to full blown down and dirty cloud? >> Lisa: So it is my baby, I appreciate you saying that, and it's so exciting to see it out there, starting to get used and picked up, and to unleash it on the world. With this next generation of Xeon, it's always about the processor, but what we've done has gone so much beyond that, so we have a ton of what we call platform level innovation that is coming in, we really see this as one of our biggest kind of step function improvements in the last 10 years that we've offered. Some of the features that we've already talked about are things like AVX-512 instructions, which I know just sounds fun and rolls off the tongue, but really it's very specific workload acceleration for things like high performance computing workloads. And high performance computing is something that we see more and more getting used and accessed in cloud-style infrastructure. So it's this perfect marrying of that workload specifically deriving benefit from the new platforms, and seeing really strong performance improvements. It also speaks to the way the Intel Xeon families work together, 'cause remember, with Xeon, we have Xeon Phi, you've got standard Xeon, you've got Xeon D. You can use these instructions across the families and have workloads that can move to the most optimized hardware for whatever you're trying to drive.
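To make the AVX-512 point concrete: a 512-bit register holds eight 64-bit doubles, so a vectorized loop retires one instruction per eight elements instead of one per element. The toy Python below is purely illustrative (it is not Intel code and simulates the lane model, not real SIMD execution):

```python
# Toy illustration: AVX-512 registers are 512 bits wide, so one instruction
# can operate on 8 double-precision (64-bit) floats at once. We simulate
# that lane-wise execution model in pure Python.

LANE_BITS = 512
DOUBLE_BITS = 64
LANES = LANE_BITS // DOUBLE_BITS  # 8 doubles per register

def simd_add(a, b):
    """Add two equal-length float vectors in chunks of LANES,
    mimicking how a vectorized loop consumes 8 elements per 'instruction'."""
    assert len(a) == len(b)
    out = []
    ops = 0  # count of simulated vector instructions
    for i in range(0, len(a), LANES):
        out.extend(x + y for x, y in zip(a[i:i + LANES], b[i:i + LANES]))
        ops += 1
    return out, ops

a = [float(i) for i in range(32)]
b = [1.0] * 32
result, vector_ops = simd_add(a, b)
# 32 elements / 8 lanes = 4 vector operations instead of 32 scalar adds
```

This eight-to-one reduction in instruction count is the kind of workload-specific acceleration Lisa is describing for HPC-style code.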
Some of the other things that we've talked about and announced is we'll have our next generation of Intel Resource Director technology, which really helps you manage and provide quality of service within your application, which is very important to cloud service providers, giving them control over hardware and software assets so that they can deliver the best customer experience to their customers based on the service level agreement they've signed up for. And then the other one is Intel Omni-Path architecture, so again, fairly high performance computing focused product, Omni-Path is a fabric, and we're going to offer that in an integrated fashion with Skylake so that you can get an even higher level of performance and capability. So we're looking forward to a lot more that we have to come, the whole of the product line will continue to roll out in the middle of this year, but we're excited to be able to offer an early version to the cloud service providers, get them started, get it out in the market and then do that full scale enterprise validation over the next several months. >> So I got to ask you the question, because this is something that's coming up, we're seeing a transition, also the digital transformation's been talked about for a while. Network transformation, IoT's all around the corner, we've got autonomous vehicles, smart cities, on and on. But I got to ask you though, the cloud service providers seem to be coming out of this show as a key storyline in Google Next as the multi cloud architectures become very clear. So it's become clear, not just this show but it's been building up to this, it's pretty clear that it's going to be a multi cloud world. As well as you're starting to see the providers talk about their SaaS offerings, Google talking about G Suite, Microsoft talks about Office 365, Oracle has their apps, IBM's got Watson, so you have this SaaSification. So this now creates a whole other category of what cloud is.
If you include SaaS, you're really talking about Salesforce, Adobe, you know, on and on the list, everyone is potentially going to become a SaaS provider whether they're a unique cloud or partnering with some other cloud. What does that mean for a cloud service provider, what do they need for applications support requirements to be successful? >> So when we look at the cloud service provider market inside of Intel, we are talking about infrastructure as a service, platform as a service and software as a service. So cutting across the three major categories, and I'd say, up until now, infrastructure as a service has gotten a lot of the airtime or focus, but SaaS is actually the bigger business, and that's why you see, I think, people moving towards it, especially as enterprise IT becomes more comfortable with using SaaS applications. You know, maybe first they started with offloading their expense report tool, but over time, they've moved into more sophisticated offerings that free up resources for them to do their most critical or business critical applications that they require to stay in more of a private cloud. I think that evolution to a multi cloud, a hybrid cloud, has happened across the entire industry, whether you are an enterprise or whether you are a cloud service provider. And then the move to SaaS is logical, because people are demanding just more and more services. One of the things that, through all our years of partnering with the biggest to the smallest cloud service providers and working so closely on those technical requirements, we've continued to find is that total cost of ownership really is king, it's that performance per dollar, TCO, that they can provide and derive from their infrastructure, and we focused a lot of our engineering and our investment in our silicon design around providing that.
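The "performance per dollar, TCO" framing Lisa uses can be sketched as a simple calculation. All of the numbers below are hypothetical, chosen only to show why a newer, more expensive server generation can still win on TCO:

```python
# Hypothetical numbers for illustration only — a sketch of the
# "performance per TCO dollar" comparison cloud providers make.

def perf_per_dollar(requests_per_sec, server_cost, power_cooling_per_year, years=4):
    """Performance delivered per total-cost-of-ownership dollar
    over the server's deployed lifetime."""
    tco = server_cost + power_cooling_per_year * years
    return requests_per_sec / tco

old_gen = perf_per_dollar(requests_per_sec=50_000, server_cost=6_000,
                          power_cooling_per_year=900)
new_gen = perf_per_dollar(requests_per_sec=65_000, server_cost=6_500,
                          power_cooling_per_year=800)
# The newer generation wins on TCO despite a higher purchase price,
# because performance rises while power/cooling per unit of work falls.
```

Providers run this arithmetic per workload, which is why per-generation performance and power improvements translate directly into buying decisions.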
We have multiple generations that we've provided even just in the last five years to continue to drive those step function improvements and really optimize our hardware and the code that runs on top of it to make sure that it does continue to deliver on those demanding workloads. The other thing that we see the providers focusing on is what's their differentiation. So you'll see cloud service providers that will look through the various silicon features that we offer and choose, they'll pick and choose based on whatever their key workload is or whatever their key market is, and really kind of hone in and optimize for those silicon features so that they can have a differentiated offering into the market about what capabilities and services they'll provide. So it's an area where we continue to really focus our efforts, understand the workload, drive the TCO down, and then focus in on the design point of what's going to give that differentiation and acceleration. >> It's interesting, on the definition I'd agree with you, the cloud service provider market is huge even when you look at the SaaS side. 'Cause whether you're talking about Uber or Netflix, for instance, examples people know about in real life, you can't ignore these new diverse use cases coming out. For instance, I was just talking with Stu Miniman, one of our analysts here at Wikibon, and Riot Games could be considered a cloud, right, I mean, 'cause it's a SaaS platform, it's gaming. You're starting to see these new apps coming out of the woodwork. There seems to be a requirement for being agile as a cloud provider. How do you enable that, what specifically can you share, if I'm a cloud service provider, to be ready to support anything that's coming down the pike?
>> Lisa: You know, we do a lot of workload and market analysis inside of Intel and the data center group, and you've seen over the past five years how much we've expanded and broadened our product portfolio. So again, it will still be built upon that foundation of Xeon and what we have there, but we've gone on to offer a lot of varieties. So again, I mentioned Xeon Phi. Xeon Phi at 72 cores, a bootable Xeon but with specific workload acceleration targeted at high performance computing and other analytics workloads. And then you have things at the other end. You've got Xeon D, which is really focused at more frontend web services and storage and network workloads, or Atom, which is even lower power and more focused on cold and warm storage workloads, and again, that network function. So you could then say we're not just sticking with one product line and saying this is the answer for everything, we're saying here's the core of what we offer, and the features people need, and finding options, whether they range from low power to high power high performance, and kind of mixed across that whole workload spectrum, and then we've broadened around the CPU into a lot of other silicon innovation. So I don't know if you guys have had a chance to talk about some of the work that we're doing with FPGAs, with our FPGA group and driving and delivering cloud and network acceleration through FPGAs. We've also introduced new products in the last year like Silicon Photonics, so dealing with network traffic crossing through-- >> Well, FPGA, that's the Altera stuff, we did talk with them, they're doing the programmable chips.
>> Lisa: Exactly, so it requires a level of sophistication and understanding of what you need the workload to accelerate, but once you have it, it is a very impressive and powerful performance gain for you, so the cloud service providers are a perfect market for that, because they have very sophisticated IT and very technically astute engineering teams that are able to really, again, go back to the workload, understand what they need and figure out the right software solution to pair with it. So that's been a big focus of our targeting. And then, like I said, we've added all these different things, different new products to the platform that start to, over time, just work better and better together, so when you have things like Intel SSD there together with Intel CPUs and Intel Ethernet and Intel FPGA and Intel Silicon Photonics, you can start to see how the whole package, when it's designed together under one house, can offer a tremendous amount of workload acceleration. >> I got to ask you a question, Lisa, 'cause this comes up, while you're talking, I'm just in my mind visualizing a new kind of virtual computer server, the cloud is one big server, so it's a design challenge. And what was teased out at Mobile World Congress that was very clear was this new end to end architecture, you know, re-imagined, but if you have these processors that have unique capabilities, that have use case specific capabilities, in a way, you guys are now providing a portfolio of solutions so that it almost can be customized for a variety of cloud service providers. Am I getting that right, is that how you guys see this happening where you guys can just say, "Hey, just mix and match what you want and you're good."
>> Lisa: Well, and we try to provide a little bit more guidance than as you wish, I mean, of course, people have their options to choose, so like, with the cloud service providers, that's what we have, really tight engineering engagement, so that we can, you know, again, understand what they need, what their design point is, what they're honing in on. You might work with one cloud service provider that is very facilities limited, another that is space limited, another that's power limited, and another where performance is king, so we can cut some SKUs to help meet each of those needs. Another good example is in the artificial intelligence space where we did another acquisition last year, a company called Nervana that's working on optimized silicon for neural networks. And so now we have put together this AI portfolio, so instead of saying, "Oh, here's one answer for artificial intelligence," it's, "Here's a multitude of answers where you've got Xeon," so if you have underutilized capacity and are starting down your artificial intelligence journey, just use your Xeon capacity with an optimized framework and you'll get great results and you can start your journey. If you are monetizing and running your business based on what AI can do for you and you are leading the pack out there, you've got the best data scientists and algorithm writers and deep learning experts in the world, then you're going to want to use something like the silicon that we acquired from the Nervana team, and that codename is Lake Crest, speaking of some lakes there. And you'll want to use something like Xeon with Lake Crest to get that ultimate workload acceleration. So we have the whole portfolio that goes from Xeon to Xeon Phi to Xeon with FPGAs or Xeon with Lake Crest. Depending on what you're doing, and again, what your design point is, we have a solution for you.
And of course, when we say solution, we don't just mean hardware, we mean the optimized software frameworks and the libraries and all of that, that actually give you something that can perform. >> On the competitive side, we've seen the processor landscape heat up on the server and the cloud space. Obviously, whether it's from a competitor or homegrown foundry, whatever fabs are out there, I mean, so Intel's always had a great partnership with cloud service providers. Vis-a-vis the competition, and in that context, what are you guys doing specifically, and how do you approach the marketplace in light of competition? >> Lisa: So we do operate in a highly competitive market, and we always take all competitors seriously. So far we've seen the press heat up, which is different than seeing all of the deployments, so what we look for is to continue to offer the highest performance and lowest total cost of ownership for all our customers, and in this case, the cloud service providers, of course. And what we do is we kind of stick with our game plan of putting the best silicon in the world into the market on a regular beat rate and cadence, and so there's always news, there's always an interesting story, but when you look at having had eight new products and new generations in market since the last major competitive x86 product, that's kind of what we do, just keep delivering so that our customers know that they can bet on us to always be there and not have these massive gaps. And then I also talked to you about portfolio expansion, we don't bet on just one horse, we give our customers the choice to optimize for their workloads, so you can go up to 72 cores with Xeon Phi if that's important, you can go as low as two cores with Atom, if that's what works for you. Just an example of how we try to kind of address all of our customer segments with the right product at the right time.
>> And IoT certainly brings a challenge too, when you hear about network edge, that's a huge, huge growth area, I mean, you can't deny that that's going to be amazing, you look at how cars are data centers these days, right? >> Lisa: A data center on wheels. >> Data center on wheels. >> Lisa: That's one of the fun things about my role, even in the last year, is that growing partnership, even inside of Intel with our IoT team, and just really going through all of the products that we have in development, and how many of them can be reused and driven towards IoT solutions. The other thing is, if you look into the data center space, I genuinely believe we have the world's best ecosystem, you can't find an ISV that we haven't worked with to optimize their solution to run best on Intel architecture and get that workload acceleration. And now we have the chance to put that same playbook into play in the IoT space, so it's a growing, somewhat nascent but growing market with a ton of opportunity and a ton of standards still to be built, and a lot of full solution kits to be put together. And that's kind of what Intel does, you know, we don't just throw something out to the market and say, "Good luck," we actually put the ecosystem together around it so that it performs. But I think that's kind of what you see with, I don't know if you guys saw our Intel GO announcement, but it's really the software development kit and the whole product offering for what you need for truly delivering automated vehicles.
>> Well, Lisa, I got to say, so you guys have a great formula, why fix what's not broken, stay with Moore's law, keep that cadence going, but what's interesting is you are listening and adapting to the architectural shifts, which is smart, so congratulations and I think, as the cloud service provider world changes, and certainly in the data center, it's going to be a turbulent time, but a lot of opportunity, and so it's good to have that reliability and, if you can make the software go faster then they can write more software faster, so-- >> Lisa: Yup, and that's what we've seen every time we deliver a step function improvement in performance, we see a step function improvement in demand, and so the world is still hungry for more and more compute, and we see this across all of our customer bases. And every time you make that compute more affordable, they come up with new, innovative, different ways to do things, to get things done and new services to offer, and that fundamentally is what drives us, is that desire to continue to be the backbone of that industry innovation. >> If you could sum up in a bumper sticker what that step function is, what is that new step function? >> Lisa: Oh, when we say step functions of improvements, I mean, we're always looking at targeting over 20% performance improvement per generation, and then on top of that, we've added a bunch of other capabilities beyond it. So it might show up as, say, a security feature as well, so you're getting the massive performance improvement gen to gen, and then you're also getting new capabilities like security features added on top. So you'll see more and more of those types of announcements from us as well where we kind of highlight not just the performance but what else comes with it, so that you can continue to address, you know, again, the growing needs that are out there, so all we're trying to do is stay a step ahead.
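The "over 20% performance improvement per generation" figure Lisa cites compounds quickly across generations; the 20% rate is hers, while the five-generation horizon below is just an illustrative assumption:

```python
# Sketch of how per-generation gains compound. The 20% per-generation
# figure comes from the interview; the generation count is hypothetical.

def compounded_gain(per_gen_gain, generations):
    """Total speedup relative to baseline after compounding gains."""
    return (1 + per_gen_gain) ** generations

# Five generations at 20% each more than doubles baseline performance.
gain = compounded_gain(0.20, 5)  # 1.2^5 = 2.48832
```

This compounding is why a steady "beat rate and cadence" matters more than any single launch: small, reliable per-generation gains become step-function improvements over a hardware refresh cycle.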
>> All right, Lisa Spelman, VP and GM of the Xeon product family, as well as marketing across the data center. Thank you for spending the time and sharing your insights on Google Next, and giving us a peek at the portfolio of the Xeon next generation, really appreciate it, and again, keep on bringing that power, Moore's law, more flexibility. Thank you so much for sharing. We're going to have more live coverage here in Palo Alto after this short break. (bright music)
James Hamilton - AWS Re:Invent 2014 - theCUBE - #awsreinvent
(gentle, upbeat music) >> Live from the Sands Convention Center in Las Vegas, Nevada, it's theCUBE, at AWS re:Invent 2014. Brought to you by headline sponsors Amazon and Trend Micro. >> Okay, welcome back everyone, we are here live at Amazon Web Services re:Invent 2014, this is theCUBE, our flagship program, where we go out to the events and extract the signal from the noise. I'm John Furrier, the Founder of SiliconANGLE, I'm joined by my co-host Stu Miniman from wikibon.org, our next guest is James Hamilton, who is Vice President and Distinguished Engineer at Amazon Web Services, back again, second year in a row, he's a celebrity! Everyone wants his autograph, selfies, I just tweeted a picture with Stu, welcome back! >> Thank you very much! I can't believe this is a technology conference. (laughs)
And what I wanted to do was to step through a bunch of examples of innovations, and show how this really is different from how IT has been done for years gone by. >> So the data center obviously, we're getting quote after quote, obviously we're here at the Amazon show so the quotes tend to be skewed towards this statement, but, I'm not in the data center business seems to be the theme, and, people generally aren't in the data center business, they're doing a lot of other things, and they need the data centers to run their business. With that in mind, what are the new innovations that you see coming up, that you're working on, that you have in place, that're going to be that enabler for this new data center in the cloud? So that customers can say hey, you know, I just want to get all this baggage off my back, I just want to run my business in an agile and effective way. Is it the equipment, is it the software, is it the chips? What're you doing there from an innovation standpoint? >> Yeah, what I focused on this year, and I think it's a couple of important areas: networking, because there's big cost problems in networking, and we've done a lot of work in that area that we think is going to help customers a lot; the second one's database, because databases, they're complicated, they're the core of all applications, when applications run into trouble, typically it's the database at the core of it, so those are the two areas I covered, and I think those are two of the most important areas we're working right now. >> So James, we've looked back at people that've tried to do this services angle before, networking has been one of the bottlenecks, I think one of the reasons xSPs failed in the '90s, it was networking and security, grid computing, even to today. So what is Amazon fundamentally doing differently today, and why now is it acceptable that you can deliver services around the world from your environment? What's different about networking today? >> It's a good question.
I think it's a combination of private links between all of the regions, every major region is privately linked today. That's better cost structure, better availability, lower latency, scaling down to the data center level we run all custom Amazon designed gear, all custom Amazon designed protocol stacks. And why is that important? It's because the cost of networking is actually climbing, relative to the rest of compute, and so, we need to do that in order to get costs under control and actually continue to be able to drive down costs. Second thing is customers need more networking-- more networking bandwidth per compute right now, it's, East/West is the big focus of the industry, because more bandwidth is required, we need to invest more, fast, that's why we're doing private gear. >> Yeah, I mean, it's some fascinating statistics, it's not just bandwidth, you said you do have up to 25 terabits per second between nodes, it's latency and jitter that are hugely important, especially when you go into databases. Can you talk about just architecturally, what you do with availability zones versus if I'm going to a Google or a Microsoft, what differentiates you? >> It is a little bit different. The parts that are the same are: every big enterprise that needs highly available applications is going to run those applications across multiple data centers, that's, so-- The way our system works is you choose the region to get close to your users, or to get close to your customers, or to be within a jurisdictional boundary. Below the region, normally what's in a region is a data center, and customers usually are replicating between two regions. What's different in the Amazon solution, is we have availability zones within region; each availability zone is actually at least one data center. Because we have multiple data centers inside the same region it enables customers to do realtime, synchronous replication between those data centers.
And so if they choose to, they can run multi-region replication just like most high end applications do today, or, they can run within an AZ, synchronous replication to multiple data centers. The advantage of that, is it takes less administrative complexity, if there's a failure, you never lose a transaction, where in multi-region replication, it has to be asynchronous because of the speed of light. >> Yeah, you-- >> Also, there's some jurisdictional benefits too, right? Say Germany, for instance, with a new data center. >> Yep. Yeah, many customers want to keep their data in region, and so that's another reason why you don't necessarily want to replicate it out in order to get that level of redundancy, you want to have multiple data centers in region, 100% correct >> So, how much is it that you drive your entire stack yourself that allows you to do this, I think about replication solutions, you used SRDF as an example. I worked on that, I worked for EMC for 10 years, and just doing a two site replication is challenging, >> It's hard. >> Multi site is different, and you guys, with six data centers and availability zones, fundamentally have a different way of handling replication.
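The synchronous, in-region replication model described above can be sketched in a few lines. This is a deliberately minimal toy (not AWS's implementation, and the three-replica count is just an assumption standing in for multiple data centers in one region) showing why a synchronous commit never loses an acknowledged transaction:

```python
# Minimal sketch of synchronous replication: a commit is acknowledged
# only after EVERY replica data center has durably applied it, so an
# acknowledged transaction survives the loss of any single replica.

class SyncReplicatedLog:
    def __init__(self, replica_count):
        # One append-only log per replica data center.
        self.replicas = [[] for _ in range(replica_count)]

    def commit(self, txn):
        # Synchronous: write to ALL replicas before acknowledging.
        for replica in self.replicas:
            replica.append(txn)
        return "ACK"  # the caller learns of success only once every copy exists

log = SyncReplicatedLog(replica_count=3)  # e.g. three data centers in a region
status = log.commit("txn-1")
# If any one replica fails after the ACK, the transaction still exists
# on the others — unlike asynchronous multi-region replication, where
# in-flight transactions can be lost on failover.
```

Asynchronous multi-region replication drops the "wait for all replicas" step because speed-of-light latency between regions makes it impractical, which is exactly the trade-off James describes.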
>> Networking needs to be the same place where servers went 20 years ago, and that is: it needs to be on a Moore's law curve where, as we get more and more transistors on a chip, we should get lower and lower costs in a server, we should get lower and lower costs in a network. Today, an ASIC, which is the core of the router, is always around the same price. Each generation we add more ports to that, and so effectively we've got a Moore's law price improvement happening where that ASIC stays the same price, you just keep adding ports. >> So, I got to jump in and ask ya about Open Compute, last year you said it's good I guess, I'm a fan, but we do our own thing, still the case? >> Yeah, absolutely. >> Still the case, okay, doing your own thing, and just watching Open Compute, which is like a fair for geeks. >> Open Compute's very cool, the thing is, what's happening in our industry right now is hyper-specialization, instead of buying general purpose hardware that's good for a large number of customers, we're buying hardware that's targeted to a specific workload, a specific service, and so, we're not--I love what happens with Open Compute, 'cause you can learn from it, it's really good stuff, but it's not what we use; we want to target our workloads precisely. >> Yeah, that was actually the title of the article I wrote from everything I learned from you last year: hyper-specialization is your secret sauce, so. You also said earlier this week that we should watch the mobile suppliers, and that's where servers should be in the future, but I heard a, somebody sent me a quote from you that said: unfortunately ARM is not moving quite fast enough to keep up with where Intel's going, where do you see, I know you're a fan of some of the chip manufacturers, where's that moving?
>> What I meant with watch ARM and understanding where servers are going, sorry, not ARM, watch mobile and understand where servers are going is: power became important in mobile, power becomes important in servers. Most functionality is being pulled up on chip, on mobile, same thing's happening in server land, and so-- >> What you're sayin' is mobile's a predictor >> Predicting. >> of the trends in the data center, >> Exactly, exactly right. >> Because of the challenges with the form factor. >> It's not so much the form factor, but the importance of power, and the importance of, of, well, density is important as well, so, it turns out that mobile tends to be a few years ahead, but all the same kinds of innovations that show up there, we end up finding them in servers a few years later. >> Alright, so James, we at Wikibon have a strong background in the storage world, and David Floyer, our CTO, said: one of the biggest challenges we had with databases is that they were designed to respond to disk, and therefore there were certain kinds of logging mechanisms in place. >> It's a good point. >> Can you talk a little bit about what you've done at Amazon with Aurora, and why you're fundamentally changing the underlying storage for that? >> Yeah, Aurora is applying modern database technology to the new world, and the new world is: SSDs at the base, and multiple availability zones available, and so if you look closely at Aurora you'll see that the storage engine is actually spread over multiple availability zones, and, as was mentioned in the keynote, it's a log-structured store. Log-structured stores work very very nicely on SSDs, they're not wonderful choices on spinning magnetic media. So this, what we're optimized for is SSDs, and we're not running it on spinning disk at all.
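The log-structured store James mentions can be sketched minimally: every write is an append, and an index points reads at the latest record. This is a generic illustration of the technique, not Aurora's actual design; appends suit SSDs because flash erases in large blocks, so sequential writes avoid costly in-place rewrites:

```python
# Minimal log-structured key-value store sketch (illustrative only, not
# Aurora's design): all writes are appends to a log; an in-memory index
# maps each key to the position of its newest record.

class LogStructuredKV:
    def __init__(self):
        self.log = []    # append-only sequence of (key, value) records
        self.index = {}  # key -> position of the latest record in the log

    def put(self, key, value):
        self.log.append((key, value))        # never overwrite in place
        self.index[key] = len(self.log) - 1  # point reads at the newest record

    def get(self, key):
        pos = self.index.get(key)
        return None if pos is None else self.log[pos][1]

kv = LogStructuredKV()
kv.put("a", 1)
kv.put("a", 2)       # supersedes, rather than rewrites, the earlier record
value = kv.get("a")  # reads see the latest value, 2
```

In a real system, a background compaction pass would reclaim the superseded records; the key property is that the write path itself is purely sequential.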
>> So I got to ask you about the questions we're seeing in the crowd, so you guys are obviously doing great on the scale side, you've got the availability zones which makes a lot of sense, certainly the Germany announcement, with the whole Ireland/EU data governance thing, and also expansion is great. But the government is moving fast into some enterprises, >> It's amazing. >> And so, we were talking about that last night, but people out there are sayin' that's great, it's a private cloud, the government's implementing a private cloud, so do you agree, is that a private cloud or is that a public-- >> (laughing) It's not a private cloud; if you see Amazon involved, it's not a private cloud. Our view of what we're good at, and the advantages cloud brings to market, are: we run a very large fleet of servers in every region, we provide a standard set of services in all those regions, it's completely different than packaged software. What the CIA has is another AWS region, it happens to be on their site, but it is just another AWS region, and that's the way they want it. >> Well people are going to start using that against you guys, so start parsing, well if it's only them then it's private, but there's some technicalities, you're clarifying that. >> It's definitely not a private cloud, and the reason why we're not going to get involved with doing private clouds is: packaged software is different, it's inefficient. When you deliver to thousands of customers, you can't make some of the optimizations that we make. Because we run the same thing everywhere, we actually have a much more reliable product, we're innovating more quickly, we just think it's a different world.
>> So James, you've talked a lot about how scale fundamentally changes the way you architect and build things; Amazon's now got over a billion customers, and it's got so many services, just adding more and more. Wikibon, actually Dave Vellante, wrote a post yesterday that said: we're trying to fundamentally change the economic model for enterprise IT, so that services are now like software; when Microsoft would print an extra disk it didn't cost anything. When you're building your environment, is there more strain on your environment for adding that next thousand customers or that next big service, or do you have the substrate built that's going to help it grow for the future? >> It's a good question, it varies on the service. Usually what happens is we get better year over year over year, and what we find is, once you get a service to scale, like S3 is definitely at scale, then growth, I won't say it's easy, but it's easier to predict because you're already on a large base, and we already know how to do it fairly well. Other services require a lot more thought on how to grow, and end up being a lot more difficult. >> So I got some more questions for ya, going on to some of the personal questions I want to ask you. Looking at this booth right here, it's the Netflix guys right there, I love that service, awesome founder, just what they do, just a great company, and I know they're a big customer. But you mentioned networks, so at the Google conference we went to, Google's got some chops, they have a developer community rockin' and rollin', and then it's pretty obvious what they're doin', they're not tryin' to compete with Amazon because it's too much work, but they're goin' after the front end developer, Rails, whatnot, PHP, and really nailing the back end transport. You see it appearing, really going after enabling a Netflix, these next generation companies, to have the backbone, and not be reliant on third party networks.
So I got to ask you, so as someone who's a tinkerer, a mechanic if you will of the large scale stuff, you got to get rid of that middleman on the network. What's your plans, you going to do peering? Google's obviously telegraphing they're comin' down that road. Do you guys meet their objective? Same product, better, what's your strategy? >> Yeah, it's a great question. The reason why we're running private links between our regions is the same reason that Google is, it's lower cost, that's good, it's much, much lower latency, that's really good, and it's a lot less jitter, and that's extremely important, and so it's private links, peering, customers direct connecting, that's all the reality of a modern cloud. >> And you see that, and do you have to build that in? Almost like you want to build your own chips, I'd imagine on the mobile side with the phone, you can see that, everyone's building their own chips. You got to have your own network stuff. Is that where you guys see the most improvement on the network side? Getting down to that precise hyper-specialized? >> We're not doing our own chips today, and we don't, in the networking world, and we don't see that as being a requirement. What we do see as a requirement is: we're buying our own ASICs, we're doing our own designs, we're building our own protocol stack; that's delivering great value, and that is what's deployed, private networking's deployed in all of our data centers now >> Yeah, I mean, James I wonder, you must look at Google, they do have an impressive network, they've got the undersea cables, is there anything you, that you look at them and saying: we need to move forward and catch up to them on certain, in certain pieces of the network? >> I don't think so, I think when you look at any of the big providers, they're all mature enough that they're doing, at that level, I think what we do has to be kind of similar. If private links are a better solution, then we're all going to do it, I mean. 
>> It makes a lot of sense, 'cause the impact on inspection, throttling traffic, that just creates uncertainty, so I'm a big fan, obviously, of that direction. Alright, now a personal question. So, in talking to your wife last night, getting to know you over the years here, and Stu is obviously a big fan. There's a huge new generation of engineers coming into the market. Open Compute, I bring that up because it's such a great initiative, you guys obviously have your own business reasons to do your own stuff, I get that. But there's a whole new culture of engineering coming out, a new homebrew computer club is out there forming right now; my young son makes his own machines, assembling stuff. So, you're an inspiration to that whole group, so I would like you to share just some commentary to this new generation: what to do, how to approach things, what you've learned, how do you come out on top of failure, how do you resolve that, how do you always grow? So, share some personal perspective. >> Yeah, it's an interesting question. >> I know you're humble, but, yeah. >> Interesting question. I think being curious is the most important thing possible. If anybody ever gets an opportunity to meet somebody that's at the top of any business, a heart surgeon, a jet engine designer, an auto mechanic, anyone that's at the top of their business is always worth meeting 'cause you can always learn from them. One of the cool things that I find with my job is: because it spans so many different areas, it's amazing how often I'll pick up a tidbit one day talking to an expert sailor, and the next day be able to apply that tidbit, or that idea, solving problems in the cloud. >> So just don't look for your narrow focus, your advice is: talk to people who are pros, in whatever their field is, there's always a nugget. >> James a friend of mine >> Stay curious!
>> Steve Todd, he actually called that Venn diagram innovation, where you need to find all of those different pieces, 'cause you're never going to know where you'll find the next idea. So, for the networking guys, there's a huge army of CCIEs out there, and some have predicted that if you have the title administrator in your name, you might be out of a job in five years. What do you recommend, what should they be training on, what should they be working toward to move forward to this new world? >> The history of computing is one of the-- a level of abstraction going up. Never has it been the case that those jobs go away; the only time jobs have ever gone away is when someone stayed at a level of abstraction that just wasn't really where the focus is. We need people taking care of systems; as the abstraction level goes up, there's still complexity, and so, my recommendation is: keep learning, just keep learning. >> Alright so I got to ask you, the big picture now, ecosystems out here, Oracle, IBM, these big incumbents, are looking at Amazon, scratching their heads sayin': it's hard for us to change our business to compete. Obviously you guys are pretty clear in your positioning, so what's next, outside of the current situation, what do you look at that needs to be built out, besides the network, that you see coming around the corner? And you don't have to reveal any secrets, just, philosophically, what's your vision there? >> I think our strategy is maybe a little bit, definitely a little bit different from some of the existing, old-school providers. One is: everyone's kind of used to it, Amazon passes on value to customers. We tend to be always hunting and innovating and trying to lower costs, and passing on the value to customers, that's one thing. Second one is choice.
I personally choose to run MySQL because I like the product, I think it's very good value; some of our customers want to run Oracle, some of our customers want to run MySQL, and we're absolutely fine doing that, some people want to run SQL Server. And so, the things that kind of differentiate us are: enterprise software hasn't dropped prices, ever, and that's just the way they were. Enterprise software is not about choice; we're all about choice. And so I think those are the two big differences, and I think those ones might last. >> Yeah, that's a good way to look at that. Now, back to the IT guy, let's talk about the CIO. Scratchin' his head sayin': okay, I got this facilities budget, and it's kind of the-- I talked to one CIO, he says: I spend more time planning meetings around facilities, power, and cooling, than anything else on innovation. So they have challenges here, so what's your advice, as someone who's been through a lot of engineering, a lot of large scale, to that team of people on power and cooling to really kind of go to the next level, and besides just saying okay throw some pots out there, or whatnot, what should they be doing, what's their roadmap? >> You mean the roadmap for doing a better job of running their facilities? >> Yeah, well there's always pressure for density, and power's a sacred (laughs) sacred resource right now, I mean power is everything, power's the new oil, so power's driving everything, so they have to optimize for that, but you can't generate more power, and space, so they want smaller spaces, and more efficiency. >> The biggest gains that are happening right now, and the biggest innovations that have been happening over the last five years in data centers, are mostly around mechanical systems and driving down the cost of cooling, and so that's one area. Second one is: if you look closely at servers you'll see that as density goes up, the complexity and density of cooling them goes up.
And so, getting designs that are optimized for running at higher temperatures, and certified for higher temperatures, is another good step, and we do both. >> So, James, there's such a diverse ecosystem here, I wonder if you've had a chance to look around? Anything cool outside of what Amazon is doing? Whether it's a partner, some startup, or some interesting idea that's caught your attention at the show. >> In fact I was meeting with western--pardon me, Hitachi Data Systems about three days ago, and they were describing some work that was done by Cycle Computing, and several hundred thousand cores-- >> We've had Cycle-- >> Jason came on. >> Oh, wow! >> Last year, we, he was a great guest. >> No, he was here too, just today! >> Oh, we got him on? Okay. >> So Hitachi's just, is showing me some of what they gained from this work, and then he showed me his bill, and it was five thousand six hundred and some dollars, for running this phenomenally big, multi-hundred-thousand-core project. Blew me away, I think that's phenomenal, just phenomenal work. >> James, I really appreciate you coming in, Stu and I are really glad you took the time to spend with our audience and come on theCUBE, again a great, pleasurable conversation, very knowledgeable. Stay curious, and get those nuggets of information, and keep us informed. Thanks for coming on theCUBE, James Hamilton, Distinguished Engineer at Amazon, doing some great work, and again, the future's all about making it smaller, faster, cheaper, and passing those savings on; you guys have a great strategy, a lot of your fans are here, customers, and other engineers. So thanks for spending time, this is theCUBE, I'm John Furrier with Stu Miniman, we'll be right back after this short break. (soft harmonic bells)