Sam Ramji, Google Cloud Platform | VMworld 2017
>> Welcome to our presentation here at VMworld 2017. I'm John Furrier, co-host of The Cube, with Dave Vellante who's taking a lunch break. We are at VMworld on the ground on the floor where we have Google's vice president of product management, developer platforms, Sam Ramji. Welcome to The Cube conversation. >> Great, thank you very much John. >> So you had a keynote this morning. You know, came up on stage, big announcement. Let's get right to it. That container as a service from Pivotal, VMware, and Google announced kind of a joint announcement. It was kind of weird. It wasn't fully joint, but it really came from Pivotal. Clarify what the announcement was. >> Sure, so what we announced is the result of a bunch of co-engineering that we've been doing in the open source with Pivotal around Kubernetes running on BOSH. So, if you've been paying attention to Cloud Foundry, you'd know that Cloud Foundry is the runtime layer and there's something called BOSH sitting underneath it that does the cluster management and cluster operations. Pivotal is bringing that to commercial GA later this year. So what we announced with Pivotal and VMware is that we're going to have constant compatibility between Pivotal's Kubernetes and Google's Kubernetes. Google's Kubernetes service is called Google Container Engine; Pivotal's offering is called Pivotal Container Service. The big deal here is that PKS is going to be the standard way that you can get Kubernetes from any of the Dell Group companies, whether that's VMware or EMC. That gives us one consistent target for compatibility, because one of the things that I pointed out in the keynote was inconsistency is the enemy in the data center. That's what makes operations difficult. >> And Kubo was announced at Cloud Foundry, Stu Miniman covered it, but that wasn't commercially available. That's the nuance, right? >> That's right, and that still is available in the open source. So what we've committed to is, we've said, every time that we update Google Container Engine, Pivotal Container Service is also going to update, so we have constant compatibility, and that's delivered on top of VMware's infrastructure including NSX for networking. And then the final twist is a big reason why people choose Google Cloud is because of our services. So Bigtable; BigQuery, a dynamically scaling data warehouse that we run an enormous amount of Google workloads on; Spanner, right, which is why all of your data is consistent globally across Google's planet-scale data centers. And finally, all of our new machine learning and AI investments. Those services will be delivered down to Pivotal Container Service, right, that's going to be there out of the box at launch and we'll keep adding to that catalog. >> It's just that at Google Next there were a lot of conversations, oh, Google's catching up to Amazon. Amazon's done a great job, no doubt about it. We love Amazon. Andy Jassy was here as well. >> Super capable, very competent engineering team. >> There's a lot of workloads in the VMware community that run on AWS, but it's not the only game in town. Jerry Chen, investor in Docker, friend of ours, we know, called this years ago. It's not going to be a one-cloud, winner-take-all game. Clearly. But there's the big three lining up, AWS, Microsoft, Google, you guys are doing great. So I got to ask you, what is the biggest misconception that people have about Google Cloud out in the market?
'Cause a lot of enterprises are used to running ops, maybe not as much dev as there is ops, and dev ops comes in with cloud native, there's a lot of confusion, so what is the thing that you'd like to clarify about Google that they may not know about? >> The single most important thing to clarify about Google Cloud is that our strategy is open-hybrid cloud. We think that we are in an amazing place to run workloads; we also recognize that compute belongs everywhere. We think that the durable state of computing is more of a mosaic than a uni-directional arrow that says everything goes to cloud. We think you want to run your containers and your VMs in clouds. We think you want to run them in your data centers. We also think you want to move them around. So we've been diehard committed to building out the open-source projects, the protocols to let all of that information flow, and then providing services that can get anywhere. So open-hybrid cloud is the strategy, and that's what we've committed to with Kubernetes, with TensorFlow, with Apache Beam, with so much of the open source that we've contributed to Linux and others, and then maintaining open standards compatibility for our services. >> Well, it's great to see you at Google because I know your history, great open source guy, you know open source, it's been really part of your life, and bringing that to Google's great, so congratulations. >> There's a reason for that though, it's pragmatic. This is not a crazy crusade. The value of open source is giving control to the customer. And I think that the most ethical way that you can build businesses and markets is based on customer choice. Giving them the ability to move to where they want. Reducing their costs of switching. If they stay with you, then you're really producing a value-added service. So I've spent time in the operator shoes, in the developer shoes, and in the vendor shoes. When I've spent time buying and running the software on my own, I really always valued and preferred things that would let me move my stuff around. I preferred open source. So that's really the method to the madness here. It's not about opening everything up insanely, giving everything away. It serves customers better, and in the long run, the better you serve customers, you'll build a winning business. >> We're here on the ground floor at VMworld 2017 in Las Vegas, where behind us is the VM Village. And obviously Sam was on stage with the big announcement with Pivotal and VMware. And this is kind of important now, we got to debate now. Usually I'm not the contrarian in the group, I'm usually the guy who's like yeah, rah rah, entrepreneurial, optimistic, yeah we can do that! You know that future's here, go to the future! But I was kind of skeptical, and I told VMware, and I saw Pat Gelsinger and Michael Dell in the hallways and I'm like, they thought this was going to be the big announcement, and it was their big announcement, but I was kind of like, guys, I mean, it's the long game. These guys in the VMware community, they're operations guys, they're not going to connect the dots, and there was kind of an applause but not a standing ovation that Google would've gotten at a Google Next conference where the geeks would've been like going crazy. What is the operational dynamic that you're seeing in this market that Google's looking at and bringing value to, so that's the question for you.
>> This is what the big change in the industry is: going from only worrying about increasing application velocity to figuring out how to do that with reliability. So there's a whole community of operators that I think many of us have left behind as we've talked about clouds and cloud data. We've done a great job of appealing to developers, enabling them to be more productive, but with operators, we've kind of said, well, your mileage may vary, or we don't have time for you, or you have to figure it out yourself. I think the next big phase in adoption of cloud native technology is to say, first of all, open-hybrid, run your stuff wherever you want. >> Well, you've got to have experience running cloud. Now you bring that knowledge out here. >> And that's the next piece. How do we offer you the tools and the skills that you need as an operator to have that same consistency, those same guarantees you used to have, and move everything forward in the future? Because if you turn one audience, one community, into the bad people who are holding everything back, that's a losing proposition; you have to give everybody a path to win, right? Everybody wants to be the good guy. So I think now we need to start paying really close attention to operators and be approachable, right? I would like to see GCP become the most approachable cloud. We're already well known as the most advanced cloud. But can we be the easiest to adopt as well, and that's our challenge, to get the experience. >> You got to get that touch that these enterprise teams historically have had, but it's interesting, I mean, the mosaic you'd mentioned requires some unification, right? You got to be likable. You got to be approachable. And that's where you guys are going, I know you guys are building out for that, but the question is, for you, because Google has a lot of experience, and I know from personal knowledge Google's depth of people and talent, not always the cleanest execution out to the market in terms of the front-facing white glove service that some of these other companies have done, but you guys are certainly strong. >> Well, I think this is where Diane Greene has been driving the transformation, I mean like, she breathes, eats, sleeps, dreams enterprise. So, being both a board member at Google and being the SVP of Google Cloud, she's really bringing the discipline to say, you know, white glove service is mandatory. We have a pretty substantial professional services organization and are building out partnerships with Accenture, with PwC, with Deloitte, with everyone to make sure that these things are all serviceable and properly packaged all the way down to the end user. So, no doubt there's more room for us to improve, there's miles to go on the journey, but the focus and the drive to make sure that we're delivering the enterprise requirements, Diane never lets us stop thinking about that. >> It's like math, right, the order of operations is super important, and there's a lot of stuff going on in the cloud right now that's complex. >> Yes. >> Ease of use is the number one thing that we're hearing, because one, it's a moving train in general, right? But the cloud's growing, a lot of complexity, how do you guys view that? And the question I want to ask you is, we know what cloud looks like today. Amazon, they're doing great. Multi-horse race if you will. But in 2022, the expectations and what it looks like then is going to be completely different, if you just take the trajectory of what's happening.
So cleaning up Kubernetes, making that manageable, all the self-updates, makes a lot of sense, and I think that's the dots no one's connecting here, I get the long game, but what's the customer's view in your opinion as someone who's sitting back and with the Google perch looking out over the horizon, 2022, what's it like for the customer? >> That's an outstanding question. So I think, 2022, looking back, we've actually absorbed so much of this complexity that we can provide ease of use to every workload and to every segment. Backing into that, ease of use looks different, like, let's think about tooling, ease of use looks different to an electrician versus a carpenter versus a plumber. They're doing different jobs, they need different tools, so I think about those as different audiences and different workloads. So if you're trying to migrate virtual machines to a cloud, ease of use means a thing and it includes taking care of the networking layer: how do we make sure that our cloud network shows up like an on-premises network, and you don't have to set up some weird VPC configuration, how can those just look like part of your LAN, subject to your same security controls. That's a whole path of engineering for a particular division of the company. For a different division of the company focused on databases, ease of use is wow, I've got this enormous database, I'm straining at the edges, how do I move that to the cloud? Well, what kind of database is it, right? Is it a SQL database? Is it a NoSQL database? So engineering that in, that's the key. The other thing that we have to do for ease of use is upskilling. So a lot of things that we talked about before are the need to drive IT efficiency through automation. But who's going to teach people how to do the automation, especially while they're being held to a very high SLA standard for their own data center and held to a high standard for velocity of movement to the cloud? This is where Google has invented a discipline called SRE, or site reliability engineering, and it's basically the meta discipline around what many people call dev ops. We think that this is absolutely teachable, it's learnable, it's becoming a growing community. You can get O'Reilly books on the topics. So I think we have an accountability to the industry to go and teach every operator and every operating group, hey, here's what SRE looks like, some of your folks might want to do this, because that will give you the lift to make all of these workloads much easier to manage, 'cause it's not just about velocity, it's also about reliability. >> It's interesting, we've got about a minute left or so. I'm just going to get your thoughts on this because you've certainly seen it on the developer side, stack wars, whatever you want to call them, the my-stack-runs-this-tech, but last night I heard in the hallway here multiple times the general consensus of two stacks coming together, not just software stacks, hardware stacks, you're seeing things that have never run together or been tested together before. So the site reliability is a very interesting concept, and developers get pissed off when stacks don't work, right? So this is a super kind of nuance in these new use cases that are emerging, because stuff's happened that's never been done before. >> Yeah, so this is where the common tutorials get really interesting, especially as we build out a planetary scale computer at Google.
Right, we're no longer thinking about the GPU as part of your daughterboard; we think about racks of GPUs as part of your data centers using NVIDIA K80s, and what it means to have 180 teraflops of tensor processing capability in a Cloud TPU. So getting container centric is crucial, and making it really easy to attach to all of those devices, by having open source drivers, making sure they're all Linux compatible and developers can get to them, is going to be part of the substrate to make sure that application developers can target those devices, operators can set a policy that says, yes, I want this to deploy preferentially to environments with a TPU or a GPU, and that the whole system can just work and be operable. >> Great, Sam thanks so much for taking the time to stop by. One on one conversation with Sam Ramji, who's at Google Cloud, he's the vice president of product management and developer platforms for Google. We'll see you at Google Next. Thanks for spending the time. I'm John Furrier, thanks for watching. >> Thank you John.
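Ramji's closing point, that operators should be able to set policy so workloads deploy preferentially to environments with a TPU or a GPU, maps in Kubernetes terms to a scheduling constraint on accelerator nodes. The following is a minimal sketch using the Kubernetes Python client; the `cloud.google.com/gke-accelerator` label and the `nvidia.com/gpu` resource name are assumptions made for illustration and are not details taken from the interview.

```python
# Hypothetical sketch: steering a workload onto GPU nodes with the
# Kubernetes Python client. The node label and device-plugin resource
# name below are assumptions, not taken from the interview.
from kubernetes import client, config

def gpu_pod(name: str = "trainer", image: str = "example/train:latest") -> client.V1Pod:
    container = client.V1Container(
        name=name,
        image=image,
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}  # ask the scheduler for one GPU
        ),
    )
    spec = client.V1PodSpec(
        containers=[container],
        # Require nodes that expose a K80 accelerator.
        node_selector={"cloud.google.com/gke-accelerator": "nvidia-tesla-k80"},
        restart_policy="Never",
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"app": name}),
        spec=spec,
    )

if __name__ == "__main__":
    config.load_kube_config()  # uses the current kubectl context
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=gpu_pod())
```

Note that a nodeSelector makes the placement a hard requirement; a literal "preferentially" would instead use node affinity with a preferred-during-scheduling rule, a design choice the interview leaves open.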
Brian Stevens, Google Cloud - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE
>> Narrator: Live from Boston, Massachusetts. It's theCUBE, covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> Hi, welcome back, I'm Stu Miniman, joined by my cohost John Troyer, and happy to welcome back to the program Brian Stevens, who's the CTO of Google Cloud. Brian, thanks for joining us. >> I'm glad to, it's been a few years. >> All right, I wanted to bounce something off you. We always talk about, you know, it's like open source. You worked in the past for what is considered the most successful open source company for monetizing open source, which is Red Hat. We have posited at Wikibon that it's not necessarily the company, it's not only the companies that sell a product or a solution that make money off it, but I said, if it wasn't for things like Linux in general and open source, we wouldn't have a company like Google. Do you agree with that? You look at the market cap of a Google; I said if we didn't have Linux and we didn't have open source, Google probably couldn't exist today. >> Yeah, I don't think any of the hyper scale cloud companies would exist without open source and Linux and Intel. I think it's a big part of the stack, absolutely. >> All right. You made a comment at the beginning about what it means to be an open source person working at Google. The joke we all used to make was the rest of us are using what Google did 10 years ago; it eventually goes from that whitepaper all the way down to some product that you used internally and then maybe gets spun off. We wouldn't have Hadoop if it wasn't for Google. Just some of the amazing things that have come out of those people at Google. But what does it mean to be open source at Google and with Google? >> You get both, right? 'Cause I think that's the fun part is I don't think a week goes by where I don't get to discover something coming out of a research group somewhere. Now the latest is machine learning, you know, Spanner, because they'd learned how to do distributed time synchronization across geo data centers, like who does that, right? But Google has both the people and the desire and the ability to invest in it on the research side. And then you marry that innovation with everything that's happening in open source. It's a really perfect combination. And so instead of building these proprietary systems, it's all about how do we actually not just contribute to open source, but how do we actually build that interoperability framework, because you don't want cloud to be an island, you want it to be really integrated into developer tools, databases, infrastructure, et cetera. >> And a lot of that sounds like it plays into the Kubernetes story, 'cause, you know, Kubernetes is a piece that allows some similarities between wherever you place your data. Maybe give us a little bit more about what Google, you know, how do you decide what's internal. I think about like the Spanner program, which there's some other open source pieces coming up, looks like they read the whitepaper and they're trying to do some pieces. You said less whitepapers, more code coming out of people, what does that mean? >> It's not that we'll do less whitepapers. 'Cause whitepapers are great for research, and Google's definitely a research-strong, academic-oriented company. It's just that you need to go further as well.
So that was, you know, what I was talking about like with GRPC, creating an Apache project I think was the first time for streaming analytics, right, was the first time that I think Google's done that. Obviously, been involved for years at the Linux kernel, compilers, et cetera. I think it's more around what do developers need, where can we actually contribute to areas, because what you don't want, what we don't want is you're on premise and you're using one type of system, then you move to Google Cloud and it feels like there's impedance. You're really trying to get rid of the impedance mismatch all the way across the stack, and one of the best ways you can do that is by contributing new system designs. There's a little bit less of that happening in the analytics space now though, I think the new ground for that is everything that's happening in machine learning with Tensor Flow et cetera. >> Yeah, absolutely. There was some mention in the keynote this morning, all of the AI and ML, I mean, Google with Tensor Flow, even Amazon themselves getting involved more with open source. You said you couldn't build the hyper scales without them, but is that the, do they start with open source, do you see, or? >> Well, I think that most people are running on a Linux backplane. It's a little bit different in Google 'cause we got an underlying provisioning system called the Borg. And that just works, so some things work, don't change them. Here is where you really want to be open source first are areas that are just under active evolution, because then you actually can join that movement of active evolution. Developer tools are kind of like that. Even machine learning. Machine learning's super strategic to just about every company out there. But what Google did by actually open sourcing Tensor Flow is now they created a canvas, that community, we talk about that here, but for data scientists to collaborate, and these are people that didn't do much in open source prior, but you've given that ability to sort of come up with the best ideas and to innovate in code. >> I wanted to ask a little bit about the enterprise, right. We can all make jokes about enterprising is what everybody should've been doing 10 years ago, and they're finally getting to. But on the other hand, Red Hat, very enterprise focused company. OpenStack, service provider and very enterprise focused. One of the things that Google Cloud is doing... Well, I guess the criticism has typically been how does Google as a company and as a culture and as a cloud focused on the enterprise, especially bringing advanced topics like machine learning and things like that, which to a traditional IT person are a little foreign. So I just am interested in kind of how you're viewing, how do we approach the needs of the enterprise, meet them where they are today, while yet giving them an access to a whole set of services and tools that are actually going to take them into a business transformation stance? >> Sure. And that's because you end up as a public cloud provider with the enterprise, you end up having multiple conversations. You certainly have one of your primary audiences, the IT team, right. And so you have to earn trust and help them understand the tools and your strategy and your commitment to enterprise. And then you have CSOs, right, and the CEO, that's worried about everything security and risk and compliance, so it's a little bit different than your IT department. 
And then what's happening with machine learning and some of the higher end services is now you're actually building solutions for lines of business. So you're not talking to the IT teams with machine learning and you're not talking to the CSOs, you're really talking around business transformation. And when you're actually, if you're going into healthcare, if you're going into financial, it's a whole different team when you're talking about machine learning. So what happens is Google's really got a segmented three sort of discreet conversations that happen at separate points of time, but all of which are enterprise focused, 'cause they all have to marry together. Even though there may be interest in machine learning, if you don't wrap that in an enterprise security model and a way that IT can sustain and enable and deal with identity and all the other aspects, then you'll come up short. >> Yeah. Building on that. One of the critiques of OpenStack for years has been it's tough. I think about one of the critiques of Google is like, oh well, Google build stuff for Google engineers, we're not Google engineers, you know, Google's got the smartest people and therefore we're not worthy to be able to handle some of that. What's your response to that? How do you put some of those together? >> Of course, Google's really smart, but there's smart people everywhere. And I don't think that's it. I think the issue is, you know, Google had to build it for themselves, right, they'd build it for search and build it for apps and build it for YouTube. And OpenStack's got a harder problem in a way, when you think about it, 'cause they're building it for everybody. And that was the Red Hat model as well, it's not just about building it for Goldman Sachs, it's building it for every vertical. And so it's supposed to be hard. This isn't just about building a technology stack and saying we're done, we're going to move on. This community has to make sure that it works across the industry. And that doesn't happen in six years, it takes a longer period of time to do that, and it just means keeping your focus on it. And then you deal with all the use cases over time and then you build, that's what getting to a unified commoditized platform delivers. >> I love that, absolutely. We tend to oversimplify things and, right, building from the ground up some infrastructure stack that can live in any data center is a big challenge. I wrote an article years ago about Amazon hyperoptimizes. They only have to build for one data center, it's theirs. At Google, you understand what set of applications you're going to be running, you build your applications and the infrastructure supports it underneath that. What are some of the big challenges you're working on, some of the meaty things that are exciting you in the technology space today? >> In a way, it's similar. In a way, it's similar, it's just that at least our stack's our stack, but what happens is then we have to marry that into the operational environments, not just for a niche of customers, but for every enterprise segment that's out there. What you end up realizing is that it ends up becoming more of a competency challenge than a technology issue because cloud is still, you know, public cloud is still really new. It's consolidating but it's still relatively new when you start to think about these journeys that happen in the IT world. So a lot of it for us is really that technical enablement of customers that want to get to Google Cloud, but how do you actually help them? 
And so it's really a people and processes kind of conversation over how fast is your virtual machine. >> One of the things I think is interesting about that Google Cloud that has developed is the role of the SRE. And Google has been, has invented that, wrote the book on it, literally, is training others, has partnerships to help train others with their SREs and the CRE program. So much of the people formerly known as sysadmins, in this new cloud world, some of them are architects, but some of them will end up being operators and SREs. How do you see the balance in this upscaling of kind of the architecture and the traditional infrastructure and capacities and app dev versus operations, how important is operations in our new world? >> It's everything. And that's why I think people, you know... What's funny is that if you do this code handoff where the software developers build code and then they hand it to a team to run and deploy. Developers never become great at building systems that can be operationally managed and maintained. And so I think that was sort of the aha moment, as the best I understand the SRE model at Google is that until you can actually deliver code that can be maintained or alive, well then the software developer owns that problem. The SRE organization only comes in at that point in time where they hand up their, and they're software developers. They're every bit as skilled software developers as the engineers are that are building the code, it's just that's the problem they want to decode, which I think is actually a harder problem than writing the code. 'Cause when you think about it for a public cloud, its like, how do you actually make change, right, but keep the plane flying? And to make sure that it works with everything in an ecosystem. At a period of time where you never really had a validation stage, because in the land of delivering ISV software, you always have the six month, nine month evaluation phase to bring in a new operating system or something else, or all the ecosystem tests around that. Cloud's harder, the magic of cloud is you don't have that window, but you still have to guarantee the same results. One of the things that we did around that was we took the page out of the SRE playbook, which is how does Google do it, and what we realized is that, even though public cloud's moved the layers up, enterprises still have the same issue. Because they're deploying critical applications and workloads on top. How do they do that and how do they keep those workloads running and what are their mechanisms for managing availability, service level objectives, share a few dashboards, and that's why we created the CRE team, which is customer reliability engineering, which is a playbook of SRE, but they work directly with end users. And that's part of the how do we help them get to Google Cloud, part of it's like really understanding their application stacks and helping them build those operational procedures, so they become SREs if you will. >> Brian, one of the things I, if you look at OpenStack, it's really, it's the infrastructure layer that it handles, when I think about Google Cloud, the area that you're strongest and, you know, you're welcome to correct me, but it's really when we talk about data, how you use data, how analytics, your leadership you're taking in the machine learning space. Is it okay for OpenStack to just handle those lower levels and let other projects sit on top of it? And curious as to the developing or where Google Cloud sits. 
>> I think that was a lower level aha moment for me, even prior to Google, was it was, I did have a lens and it was all about infrastructure. And I think the infrastructure is every bit as important as it ever was. But the fact that some of these services that don't exist in the on-premise world that live in Google Cloud are the ones that are transformative change, as opposed to just giving you operational, easing the operational burden, easing the security burden. But it's some of these add-on services that are the ones that really changed here, bring around business transformation. The reason we have been moving away from Hadoop as an example, not entirely, but just because Hadoop's a batch oriented application. >> Could go to Spark, Flink, everything beyond that. >> Sure, and also now when you get to real time and streaming ingest, you can have adjusted data pipelines, data coming from multiple sources. But then you can action on that data instantly, and a lot of businesses require, or ours certainly does and I think a lot of our customers' businesses do, the time to action really matters, and those are the types of services that, at least at scale, don't really exist anywhere else, and machine learning, the ability of our custom ASICs to support machine learning. But I don't think it's a one versus the other, I think that brings about how do you allow enterprises to have both. And not have to choose between public cloud and on premise, or doing (mumbles) services or (mumbles) services, because if you ask them, the best thing they can have is actually how do you marry the two environments together so they don't look, again, back to that impedance difference. >> Yeah, and I think that's a great point, we've talked OpenStack is fitting into that hybrid or multi-cloud world a bunch. The challenge I guess we look at is some of those really cool features that are game changers that I have in public cloud that I can't do in my own data center, how do we bridge that? Started to see the reach or the APIs that do that, but how do you see that playing out? >> Because you don't have to bring them in. Because if you think about the fabric of IT, the fabric of IT is that Google's data center in that way just becomes an extension of the data center that a large enterprise is already using anyway. So it's through us. So they aren't going to see the lines of distinction, only we and sort of the IT side see that. It isn't going to be seen, as long as they have an existing platform and they can take advantage of those services, and it doesn't mean that their workload has to be portable and the services have to exist in both places, it's just a data extension with some pretty compelling services. >> I think back, you know, Hadoop was let me bring the compute to the data 'cause the data's big and can't be moved. Look at edge computing now, I'm not going to be able to move all that data from the edge, I don't have the networking connectivity. There's certain pieces which we'll come back to, you know, a core public cloud, but I wonder if you can comment on some of those edge pieces, how you see that fitting in? We've talked a little bit about it here at OpenStack, but 'cause you're Google. >> I think it's the evolution. When we look at, we just even see the edge of our network, the edge of our network is in 173 countries and regions globally. And so that edge of the network is full compute and caching. And so even for us, we're looking at what sort of compute services do you bring to the edge of the network.
We're like, low latency really matters and proximity matters. The easiest obvious examples are gaming, but there's other ones as well, trading. But still though, if you want to take advantage of that foundation, it shouldn't be one that you have to dive into the specificities of a single provider; you'd really want that abstraction layer across the edge, whether that's Docker and a defined set of APIs around data management and delivery and security. That probably gives you that edge computing sell, and then you really want to build around that on Google's edge, you want to build around that on a telco's edge. So I don't think it really becomes necessarily around whether it's centralized or it's the edge, it's really what's that architecture to deliver. >> All right. Brian, I want to give you the opportunity, final word, things either from OpenStack, retrospectively, or Google looking forward that you'd like to leave our audience with. >> Wow, closing remarks. You know, I think the continuity here is open source. And I know the backdrop of this is OpenStack, but it's really that open source is the accepted foundation and substrate for IT computing up the stack, so I think that's not changing. The faces may change and what we call these projects may change, but that's the evolution and I think there's really no turning back on that now. >> Brian Stevens, always a pleasure to catch up with you, we'll be back with lots more coverage here with theCUBE, thanks for watching. (energetic music)
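Stevens' description of SRE and the customer reliability engineering team centers on service level objectives and the error budget an SLO implies. The sketch below shows that arithmetic in a self-contained form; the 99.9% target and the request counts are made-up numbers used only to illustrate the calculation, not figures from the interview.

```python
# Illustrative error-budget arithmetic in the spirit of the SRE/CRE
# discussion above. The SLO target and request counts are invented
# example values, not data from the interview.
def allowed_downtime_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of total unavailability the SLO permits over the window."""
    return (1.0 - slo) * window_days * 24 * 60

def error_budget_remaining(slo: float, good: int, total: int) -> float:
    """Fraction of the error budget left, given good vs. total requests."""
    budget = (1.0 - slo) * total   # failures the SLO tolerates
    burned = total - good          # failures actually observed
    return max(0.0, (budget - burned) / budget) if budget else 0.0

if __name__ == "__main__":
    slo = 0.999  # "three nines" over a 30-day window
    print(f"allowed downtime: {allowed_downtime_minutes(slo):.1f} min/month")
    print(f"budget remaining: {error_budget_remaining(slo, good=9_995_500, total=10_000_000):.1%}")
```

The budget is the concrete form of the velocity-versus-reliability trade Stevens describes: while budget remains, teams keep shipping changes; once it is spent, the emphasis shifts to reliability work.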
Sam Ramji, Google Cloud Platform - Red Hat Summit 2017
>> Announcer: Live, from Boston, Massachusetts, it's the Cube. Covering Red Hat Summit 2017. Brought to you by Red Hat. (futuristic tone) >> Welcome back to the Cube's coverage of the Red Hat Summit here in Boston, Massachusetts. I'm your host, Rebecca Knight, along with my co-host Stu Miniman. We are welcoming right now Sam Ramji. He is the Vice President of Product Management Google Cloud Platforms. Thanks so much for joining us. >> Thank you, Rebecca, really appreciate it. And Stu good to see you again. >> So in your keynote, you talked about how this is the age of the developer. You said this is the best time in history to be a developer. We have more veneration, more cred in the industry. People get us, people respect us. And yet you also talked about how it is also the most challenging time to be a developer. Can you unpack that a little bit for our viewers? >> Yeah, absolutely. So I think there's two parts that make it really difficult. One is just the velocity of all the different pieces, how fast they're moving, right? How do you stay on top of all the different latest technology, right? How do you unpack all of the new buzzwords? How do you say this is a cloud, that's not a cloud? So you're constantly racing to keep up, but you're also maintaining all of your old systems, which is the other part that makes it so complex. Many old systems weren't built for modernization. They were just kind of like hey, this is a really cool thing, and they were built without any sense of the history, or the future that they'd be used in. So imagine the modern enterprise developer who's got a ship software at high rates of speed, support new business initiatives, they've got to deliver innovation, and they have to bridge the very new with the very old. Because if your mobile app doesn't talk to your mainframe, you are not going to move money. It's that simple. There's layers of technology architecture. In fact, you could think of it as technology archeology, as I mentioned in the keynote, right, this we don't want to create a new genre of people called programmer archeologists, who have to go-- >> I'm picturing them just chipping away. >> Sam: I don't think it'll be as exciting as Indiana Jones. >> No. >> Digging through layers of the stack is not really what people want to be doing with their time. >> Sam: Temple of the lost kernel. >> I love it. >> So Sam, it's interesting to kind of see, I was at the Google Cloud event a couple months ago, and here you bring up the term open cloud, which part of me wants to poke a hole in that and be like, come on, everybody has their cloud. Come on, you want to lock everybody in, you've got the best technology, therefore why isn't it just being open because it's great to say open and maybe people will trust you. Help explain that. >> Puppies, freedom, apple pie, motherhood, right. >> Stu: Yeah, yeah. (laughs) >> So there's a couple sides to that. One, we think the cloud is just a spectacular opportunity. We think about 1.2 trillion dollars in current spend will end up in cloud. And the cloud market depending on how you measure it is in the mid 20 billions today. So there's just unbounded upside. So we don't have to be a aspirational monopolist in order to be a successful business. And in fact, if you wind the clock forward, you will see that every market ends up breaking down into a closed system and a closed company, and an open platform. And the open platforms tend to grow more slowly, sort of exponential versus logarithmic, is how we think about it. 
So it's a pragmatic business strategy. Think about Linux in '97. Think about Linux in 2002. Think about Linux in 2007. Think about Linux in 2012. Think about Linux today. Look at that rate. It's the only thing that you're going to use. So open is very pragmatic that way. It's pragmatic in another direction which is customer choice. Customers are going to come for things that give them more options. Because your job is to future proof your business, to create what in the financial community call optionality. So how do you get that? In 2011, about eight other people and I created a nonprofit called the Open Cloud Initiative. And the Initiative is long since dead, we didn't fund it right, we kind of got these ideas baked, and then moved on. >> Stu: There's another OCI now. >> That's right, it's the Open Container Initiative. But we had three really crisp concepts there. We said number one, an open cloud will be based on open source. There won't be stuff that you can't get, can't replicate, can't build yourself. Second, we said, it'll have open access. There'll be no barriers to entry or exit. There won't be any discrimination on which users can or can't come in, and there won't be any blockers to being able to take your stuff out. 'Cause we felt that without open access, the cloud would be unsafe at any speed, to borrow a quote from Ralph Nader. And then third, built on an open ecosystem. So if you are assuming that you have to be able to be open to tens of thousands of different ideas, tens of thousands of different software applications, which are maybe database infrastructure, things that as a cloud provider, you might want to be a first party provider of. Well those things have to compete, or trade off or enrich each other in a consistent way, in a way that's fair, which is kind of what we mean when we say open ecosystem, but being able to be pulled through is going to give you that rate of change that you need to be exponential rather than logarithmic. So it's based on some fairly durable concepts, but I welcome you to poke holes in it. >> So we did an event with MIT a little while back. We had Marshall Van Alstyne, professor at BU who I know you know. He's an advisor at Cloud Foundry, and he talked about those platforms and it was interesting, you know, with the phone system you had Apple who got lots of the money, smaller market share as opposed to Android, which of course comes out of Google, has all of the adoption but less revenue. So, not sure it's this, yeah. >> Interestingly, we've run those curves, and you kind of see that same logarithmic versus exponential shift happening in Android. So we've seen, I don't have the latest numbers on the top of my head, but that is generating billions of dollars of third party revenue now. So share does shift over time in favor of openness and faster innovation. >> So let's bring it back to Red Hat here, because if I talk to all the big public cloud guys, Microsoft has embraced open source. >> And they're not just guys, actually, there's lots of women. >> Rebecca: Yes, thank you. >> Stu: I apologize. >> Sorry, I'm in a little bit of a jam here, where I'm trying to tell people the collective noun for technologists is not guys. >> Stu: Okay. >> It could be people, it could be folks, internally we use squirrels from time to time, just to invite people in. >> So, when I talk to the cloud squirrels, Microsoft has embraced open source. Amazon has an interesting relationship. >> I was there when that happened. 
>> You and I both know the people that they've brought in who have very good credibility in the open source community that are helping out Amazon there. Is it Kubernetes that makes you open because I look at what Red Hat's doing, we say okay, if I want to be able to live across many clouds or in my own data centers, Kubernetes is a layer to do that. It comes back to some of the things like Cloud Foundry. Is that what makes it open because I have choice, or is there more to it that you want to cover from an open cloud standpoint, from a Google standpoint? >> Open and choice effectively is a spectrum of effort. If it's incredibly difficult, it's the same as not having a choice. If it's incredibly easy, then you're saying actually, you really are free to come and go. So Kubernetes is kind of the brightest star in the solar system of open cloud. There's a lot of other technologies, new things that are coming out, like istio and pluri. I don't want to lose you in word soup. Linker D, container D, a lot of other things, because this is a whole new field, a whole fabric that has to come to bear, that just like the internet, can layer on top of your existing data centers or your existing clouds, that you can have other applications or other capabilities layered on top of it. So this permission-less innovation idea is getting reborn in the cloud era, not on top of TCP/IP, we take that for granted, but on top of Kubernetes and all of the linked projects. So yeah, that's a big part of it. >> I want to continue on with that idea of permission-less innovation and talk about the culture of open source, particularly because of what you were saying in the keynote about how it's not about the code, it's about the community. And you were using words like empathy and trust, and things that we don't necessarily think of as synonymous with engineers. >> Sam: Isn't it? >> So, can you just talk a little bit about how you've seen the culture change, particularly since your days at Microsoft, and now being at Google, in terms of how people are working together? >> Absolutely, so the first thing is why did it change? It became an economic imperative. Let's look at software industry competition back in the 90s. In general, the biggest got the mostest. If you could assemble the largest number of very intelligent engineers, and put them all on the same project, you would overwhelm your competition. So we saw that play out again and again. Then this new form of collaboration came around, not just birthed by Linux, but also Apache and a number of other things, where it's like oh, we don't have to work for the same company in order to collaborate. And all of a sudden we started seeing those masses grow as big as the number of engineers who went a single company. Ten thousand people, ten thousand engineers, share the copyright to the Linux kernel. At no point have they worked at the same company. At no point could a company have afforded to get all of them together. So this economic imperative that marks what I think of as the first half of the thirty years of open source that we've been in. The second half has been more us all waking up, and realizing open source has got to be inclusive. A diverse world needs diverse solutions built by diverse people. How do we increase our empathy? How do we increase our understanding so that we can collaborate? 
Because if we think each other is a jerk, if we get turned off of building our great ideas into software because some community member has said something that's just fundamentally not cool, or deeply hurtful, we are human beings and we do take our toys away, and say I'm not going to be there. >> That's the crux of it too. >> It's absolutely a cutthroat industry, but I think one of the things I'm seeing, I've been in Silicon Valley for 22 years, less three years for a stint at Microsoft, I've actually started to see the community become more self-reflective and like, if we can have cutthroat competition in corporations, we don't have to make that personal. 'Cause every likelihood of open source projects is you're employed as a professional engineer at a company, and that employment agreement might change. Especially in containers, right? Great container developers you'll see they move from one company to another, whether it's a giant company like Google, or whether it's a big startup like Docker, or any range of companies. Or Red Hat. So, this sort of general sense that there is a community is starting to help us make better open source, and you can't be effective in a community if you don't have empathy and you don't start focusing on understanding code of conduct community norms. >> Sam, I'm curious how you look at this spectrum of with this complexity out there, how much will your average customer, and you can segment it anywhere you want, but they say, okay I'm going to engage with this, do open source, get involved, and what spectrum of customers are going to be like, well, let me just run it on Google because you've got a great platform, I'm not going to have Google engineers and you guys have lots of smart people that can do that in any of the platform. How do you see that spectrum of customer, is it by what their business IT needs are, is it the size of the customer, is there a decision tree that you guys have worked out yet to try to help end users with what do they own, what do they outsource? It's in clouds more than outsourcing these days. The deal of outsourcing was your mess for less, and this should be somewhat more transformational and hopefully more business value, right? >> Yeah, Urs Hölzle, who's our SVP of Technical Infrastructure, says, the cloud is not a co-location facility. It is different, it is not your server that you shipped up and you know, ran. It's an integrated set of services that should make it incredibly easy to do computing. And we have tons of very intelligent women and men operating our cloud. We think about things like how do you balance velocity and reliability? We have a discipline called site reliability engineering. We've published a book on it, a community is growing up around that, it's sort of the mainstream version of dev ops. So there are a bunch of components that any company at any size can adopt, as long as you need both velocity and reliability. This has always been the tyranny of the or. If I can move fast I can break things, but even Mark Zuckerberg recently said you know, move fast and break fewer things. Kind of a shift, 'cause you don't want to break a lot of people's experience. How do you do that, while making sure that you have high reliability? It really defies simple classification. We have seen companies from startups to mom and pop shops, all the way to giant enterprises adopting cloud, adopting Google cloud platform. One of the big draws is of course, data analytics. 
Google is a deeply data intensive business, and we've taken that to eleven basically with machine learning, which is why it was so important to explain tense or flow, offer that as open source, and be able to move AI forward. Any company, at any size that wants to do high speed, high scale data analytics, is coming to GCP. We've seen it basically break down into, what's the business value, how close is it to the decision maker, and how motivated is an engineer to learn something different and give cloud a try. >> Because the engineer has to get better at working with the data, understanding the data, and deriving the right insights from the data. >> You're exactly right. Engineers are people, and people need to learn, and they need to be motivated to change. >> Sam, last question I have for you is, you've been involved in many different projects. We look at from the outside and say, okay, how much should be company driven, how much does a foundation get involved? We've seen certain foundations that have done very well, and others that have struggled. It's very interesting to watch Google. We'd give you good as we've talked on the Cube so far. Kubernetes seems to be going well. Great adoption. Google participates, but not too much, and Red Hat I think would agree with that. So congratulations on that piece. >> Sam: Thank you. >> What's your learnings that you've had as you've been involved in some of these various initiatives, couple foundations. We interviewed you when you were back at the Cloud Foundry, and things like that, so, what have you learned that you might want to say, hey, here's some guidelines. >> Yeah, so I think the first guideline is the core of a foundation is, the core purpose of a foundation is bootstrapping trust. So where trust is missing, then you will need that in order to create better contribution and higher velocity in the project. If there's trust there, if there's a benevolent dictator and everyone says that person's fine or that company's fine, then you won't necessarily need a foundation. You've seen a lot of changes in open source startups, dot coms that are also a dot org, shifting to models where you say well, this thing is actually so big it needs to not be owned by any one company. And therefore, to get the next level of contribution, we need to be able to bring in giant companies, then we create trust at that next level. So foundations are really there for trust. It's really important to be strong enough to get something off the ground, and this is the challenge we had at Cloud Foundry, it was a VMware project and then a Pivotal project, and many people believe this is great open source, but it's not an open community, but the technology had to keep working really well. So we how do we have a majority contributor, and start opening up, in a thoughtful process and bringing people in, until you can say what our target is to have the main contributor be less than 50% of the code commits. 'Cause then the majority is really coming from the community. Other projects that have been around for longer, maybe they started out with no majority. Those organizations, those projects tend to be self-organizing, and what they need is just a foundation to build a place that people can contribute money to, so the community can have events. So there's two very different types of organizations. 
One's almost like a charity, to say I really care about this popular open source project, and I want to be able to give something back, and others are more like a trade association, which is like, we need to enable very complex coordination between big companies that have a lot at stake, in which case you'll create a different class of foundation. >> Great, well Sam Ramji, thank you so much for being with us here on the Cube. I'm Rebecca Knight, and for your host Stu Miniman, please join us back in a bit. (futuristic tone)
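The exchange earlier in this interview about Kubernetes letting workloads live across many clouds or in your own data centers comes down, in practice, to applying the same object to more than one cluster. Below is a minimal sketch using the Kubernetes Python client; the kubeconfig context names ("gke-prod", "onprem-lab") and the container image are hypothetical placeholders, not details from the conversation.

```python
# Sketch of the "same workload, many clusters" idea from the interview:
# one Deployment object applied to a cloud cluster and an on-prem cluster
# by switching kubeconfig contexts. Context names and image are hypothetical.
from kubernetes import client, config

def make_deployment(name: str = "web", image: str = "example/web:1.0") -> client.V1Deployment:
    container = client.V1Container(
        name=name, image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    return client.V1Deployment(metadata=client.V1ObjectMeta(name=name), spec=spec)

def deploy_everywhere(contexts=("gke-prod", "onprem-lab")) -> None:
    body = make_deployment()
    for ctx in contexts:
        api = client.AppsV1Api(config.new_client_from_config(context=ctx))
        api.create_namespaced_deployment(namespace="default", body=body)

if __name__ == "__main__":
    deploy_everywhere()
```

Because the Deployment object is identical in both places, moving between environments reduces to pointing at a different context, which is the "no barriers to entry or exit" argument from the open cloud criteria in concrete form.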
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rebecca Knight | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave Schneider | PERSON | 0.99+ |
Sam Ramji | PERSON | 0.99+ |
Rebecca | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
David Schneider | PERSON | 0.99+ |
Frank Sleuben | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Mike Scarpelli | PERSON | 0.99+ |
Marshall Van Alstyne | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
CJ Desai | PERSON | 0.99+ |
Sam | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
2007 | DATE | 0.99+ |
2012 | DATE | 0.99+ |
ServiceNow | ORGANIZATION | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
2002 | DATE | 0.99+ |
2011 | DATE | 0.99+ |
John Donahoe | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Mike Scarpelli | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
22 years | QUANTITY | 0.99+ |
Urs Hölzle | PERSON | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
Mark Zuckerberg | PERSON | 0.99+ |
two parts | QUANTITY | 0.99+ |
second half | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
less than 50% | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Second | QUANTITY | 0.99+ |
'97 | DATE | 0.99+ |
first half | QUANTITY | 0.99+ |
Android | TITLE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Red Hat Summit | EVENT | 0.99+ |
Linux | TITLE | 0.99+ |
ORGANIZATION | 0.99+ | |
CUBE | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
Cloud Foundry | ORGANIZATION | 0.98+ |
Ten thousand people | QUANTITY | 0.98+ |
a year ago | DATE | 0.98+ |
eleven | QUANTITY | 0.98+ |
ten thousand engineers | QUANTITY | 0.98+ |
90s | DATE | 0.98+ |
15 | QUANTITY | 0.98+ |
OCI | ORGANIZATION | 0.98+ |
Chris Wahl, Rubrik - Google Cloud Next 2017 #GoogleNext17 #theCUBE
>> Announcer: Live, from Silicon Valley, it's theCUBE, covering Google Cloud Next '17. (funky techno music) >> Welcome back to our live coverage here of Google Next 2017, an event that last year was focused only on Google Cloud. They've actually expanded a bit, they're talking about G Suite, talking about some of the devices, and they bring in a really broad and diverse community, so when I talk to the Google people, it's not one show, it's a handful of shows. I went to the analyst event. My guest for this segment is Chris Wahl, who came in through the community event. So, excited to get that angle. Chris, thanks so much for doing the drive with me from San Francisco down to Palo Alto. For those of us not in the area, it's a 45 minute drive, it's not too bad. It's a beautiful, sunny day. It's great to catch up with you and thanks for coming. >> Always glad to be on, love being a CUBE Alumni, so, I think it's my third time. >> Wow, a three-time Alumni. It's like if you've been a host of Saturday Night Live for like seven times, you know you get the special jacket. - Automatically. >> Things like that. You're getting up there. Three times. It's like, you're not quite in Pat Gelsinger territory, but you have passed, you've been on more than Andy Jassy now. >> Wow, cool. >> I think that that's pretty impressive. >> Bucket list, accomplished. >> Exactly, so, what brings you to the Google event and tell us a little bit about the community event. >> Yeah, to be honest, I thought it was a spam email at first. I just got an invite saying, hey, we have this Google event going on, and I'm not really plugged in to the Google Universe too much. So I said, cool, I'm interested, I'll take a look. Got invited out by Sarah Novotny to a community focus day. >> Host: Sarah's awesome. Also a CUBE Alum, of course. >> Yeah, Alum, and ran OSCON, I think, in some kind of board or management capacity for quite a while. So yeah, Google Cloud Next is this week, but on Tuesday they actually had a bunch of influencers, evangelists, community members out to spend time with all sorts of Google-y Google-ers, talking about what their vision is for bridging the gap to the enterprise, what their thoughts around Kubernetes were, and just really the community in general. Which was kind of cool because it was all fresh and clean and new for me. So, it was really great to taste the Kool-Aid, and see how delicious it could be. >> Yeah, so I'm curious what your take is. I remember I did a panel at Interop a couple of years ago, and it was like, basically, hyper-scale, you're-not-Google, so what do you need to do, how do you do it, do you just use Google stuff, can you code like Google, can you act like Google, or are you just an enterprise and you're forced to live in the past. >> I think over the last couple of years, the idea of the Site Reliability Engineer has come out and been more focused on the enterprise and kind of dovetailed into the DevOps story. So, it was really interesting to hear, not only trying to talk to the enterprise, but also how they're trying to get the enterprise to kind of stop being the traditional enterprise that it's been. Which, I think, is entirely something that we practitioners have always been trying to do. No one wants to be on-call all the time and fixing these flaming disasters and things like that. But at the same time, you have to recognize that moving that much intrinsic culture poison from one side to the next is hard. They're admitting that too, it's like, we would love for you guys to be more Google-y, and to use the tools that we have here, but we're not sure you even know what the tools are or how to use them, or what kind of documentation is necessary, or what meet-ups we can go to, to find my people, you know, the practitioners. >> I want to channel our friends, the Geek Whisperers, and alright Chris, so how did you transition out of being a VMware guy to someone that does cool and interesting things now, because VMware is no longer the coolness. >> That's been the vibe, yeah. It's something I personally have been trying to, I don't think in any technology you want to be that technology specific. VMware, love it, have been doing it for 12-something years, but you don't just want to be pigeon-holed in that kind of silo. Which is actually why I wanted to come out and talk with the folks at Google around what they're doing to build a community. I think it was Sam something-or-other-- >> Host: Sam Ramji. >> Sam Ramji actually came up and said, you know, as long as we're going to exist as a company, we're going to have this community day. It's the first one they've done, and they plan to do it basically infinitely forever, because they realized they had the analysts, and things like that out there, they had all the engineers and developers, but what were they missing? The folks in the trenches that are trying to adopt and use this sort of technology. I like that aspect of it. There weren't any huge, mind-shattering results that were out there, except for I think, me personally, I like that Google kind of admitted that yeah, they hadn't been doing the best job around interfacing with the community and getting IT practitioners and operation-centric folks into the fold, welcoming into the bosom of Google, and that they were trying to work on that. And it's like, okay, awesome. Let's have a conversation, which the other half of the day was an un-conference, where we literally broke up into groups that we decided ourselves, as like a democracy of Google decision-making. We formed eight different groups. Some focused on containers, and I actually sat in on a two-hour session where we just kind of riffed on abstraction layers and where we should start working. Is it at the container level, is it at the hypervisor level, is it at the virtual machine level? And it was neat because everyone had a completely different idea and background around that. I felt like I was an alien in that conversation for a lot of it 'cause they're working on solving problems that are totally alien to my world. So I liked all that. >> It's an interesting crowd when the server-less stuff got talked about in the keynote today-- >> Yes! >> There was a big clap and I loved Brian Stevens. He's like, functions are just fragments of code, and they get applause, you know, he's kind of like-- (Chris laughs) >> It's like, either remark, I got applause for that. >> Yeah, yeah, it's pretty funny. But you know, that's the kind of people that come to this show, right? So, you checked out a thing called, what was it, Code Labs or something like that? Maybe you could talk a little bit about that. >> Yeah, yeah, there was, I had some notes there that I'd written down. Certification in Code Labs, specifically. So Code Labs was interesting 'cause it's a place that you can, you have to book it in advance, like a day in advance, and from about 11 to seven each day, they just have Google-y Google-ers, you know, very Google-y people out there that say alright, here's all our various APIs, such as the new one where you can query a video and say I'm looking for, I think in the keynote, they had "find me baseball" in this video, and it actually shows you in the timeline where baseball occurs. There's also things to do image tagging and things like that. And, I don't know, it might be difficult to grasp that API interaction at first. And so you can sit down, and they'll show you how to write code in the languages of your choice. Obviously Go is very prominent. I'm a PowerShell developer, so it's like, alright, how would you write that in Curl, and that's maybe our bridge to one another, since I don't know Go and they don't know PowerShell, or the person I was working with. So that was cool, to hear how they approach those things, because I've typically done it as an Ops person. I'm typically looking at it from the perspective of I'm trying to automate some task and feed it into an orchestration engine. And I'm not super deep on APIs in general, I like them, but ... That was cool, I liked that you're basically getting to meet with really, really awesome engineers and SREs to pick their brain and their vast decades of experience on writing code. To work with APIs and things that are Google-centric. So that was awesome. >> So it sounds like you didn't feel like this was a marketing show, right, - [Chris] No! >> that they bring in the engineers, the technical people, I mean it's not far, being in San Francisco, from the Googleplex, the Mothership is nearby. >> That's a good point because a lot of these shows have just become a sales pitch in wolf's clothing, or conference clothing, and this was ... I've never met so many really, really talented engineers all concentrated in one spot. I mean, you've got the rock stars that I think everybody knows, like Sarah, and Kelsey, that are very available and personable, but you also have a whole army of people that have a huge amount of passion around writing code and understand what your problems are and wanting to talk to you. I felt like a person, which, I've been a Google customer since, I guess, Google came out, you know, Google apps and things like that. This is really the first time I really started putting faces to the technical practitioners that work there, and they're really interested and excited about my mundane kind of problems. So, that's kind of cool. >> Yeah, I found they're definitely, they're listening, they're talking, it's really good, because right, we at our firm, we've used Google for a while and it's like, oh wait I have a challenge. Who do I call, who do I email? Nope, you should just watch the YouTube video and use it. C'mon, aren't you smart enough to use these things right? You know, was kind of how we all felt for a while. Interesting. Kinder, gentler Google than we knew in the past? >> They had the Google leaders circle and the various groups that you could join online, but it was just, you can't fake that kind of raw passion, and I sat down with some of the SREs at the community day, and it was really just, talk to me about what you do, and why, and what tools you use, and what can we do to be better? More specifically, the Dev Rel, the developer relations folks were just awesome. And they're like, is our title threatening? What meet-up should we go to? What can we do to make your life better? And I just kind of, at first, said a few comments and realized, no, this is real. They want to know my day one and day two operations, so that they can find the right tools, or if there isn't one, build one. And I don't know, that's great. I've never seen that at a conference before. So I'm hooked. I definitely plan to go again. >> Alright, so anything you didn't see that you were hoping to see, follow-up that you want to have, other cool stuff going on that you want to share? >> I almost want to do like a plea to Google that throughout the community day and at the conference, there's been a lot of commentary and some, kind of some references to, oh we don't want to tell you how to do things, we don't want to tell you how to build architecture in a certain way. Please do tell me how to do those things. At least give me a reference architecture, or some example environments, because I feel like a lot of it is just, here's some cool things you can do, kind of in isolation. Or here are some things with Kubernetes that kind of exist outside of reality. I'm looking for, alright, I don't have any of that stuff, how do I onboard into that? Here's a white paper, and that kind of jazz. >> Yeah, and we saw, you know, I hate to always bring up AWS, but AWS went from here's this giant toolbox with all these things to right, here's some services, here are some tracks, here are some, not wizards, but you know, templates you can follow for certain things. Here are people that are probably similar to you and, boy, with Google with their AI and ML and all their things that they can do to help us sort out all the TLAs that they've got, too. (Chris laughs) You know, they should be able to help going forward because, yeah, Google should be able to personalize all that to be able to work a little bit better for us as opposed to us having to just kind of figure it out a little bit. I know you played with the Google Cloud a little bit yourself-- - Yeah. >> And it wasn't as simple as you were hoping, right? >> It was hard. (both laugh) I mean-- >> Host: C'mon, if you can't figure it out, you know-- >> I don't feel like I'm the sharpest tool in the shed, but I was like, I'm kind of the representative layman ops person, and it felt very convoluted, complex, the documentation was fragmented. I'm like, just give me the wizard so that I can start fishing for myself. Just do that first hit for free, and then I'll take care of it beyond that. So, that would be my one ask to Google as a whole, but otherwise I think the tooling and the people, and the culture are all there, it's just build a few more things and I think we've got some interesting entanglements at the enterprise level once that's done. >> Okay, want to give me the final word, what's going on with you other than your new hometown of Austin, Texas. South By coming, so I know there's a lot of music and fun going on but, what's happening in your world, what's happening with Rubrik? >> Oh yeah, I'll mention South By, definitely will be there, I will not be available online or anything. I'm going to be going into sequester mode and just listen to music with my co-host actually. If you listen to the Datanauts podcast, with Ethan Banks, he's going to come by. So, we'll be at the show I guess if you want to hang out with us, hit us up. Otherwise, Rubrik's been awesome. It's definitely a rocket ship ride and it actually dovetailed into quite a few conversations I had while at Google Next. Because movement of data into and around clouds is non-trivial, so that's where the Cloud Data Management world that we're in kind of fits into that equation, and why I personally wanted to go to this show, but also professionally I thought that there'd be some inroads there to discuss with the other practitioners. >> Absolutely, the whole infrastructure side and how that plays in the public cloud, how it plays with SaaS, there's a lot of those discussions going on. Congrats, you guys have been growing some good buzz. You guys have been hiring, too, so check Chris out for all that. We'll be back, lots more coverage here of Google Cloud Next 2017, you're watching theCUBE. (funky techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris Wahl | PERSON | 0.99+ |
Sarah | PERSON | 0.99+ |
Sam Ramji | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Kelsey | PERSON | 0.99+ |
Sarah Novotny | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Andy Jassey | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Silicon Valley | LOCATION | 0.99+ |
seven times | QUANTITY | 0.99+ |
Three times | QUANTITY | 0.99+ |
Brian Stevens | PERSON | 0.99+ |
45 minute | QUANTITY | 0.99+ |
Ethan Banks | PERSON | 0.99+ |
two hour | QUANTITY | 0.99+ |
San Franscisco | LOCATION | 0.99+ |
Tuesday | DATE | 0.99+ |
three-time | QUANTITY | 0.99+ |
third time | QUANTITY | 0.99+ |
G Suite | TITLE | 0.99+ |
last year | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
Kool-Aid | ORGANIZATION | 0.99+ |
Sam | PERSON | 0.99+ |
YouTube | ORGANIZATION | 0.98+ |
Austin, Texas | LOCATION | 0.98+ |
Google Cloud | TITLE | 0.97+ |
Interop | ORGANIZATION | 0.97+ |
first one | QUANTITY | 0.97+ |
Google Next 2017 | EVENT | 0.97+ |
today | DATE | 0.97+ |
one spot | QUANTITY | 0.97+ |
PowerShell | TITLE | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
first time | QUANTITY | 0.96+ |
Saturday Night Live | TITLE | 0.95+ |
Datanauts | TITLE | 0.93+ |
eight different groups | QUANTITY | 0.93+ |
a day | QUANTITY | 0.93+ |
Kubernetes | TITLE | 0.93+ |
both laugh | QUANTITY | 0.93+ |
Curl | TITLE | 0.93+ |
VMware | ORGANIZATION | 0.92+ |
Google Universe | EVENT | 0.92+ |
this week | DATE | 0.91+ |
Google Cloud Next | TITLE | 0.91+ |
South By | ORGANIZATION | 0.88+ |
one show | QUANTITY | 0.87+ |
half | QUANTITY | 0.86+ |
one side | QUANTITY | 0.82+ |
Rubrik | ORGANIZATION | 0.82+ |
seven each day | QUANTITY | 0.81+ |
two | QUANTITY | 0.8+ |
about 11 | QUANTITY | 0.8+ |
Google Cloud Next 2017 | TITLE | 0.8+ |
Code Labs | ORGANIZATION | 0.79+ |
Raejeanne Skillern | Google Cloud Next 2017
>> Hey welcome back everybody. Jeff Frick here with theCUBE, we are on the ground in downtown San Francisco at the Google Next 17 Conference. It's this crazy conference week, and arguably this is the center of all the action. Cloud is big, Google Cloud Platform is really coming out with a major enterprise shift and focus, which they've always had, but now they're really getting behind it. And I think this conference is over 14,000 people, has grown quite a bit from a few years back, and we're really excited to have one of the powerhouse partners with Google, who's driving to the enterprise, and that's Intel, and I'm really excited to be joined by Raejeanne Skillern, she's the VP and GM of the Cloud Platform Group, Raejeanne, great to see you. >> Thank you, thanks for having me. >> Yeah absolutely. So when we got this scheduled, I was thinking, wow, last time I saw you was at the Open Compute Project 2015, and we were just down there yesterday. >> Yesterday. And we missed each other yesterday, but here we are today. >> So it's interesting, there's kind of the guts of the cloud, because cloud is somebody else's computer that they're running, but there is actually a computer back there. Here, it's really kind of the front end and the business delivery to people to have the elastic capability of the cloud, the dynamic flexibility of cloud, and you guys are a big part of this. So first off, give us a quick update, I'm sure you had some good announcements here at the show, what's going on with Intel and Google Cloud Platform? >> We did, and we love it all, from the silicon ingredients up to the services and solutions, this is where we invest, so it's great to be a part of yesterday and today. I was on stage earlier today with Urs Holzle talking about the Google and Intel Strategic Alliance, we actually announced this alliance last November, between Diane Green and Diane Bryant of Intel. And we had a history, a decade plus long of collaborating on CPU level optimization and technology optimization for Google's infrastructure. We've actually expanded that collaboration to cover hybrid cloud orchestration, security, IOT edge to cloud, and of course, artificial intelligence, machine learning, and deep learning. So we still do a lot of custom work with Google, making sure our technologies run their infrastructure the best, and we're working beyond the infrastructure to the software and solutions with them to make sure that those software and solutions run best on our architecture. >> Right cause it's a very interesting play, with Google and Facebook and a lot of the big cloud providers, they custom built their solutions based on their application needs and so I would presume that the microprocessor needs are very specific versus say, a typical PC microprocessor, which has a more kind of generic across the board type of demand. So what are some of the special demands that cloud demands from the microprocessor specifically? >> So what we've seen, right now, about half the volume we ship in the public cloud segment is customized in some way. And really the driving force is always performance per dollar TCO improvement. How to get the best performance and the lowest cost to pay for that performance. And what we've found is that by working with the top, not just the Super Seven, we call them, but the Top 100, closely, understanding their infrastructure at scale, is that they benefit from more powerful servers, with performance efficiency, more capability, more richly configured platforms. 
So a lot of what we've done, these cloud service providers have actually in some cases pushed us off of our roadmap in terms of what we can provide in terms of performance and scalability and agility in their infrastructure. So we do a lot of tweaks around that. And then of course, as I mentioned, it's not just the CPU ingredients, we have to optimize at the software level, so we do a lot of co-engineering work to make sure that every ounce of performance and efficiency is seen in their infrastructure. And that's how they, their data center is their cost of sales, they can't afford to have anything inefficient. So we really try to partner to make sure that it is completely tailor-optimized for that environment. >> Right, and the hyperscale, like you said, the infrastructure there is so different than kind of classic enterprise infrastructure, and then you have other things like energy consumption, which, again, at scale, itty bitty little improvements >> It's expensive. >> Make a huge impact. And then applications far beyond the cloud service providers, so many of the applications that we interact with now today on a day to day basis are cloud-based applications, whether it is the G Suite for documents or this or that, or whether it's Salesforce, or whether we just put in Asana for task tracking, and Slack, and so many of these things are now cloud-based applications, which is really the way we work more and more and more on our desktops. >> Absolutely. And one of the things we look at is, applications really have kind of a gravity. Some applications are going to have a high affinity to public cloud. You see test and dev, you see email and office collaboration already moving into the public cloud. There are some legacy applications, complex, some of the heavier modeling and simulation type apps, or big huge super computers that might stay on premise, and then you have this middle ground of applications, that, for various reasons, performance, security, data governance, data gravity, business need or IP, could go between the public cloud or stay on premise. And that's why we think it's so important that the world recognizes that this really is about a hybrid cloud. And it's really nice to partner with Google because they see that hybrid cloud as the end state, or they call it the Multi Cloud. And their Kubernetes orchestration platform is really designed to help that, to seamlessly move those apps from a customer's premises into the Google environment and have that flow. So it's a very dynamic environment, we expect to see a lot of workloads kind of continue to be invested and move into the public cloud, and people really optimizing end-to-end. >> So you've been in the data center space, we talked a little bit before we went live, you've been in the data center space for a long, long time. >> Long time. >> We won't tell you how long. (laughing) >> Both: Long time. >> So it must be really exciting for you to see this shift in computing. There's still a lot of computing power at the edge, and there's still a lot of computing power now in our mobile devices and our PCs, but so much more of the heavy lift in the application infrastructure itself is now contained in the data center, so much more than just your typical old-school corporate data centers that we used to see. Really fun evolution of the industry, for you. >> Absolutely, and the public cloud is now one of the fastest growing segments in the enterprise space, in the data center space, I should say. We still have a very strong enterprise business. But what I love is it's not just about the fact that the public cloud is growing, this hybrid really connects our two segments, so I'm really learning a lot. It's also, I've been at Intel 23 years, most of it in the data center, and last year, we reorganized our company, we completely restructured Intel to be a cloud and IoT company. And from a company that for multiple decades was a PC or consumer-based client device company, it is just amazing to have data center be so front and center and so core to the type of infrastructure and capability expansion that we're going to see across the industry. We were talking about, there isn't going to be an industry left untouched by technology. Whether it's agriculture, or industrial, or healthcare, or retail, or logistics. Technology is going to transform them, and it all comes back to a data center and a cloud-based infrastructure that can handle the data and the scale and the processing. >> So one of the new themes that's really coming on board, next week will be Big Data SV, which has grown out of Hadoop and the old big data conversation. But it's really now morphing into the next stage of that, which is machine learning, deep learning, artificial intelligence, augmented reality, virtual reality, so this whole 'nother round that's going to eat up a whole bunch of CPU capacity. But those are really good cloud-based applications that are now delivering a completely new level of value and application sophistication that's driven by power back at the data center. >> Right. We see, artificial intelligence has been a topic since the 50s. But the reality is, the technology is there today to both capture and create the data, and compute on the data. And that's really unlocking these capabilities. And from us as a company, we see it as really something that is going to not just transform us as a business but transform the many use cases and industries we talked about. Today, you or I generate about a gig and a half of data, through our devices and our PC and tablet. A smart factory or smart plane or smart car, autonomous car, is going to generate terabytes of data. Right, and that is going to need to be stored. Today it's estimated only about 5% of the data captured is used for business insight. The rest just sits. We need to capture the data, store the data efficiently, use the data for insights, and then drive that back into the continuous learning. And that's why these technologies are so amazing, what they're going to be able to do, because we have the technology and the opportunity in the business space, whether it's AI for play or for good or for business, AI is going to transform the industry. >> It's interesting, Moore's Law comes up all the time. People, is Moore's Law done, is Moore's Law done? And you know, Moore's Law is so much more than the physics of what he was describing when he first said that in the first place, about number of transistors on a chip. It's really about an attitude, about this unbelievable drive to continue to innovate and iterate and get these orders of magnitude of increase. We talked to David Floyer at OCP yesterday, and he's talking about it's not only the microprocessors and the compute power, but it's the IO, it's the networking, it's storage, it's flash storage, it's the interconnect, it's the cabling, it's all these things. And he was really excited that we're getting to this massive tipping point, of course in five years we'll look back and think it's archaic, of these things really coming together to deliver low latency, almost magical capabilities because of this combination of factors across all those different, kind of the three horsemen of computing, if you will, to deliver these really magical, new applications, like autonomous vehicles. >> Absolutely. And we, you'll hear Intel talk about Jevons Paradox, which is really about, if you take something and make it cheaper and easier to consume, people will consume more of it. We saw that with virtualization. People predicted, oh, everything's going to slow down 'cause you're going to get higher utilization rates. Actually it just unlocked new capabilities and the market grew because of it. We see the same thing with data. Our CEO will talk about, data is the new oil. It is going to transform, it's going to unlock business opportunity, revenue growth, cost savings in environment, and that will cause people to create more services, build new businesses, reach more people in the industry, transform traditional brick and mortar businesses to the digital economy. So we think we're just on the cusp of this transformation, and the next five to 10 years is going to be amazing. >> So before we let you go, again, you've been doing this for 20 plus years, I wasn't going to say anything, she said it, I didn't say it, and I worked at Intel the same time, so that's good. As you look forward, what are some of your priorities for 2017, what are some of the things that you're working on, that if we get together, hopefully not in a couple years at OCP, but next year, that you'll be able to report back that this is what we worked on and these are some of the new accomplishments that are important to me? >> So I'm really, there's a number of things we're doing. You heard me mention artificial intelligence many, many times. In 2016, Intel made a number of significant acquisitions and investments to really ensure we have the right technology road map for artificial intelligence. Machine learning, deep learning, training and inference. And we've really shored up that product portfolio, and you're going to see these products come to market and you're going to see user adoption, not just in my segment, but transforming multiple segments. So I'm really excited about those capabilities. And a lot of what we'll do, too, will be very vertical-based. So you're going to see the power of the technology, solving the health care problem, solving the retail problem, solving manufacturing, logistics, industrial problems. So I like that, I like to see tangible results from our technology. The other thing is the cloud is just growing. Everybody predicted, can it continue to grow? It does. Companies like Google and our other partners, they keep growing and we grow with them, and I love to help figure out where they're going to be two or three years from now, and get our products ready for that challenge. >> Alright, well I look forward to our next visit. Raejeanne, thanks for taking a few minutes out of your time and speaking to us. >> It was nice to see you again. >> You too. Alright, she's Raejeanne Skillern and I'm Jeff Frick, you're watching theCUBE, we're at the Google Cloud Next Show 2017, thanks for watching. (electronic sounds)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Diane Bryant | PERSON | 0.99+ |
Raejeanne | PERSON | 0.99+ |
Diane Green | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Jeff Frick | PERSON | 0.99+ |
Raejeanne Skillern | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
2017 | DATE | 0.99+ |
yesterday | DATE | 0.99+ |
Urs Holzle | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Intel | ORGANIZATION | 0.99+ |
Yesterday | DATE | 0.99+ |
OCP | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
next week | DATE | 0.99+ |
20 plus years | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
23 years | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
G Suite | TITLE | 0.99+ |
Both | QUANTITY | 0.99+ |
Intel Strategic Alliance | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
five years | QUANTITY | 0.99+ |
two segments | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Cloud Platform Group | ORGANIZATION | 0.98+ |
last November | DATE | 0.98+ |
over 14,000 people | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
about 5% | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
three years | QUANTITY | 0.96+ |
Tustin Dove | PERSON | 0.96+ |
Asana | TITLE | 0.94+ |
50s | DATE | 0.93+ |
Google Next 17 Conference | EVENT | 0.93+ |
Open Compute Project 2015 | EVENT | 0.92+ |
Top 100 | QUANTITY | 0.89+ |
first place | QUANTITY | 0.89+ |
Kubernetes Orchestration Platform | TITLE | 0.88+ |
Slack | TITLE | 0.87+ |
three horseman | QUANTITY | 0.87+ |
Google Cloud Next | TITLE | 0.86+ |
Google Cloud Platform | TITLE | 0.86+ |
earlier today | DATE | 0.85+ |
Moore | PERSON | 0.85+ |
10 years | QUANTITY | 0.83+ |
Salesforce | TITLE | 0.81+ |
Jevons Paradox | ORGANIZATION | 0.81+ |
theCUBE | ORGANIZATION | 0.8+ |
about half | QUANTITY | 0.79+ |
five | QUANTITY | 0.77+ |
San Francisco | LOCATION | 0.77+ |
Moore's Law | TITLE | 0.73+ |
Cloud Platform | TITLE | 0.73+ |
a decade | QUANTITY | 0.72+ |
terabytes | QUANTITY | 0.67+ |
few years back | DATE | 0.67+ |
Seven | TITLE | 0.65+ |
Hadoop | TITLE | 0.63+ |
couple years | QUANTITY | 0.56+ |
Tendü Yogurtçu | BigData SV 2017
>> Announcer: Live from San Jose, California. It's The Cube, covering Big Data Silicon Valley 2017. (upbeat electronic music) >> California, Silicon Valley, at the heart of the big data world, this is The Cube's coverage of Big Data Silicon Valley in conjunction with Strata Hadoop, well of course we've been here for multiple years, covering Hadoop World for now our eighth year, now that's Strata Hadoop but we do our own event, Big Data SV in New York City and Silicon Valley, SV NYC. I'm John Furrier, my cohost George Gilbert, analyst at Wikibon. Our next guest is Tendü Yogurtçu with Syncsort, general manager of the big data, did I get that right? >> Yes, you got it right. It's always a pleasure to be at The Cube. >> (laughs) I love your name. That's so hard for me to get, but I think I was close enough there. Welcome back. >> Thank you. >> Great to see you. You know, one of the things I'm excited about with Syncsort is we've been following you guys, we talk to you guys every year, and it just seems to be that every year, more and more announcements happen. You guys are unstoppable. You're like what Amazon does, just more and more announcements, but the theme seems to be integration. Give us the latest update. You had an update, you bought Trillium, you got a hit deal with Hortonworks, you got integrated with Spark, you got big news here, what's the news here this year? >> Sure. Thank you for having me. Yes, it's very exciting times at Syncsort, and I probably say that every time I appear, because every time it's more exciting than the previous, which is great. We bought Trillium Software, and Trillium Software has been leading data quality for over a decade in many of the enterprises. It's very complementary to our data integration, data management portfolio because we are helping our customers to access all of their enterprise data, not just the new emerging sources in the connected devices and mobile and streaming, also leveraging reference data from mainframe legacy systems and the legacy enterprise data warehouse. While we are doing that, accessing data, data lake is now actually, in some cases, turning into data swamp. That was a term Dave Vellante used a couple of years back in one of the crowd chats and it's becoming real. So, data-- >> Real being the data swamps, data lakes are turning into swamps because they're not being leveraged properly? >> Exactly, exactly. Because it's also about having access to right data, and data quality is very complementary because Trillium has delivered trusted, right data to enterprise customers in the traditional environments, so now we are looking forward to bring that enterprise trust of the data quality into the data lake. In terms of the data integration, data integration has always been very critical to any organization. It's even more critical now that the data is shifting gravity and the amount of data organizations have. What we have been delivering in very large enterprise production environments for the last three years is we are hearing our competitors making announcements in those areas very recently, which is a validation, because we are already running in very large production environments. We are offering value by saying "Create your applications for integrating your data," whether it's in the cloud or originating in the cloud or originating on the mainframes, whether it's on the legacy data warehouse, you can deploy the same exact application without any recompilations, without any changes, on your standalone Windows laptop or in Hadoop MapReduce, or Spark in the cloud. So this design once and deploy anywhere is becoming more and more critical with data, it's originating in many different places and cloud is definitely one of them. Our data warehouse optimization solution with Hortonworks and AtScale, it's a special package to accelerate this adoption. It's basically helping organizations to offload the workload from the existing Teradata or Netezza data warehouse and deploying in Hadoop. We provide a single button to automatically map the metadata, create the metadata in Hive or on Hadoop and also make the data accessible in the new environment, and AtScale provides fast BI on top of that. >> Wow, that's amazing. I want to ask you a question, because this is a theme, so I just did a tweetup just now while you were talking saying "the theme this year is cleaning up the data lakes, or data swamps, AKA data lakes." The other theme is integration. Can you just lay out your premise on how enterprises should be looking at integration now because it's the multi-vendor world, it's the multi-cloud world, multi-data type and source with metadata world. How do you advise customers that have the plethora of action coming at them? IOT, you've got cloud, you've got big data, I've got Hadoop here, I got Spark over here, what's the integration formula? >> First thing is identify your business use cases. What's your business challenge, what are your business goals, because that should be the real driver. We see in some organizations, they start with the intention "we would like to create a data lake" without having that very clear understanding, what is it that I'm trying to solve with this data lake? Data as a service is really becoming a theme across multiple organizations, whether it's on the enterprise side or on some of the online retail organizations, for example. As part of that data as a service, organizations really need to adopt tools that are going to enable them to take advantage of the technology stack. The technology stack is evolving very rapidly. The skill sets are rare, and skill sets are rare because you need to be kind of making adjustments. Am I hiring Ph.D. students who can program Scala in the most optimized way, or should I hire Java developers, or should I hire Python developers, the names of the tools in the stack, Spark 1 versus Spark 2 APIs, change. It's really evolving very rapidly. >> It's hard to find Scala developers, I mean, you go outside Silicon Valley. >> Exactly. So you need to be, as an organization, our advice is that you really need to find tools that are going to fit those business use cases and provide a single software environment, that data integration might be happening on premise now, with some of the legacy enterprise data warehouse, and it might happen in a hybrid, on premise and cloud environment in the near future and perhaps completely in the cloud. >> So standard tools, tools that have some standard software behind it, so you don't get stuck in the personnel hiring problem. Some unique domain expertise that's hard to hire. >> Yes, skill set is one problem, the second problem is the fact that the applications need to be recompiled because the stack is evolving and the APIs are not compatible with the previous version, so that's the maintenance cost, to keep up with things, to be able to catch up with the new versions of the stack. That's another area that the tools really help, because you want to be able to develop the application and deploy it anywhere, on any compute platform. >> So Tendü, if I hear you properly, what you're saying is integration sounds great on paper, it's important, but there's some hidden costs there, and that is the skill set and then there's the stack recompiling, just making sure. Okay, that's awesome. >> The tools help with that. >> Take a step back and zoom out and talk about Syncsort's positioning, because you guys have been changing with the stacks as well, I mean you guys have been doing very well with the announcements, you've been just coming on the market all the time. What is the current value proposition for Syncsort today? >> The current value proposition is really we help organizations to create the next generation modern data architecture by accessing and liberating all enterprise data and delivering that data at the right time and with the right data quality. It's liberate, integrate, with integrity. That's our value proposition. How do we do that? We provide that single software environment. You can have batch legacy data and streaming data sources integrated in the same exact environment, and it enables you to adapt to Spark 2 or Flink or whichever compute framework is going to help them. That has been our value proposition and it is proven in many production deployments. >> What's interesting too is the way you guys have approached the market. You've locked down the legacy, so you have, we talk about the mainframe and well beyond that now, you guys have and understand the legacy, so you kind of lock that down, protect it, make it secure, it's security-wise, but you do that too, but making sure it works because it's still data there, because legacy systems are really critical in the hybrid. >> Mainframe expertise and heritage that we have is a critical part of our offering. We will continue to focus on innovation on the mainframe side as well as on the distributed. One of the announcements that we made since our last conversation was we have a partnership with Compuware, and we now bring in more data types about application failures, it's abend data to Splunk for operational intelligence. We will continue to also support more delivery types, we have batch delivery, we have streaming delivery, and now replication into Hadoop has been a challenge, so our focus is now replication from DB2 on the mainframe and VSAM on the mainframe to Hadoop environments. That's what we will continue to focus on, mainframe, because we have heritage there and it's also part of the big enterprise data lake. You cannot make sense of the customer data that you are getting from mobile if you don't reference the critical data sets that are on the mainframe. With the Trillium acquisition, it's very exciting because now we are at a kind of pivotal point in the market, we can bring the superior data validation, cleansing, and matching capabilities we have to the big data environments. One of the things-- >> So when you get in low latency, you guys do the whole low latency thing too? You bring it in fast? >> Yes, we bring it, that's our current value proposition, and as we are accessing this data and integrating this part of the data lake, now we have capabilities with Trillium that we can profile that data, get statistics and start using machine learning to automate the data steward's job. Data stewards are still spending 75% of their time trying to clean the data. So if we can-- >> Lot of manual labor there, and modeling too, by the way, the modeling and just the cleaning, cleaning and modeling kind of go hand in hand. >> Exactly. If we can automate any of these steps to drive the business rules automatically and provide right data on the data lake, that would be very valuable. This is what we are hearing from our customers as well. >> We've heard probably five years about the data lake as the center of gravity of big data, but we're hearing at least a bifurcation, maybe more, where now we want to take that data and apply it, operationalize it in making decisions with machine learning, predictive analytics, but at the same time we're trying to square this strange circle of data, the data lake where you didn't say up front what you wanted it to look like but now we want ever richer metadata to make sense out of it, a layer that you're putting on it, the data prep layer, and others are trying to put different metadata on top of it. What do you see that metadata layer looking like over the next three to five years? >> Governance is a very key topic, and for organizations who are ahead of the game in big data and who have already established that data lake, data governance and even analytics governance becomes important. What we are delivering here with Trillium, we will have generally available by end of Q1. We are basically bringing business rules to the data. Instead of bringing data to business rules, we are taking the business rules and deploying them where the data exists. That will be key because of the data gravity you mentioned, because the data might be in the Hadoop environment, there might be in a, like I said, enterprise data warehouse, and it might be originating in the cloud, and you don't want to move the data to the business rules. You want to move the business rules to where the data exists. Cloud is an area that we see more and more of our customers are moving toward. Two main use cases around our integration are one, because the data is originating in cloud, and the second one is archiving data to cloud, and we actually announced tighter integration with the cloud with Cloudera Director earlier this week for this event, and that we have been in cloud deployments and we actually have an offering on Elastic MapReduce already, and on EC2, for a couple of years now, and also on Google Cloud Storage, but this announcement is primarily making deployments even easier by leveraging Cloudera Director's elasticity for increasing and reducing the deployment. Now our customers will also take advantage of that elasticity for integration jobs. >> Tendü, it's great to have you on The Cube because you have an engineering mind but you're also now general manager of the business, and your business is changing. You're in the center of the action, so I want to get your expertise and insight into the enterprise readiness concept, and we saw last week at Google Cloud Next 2017, you know, Google going down the path of being enterprise ready, or taking steps, I don't think they're fully ready, but they're certainly serious about the cloud in the enterprise, and that's clear from Diane Greene, who knows the enterprise. It sparked the conversation last week, around what does enterprise readiness mean for cloud players, because there's so many details in between the lines, if you will, of what products are, that integration, certification, SLAs. What's your take on the notion of cloud readiness? Vis-à-vis Google and others that are bringing cloud compute, a lot of resources, with an IOT market that's now booming, big data evolving very, very fast, lot of realtime, lot of analytics, lot of innovation happening. What's the enterprise picture look like from a readiness standpoint? How do these guys get ready? >> From a big picture, for the enterprise there are a couple of things that cannot be an afterthought. Security, metadata lineage as part of data governance, and being able to have flexibility in the architecture, so that they will not be kind of recreating the jobs that they might have already deployed in on-premise environments, right? To be able to have the same application running from on premise to cloud will be critical because it gives flexibility for adaptation in the enterprise. The enterprise may have some MapReduce jobs running on premise with the Spark jobs in the cloud because they are really doing some predictive analytics, graph analytics on those, and they want to be able to kind of have that flexible architecture where we hear this concept of a hybrid environment. You don't want to be deploying a completely different product in the cloud and redo your jobs. That flexibility of architecture, flexibility-- >> So having different code bases in the cloud versus on prem requires two jobs to do the same thing. >> Two jobs for maintaining, two jobs for standardizing, and two different skill sets of people potentially. So security, governance, and being able to access easily and have applications move in between environments will be very critical. >> So seamless integration between clouds and on prem first, and then potentially multi-cloud. That's table stakes in your mind. >> They are absolutely table stakes. A lot of vendors are trying to focus on that, definitely Hadoop vendors are also focusing on that. Also, one of the things, like when people talk about governance, the requirements are changing. We have been talking about single view and customer 360 for a while now, right? Do we have it right yet? The enrichment is becoming a key. With Trillium we made the recent announcement of Trillium Precise, for enrichment: it's not just the address that you want to deliver and make sure that address is correct, it's also the email address, and the phone number, is it a mobile number, is it a landline? It's enriched data sets that we have to be really dealing with, and there's a lot of opportunity, and we are really excited because data quality, discovery and integration are coming together and we have a good-- >> Well Tendü, thank you for joining us, and congratulations as Syncsort broadens its scope to being a modern data platform solution provider for companies. >> Thank you. >> Thanks for coming. >> Thank you for having me. This is The Cube here live in Silicon Valley and San Jose, I'm John Furrier, George Gilbert, you're watching our coverage of Big Data Silicon Valley in conjunction with Strata Hadoop. This is SiliconANGLE, The Cube, we'll be right back with more live coverage. We've got two days of wall to wall coverage with experts and pros talking about big data, the transformations here inside The Cube. We'll be right back. (upbeat electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
George Gilbert | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
two jobs | QUANTITY | 0.99+ |
Two jobs | QUANTITY | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
75% | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
New York City | LOCATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Diane Green | PERSON | 0.99+ |
San Jose, California | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Scala | TITLE | 0.99+ |
Syncsort | ORGANIZATION | 0.99+ |
San Jose | LOCATION | 0.99+ |
second problem | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
Compuware | ORGANIZATION | 0.99+ |
two days | QUANTITY | 0.99+ |
Spark 2 | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
one problem | QUANTITY | 0.99+ |
Vizaviz | ORGANIZATION | 0.99+ |
Tendü Yogurtçu | PERSON | 0.99+ |
Spark | TITLE | 0.99+ |
eighth year | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Two main use cases | QUANTITY | 0.98+ |
Trillium | ORGANIZATION | 0.98+ |
Python | TITLE | 0.98+ |
Netezza | ORGANIZATION | 0.98+ |
Trillium Software | ORGANIZATION | 0.98+ |
this year | DATE | 0.98+ |
Wikibon | ORGANIZATION | 0.97+ |
Hortonworks | ORGANIZATION | 0.97+ |
Hadoop | TITLE | 0.97+ |
earlier this week | DATE | 0.96+ |
today | DATE | 0.96+ |
Teradata | ORGANIZATION | 0.95+ |
Big Data Silicon Valley 2017 | EVENT | 0.94+ |
First thing | QUANTITY | 0.94+ |
single view | QUANTITY | 0.94+ |
big data | ORGANIZATION | 0.92+ |
Hive | TITLE | 0.92+ |
Java | TITLE | 0.92+ |
The Cube | ORGANIZATION | 0.92+ |
single button | QUANTITY | 0.91+ |
AtScale | ORGANIZATION | 0.91+ |
end of Q1 | DATE | 0.9+ |
single software | QUANTITY | 0.9+ |
second one | QUANTITY | 0.89+ |
first | QUANTITY | 0.89+ |
California, | LOCATION | 0.89+ |
Flink | TITLE | 0.88+ |
Big Data | TITLE | 0.88+ |
two different skill | QUANTITY | 0.87+ |
Silicon Valley, | LOCATION | 0.84+ |
360 | QUANTITY | 0.83+ |
three | QUANTITY | 0.82+ |
last three years | DATE | 0.8+ |
Valley | TITLE | 0.79+ |
Google Cloud 2017 | EVENT | 0.79+ |
Windows | TITLE | 0.78+ |
prem | ORGANIZATION | 0.76+ |
couple of years back | DATE | 0.76+ |
NYC | LOCATION | 0.75+ |
two APIs | QUANTITY | 0.75+ |