Zongjie Diao, Cisco and Mike Bundy, Pure Storage | Cisco Live EU 2019
(bouncy music) >> Live, from Barcelona, Spain, it's theCUBE, covering Cisco Live Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back everyone. Live here in Barcelona it's theCUBE's exclusive coverage of Cisco Live 2019. I'm John Furrier. Dave Vellante, my co-host for the week, and Stu Miniman, who's also here doing interviews. Our next two guests are Mike Bundy, Senior Director of Global Cisco Alliance with Pure Storage, and Z, who's in charge of product strategy for Cisco. Welcome to theCUBE. Thanks for joining us. >> Thank you for having us here. >> You're welcome. >> Thank you. >> We're in the DevNet zone. It's packed with people learning real use cases, rolling up their sleeves. Talk about the Cisco-Pure relationship. How do you guys fit into all this? What's the alliance? >> You want to start? >> Sure. So, we have a partnership with Cisco, primarily around a solution called FlashStack in the converged infrastructure space. And most recently, we've evolved a new use-case and application together for artificial intelligence: Z's business unit has just released a new platform that works with Cisco and NVIDIA to accomplish customer application needs, mainly in machine learning but all aspects of artificial intelligence. >> So AI is obviously a hot trend in machine learning, but today at Cisco the big story was not about the data center as much anymore as it's the data at the center of the value proposition, which spans the on-premises, IoT edge, and multiple clouds, so data now is everywhere. You've got to store it. It's going to be stored in the cloud, it's on-premises. So data at the center means a lot of things. You can program with it. It's got to be addressable. It has to be smart and aware and take advantage of the networking. So with all of that as the backdrop, what is the AI approach? How should people think about AI in the context of storing data and using data?
Not just moving packets from point A to point B, but you're storing it, you're pulling it out, you're integrating it into applications. A lot of moving parts there. What's the-- >> Yeah, you got a really good point here. When people think about machine learning, traditionally they just think about training. But we look at it as more than just training. It's the whole data pipeline that starts with collecting the data, storing the data, analyzing the data, training on the data, and then deploying it. And then putting the data back. So it's really a cycle there. It's where you need to consider how you actually collect the data from the edge, how you store it, at the speed that you can, and give the data to the training side. So I believe when we work with Pure, we try to treat this as a whole data pipeline and think about the entire data movement and the storage need that we look at here. >> So we're in the DevNet zone and I'm looking at the machine learning with Python, ML Library, (mumbles) Flow, Apache Spark, a lot of this data science type stuff. >> Yup. >> But increasingly, AI is a workload that's going mainstream. But what are the trends that you guys are seeing in terms of traditional IT's involvement? Is it still sort of AI off on an island? What are you seeing there? >> So I'll take a guess, a stab at it. So really, every major company and industry that we work with has AI initiatives. It's the core of the future for their business. What we're trying to do is partner with IT to get ahead of the large infrastructure demands that will come from those smaller, innovative projects that are in pilot mode, so that they are a partner to the business and the data scientists rather than the laggard, which is the reputation IT sometimes gets.
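The data pipeline described above, collect, store, analyze, train, deploy, and then feed the results back, can be sketched as a simple loop. This is a toy illustration only, with stand-in function bodies (none of this is Cisco or Pure code); real stages would pull from edge devices, land data on a fast shared store, and hand features to a training framework.

```python
# A toy sketch of the data pipeline cycle described above:
# collect -> store -> analyze -> train -> deploy, then feed results back.

def collect():
    # stand-in for pulling readings from edge devices
    return [1.0, 2.0, 3.0, 4.0]

def store(samples):
    # stand-in for landing the data on a fast shared store
    return list(samples)

def analyze(samples):
    # stand-in for cleaning and feature engineering; here, normalize by the peak
    peak = max(samples)
    return [s / peak for s in samples]

def train(features):
    # stand-in for model fitting; this trivial "model" is just the feature mean
    return sum(features) / len(features)

def deploy(model, new_sample):
    # stand-in for scoring new data against the trained model
    return new_sample > model

def run_pipeline():
    raw = collect()
    stored = store(raw)
    features = analyze(stored)
    model = train(features)
    return deploy(model, 0.9)

print(run_pipeline())  # one full turn of the cycle: prints True (0.9 > mean 0.625)
```

The point of the sketch is the shape, not the math: each stage hands its output to the next, and the deployment result would flow back into collection on the next turn, which is why the speakers describe it as a cycle rather than a one-shot training job.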
We want to be the infrastructure, solid, like a cloud-like experience for the data scientists so they can worry more about the applications, the data, what it means to the business, and less about the infrastructure. >> Okay. And so you guys are trying to simplify that infrastructure, whether it's converged infrastructure, and other unifying approaches. Are you seeing the shift of that heavy lifting, of people now shifting resources to new workloads like AI? Maybe you could discuss what the trends are there? >> Yeah, absolutely. So I think AI started with more like a data science experiment. You see a couple of data scientists experimenting. Now it's really getting into the mainstream. More and more people are into that. And as, I apologize. >> Mike. >> Mike. >> Mike, can we restart that question? (all laughing) My deep apology. I need a GPU or something in my brain. I need to store that data better. >> You're on Fortnite. Go ahead. >> Yes, so as Mike has said earlier on, it's not just the data scientists. It's actually an IT challenge as well, and I think with Cisco, what we're trying to do with Pure here is, you know that Cisco thing we're saying, "We're a bridge." We want to bridge the gap between the data scientists and IT and make it not just AI as an experiment but AI at scale, at production level, and be ready to actually create real impact with the technology infrastructure that we can enable. >> Mike, talk about Pure's position. You guys have announced Pure in the cloud? >> Yes. >> You're seeing that software focus. Software is the key here. >> Absolutely. >> You're getting into a software model. AI and machine learning, all this we're talking about is software. Data is now available to be addressed and managed in that software life cycle. What is the role of software for you guys, with converged infrastructure at the center of all the Cisco announcements? You were out on stage today with converged infrastructure to the edge.
>> Yes, so, if you look at the platform that we built, it references back to what we call the Data Hub. The Data Hub has a very tight synergy with all the applications you're referring to: Spark, TensorFlow, Caffe, et cetera. So, we look at it as the next generation analytics, and the platform has a super layer on top of all those applications, because that's going to really make the integration possible for the data scientists so they can go quicker and faster. What we're trying to do underneath that is use the Data Hub so that no matter what the size, whether it's small data, large data, transaction-based or more bulk data warehouse type applications, the Data Hub and the FlashBlade solution underneath handle all of that very, very differently, and probably more optimized and easier than traditional legacy infrastructures. Even traditional, even Flash, from some of our competitors, because we built this purpose-built application for that. Not trying to go backwards in terms of technology. >> So I want to put both you guys on the spot for a question. We've heard infrastructure as code for many, many years, since theCUBE started nine years ago. Infrastructure as code, now it's here. The network is programmable, the infrastructure is programmable, storage is programmable. When a customer or someone asks you, how is infrastructure, networks, and storage programmable and what do I do? I used to provision storage, I've got servers. I'm going to the cloud. What do I do? How do I become AI-enabled so that I can program the infrastructure? How do you guys answer that question? >> So a lot of that comes down to the infrastructure management layer. How do you actually use policy and the right infrastructure management to make the right configuration you want? And I think one thing from programmability is also flexibility.
Instead of having just a fixed configuration, what we're doing with Pure here is really having that flexibility where you can put Pure storage, different kinds of storage, with the different kinds of compute that we have. Whether we're talking about a 2RU or 4RU server, that kind of compute power is different and can be matched with different storage, depending on what the customer use case is. So that flexibility drives the programmability that is managed by the infrastructure management layer. And we're extending that. So Pure and Cisco's infrastructure management are actually tied together. It's really a single pane of glass, with Intersight, where we can actually manage both Pure and Cisco. That's the programmability that we're talking about. >> Your customers get Pure storage, end-to-end manageability? >> With the Cisco compute, it's a single pane of glass. >> Okay. >> So where do I buy? I want to get started. What do you got for me? (laughing) >> It's pretty simple. It's three basic components: Cisco compute and a platform for machine learning that's powered by NVIDIA GPUs; Pure FlashBlade, which is the Data Hub and storage component; and then network connectivity from the number one network provider in the world, from Cisco. It's very simple. >> And it's a SKU, it's a solution? >> Yup, it's very simple. It's data-driven. It's not tied to a specific SKU. It's more flexible than that, so you have better optimization of the network. You don't buy a 1000 series X and then only use 50% of it. It's very customizable. >> Okay, so I can customize it for my, whatever, data science team or my IT workloads? >> Yes, and provision it for multi-purpose, the same way a service provider would if you're a large IT organization. >> The trend around breaking silos has been discussed heavily. Can you talk about multiple clouds, on-premises, cloud, and edge all coming together?
How should companies think about their data architecture? Because silos are good for certain things, but to make multi-cloud work, with all this end-to-end and intent-based networking and all the power of AI around the corner, you've got to have the data out there and it's got to be horizontally scalable, if you will. How do you break down those silos? What's your advice, is there a use case for an architecture? >> I think it's a classic example of how IT has evolved to not think just silos and be multi-cloud. So what we advocate is to have a data platform that spans the entire community, whether it's development, test, engineering, production applications, and that runs holistically across the entire organization. That would include on-prem, it would include integration with the cloud, because most companies now require that. So you can have different levels of high availability, or lower cost if your data needs to be archived. So it's really building and thinking about the data as a platform across the company and not just silos for various applications. >> So replication never goes away. >> Never goes away. (laughing) >> It's going to be around for a long, long time. >> Dev Test never goes away either. >> Your thoughts on this? >> Yeah, so adding on top of that, we believe your infrastructure should go where the data goes. You want to follow where the data is, and that's exactly why we want to partner with Pure here, because we see a lot of the data sitting today in the very important infrastructure which is built by Pure Storage, and we want to make sure that we're not just building a silo box sitting there where you have to pour the data in all the time, but actually connect our servers with Pure Storage in the most manageable way. And for IT, it's the same kind of management layer. You're not thinking about, oh, I have to manage all these silo boxes, or the shadow IT that some data scientists would have under their desk.
That's the least thing you want. >> And the other thing that came up in the keynote today, which we've been saying on theCUBE, and all the experts reaffirm, is that moving data costs money. You've got latency costs and also just the cost to move traffic around. So moving compute to the edge, or moving compute to the data, has been a big, hot trend. How has the compute equation changed? Because I've got storage. I'm not just moving packets around. I'm storing it, I'm moving it around. How does that change the compute? Does that put more emphasis on the compute? >> It's definitely putting a lot more emphasis on compute. I think it's where you want compute to happen. You can pull all the data and have it happen in a central place. That's fine if that's the way you want to manage it. If you have already simplified the data, you can put it in that way. If you want to do it at the edge, near where the data source is, you can also do the cleaning there. So we want to make sure that, no matter how you want to manage it, we have the portfolio that can actually help you manage that. >> And it's alternative processors. You mentioned NVIDIA. >> Exactly. >> You guys are the first to do a deal with them. >> And other ways, too. You've got to take advantage of technology like Kubernetes, as an example. So you can move the containers where they need to be, and have policy managers for the compute requirements and also storage, so that you don't have contention or data integrity issues. So embracing those technologies in a multi-cloud world is very, very essential. >> Mike, I want to ask you a question around customer trends. What are you seeing as a pattern from a customer standpoint, as they prepare for AI and start re-factoring some of their IT and/or resources? Is there a certain use-case that they set up with Pure in terms of how they set up their storage? Is it different by customer? Is there a common trend that you see? >> Yeah, there are some commonalities.
Take financial services, quant trading as an example. We have a number of customers that leverage our platform for that, because it's very time-sensitive, high-availability data. So really, I think the trend overall would be: step back, take a look at your data, and focus on, how can I correlate and organize that? And really get it ready so that whatever platform you use from a storage standpoint, you're thinking about all aspects of the data and getting it into a format, into a form, where you can manage and catalog it, because that's kind of essential to the entire thing. >> It really highlights the key things that we've been saying in storage for a long time. High availability, integrity of the data, and now you've got application developers programming with data. With APIs, you're slinging APIs around like it's-- >> The way it should be. >> That's the way it should be. This is like Nirvana finally got here. How far along are we in the progress? How far? Are we early? Are we moving the needle? Where are the customers? >> You mean in terms of a partnership? >> Partnership, customer AI, in general. You guys, you've got storage, you've got networking and compute all working together. It has to be flexible, elastic, like the cloud. >> My feeling, Mike can correct me, or you can disagree with me. (laughing) I think right now, if we look at what all the analysts are saying, and what we're saying, most companies, more than 50% of companies, either have deployed AI/ML or are considering a plan for deploying it. But having said that, we do see that we're still at a relatively early stage, because of the challenges of making AI deployment work at scale, where data scientists and IT are really working together. You need that level of security and that level of skill in infrastructure and software and evolving DevNet. So my feeling is we're still at a relatively early stage. >> Yeah, I think we are in the early adopter phase.
We've had customers for the last two years that have really been driving this. We work with about seven of the self-driving car companies. But if you look at the data from Morgan Stanley and other analysts, there's about $13 billion of infrastructure required for AI over the next three years, from 2019-2021, so that is probably 6X, 7X what it is today, so we haven't quite hit that bell curve yet. >> So people are doing their homework right now, setting up their architecture? >> It's the leaders. It's leaders in the industry, not the mainstream. >> Got it. >> And everybody else is going to close that gap, and that's where you guys come in, is helping them do that. >> That's scale. (talking over one another) >> That's what we built this platform with Cisco on, is really, the FlashStack for AI is around scale, for tens and twenties of petabytes of data that will be required for these applications. >> And it's a targeted solution for AI with all the integration pieces with Cisco built in? >> Yes. >> Great, awesome. We'll keep track of it. It's exciting. >> Awesome. >> It's cliche to say future-proof, but in this case it literally is preparing for the future. The bridge to the future, as the new saying at Cisco goes. >> Yes, absolutely. >> This is theCUBE coverage live in Barcelona. We'll be back with more live coverage after this short break. Thanks for watching. I'm John Furrier with Dave Vellante. Stay with us. (upbeat electronic music)
Action Item | The Role of Open Source
>> Hi, I'm Peter Burris. Welcome to Wikibon's Action Item. (slow techno music) Once again Wikibon's research team is assembled, centered here in theCUBE Studios in lovely Palo Alto, California, so I've got David Floyer and George Gilbert with me here in the studio, and on the line we have Neil Raden and Jim Kobielus. Thank you once again for joining us, guys. This week we are going to talk about an issue that has been a dominant consideration in the industry, but it's unclear exactly what direction it's going to take, and that is the role that open source is going to play in the next generation of solving problems with technology, or we could say the role that open source will play in future digital transformations. No one can argue whether or not open source has been hugely consequential; as I said, it has been. It's been one of the major drivers of not only new approaches to creating value, but also new types of solutions that are leading to many of the most successful technology implementations we've ever seen. That is unlikely to change, but the question is what form open source will take as we move into an era where there are new classes of individuals creating value, like data scientists, where there are new problems that we're trying to solve, like problems that are mainly driven by the role that data, as opposed to code, plays, and where there are new classes of providers, namely service providers as opposed to product or software providers. These issues are going to come together and drive some pretty important changes in how open source behaves over the next few years, what types of challenges it's going to successfully take on, and ultimately how users are going to be able to get value out of it. So to start the conversation off, George, let's start by making a quick observation: what has the history of open source been? Take us through it kind of quickly.
>> The definition has changed. In its first incarnation it was to fix UNIX fragmentation and the high price of UNIX system servers, meaning the proprietary UNIXes and the proprietary servers they were built on. That actually rather quickly morphed into a second incarnation where it was, let's take the Linux stack, Linux, Apache, MySQL, PHP, Python, and substitute that for the old incumbents, which was UNIX, BEA WebLogic, the J2EE server, and Oracle Database on an EMC storage device. So that was the collapse of the price of infrastructure. So really quickly then it morphed into something very, very different, which was we had the growth of the giant Internet-scale vendors, and neither on pricing nor on capacity could traditional software serve their needs, so Google didn't quite do open source, but they published papers about what they did, and those papers then were implemented. >> Like MapReduce. Yeah, MapReduce, BigTable, Google File System, those became the basis of Hadoop, which Yahoo open sourced. There is another incarnation going on, that's probably getting near its end of life right now, which is sort of a hybrid, where you might take Kafka, which is open source, and put sort of proprietary bits around it for management and things like that, same with Cloudera. This is called the open core model. It's not clear if you can build a big company around it, but the principle for most of these is, the value of the software is declining, partly because it's open source, and partly because it's so easy to build new software systems now, and the hard part is helping the customer run the stuff, and that's where some of these vendors are capturing it.
So in this first generation of open source, I think up until now, certainly Red Hat, Canonical have made money by packaging and putting forward distributions, that have made a lot of money, IBM has been one of the leaders in contributing open source, and then turning that into a services business, Cloudera, Horton Works, NapR, some of these other companies have not generated the same type of market presence that a Red Hat or Canonical have put forward, but that doesn't mean there aren't companies out there that have been very successful at appropriating significant returns out of open source software, mainly however they're doing it as George said, as a service, give us some examples. >> I think the key part of open source is providing a win-win environment, so that people are paid to do stuff, and what is happening now a lot is that people are putting stuff into open source in order that it becomes a standard, and also in order that it is maintained by the community as a whole. So those two functions, those two capabilities of being paid by a company often, by IBM or by whoever it is to do something on behalf of that company, so that it becomes a standard, so that it becomes accepted, that is a good business model, in the sense that it's win-win, the developer gets recognition, the person paying for it achieves their business objective of for example getting a standard recognized-- >> A volume. >> Volume, yes. >> So it's a way to get to volume for the technology that you want to build your business around. 
>> Yes, what I think is far more difficult in this area is application type software, so where open source has been successful, as George said is in the stacks themselves, the lower end of the stacks, there are a few, and they usually come from very very successful applications like Word, Microsoft Word, or things like that where they can be copied, and be put into open source, but even there they have around them software from a company, Red Hat or whoever it is, that will make it successful. >> Yes but open office wasn't that successful, get to the kind of, today we have Amazon, we have some of the hyper scalars that are using that open core model and putting forward some pretty powerful services, is that the new Red Hat, is that the new Canonical? >> The person who's made most money is clearly Amazon, they took open source code and made it robust, and made it in volume, those are the two key things you to have for success, it's got to be robust, it's got to be in volume, and it's very difficult for the open source community to achieve that on its own, it needs the support of a large company to do that, and it needs the value that that large company is going to get from it, for them to put those resources in. So that has been a very successful model a lot of people decry it because they're not giving back, and there's an argument-- >> They being Amazon, have not given back quite as much. >> Yes they have relatively very few commiters. I think that's more of a problem in the T&Cs of the open source contract, so those should probably be changed, to put more onus on people to give back into the pool. >> So let me stop you, so we have identified one thing that is likely going to have to be evolved as we move forward, to prevent problems, some of the terms and conditions, we try to ensure that there is that quid pro quo, that that win-win exists. 
So Jim Kobielus, let me ask you a question. Open source has been, as David mentioned, more successful where there is a clear model, a clear target of what the community is trying to build. It hasn't been quite as successful where it is in fact expected that the open source community is going to start with some of the original designs. So for example, there's an enormous plethora of big data tools, and yet people are starting to ask why isn't big data more successful, and partly it's because putting these tools together is so difficult. So are we going to see the type of artifacts and assets and technologies associated with machine learning, AI, deep learning, et cetera, easily lend themselves to an open source treatment, what do you think? >> I think we're going to see open source very much take off in the niches of the deep learning and machine learning AI space, where the target capabilities being built are fairly well understood by a broad community. Machine learning clearly, we have a fair number of frameworks that are already well established, with respect to the core capabilities that need to be performed, from modeling and training to deployment of statistical models into applications. That's where we see a fair amount of takeoff for TensorFlow, which Google built and open sourced, because the core of deep learning, in terms of the algorithms, in terms of the kinds of functions you perform to be able to take data and do feature engineering and algorithm selection, is fairly well understood. So those are the kinds of very discrete capabilities for which open source code is becoming standard, but there are many different alternative frameworks for doing that, TensorFlow being one of them, that are jostling for presence in the market. The term is commoditized: more of those core capabilities are being commoditized by the fact that they're well understood and agreed to by a broad community.
So those are the discrete areas where we're seeing the open source alternatives become predominant, but when you take a TensorFlow and combine it with a Spark, and with a Hadoop and a Kafka and broader collections of capabilities that are needed for robust infrastructure, those are disparate communities that each have their own participants, committers and so forth. Nobody owns that overall stack; there's no equivalent of a LAMP stack where all things to do with deep learning, machine learning, and AI on an open source basis come to the fore. If some group of companies is going to own that broadening stack, that would indicate some degree of maturation for this overall ecosystem. That's not happening yet, we don't see that happening right now. >> So Jim, I want to, my bias, I hate the term commoditization, but I want to unify what you said with something that David said. Essentially what we're talking about is the agreement, in a collaborative, open way, around the conventions of how we perform work, that compute model, which then turns into products and technologies that can in fact be distributed and regarded as a standard, and regarded as a commodity around which trading can take place. But what about the data side of things, George? Jim's articulated I think a pretty good case that we're going to start seeing some tools in the marketplace, and it's going to be interesting to see whether that is just further layering on top of all this craziness that is happening in the big data world, just adding to it in the ML world. But how does the data fit into this? Are we going to see something that looks like open source data in the marketplace? >> Yes, yes, and a modified yes. Let me take those in two pieces.
Just to be slightly technical, hopefully not being too pedantic: software used to mean algorithms and data structures, in other words the recipe for what to do, and the buckets for where to put the data. That has changed in the machine learning and analytics world, where the algorithms and data are so tied together, the instances of the data, not the buckets, that the data changes the algorithms and the algorithms change the data. The significance of that is, when we build applications now, it's never done. The construct we've been focusing on is the digital twin, more broadly defined than a smart device, but when you go to one vendor and you sort of partially build it, it's an evergreen thing, it's never done, then you go to the next vendor, but you need to be able to backport some core of that to the original vendor. So for all intents and purposes that's open source, but it boils down to actually the original Berkeley license for open source, not the Apache one everyone is using now. And remind me of the other question? >> The other issue is, are we going to see datasets become open source, like we see code bases and code fragments and algorithms becoming open source? >> Yes, this is also, just the way Amazon made infrastructure commoditized and rentable, there are going to be many datasets that used to be proprietary, like a Google web crawl, or the Google knowledge graph for disambiguating people, places, and things, that are either becoming open source or openly accessible by API, so when you put those resources together you're seeing a massive deflation, a massive shrinkage, in the capital intensity of building these sorts of apps.
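The entanglement of algorithms and data described above, where parameters learned from data become part of the deployed program, can be made concrete in a few lines. This is a minimal sketch under the assumption of a toy linear model; it is not tied to any vendor or framework, and the point is only that retraining on new data changes the application's behavior without any change to its source code.

```python
# A minimal sketch of code/data entanglement in ML: the deployed artifact is
# fitted parameters (data-derived) wrapped in code, so new data changes
# behavior with no change to the source.

def fit_line(xs, ys):
    # closed-form simple linear regression: y ~ w0 + w1 * x
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    w0 = mean_y - w1 * mean_x
    return w0, w1

def deploy(params):
    # "deployment" here just closes the learned parameters over a scoring function
    w0, w1 = params
    return lambda x: w0 + w1 * x

# First batch of data: the learned line is y = 1 + 2x
model_v1 = deploy(fit_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))
# New data arrives; the "same" application now behaves differently: y = 3x
model_v2 = deploy(fit_line([0.0, 1.0, 2.0], [0.0, 3.0, 6.0]))

print(model_v1(3.0), model_v2(3.0))  # prints: 7.0 9.0
```

In that sense the application really is "never done": the fitted parameters are as much a part of the program as the regression code, which is what makes the licensing of data alongside code an open question.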
>> So Neil, if we take a look at where we are this far, we can see that there is, even though we're moving to a services oriented model, Amazon for example is a company that is able to generate commercial rents out of open source software, Jim has made a pretty compelling case that open source software can be, or will emerge out of the tooling world for some of these new applications, there are going to be some examples of datasets, or at least APIs to datasets that will look more open source like, so it's not inconceivable that we'll see some actual open source data, I think GDPR, and some other regulations, we're still early in the process of figuring out how we're going to turn data into commodity, using Jim's words. But what about the personnel, what about the people? There were reasons why developers moved to open source, some of the soft reasons that motivated them to do things, who they work with, getting the recognition, working on relevant projects, working with relevant technologies, are we going to see a similar set of soft motivators diffuse into the data scientist world, so that these individuals, the real ones who are creating the real value, are going to have some degree of motivation to participate with each other, collaborate with each other in an open source way, what do you think? >> Good question, I think the answer is absolutely true, but it's not unique to data scientists, academics, scientists in molecular biology, civil engineers, they all want to be recognized by their peers, on some level beyond just what they're doing in their organization, but there is another segment of data scientists that are just guys working for a paycheck, and generating predictive analysis and helping the company along and so forth, and that's what they're going to do.
The whole open source thing, you remember object programming, you remember JavaBeans, you remember Web Services, we tried to turn developers into librarians, and when they wanted to develop something, you go to Github, I go to Github right now and I say I'm looking for a utility that can figure out why my face is so pink on this camera, I get 1000 listings of programs, and have no idea which ones work and which ones don't, so I think the whole open source thing is about to explode, it already has, in terms of piece parts. But I think managing in an organization is different, and when I say an organization, there's the Googles and the Amazons and so forth of the world, and then there's everybody else. >> Alright so we've identified an area where we can see some consequence of change where we can anticipate some change will be required to modernize the open source model, the licensing model, we see another one where the open source community's going to have to understand how to move from a product and code to a data and service orientation, can we think of any others? >> There is one other that I'd like to add to that, and that is compliance. You addressed it to some extent, but compliance brings some real-world requirements onto code and data, and you were saying earlier on that one of the options is bringing code and data so that they intermingle and change each other, I wonder whether that when you look at it from a compliance point of view will actually pass muster, because you need from a compliance point of view to prove, for example, in the health service, that it works, and it works the same way every time, and if you've got a set of code and data that doesn't work the same every time, you probably are going to get pushed back from the people who regulate health, that this is not, you can't do it that way, you'll have to find another way to do it.
But that again is, is it the same each time, so the point I'm making-- >> This is a bigger issue than just open source, this is an issue where the idea of continuous refinement of the code, and the data-- >> Automatic refinement. >> Automatic refinement, could in fact, we're going to have to change some compliance laws, is open source, is it possible the open source community might actually help us understand that problem? >> Absolutely, yes. >> I think that's a good point, I think that's a really interesting point, because you're right George, the idea of a continuous development, is not something that for example Sarbanes actually says I get this, Sarbanes actually says "Oh yeah, I get this." Sarbanes actually is like, yes the data, I acknowledge that this data is right, and I acknowledge the process by which it was created was right, now this is another subject, let's bring this up later, but I think it's relevant here, because in many respects it's a difference between an income statement and a balance sheet, right? Saying it's good now, is kind of like the income statement, but let's come back to this, because I think it's a bigger issue. You're asserting the open source community in fact may help solve this problem by coming up with new ways of conceiving say versioning of things, and stamping things and what is a distribution, what isn't a distribution, with some of these more tightly bound sets of-- >> What we find normally is that-- >> Jim: I think that we are going to-- >> Peter: Go on Jim.
>> Just to elaborate on what Peter was talking about, that whole theme, I think what we're going to see is more open source governance of models and data, within distributed development environments, using technologies like blockchain as a core enabler for these workflows, for these, as it were, general distributed hyperledgers that indicate the latest and greatest version of a given dataset, or a given model being developed somewhere around some common solution domain, I think those kinds of environments for governance will become critically important, as this pipeline for development and training and deployment of these assets, gets ever more distributed and virtual. >> By the way Jim I actually had a conversation with a very large open source distribution company a few months ago about this very point, and I agree, I think blockchain in fact could become a mechanism by which we track intellectual property, track intellectual contributions, find ways to then monetize those contributions, going back to what you were saying David, and perhaps that becomes something that looks like the basis of a new business model, for how we think about how open source goes after these loosey-goosey problems. >> But also to guarantee integrity without going through necessarily a central-- >> Very important, very important because at the end of the day George-- >> It's always hard to find somebody to maintain. >> Right, big companies, one of the big challenges that companies doing open source have today is that they want to be able to keep track of their intellectual property, both from a contribution standpoint, but also inside their own business, because they're very, very concerned that the stuff that they're creating that's proprietary to their business in a digital sense, might leave the building, and that's not something a lot of banks for example want to see happen.
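Jim's idea of a distributed ledger that records the latest version of a given dataset or model reduces, at its core, to a tamper-evident hash chain. The sketch below is a hypothetical stdlib-only illustration of that property; it is not the API of Hyperledger or any real blockchain framework:

```python
import hashlib
import json

def entry_hash(body):
    """Stable fingerprint of one ledger entry's body."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_version(ledger, dataset, fingerprint, author):
    """Record a new dataset/model version, chained to the previous entry."""
    entry = {"dataset": dataset, "data": fingerprint, "author": author,
             "prev": ledger[-1]["hash"] if ledger else "genesis"}
    entry["hash"] = entry_hash(entry)  # "hash" key is added only after hashing
    ledger.append(entry)

def verify(ledger):
    """Tampering with any entry breaks every hash after it."""
    prev = "genesis"
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev or e["hash"] != entry_hash(body):
            return False
        prev = e["hash"]
    return True

ledger = []
append_version(ledger, "claims-2018q1", "sha256:ab12", "alice")
append_version(ledger, "claims-2018q1", "sha256:cd34", "bob")
print(verify(ledger))            # True
ledger[0]["author"] = "mallory"  # rewrite history...
print(verify(ledger))            # ...and verification fails: False
```

Because each entry commits to its predecessor's hash, the provenance of contributions can be checked without a central registrar, which is the integrity-without-an-intermediary point raised in the exchange above.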
>> I want to stick one step into this logic process that I think we haven't yet discussed, which is, we're talking about now how end customers will consume this, but there's still a disconnect in terms of how the open source software vendors or even hybrid ones can get to market with this stuff, because between open source pricing models and pricing levels, we've seen a slow motion price collapse, and the problem is that, the new go to market motion is actually made up of many motions, which is discover, learn, try, buy, recommend, and within each of those, the motion was different, and you hear it's almost like a reflex, like when your doctor hits you on the knee and your leg kind of bounces, everybody says yeah we do land and expand, and land was to discover, learn, try augmented with inside sales, the recommend and standardize is still traditional enterprise software where someone's got to talk to IT and procurement about fitting into the broader architecture, and infrastructure of the firm, and to do that you still need what has always been called the most expensive migratory workforce in the world, which is an enterprise sales force. >> But I would suggest there's a big move towards standardization of stacks, true private cloud is about having a stack which is well established, and the relationship between all the different piece parts, and the stack itself has a person who is responsible for putting that stack together and maintaining that stack. >> So for a moment pretend that you are a CIO, are you going to buy OpenStack or are you going to buy the VMware stack? >> I'm going to buy the VMware stack. >> Because that's about open source?
>> No, the point I'm saying is that those open source communities or pieces, would then be absorbed into the stack as an OEM supplier as opposed to a direct supplier and I think that's true for all of these stacks, if you look at the stack for example and you have code from NetApp or whatever it is that's in that code and they're contributing it, you need an OEM agreement with that provider, and it doesn't necessarily have to be open source. >> Bottom line is this stuff is still really, really complicated. >> But this model of being an OEM provider is very different from growing an enterprise sales force, you're selling something that goes into the cost of goods sold of your customer, and that the cost of goods sold better be less than 15 percent, and preferably less than five percent. >> Your point is if you can't afford a sales force, an OEM agreement is a much better way of doing it. >> You have to get somebody else's sales force to do it for you. So look I'm going to do the Action Item on this, I think that this has been a great conversation again, David, George, Neil, Jim, thanks a lot. So here's the Action Item, nobody argues that open source hasn't been important, and nobody suggests that open source is not going to remain important, what we think based on our conversation today is that open source is going to go through some changes, and those changes will occur as a consequence of new folks that are going to be important to this, like data scientists, tied to some of the new streams of value in the industry, who may not have the same motivations that the old developer world had, new types of problems that are inherently more data oriented as opposed to process-oriented, and it's not as clear that the whole concept of data as an artifact, data as a convention, data as standards and commodities, are going to be as easy to define as it was in the code world.
As well as ultimately IT organizations increasingly moving towards an approach that focuses more on the consumption of services, as opposed to the consumption of product, so for these and many other reasons, our expectation is that the open source community is going to go through its own transformation as it tries to support current and future digital transformations. Now some of the areas that we think are going to be transformed, is we expect that there's going to be some pressure on licensing, we think there's going to be some pressure in how compliance is handled, and we think the open source community may in fact be able to help in that regard, and we think very importantly that there will be some pressure on the open source community trying to rationalize how it conceives of the new compute models, the new design models, because where open source always has been very successful is when we have a target we can collaborate to replicate and replace that target or provide a substitute. I think we can all agree that in 10 years we will be talking about how open source took some time to in fact put forward that TPC stack, that is, to define the true private cloud stack. So our expectation is that open source is going to remain relevant, we think it's going to go through some consequential changes, and we look forward to working with our clients to help them navigate what some of those changes are, both as committers, and also as consumers. Once again guys, thank you very much for this week's Action Item, this is Peter Burris, and until next week thank you very much for participating on Wikibon's Action Item. (slow techno music)
Brian Stevens, Google Cloud - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE
>> Narrator: Live from Boston, Massachusetts. It's theCUBE, covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> Hi, welcome back, I'm Stu Miniman, joined by my cohost John Troyer and happy to welcome back to the program Brian Stevens, who's the CTO of Google Cloud. Brian, thanks for joining us. >> I'm glad to, it's been a few years. >> All right, I wanted to bounce something off you. We always talk about, you know, it's like open source. You worked for in the past what is considered the most successful open source company for monetizing open source, which is Red Hat. We have posited at Wikibon that it's not necessarily the company, it's not only the companies that sell a product or a solution that make money off it, but I said, if it wasn't for things like Linux in general and open source, we wouldn't have a company like Google. Do you agree with that, you look at the market cap of a Google, I said if we didn't have Linux and we didn't have open source, Google probably couldn't exist today. >> Yeah, I don't think any of the hyperscale cloud companies would exist without open source and Linux and Intel. I think it's a big part of the stack, absolutely. >> All right. You made a comment at the beginning about what it means to be an open source person working at Google. The joke we all used to make was the rest of us are using what Google did 10 years ago, it eventually goes from that whitepaper all the way down to some product that you used internally and then maybe gets spun off. We wouldn't have Hadoop if it wasn't for Google. Just some of the amazing things that have come out of those people at Google. But what does it mean to be open source at Google and with Google? >> You get both, right? 'Cause I think that's the fun part is I don't think a week goes by where I don't get to discover something coming out of a resource group somewhere.
Now the latest is machine learning, you know, Spanner, because they'd learned how to do distributed time synchronization across geo data centers, like who does that, right? But Google has both the people and the desire and the ability to invest on the research side. And then you marry that innovation with everything that's happening in open source. It's a really perfect combination. And so instead of building these proprietary systems, it's all about how do we actually not just contribute to open source, but how do we actually build that interoperability framework, because you don't want cloud to be an island, you want it to be really integrated into developer tools, databases, infrastructure, et cetera. >> And a lot of that sounds like it plays into the Kubernetes story, 'cause, you know, Kubernetes is a piece that allows some similarities between wherever you place your data. Maybe give us a little bit more about what Google, you know, how do you decide what's internal, I think about like the Spanner program, which there's some other open source pieces coming up, looks like they read the whitepaper and they're trying to do some pieces. You said less whitepapers, more code coming out of people, what does that mean?
You're really trying to get rid of the impedance mismatch all the way across the stack, and one of the best ways you can do that is by contributing new system designs. There's a little bit less of that happening in the analytics space now though, I think the new ground for that is everything that's happening in machine learning with Tensor Flow et cetera. >> Yeah, absolutely. There was some mention in the keynote this morning, all of the AI and ML, I mean, Google with Tensor Flow, even Amazon themselves getting involved more with open source. You said you couldn't build the hyper scales without them, but is that the, do they start with open source, do you see, or? >> Well, I think that most people are running on a Linux backplane. It's a little bit different in Google 'cause we got an underlying provisioning system called the Borg. And that just works, so some things work, don't change them. Here is where you really want to be open source first are areas that are just under active evolution, because then you actually can join that movement of active evolution. Developer tools are kind of like that. Even machine learning. Machine learning's super strategic to just about every company out there. But what Google did by actually open sourcing Tensor Flow is now they created a canvas, that community, we talk about that here, but for data scientists to collaborate, and these are people that didn't do much in open source prior, but you've given that ability to sort of come up with the best ideas and to innovate in code. >> I wanted to ask a little bit about the enterprise, right. We can all make jokes about enterprising is what everybody should've been doing 10 years ago, and they're finally getting to. But on the other hand, Red Hat, very enterprise focused company. OpenStack, service provider and very enterprise focused. One of the things that Google Cloud is doing... 
Well, I guess the criticism has typically been how does Google as a company and as a culture and as a cloud focused on the enterprise, especially bringing advanced topics like machine learning and things like that, which to a traditional IT person are a little foreign. So I just am interested in kind of how you're viewing, how do we approach the needs of the enterprise, meet them where they are today, while yet giving them an access to a whole set of services and tools that are actually going to take them into a business transformation stance? >> Sure. And that's because you end up as a public cloud provider with the enterprise, you end up having multiple conversations. You certainly have one of your primary audiences, the IT team, right. And so you have to earn trust and help them understand the tools and your strategy and your commitment to enterprise. And then you have CSOs, right, and the CEO, that's worried about everything security and risk and compliance, so it's a little bit different than your IT department. And then what's happening with machine learning and some of the higher end services is now you're actually building solutions for lines of business. So you're not talking to the IT teams with machine learning and you're not talking to the CSOs, you're really talking around business transformation. And when you're actually, if you're going into healthcare, if you're going into financial, it's a whole different team when you're talking about machine learning. So what happens is Google's really got a segmented three sort of discreet conversations that happen at separate points of time, but all of which are enterprise focused, 'cause they all have to marry together. Even though there may be interest in machine learning, if you don't wrap that in an enterprise security model and a way that IT can sustain and enable and deal with identity and all the other aspects, then you'll come up short. >> Yeah. Building on that. 
One of the critiques of OpenStack for years has been it's tough. I think about one of the critiques of Google is like, oh well, Google build stuff for Google engineers, we're not Google engineers, you know, Google's got the smartest people and therefore we're not worthy to be able to handle some of that. What's your response to that? How do you put some of those together? >> Of course, Google's really smart, but there's smart people everywhere. And I don't think that's it. I think the issue is, you know, Google had to build it for themselves, right, they'd build it for search and build it for apps and build it for YouTube. And OpenStack's got a harder problem in a way, when you think about it, 'cause they're building it for everybody. And that was the Red Hat model as well, it's not just about building it for Goldman Sachs, it's building it for every vertical. And so it's supposed to be hard. This isn't just about building a technology stack and saying we're done, we're going to move on. This community has to make sure that it works across the industry. And that doesn't happen in six years, it takes a longer period of time to do that, and it just means keeping your focus on it. And then you deal with all the use cases over time and then you build, that's what getting to a unified commoditized platform delivers. >> I love that, absolutely. We tend to oversimplify things and, right, building from the ground up some infrastructure stack that can live in any data center is a big challenge. I wrote an article years ago about Amazon hyperoptimizes. They only have to build for one data center, it's theirs. At Google, you understand what set of applications you're going to be running, you build your applications and the infrastructure supports it underneath that. What are some of the big challenges you're working on, some of the meaty things that are exciting you in the technology space today? >> In a way, it's similar. 
It's just that at least our stack's our stack, but what happens is then we have to marry that into the operational environments, not just for a niche of customers, but for every enterprise segment that's out there. What you end up realizing is that it ends up becoming more of a competency challenge than a technology issue because cloud is still, you know, public cloud is still really new. It's consolidating but it's still relatively new when you start to think about these journeys that happen in the IT world. So a lot of it for us is really that technical enablement of customers that want to get to Google Cloud, but how do you actually help them? And so it's really a people and processes kind of conversation over how fast is your virtual machine. >> One of the things I think is interesting about the Google Cloud that has developed is the role of the SRE. And Google has been, has invented that, wrote the book on it, literally, is training others, has partnerships to help train others with their SREs and the CRE program. So much of the people formerly known as sysadmins, in this new cloud world, some of them are architects, but some of them will end up being operators and SREs. How do you see the balance in this upscaling of kind of the architecture and the traditional infrastructure and capacities and app dev versus operations, how important is operations in our new world? >> It's everything. And that's why I think people, you know... What's funny is that if you do this code handoff where the software developers build code and then they hand it to a team to run and deploy, developers never become great at building systems that can be operationally managed and maintained. And so I think that was sort of the aha moment, as best I understand the SRE model at Google is that until you can actually deliver code that can be maintained or alive, well then the software developer owns that problem.
The SRE organization only comes in at that point in time where they hand off, and they're software developers. They're every bit as skilled software developers as the engineers are that are building the code, it's just that's the problem they want to decode, which I think is actually a harder problem than writing the code. 'Cause when you think about it for a public cloud, it's like, how do you actually make change, right, but keep the plane flying? And to make sure that it works with everything in an ecosystem. At a period of time where you never really had a validation stage, because in the land of delivering ISV software, you always have the six month, nine month evaluation phase to bring in a new operating system or something else, or all the ecosystem tests around that. Cloud's harder, the magic of cloud is you don't have that window, but you still have to guarantee the same results. One of the things that we did around that was we took the page out of the SRE playbook, which is how does Google do it, and what we realized is that, even though public cloud's moved the layers up, enterprises still have the same issue. Because they're deploying critical applications and workloads on top. How do they do that and how do they keep those workloads running and what are their mechanisms for managing availability, service level objectives, shared dashboards, and that's why we created the CRE team, which is customer reliability engineering, which is a playbook of SRE, but they work directly with end users. And that's part of the how do we help them get to Google Cloud, part of it's like really understanding their application stacks and helping them build those operational procedures, so they become SREs if you will.
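The CRE mechanics Brian describes, managing availability against explicit service level objectives, come down to error-budget arithmetic. A hypothetical sketch follows; the SLO and incident numbers are invented for illustration and are not Google's:

```python
def error_budget_minutes(slo, days=30):
    """Downtime allowed in a window while still meeting an availability SLO."""
    return days * 24 * 60 * (1 - slo)

def budget_remaining(slo, downtime_minutes, days=30):
    """Budget left after the downtime already spent this window."""
    return error_budget_minutes(slo, days) - downtime_minutes

# A 99.9% SLO over a 30-day window allows about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))
# After a 30-minute incident, about 13.2 minutes of budget remain.
print(round(budget_remaining(0.999, 30), 1))
```

A shared number like this is what lets the development side and the SRE/CRE side argue from the same dashboard: once the budget for the window is spent, risky changes wait until reliability recovers.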
>> Brian, one of the things I, if you look at OpenStack, it's really, it's the infrastructure layer that it handles, when I think about Google Cloud, the area that you're strongest and, you know, you're welcome to correct me, but it's really when we talk about data, how you use data, how analytics, your leadership you're taking in the machine learning space. Is it okay for OpenStack to just handle those lower levels and let other projects sit on top of it? And I'm curious as to where Google Cloud sits. >> I think that was a lower level aha moment for me, even prior to Google, was it was, I did have a lens and it was all about infrastructure. And I think the infrastructure is every bit as important as it ever was. But the fact that some of these services that don't exist in the on-premise world that live in Google Cloud are the ones that are transformative change, as opposed to just giving you operational, easing the operational burden, easing the security burden. But it's some of these add-on services that are the ones that really changed here, bring around business transformation. The reason we have been moving away from Hadoop as an example, not entirely but just because Hadoop's a batch oriented application. >> Could go to Spark, Flink, everything beyond that. >> Sure, and also now when you get to real time and streaming ingest, you can have ingest data pipelines, data coming from multiple sources. But then you can action on that data instantly, and a lot of businesses require, or ours certainly does and I think a lot of our customers' businesses do, the time to action really matters, and those are the types of services that, at least at scale, don't really exist anywhere else and machine learning, the ability of our custom ASICs to support machine learning. But I don't think it's a one versus the other, I think that brings about how do you allow enterprises to have both.
And not have to choose between public cloud and on premise, or doing (mumbles) services or (mumbles) services, because if you ask them, the best thing they can have is actually how do you marry the two environments together so they don't look, again, back to those impedance differences. >> Yeah, and I think that's a great point, we've talked about OpenStack fitting into that hybrid or multi-cloud world a bunch. The challenge I guess we look at is some of those really cool features that are game changers that I have in public cloud that I can't do in my own data center, how do we bridge that? Started to see the reach or the APIs that do that, but how do you see that playing out? >> Because you don't have to bring them in. Because if you think about the fabric of IT, the fabric of IT is that Google's data center in that way just becomes an extension of the data center that a large enterprise is already using anyway. So it's through us. So they aren't going to see the lines of distinction, only we and sort of the IT side see that. There isn't going to be a seam, as long as they have an existing platform and they can take advantage of those services, and it doesn't mean that their workload has to be portable and the services have to exist in both places, it's just a data extension with some pretty compelling services. >> I think back, you know, Hadoop was let me bring the compute to the data 'cause the data's big and can't be moved. Look at edge computing now, I'm not going to be able to move all that data from the edge, I don't have the networking connectivity. There's certain pieces which we'll come back to, you know, a core public cloud, but I wonder if you can comment on some of those edge pieces, how you see that fitting in? We've talked a little bit about it here at OpenStack, but 'cause you're Google. >> I think it's the evolution. When we look at, we just even see the edge of our network, the edge of our network is in, it's 173 countries and regions globally.
And so that edge of the network is full compute and caching. And so even for us, we're looking at what sort of compute services you bring to the edge of the network. Low latency really matters, and proximity matters. The easiest, obvious examples are gaming, but there are others as well, like trading. But still, if you want to take advantage of that foundation, it shouldn't be one where you have to dive into the specificities of a single provider. You'd really want that abstraction layer across the edge, whether that's Docker and a defined set of APIs around data management, delivery, and security. That probably gives you that edge computing play, and then you want to build around that on Google's edge, and you want to build around that on a telco's edge. So I don't think it really becomes a question of whether it's centralized or at the edge; it's really, what's the architecture to deliver it.
>> All right. Brian, I want to give you the opportunity for a final word, things either from OpenStack, retrospectively, or Google looking forward, that you'd like to leave our audience with.
>> Wow, closing remarks. You know, I think the continuity here is open source. And I know the backdrop of this is OpenStack, but it's really that open source is the accepted foundation and substrate for IT computing up the stack. So I think that's not changing. The faces may change, and what we call these projects may change, but that's the evolution, and I think there's really no turning back on that now.
>> Brian Stevens, always a pleasure to catch up with you. We'll be back with lots more coverage here with theCUBE. Thanks for watching. (energetic music)
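[Editor's note: the batch-versus-streaming distinction discussed above, Hadoop-style jobs that compute over a whole dataset versus pipelines that act on each event as it arrives, can be sketched in a few lines of plain Python. This is an illustrative sketch only, not from the interview; all function names and the threshold value are hypothetical.]

```python
def batch_process(events):
    """Hadoop-style batch: wait for the whole dataset, then compute once."""
    return sum(e["value"] for e in events)

def stream_process(events, on_alert, threshold=100):
    """Streaming-style: act on each event as it arrives (time to action)."""
    running = 0
    for e in events:
        running += e["value"]
        if e["value"] > threshold:
            on_alert(e)  # react immediately, mid-stream
    return running

events = [{"value": v} for v in (40, 150, 30)]
alerts = []

# Both styles reach the same total, but only streaming reacts mid-run.
assert batch_process(events) == 220
assert stream_process(events, alerts.append) == 220
assert alerts == [{"value": 150}]
```

The point Brian makes about "time to action" is the difference between the two loops: the batch version cannot alert on the spike until the entire dataset has landed, while the streaming version reacts the moment the event passes through.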