Brian Gracely, Red Hat | KubeCon + CloudNativeCon Europe 2021 - Virtual
>> From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners. >> Hello, welcome back to theCUBE's coverage of KubeCon 2021 CloudNativeCon Europe Virtual, I'm John Furrier your host, preview with Brian Gracely from Red Hat Senior Director Product Strategy Cloud Business Unit Brian Gracely great to see you. Former CUBE host CUBE alumni, big time strategist at Red Hat, great to see you, always great. And also the founder of Cloudcast which is an amazing podcast on cloud, part of the cloud (indistinct), great to see you Brian. Hope's all well. >> Great to see you too, you know for years, theCUBE was always sort of the ESPN of tech, I feel like, you know ESPN has become nothing but highlights. This is where all the good conversation is. It's theCUBE has become sort of the the clubhouse of tech, if you will. I know that's that's an area you're focused on, so yeah I'm excited to be back on and good to talk to you. >> It's funny you know, with all the events going away loved going out extracting the signal from the noise, you know, game day kind of vibe. CUBE Virtual has really expanded, so it's been so much more fun because we can get more people easy to dial in. So we're going to keep that feature post COVID. You're going to hear more about theCUBE Virtual hybrid events are going to be a big part of it, which is great because as you know and we've talked about communities and ecosystems are huge advantage right now it's been a big part of the Red Hat story. Now part of IBM bringing that mojo to the table the role of ecosystems with hybrid cloud is so critical. Can you share your thoughts on this? Because I know you study it, you have podcasts you've had one for many years, you understand that democratization and this new direct to audience kind of concept. Share your thoughts on this new ecosystem. >> Yeah, I think so, you know, we're sort of putting this in the context of what we all sort of familiarly call KubeCon but you know, if we think about it, it started as KubeCon it was sort of about this one technology but it's always been CloudNativeCon and we've sort of downplayed the cloud native part of it. But even if we think about it now, you know Kubernetes to a certain extent has kind of, you know there's this feeling around the community that, that piece of the puzzle is kind of boring. You know, it's 21 releases in, and there's lots of different offerings that you can get access to. There's still, you know, a lot of innovation but the rest of the ecosystem has just exploded. So it's, you know, there are ecosystem partners and companies that are working on edge and miniaturization. You know, we're seeing things like Kubernetes now getting into outer space and it's in the space station. We're seeing, you know, Linux get on Mars. But we're also seeing, you know, stuff on the other side of the spectrum. We're sort of seeing, you know awesome people doing database work and streaming and AI and ML on top of Kubernetes. So, you know, the ecosystem is doing what you'd expect it to do once one part of it gets stable. The innovation sort of builds on top of it. And, you know, even though we're virtual, we're still seeing just tons and tons of contributions, different companies different people stepping up and leading. So it's been really cool to watch the last few years. >> Yes, interesting point about the CloudNativeCon. 
That's an interesting insight, and I totally agree with you. And I think it's worth double clicking on. Let me just ask you, because when you look at like, say Kubernetes, okay, it's enabled a lot. Okay, it's been called the dial tone of Cloud native. I think Pat Gelsinger of VMware used that term. We call it the kind of the interoperability layer it enables more large scale deployments. So you're seeing a lot more Kubernetes enablement on clusters. Which is causing more hybrid cloud which means more Cloud native. So it actually is creating a network effect in and of itself with more Cloud native components and it's changing the development cycle. So the question I want to ask you is one how does a customer deal with that? Because people are saying, I like hybrid. I agree, Multicloud is coming around the corner. And of course, Multicloud is just a subsystem of resource underneath hybrid. How do I connect it all? Now I have multiple vendors, I have multiple clusters. I'm cross-cloud, I'm connecting multiple clouds multiple services, Kubernetes clusters, some get stood up some gets to down, it's very dynamic. >> Yeah, it's very dynamic. It's actually, you know, just coincidentally, you know, our lead architect, a guy named Clayton Coleman, who was one of the Kubernetes founders, is going to give a talk on sort of Kubernetes is this hybrid control plane. So we're already starting to see the tentacles come out of it. So you know how we do cross cloud networking how we do cross cloud provisioning of services. So like, how do I go discover what's in other clouds? You know and I think like you said, it took people a few years to figure out, like how do I use this new thing, this Kubernetes thing. How do I harness it. And, but the demand has since become "I have to do multi-cloud." And that means, you know, hey our company acquires companies, so you know, we don't necessarily know where that next company we acquire is going to run. Are they going to run on AWS? Are they going to, you know, run on Azure I've got to be able to run in multiple places. You know, we're seeing banking industries say, "hey, look cloud's now a viable target for you to put your applications, but you have to treat multiple clouds as if they're your backup domains." And so we're, you know, we're seeing both, you know the way business operates whether it's acquisitions or new things driving it. We're seeing regulations driving hybrid and multi-cloud and, even you know, even if the stalwart were to you know, set for a long time, well the world's only going to be public cloud and sort of you know, legacy data centers even those folks are now coming around to "I've got to bring hybrid to, to these places." So it's been more than just technology. It's been, you know, industries pushing it regulations pushing it, a lot of stuff. So, but like I said, we're going to be talking about kind of our future, our vision on that, our future on that. And, you know Red Hat everything we end up doing is a community activity. So we expect a lot of people will get on board with it >> You know, for all the old timers out there they can relate to this. But I remember in the 80's the OSI Open Systems Interconnect, and I was chatting with Paul Cormier about this because we were kind of grew up through that generation. That disrupted network protocols that were proprietary and that opened the door for massive, massive growth massive innovation around just getting that interoperability with TCP/IP, and then everything else happened. 
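To make the multi-cluster, cross-cloud picture above a bit more concrete, here is a minimal kubeconfig sketch that gives one operator a single point of control over clusters running on-prem and in two public clouds. The cluster names, server URLs and credential are illustrative assumptions, not details from the conversation; switching clusters is just a matter of switching contexts (for example, `kubectl config use-context aws-prod`).

```yaml
# Hypothetical kubeconfig spanning three Kubernetes clusters in different environments.
apiVersion: v1
kind: Config
clusters:
- name: onprem-openshift
  cluster:
    server: https://api.openshift.internal.example.com:6443
- name: aws-prod
  cluster:
    server: https://k8s-prod.example-aws.com:6443
- name: azure-dr
  cluster:
    server: https://k8s-dr.example-azure.com:6443
users:
- name: platform-admin
  user:
    token: "<redacted>"          # placeholder credential
contexts:
- name: onprem-openshift
  context: {cluster: onprem-openshift, user: platform-admin}
- name: aws-prod
  context: {cluster: aws-prod, user: platform-admin}
- name: azure-dr
  context: {cluster: azure-dr, user: platform-admin}
current-context: aws-prod
```

A hybrid control plane of the kind Coleman's talk describes sits a level above this, discovering and provisioning clusters across clouds rather than leaving that bookkeeping to hand-edited files.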
So Kubernetes does that, that's a phenomenal impact. So Cloud native to me is at that stage where it's totally next-gen and it's happening really fast. And a lot of people getting caught off guard, Brian. So you know, I got to to ask you as a product strategist, what's your, how would you give them the navigation of where that North star is? If I'm a customer, okay, I got to figure out where I got to navigate now. I know it's super volatile, changing super fast. What's your advice? >> I think it's a couple of pieces, you know we're seeing more and more that, you know, the technology decisions don't get driven out of sort of central IT as much anymore right? We sort of talk all the time that every business opportunity, every business project has a technology component to it. And I think what we're seeing is the companies that tend to be successful with it have built up the muscle, built up the skill set to say, okay, when this line of business says, I need to do something new and innovative I've got the capabilities to sort of stand behind that. They're not out trying to learn it new they're not chasing it. So that's a big piece of it, is letting the business drive your technology decisions as opposed to what happened for a long time which was we built out technology, we hope they would come. You know, the other piece of it is I think because we're seeing so much push from different directions. So we're seeing, you know people put technology out at the edge. We're able to do some, you know unique scalable things, you know in the cloud and so forth That, you know more and more companies are having to say, "hey, look, I'm not, I'm not in the pharmaceutical business. I'm not in the automotive business, I'm in software." And so, you know the companies that realize that faster, and then, you know once they sort of come to those realizations they realize, that's my new normal, those are the ones that are investing in software skills. And they're not afraid to say, look, you know even if my existing staff is, you know, 30 years of sort of history, I'm not afraid to bring in some folks that that'll break a few eggs and, you know, and use them as a lighthouse within their organization to retrain and sort of reset, you know, what's possible. So it's the business doesn't move. That's the the thing that drives all of them. And it's, if you embrace it, we see a lot of success. It's the ones that, that push back on it really hard. And, you know the market tends to sort of push back on them as well. >> Well we're previewing KubeCon CloudNativeCon. We'll amplify that it's CloudNativeCon as well. You guys bought StackRox, okay, so interesting company, not an open source company they have soon to be, I'm assuring, but Advanced Cluster Security, ACS, as it's known it's really been a key part of Red Hat. Can you give us the strategy behind that deal? What does that product, how does it fit in that's a lot of people are really talking about this acquisition. >> Yeah so here's the way we looked at it, is we've learned a couple of things over the last say five years that we've been really head down in Kubernetes, right? One is, we've always embedded a lot of security capabilities in the platform. So OpenShift being our core Kubernetes platform. And then what's happened over time is customers have said to us, "that's great, you've made the platform very secure" but the reality is, you know, our software supply chain. So the way that we build applications that, you know we need to secure that better. 
We need to deal with these more dynamic environments. And then once the applications are deployed they interact with various types of networks. I need to better secure those environments too. So we realized that we needed to expand our functionality beyond the core platform of OpenShift. And then the second thing that we've learned over the last number of years is to be successful in this space, it's really hard to take technology that wasn't designed for containers, or it wasn't designed for Kubernetes and kind of retrofit it back into that. And so when we were looking at potential acquisition targets, we really narrowed down to companies whose fundamental technologies were you know, Kubernetes-centric, you know having had to modify something to get to Kubernetes, and StackRox was really the leader in that space. They really, you know have been the leader in enterprise Kubernetes security. And the great thing about them was, you know not only did they have this Kubernetes expertise but on top of that, probably half of their customers were already OpenShift customers. And about 3/4 of their customers were using you know, native Kubernetes services and other clouds. So, you know, when we went and talked to them and said, "Hey we believe in Kubernetes, we believe in multi-cloud. We believe in open source," they said, "yeah, those are all the foundational things for us." And to your point about it, you know, maybe not being an open source company, they actually had a number of sort of ancillary projects that were open source. So they weren't unfamiliar to it. And then now that the acquisition's closed, we will do what we do with every piece of Red Hat technology. We'll make sure that within a reasonable period of time that it's made open source. And so you know, it's good for the community. It allows them to keep focusing on their innovation. >> Yeah you've got to get that code out there cool. Brian, I'm hearing about Platform Plus what is that about? Take us through that. >> Yeah, so you know, one of the things that our customers, you know, have come to us over time is it's you know, it's like, I've been saying kind of throughout this discussion, right? Kubernetes is foundational, but it's become pretty stable. The things that people are solving for now are like, you highlighted lots and lots of clusters, they're all over the place. That was something that our advanced cluster management capabilities were able to solve for people. Once you start getting into lots of places you've got to be able to secure things everywhere you go. And so OpenShift for us really allows us to bundle together, you know, sort of the complete set of the portfolio. So the platform, security management, and it also gives us the foundational pieces or it allows our customers to buy the foundational pieces that are going to help them do multi and hybrid cloud. And, you know, when we bundle that we can save them probably 25% in terms of sort of product acquisition. And then obviously the integration work we do you know, saves a ton on the operational side. So it's a new way for us to, to not only bundle the platform and the technologies but it gets customers in a mindset that says, "hey we've moved past sort of single environments to hybrid and multi-cloud environments. >> Awesome, well thanks for the update on that, appreciate it. One of the things going into KubeCon, and that we're watching closely is this Cloud native developer action. 
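As a concrete illustration of the runtime exposure Gracely describes above, where applications interact with various networks once they are deployed, here is a minimal Kubernetes NetworkPolicy of the kind that cluster security tooling such as ACS is meant to audit and enforce. The namespace, labels and port are hypothetical; this is a sketch of the pattern, not a Red Hat-supplied policy.

```yaml
# Hypothetical policy: only the storefront pods may reach the payments service,
# and only on its service port; all other ingress to payments is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-storefront-only
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: storefront
    ports:
    - protocol: TCP
      port: 8443
```

Applying it is a one-liner (`kubectl apply -f payments-policy.yaml`); the harder part, and the part the conversation is about, is doing this consistently across the supply chain and across every cluster an organization runs.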
Certainly end users want to get that in a separate section with you but the end user contribution, which is like exploding. But on the developer side there's a real trend towards adding stronger consistency programmability support for more use cases okay. Where it's becoming more of a data platform as a requirement. >> Brian: Right. >> So how, so that's a trend so I'm kind of thinking, there's no disagreement on that. >> Brian: No, absolutely. >> What does that mean? Like I'm a customer, that sounds good. How do I make that happen? 'Cause that's the critical discussion right now in the DevOps, DevSecOps day, two operations. What you want to call it. This is the number one concern for developers and that solution architect, consistency, programmability more use cases with data as a platform. >> Yeah, I think, you know the way I kind of frame this up was you know, for any for any organization, the last thing you want to to do is sort of keep investing in lots of platforms, right? So platforms are great on their surface but once you're having to manage five and six and, you know 10 or however many you're managing, the economies of scale go away. And so what's been really interesting to watch with Kubernetes is, you know when we first got started everything was Cloud native application but that really was sort of, you know shorthand for stateless applications. We quickly saw a move to, you know, people that said, "Hey I can modernize something, you know, a Stateful application and we add that into Kubernetes, right? The community added the ability to do Stateful applications and that got people a certain amount of the way. And they sort of started saying, okay maybe Kubernetes can help me peel off some things of an existing platform. So I can peel off, you know Java workloads or I can peel off, what's been this explosion is the data community, if you will. So, you know, the TensorFlows the PItorches, you know, the Apache community with things like Couchbase and Kafka, TensorFlow, all these things that, you know maybe in the past didn't necessarily, had their own sort of underlying system are now defaulting to Kubernetes. And what we see because of that is, you know people now can say, okay, these data workloads these AI and ML workloads are so important to my business, right? Like I can directly point to cost savings. I can point to, you know, driving innovation and because Kubernetes is now their default sort of way of running, you know we're seeing just sort of what used to be, you know small islands of clusters become these enormous footprints whether they're in the cloud or in their data center. And that's almost become, you know, the most prevalent most widely used use case. And again, it makes total sense. It's exactly the trends that we've seen in our industry, even before Kubernetes. And now people are saying, okay, I can consolidate a lot of stuff on Kubernetes. I can get away from all those silos. So, you know, that's been a huge thing over the last probably year plus. And the cool thing is we've also seen, you know the hardware vendors. So whether it's Intel or Nvidia, especially around GPUs, really getting on board and trying to make that simpler. So it's not just the software ecosystem. It's also the hardware ecosystem, really getting on board. >> Awesome, Brian let me get your thoughts on the cloud versus the power dynamics between the cloud players and the open source software vendors. 
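A minimal sketch of the stateful pattern Gracely references above: a data service that brings stable identity and its own durable storage to Kubernetes through a StatefulSet and volume claim templates. The image, port and sizes are assumptions for illustration, not any particular product's defaults.

```yaml
# Hypothetical StatefulSet for a three-node data service (broker, database, etc.).
# Each replica gets a stable network identity and its own persistent volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: event-broker
spec:
  serviceName: event-broker
  replicas: 3
  selector:
    matchLabels:
      app: event-broker
  template:
    metadata:
      labels:
        app: event-broker
    spec:
      containers:
      - name: broker
        image: registry.example.com/event-broker:1.4   # placeholder image
        ports:
        - containerPort: 9092
        volumeMounts:
        - name: data
          mountPath: /var/lib/broker
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
```

For the AI/ML side of the same trend, a container requests accelerators the way it requests CPU or memory, for example `resources: {limits: {nvidia.com/gpu: 1}}`, which is where the hardware-ecosystem work from Intel and Nvidia that Gracely mentions comes in.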
So what's the Red Hat relationship with the cloud players with the hybrid architecture, 'cause you want to set up the modern day developer environment, we get that right. And it's hybrid, what's the relationship with the cloud players? >> You know, I think so we we've always had two philosophies that haven't really changed. One is, we believe in open source and open licensing. So you haven't seen us look at the cloud as, a competitive threat, right? We didn't want to make our business, and the way we compete in business, you know change our philosophy in software. So we've always sort of maintained open licenses permissive licenses, but the second piece is you know, we've looked at the cloud providers as very much partners. And mostly because our customers look at them as partners. So, you know, if Delta Airlines or Deutsche Bank or somebody says, "hey that cloud provider is going to be our partner and we want you to be part of that journey, we need to be partners with that cloud as well." And you've seen that sort of manifest itself in terms of, you know, we haven't gone and set up new SaaS offerings that are Red Hat offerings. We've actually taken a different approach than a lot of the open source companies. And we've said we're going to embed our capabilities, especially, you know OpenShift into AWS, into Azure into IBM cloud working with Google cloud. So we'd look at them very much as a partner. I think it aligns to how Red Hat's done things in the past. And you know, we think, you know even though it maybe easy to sort of see a way of monetizing things you know, changing licensing, we've always found that, you've got to allow the ecosystem to compete. You've got to allow customers to go where they want to go. And we try and be there in the most consumable way possible. So that's worked out really well for us. >> So I got to bring up the end user participation component. That's a big theme here at KubeCon going into it and around the event is, and we've seen this trend happen. I mean, Envoy, Lyft the laying examples are out there. But they're more end-use enterprises coming in. So the enterprise class I call classic enterprise end user participation is at an all time high in opensource. You guys have the biggest portfolio of enterprises in the business. What's the trend that you're seeing because it used to be limited to the hyperscalers the Lyfts and the Facebooks and the big guys. Now you have, you know enterprises coming in the business model is working, can you just share your thoughts on CloudNativeCons participation for end users? >> Yeah, I think we're definitely seeing a blurring of lines between what used to be the Silicon Valley companies were the ones that would create innovation. So like you mentioned Lyft, or, you know LinkedIn doing Kafka or Twitter doing you know, whatever. But as we've seen more and more especially enterprises look at themselves as software companies right. So, you know if you talk about, you know, Ford or Volkswagen they think of themselves as a software company, almost more than they think about themselves as a car company, right. They're a sort of mobile transportation company you know, something like that. And so they look at themselves as I've got to I've got to have software as an expertise. I've got to compete for the best talent, no matter where that talent is, right? So it doesn't have to be in Detroit or in Germany or wherever I can go get that anywhere. 
And I think what they really, they look for us to do is you know, they've got great technology chops but they don't always understand kind of the the nuances and the dynamics of open-source right. They're used to having their own proprietary internal stuff. And so a lot of times they'll come to us, not you know, "Hey how do we work with the project?" But you know like here's new technology. But they'll come to us and they'll say "how do we be good, good stewards in this community? How do we make sure that we can set up our own internal open source office and have that group, work with communities?" And so the dynamics have really changed. I think a lot of them have, you know they've looked at Silicon Valley for years and now they're modeling it, but it's, you know, for us it's great because now we're talking the same language, you know we're able to share sort of experiences we're able to share best practices. So it is really, really interesting in terms of, you know, how far that whole sort of software is eating the world thing is materialized in sort of every industry. >> Yeah and it's the workloads of expanding Cloud native everywhere edge is blowing up big time. Brian, final question for you before we break. >> You bet. >> Thanks for coming on and always great to chat with you. It's always riffing and getting the data out too. What's your expectation for KubeCon CloudNativeCon this year? What are you expecting to see? What highlights do you expect will come out of CloudNativeCon KubeCon this year? >> Yeah, I think, you know like I said, I think it's going to be much more on the Cloud native side, you know we're seeing a ton of new communities come out. I think that's going to be the big headline is the number of new communities that are, you know have sort of built up a following. So whether it's Crossplane or whether it's, you know get-ops or whether it's, you know expanding around the work that's going on in operators we're going to see a whole bunch of projects around, you know, developer sort of frameworks and developer experience and so forth. So I think the big thing we're going to see is sort of this next stage of, you know a thousand flowers are blooming and we're going to see probably a half dozen or so new communities come out of this one really strong and you know the trends around those are going to accelerate. So I think that'll probably be the biggest takeaway. And then I think just the fact that the community is going to come out stronger after the pandemic than maybe it did before, because we're learning you know, new ways to work remotely, and that, that brings in a ton of new companies and contributors. So I think those two big things will be the headlines. And, you know, the state of the community is strong as they, as they like to say >> Yeah, love the ecosystem, I think the values are going to be network effect, ecosystems, integration standards evolving very quickly out in the open. Great to see Brian Gracely Senior Director Product Strategy at Red Hat for the cloud business unit, also podcasts are over a million episode downloads for the cloud cast podcast, thecloudcast.net. What's it Brian, what's the stats now. >> Yeah, I think we've, we've done over 500 shows. We're you know, about a million and a half listeners a year. So it's, you know again, it's great to have community followings and, you know, and meet people from around the world. 
So, you know, so many of these things intersect. It's a real pleasure to work with everybody. >> You've created a culture, well done. We've all been there, done that, great job. >> Thank you. >> Check out the Cloudcast, of course, and Red Hat's got the great OpenShift mojo going into KubeCon. Brian, thanks for coming on. >> Thanks John. >> Okay, so CUBE coverage of KubeCon, CloudNativeCon Europe 2021 Virtual, I'm John Furrier with theCUBE Virtual. Thanks for watching. (upbeat music)
Zhamak Dehghani, Director of Emerging Technologies at ThoughtWorks
(bright music) >> In 2009, Hal Varian, Google's Chief Economist, said that statistician would be the sexiest job of the coming decade. The modern big data movement really took off later, in the following year, after the second Hadoop World, which was hosted by Cloudera in New York City. Jeff Hammerbacher famously declared to me and John Furrier, in "theCUBE," that the best minds of his generation were trying to figure out how to get people to click on ads. And he said that sucks. The industry was abuzz with the realization that data was the new competitive weapon. Hadoop was heralded as the new data management paradigm. Now, what actually transpired over the next 10 years was that only a small handful of companies could really master the complexities of big data and attract the data science talent really necessary to realize massive returns. As well, back then, cloud was in the early stages of its adoption. When you think about it, at the beginning of the last decade, and as the years passed, more and more data got moved to the cloud, and the number of data sources absolutely exploded, experimentation accelerated, as did the pace of change. Complexity just overwhelmed big data infrastructures and data teams, leading to a continuous stream of incremental technical improvements designed to try and keep pace, things like data lakes, data hubs, new open source projects, new tools, which piled on even more complexity. And as we reported, we believe what's needed is a complete bit flip in how we approach data architectures. Our next guest is Zhamak Dehghani, who is the Director of Emerging Technologies at ThoughtWorks. Zhamak is a software engineer, architect, thought leader and advisor to some of the world's most prominent enterprises. She's, in my view, one of the foremost advocates for rethinking and changing the way we create and manage data architectures, favoring a decentralized over monolithic structure, and elevating domain knowledge as a primary criterion in how we organize so-called big data teams and platforms. Zhamak, welcome to theCUBE, it's a pleasure to have you on the program. >> Hi David, it's wonderful to be here. >> Okay. So you're pretty outspoken about the need for a paradigm shift in how we manage our data and our platforms at scale. Why do you feel we need such a radical change? What are your thoughts there? >> Well, I think if you just look back over the last decades, you gave us a summary of what happened since 2010. But even if we go back before then, what we have done over the last few decades is basically repeating, and as you mentioned, incrementally improving, how we manage data, based on certain assumptions around, as you mentioned, centralization. Data has to be in one place so we can get value from it. But if you look at the parallel movement of our industry in general, since the birth of the internet, we are actually moving towards decentralization. If we think today, if we said on the data side that the only way the web would work, the only way we get access to various applications on the web or pages, is to centralize it, we would laugh at that idea. But for some reason, we don't question that when it comes to data, right? So I think it's time to embrace the complexity that comes with the growth of the number of sources, the proliferation of sources and consumption models, embrace the distribution of sources of data, that they're not just within one part of the organization. They're not just within even the bounds of organizations.
They're beyond the bounds of organization, and then look back and say, okay, if that's the trend of our industry in general, given the fabric of compensation and data that we put in globally in place, then how the architecture and technology and organizational structure incentives need to move, to embrace that complexity. And to me, that requires a paradigm shift. A full stack from how we organize our organizations, how we organize our teams, how we put a technology in place to look at it from a decentralized angle. >> Okay, so let's unpack that a little bit. I mean, you've spoken about and written today's big architecture, and you've basically just mentioned that it's flawed. So I want to bring up, I love your diagrams, you have a simple diagram, guys if you could bring up figure one. So on the left here, we're adjusting data from the operational systems, and other enterprise data sets. And of course, external data, we cleanse it, you've got to do the quality thing, and then serve them up to the business. So what's wrong with that picture that we just described, and give granted it's a simplified form. >> Yeah. Quite a few things. So, and I would flip the question maybe back to you or the audience. If we said that there are so many sources of the data and actually data comes from systems and from teams that are very diverse in terms of domains, right? Domain. If you just think about, I don't know, retail, the E-Commerce versus auto management, versus customer. These are very diverse domains. The data comes from many different diverse domains, and then we expect to put them under the control of a centralized team, a centralized system. And I know that centralization probably, if you zoom out is centralized, if you zoom in it's compartmentalized based on functions, and we can talk about that. And we assume that the centralized model, will be getting that data, making sense of it, cleansing and transforming it, then to satisfy a need of very diverse set of consumers without really understanding the domains because the teams responsible for it are not close to the source of the data. So there is a bit of a cognitive gap and domain understanding gap, without really understanding how the data is going to be used. I've talked to numerous, when we came to this, I came up with the idea. I talked to a lot of data teams globally, just to see, what are the pain points? How are they doing it? And one thing that was evident in all of those conversations, that they actually didn't know, after they built these pipelines and put the data in, whether the data warehouse tables or linked, they didn't know how the data was being used. But yet they're responsible for making the data available for this diverse set of use cases. So essentially system and monolithic system, often is a bottleneck. So what you find is that a lot of the teams are struggling with satisfying the needs of the consumers, are struggling with really understanding the data, the domain knowledge is lost, there is a loss of understanding and kind of it in that transformation, often we end up training machine learning models on data, that is not really representative of the reality of the business, and then we put them to production and they don't work because the semantic and the syntax of the data gets lost within that translation. 
So, and we are struggling with finding people to manage a centralized system, because the technology is still fairly, in my opinion, fairly low level and exposes the users of those technology sets, let's say the warehouse, to a lot of complexity. So in summary, I think it's a bottleneck. It's not going to satisfy the pace of change, the pace of innovation, and the availability of sources. It's disconnected and fragmented; even though it's centralized, it's disconnected and fragmented from where the data comes from and where the data gets used, and it's managed by a team of hyper-specialized people who are struggling to understand the actual value of the data, the actual format of the data. So it's not going to get us where our aspirations, our ambitions need to be. >> Yeah, so the big data platform is essentially, I think you call it, context agnostic. And so as data becomes more important in our lives, you've got all these new data sources injected into the system, experimentation, as we said, the cloud becomes much, much easier. So one of the blockers that you've cited, and you just mentioned it, is you've got these hyper-specialized roles, the data engineer, the quality engineer, the data scientist. And it's illusory. I mean, it's like an illusion. These guys, seemingly they're independent and can scale independently, but I think you've made the point that in fact, they can't. That a change in a data source has an effect across the entire data life cycle, the entire data pipeline. So maybe you could add some color to why that's problematic for some of the organizations that you work with, and maybe give some examples. >> Yeah, absolutely. So in fact, initially, the hypothesis around data mesh came from a series of requests that we received from our both large-scale and progressive clients, progressive in terms of their investment in data architecture. So these were clients that were larger scale, they had a diverse and rich set of domains, some of them were big technology, tech companies, some of them were big retail companies, big healthcare companies. So they had that diversity of the data and a number of sources and domains. They had invested for quite a few years, across generations; they had multiple generations of proprietary data warehouses on-prem that were moving to cloud. They had moved through the various revisions of the Hadoop clusters, and they were moving that to cloud, and then the challenges that they were facing were simply... If I want to just simplify it in one phrase, they were not getting value from the data that they were collecting. They were continuously struggling to shift the culture because there was so much friction between all of these three phases: consumption of the data, then transformation, and making it available. Consumption from sources, and then providing it and serving it to the consumer. So that whole process was full of friction. Everybody was unhappy. So the bottom line is that you're collecting all this data, there is delay, there is lack of trust in the data itself, because the data is not representative of the reality; it's gone through a transformation by people who didn't really understand what the data was, and it got delayed. And so there's no trust, it's hard to get to the data. Ultimately, it's hard to create value from the data, and people are working really hard and under a lot of pressure, but it's still a struggle. So we often... our solutions, we will often point to technology. So we go.
Okay, this version of some proprietary data warehouse we're using is not the right thing. We should go to the cloud and that certainly will solve our problem, right? Or warehouse wasn't a good one, let's make a data Lake version. So instead of extracting and then transforming and loading into the database, and that transformation is that heavy process because you fundamentally made an assumption using warehouses that if I transform this data into this multidimensional perfectly designed schema, that then everybody can draw on whatever query they want, that's going to solve everybody's problem. But in reality, it doesn't because you are delayed and there is no universal model that serves everybody's need, everybody needs are diverse. Data scientists necessarily don't like the perfectly modeled data, they're for both signals and the noise. So then we've just gone from ATLs to let's say now to Lake, which is... Okay, let's move the transformation to the last mile. Let's just get load the data into the object stores and sort of semi-structured files and get the data scientists use it, but they still struggling because of the problems that we mentioned. So then what is the solution? What is the solution? Well, next generation data platform. Let's put it on the cloud. And we saw clients that actually had gone through a year or multiple years of migration to the cloud but it was great, 18 months, I've seen nine months migrations of the warehouse versus two year migrations of various data sources to the cloud. But ultimately the result is the same, unsatisfied, frustrated data users, data providers with lack of ability to innovate quickly on relevant data and have an experience that they deserve to have, have a delightful experience of discovering and exploring data that they trust. And all of that was still amiss. So something else more fundamentally needed to change than just the technology. >> So the linchpin to your scenario is this notion of context. And you pointed out, you made the other observation that "Look we've made our operational systems context aware but our data platforms are not." And like CRM system sales guys are very comfortable with what's in the CRMs system. They own the data. So let's talk about the answer that you and your colleagues are proposing. You're essentially flipping the architecture whereby those domain knowledge workers, the builders if you will, of data products or data services, they are now first-class citizens in the data flow, and they're injecting by design domain knowledge into the system. So I want to put up another one of your charts guys, bring up the figure two there. It talks about convergence. She showed data distributed, domain driven architecture, the self-serve platform design, and this notion of product thinking. So maybe you could explain why this approach is so desirable in your view. >> Sure. The motivation and inspirations for that approach came from studying what has happened over the last few decades in operational systems. We had a very similar problem prior to microservices with monolithic systems. One of the things systems where the bottleneck, the changes we needed to make was always on vertical now to how the architecture was centralized. And we found a nice niche. And I'm not saying this is a perfect way of decoupling your monolith, but it's a way that currently where we are in our journey to become data driven, it is a nice place to be, which is distribution or a decomposition of your system as well as organization. 
I think whenever we talk about systems, we've got to talk about people and teams that are responsible for managing those systems. So the decomposition of the systems and the teams, and the data around domains. Because that's how today we are decoupling our business, right? We are decoupling our businesses around domains, and that's a good thing. And what does that do really for us? What it does is it localizes change to the bounded context of that business. It creates clear boundary and interfaces and contracts between the rest of the universe of the organization, and that particular team, so removes the friction that often we have for both managing the change, and both serving data or capability. So if the first principle of data meshes, let's decouple this world of analytical data the same to mirror. The same way we have decoupled our systems and teams, and business. Why data is any different. And the moment you do that, so the moment you bring the ownership to people who understands the data best, then you get questions that well, how is that any different from silos of disconnected databases that we have today and nobody can get to the data? So then the rest of the principles is really to address all of the challenges that comes with this first principle of decomposition around domain context. And the second principle is, well, we have to expect a certain level of quality and accountability, and responsibility for the teams that provide the data. So let's bring products thinking and treating data as a product, to the data that these teams now share, and let's put accountability around it. We need a new set of incentives and metrics for domain teams to share the data, we need to have a new set of kind of quality metrics that define what it means for the data to be a product, and we can go through that conversation perhaps later. So then the second principle is, okay, the teams now that are responsible, the domain teams responsible for their analytical data need to provide that data with a certain level of quality and assurance. Let's call that a product, and bring product thinking to that. And then the next question you get asked off at work by CIO or CTO is the people who build the infrastructure and spend the money. They say, well, "It's actually quite complex to manage big data, now where we want everybody, every independent team to manage the full stack of storage and computation and pipelines and access control and all of that." Well, we've solved that problem in operational world. And that requires really a new level of platform thinking to provide infrastructure and tooling to the domain teams to now be able to manage and serve their big data, and I think that requires re-imagining the world of our tooling and technology. But for now, let's just assume that we need a new level of abstraction to hide away a ton of complexity that unnecessarily people get exposed to. And that's the third principle of creating self-serve infrastructure to allow autonomous teams to build their domains. But then the last pillar, the last fundamental pillar is okay, once he distributed a problem into smaller problems that you found yourself with another set of problems, which is how I'm going to connect this data. The insights happens and emerges from the interconnection of the data domains, right? It's just not necessarily locked into one domain. 
So the concerns around interoperability and standardization, and getting value as a result of composition and interconnection of these domains, require a new approach to governance. And we have to think about governance very differently, based on a federated model and based on a computational model. Like, once we have this powerful self-serve platform, we can computationally automate a lot of governance decisions and security decisions and policy decisions that apply to this fabric of the mesh, not just a single domain, and not in a centralized way. So really, as you mentioned, the most important component of the data mesh is distribution of ownership and distribution of architecture and data; the rest of them is to solve all the problems that come with that. >> So, very powerful. And guys, we actually have a picture of what Zhamak just described. Bring up figure three, if you would. So I mean, essentially, you're advocating for the pushing of the pipeline and all its various functions into the lines of business, and abstracting that complexity of the underlying infrastructure, which you kind of show here in this figure, data infrastructure as a platform down below. And you know what I love about this, Zhamak, is, to me it underscores that data is not the new oil. Because I can put oil in my car, I can put it in my house, but I can't put the same oil in both places. But I think you call it polyglot data, which is really different forms, batch or whatever. But the same data doesn't follow the laws of scarcity. I can use the same data for many, many uses, and that's what this sort of graphic shows. And then you brought in the really important, sticking problem, which is that the governance is now not command and control, it's federated governance. So maybe you could add some thoughts on that. >> Sure, absolutely. It's one of those... I think I keep referring to data mesh as a paradigm shift, and it's not just to make it sound grand and exciting or important; it's really because I want to point out, we need to question every moment when we make a decision around how we're going to design security, or governance, or modeling of the data. We need to reflect and go back and say, "Am I applying some of my cognitive biases around how I have worked for the last 40 years, where I've seen it work? Or do I really need to question?" And we do need to question the way we have applied governance. I think at the end of the day, the role of the data governance and the objective remains the same. I mean, we all want quality data accessible to a diverse set of users, and those users now have different personas: data analysts, data scientists, data application users. These are very diverse personas. So at the end of the day, we want quality data accessible to them, trustworthy, in an easily consumable way. However, how we get there looks very different, because as you mentioned, the governance model in the old world has been very command and control, very centralized. They were responsible for quality, they were responsible for certification of the data, applying and making sure the data complies with all sorts of regulations, making sure data gets discovered and made available. In the world of data mesh, really the job of the data governance function becomes finding the equilibrium between what decisions need to be made and enforced globally, and what decisions need to be made locally, so that we can have an interoperable mesh of data sets that can move fast and can change fast.
It's really about, instead of putting those systems in a straitjacket of constancy, of don't-change, embracing change, and continuous change of the landscape, because that's just the reality we can't escape. So the role of governance really, the modern governance model, I call federated and computational. And by that I mean, every domain needs to have a representative in the governance team. So the role of the data, or domain data product, owner, who really understands that domain really well but also wears the hat of the product owner, is an important role that has to have representation in the governance. So it's a federation of domains coming together, plus the SMEs, the people who are Subject Matter Experts, who understand the regulations in that environment, who understand the data security concerns. But instead of trying to enforce and do this as a central team, they make decisions as to what needs to be standardized, what needs to be enforced. And let's push that, computationally and in an automated fashion, into the platform itself. For example, instead of trying to be part of the data quality pipeline and inject ourselves as people in that process, let's actually, as a group, define what constitutes quality. How do we measure quality? And then let's automate that, and let's codify that into the platform, so that every data product will have a CI/CD pipeline, and as part of that pipeline, those quality metrics get validated, and every data product needs to publish those SLOs, or Service Level Objectives, or whatever we choose as a measure of quality, maybe it's the integrity of the data, or the delay in the data, the liveness of the data, whatever the decisions are that you're making. Let's codify that. So really the objectives of the governance team stay the same, but how they do it is very, very different. And I wrote a new article recently, trying to explain the logical architecture that would emerge from applying these principles, and I put in a kind of light table to compare and contrast how we do governance today versus how we'll do it differently, to just give people a flavor of what it means to embrace decentralization, and what it means to embrace change, and continuous change. So hopefully that could be helpful. >> Yes. There's so many questions I have. But the point you make too on data quality: sometimes I feel like quality is the end game, where the end game should be how fast you can go from idea to monetization with a data service. What happens, then, and you've sort of addressed this, but what happens to the underlying infrastructure? I mean, spinning up EC2s and S3 buckets, and my PyTorches and TensorFlows. That lives in the business, and who's responsible for that? >> Yeah, that's why I'm glad you're asking this question, David, because I truly believe we need to reimagine that world. I think there are many pieces that we can use as utilities, as foundational pieces, but I can see for myself a five-to-seven-year road map of building this new tooling. I think in terms of the ownership, the question around ownership, that would remain with the platform team, but perhaps a domain-agnostic, technology-focused platform team, right? They are providing a set of products themselves, but the users of those products are data product developers, right? Data domain teams that now have really high expectations in terms of low friction, in terms of the lead time to create a new data product.
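To ground the codified-quality idea above, here is a hypothetical quality contract that a single data product might publish and that a platform-provided CI/CD step would evaluate on every run. The schema and field names are invented for illustration; they are not an existing tool's API.

```yaml
# Hypothetical quality contract published by one data product.
# A platform-supplied pipeline step evaluates these thresholds on every build
# and fails the pipeline (or marks the product out of SLO) when they are violated.
dataProduct: retail-orders
slos:
  freshness:
    maxDelay: 15m            # data may lag the source by at most 15 minutes
  completeness:
    minRowRatio: 0.99        # at least 99% of expected records present
  schemaCompatibility: backward   # changes must not break existing consumers
checks:
  - name: no-null-order-ids
    expression: "count(order_id IS NULL) == 0"
  - name: timeliness
    expression: "max(now() - ingested_at) <= slos.freshness.maxDelay"
```

The point of pushing this into the pipeline is exactly what Dehghani describes: the governance group defines what quality means, and the platform, not a central team of reviewers, enforces it.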
So we need a new set of tooling and I think the language needs to shift from I need a storage bucket, or I need a storage account, to I need a cluster to run my spark jobs. Too, here's the declaration of my data products. This is where the data file will come from, this is a data that I want to serve, these are the policies that I need to apply in terms of perhaps encryption or access control, go make it happen platform, go provision everything that I need, so that as a data product developer, all I can focus on is the data itself. Representation of semantic and representation of the syntax, and make sure that data meets the quality that I have to assure and it's available. The rest of provisioning of everything that sits underneath will have to get taken care of by the platform. And that's what I mean by requires a reimagination. And there will be a data platform team. The data platform teams that we set up for our clients, in fact themselves have a fair bit of complexity internally, they divide into multiple teams, multiple planes. So there would be a plane, as in a group of capabilities that satisfied that data product developer experience. There would be a set of capabilities that deal with those nitty gritty underlying utilities, I call them (indistinct) utilities because to me, the level of abstraction of the platform needs to go higher than where it is. So what we call platform today are a set of utilities we'll be continuing to using. We'll be continuing to using object storage, we will continue to using relational databases and so on. So there will be a plane and a group of people responsible for that. There will be a group of people responsible for capabilities that enable the mesh level functionality, for example, be able to correlate and connect and query data from multiple nodes, that's a mesh level capability, to be able to discover and explore the mesh of data products, that's the mesh of capability. So it would be a set of teams as part of platform. So we use a strong, again, products thinking embedded in a product and ownership embedded into that to satisfy the experience of this now business oriented domain data teams. So we have a lot of work to do. >> I could go on, unfortunately, we're out of time, but I guess, first of all, I want to tell people there's two pieces that you've put out so far. One is how to move beyond a Monolithic Data Lake to a distributed data mesh. You guys should read that in the "Data Mesh Principles and Logical Architecture," is kind of part two. I guess my last question in the very limited time we have is are organizations ready for this? >> I think how the desire is there. I've been overwhelmed with the number of large and medium and small and private and public, and governments and federal organizations that reached out to us globally. I mean, this is a global movement and I'm humbled by the response of the industry. I think, the desire is there, the pains are real, people acknowledge that something needs to change here. So that's the first step. I think awareness is spreading, organizations are more and more becoming aware, in fact, many technology providers are reaching to us asking what shall we do because our clients are asking us, people are already asking, we need the data mesh and we need the tooling to support it. So that awareness is there in terms of the first step of being ready. However, the ingredients of a successful transformation requires top-down and bottom-up support. 
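The shift Dehghani describes, from "give me a storage bucket and a cluster" to "here is the declaration of my data product, go provision everything I need," might look something like the following. This is a hypothetical, CRD-style sketch; the API group and fields are invented for illustration rather than taken from any existing platform.

```yaml
# Hypothetical declarative specification for one data product.
# The self-serve platform, not the domain team, provisions the storage,
# compute, pipelines, and access control implied by this file.
apiVersion: datamesh.example.com/v1alpha1
kind: DataProduct
metadata:
  name: retail-orders
  domain: order-management
  owner: orders-team@example.com
spec:
  inputs:
    - type: event-stream
      source: orders-service.checkout-events      # operational source of truth
  outputs:
    - name: daily-orders
      format: parquet
      mode: batch
      schedule: "0 2 * * *"                       # nightly snapshot
    - name: order-events
      format: avro
      mode: stream
  policies:
    encryptionAtRest: true
    accessControl:
      readGroups: [analytics, data-science]
    piiFields: [customer_email]                   # masked for non-privileged readers
  slos:
    freshness: 15m
    completeness: 0.99
```

Everything under `spec` is the domain team's statement of intent: sources, output ports, access and encryption policies, SLOs. Everything it implies (storage, compute, pipelines, access control) is the self-serve platform's job to provision.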
So it requires support from chief data analytics officers, all above, the most successful clients that we have with data mesh are the ones that, the CEOs have made a statement that, "We'd want to change the experience of every single customer using data, and we're going to commit to this." So the investment and support exists from top to all layers, the engineers are excited, the maybe perhaps the traditional data teams are open to change. So there are a lot of ingredients of transformations that come together. Are we really ready for it? I think the pioneers, perhaps, the innovators if you think about that innovation curve of adopters, probably pioneers and innovators and lead adopters are making moves towards it, and hopefully as the technology becomes more available, organizations that are less engineering oriented, they don't have the capability in-house today, but they can buy it, they would come next. Maybe those are not the ones who are quite ready for it because the technology is not readily available and requires internal investments to make. >> I think you're right on. I think the leaders are going to lean in hard and they're going to show us the path over the next several years. And I think that the end of this decade is going to be defined a lot differently than the beginning. Zhamak, thanks so much for coming to "theCUBE" and participating in the program. >> Thank you for hosting me, David. >> Pleasure having you. >> It's been wonderful. >> All right, keep it right there everybody, we'll be back right after this short break. (slow music)
Katie Colbert, Pure Storage & Kaustubh Das, Cisco | Cisco Live EU 2019
>> Live from Barcelona, Spain, it's theCUBE, covering Cisco Live Europe. Brought to you by Cisco and its ecosystem partners. >> Welcome back to Barcelona, everybody. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante. I'm here with my cohost, Stu Miniman. This is day one of Cisco Live Barcelona. Katie Colbert is here. She's the vice president of alliances at Pure Storage, and she's joined by Kaustubh Das, otherwise known as KD, who's the vice president of computing systems at Cisco. Katie and KD, welcome to theCUBE, good to see you. >> Thank you. >> Thank you. >> Alright, so let's start off, KD2, if you could just tell us about the partnership. Where did it start, how did it evolve? We'll get into it. >> We've just had a terrific partnership, and the reason it's so great is it's really based on some foundational things that are super compatible. Pure Storage, Cisco, both super technology-driven companies, innovating. They're both also super programmatic companies. They'll do everything via API. It's very modern in that sense, the frameworks that we work on. And then from a business perspective, it's very compatible. We're chasing common markets, very few conflicts. So it's been rooted in solid foundations. And then, we've actually invested over the years to build more and more solutions for our customers jointly. So it's been terrific. >> So, Katie, I hate to admit how long we've been talking about partnering with Cisco. >> It's going to age us. >> So you and I won't admit how many decades we've been partnering with Cisco, but here we are, 2019, and Cisco's a very different company than it was a decade or two ago. >> Absolutely. >> Tell me what it's like working with them, especially as a company that's primarily in storage and data at Pure, what it means to partner with them. >> Absolutely, you're right. So, I worked with Cisco as a partner for many years at the beginning of my career, then went away for, I'd say, a good 10 years, and joined Pure in June, and I will tell you one of the most exciting reasons why I joined Pure was the Pure and Cisco relationship. When I worked with them at the beginning of my career, it was great, and I would tell you it's even better now. I will say that the momentum that these two companies have in the market is phenomenal. A lot of differentiation from our products separately, but both together, I think that it's absolutely been very successful, and to KD's point, the investment that both companies are making really is just astronomical, and I see that our customers are the beneficiaries of that. It makes it so much easier for them to deploy and use the technologies together, which is exciting. >> So we always joke about Barney deals, I love you, you love me; I mean, it's clear you guys go much, much deeper than that. So I want to probe at that a little bit. Particularly from an engineering standpoint, whether it's validated designs or other innovations that you guys are working on together, can we peel the onion on that one a little bit? Talk about what you guys are doing below that line. >> I'll start there, then I'll hand it over to the engineering leader from Cisco. But if you think about the pace of this, the partnership, I think, is roughly three or so years old. We have 16 Cisco-validated designs for our FlashStack infrastructure. So that is just unbelievable. So, a huge amount of investment from engineers, product managers, on both sides of the fence. >> Yeah, totally second that. We start out with the...
Cisco-validated designs are like blueprints, so we start out with the blueprints for the standard workloads: Oracle, SAP. And we keep those fresh as new versions come out. But then I think we've taken it further into new spaces of late. ACI, we saw in the keynote this morning, it's going everywhere, it's going multi-site. We've done some work on marrying that with the clustering service of Pure Storage. On top of that, we're doing some work in AI and ML, which is super exciting, so we've got some CVDs around that that are just coming out. We're doing some work on automation, coupling Intersight, which is Cisco's cloud-based automation suite, with Pure Storage and Pure Storage's ability to integrate into the Intersight APIs. We talked about it, in fact, I talked about it in my session at Cisco Live in the summer last year, and now we've got that out as a product. So, a tremendous amount of work, both in traditional areas as well as some of these new spaces. >> Maybe we can unpack that Intersight piece a bit, because people might look at it initially and say, "Okay, multi-cloud, on-prem, all these environments, but is this just a networking tool?" And since we're working with someone from Pure, maybe explain a little bit the scope and how, if I'm a Pure administrator, I live in this world. >> Absolutely, so let's start with what Intersight is, just as a foundation. Intersight is our software management tool driven from the cloud. So everything from the personality of the server, the BIOS settings, the WLAN settings, the networking and the compute pieces of it, gets administered from the cloud, but it does more. What it does is deliver playbooks from the cloud that give the server a certain kind of personality for the workload that it's supporting. So then the next question that anyone asks is, "Now that we have this partnership, can it do the same thing for storage? Can it actually provision that storage, get that up and running?" And the answer is yes, it can, and it's better because not only can it do that, getting it done is super simple. All Pure Storage needed to do was write to some of those Intersight APIs and deliver that playbook from the cloud, from a remote location potentially, into whatever your infrastructure is, provisioning compute, provisioning networking, provisioning storage, in a truly modern cloud-driven environment, right? So I think that's phenomenal, what it does for our customers. >> Yeah, I'd agree with that. And I think it'll become even more important as the companies are partnering around our multi-cloud solutions. So, as you probably saw earlier this year in February, sorry, at the end of 2018, Pure announced our first lean into hybrid cloud, and that's Pure Cloud Data Services. That enables us to have Purity, which is our operating system on our storage, running in AWS to begin with. So you can pretty easily start to think about where this partnership is going to go, especially as it pertains to Intersight integration. >> And just to build on that, strategically, you can see the alignment there as well. I mean, Cisco's been talking about multi-cloud for a bit now, and we've done work to enable similar development environments, whether we're doing something on-prem or in the cloud, so that you can move workloads from one to the other, or actually make workloads on both sides talk to each other, and, again, combined with what Katie just said, it makes it a really, really compelling solution.
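As a rough illustration of the cloud-driven, API-first provisioning pattern described here, the sketch below posts a desired-state "playbook" for a converged stack to a SaaS automation endpoint. The endpoint URL, payload fields, and token are hypothetical stand-ins, not the actual Intersight or Pure Storage APIs; the intent is only to show the declarative, from-the-cloud call shape.

```python
import json
import urllib.request

# Hypothetical SaaS automation endpoint and token; these are stand-ins,
# not real Intersight or Pure Storage API paths.
AUTOMATION_URL = "https://automation.example.com/v1/playbooks"
API_TOKEN = "REPLACE_ME"

# Desired state for one FlashStack-style pod, expressed declaratively so
# a cloud service could drive provisioning of compute, network, and storage.
playbook = {
    "name": "oracle-flashstack-pod1",
    "compute": {"server_profile": "oracle-db-node", "count": 4},
    "network": {"vlans": [110, 120], "qos_policy": "gold"},
    "storage": {"volume": "oracle-data", "size_gb": 4096, "protocol": "fc"},
}

request = urllib.request.Request(
    AUTOMATION_URL,
    data=json.dumps(playbook).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# In a real environment the automation service would accept the playbook
# and push it down to the on-prem gear; here we only show the request.
with urllib.request.urlopen(request) as response:
    print("Provisioning request accepted:", response.status)
```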
>> Like you said, you've got pretty clear swimming lanes for the two companies. There's very little overlap here. You can't have too many of these types of partnerships, right? I mean, you've got almost 25 thousand engineers, but still, you have limited resources. So what makes this one so special, and why are you able to spend so much time and effort, each of you? >> I could start. So from a Pure perspective, I think the cultures are aligned, you called it out there, and there's inherently not a lot of overlap in terms of where the core competencies are. Pure is not looking at all to become a networking company. And just a lot of synergies in the market make it one that our engineers want to invest in. We have really picked Cisco as our lean-in partner; truthfully, I run all of the alliances at Pure, and the lion's share of my resources really is focused on that partnership. >> Yeah, and if you look at both these companies, Pure is a relative youngster among the storage companies, a new, modern, in a good way, a new, modern company built on modern software practices and so forth. Cisco, although a pretty veteran company, is relatively new as a compute provider. So we are very similar in how our design philosophies work and how modern our infrastructures are, and that gets us to delivering results, delivering solutions to our customers with relatively less effort from our engineers. And that pace of innovation that we can do with Pure is not something we can do with every other company. >> We had a session earlier today, and we went pretty deep into AI, but it's probably worth touching on that. I guess my question here is, what are the customers asking you guys for in terms of AI infrastructure? What's that infrastructure look like that's powering the machine intelligence era? >> You want to start? >> You want to go? I'll go first. This is a really exciting space, and not only is it exciting because AI is exciting, it's actually exciting because we've got some unique ingredients across Pure and Cisco to make this happen. What does AI feed on? AI feeds on data. The model requires that volume of data to actually train itself. We've got an infrastructure, so we just released the C4ATML, the UCC4ATML, highly powered infrastructure, eight GPUs, interconnected, 180 terabytes on board, high network bandwidth, but it needs something to feed it the data, and what Pure's got with their FlashBlade is that ability to actually feed data to this AI infrastructure so that we can train bigger models or train these models faster. It makes for a fantastic solution because these ingredients are just custom made for each other. >> Anything you can add? >> Absolutely, I'd agree with that. Really, if you look at AI and what it needs to be successful, and, first of all, all of our customers, if they're not thinking about it, they should be, and I will tell you most of them are, it's: how do you ingest that amount of data? If you can't ingest it quickly, it's not going to be of use. So that's a big piece of it, and that's really what the new Cisco platform addresses; I mean, the folks over at Pure are just thrilled about the new Cisco product, and then you take a look at the FlashBlade and how it's able to really scale out unstructured data, object and file, really to make that useful, so when you have to scrub that data to be able to use it and correlate it, FlashBlade is the perfect solution. So really, this is two companies coming together with best-of-breed technologies.
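The ingest-and-feed problem described here usually comes down to an input pipeline that reads training data from shared, high-throughput storage fast enough to keep the GPUs busy. Below is a minimal sketch using TensorFlow's tf.data; the mount path and file layout are assumptions, and nothing in it is specific to FlashBlade or the UCS platform, since any fast shared file system mounted on the training node would look the same to this code.

```python
import tensorflow as tf

# Assumed mount point for a shared, high-throughput datastore;
# the path and file layout are illustrative only.
DATA_GLOB = "/mnt/shared-datastore/train/*.jpg"
BATCH_SIZE = 256

def parse_image(path):
    """Read and decode one image file into a fixed-size float tensor."""
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image

# Parallel reads and prefetching keep the accelerators working instead
# of waiting on storage, which is the point made in the discussion above.
dataset = (
    tf.data.Dataset.list_files(DATA_GLOB, shuffle=True)
    .map(parse_image, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)

# Stand-in training loop: iterate a few batches to show the flow.
for step, batch in enumerate(dataset.take(3)):
    print("step", step, "batch shape", batch.shape)
```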
>> And the tooling in that world is exploding, open source innovation, it needs a place to run all the Kafkas and the Caffes and the TensorFlows and the Pythons. It's not just confined to data scientists anymore. It's really starting to seep throughout the organization, are you seeing that? >> Yeah. >> What's happening is you've got the buzzwords going around, and that leads to businesses and the leaders of businesses saying, "We've got to have an AI strategy. "We've got to hire these data scientists." But at the same time, the data scientists can get started on the laptop, they can get started on the cloud. When they want to deploy this, they need an enterprise class, resilient, automated infrastructure that fits into the way they do their work. You've got to have something that's built on these components, so what we provide together is that infrastructure for the ITTs so that the data scientists, when they build their beautiful models, have a place to deploy them, have a place to put that into production, and can actually have that life cycle running in a much more smooth production-grade environment. >> Okay, so you guys are three years in, roughly. Where do you want to take this thing, what's the vision? Give us a little road map for the future as to what this partnership looks like down the road. >> Yeah, so I can start. So I think there's a few different vectors. We're going to continue driving the infrastructure for the traditional workloads. That's it, that's a big piece that we do, we continue doing that. We're going to drive a lot more on the automation side, I think there's such a lot of potential with what we've got on Intersight, with the automation that Pure supports, bring those together and really make it simple for our customers to get this up and running and manage that life cycle. And third vector's going to be imparting those new use cases, whether it be AI or more data analytics type use cases. There's a lot of potential that it unleashes for our customers and there's a lot of potential of bringing these technologies together to partner. So you'll see a lot more of that from us. I don't know, will you add something? >> Yeah, no, I absolutely agree. And I would say more FlashStack, look for more FlashStack CVDs, and AI, I think, is one to watch. We believe Cisco, really, this step that Cisco's made, is going to take AI infrastructure to the next level. So we're going to be investing much more heavily into that. And then cloud, from a hybrid cloud, how do these two companies leverage FlashStack and all the innovation we've done on prem together to really enable the multi-cloud. >> Great, alright, well Katie and KD, thanks so much for coming to theCUBE. It was great to have you. >> Great. Thanks for having us. >> Thank you very much. >> You're welcome, alright. Keep it right there everybody. Stu and I will be back with our next guest right after this short break. You're watching theCUBE Live from Cisco Live Barcelona. We'll be right back. (techy music)
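The laptop-to-production handoff described in this segment, where a data scientist trains a model and operations needs a resilient place to serve it, is at its simplest a serialize, reload, and serve cycle. The sketch below uses only the Python standard library and a trivial stand-in model; a real deployment would substitute a trained framework artifact and a production-grade serving stack.

```python
import json
import pickle
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in "model": in practice this would be a trained TensorFlow,
# PyTorch, or scikit-learn artifact produced during experimentation.
model = {"weights": [0.4, -1.2], "bias": 0.7}

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)          # the hand-off artifact

class PredictHandler(BaseHTTPRequestHandler):
    """Minimal serving process: load the artifact, answer predictions."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        with open("model.pkl", "rb") as f:
            m = pickle.load(f)
        score = sum(w * x for w, x in zip(m["weights"], body["features"])) + m["bias"]
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"score": score}).encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```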
Bill Mannel & Dr. Nicholas Nystrom | HPE Discover 2017
>> Announcer: Live, from Las Vegas, it's the Cube, covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. >> Hey, welcome back everyone. We are here live in Las Vegas for day two of three days of exclusive coverage from the Cube here at HPE Discover 2017. Our next two guests are Bill Mannel, VP and General Manager of HPC and AI for HPE. Bill, great to see you. And Dr. Nick Nystrom, senior director of research at the Pittsburgh Supercomputing Center. Welcome to The Cube, thanks for coming on, appreciate it. >> My pleasure. >> Thanks for having us. >> As we wrap up day two, first of all, before we get started, love the AI, love the high performance computing. We're seeing great applications for compute. Everyone now sees that a lot of compute actually is good. That's awesome. What is the Pittsburgh Supercomputing Center? Give a quick update and describe what it is. >> Sure. The quick update is we're operating a system called Bridges. Bridges is operating for the National Science Foundation. It democratizes HPC. It brings people who have never used high performance computing before to be able to use HPC seamlessly, almost as a cloud. It unifies HPC, big data, and artificial intelligence. >> So who are some of the users that are getting access that they didn't have before? Could you just kind of talk about some of the use cases of the organizations or people that you guys are opening this up to? >> Sure. I think one of the newest communities that's very significant is deep learning. So we have collaborations between the University of Pittsburgh life sciences and the medical center, and Carnegie Mellon's machine learning researchers. We're looking to apply AI and machine learning to problems in breast and lung cancer. >> Yeah, we're seeing the data. Talk about some of the innovations that HPE's bringing with you guys in the partnership, because we're seeing, people are seeing the results of using big data and deep learning and breakthroughs that weren't possible before. So not only do you have the democratization cool element happening, you have a tsunami of awesome open source code coming in from big places. You see Google donating a bunch of machine learning libraries. Everyone's donating code. It's like open bar and open source, as I say, and the young kids that are new are the innovators as well, so not just us systems guys, but a lot of young developers are coming in. What's the innovation? Why is this happening? What's the ah-ha moment? Is it just cloud, is it a combination of things? Talk about it. >> It's a combination of all the big data coming in, and then new techniques that allow us to analyze it and get value from it. So in the traditional HPC world, typically we built equations which then generated data. Now we're actually kind of doing the reverse, which is we take the data and then build equations to understand the data. So it's a different paradigm. And so there's more and more energy in understanding those two different techniques of kind of getting to the same answers, but in a different way. >> So Bill, you and I talked in London last year. >> Yes. With Dr. Gho. And we talked a lot about SGI and what that acquisition meant to you guys. So I wonder if you could give us a quick update on the business? I mean, it's doing very well; Meg talked about it on the conference call this last quarter. A real high point, and growing. What's driving the growth? And give us an update on the business. >> Sure.
And I think the thing that's driving the growth is all this data and the fact that customers want to get value from it. So we're seeing a lot of growth in industries like financial services, like manufacturing, where folks are moving to digitization, which means that in the past they might have done a lot of their work through experimentation. Now they're moving it to a digital format, and they're simulating everything. So that's driven a lot more HPC over time. As far as the SGI integration is concerned, we've integrated about halfway, so we're at about the halfway point. And now we've got the engineering teams together and we're driving a road map and a new set of products that are coming out. Our Gen 10-based products are on target, and they're going to be releasing here over the next few months. >> So Nick, from your standpoint, when you look at, there's been an ebb and flow in the supercomputer landscape for decades. All the way back to the 70s and the 80s. So from a customer perspective, what do you see now? Obviously China's much more prominent in the game. There's sort of an arms race, if you will, in computing power. From a customer's perspective, what are you seeing, what are you looking for in a supplier? >> Well, so I agree with you, there is this arms race for exaflops. Where we are really focused right now is enabling data-intensive applications, looking at big data as a service, HPC as a service, really making things available to users to be able to draw on the large data sets you mentioned, to be able to put capability-class computing, which will go to exascale, together with AI, and data and Linux under one platform, under one integrated fabric. That's what we did with HPE for Bridges. And we're looking to build on that in the future, to be able to do the exascale applications that you're referring to, but also to couple in data, and to be able to use AI with classic simulation to make those simulations better. >> So it's always good to have a true practitioner on The Cube. But when you talk about AI and machine learning and deep learning, John and I sometimes joke, is it same wine, new bottle, or is there really some fundamental shift going on that just sort of happened to emerge in the last six to nine months? >> I think there is a fundamental shift. And the shift is due to what Bill mentioned. It's the availability of data. So we have that. We have more and more communities who are building on that. You mentioned the open source frameworks. So yes, they're building on the TensorFlows, on the Caffes, and we have people who have not been programmers. They're using these frameworks, though, and using that to drive insights from data they did not have access to. >> These are flipped upside down, I mean this is your point, I mean, Bill pointed it out, it's like the models are upside down. This is the new world. I mean, it's crazy, I don't believe it. >> So if that's the case, and I believe it, it feels like we're entering this new wave of innovation, where for decades we talked about how we marched to the cadence of Moore's Law. That's been the innovation. You think back, you know, your five megabyte disk drive, then it went to 10, then 20, 30, now it's four terabytes. Okay, wow. Compared to what we're about to see, I mean, it pales in comparison. So help us envision what the world is going to look like in 10 or 20 years. And I know it's hard to do that, but can you help us get our minds around the potential that this industry is going to tap?
>> So I think, first of all, I think the potential of AI is very hard to predict. We see that. What we demonstrated in Pittsburgh with the victory of Libratus, the poker-playing bot, over the world's best humans, is the ability of an AI to beat humans in a situation where they have incomplete information, where you have an antagonist, an adversary who is bluffing, who is reacting to you, and who you have to deal with. And I think that's a real breakthrough. We're going to see that move into other aspects of life. It will be buried in apps. It will be transparent to a lot of us, but those sorts of AIs are going to influence a lot. That's going to take a lot of IT on the back end for the infrastructure, because these will continue to be compute-hungry. >> So I always use the example of Kasparov: he got beaten by the machine, and then he started a competition to team up with a supercomputer and beat the machine. Yeah, humans and machines beat machines. Do you expect that's going to continue? Maybe both your opinions. I mean, we're just sort of spitballing here. But will that augmentation continue for an indefinite period of time, or are we going to see the day that it doesn't happen? >> I think over time you'll continue to see progress, and you'll continue to see more and more regular, symmetric types of workloads being done by machines, and that allows us to do the really complicated things that the human brain is able to better process than perhaps a machine brain, if you will. So I think it's exciting from the standpoint of being able to take some of those other roles and so forth, and be able to get those done in perhaps a more efficient manner than we're able to do. >> Bill, talk about, I want to get your reaction to the concept of data. As data evolves, you brought up the model, and I like the way you're going with that, because things are being flipped around. In the old days, I want to monetize my data. I have data sets, people are looking at their data. I'm going to make money from my data. So people would talk about how we're monetizing the data. >> Dave: Old days, like two years ago. >> Well, and people actually try to solve that and monetize their data, and this could be a use case for one piece of it. Other people are saying no, I'm going to open it up, make people own their own data, make it shareable, make it more of an enabling opportunity, or create opportunities to monetize differently. A different shift. That really comes down to the insights question. What trends do you guys see emerging where data is much more of a fabric, it's less of a discrete, monetizable asset, but more of an enabling asset? What's your vision on the role of data, as developers start weaving in some of these insights? You mentioned the AI, I think that's right on. What's your reaction to the role of data, the value of the data? >> Well, I think one thing that we're seeing in some of our, especially our big industrial customers, is the fact that they really want to be able to share that data together and collect it in one place, and then have that regularly updated. So if you look at a big aircraft manufacturer, for example, they actually are putting sensors all over their aircraft, and in realtime, bringing data down and putting it into a place where now, as they're doing new designs, they can access that data, and use that data as a way of making design trade-offs and design decisions.
So a lot of customers that I talk to in the industrial area are really trying to capitalize on all the data possible to allow them to bring new insights in, to predict things like future failures, to figure out how they need to maintain whatever they have in the field and those sorts of things at all. So it's just kind of keeping it within the enterprise itself. I mean, that's a challenge, a really big challenge, just to get data collected in one place and be able to efficiently use it just within an enterprise. We're not even talking about sort of pan-enterprise, but just within the enterprise. That is a significant change that we're seeing. Actually an effort to do that and see the value in that. >> And the high performance computing really highlights some of these nuggets that are coming out. If you just throw compute at something, if you set it up and wrangle it, you're going to get these insights. I mean, new opportunities. >> Bill: Yeah, absolutely. >> What's your vision, Nick? How do you see the data, how do you talk to your peers and people who are generally curious on how to approach it? How to architect data modeling and how to think about it? >> I think one of the clearest examples on managing that sort of data comes from the life sciences. So we're working with researchers at University of Pittsburgh Medical Center, and the Institute for Precision Medicine at Pitt Cancer Center. And there it's bringing together the large data as Bill alluded to. But there it's very disparate data. It is genomic data. It is individual tumor data from individual patients across their lifetime. It is imaging data. It's the electronic health records. And trying to be able to do this sort of AI on that to be able to deliver true precision medicine, to be able to say that for a given tumor type, we can look into that and give you the right therapy, or even more interestingly, how can we prevent some of these issues proactively? >> Dr. Nystrom, it's expensive doing what you do. Is there a commercial opportunity at the end of the rainbow here for you or is that taboo, I mean, is that a good thing? >> No, thank you, it's both. So as a national supercomputing center, our resources are absolutely free for open research. That's a good use of our taxpayer dollars. They've funded these, we've worked with HP, we've designed the system that's great for everybody. We also can make this available to industry at an extremely low rate because it is a federal resource. We do not make a profit on that. But looking forward, we are working with local industry to let them test things, to try out ideas, especially in AI. A lot of people want to do AI, they don't know what to do. And so we can help them. We can help them architect solutions, put things on hardware, and when they determine what works, then they can scale that up, either locally on prem, or with us. >> This is a great digital resource. You talk about federally funded. I mean, you can look at Yosemite, it's a state park, you know, Yellowstone, these are natural resources, but now when you start thinking about the goodness that's being funded. You want to talk about democratization, medicine is just the tip of the iceberg. This is an interesting model as we move forward. We see what's going on in government, and see how things are instrumented, some things not, delivery of drugs and medical care, all these things are coalescing. How do you see this digital age extending? Because if this continues, we should be doing more of these, right? >> We should be. 
We need to be. >> It makes sense. So is there, I mean I just not up to speed on what's going on with federally funded-- >> Yeah, I think one thing that Pittsburgh has done with the Bridges machine, is really try to bring in data and compute and all the different types of disciplines in there, and provide a place where a lot of people can learn, they can build applications and things like that. That's really unusual in HPC. A lot of times HPC is around big iron. People want to have the biggest iron basically on the top 500 list. This is where the focus hasn't been on that. This is where the focus has been on really creating value through the data, and getting people to utilize it, and then build more applications. >> You know, I'll make an observation. When we first started doing The Cube, we observed that, we talked about big data, and we said that the practitioners of big data, are where the guys are going to make all the money. And so far that's proven true. You look at the public big data companies, none of them are making any money. And maybe this was sort of true with ERP, but not like it is with big data. It feels like AI is going to be similar, that the consumers of AI, those people that can find insights from that data are really where the big money is going to be made here. I don't know, it just feels like-- >> You mean a long tail of value creation? >> Yeah, in other words, you used to see in the computing industry, it was Microsoft and Intel became, you know, trillion dollar value companies, and maybe there's a couple of others. But it really seems to be the folks that are absorbing those technologies, applying them, solving problems, whether it's health care, or logistics, transportation, etc., looks to where the huge economic opportunities may be. I don't know if you guys have thought about that. >> Well I think that's happened a little bit in big data. So if you look at what the financial services market has done, they've probably benefited far more than the companies that make the solutions, because now they understand what their consumers want, they can better predict their life insurance, how they should-- >> Dave: You could make that argument for Facebook, for sure. >> Absolutely, from that perspective. So I expect it to get to your point around AI as well, so the folks that really use it, use it well, will probably be the ones that benefit it. >> Because the tooling is very important. You've got to make the application. That's the end state in all this That's the rubber meets the road. >> Bill: Exactly. >> Nick: Absolutely. >> All right, so final question. What're you guys showing here at Discover? What's the big HPC? What's the story for you guys? >> So we're actually showing our Gen 10 product. So this is with the latest microprocessors in all of our Apollo lines. So these are specifically optimized platforms for HPC and now also artificial intelligence. We have a platform called the Apollo 6500, which is used by a lot of companies to do AI work, so it's a very dense GPU platform, and does a lot of processing and things in terms of video, audio, these types of things that are used a lot in some of the workflows around AI. >> Nick, anything spectacular for you here that you're interested in? >> So we did show here. We had video in Meg's opening session. And that was showing the poker result, and I think that was really significant, because it was actually a great amount of computing. It was 19 million core hours. 
So was an HPC AI application, and I think that was a really interesting success. >> The unperfect information really, we picked up this earlier in our last segment with your colleagues. It really amplifies the unstructured data world, right? People trying to solve the streaming problem. With all this velocity, you can't get everything, so you need to use machines, too. Otherwise you have a haystack of needles. Instead of trying to find the needles in the haystack, as they was saying. Okay, final question, just curious on this natural, not natural, federal resource. Natural resource, feels like it. Is there like a line to get in? Like I go to the park, like this camp waiting list, I got to get in there early. How do you guys handle the flow for access to the supercomputer center? Is it, my uncle works there, I know a friend of a friend? Is it a reservation system? I mean, who gets access to this awesomeness? >> So there's a peer reviewed system, it's fair. People apply for large allocations four times a year. This goes to a national committee. They met this past Sunday and Monday for the most recent. They evaluate the proposals based on merit, and they make awards accordingly. We make 90% of the system available through that means. We have 10% discretionary that we can make available to the corporate sector and to others who are doing proprietary research in data-intensive computing. >> Is there a duration, when you go through the application process, minimums and kind of like commitments that they get involved, for the folks who might be interested in hitting you up? >> For academic research, the normal award is one year. These are renewable, people can extend these and they do. What we see now of course is for large data resources. People keep those going. The AI knowledge base is 2.6 petabytes. That's a lot. For industrial engagements, those could be any length. >> John: Any startup action coming in, or more bigger, more-- >> Absolutely. A coworker of mine has been very active in life sciences startups in Pittsburgh, and engaging many of these. We have meetings every week with them now, it seems. And with other sectors, because that is such a great opportunity. >> Well congratulations. It's fantastic work, and we're happy to promote it and get the word out. Good to see HP involved as well. Thanks for sharing and congratulations. >> Absolutely. >> Good to see your work, guys. Okay, great way to end the day here. Democratizing supercomputing, bringing high performance computing. That's what the cloud's all about. That's what great software's out there with AI. I'm John Furrier, Dave Vellante bringing you all the data here from HPE Discover 2017. Stay tuned for more live action after this short break.