
A Day in the Life of a Data Scientist


 

>>Hello, everyone. Welcome to the "A Day in the Life of a Data Scientist" talk. My name is Terry Chang. I'm a data scientist on the HPE Ezmeral Container Platform team. With me in the chat room, moderating the chat, I have Matt Maccaux as well as Doug Tackett, and we're going to dive straight into what we can do with the Ezmeral Container Platform and how we can support the role of a data scientist.

>>A quick agenda: I'm going to do some introductions and set the context of what we're going to talk about, and then we're going to dive straight into the Ezmeral Container Platform. We'll walk through what a data scientist will do, pretty much a day in the life of a data scientist, and then we'll have some question and answer. Big data has been the talk of the last few years, really of the last decade or so. With big data there are a lot of ways to derive meaning, and a lot of businesses are trying to optimize every decision in their applications by utilizing data. Previously we had a lot of focus on data analytics, but recently we've seen a lot of data being used for machine learning: taking any data they can and sending it off to the data scientists to start doing some modeling and some prediction.

So that's where we're seeing modern businesses rooted in analytics, and data science in itself is a team sport. We need more than data scientists to do all this modeling. We need data engineers to take the data, massage the data, and do some data manipulation in order to get it right for the data scientists. We have data analysts who are monitoring the models, and we even have the data scientists themselves, who are building and iterating through multiple different models until they find one that is satisfactory to the business needs. Once they're done, they can send it off to the software engineers, who will actually build it out into their application, whether it's a mobile app or a web app. And then we have the operations team assigning the resources and also monitoring it as well. So we're really seeing data science as a team sport, and it does require a lot of different expertise.

Here's the basic machine learning pipeline that we see in the industry now. At the top we have the training environment, and this is an entire loop: we'll have some registration, we'll have some inferencing, and at the center of all this is the data prep, as well as your repositories, for your data, for your GitHub repository, things of that sort. So the machine learning industry tends to follow this very basic pattern, and at a high level, glancing through it quickly, this is what the machine learning pipeline looks like on the HPE Ezmeral Container Platform. At the top left we have our project repository, which is our persistent storage. We'll have some training clusters, we'll have a notebook, we'll have an inference deployment engine and a REST API, all sitting on top of a Kubernetes cluster. And the benefit of the container platform is that this is all abstracted away from the data scientist. So I will actually go straight into that.
So just to preface, before we go into the Ezmeral Container Platform: what we're going to look at is an example machine learning problem that tries to predict how long a specific taxi ride will take. With a Jupyter notebook, the data scientist can take all of this data, do their data manipulation, train a model on a specific set of features, such as the location and duration of past taxi rides, and then use the model to figure out what kind of prediction we can get for a future taxi ride.

So that's the example we will talk through today. I'm going to hop out of my slides and jump into my web browser. Let me zoom in on this. Here I have a Jupyter environment, and this is all running on the container platform. All I need is this link and I can access my environment. As a data scientist, I can grab this link from my IT admin or my system administrator, and I can quickly start iterating and coding. On the left-hand side of the Jupyter environment we actually have a file directory structure. This is already synced up to my Git repository, which I will show in a little bit on the container platform, so I can quickly pull any files that are in my GitHub repository. I can even push with a button here. And I can open up this Python notebook.

With all the unique features of the Jupyter environment, I can start coding. Each of these cells can run Python code, and specifically, on the HPE Ezmeral Container Platform team we've built our own in-house line magic commands. These are unique commands that we can use to interact with the underlying infrastructure of the container platform. The first line magic command I want to mention is called %attachments. When I run this command, I get the available training clusters that I can send training jobs to. This specific notebook has pretty much been created for me to iterate and develop a model very quickly: I don't have to use all the resources, and I don't have to allocate a full set of GPU boxes to my little Jupyter environment. With a training cluster, we can attach these individual data science notebooks to those training clusters, and the data scientists can utilize those resources as a shared environment. Essentially, a large eight-GPU box can be shared; it doesn't have to be allocated to a single data scientist.

Moving on, we have another magic command, a cell magic called %%python training. This is how we're going to utilize that training cluster. I prepare the cell with %% and the name of the training cluster, and this tells the notebook to send that entire training cell to be trained on the resources of that training cluster. So the data scientist can quickly iterate through a model, then format that model and all that code into one large cell and send it off to the training cluster. Because the training cluster is actually located somewhere else, it has no context of what has been done locally in this notebook, so we have to copy everything into one large cell.
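To make the workflow concrete, here is a minimal sketch of how those magics fit together in a notebook, assuming the magic names just as Terry describes them (%attachments, a %% cell magic naming the training cluster, and %logs, which comes up shortly). The exact syntax and arguments on a real Ezmeral deployment may differ, so treat this as illustrative notebook pseudocode rather than the platform's documented API:

    # Sketch of the notebook-side workflow; each numbered step is its own cell.
    # Magic names follow the talk; exact arguments are assumptions.

    # Cell 1: list the training clusters this notebook is attached to.
    %attachments
    # -> e.g. a cluster named "trainingengine"

    # Cell 2: run an entire self-contained training script on that cluster.
    # The remote cluster has no local context, so imports, helper functions,
    # and data loading must all live inside this one large cell.
    %%trainingengine
    # ... full training code goes here (sketched further below) ...

    # Cell 3: the job returns a unique history URL; stream its logs in real time.
    %logs <history-url-returned-by-the-job>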
So as you see here, I'm importing some libraries, I'm defining some helper functions, and I'm reading in my dataset. With the typical data science modeling lifecycle, we have to take in the data and do some data pre-processing. Maybe the data scientist will do this, maybe the data engineer will do this, but they have access to that data. Here I'm actually reading the data in from the project repository. I'll talk about this a little later, but all of the clusters within the container platform have access to a project repository that has been set up using the underlying data fabric. So I have some data pre-processing: I'm going to cleanse some of my data where I notice that something is missing, some data looks funky, or maybe the data types aren't correct. This all happens here in these cells.

Once that is done, I can print out that the data is done cleaning, and I can start training my model. Here we split our dataset into a train/test split, so that we have some data for actually training the model and some data for testing the model. I split my data there, I create my XGBoost object to start my training (XGBoost is a decision-tree-based machine learning algorithm), I fit my data into the XGBoost algorithm, and then I do some prediction. In addition, I'm actually tracking some of the metrics and printing them out. These are common metrics that data scientists want to see when they train an algorithm: whether the accuracy is improving, whether the loss is improving, the mean absolute error, things like that. At the end of this training job I save the model back into the project repository, which we will have access to, and at the end I print out the end time.

So I can execute that cell, and I've already executed it, so you'll see all of these print statements happening here: importing the libraries, the training was run, reading in data, et cetera. All of this has been printed out from that training job. And in order to glance through it, we get an output with a unique history URL. When we send the training job to the training cluster, the cluster sends back a unique URL, which we use with the last line magic command I want to talk about, called %logs. %logs parses that response from the training cluster, and we can actually track in real time what is happening in that training job.

So quickly, we can see that the data scientist has a sandbox environment available to them. They have access to their Git repository, and they have access to a project repository in which they can read in their data and save the model: a very quick, interactive environment for the data scientist to do all of their work. It's all provisioned on the HPE Ezmeral Container Platform, and it's also abstracted away. The history URL is surfaced through the container platform, and the data scientist doesn't have to interact with the underlying cluster at all.
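Putting those steps together, the "one large cell" sent to the training cluster might look something like the sketch below. The file paths, column names, and hyperparameters are hypothetical stand-ins; only the overall shape (read from the project repository, cleanse, split, fit XGBoost, print metrics, save the model back) comes from the talk:

    # Hypothetical contents of the self-contained training cell.
    import pandas as pd
    import xgboost as xgb
    from datetime import datetime
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    # Read the taxi data from the shared project repository (a data-fabric
    # mount; this path is an illustrative assumption).
    df = pd.read_csv("/bd-fs-mnt/project_repo/data/taxi_rides.csv")

    # Cleanse: drop missing rows and obviously bad durations.
    df = df.dropna()
    df = df[df["trip_duration"] > 0]
    print("data is done cleaning")

    features = ["pickup_longitude", "pickup_latitude",
                "dropoff_longitude", "dropoff_latitude", "hour_of_day"]
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["trip_duration"], test_size=0.2, random_state=42)

    # XGBoost is a gradient-boosted decision-tree algorithm: fit, then predict.
    model = xgb.XGBRegressor(n_estimators=200, max_depth=6)
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print("mean absolute error:", mean_absolute_error(y_test, preds))

    # Save the trained model back to the project repository for registration.
    model.save_model("/bd-fs-mnt/project_repo/models/taxi_duration.model")
    print("end time:", datetime.now())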
Now let's take a step back. That's the day-to-day life of the data scientist. Let's go back into the container platform and walk through how it was all set up for them. Here is my login page to the container platform. I'm going to log in as my user, and this brings me to the view of the ML Ops tenant within the container platform. This is where everything has been set up for me. The data scientist doesn't have to see this if they don't need to, but what I'll walk through now are the topics I mentioned previously that we would come back to.

First is the project repository, which comes with each tenant created on the platform. This is nothing more than a shared, collaborative workspace in which any data scientist who is allocated to this tenant has a POSIX client that lets them visually see all of their data and all of their code. It's actually taking a piece of the underlying data fabric and using that for your project repository. So you can see here I have some code, I can see my scoring script, and I can see the models that have been created within this tenant. It's a powerful tool in which you can store your code and any of your data, with the ability to read and write from any of your Jupyter environments or any of your created clusters within this tenant. A very useful addition, with which you can quickly interact with your data.

The next thing I want to show is the source control. Here is where you plug in all of the information for your source control, and if I edit this, you'll see all the information that I've passed in to configure it. On the backend, the container platform takes these credentials and connects the Jupyter notebooks you create within this tenant to that Git repository. And if GitHub is not of interest, we also have support for Bitbucket here as well.

Next, I want to show you that we do have these notebook environments. The notebook environment was created here; you can see that I have a notebook called "Terry notebook," and this is all running on the Kubernetes environment within the container platform. Either the data scientist can come here and create their notebook, or their project admin can create the notebook for them. All you have to do is come to these notebook endpoints: the container platform maps the notebook to a specific port, and you can just give this link to the data scientist. This link brings them to their own Jupyter environment, and they can start doing all of their modeling, just as I showed in that previous Jupyter environment.

Next I want to show the training cluster. This is the training cluster that was created, to which I can attach my notebook to start utilizing those resources. And the last thing I want to show is the deployment cluster. Once a model has been saved, we have a model registry in which we can register the model into the platform, and then the last step is to create a deployment cluster. Here on my screen I have a deployment cluster called "taxi deployment," and all of these serving endpoints have been configured for me, most importantly this model endpoint. The deployment cluster actually wraps the trained model with a Flask wrapper and adds a REST endpoint to it, so I can quickly operationalize my model by taking this endpoint and creating a curl command or even a POST request. Here I have my trusty Postman tool, in which I can format a POST request. I've taken that endpoint from the container platform and formatted my body right here: these are some of the features I want to send to the model, because I want to know how long a taxi ride at this location, at this time of day, would take. I can go ahead and send that request, and quickly I get an output: the ride duration will be about 2,600 seconds.
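As a sketch of that inference call, the request below mirrors what Terry formats in Postman. The endpoint URL, auth header, and payload schema are hypothetical; a real deployment cluster publishes its own endpoint and expected request body:

    # Hypothetical POST to the model-serving endpoint exposed by the
    # deployment cluster. URL, token, and JSON schema are illustrative.
    import requests

    endpoint = "https://gateway.example.com:10001/taxi_deployment/model"
    payload = {
        "pickup_longitude": -73.98, "pickup_latitude": 40.75,
        "dropoff_longitude": -73.79, "dropoff_latitude": 40.64,
        "hour_of_day": 17,
    }

    resp = requests.post(endpoint,
                         headers={"X-Auth-Token": "<session-token>"},
                         json=payload,
                         timeout=30)
    print(resp.json())   # e.g. {"ride_duration_seconds": 2600}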
So pretty much we've walked through how a data scientist can quickly interact with their notebook and train their model. Then, coming into the platform, we saw the project repository and the source control, we can register the model within the platform, and then quickly we can operationalize that model with our deployment cluster and have our model up and running and available for inference. So that wraps up the demo. I'm going to pass it back to Doug and Matt and see if they want to come off mute and whether there are any questions. Matt, Doug, you there? Okay.

>>Hey Terry, sorry, just had some trouble getting off mute there. No, that was an excellent presentation. There are generally some questions that come up when I talk to customers around how integrated into the Kubernetes ecosystem this capability is, and where Ezmeral stops and the open source technologies, like Kubeflow as an example, begin.

>>Yeah, sure, Matt. So this is one layer up: we have our ML Ops tenant, and this is all running on a piece of a Kubernetes cluster. If I log back out and go into the site admin view, this is where you see all the Kubernetes clusters being created, and it's actually all abstracted away from the data scientists. They don't have to know Kubernetes; they just interact with the platform if they want to. But here in the site admin view I have this Kubernetes dashboard, and on the left-hand side I have all my Kubernetes sections. If I just add some compute hosts, whether they're VMs or cloud compute hosts, like EC2 hosts, we can have those resources abstracted away from us to then create a Kubernetes cluster. Moving on down, I have created this Kubernetes cluster utilizing those resources.

If I go ahead and edit this cluster, you'll see that I have these hosts, and with a simple click-and-drop method I can move different hosts in to configure my Kubernetes cluster. Once my Kubernetes cluster is configured, I can then create a Kubernetes tenant, or in this case a namespace. Once I have this namespace available, I can go into that tenant, and as my user I don't actually see that it is running on Kubernetes. In addition, with our ML Ops tenants you have the ability to bootstrap Kubeflow. Kubeflow is an open source machine learning framework that runs on Kubernetes, and we have the ability to link that up as well. So, coming back to my ML Ops tenant: what I showed is the HPE Ezmeral Container Platform version of ML Ops, but you can see here that we've also integrated Kubeflow, a nod to HPE's contribution to utilizing open source. It's actually all configured within our platform.

>>Terry, can you hear me? It's Doug. There were a couple of other questions about Kubeflow that came in. I wonder whether you could just comment on why we've chosen Kubeflow, because I know there was a question about MLflow instead, and what the difference is between MLflow and Kubeflow.

>>Yeah, sure. So just to reiterate, there are some questions about Kubeflow, and...

>>Yeah, obviously one of the people watching saw the Kubeflow dashboard there, I guess, and couldn't help but get excited about it. But there was another question about MLflow versus Kubeflow and what the difference is between them.

>>Yeah. So Kubeflow is an open source framework that Google developed. It's a very powerful framework that comes with a lot of other unique tools on Kubernetes. With Kubeflow you really have the ability to launch other notebooks, and you can utilize different Kubernetes operators, like TensorFlow and PyTorch. You can utilize some of the frameworks within Kubeflow to do training, like Kubeflow Pipelines, which let you visually see your training jobs. It also has a plethora of serving mechanisms, such as Seldon, KFServing, and TFServing, for deploying your machine learning models. So Kubeflow is a very powerful tool for data scientists who want a fully open source, end-to-end stack and know how to use Kubernetes; it's just another way to do your machine learning model development. MLflow, on the other hand, is a different piece of the machine learning pipeline: it mainly focuses on model experimentation, comparing different models during training, and it can be used together with Kubeflow.
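For a flavor of the Kubeflow Pipelines workflow Terry mentions, a minimal pipeline in the v1-era kfp SDK might look like the sketch below. The container image and its arguments are hypothetical placeholders, and this is not how the Ezmeral demo itself was built; it simply illustrates the open source API:

    # Minimal Kubeflow Pipelines sketch (kfp v1-style SDK).
    import kfp
    from kfp import dsl

    @dsl.pipeline(name="taxi-duration",
                  description="Train the taxi ride-duration model")
    def taxi_pipeline(data_path: str = "/mnt/project_repo/data/taxi_rides.csv"):
        # One containerized training step; the image is a hypothetical placeholder.
        dsl.ContainerOp(
            name="train-xgboost",
            image="example.com/taxi-train:latest",
            arguments=["--data", data_path],
        )

    if __name__ == "__main__":
        # Submit to an in-cluster Kubeflow Pipelines endpoint.
        kfp.Client().create_run_from_pipeline_func(taxi_pipeline, arguments={})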
>>They're complementary, Terry, I think is what you're saying. Sorry, I know we are dramatically running out of time now. That was a really fantastic demo. Thank you very much indeed.

>>Exactly. Thank you. So I think that wraps it up. One last thing I want to mention: there is this slide I want to show in case you have any other questions. You can visit hpe.com/ezmeral or hpe.com/containerplatform if you have any questions. And that wraps it up. So thank you, guys.

Published Date: Mar 17, 2021


#HybridStorage


 

From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. Hi, I'm Peter Burris, analyst at Wikibon. Welcome to another Wikibon/theCUBE digital community event, this one sponsored by HPE and focusing on hybrid storage. Like all of our digital community events, this one will feature about 25 minutes of video followed by a CrowdChat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on the important issues facing business today. So what are we talking about today? Again, hybrid storage. Let's get going.

So what is hybrid storage? In a lot of shops, most people have associated the cloud with public cloud. But as we gain experience with the challenges of transforming to digital business, in which we use data as a singular value-producing asset, IT professionals are starting to realize the important relationship between data, storage, and cloud services. In many respects, what we're trying to master today is a better understanding of how the business is going to use data to effect significant changes in how it behaves in the marketplace. And it's that question of behavior, that question of action, that question of location, that is pushing businesses to think differently about how their cloud architectures are going to work. We're going to keep data proximate to where it's created, to where it's going to be used, to where it's going to be able to generate value, which demands that we have storage resources in place close to that data, proximate to that value-producing activity, and that the cloud services follow. In many respects, that's what we're talking about when we talk about hybrid cloud today: the increasing recognition that we're going to move cloud services to the data by default, and not move the data into the cloud, the public cloud specifically. So as we gain experience with this powerful set of technologies, data architecture is going to be increasingly distributed, storage therefore will be increasingly distributed, and cloud services will flow to where the data is required, utilizing the storage technologies that can best serve each set of workloads. It's a more complex world that demands new levels of simplicity, ease of use, and optimization. That's where we're going to start our conversation.

These crucial questions of how data, storage, and cloud are going to come together to create hybrid architectures were the basis for a great CUBE conversation between SiliconANGLE Wikibon's Dave Vellante and HPE's Sundip Arora. Let's hear what they had to say.

>>Let's break down those three things: cost efficiency, ease of use, and resource optimization. Let's start with cost efficiency. Obviously there's TCO, but there's also the way in which I consume. People, I presume, are looking for a different pricing model. Are you hearing that?

>>Yeah, absolutely. As part of the cost of running their business and being able to operate like a cloud, everybody is looking at a variety of different procurement and utilization models. One of the ways HPE provides a utilization model that can map to a cloud journey, a public cloud journey, is through GreenLake: the ability to use and consume data on demand, and consume compute on demand, across the entire portfolio of HPE products. That, essentially, is what a GreenLake journey looks like.
>>And let's go into ease of use. What do you mean by that? I mean, people think cloud, they think swipe the credit card and start deploying machines. What do you mean by "easy"?

>>For us, ease of use translates back to how you map to a simpler operating and support model. The support model is the key for customers to be able to realize the benefits of going to the cloud. To get to a simpler support model, we use AIOps, and for us AIOps means using a product called InfoSight. InfoSight is a product that uses deep learning and machine learning algorithms to look at a wide net of call-home data from the physical resources out there, and then takes that data and makes it actionable. And the action behind that is predictiveness, the prescriptiveness of creating automated support tickets and closing automated support tickets without anybody ever having to pick up a phone and call IT support. That InfoSight model is now being expanded across the board to all HPE products. It started with Nimble; now InfoSight is available on 3PAR, it's available on Synergy, and a recent announcement said it's also available on ProLiant. We expect InfoSight to become the glue, the automation and AI layer, that goes across the entire portfolio of HPE products.

>>So this is a great example of applying AI to data. It's like call home taken to a whole new level, isn't it?

>>Yeah, it absolutely is. In fact, it uses the call-home data that we've had for a long time with products like 3PAR, which was amazing data that just wasn't being actioned on in an automated fashion. It takes that data and creates automation tasks around it, and many times those automation tasks lead to a much simpler support experience.

>>All right, the third item you mentioned was resource optimization. Let's drill down into that. I infer from that there are performance implications, maybe governance, compliance, physical placement. Can you elaborate, add some color?

>>Yes, I think it's all of the above. It's definitely about applying the right performance level to the right set of applications. We call this application-aware storage: the ability to understand which application is creating the data allows us to understand how that data needs to be accessed, which in turn means we know where it needs to reside. One of the things HPE is doing in the storage domain is creating a common storage fabric with the cloud. We call it the fabric for the cloud. The idea there is that we have a single layer between the on-premises and off-premises resources that allows us to move data as needed, depending on the application needs and the user needs.

So these crucial new factors that have to be incorporated into everyone's thinking, cost efficiency, ease of use, and resource optimization, are going to place new types of stress on the storage hierarchy, and they're going to require new technologies to better support digital transformation. David Floyer, an analyst here at Wikibon, has been a leading thinker on the relationship between the storage hierarchy, workloads, and digital business for quite some time. I had a great conversation with David not too long ago. Let's hear what he had to say about this new storage hierarchy and the new technologies that are going to make these changes possible.

>>David, you've been looking at this notion of modern storage architectures for 10 years now, and you've been relatively prescient in understanding what's going to happen. You were one of the first to predict, well in advance of everybody else, that the crossover between flash and HDD was going to happen sooner rather than later. So I'm not going to spend a lot of time quizzing you. What do you see as a modern storage architecture? Let's just let it rip.
>>Okay. Well, let's start with one simple observation: the days of standalone systems for data have gone. We're in a software-defined world, and you want to be able to run those data architectures anywhere the data is. That means in your data center where it's created, or in a public cloud, or at the edge. You want to be flexible enough to do all of the data services wherever the best place is, and that means everything has to be software-defined. Software-defined is the first proposition of a modern data architecture. The second thing is that there are different types of technology. You have the very fastest storage, which is in the DRAM itself; you have NVDIMM, which is the next one down from that, expensive but a lot cheaper than the DIMM; then you have different sorts of flash, the high-performance flash and the 3D flash with as many layers as you can get, which is much cheaper; and at the bottom you have HDDs and even tape as storage devices. So the key question is: how do you manage that sort of environment?

>>Let me jump in, because it still sounds like we have a storage hierarchy.

>>Absolutely.

>>And it still sounds like that hierarchy is defined largely in terms of access speeds and price points.

>>Yes, those are the two main ones, with bandwidth and latency as well, which are tied into those.

>>So if you're going to have this everywhere, and you need services everywhere, what you have to have is an architecture that takes away all of that complexity, so that all you see from an application point of view is data. How it gets there, how it's put away, how it's stored, how it's protected: that's all under the covers. So the first thing is you need a virtualization of that data layer, the physical layer.

>>A virtualization of that physical layer, yes.

>>And secondly, you need that physical layer to extend to all the places that may be using this data. You don't want to be constrained to "this dataset lives here." You want to be able to say, "I want to move this piece of programming to the data as quickly as I can," because that's much, much faster than moving the data to the processing. So I want to know, for a particular dataset or file, where all the data is, how the pieces connect together, and what the latency is between everything. I want to understand that architecture, and I want a virtualized view of it across all the nodes that make up my hybrid cloud.

>>Let me be clear here. So we are going to use a software-defined infrastructure that allows us to place the physical devices that have the right cost and performance characteristics where they need to be, based on the physical realities of latency, power availability, hardening, and the network, but we want to mask that complexity from the application, the application developer, and the application administrator.

>>Yes. And software-defined helps do that, but doesn't completely do it. You want services on top of all that which are recognizable by the developer, by the business person, by the administrator, as they think about how they use data toward those outcomes: not using a storage device, but using the data to reach application outcomes.
>>That's absolutely right. And that's what I call the data plane: a series of services which enable that to happen, driven by the application requirements.

>>We've looked at this, and some of the services include compression, deduplication, backup and restore, security, and data protection in general. So that's the set of services that the enterprise buyer now needs to think about, so that those services can be applied, by policy, wherever they're required, based on the utilization of the data, where the event takes place. And then at the bottom of that you still have the different types of devices. Hard disks are not disappearing.

>>No, but if you're going to use hard disks, you want to use them in the right way: give them large blocks, have the data going sequentially in and out all the time. The storage administration and the physical schema and everything else are still important in all this, but less important, less the centerpiece of the buying decision.

>>Increasingly it's how well this stuff supports the services that the business is using to achieve its outcomes.

>>And you want to use, of course, the lowest cost that you can, and there will be many different options, with more options opening up. But the automation of that is absolutely key. And for that automation, from a vendor point of view, one of the key things they have to do is learn from the usage of their customers, across as broad a number of customers as they can: learn what works and what doesn't work, so that they can put that learning into their own software and their own software services.

>>So it sounds like we're talking about four things: software-defined; a storage hierarchy still defined by cost and performance, but mainly semiconductor-based; great data services that are relevant to the business; and automation that masks the complexity, powered by AI.

>>Yes, and many other things besides.

>>Fantastic. So David's thinking on the new storage hierarchy, and how it's going to relate to new classes of workload, is a baseline for a lot of the changes happening in the industry today. But we still have to turn technology into services that deliver higher levels of value. Once again, let's go back to Dave Vellante's conversation with Sundip Arora, and hear what Sundip has to say about some of the new data services that are going to be essential to supporting these new hybrid storage capabilities.

>>What it does is give us the opportunity not just to look at call-home data from storage, but also to look at call-home data from the compute side. Then we can correlate the data coming back to get better predictability and outcomes for your data center operations, as opposed to doing it only at the layer of the storage infrastructure.

>>You also set out a vision of this orchestration layer. Can you talk more about that? Are we talking about across all clouds, whether it's on-prem, at the edge, or in the public cloud?

>>Yeah, we are. We're talking about making it as simple as possible, where customers are not necessarily picking and choosing. It allows them to have a strategy that goes across the data center, whether that's a public cloud, building their own private infrastructure, or running on a traditional on-premises SAN infrastructure. That's what our cloud fabric vision allows customers to do.
>>And what about software-defined storage? Where does that fit into this whole equation?

>>I'm glad you mentioned that, because that's the third tenet of what HPE truly brings to our customers. Software-defined is something that allows us to maximize the utilization of the existing resources our customers have. We've partnered with a great number of really strong software-defined vendors, such as Commvault, Cohesity, Qumulo, and Datera, and we work very closely with the likes of Veeam and Zerto. The goal there is to provide our customers with a whole range of options for building a software-defined infrastructure, built off the Apollo series of products. The Apollo servers, storage products for us, are extremely dense storage products that allow for both cost and resource optimization.

So Sundip made some fantastic points about how new storage technologies are going to be turned into usable services that digital businesses will require as they conceive of their overall hybrid storage approach. Here's an opportunity to hear a little bit more about what HPE thinks about some of these crucial areas, in this chalk talk short take.

>>I'm going to introduce you to HPE Primera storage. If you want the agility of the public cloud but need the resiliency and speed of high-end storage for mission-critical applications, you're forced into a trade-off of agility for resiliency: high-end storage is fast and reliable but falls short on agility and simplicity. What if you could have it all? What if you could have both agility and resiliency for your mission-critical apps? Introducing the world's most intelligent storage for mission-critical apps: HPE Primera. It delivers an on-demand experience, so storage is instantly available; app-aware resiliency, backed with a hundred percent availability guarantee; and predictive acceleration, so apps aren't fast some of the time but fast all the time, with embedded AI. Let me tell you more about how HPE Primera was engineered to drive unique value in high-end storage. There are four areas we focus on: global intelligence, powered by the most advanced AI for infrastructure, InfoSight; an all-active architecture with multiple nodes for higher resiliency and limitless parallelization; a service-centric OS that eliminates risk and simplifies management; and timeless storage, with a new ownership experience that keeps getting better. To learn more, go to hpe.com/storage/primera.

So that's been a great series of conversations about hybrid storage, and I want to thank Sundip Arora of HPE, David Floyer of Wikibon/SiliconANGLE, Jim Kobielus of Wikibon/SiliconANGLE, and my colleague Dave Vellante for helping out on the interview side. I'm Peter Burris, and this has been another Wikibon/theCUBE digital community event, sponsored by HPE. Now stay tuned for our CrowdChat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on hybrid storage. Once again, thank you very much for watching. Let's CrowdChat.

Published Date: Aug 21, 2019


HPE Data Platform


 

From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. Hi, I'm Peter Burris, analyst at Wikibon. Welcome to another Wikibon/theCUBE digital community event, this one sponsored by HPE. Like all of our digital community events, this one will feature about 25 minutes of video followed by a CrowdChat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on important issues facing business today.

So what are we talking about today? Over the course of the last six months or so, we've had a lot of conversations with our customers about the core issues that multi-cloud is going to engender within business. One of them, clearly, is how we bring greater intelligence to how we move, manage, and administer data within the enterprise. Some of the more interesting conversations we've had turn out to have been with HPE, and that's what we're going to talk about today. We're going to spend a few minutes with a number of HPE professionals, as well as Wikibon professionals and thought leaders, talking about the challenges that enterprises face as they consider intelligent data platforms. So let's get started. The first conversation is with Sandeep Singh, who is a vice president at HPE.

>>Sandeep, I started off by making the observation that we've got this mountain of data coming into a lot of enterprises. At the same time, the notion that data is going to create new classes of business value seems to be pretty deeply ingrained and acculturated among a lot of decision-makers. So they want more value out of their data, but they're increasingly concerned about the volume of data that's going to hit them. In your conversations with customers, how are you hearing them talk about this fundamental challenge?

>>That's a great question. Across the board, data is at the heart of applications, of pretty much everything that organizations do. In conversations with customers, it really boils down to a couple of areas. One is: how is my data just effortlessly available all the time, and always fast? Because fundamentally that's driving the speed of my business, and that's incredibly important. And how can my various audiences, including developers, consume it like the public cloud, in a self-service fashion? The second part of that conversation is really about this massive data storm, the mountain of data that's coming: how do I drive a competitive advantage, and how do I unlock the hidden insights in that data to uncover new revenue streams and new customer experiences? Those are the areas we hear about, and fundamentally, underlying them, the challenge for customers is: I have a lot of complexity, and how do I ensure that I have the necessary intelligence in the infrastructure management, so that my IT staff isn't beholden to fighting the IT fires that can cause disruptions and delays to projects?

>>So fundamentally, we want to be able to take the time and attention spent on the infrastructure, on the administration of those devices that handle the data, and move that time and attention up into how we deliver the data services, and ideally up into the applications that are going to actually generate a new class of work within a digital business. Have I got that right?
>>Absolutely. It's about infrastructure that just runs seamlessly. It's always on, it's always fast. People don't have to worry about whether it's going to go down, whether the data is available, or whether it's going to slow down. People don't want "sometimes fast"; they want always fast. And that's governing the application performance that you can ultimately deliver. And as you talked about, if the data infrastructure just works seamlessly, then I can eventually get to the applications and to building the right pipelines, ultimately, for mining that data, for doing the AI and machine learning, the analytics-driven insights, from there.

Great discussion about the importance of data in the enterprise and how it's changing the way we think about business. We're going to come back to Sandeep shortly, but first let's spend some time talking with David Floyer, the Wikibon analyst, about the new mindset that is required to take advantage of some of these technologies and solve some of these problems. Specifically, we need to think increasingly about data services. Let's hear what David has to say.

>>Explain what that new mindset is.

>>Yes, I completely agree that a new mindset is required, and it starts with wanting to be able to deal with data wherever it's going to be. We are in a hybrid cloud world: your own clouds, other public clouds, partner clouds. All of these need to be integrated, and data is at the core of it. The requirement, then, rather than thinking about each individual piece, is to think about services which are going to be applied to that data, and which can be applied not only to the data in one place but across all of that data. And there isn't just one set of services; there are going to be multiple sets of these services available, though hopefully we will see some degree of convergence, so there'll be the same lexicon and the same concepts. There'll be the same kinds of capabilities needed within each of these architectures, but with different emphasis on different areas.

>>We need to look at the way we administer data as a set of services that create outcomes for the business, as opposed to something that is then translated into individual devices. So let's jump into this notion of what those services look like. It seems as though we can list off a couple of them.

>>Sure. You must have data reduction techniques, so you must have deduplication and compression types of techniques, and you want to apply them across as big an amount of data as you can: the more data you apply them to, the higher the levels of compression and deduplication you can get. So that's clearly one set of services.
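As a quick aside, here is a toy illustration of the deduplication idea David just named, using content-addressed blocks: identical blocks hash to the same key and are stored once, so applying the service across more data yields more savings. This sketch is purely illustrative, not HPE's or anyone's actual implementation, which would do inline chunking with far more engineering:

    # Toy block-level deduplication sketch (illustrative only).
    import hashlib

    BLOCK_SIZE = 4096
    store = {}   # content-addressed block store: SHA-256 digest -> block bytes

    def write_dedup(data):
        """Split data into fixed-size blocks; store each unique block once."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            key = hashlib.sha256(block).hexdigest()
            store.setdefault(key, block)   # a duplicate block costs nothing extra
            recipe.append(key)
        return recipe                      # the "file" is just a list of block keys

    def read_dedup(recipe):
        return b"".join(store[key] for key in recipe)

    # Two copies of the same payload consume the storage of one:
    payload = b"x" * (BLOCK_SIZE * 10)
    r1, r2 = write_dedup(payload), write_dedup(payload)
    print(len(store), "unique block(s) stored")   # -> 1
    assert read_dedup(r1) == read_dedup(r2) == payload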
>>You must also be able to back up and restore data, in another place, and restore it quickly and easily. That again is a service, and how quickly and how integrated that recovery is, again, is a variable, a differentiation in the service. You're going to need data protection in general, end-to-end protection of one sort or another. For example, you need end-to-end encryption across there: it's no longer good enough to say this bit's been encrypted and then that bit's been encrypted. It's got to be end to end, from one location to another location, seamlessly provided.

>>Let me press on that, because I think it's a really important point: the weakest link determines the strength of the chain. What you just described says that if you have encryption here and you don't have encryption there, then because of the nature of digital, when you start bringing that data together, guess what: the weakest link determines the protection of the overall data.

>>Absolutely, yes. And then you need services like snapshots, and other services which provide much better usage of that data. One of the great things that flash has brought about is that you can take a copy of the data in real time and use it for a totally different purpose, and have it be changed in a different way. So there are some really significant improvements you can get with services like snapshots. And then you need some other services which are becoming even more important, in my opinion. The advent of bad actors in the world has really brought about the requirement for things like air gaps: having your data, with the metadata, all in one place and completely separated from everything else. There are such things as logical air gaps; I think that as long as they're real, in the sense that the two paths can't interfere with each other, those are going to be very, very important services. That's an example of a general class of security data services that are required.

>>So ultimately, what we're describing is a new mindset that says the storage administrator has to think about the services that the applications and the business require, and then seek out technologies that can provide those services at the right price point, with the right power consumption, space, and environmental footprint, and with the type of maintenance and support required, based on the physical location, the degree to which it's under their control, and so on. Is that how we should think about this?

>>I think absolutely. And again, there are going to be multiples of these in the marketplace, and one size is not going to fit all. If you want super-fast response time at an edge, where if you don't get that response in time it's going to be of no use whatsoever, you're going to have a different architecture, a different way of doing it, than if you need to be a hundred percent certain that every bit is captured, in a financial sort of environment, say. But from a service standpoint, you want to be able to look at each specific solution in a common way: common policies, common capabilities.

>>Correct. Great observations by David Floyer. It's very clear that for enterprises to get more control over their data assets and how they create value out of data, they have to take a services mentality. But the challenge we all face is that just taking a services mentality is not going to be enough. We have to think about how we're going to organize those services into a platform that is pertinent and relevant to how business operates in a digital sense. So let's go back to Sandeep Singh and talk to him a little bit about the HPE notion of the intelligent data platform.

>>You've been one of the leaders in the complex systems arena for a long time, and that includes storage. Where are you taking some of these technologies?

>>Yeah, so our strategy is to deliver an intelligent data platform. That intelligent data platform begins with workload-optimized composable systems that can span the mission-critical, general-purpose, secondary, and big data and AI workloads. We also deliver cloud data services that enable you to embrace hybrid cloud. All of these systems, all the way to the cloud data services, are plumbed with data mobility, so, for example, use cases of modernizing protection, and going all the way to protecting cost-effectively in the public cloud, are enabled.
But really, all of these systems are then imbued with a level of intelligence, with a global intelligence engine. That begins with predicting and proactively resolving issues before they occur, but it goes way beyond that, delivering prescriptive insights that are built on top of global learning across hundreds of thousands of systems, with over a billion data points coming in on a daily basis. It can put information at the fingertips of even the virtual machine admins, to say: this virtual machine is sapping the performance of this node, and if you were to move it to this other node, the performance, the SLA for the whole virtual machine farm, will be even better. We build on top of that to deliver pre-built automation, hooked in with a REST API-first strategy, so that developers can consume it in a containerized application that's orchestrated with Kubernetes, or leverage it as infrastructure as code, whether with Ansible, Puppet, or Chef. We accelerate all of the application workloads and bring app-aware data protection, so it's available for the traditional business applications, whether they're built on SAP, Oracle, or SQL, for the virtual machine farms, and for the new-stack containerized applications. And then customers can build their AI and big data pipelines on top of the infrastructure with a plethora of tools, whether they're using Kafka, Elastic, MapR, or H2O; that complete flexibility exists. And within HPE we're then able to turn around and deliver all of this with an as-a-service experience, with HPE GreenLake, to customers.

>>So that's where I want to take you next. How invasive is this going to be to a large shop?

>>Well, it is completely seamless in that way. With GreenLake we're able to deliver a fully managed service experience, with a cloud-like pay-per-use consumption model, and combining it with HPE Financial Services, we're also able to transform their organization in terms of this journey and make it a fully self-funding journey as well.

>>So today, the typical shop has a bunch of administrators who are administering devices. That's starting to change: they've introduced automation, but that automation is typically associated with those devices. If we think three to five years out, folks are going to be thinking more in terms of data services and how those services get consumed, and that's going to be what the storage part of IT is thinking about. They almost become data administrators, if I've got that right.

>>Yes. Intelligence is fundamentally changing everything, not only on the consumer side but on the business side. A lot of what we've been talking about is that intelligence is the game changer; we actually see the dawn of the intelligence era. Through this AI-driven experience, what it means for customers is, first, a support experience that they just absolutely love. Secondly, it means that the infrastructure is always on, always fast, and always optimized. And thirdly, in terms of making these data services available and unlocking data insights, it's all about how you can enable your innovators, the data scientists and the data analysts, to shrink the time to deriving insights from months literally down to minutes. Today there's this chasm that exists between the great concept of leveraging AI technology and making it real: thinking about where it can actually fit, and then how to implement an end-to-end solution and technology stack, so that there's simply a pipeline available to me. That chasm is literally a matter of months. What we're able to deliver, for example with HPE BlueData, is literally a catalog, self-service experience where you can select and seamlessly build a pipeline in a matter of minutes, and it's all completely hosted, seamlessly, essentially making AI and machine learning available for the mainstream.
>>So the intelligent data platform makes it possible to see these new classes of applications become routine, without forcing the underlying storage administrators themselves to become data scientists.

>>Absolutely.

All right. The intelligent data platform is a great concept, but it has to be made real, and it's being made real today by HPE. Calvin Zito is a thought leader at HPE, and he's done a series of chalk talks on improving storage and improving data management. One of the more interesting ones was specifically on the intelligent data platform. Let's watch Calvin Zito's chalk talk.

>>Hey guys, it's time for another Around the Storage Block chalk talk. In this chalk talk, we're going to look at the intelligent data platform. Let me set up the discussion. At HPE, we see the dawn of the intelligence era. The flash era brought speed, and flash is now table stakes. The cloud era brought new levels of agility, and everyone expects an as-a-service experience going forward. The intelligence era, with an AI-driven experience for infrastructure operations and AI-enabled unlocking of insights, is poised to catapult businesses forward. The intelligence era will see the rise of the intelligent enterprise. The enterprise will be always on, always fast, always agile to respond to different challenges. But most of all, the intelligent enterprise will be built for innovation: innovation that can unleash new services, revenue streams, and business models. Every enterprise will need an intelligent data strategy, where your data is always on and always fast, automated and on demand, hybrid by design, and applies global intelligence for visibility and lifecycle management. Our strategy is to deliver an intelligent data platform that turns your data challenges into business opportunities. It begins with workload-optimized composable systems for multiple workloads, and we deliver cloud data services for a hybrid cloud environment, so that you can seamlessly move data throughout its lifecycle. I'll have more on this in a moment. The global intelligence engine infuses the entire infrastructure with intelligence. It starts with predicting and proactively resolving issues before they occur. It creates a unique workload fingerprint, and these workload fingerprints, combined with global learning, enable us to drive recommendations to keep your app workloads and supporting infrastructure always optimized and delivering predictable speed. We have a REST API-first strategy and offer pre-built automation connectors. We bring app-aware protection for both traditional and modern new-stack application workloads, and you can use the intelligent data platform to build and deliver flexible big data and AI pipelines for driving real-time analytics. Let's take a quick look at the portfolio of workload-optimized composable systems. These are systems that span mission-critical and general-purpose workloads, as well as secondary data and solutions for the emerging big data and AI applications.
Because our portfolio is built for the cloud, we offer comprehensive cloud data services for both production workloads and backup and archive in the cloud. HPE InfoSight provides the global intelligence across the portfolio, and we give you the flexibility of consuming these solutions as a service with HPE GreenLake. I want to close with one more thing. The HPE Intelligent Data Platform has three main attributes. First, it's AI-driven: it removes the burden of managing infrastructure so that IT can focus on innovating and not administrating. Second, it's built for cloud, and it enables easy data and workload mobility across hybrid cloud environments. Finally, the intelligent data platform delivers an as-a-service experience, so you can be your own cloud provider. To learn more, go to hp.com intelligent data. I always love to hear from you on Twitter, where you can find me as Calvin Zito, and you can find my blog at hp.com/blog. Until next time, thanks for joining me on this Around the Storage Block chalk talk. >> I think Calvin makes a compelling case that the opportunity to use these technologies is available today, not something that we're just going to wait for in the future. And that's good, because one of the most important things that business has to think about is how they are going to utilize some of these new AI and related technologies to alter the way that they engage their customers, run their businesses, and handle their operations, and ultimately improve their overall efficiency and effectiveness in the marketplace. It's very clear that this intelligent data platform is required to do many of the advanced AI things that business wants to do, but it also requires AI in the platform itself. So let's go back to Sandeep Singh, and talk to Sandeep about how HPE foresees AI being embedded into the intelligent data platform so it can make possible greater utilization of AI in the rest of the application portfolio. So, we've got this significant problem. We now have to figure out how to architect, because we want predictability and certainty and cost clarity as to how we're going to do this. Part of the challenge, or part of the push here, is new use cases for AI. We're trying to push data up so that we can build these new use cases, but it seems that we also have to take some of those very same technologies and drive them down into the infrastructure, so we get greater intelligence, greater self-metering, and greater self-management and self-administration within the infrastructure itself. Have I got that right? >> Yes, absolutely. What becomes important for customers, when you think about data, and ultimately the storage that underlies the data, is that you can build and deploy fast and reliable storage, but that's only solving half the problem. Greater than 50% of the issues actually end up arising from the higher layers. For example, you could change the firmware on the host bus adapter inside a server; that can trickle down and cause a data unavailability or a performance slowdown issue. You need to be able to predict that all the way at that higher level and then prevent it from occurring. Or your virtual machines might be in a state of over-commitment on memory at the server level, or CPU over-commitment; how do you discover those issues and prevent them from happening? The other area that's becoming important is when we talk about this whole notion of cloud and hybrid cloud; that complexity tends to multiply exponentially.
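The workload-fingerprint idea from the chalk talk can be illustrated with a toy sketch: reduce an I/O trace to a small feature vector, then match it against known profiles to drive a recommendation. Every feature, profile, and number below is invented for the example; a production engine would learn these from fleet-wide telemetry rather than hard-code them.

import numpy as np

def fingerprint(block_sizes_kb, read_fraction, queue_depths):
    # Summarize a raw I/O trace into a tiny feature vector.
    return np.array([
        np.median(block_sizes_kb),          # typical I/O size
        read_fraction,                      # read/write mix
        np.percentile(queue_depths, 95),    # burstiness
    ])

# Invented reference profiles for the sketch.
KNOWN_PROFILES = {
    "oltp-database":  np.array([8.0, 0.70, 32.0]),
    "vdi-boot-storm": np.array([4.0, 0.90, 128.0]),
    "analytics-scan": np.array([256.0, 0.95, 8.0]),
}

def closest_profile(fp):
    # Nearest neighbour on roughly normalized features.
    scale = np.array([256.0, 1.0, 128.0])
    return min(KNOWN_PROFILES,
               key=lambda k: np.linalg.norm((fp - KNOWN_PROFILES[k]) / scale))

fp = fingerprint([8, 8, 16, 8], 0.72, [10, 20, 30, 40])
print("workload looks like:", closest_profile(fp))  # matches oltp-database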
>> So with the smarts you guys are going after, building that hybrid cloud infrastructure faces fundamental challenges. Even as I've got a new workload and I want to place it, even on premises, because you've had lots of silos, how do you even figure out where I should place workload A, and how it'll react with workloads B and C on a given system? And now you multiply that across hundreds of systems and multiple clouds, and you can see that the challenge is multiplying exponentially. >> Oh yeah. Well, I would say that, you know, on where do I put workload A, the right answer today may be here, but the right answer tomorrow may be somewhere else, and you want to make sure that the services required to perform workload A are resident and available, without a lot of administrative work necessary to ensure that there's commonality. That's kind of what we mean by this hybrid multi-cloud world, isn't it? >> Absolutely. And when you start to think about it, basically you end up requiring, and fundamentally needing, the data mobility aspect of it, because without the data you can't really move your workloads. And you need consistency of data services, so that if your app is architected for reliability and a set of data services, those just go along with the application. And then you need, building on top of that, the portability for your actual application workload, consistently managed with a hybrid management interface. So we want to use an intelligent data platform that's capable of assuring performance, assuring availability, and assuring security, and going beyond that to then deliver a simplified, automated experience, so that everything is just available through a self-service interface. And then it brings along a level of intelligence that's just built into it globally, so that instead of trying to manually predict, and landing in a world of reacting after IT fires have occurred, there is a sea of sensors and the infrastructure is automatically predicting and preventing issues before they ever occur. And then, going beyond that, how can you actually fingerprint the individual application workloads to then deliver prescriptive insights, to keep the infrastructure always optimized in that sense? >> So, discerning the patterns of data utilization, so that the administrative costs of making sure the data is available where it needs to be are contained, number one; number two, assuring that data as an asset is made available to developers as they create new applications, new things that create new work, but also working very closely with the administrators so that they are not bound up in, as you know, an explosion in the number of tasks they have to perform to keep this all working across the board. >> Yes. >> I want to thank Sandeep Singh and Calvin Zito, both of HPE, as well as Wikibon's David Floyer, for sharing their ideas on this crucially important topic of how we're going to take more of a platform approach to do a better job of managing crucial data assets in today's and tomorrow's digital businesses. I'm Peter Burris, and this has been another Wikibon theCUBE digital community event, sponsored by HPE. Now stay tuned for our crowd chat, which will be your opportunity to ask your questions, share your experiences, and push the community's thinking on important issues facing business today. Thank you very much for watching, and now let's CrowdChat. [Music]
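The "where do I place workload A" question in that exchange can also be sketched as a simple scoring heuristic: reject nodes where the workload does not fit, then weigh remaining headroom against data locality. All node data, weights, and numbers here are invented for illustration; re-running the score as conditions change is what moves the right answer from one place to another, which is exactly the point made above.

# Toy placement scorer; every figure below is made up for the example.
NODES = [
    {"name": "on-prem-1", "cpu_free": 0.30, "mem_free": 0.50, "has_data": True},
    {"name": "on-prem-2", "cpu_free": 0.70, "mem_free": 0.60, "has_data": False},
    {"name": "cloud-a",   "cpu_free": 0.90, "mem_free": 0.90, "has_data": False},
]

def score(node, cpu_need, mem_need, locality_weight=0.4):
    if node["cpu_free"] < cpu_need or node["mem_free"] < mem_need:
        return float("-inf")  # workload does not fit at all
    headroom = (node["cpu_free"] - cpu_need) + (node["mem_free"] - mem_need)
    return headroom + (locality_weight if node["has_data"] else 0.0)

best = max(NODES, key=lambda n: score(n, cpu_need=0.25, mem_need=0.30))
print("place workload A on:", best["name"])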

Published Date : Jul 26 2019


Infrastructure For Big Data Workloads


 

>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Hi, everybody, welcome to this special CUBE Conversation. You know, big data workloads have evolved, and the infrastructure that runs big data workloads is also evolving. Big data, AI, other emerging workloads need infrastructure that can keep up. Welcome to this special CUBE Conversation with Patrick Osborne, who's the vice president and GM of big data and secondary storage at Hewlett Packard Enterprise, @patrick_osborne. Great to see you again, thanks for coming on. >> Great, love to be back here. >> As I said up front, big data's changing. It's evolving, and the infrastructure has to also evolve. What are you seeing, Patrick, and what's HPE seeing in terms of the market forces right now driving big data and analytics? >> Well, some of the things that we see in the data center: there is a continuous move from bare metal to virtualized. Everyone's on that train. To containerization of existing apps, your apps of record, business, mission-critical apps. But really, what a lot of folks are doing right now is adding additional services to those applications, those data sets, so, new ways to interact, new apps. A lot of those are being developed with a lot of techniques that revolve around big data and analytics. We're definitely seeing the pressure to modernize what you have on-prem today, but you know, you can't sit there and be static. You gotta provide new services around what you're doing for your customers. A lot of those are coming in the form of this Mode 2 type of application development. >> One of the things that we're seeing, everybody talks about digital transformation. It's the hot buzzword of the day. To us, digital means data first. Presumably, you're seeing that. Are organizations organizing around their data, and what does that mean for infrastructure? >> Yeah, absolutely. We see a lot of folks employing not only technology to do that, they're doing organizational techniques, so, peak teams. You know, bringing together a lot of different functions. Also, too, organizing around the data has become very different right now, in that you've got data out on the edge, right? It's coming into the core. A lot of folks are moving some of their edge to the cloud, or even their core to the cloud. You gotta make a lot of decisions and be able to organize around a pretty complex set of places, physical and virtual, where your data's gonna lie. >> There's a lot of talk, too, about the data pipeline. The data pipeline used to be, you had an enterprise data warehouse, and the pipeline was, you'd go through a few people that would build some cubes and then they'd hand off a bunch of reports. The data pipeline, it's getting much more complex. You've got the edge coming in, you've got, you know, core. You've got the cloud, which can be on-prem or public cloud. Talk about the evolution of the data pipeline and what that means for infrastructure and big data workloads. >> For a lot of our customers, and we've got a pretty interesting business here at HPE, we do a lot with the Intelligent Edge, so, our Edgeline servers and Aruba, where a lot of the data is sitting outside of the traditional data center.
Then we have what's going on in the core, where a lot of customers are moving from either traditional EDW, right, or even Hadoop 1.0 if they started that transformation five to seven years ago, to a world where a lot of things are happening now in real time, or a combination thereof. The data types are pretty dynamic. Some of that is always getting processed out on the edge; results are getting sent back to the core. We're also seeing a lot of folks move to real-time data analytics, or some people call it fast data. That sits in your core data center, so utilizing things like Kafka and Spark. A lot of the techniques for persistent storage are brand new. What it boils down to is, it's an opportunity, but it's also very complex for our customers. >> What about some of the technical trends behind what's going on with big data? I mean, you've got sprawl, both data sprawl and workload sprawl. You got developers that are dealing with a lot of complex tooling. What are you guys seeing there, in terms of the big mega-trends? >> As you know, HPE has quite a few customers in the mid-range and enterprise segments. We have some customers that are very tech-forward. A lot of those customers are moving from this, you know, Hadoop 1.0, Hadoop 2.0 system to a set of essentially mixed workloads that are very multi-tenant. We see customers that have, essentially, a mix of batch-oriented workloads. Now they're introducing these streaming types of workloads, folks who are bringing in things like TensorFlow and GPGPUs, and they're trying to apply some of the techniques of AI and ML into those clusters. What we're seeing right now is that that is causing a lot of complexity, not only in the way you do your apps, but in the number of applications and the number of tenants who use that data. It's getting used all day long for various different purposes, so now what we're seeing is it's grown up. It started as an opportunity, a science project, the POC. Now it's business-critical, very mission-critical for a lot of the services that it drives. >> Am I correct that those diverse workloads used to require a bespoke set of infrastructure that was very siloed? I'm inferring that technology today will allow you to bring those workloads together on a single platform. Is that correct? >> A couple of things that we offer have been helping customers get off the complexity train while providing them flexibility and elasticity. A lot of the workloads that we did in the past were either very vertically-focused and integrated, one app server, networking, storage, or, you know, the beginning of the analytics phase, which was really around symmetrical clusters and scaling them out. Now we've got a very rich and diverse set of components and infrastructure that can essentially allow a customer to make a data lake that's very scalable. Compute, storage-oriented nodes, GPU-oriented nodes, so it's very flexible and helps us, helps the customers, take complexity out of their environment.
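Patrick's earlier point about fast data sitting in the core, utilizing things like Kafka and Spark, is easy to make concrete. Here is a minimal consumer sketch assuming the kafka-python package; the broker address, topic name, and message fields are invented for the example and are not tied to any HPE product.

import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical broker and topic; illustrative only.
consumer = KafkaConsumer(
    "edge-telemetry",
    bootstrap_servers="broker.example.com:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="latest",
)

for msg in consumer:
    reading = msg.value
    # Edge nodes have already scored each reading; the core reacts to
    # the interesting ones (field names are made up for the sketch).
    if reading.get("anomaly_score", 0.0) > 0.9:
        print(f"investigate sensor {reading['sensor_id']}: {reading}")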
>> In thinking about, when you talk to customers, what are they struggling with, specifically as it relates to infrastructure? Again, we talked about tooling. I mean, Hadoop is well-known for the complexity of the tooling. But specifically from an infrastructure standpoint, what are the big complaints that you hear? >> A couple things that we hear is that my budget's flat for the next year or a couple of years, right? We talked earlier in the conversation about, I have to modernize, virtualize, containerize my existing apps; that means I have to introduce new services as well, with a very different type of DevOps, you know, mode of operations. That's all with the existing staff, right? That's the number one issue that we hear from the customers. Anything that we can do to help increase the velocity of deployment through automation helps. We hear now, frankly, the battle is over whether I'm gonna run these types of workloads on-prem versus off-prem. We have a set of technology as well as services, enabling services with Pointnext. You remember the acquisition we made around Cloud Technology Partners: to right-place where those workloads are gonna go, become like a broker in that conversation, and assist customers to make that transition, and then, ultimately, give them an elastic platform that's gonna scale for the diverse set of workloads, that's well-known, sized, and easy to deploy. >> As you get all this data, and the data's, you know, Hadoop, it sorta blew up the data model. Said, "Okay, we'll leave the data where it is, we'll bring the compute there." You had a lot of skunk works projects growing. What about governance, security, compliance? As you have data sprawl, how are customers handling that challenge? Is it a challenge? >> Yeah, it certainly is a challenge. I mean, we've gone through it just recently with, you know, GDPR being implemented. You gotta think about how that's gonna fit into your workflow, and certainly security. The big thing that we see, certainly, is around the data residing outside of your traditional data center; that's a big issue. For us, when we have Edgeline servers, certainly a lot of things are coming in over wireless, and there's a big buildout with the advent of 5G coming out. That certainly is an area that customers are very concerned about in terms of who has their data, who has access to it, how can you tag it, how can you make sure it's secure. That's a big part of what we're trying to provide here at HPE. >> What specifically is HPE doing to address these problems? Products, services, partnerships, maybe you could talk about that a little bit. Maybe even start with, you know, what's your philosophy on infrastructure for big data and AI workloads? >> I mean, for us, over the last two years we've really concentrated on essentially two areas. We have the Intelligent Edge, which has certainly been enabled by fantastic growth with our Aruba products in the networking space and our Edgeline systems, so, being able to take that type of compute and get it as far out to the edge as possible. The other piece of it is around making hybrid IT simple, right? In that area, we wanna provide a very flexible, yet easy-to-deploy set of infrastructure for big data and AI workloads. We have this concept of the Elastic Platform for Analytics. It helps customers deploy that for a whole myriad of requirements: very compute-oriented, storage-oriented, GPUs, cold and warm data lakes, for that matter. And the third area we've really focused on is the ecosystem that we bring to our customers, which, as a portfolio, is evolving rapidly. As you know, in this big data and analytics workload space, the software development portion of it is super dynamic. If we can bring a vetted, well-known ecosystem to our customers as part of a solution with advisory services, that's definitely one of the key pieces that our customers love to come to HP for.
>> What about partnerships around things like containers and simplifying the developer experience? >> I mean, we've been pretty public about some of our efforts in this area around OneSphere, and some of the models around, certainly, advisory services in this area with some recent acquisitions. For us, it's all about automation, and then we wanna be able to provide that experience to the customers, whether they want to develop those apps and deploy on-prem, you know, we love that. I think you guys tag it as true private cloud. But we know that the reality is, most people are embracing very quickly a hybrid cloud model. Being able to take those apps, develop them, put them on-prem, run them off-prem is pretty key for OneSphere. >> I remember Antonio Neri, when you guys announced Apollo, and you had the astronaut there. Antonio was just a lowly GM and VP at the time, and now he's, of course, CEO. Who knows what's in the future? But Apollo, generally at the time, it was like, okay, this is a high-performance computing system. We've talked about those worlds, HPC and big data, coming together. Where does a system like Apollo fit in this world of big data workloads? >> Yeah, so we have a very wide product line for Apollo, and some of them are very tailored to specific workloads. If you take a look at the way that people are deploying these infrastructures now, it's multi-tenant with many different workloads. We allow for some compute-focused systems, like the Apollo 2000. We have very balanced systems, the Apollo 4200, that allow a very good mix of CPU and memory, and now customers are certainly moving to flash and storage-class memory for these types of workloads. And then the Apollo 6500 is one of the newer systems that we have: big memory footprint, NVIDIA GPUs allowing you to do very high calculation rates for AI and ML workloads. We take that and we aggregate that together. We've made some recent acquisitions, like Plexxi, for example. A big part of this is around simplification of the networking experience. You can probably see into the future: automation at the networking level, automation at the compute and storage level, and then having a very large and scalable data lake for customers' data repositories. Object, file, HDFS; some pretty interesting trends in that space. >> Yeah, I'm actually really super excited about the Plexxi acquisition. It used to be the bottleneck was the spinning disk; flash pushes the bottleneck largely to the network. Plexxi's gonna allow you guys to scale, and I think actually leapfrog some of the other hyperconverged players that are out there. So, super excited to see what you guys do with that acquisition. It sounds like your focus is on optimizing the design for I/O. I'm sure flash fits in there as well. >> And that's a huge accelerator, even when you take a look at our storage business, right? So, 3PAR, Nimble, All-Flash, certainly moving to NVMe and storage-class memory for acceleration of other types of big data databases. Even though we're talking about Hadoop today, certainly SAP HANA, scale-out databases, Oracle, SQL, all these things play a part in the customer's infrastructure. >> Okay, so you were talking before about, a little bit about GPUs. What is this HPE Elastic Platform for big data analytics? What's that all about? >> I mean, a lot of the sizing and scalability falls on the shoulders of our customers in this space, especially in some of these new areas.
What we've done is, it's a product and a concept, and it's called the Elastic Platform for Analytics. With all those different components that I rattled off, all great systems on their own, when it comes to very complex multi-tenant workloads what we do is try to take the mystery out of that for our customers, to be able to deploy that as a cookie-cutter module. We're even gonna get to a place pretty soon where we're able to offer that as a consumption-based service, so you don't have to choose between on-prem and off-prem for an elastic type of acquisition experience. We're gonna provide that as well. It's not only a set of products, it's reference architectures. We do a lot of sizing with our partners, the Hortonworks, Clouderas, MapRs, and a lot of the things that are out in the open source world. It's pretty good. >> We've been covering big data, as you know, for a long, long time. The early days of big data were like, "Oh, this is great, we're just gonna put white boxes out there and off-the-shelf storage!" Well, that changed as big data grew up; workloads became more enterprise, mainstream, they needed to be enterprise-ready. But my question to you is, okay, I hear you. You got products, you got services, you got perspectives, a philosophy. Obviously, you wanna sell some stuff. What has HPE done internally with regard to big data? How have you transformed your own business? >> For us, we wanna provide a really rich experience, not just products. To do that, you need to provide a set of services and automation, and what we've done is, with products and solutions like InfoSight, we've been able to deliver what we call AI for the Data Center; certainly, the tagline of predictive analytics is something that Nimble's brought to the table for a long time. To provide that level of services, InfoSight, predictive analytics, AI for the Data Center, we're running our own big data infrastructure. It started a number of years ago, even on our 3PAR platforms and other products, where we had scale-up databases. We moved and transitioned to batch-oriented Hadoop. Now we're fully embedded with real-time streaming analytics that come in every day, all day long, from our customers and telemetry. We're using AI and ML techniques to not only improve on what we've done, certainly automating the support experience and making it easy to manage the platforms, but now introducing things like learning, automation engines, and recommendation engines for various things, so that our customers can take the hands-on approach of managing the products, automate it, and put it into the products. So, for us, we've gone through a multi-phase, multi-year transition that's brought in things like Kafka and Spark and Elasticsearch. We're using all these techniques in our system to provide new services for our customers as well.
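The telemetry loop Patrick describes, streaming customer telemetry in all day and turning it into predictive support, can be illustrated with a toy baseline check: flag a sample that strays far from a device's recent history. A real pipeline of the kind he mentions (Kafka, Spark, Elasticsearch) would use far richer models over fleet-wide data; the window size and threshold here are made up for the sketch.

from collections import deque
from statistics import mean, stdev

WINDOW, THRESHOLD = 60, 3.0  # invented parameters

def make_detector():
    history = deque(maxlen=WINDOW)
    def check(sample_ms):
        alert = False
        if len(history) >= 10:  # need a baseline before judging
            mu, sigma = mean(history), stdev(history)
            alert = sigma > 0 and abs(sample_ms - mu) > THRESHOLD * sigma
        history.append(sample_ms)
        return alert
    return check

check = make_detector()
for t, latency in enumerate([2.1, 2.3, 2.0, 2.2] * 5 + [9.8]):
    if check(latency):
        print(f"t={t}: latency {latency} ms deviates from recent baseline")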
>> Okay, great. You're practitioners, you got some street cred. >> Absolutely. >> Can I come back on InfoSight for a minute? It came through an acquisition of Nimble. It seems to us that you're a little bit ahead, and maybe you'd say a lot ahead, of the competition with regard to that capability. How do you see it? Where do you see InfoSight being applied across the portfolio, and how much of a lead do you think you have on competitors? >> I'm paranoid, so I don't think we ever have a good enough lead, right? You always gotta stay grinding on that front. But we think we have a really good product. You know, it speaks for itself. A lot of the customers love it. We've applied it to 3PAR, for example, so we came out with VMVision for 3PAR, which is based on InfoSight. We've got some things in the works for other product lines that are imminent pretty soon. You can think about what we've done for Nimble and 3PAR; we can apply a similar type of logic to the Elastic Platform for Analytics, like running at that type of cluster scale to automate a number of items that are pretty pedantic for the customers to manage. There's a lot of work going on within HPE to scale that as a service that we provide with most of our products. >> Okay, so where can I get more information on your big data offerings and what you guys are doing in that space? >> Yeah, so, you can always go to hp.com/bigdata. We've got some really great information out there. We're in our run-up to our big end user event that we do every June in Las Vegas: it's HPE Discover. We have about 15,000 of our customers and trusted partners there, and we'll be doing a number of talks. I'm doing some work there with a British telecom. We'll give some great talks, and those'll be available online virtually, so you'll hear about not only what we're doing with our own InfoSight and big data services, but how other customers like BTE and 21st Century Fox and other folks are applying some of these techniques and making a big difference for their business as well. >> That's June 19th to the 21st. It's at the Sands Convention Center, in between the Palazzo and the Venetian, so it's a good conference. Definitely check that out live if you can, or if not, you can always watch online. Excellent, Patrick, thanks so much for coming on and sharing with us this big data evolution. We'll be watching. >> Yeah, absolutely. >> And thank you for watching, everybody. We'll see you next time. This is Dave Vellante for theCUBE. (fast techno music)

Published Date : Jun 12 2018


Said Syed & Paul Holland, HPE | KubeCon + CloudNativeCon EU 2018


 

>> Announcer: Live from Copenhagen, Denmark, it's theCUBE! Covering KubeCon + CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. >> Hello there and welcome back to theCUBE's exclusive coverage of KubeCon 2018, the Cloud Native Compute Foundation, CNCF. I'm John Furrier with theCUBE. My cohost Lauren Cooney is here with me this week. Our next two guests are from the HPE Developer program: Paul Holland, Director of the Open Source Program Office, and Said Syed, who is the Head of HP Developer Experience. CUBE alumni. Welcome back. Good to see you. >> Thanks for having us. >> Thanks for comin' on. >> Thank you. >> First of all, new logo. I love that, I want to get into it. HPE Developer program. We've had many conversations in the past about the relationship with Docker, the work you guys are doing inside the enterprises with cloud, multi-cloud and hybrid cloud. Why are you guys here? What's the story? What's the update from HPE? >> In December we launched this new program called the HP Community Developer Program, and that's really focused on reaching out to the developers that are out there, whether these are DevOps developers, Cloud Native application developers, or ITOps developers, who are looking to do integration with HPE infrastructure as well as our software-defined platforms. It's basically evangelizing all of the good work that HP's doing in the open source program and other areas. Do you want to add something, Paul? >> Yeah, I think part of it is the recognition that HPE is a software company. After all of the separations, the divestitures with HPI and Micro Focus, we're left with really still a lot of developer power. It's the idea that as we work with developers internally and externally, we need to formalize that developer program, both inside of open source and the general developer community, through our APIs and some of that coordination, to really make the developer program work. >> I mean, we're talking software-defined everything now; you guys have been part of that. To give you guys some props, we've interviewed you over the past four or five years, and you guys were talking micro services early on. >> Syed: That's right. >> Again, the enterprise has software-defined systems. >> You guys are a big part of that. So I got to ask you, the perfect storm is here. I mean Kubernetes, which is on the scene, is now, at least in my opinion, the de facto standard for interoperability around multi-cloud. This is the perfect storm for a company as big as HP with all the customers. So what is... I mean you guys must be sitting there going, perfect timing! What does it mean for you guys, Kubernetes? This is going to give you certainly a tailwind for deployments and customer value creation. What's it mean internally for HPE? >> Well, I think Kubernetes is at the heart, as you mentioned, of the open source ecosystem. It's about all of those Lego blocks now finally coming together with micro-services, and being able to put 'em together for an enterprise-class workload. And given our history and expertise there, I think you're right. It's a great opportunity to make sure that it works for the enterprise developer, for general developers, and for how everything comes together within a corporate world of development. >> Are you guys doubling down? >> Syed: Absolutely. >> What's the story internally? Has it got the charter from the top? >> That's right, yeah, we're definitely doubling down.
As you mentioned, we started early on with micro services, with our partnership with Docker. We have a great relationship with Mesosphere. And we're full on with Kubernetes. You know, we have a product that we're actually demoing here on the show floor, called HPE OneSphere. We launched the product in December of last year, and one of the things it actually does is enable Kubernetes cluster management on-prem and off-prem, for example in AWS: deployment, management, all of those things. We are full on. We also have open source projects in the Kubernetes landscape. It's called Project Dory, and it enables persistent storage. It's actually contributed by our Nimble business unit. We're very focused on enabling our developers. Things that enable them are things like, how can I automatically deploy applications, and so on, using a Kubernetes cluster or Kubernetes environment. Working with Paul and others, that's exactly what we're focused on. >> What are some of the use cases that you guys are seeing? As you mentioned some of those deployments. Is it really existing integration within HP solutions, like OneSphere? And OneSphere's obviously going to be a nice pane of glass to look at the platform of what the cloud offers. Is it Edge? Is it IoT? I mean, what are some of the use cases? >> I think it's all of the above. I think what we're seeing is legacy enterprises having all of these legacy applications that they need to migrate to this new world. At the same time they're struggling with, how do I then make hybrid? How do I then go to the Edge? And so across the board, I think that's the power of going back to your original question about HPE: we've seen all of that in the enterprise, and we can put that proprietary componentry into the products, like a OneSphere, on top of open source components. The reason we're here at KubeCon, as an example, is to really highlight to developers that if you really want to bring things together, we can help you do that, whether it be legacy applications, new applications, or greenfield applications, all within this, again, Lego-block type environment, within Kubernetes and these other open source platforms. >> I mean, you guys are also again on the composable infrastructure kind of story. It's kind of here, right? >> That's right. Again, we started down this journey three, four years ago with Docker and several others. We built this unified ecosystem, a composable ecosystem, and in the ecosystem I think there's now like 40-some partners, but that's growing. If you look at it from a layered-cake point of view: the infrastructure is here, that problem has been solved for a long time. You have infrastructure management, with OneView, with our composable APIs, working with components like Docker, and Mesosphere, and Redfish, and other open source products and services; on top of that, OneSphere as the multi-cloud/hybrid cloud management platform, again using the power of our APIs; and then integrating northbound with these hybrid multi-cloud management environments, as well as southbound with infrastructure management. Now you have the overall story. We're really exploiting the power of APIs, and enabling our developers internally, as well as developers outside of HPE, to come together and start to think about this new idea. Is there a solution for that? Absolutely, there's an app for it. And the way you build that app is to build that API integration.
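Syed mentioned Project Dory, the open source project that enables persistent storage for Kubernetes. As a hedged sketch of what consuming persistent storage looks like from code, here is the official Kubernetes Python client requesting a claim through a storage class. The class name "hpe-nimble" is invented for the example; the same claim works against whatever provisioner a cluster actually exposes.

from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # use the local kubeconfig credentials
core_v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hpe-nimble",  # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("claim created; pods can now mount it as a volume")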
>> You talked about an app store that you guys are working on. It has about 40 different partners in it. What about users of the solutions that are in there? Are you seeing an uptick in that? And what are you seeing in terms of that, and what are they using? >> Yeah, so I'll give you a quick example. We launched the developer community program in December. We launched the portal in December. And in the past two and a half months, we have seen a significant uptick in people just comin' in and hanging out on the portal. I think we are up to about 30,000 unique views of our page. Most people are spending three to four minutes, which is a lot in today's terms; someone is going there, reading our content. And then on top of that, actual consumption of our projects. Grommet, for example, is one of our open source projects that HP funds. It's a UX front end. I think it has more than 10,000 people that are following it and using it. Companies like Netflix, for example, use Grommet as a UX. Most of our SDCG software-defined applications are now using Grommet, so OneSphere, OneView. That's our de facto standard. But it's open source, anyone can use it. >> HP has traditionally been kind of a company that does a lot of things internally. Are you guys opening up for the first time, with allowing your developers to build things that will be put into open source? Can you talk a little bit about that? >> The power of HP is we've had a rich collaboration history for a long, long time, and I think you alluded to it before. From an enterprise perspective, how can we make that easy? Not only for our own internal developers, and maybe this is where this question comes from, from an internal perspective. Even ten, 15 years ago, with Martin Fink at the helm of the open source group, and then ultimately as the CTO. And things have shifted through the separations. How do you leverage that power of openness and collaboration that's in their DNA, and really empower them to share? How do we take concepts like inner sourcing, which is the open sourcing of activities inside a company, and really start to develop those habits and capabilities? Whether or not it's external is just a flip of the switch, but developers know how to contribute. They're also learning best-of-breed skills and developing their own career over time. >> Cooney: That is great to hear. >> And enabling that for other enterprises as well, which is really where a lot of our customers come to us and say, hey, you're an enterprise with lots and lots of developers. How do I get that same power with mine? And you kind of walk them through the journey. >> It's interesting, I'd love to get your thoughts on this. I think you guys are doing... First of all, I love the new logo. I think it's really important everyone knows you guys have a very active open source community, and have been on this. This is not a new thing or revelation within HP. But Intel has the same challenge; they're tryna move away from that Intel Inside. You guys are known to a lot of people as a hardware company. HP.com is now the printer and peripheral side. But it's a cloud game. You're still selling servers, but people are still buying servers. The cloud providers need servers. They need it. But the software is the key; the software-defined infrastructure is now that glue layer. Service meshes are hot. You're seeing Istio's got massive traction. Everything's pointing to this new level of services at scale. >> That's right. >> I want to get your thoughts on the HP story there.
Can you take a minute to explain what you guys are doing with that vision? Because Cloud Native isn't just about the cloud. There's a lot of on-prem activity that's moving to a cloud operating model. So it's not a full public cloud. What's your story? >> If you look at the overall strategy, we make hybrid IT simple, recognizing that it's all those different flavors. We have to enable the software capabilities, because the world is software-enabled. You have all those componentries working together seamlessly and automated. And then we have the services groups to make it happen, with Pointnext, and the acquisitions of Cloud Technology Partners, in the new areas. We have a wide portfolio of services that are now enabled, and experts to actually go help customers do it. And so we have the legacy capability; we also have the capability of the new generation of IT, and everywhere in between. And then you talk about the Edge. With our acquisition of Aruba, which seems like a long time ago, it's just a few years, they've been an integral part of taking that from the data center all the way to the edge and in between. I think we've got those multiple layers of hybrid IT. We have the software-enabled activities, which definitely includes open source, because you can't be software-enabled without software and open source. And then from a service perspective, the wealth, depth of bench, in terms of... >> And OneSphere's the key product that, for you guys, connects all this. Is that kind of where the momentum is? >> Holland: It's one of them. >> One of them, okay. >> And then if you look at some of the acquisitions we have made, CTP, for example, or Cloud Cruiser, for example, these are all helping us build our portfolio of rich services that enable customers to go from a pure on-prem, pure hardware-focused company to now a new-age Cloud Native, or hybrid cloud, sort of company, where we have the experience. Now, we have the experience with all of these different acquisitions, like CTP, to enable them to have a full hybrid cloud of micro plus macro services kind of migration capabilities. >> What are you guys offering developers? Not that I'm going to ask you for the pitch, 'cause everyone, the developers, are getting a lot of pitches, if you will. People say, I got to own the developer. They don't want to be owned. They want to be collaborative. But they're closer to the front lines than ever, these developers. And they're really looking at business problems. It's not just, here's the specs, go code it. They're on the front lines, right at the point of engagement for the business logic and the business models of a lot of these applications. What do you guys bring to the table for the developers? Is it marketplace? Is it distribution? Is it opportunity? What is the value proposition that you guys are talking to developers about, specifically? >> I think it's all three. We really start with internal, right? We are aligning our internal developers to really consume our own champagne; drink your own champagne. So what does that mean? Can you use OneSphere to develop OneSphere? Absolutely. Our mentality is, our OneSphere developers, and in fact a couple of our distinguished technologists are here, are more customer-focused. Do your development on your own products. Does that make sense? >> Yeah. >> So that's number one, right? If they go through the pains of developing on our own products, they will know exactly which areas to focus on.
And so that's one thing we are really enabling our developers to do: really think outside in, versus inside out. Gone are the days of, we will build it and they will come. No, they won't. You have to really give them what they are going to consume. So from a strategy perspective, we're really exposing our developers to the outside world. Hey, go out there. Talk to them. Learn what they're looking for. Right, so that's number one. Number two: with the developer community program, and the developer portal, and the open source program, now that we're collaborating across HPE, at the top end and the bottom end, we're now really able to think about how we use the power of our APIs, from layer 1 infrastructure all the way up to layer 7, or Layer 5 and above, and say, "Alright, how do we enable these guys to build value-add that really solves their problem?" Whether it's DevOps problems, CI/CD, whether it's deploying applications, or managing and monitoring applications, it's all through the power of APIs. If you can automate it, orchestrate it, and manage it, then we have really solved your problems. This is why we're not only going after and enabling the developers by giving them what they need; we're also partnering with key partners in our ecosystem that actually bring the best of breed. And that's what the customers are used to using today. >> And you guys had it more up the stack. Certainly the application level is a key point. What about the channel opportunity? 'Cause I'm seeing, and I've been talking about this on theCUBE lately, that developers are the new sales channel, because in the old days VARs, and ISVs, and channel partners would bring solutions, and you guys had a great channel, have a great channel, that brings solutions to customers. Now these customers are having programming and developing done by the partners. You guys have to create that. Are you guys looking at that as a significant opportunity, with this program? >> In today's world you have to think about things in a different way, with the advent of DevOps, with the developers no longer in their cubes, not touching production; they're releasing to production daily, or multiple times per day. And so we're lookin', or have looked at, how does the developer work, and get that all the way to production. At the same time, what are the skill sets to work in the open? You talk about the channel: the open source community is a great channel, not only for ideas and conversations, but also to meet people. Not only are we there. >> Furrier: Your buyers are there. >> Yeah, exactly. We're reaching the customers. But customers are part of our community. Vendors are part of our community. Partners are part of our community. And together we're building a community of developers that are doing work that ultimately goes to production multiple times per year. >> When you guys get this right, I think the gains will be huge. >> Well, I'll give you an example. One of the largest web companies in the world: we're partnering with them, and they're a huge customer of ours. Instead of selling to their front line, we went and started talking to their developers, and their developer leadership, to the point where we are working on doing hackathons. So our developers and their developers, in the same conference room, solving joint problems together. >> Cooney: So co-development. >> Co-developing, exactly. We call it a hackathon, but yeah, co-developing, absolutely. That's where we're focused.
Because today developers and the lines of business have more and more influence on key technology decisions. That's where the money is. >> Being genuine and authentic in these communities is certainly a great, successful formula. You guys see that. We'll be following your progress. Thanks for coming on theCUBE and sharing the update. And congratulations on the new program, and the new logo. I'd love to get a shirt when you get a chance. >> Absolutely, yeah. >> Congratulations, great to see you. Thanks for comin' on. We are here at KubeCon 2018 in Europe. This is theCUBE, I'm John Furrier. Thanks for watching. We'll be back with more live coverage after this short break.

Published Date : May 2 2018
