HPE Data Platform
From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation.

Hi, I'm Peter Burris, analyst at Wikibon. Welcome to another Wikibon/theCUBE digital community event, this one sponsored by HPE. Like all of our digital community events, this one will feature about 25 minutes of video followed by a CrowdChat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on important issues facing business today.

So what are we talking about today? Over the course of the last six months or so, we've had a lot of conversations with our customers about the core issues that multi-cloud is going to engender within business. One of them, clearly, is how we bring greater intelligence to how we move, manage, and administer data within the enterprise. Some of the more interesting conversations we've had turn out to have been with HPE, and that's what we're going to talk about today. We're going to be spending a few minutes with a number of HPE professionals, as well as Wikibon professionals and thought leaders, talking about the challenges that enterprises face as they consider intelligent data platforms.

So let's get started. The first conversation is with Sandeep Singh, who is a vice president at HPE. Sandeep, let's have that conversation about the challenges facing business today as it pertains to data.

So Sandeep, I started off by making the observation that we've got this mountain of data coming in at a lot of enterprises. At the same time, the notion that data is going to create new classes of business value seems to be pretty deeply ingrained and acculturated among a lot of decision-makers. So they want more value out of their data, but they're increasingly concerned about the volume of data that's going to hit them. In your conversations with customers, how are you hearing them talk about this fundamental challenge?

That's a great question. Across the board, data is at the heart of applications, of pretty much everything that organizations do, and in conversations with customers it really boils down to a couple of areas. One is: how is my data just effortlessly available all the time, and always fast, because fundamentally that's driving the speed of my business; and how can my various audiences, including developers, just consume it like the public cloud, in a self-service fashion? The second part of that conversation is really about this massive data storm, this mountain of data, that's coming: how do I drive a competitive advantage, how do I unlock the hidden insights in that data to uncover new revenue streams and new customer experiences? Those are the areas we hear about, and fundamentally, underlying them, the challenge for customers is: I have a lot of complexity, so how do I ensure that I have the necessary insights into the infrastructure management, so that I, or my IT staff, am not beholden to fighting the IT fires that can cause disruptions and delays to projects?

So fundamentally, we want to be able to take the time and attention spent on the infrastructure and on the administration of the devices that handle the data, and move that time and attention up into how we deliver the data services, and ideally up into the applications that are going to actually generate a new class of work within a digital business. Did I get that right?
Absolutely. It's about infrastructure that just runs seamlessly: it's always on, it's always fast. People don't have to worry about whether it's going to go down, whether the data is available, or whether it's going to slow down. People don't want sometimes fast, they want always fast, and that's governing the application performance that I can ultimately deliver. And as you said, if the data infrastructure just works seamlessly, then I can eventually get to the applications and to building the right pipelines for mining that data, driving the AI and machine learning, the analytics-driven insights, from there.

A great discussion about the importance of data in the enterprise and how it's changing the way we think about business. We're going to come back to Sandeep shortly, but first let's spend some time talking with David Floyer, the Wikibon analyst, about the new mindset that is required to take advantage of some of these technologies and solve some of these problems. Specifically, we need to think increasingly about data services. Let's hear what David has to say. Explain what that new mindset is.

Yes, I completely agree that a new mindset is required, and it starts with wanting to deal with data wherever it's going to be. We are in a hybrid cloud world: your own clouds, other public clouds, partner clouds. All of these need to be integrated, and data is at the core of it. The requirement, then, rather than thinking about each individual piece, is to think about services which are going to be applied to that data, and which can be applied not only to the data in one place but across all of that data. There isn't such a thing as just one set of services; there are going to be multiple sets of these services available, but hopefully we will see some degree of convergence, so that there will be the same lexicon and the same concepts. There will be the same levels of things needed within each of these architectures, but with different emphasis in different areas.

So we need to look at the way we administer data as a set of services that create outcomes for the business, as opposed to something that is then translated into individual devices. Let's jump into this notion of what those services look like; it seems as though we can list off a couple of them.

Sure. So you must have data reduction techniques, deduplication and compression types of techniques, and you want to apply them across as big an amount of data as you can: the more data you apply them to, the higher the levels of compression and deduplication you can get. So that's clearly one set of services. You must back up and restore data in another place, and be able to restore it quickly and easily; that, again, is a service. How quickly, how integrated that recovery is: again, that's a variable, a differentiation in the service. Exactly. You're going to need data protection in general, end-to-end protection of one sort or another. For example, you need end-to-end encryption; it's no longer good enough to say this bit has been encrypted and then that bit has been encrypted. It has to be end-to-end, from one location to another location, seamlessly provided.

Let me press on that, because I think it's a really important point, the notion that the weakest link determines the strength of the chain. What you just described says that if you have encryption here and you don't have encryption there, then, because of the nature of digital, you start bringing that data together, and guess what: the weakest link determines the protection of the overall data. Absolutely, yes.
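To make the data-services idea concrete, here is a minimal sketch of the reduction services David listed first, chunk-level deduplication plus compression. It is an illustrative toy using only the Python standard library, not any vendor's implementation:

```python
import hashlib
import zlib

class DedupStore:
    """Toy chunk store: keeps one compressed copy of each unique chunk."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}        # SHA-256 fingerprint -> compressed chunk
        self.raw_bytes = 0      # bytes written by callers
        self.stored_bytes = 0   # bytes actually kept after dedup + compression

    def write(self, data):
        """Store data; return the list of chunk fingerprints (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.raw_bytes += len(chunk)
            if fp not in self.chunks:          # only new chunks consume space
                self.chunks[fp] = zlib.compress(chunk)
                self.stored_bytes += len(self.chunks[fp])
            recipe.append(fp)
        return recipe

    def read(self, recipe):
        """Reassemble the original bytes from a recipe of fingerprints."""
        return b"".join(zlib.decompress(self.chunks[fp]) for fp in recipe)

store = DedupStore()
block = b"the same log line repeats over and over\n" * 1000
r1 = store.write(block)
r2 = store.write(block)        # duplicate write: no new chunks are stored
assert store.read(r1) == store.read(r2) == block
print(f"reduction ratio: {store.raw_bytes / store.stored_bytes:.1f}x")
```

Because only previously unseen chunks consume space, the reduction ratio improves as the store sees more data, which is exactly David's point about applying these services across as big a pool of data as possible.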
And then you need services like snapshots, and other services which provide much better usage of that data. One of the great things that flash has brought about is that you can take a copy of data in real time and use it for a totally different purpose, and have it change in a different way; there are some really significant improvements you can get from services like snapshots. And then you need some other services which are becoming even more important, in my opinion. The advent of bad actors in the world has really brought about the requirement for things like air gaps: having your data, with the metadata, all in one place and completely separated from everything else. There are such things as logical air gaps; as long as they're real, in the sense that the two paths can't interfere with each other, those are going to be services which become very, very important. That's an example of a general class of security data services that are required.
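David's snapshot service can be sketched the same way. In this toy copy-on-write model (an illustration of the concept, not any array's actual mechanics), a snapshot just freezes the volume's block map, so taking one is effectively instant and a clone only diverges from production as it is written:

```python
class Volume:
    """Toy copy-on-write volume: snapshots share blocks until someone writes."""

    def __init__(self, blocks=None):
        # block number -> bytes; only the map is copied here, never the data
        self.blocks = dict(blocks or {})

    def write(self, n, data):
        self.blocks[n] = data

    def snapshot(self):
        """Point-in-time copy: freeze the block map, copying no block data."""
        return dict(self.blocks)

    @classmethod
    def clone(cls, snap):
        """A writable volume backed by a snapshot; diverges only on write."""
        return cls(snap)

prod = Volume()
prod.write(0, b"customer table, v1")
snap = prod.snapshot()           # cheap regardless of how much data is stored
test = Volume.clone(snap)        # hand a live copy to analytics or dev/test
test.write(0, b"scrubbed copy for testing")
assert prod.blocks[0] == b"customer table, v1"   # production is untouched
```

This sharing is what lets a second team work on a real-time copy of live data for a totally different purpose, as David notes, without touching the original.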
So ultimately, what we're describing is a new mindset, one that says a storage administrator has to think about the services that the applications and the business require, and then seek out technologies that can provide those services at the right price point, with the right power consumption, space, and environmental footprint, and with the type of maintenance and support required, based on the physical location, the degree to which it's under their control, and so on. Is that how we should think about this?

I think so, absolutely. And again, there are going to be multiple of these in the marketplace; one size is not going to fit all. If you want super-fast response time at an edge, where a response that arrives too late is of no use whatsoever, you're going to have a different architecture, a different way of doing it, than if you need to be a hundred percent certain that every bit is captured, as in a financial sort of environment. But from a service standpoint, you want to be able to look at each specific solution in a common way: common policies, common capabilities. Correct.

Great observations by David Floyer. It's very clear that for enterprises to get more control over their data, their data assets, and how they create value out of data, they have to take a services mentality. But the challenge we all face is that just taking a services mentality is not going to be enough; we have to think about how we're going to organize those services into a platform that is pertinent and relevant to how business operates in a digital sense. So let's go back to Sandeep Singh and talk to him a little bit about this HPE notion of the intelligent data platform. HPE has been one of the leaders in the complex systems arena for a long time, and that includes storage. Where are you taking some of these technologies?

So our strategy is to deliver an intelligent data platform, and that intelligent data platform begins with workload-optimized composable systems that can span mission-critical, general-purpose, secondary, and big data and AI workloads. We also deliver cloud data services that enable you to embrace hybrid cloud. All of these systems, all the way to the cloud data services, are plumbed with data mobility, so, for example, use cases of modernizing protection and going all the way to protecting cost-effectively in the public cloud are enabled. But really, all of these systems are then imbued with a level of intelligence, with a global intelligence engine, that begins with predicting and proactively resolving issues before they occur. It goes way beyond that, though, in delivering prescriptive insights that are built on top of global learning across hundreds of thousands of systems, with over a billion data points coming in on a daily basis, to put information at the fingertips of even the virtual machine admins: this virtual machine is sapping the performance of this node, and if you were to move it to this other node, the performance, the SLA, for the whole virtual machine farm will be even better.

We build on top of that to deliver pre-built automation, hooked in with a REST-API-first strategy, so that developers can consume it in a containerized application that's orchestrated with Kubernetes, or leverage it as infrastructure as code, whether with Ansible, Puppet, or Chef. We accelerate all of the application workloads and bring app-aware data protection, so it's available for the traditional business applications, whether they're built on SAP or Oracle or SQL, for the virtual machine farms, and for the new-stack containerized applications. Customers can then build their AI and big data pipelines on top of the infrastructure with a plethora of tools, whether they're using Kafka, Elastic, MapR, or H2O; that complete flexibility exists. And within HPE we're able to turn around and deliver all of this with an as-a-service experience, with HPE GreenLake.
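To illustrate the REST-API-first, infrastructure-as-code consumption pattern Sandeep describes, here is a hedged sketch. The host, path, payload fields, and policy name are all hypothetical, invented for illustration, and this is not HPE's actual API; the point is only the pattern of provisioning storage through an API call that a tool can make as easily as a human:

```python
import json
from urllib import request

# Hypothetical endpoint and token, purely to illustrate the pattern.
API_URL = "https://array.example.com/api/v1/volumes"
API_TOKEN = "replace-with-a-real-token"

def create_volume(name, size_gib, protection_policy):
    """POST a declarative volume request to the (hypothetical) array API."""
    body = json.dumps({
        "name": name,
        "size_gib": size_gib,
        "protection_policy": protection_policy,  # e.g. a named snapshot schedule
    }).encode()
    req = request.Request(
        API_URL,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
    )
    with request.urlopen(req) as resp:           # raises on HTTP errors
        return json.load(resp)

# The same call is what an Ansible module, a Kubernetes storage driver, or a
# CI pipeline would make under the covers: the API, not a GUI, is the interface.
if __name__ == "__main__":
    print(create_volume("ci-scratch-01", 512, "hourly-snapshots"))
```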
So that's where I want to take you next: how invasive is this going to be for a large shop? It is completely seamless in that sense. With GreenLake we're able to deliver a fully managed service experience, with a cloud-like pay-as-you-go consumption model, and, combining it with HPE Financial Services, we're also able to transform the organization's journey and make it a fully self-funding journey as well.

So today the typical shop has a bunch of administrators administering devices. That's starting to change: they've introduced automation, but automation typically associated with those devices. If we think three to five years out, folks are going to be thinking more in terms of data services and how those services get consumed, and that's going to be what the storage part of IT thinks about. They can almost become data administrators, if I've got that right.

Yes. Intelligence is fundamentally changing everything, not only on the consumer side but on the business side. A lot of what we've been talking about is that intelligence is the game changer; we actually see the dawn of the intelligence era. Through this AI-driven experience, what it means for customers is, first, a support experience that they just absolutely love. Secondly, it means that the infrastructure is always on, always fast, and always optimized. And thirdly, in terms of the data services that are available and the data insights being unlocked, it's all about enabling your innovators, the data scientists and the data analysts, to shrink the time to deriving insights from months literally down to minutes. Today there's a chasm between the concept of leveraging AI technology and making it real: thinking about where AI can actually fit, and then how to implement an end-to-end solution and technology stack so that I just have a pipeline available to me. That chasm is literally a matter of months. What we're able to deliver, for example with HPE BlueData, is a catalog, self-service experience where you can select and seamlessly build a pipeline literally in a matter of minutes, all completely hosted, essentially making AI and machine learning available for the mainstream.

So the intelligent data platform makes it possible to see these new classes of applications become routine, without forcing the underlying storage administrators themselves to become data scientists. Absolutely.

All right. The intelligent data platform is a great concept, but it has to be made real, and it's being made real today by HPE. Calvin Zito is a thought leader at HPE, and he's done a series of chalk talks on improving storage and improving data management. One of the more interesting ones was specifically on the intelligent data platform. Let's watch Calvin Zito's chalk talk.

Hey guys, it's time for another Around the Storage Block chalk talk. In this chalk talk, we're going to look at the intelligent data platform. Let me set up the discussion. At HPE, we see the dawn of the intelligence era. The flash era brought speed with flash; flash is now table stakes. The cloud era brought new levels of agility, and everyone now expects an as-a-service experience. Going forward, the intelligence era, with an AI-driven experience for infrastructure operations and AI-enabled unlocking of insights, is poised to catapult businesses forward. The intelligence era will see the rise of the intelligent enterprise. The intelligent enterprise will be always on, always fast, and always agile in responding to different challenges, but most of all, it will be built for innovation: innovation that can unleash new services, revenue streams, and business models. Every enterprise will need an intelligent data strategy, where your data is always on and always fast, automated and on-demand, hybrid by design, and where global intelligence is applied for visibility and lifecycle management.

Our strategy is to deliver an intelligent data platform that turns your data challenges into business opportunities. It begins with workload-optimized composable systems for multiple workloads, and we deliver cloud data services for a hybrid cloud environment so that you can seamlessly move data throughout its lifecycle; I'll have more on this in a moment. The global intelligence engine infuses the entire infrastructure with intelligence. It starts with predicting and proactively resolving issues before they occur. It creates a unique workload fingerprint, and these workload fingerprints, combined with global learning, enable us to drive recommendations that keep your application workloads and supporting infrastructure always optimized and delivering predictable speed.
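Calvin's workload-fingerprint idea can be illustrated with a toy sketch: reduce an I/O trace to a small feature vector, then match it against profiles accumulated across a fleet. The features, reference values, and matching method here are invented for illustration and are not InfoSight's actual technique:

```python
import math

def fingerprint(io_sizes_kb, read_fraction, latencies_ms):
    """Reduce an I/O trace to a tiny feature vector: a toy workload fingerprint.
    A real system would use far more features and better normalization."""
    avg_io = sum(io_sizes_kb) / len(io_sizes_kb)
    p95 = sorted(latencies_ms)[int(0.95 * (len(latencies_ms) - 1))]
    return [avg_io / 256.0, read_fraction, p95 / 50.0]  # crude normalization

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Reference fingerprints a fleet-wide learning system might have accumulated;
# the values are invented for illustration.
KNOWN = {
    "oltp-database":  fingerprint([8] * 20, 0.70, [1.5] * 20),
    "backup-stream":  fingerprint([256] * 20, 0.05, [20.0] * 20),
    "vdi-boot-storm": fingerprint([16] * 20, 0.90, [8.0] * 20),
}

observed = fingerprint([8, 8, 16, 8, 8], 0.72, [1.1, 1.4, 2.2, 1.9, 1.6])
best = max(KNOWN, key=lambda name: cosine(observed, KNOWN[name]))
print(f"this workload most resembles: {best}")   # -> oltp-database
```

Once a workload is recognized this way, fleet-wide experience with that class of workload is what powers the prescriptive recommendations Calvin describes.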
We have a REST-API-first strategy and offer pre-built automation connectors. We bring app-aware protection for both traditional and modern new-stack application workloads, and you can use the intelligent data platform to build and deliver flexible big data and AI pipelines for driving real-time analytics.

Let's take a quick look at the portfolio of workload-optimized composable systems. These are systems for mission-critical and general-purpose workloads, as well as secondary data, plus solutions for the emerging big data and AI applications. Because our portfolio is built for the cloud, we offer comprehensive cloud data services for both production workloads and backup and archive in the cloud. HPE InfoSight provides the global intelligence across the portfolio, and we give you the flexibility of consuming these solutions as a service with HPE GreenLake.

I want to close with one more thing. The HPE intelligent data platform has three main attributes. First, it's AI-driven: it removes the burden of managing infrastructure, so that IT can focus on innovating, not administrating. Second, it's built for cloud, and it enables easy data and workload mobility across hybrid cloud environments. Finally, the intelligent data platform delivers an as-a-service experience, so you can be your own cloud provider. To learn more, go to hpe.com/intelligentdata. I always love to hear from you on Twitter, where you can find me as Calvin Zito, and you can find my blog at hpe.com/blog. Until next time, thanks for joining me on this Around the Storage Block chalk talk.

I think Calvin makes a compelling case that the opportunity to use these technologies is available today, not something we're just going to wait for in the future. And that's good, because one of the most important things business has to think about is how it is going to utilize some of these new AI and related technologies to alter the way it engages customers, runs the business, and handles operations, and ultimately to improve overall efficiency and effectiveness in the marketplace. It's very clear that this intelligent data platform is required to do many of the advanced AI things that business wants to do, but it also requires AI in the platform itself. So let's go back to Sandeep Singh and talk about how HPE foresees AI being embedded into the intelligent data platform, so that it can make possible greater utilization of AI in the rest of the application portfolio.

So we've got this significant problem: we now have to figure out how to architect, because we want predictability, certainty, and cost clarity in how we're going to do this. Part of the push is new use cases for AI, so we're trying to push data up so that we can build these new use cases, but it seems we also have to take some of those very same technologies and drive them down into the infrastructure, so that we get greater intelligence, greater self-monitoring, and greater self-management and self-administration within the infrastructure itself. Have I got that right?

Yes, absolutely. What becomes important for customers, when you think about data and ultimately the storage that underlies the data, is that you can build and deploy fast and reliable storage, but that's only solving half the problem. Greater than 50% of the issues actually end up arising from the higher layers. For example, you could change the firmware on the host bus adapter inside a server, and that can trickle down and cause a data unavailability or a performance slowdown issue; you need to be able to predict that all the way at that higher level, and then prevent it from occurring. Or your virtual machines might be in a state of memory overcommitment at the server level, or CPU overcommitment; how do you discover those issues and prevent them from happening?

The other area that's becoming important is this whole notion of cloud and hybrid cloud, where complexity tends to multiply exponentially, and that's where the smarts you're building come in. In building that hybrid cloud infrastructure, the fundamental challenge is: I've got a new workload and I want to place it. Even on premises, because you've had lots of silos, how do you figure out where to place workload A, and how it will react with workloads B and C on a given system? Now multiply that across hundreds of systems and multiple clouds, and you can see the challenge multiplying exponentially.
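The placement question Sandeep poses is, at its core, a constrained packing problem. Here is a toy greedy placer, purely illustrative; a real system would learn each workload's demand and interference from telemetry rather than take them as inputs, and HPE's actual algorithms are not shown here:

```python
from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    cpu_free: float                 # fraction of CPU headroom remaining
    iops_free: int                  # IOPS headroom remaining
    workloads: list = field(default_factory=list)

def place(workload, cpu_need, iops_need, fleet, antiaffinity=()):
    """Greedy placement: pick the fitting system with the most headroom,
    skipping systems that already host a conflicting workload."""
    candidates = [s for s in fleet
                  if s.cpu_free >= cpu_need
                  and s.iops_free >= iops_need
                  and not set(s.workloads) & set(antiaffinity)]
    if not candidates:
        return None                 # nothing fits: surface it, don't guess
    best = max(candidates, key=lambda s: (s.cpu_free, s.iops_free))
    best.cpu_free -= cpu_need
    best.iops_free -= iops_need
    best.workloads.append(workload)
    return best.name

fleet = [System("array-1", cpu_free=0.5, iops_free=40_000, workloads=["B"]),
         System("array-2", cpu_free=0.7, iops_free=90_000, workloads=["C"])]

# Workload A conflicts with C (say both are latency-sensitive), so despite
# array-2 having more headroom, A lands on array-1.
print(place("A", 0.2, 10_000, fleet, antiaffinity=("C",)))   # -> array-1
```

Even this toy shows why the problem compounds: every placement changes the headroom that the next decision sees, and the answer can differ tomorrow, which is exactly the point Peter raises next.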
Well, I would say that for where to put workload A, the right answer today may be here, but the right answer tomorrow may be somewhere else, and you want to make sure that the services required to run workload A are resident and available, without a lot of administrative work necessary to ensure that there's commonality. That's kind of what we mean by this hybrid multi-cloud world, isn't it?

Absolutely. When you start to think about it, you fundamentally end up needing the data mobility aspect, because without the data you can't really move your workloads, and you need consistency of data services, so that if your app is architected for reliability and a given set of data services, those just go along with the application. And then, building on top of that, you need portability for the actual application workload, consistently managed with a hybrid management interface.

So we want an intelligent data platform that's capable of assuring performance, assuring availability, and assuring security, and that goes beyond that to deliver a simplified, automated experience, so that everything is just available through a self-service interface. And then it brings along a level of intelligence that's just built in, globally, so that instead of trying to manually predict, and landing in a reactive world after IT fires have occurred, there is a sea of sensors and the infrastructure is automatically predicting and preventing issues before they ever occur. And going beyond that, you can actually fingerprint the individual application workloads to deliver prescriptive insights that keep the infrastructure always optimized.

So: discerning the patterns of data utilization, so that, number one, the administrative cost of making sure the data is available where it needs to be goes down; number two, data as an asset is made available to developers as they create new applications, new things that create new work; but also working very closely with the administrators so that they are not bound by an explosion in the number of tasks they have to perform to keep this all working. Across the board, yes.

I want to thank Sandeep Singh and Calvin Zito, both of HPE, as well as Wikibon's David Floyer, for sharing their ideas on this crucially important topic of how we're going to take more of a platform approach to do a better job of managing crucial data assets in today's and tomorrow's digital businesses. I'm Peter Burris, and this has been another Wikibon/theCUBE digital community event, sponsored by HPE. Now stay tuned for our CrowdChat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on important issues facing business today. Thank you very much for watching, and now let's CrowdChat.