
11 25 19 HPE Launch Floyer 5 (Do not make public)


 

[upbeat funk music]

>> [Female Announcer] From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation!

>> Welcome to the Cube Studios for another Cube Conversation, where we go in depth with thought leaders driving business outcomes with technology. I'm your host, Peter Burris. When we considered solving storage-related challenges, we found ourselves worrying about things like: how far does the device sit from the server? What kind of wiring were we going to utilize? What kind of protocol was gonna run over that wiring? These are very physical concerns that were largely driven by the nature of the devices we were using. In a digital business that's using data as an asset, we can't think about storage the same way. We can't approach storage challenges the same way. We need a new mindset to help us better understand how to approach these storage issues, so that we're better serving the business outcomes and not just the device characteristics. Now, to have that conversation about this new data services approach, we've got David Floyer, CTO and co-founder of Wikibon and my colleague, here on the Cube with us today. David, welcome to the Cube.

>> Many thanks, yes.

>> So David, I said upfront that we need a new mindset. Now I know you agree with us, but explain what that new mindset is.

>> Yes, I completely agree that that new mindset is required. And it starts with: you want to be able to deal with data wherever it's gonna be. We are in a hybrid world, a hybrid cloud world. Your own clouds, other public clouds, partner clouds: all of these need to be integrated, and data is at the core of it. So the requirement then is, rather than think about each individual piece, to think about services which are going to be applied to that data, and which can be applied not only to the data in one place, but across all of that data. And there isn't such a thing as just one set of services. There're going to be multiple sets of these services available.

>> But hopefully we will see some degree of convergence, so-

>> Absolutely, yeah, there'll be the same...

>> Lexicon and concepts, et cetera.

>> Yeah, there'll be the same levels of things that are needed within each of these architectures, but there'll be different emphasis on different areas. If you've got a very, very high performance requirement, and speed of recovery is absolutely paramount with complex databases, then you're going to be thinking about, you know, Oracle Cloud at Customer as a way of being able to do that sort of thing. If you're wanting to manage containers in an area where it's stateless, then you've got a different set of priorities and requirements that you're gonna put together.

>> But you wanna come at it in terms of services.

>> Yes.

>> Let me give you an example. So I was talking to a CIO not too long ago, a client, a guy I've worked with a lot, and I was talking about the development world, and made the observation that you could build really rotten applications in COBOL, but you could also build really rotten applications with containers. And he totally agreed, and the observation he made to me was, you know, what microservices really is, is an approach to solving a problem that then suggests new technologies, like containers, as opposed to being the product that you use to create the new applications. And so in many respects I think it's analogous to the notion of data services. We need to look at the way we administer data as a set of services that create outcomes for the business, and that are then translated into individual devices, rather than starting with the devices. So let's jump into this notion of what those services look like. It seems as though we can list off a couple of them.

>> Sure, yeah. So we must have data reduction techniques: deduplication, compression, those types of techniques, and you want to apply them across as big an amount of data as you can. The more data you apply those to, the higher the levels of compression and deduplication you can get. So clearly you've got that sort of set of services across there. You must back up and restore data in another place, and be able to restore it quickly and easily. That again is a service. How quickly, how integrated that recovery is: again, that's gonna be a variable.

>> That's a differentiation in the service.

>> Exactly. You're gonna need data protection in general, end to end protection of one sort or another. For example, you need end to end encryption across there. It's no longer good enough to say, this bit's been encrypted and then this bit's been encrypted. It's gotta be end to end, from one location to another location, seamlessly provided: that sort of data protection.

>> Well, let me press on that, 'cause I think it's a really important point. It's, you know, the notion that the weakest link determines the strength of the chain, right?

>> Yeah, yep.

>> What you just described says: if you have encryption here and you don't have encryption there, but because of the nature of digital you can start bringing that data together, guess what? The weakest link determines the protection of the overall data.

>> The protection of the overall data, absolutely, yes. And then you need services like snapshots, and other services which provide much better usage of that data. One of the great things Flash has brought about is that you can take a copy of data in real time, use it for a totally different purpose, and have that copy being changed in a different way. So there are some really significant improvements you can have with services like snapshots. And then you need some other services which are becoming even more important, in my opinion. The advent of bad actors in the world has really brought about the requirement for things like air gaps: having your data, with the metadata, all in one place, and completely separated from everything else. There are such things as logical air gaps; I think as long as they're real, in the sense that the two paths can't interfere with each other, those are gonna be services which become very, very important indeed.

>> And that's an example of a general class of security data services that's gonna be required.

>> Correct, yes.

>> So ultimately what we're describing is a new mindset that says a storage administrator has to think about the services that the applications and the business require, and then seek out technologies that can provide those services: at the right price point, with the right power consumption, space, and environmentals, and with the type of maintenance and support that are required, based on the physical location, the degree to which it's under their control, et cetera. Is that kinda how we're thinking about this?

>> I think absolutely. And again, there're gonna be multiple of these around in the marketplace; one size is not gonna fit all. If you're wanting super fast response time at an edge, and if you don't get that response in time it's gonna be no use whatsoever, you're gonna have a different architecture, a different way of doing it, than if you need to be a hundred percent certain that every bit is captured in a financial sort of environment.

>> But from the service standpoint, you wanna be able to look at that specific solution in a common way, across policies and capabilities.

>> Correct, correct.

>> David Floyer! Once again, thanks for being on the Cube and talking about this important issue, and thank you for joining us again. I'm Peter Burris. See you next time.

[upbeat funk music]
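The data-reduction services Floyer names first, deduplication and compression applied across as large a pool of data as possible, can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: it hashes fixed-size blocks, keeps one compressed copy per unique block, and reports the combined reduction ratio.

```python
import hashlib
import zlib

def store(blocks, pool):
    """Deduplicate, then compress: one compressed copy per unique block."""
    refs = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in pool:                   # dedup: unique blocks stored once
            pool[digest] = zlib.compress(block)  # compression on top of dedup
        refs.append(digest)
    return refs

def reduction_ratio(blocks, pool):
    """Logical bytes written vs. physical bytes actually stored."""
    logical = sum(len(b) for b in blocks)
    physical = sum(len(c) for c in pool.values())
    return logical / physical

pool = {}
# ten logical copies of the same highly compressible 4 KiB block
blocks = [b"A" * 4096] * 10
refs = store(blocks, pool)
print(len(pool))                           # 1 unique block stored
print(reduction_ratio(blocks, pool) > 10)  # True: dedup and compression multiply
```

This also shows why Floyer says the bigger the pool, the better the reduction: the wider the set of data sharing one `pool`, the more duplicate blocks it can fold together.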

Published Date : May 1 2019


11 25 19 HPE Launch Floyer 4 (Do not make public)


 

>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation.

>> Welcome to the Cube Studios for another Cube Conversation, where we go in-depth with thought leaders driving business outcomes with technology. I'm your host, Peter Burris. Digital business, and the need to drive the value of data within organizations, is creating an explosion of technology in multiple domains: systems, networking, and storage. We've seen advances in flash, we've seen advances in HDDs, we've seen advances in all kinds of different elements. But it's essential that users and enterprises still think not just in terms of these individual technologies piecemeal, but as solutions that are applied to use cases. Now, you always have to be aware of what the underlying technology components are, but it's still important to think about how systems integration is going to bring them together and apply them to serve business outcomes. Now, to have that conversation, we've got David Floyer, who's the CTO and co-founder of Wikibon, and my colleague. David, welcome to the Cube.

>> Thank you very much, Peter.

>> All right, so I've just laid out this proposition that systems integration as a discipline is not gonna go away when we think about how to build these capabilities that businesses need in digital business. So let's talk about that. What are some of the key features of systems integration, especially in the storage world, that will continue to help differentiate between winners and losers?

>> Absolutely. So you need to be able to use software to combine all these different layers, and it has to be an architected software solution that will work wherever you've got equipment and wherever you've got data. So it needs to work in the cloud, it needs to work in a private cloud, it needs to work at the edge. All of these need to be architected in a way which is available to the users, to put where the data is going to be created, as opposed to bringing it all in to one super large collection of data. And so we've got different types of technology. At the very fastest we've got DRAM; we've got non-volatile DRAM, which is coming very fast indeed; we've got flash, and there are many different sorts of flash; there's Optane from Intel, that may be trying to get in between there as well; and then there are different HDDs as well. So we've got a long hierarchy. The important thing is that we protect the application and the operations from all of that complexity, by having an overall hierarchy and utilizing software from an integration standpoint.

>> But it suggests that when an enterprise thinks about a solution for how they store their data, they need to think in terms of, as you said: first off, physically, where is it going to be? Secondly, what kinds of services at the software level am I going to utilize, to ensure that I can have a common administrative experience and a differentiated usage experience, based on the physical characteristics of where it's being used? And then, obviously and very importantly, from an administration standpoint, I need to ensure that I'm not having to learn new and unique administration practices everywhere, because that would just blow everything up.

>> Absolutely. But there's going to be, in my opinion, a large number of these solutions out there. I mean, one data architecture is not going to be sufficient for all applications. There are gonna be many different architectures out there.

>> I think it's probably useful just to start with one as an example in this area. Let's take one as an example, and then we can see what the major characteristics are.

>> So let's take something that would fit in most places, a mid-range type solution. Let's take Nimble, Nimble Storage, which has a very specific architecture. It started off by being a virtualization of all those different layers. So the application sees that everything is in flash and in cache, or whatever it is, but where it actually is is totally different; it can be anywhere within that hierarchy.

>> So the application sees effectively a pool of resources that it can call.

>> Yes, that's all it sees. And it doesn't know, and it doesn't need to know, that the data is on a hard disk, or in memory, or in a cache inside the controller, or wherever it is.

>> So, using Nimble as an example, Nimble is successfully masking the complexities and specificities of that storage hierarchy from the application.

>> Right. And that's an advantage because it's simpler, but it also needs to cover more things. You need to be able to do everything within that virtualized environment. So you need, for example, to be able to take snapshots, and all the metadata about the snapshots needs to be put in a separate place. So one of the things that comes from this sort of architecture is that the metadata is separated out, completely different from the actual data itself.

>> But still proximate to the data, because data locality still matters.

>> Absolutely, it has to be there, but it's in a different part of the hierarchy; all the metadata is much further up the hierarchy. So we've got the metadata; we've got the fastest layer, which is the DRAM itself, which for writes has a protection mechanism, specialized hardware for that part of the DRAM, so that allows you to do writes very, very quickly indeed. And then you come down to the next layer, which is flash. And indeed, taking the Nimble example, you have two sorts of flash: you can have the high-speed flash at the top, and if you want to, you can have lower performance flash, you know, using the 3D quad-level flash or whatever it is, if that's what you need. And then going lower down, you have HDDs. And the architecture combines the benefits and characteristics of flash with the benefits of HDD, which is much lower cost, and the characteristics of HDD, which are slower but very suited to writing out large volumes or reading in large volumes. So that's read out to the disk, but where it's all held is recorded in the metadata.

>> So it's really looking at the workloads that are gonna hit the data, and then, without making the application aware of it, utilizing the underlying storage hierarchy to best support those workloads, again with a virtualized interface that keeps it really simple from an administration, development, and runtime perspective.

>> Exactly.

>> All right, David Floyer, thanks very much for being on the Cube and talking about some of these new solution-oriented requirements for thinking about storage over the next few years. Once again, I'm Peter Burris. See you next time.

[Music]
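The hierarchy Floyer walks through, a DRAM write buffer, faster and cheaper flash tiers, and HDD for large sequential transfers, amounts to a placement decision made per workload rather than per device. A minimal sketch of such a policy follows; the tier names, latencies, and costs are invented for illustration and are not taken from Nimble or any other product.

```python
# Toy tier-placement policy: pick the cheapest tier that meets the
# workload's latency budget. All numbers here are illustrative.
TIERS = [
    # (name, latency_us, cost_per_gb), fastest and most expensive first
    ("dram_buffer", 1, 50.0),
    ("fast_flash", 100, 0.50),
    ("cheap_flash", 300, 0.15),
    ("hdd", 10_000, 0.03),
]

def place(latency_budget_us, access_pattern):
    """Return the tier name for a workload.

    Large sequential work goes straight to HDD, which handles it well;
    everything else gets the cheapest tier inside the latency budget.
    """
    if access_pattern == "large_sequential":
        return "hdd"
    candidates = [t for t in TIERS if t[1] <= latency_budget_us]
    return min(candidates, key=lambda t: t[2])[0] if candidates else TIERS[0][0]

print(place(200, "random"))               # fast_flash: cheapest within 200 us
print(place(10**6, "large_sequential"))   # hdd: sequential streams suit disk
```

The point of the sketch is the inversion Floyer describes: the application states requirements (latency, access pattern), and the virtualization layer, not the administrator, maps them onto physical devices.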

Published Date : May 1 2019


11 25 19 HPE Launch Floyer 2


 

(upbeat jazz music)

>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation.

>> Hi, welcome to the Cube Studio for another Cube Conversation, where we go in-depth with thought leaders driving business outcomes with technology. I'm your host, Peter Burris. As enterprises look to take advantage of new classes of applications, like AI and others, that make possible this notion of a data-first or data-driven enterprise in a digital business world, they absolutely have to consider what they need to do with their storage resources to modernize them: to make possible new types of performance today, but also to sustain and keep open options for how they use data in the future. To have that conversation we're here with David Floyer, CTO and co-founder of Wikibon. David, welcome to the conversation.

>> Thank you.

>> So David, you've been looking at this notion of modern storage architectures for 10 years now.

>> Yeah.

>> And you've been relatively prescient in understanding what's gonna happen. You were one of the first guys to predict, well in advance of everybody else, that the crossover between flash and HDD was gonna happen sooner rather than later. So I'm not going to spend a lot of time quizzing you. What do you see as a modern storage architecture? Let's just let it rip.

>> Okay, well, let's start with one simple observation. The days of standalone systems for data have gone. We're in a software-defined world, and you wanna be able to run those data architectures anywhere where the data is. And that means in your data center where it was created, or in the cloud, or in a public cloud, or at the edge. You want to be flexible enough to be able to do all of the data services wherever the best place is, and that means everything has to be software driven.

>> Software defined is the first proposition of a modern data storage facility?

>> Absolutely.

>> Second?

>> So the second thing is that there are different types of technology. You have the very fastest storage, which is in the DRAM itself. You have NVDIMM, which is the next one down from that: expensive, but a lot cheaper than the DIMM. And then you have different sorts of flash: the high performance flash, and the 3D flash, you know, as many layers as you can, which is much cheaper flash. And then at the bottom you have HDD, and even tape, as storage devices. The key question is, how do you manage that sort of environment?

>> Where do we start? Because it still sounds like we have a storage hierarchy.

>> Absolutely.

>> And it still sounds like that hierarchy is defined largely in terms of access speeds.

>> Yep.

>> And price points.

>> Price points, yes.

>> Those are the two main ones, and bandwidth and latency as well are within that, which are tied into those?

>> Which are tied into those, yes. So if you're gonna have this everywhere, and you need services everywhere, what you have to have is an architecture which takes away all of that complexity, so that all you see from an application point of view is data. How it gets there, how it's put away, how it's stored, how it's protected: that's under the covers. So the first thing is, you need a virtualization of that data layer.

>> The physical layer?

>> The virtualization of that physical layer, yes. And secondly, you need that physical layer to extend to all the places that may be using this data. You don't wanna be constrained to "this data set lives here." You want to be able to say: okay, I wanna move this piece of programming to the data as quickly as I can, because that's much, much faster than moving the data to the processing. So I want to know where all the data is for this particular dataset or file or whatever it is: where the pieces all are, how they connect together, what the latency is between everything. I wanna understand that architecture, and I want a virtualized view of that across the whole of the nodes that make up my hybrid cloud.

>> So let me be clear here. We are going to use a software-defined infrastructure that allows us to place the physical devices that have the right cost performance characteristics where they need to be, based on the physical realities of latency, power, availability, hardening, et cetera.

>> And the network.

>> And the network. But we wanna mask that complexity from the application, the application developer, and the application administrator.

>> Yes.

>> And software defined helps do that, but doesn't completely do it.

>> No. Well, you want services which say-

>> Exactly, so there are services on top of all that.

>> On top of all that, absolutely.

>> That are recognizable by the developer, by the business person, by the administrator, as they think about how they use data towards those outcomes: not use the storage or use the device, but use the data.

>> Data to reach application outcomes, that's absolutely right. And that's what I call the data plane, which is a series of services which enable that to happen, driven by the application requirements themselves.

>> So we've looked at this, and some of the services include end-to-end compression, deduplication-

>> Deduplication.

>> Backup and restore, security, data protection.

>> Protection, yeah.

>> So that's kind of the set of services that the enterprise buyer now needs to think about.

>> Yes.

>> So that those services can be applied by policy.

>> Yes.

>> Wherever they're required, based on the utilization of the data.

>> Correct.

>> Where the event takes place.

>> And then you still have, at the bottom of that, the different types of devices. You still want hard disks, for example; they're not disappearing. But if you're gonna use hard disks, then you want to use them in the right way for a hard disk: you wanna give it large blocks, and you want to have it going sequentially in and out all the time.

>> So the storage administration and the physical schema and everything else is still important in all this?

>> Absolutely. But it's less important, less a centerpiece of the buying decision.

>> Correct.

>> Increasingly it's: how well does this stuff support the services that the business is using to achieve your outcomes?

>> And you want to use the lowest cost that you can, and there'll be many different options open, more and more options open. But the automation of that is absolutely key. And in that automation, from a vendor point of view, one of the key things they have to do is to be able to learn from the usage by their customers, across as broad a number of customers as they can. Learn what works or doesn't work, so that they can put automation into their own software, their own software service.

>> So it sounds like we're talking four things. We've got software defined; we still have a storage hierarchy, defined by cost and performance, but with mainly semiconductor stuff; we've got great data services that are relevant to the business; and automation that masks the complexity from everything.

>> And a lot of AI there, automated.

>> Running things. Fantastic. David Floyer, talking about modern storage architectures. Once again, thanks for joining us on the Cube Conversation. And I'm your host, Peter Burris. See you next time.

(jazz music)
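The conversation's central claim, that data services get applied by policy wherever the data sits, rather than configured device by device, can be sketched as a policy table matched against data attributes. Everything below is a hypothetical illustration; the policy names and attributes are invented, not drawn from any product.

```python
# Hypothetical policy engine: services are chosen by what the data is,
# not by which physical device happens to hold it.
POLICIES = [
    # (predicate over data attributes, services applied when it matches)
    (lambda d: d["classification"] == "financial",
     {"end_to_end_encryption", "snapshot_hourly", "air_gap_copy"}),
    (lambda d: d["location"] == "edge",
     {"end_to_end_encryption", "reduce_at_source"}),
]
DEFAULT = {"end_to_end_encryption", "backup_daily"}

def services_for(dataset):
    """Union of the default and every matching policy; encryption is in
    every path, so the weakest link never breaks the chain."""
    applied = set(DEFAULT)
    for predicate, services in POLICIES:
        if predicate(dataset):
            applied |= services
    return applied

svc = services_for({"classification": "financial", "location": "cloud"})
print("air_gap_copy" in svc)           # True
print("end_to_end_encryption" in svc)  # True
```

The design choice worth noting is that the physical tier never appears in the policy: placement (the previous sketch) and services (this one) are decided independently, which is what lets one policy follow the data across data center, cloud, and edge.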

Published Date : May 1 2019


11 25 19 HPE Launch Floyer 1 (Do not make public)


 

(lively funk music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBEConversation. >> Hi, welcome to the CUBE Studio for another CUBEConversation where we go in-depth with the thought leaders driving outcomes with technology. I'm your host, Peter Burris. One of the biggest challenges that enterprises face is how to appropriately apply artificial intelligence. Now, let's be clear, the basic precepts and concepts and approaches to artificial intelligence have been around for a long time. One might argue decades. It's happening now because the technology can perform it. And one of the technologies that's especially important, and is absolutely essential to determining success or failure in AI, is storage. So what we're gonna do now is have a conversation with David Floyer, the CTO and co-founder of Wikibon, about that crucial relationship between AI and storage. David, welcome to the conversation. >> Thanks very much, indeed, Peter. Interesting subject. >> Oh, very interesting subject, so let's get right into it, David. >> Sure. >> What is it about AI and storage that makes the two of them so essential to the co-evolution of each? >> Absolutely, so first of all, you've got different parts of AI. So you've got the part where you're developing all of the models themselves, where you've got a large amount of data. You're trying to capture that data. You're trying to find out what's important in that data. And then you're developing models which you're going to use to do something. Either automate something or give information to somebody about the business process. >> All right, so that's the first one. What's the second one? >> So the second one, they're both concerned with inferencing. There's inferencing close to that data, the overall data, and there's inferencing right at the Edge itself. And they both important and driven in different ways. 
The inferencing close to the applications, the centralized applications-- >> So inferencing in the data center, so to speak. >> In the data center itself. Those are going to be, essentially, most of them, real-time decisions that are being made. For example, if I am trying to find out what sort of customer you are, what sort of price that I'm gonna give you, what sort of delivery, what sort of terms I'm gonna give you, that's information that I'm gonna have to get from a whole number of different sources, push them all together, and give that information to my systems of record. They are gonna make those decisions and they're gonna push them down to maybe an Edge or Apple-type device to give you the answer to that. That's going on in real-time and has to be extremely rapidly done. >> And now we've got inferencing at the Edge. >> And then you've got inferencing at the Edge. Now here's all of the data coming in, whether it be a mobile Edge or a stationery Edge, huge amounts of data coming in to cameras to other senses of one sort or another. >> Or being generated right there where the-- >> Absolutely, generated, that's the first time that status has ever existed. And what you want to do with that is put the inference there and take what's important from that data. Because 99% or 99.9% of that data is absolutely free of value. So you're trying to extract that 0.01% of data and do actions locally with that and also pass those up the line. So you're actually getting rid of a huge amount of data at the Edge. >> All right, so that's an overall AI taxonomy. >> Yeah. >> How does storage influence what happens at the modeling and development level? What's the relationship between AI modeling and storage? >> So AI modeling is about lots and lots of data. Lots and lots of small files. Imagine thousands of millions of pictures going through millions of any sort of artificial intelligence you're trying to generate on that. 
So, that's one thing is, it's large amounts of data and you don't do modeling just once. You reuse the data. You run it again. You check it against something else. You're constantly looking for new types of data, new data, large amounts of data, lot of large-scale processing of that data to create models of one sort or another. >> You're not gonna do that on disk. >> You're not gonna to that on disk. That has to be flash. Has to be fast flash. And what you want, if you can, is to integrate the processing and the data, all as one, so that it fits in, it can be viewed as a system for the data scientists, which it sits there and does what they want to do and then can be managed from a storage point-of-view by the professionals. >> So in the center, it's gonna be very fast, very high-performance, very scalable, and flash. >> Yes. >> What about at the Edge? >> So, well, (laughs) >> What about at the activity Edge, let's call it? >> Yeah, activity, that again, is here you've got real-time processing. So again, the emphasis is on flash most of the time. And you've, in fact, got other technologies like, for example, envidems, which are coming in and increasing. So you've got a hierarchy there which you want to be able to use the right sort of storage for that job. But a lot of that's gonna be extremely rapid. And you want to be able to take your current systems of record, squeeze those down to allow space for all this inference work to be added in so that everything is real-time. So that's really, it's much faster. Of course, it doesn't mean you get rid of all of the things like data services and all of the things which you've collected. >> Well, on the contrary, doesn't it mean that those types of things become more important? >> Become more important. >> Well, so here's a hypothesis that I've had for a while and we've talked about, that the traditional storage notion of data, which was size, class, format-- >> Latency. >> IOPs. >> Yeah. 
>> Those types of things-- >> Bandwidth. >> Means nothing to the data scientists. >> Correct. >> AI is a business problem driving business observations so data services, in many respects, are a way of mediating the performance and other realities at the device level with the business and tool chain requirements at the AI level, right? >> Absolutely, absolutely, and you've gotta have those services. And, indeed, with hybrid computing, you want to move that processing to where the data is created, as much as you can. So if it's created in the Cloud, you go to the Cloud. If it's created-- >> Created or used? >> If you can, you want to do it where the data is created. The less data you move around, the better. So it's much better to send a request to that data where it's created, as close as possible to that. >> Okay, subject to the realities of latency. >> Absolutely. >> So, in many respects, it's still gonna be you want the data where it's gonna be used, but if you don't have to move it to where it's used, because the latency envelope is large enough, then keep it where it's created. >> Keep it where it's created. >> Got it. >> Absolutely, yes. And now, if we go to the Edge, there you really want to avoid having to store data at all. There's 99% of that data is useless. 99.9% of that data is useless. You wanna get rid of that. You want to use the inferencing to store only what is necessary. Now, to begin with, when you're still in the data modeling stage of AI, you may want to send some of that back, quite a lot of it back. But once you get into a normal running of it, you want to get rid of as much of that possible data as you can, take the core of that data, what it matters, the exceptions, etc. Send that up and get rid of it. Just destroy it. >> Well, this is one area where you and I, we generally agree. You say 99%, maybe it's 95%, maybe it's 90% of the data gets, you know, gotten rid of. 
Because there's always gonna be derivative opportunities to use data in valuable ways. But that's something we're gonna discover over the next few years. >> Sure. >> But we're not gonna go through that process if we don't have storage that can handle these workloads. >> Absolutely. >> All right. >> Yep. >> David Floyer, talking about the relationship between AI and storage. Thanks again for being on the CUBE. >> You're welcome. >> And thanks for joining us for another CUBEConversation. I'm Peter Burris. See you next time. (lively funk music)
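[Editor's note] The edge-inferencing pattern David describes, keep only the exceptions, discard the normal readings, can be sketched in a few lines. This is an illustration, not anything from the interview: the function name, the fixed expected value, and the tolerance threshold are all hypothetical stand-ins for whatever model the edge device actually runs.

```python
# Minimal sketch of edge-side filtering: score each incoming reading
# against what the model expects and forward only the exceptions.
# The vast majority of readings fall within tolerance and are dropped
# at the edge, never stored or transmitted.

def filter_edge_readings(readings, expected, tolerance):
    """Keep only readings that deviate from `expected` by more than
    `tolerance`; everything else is discarded at the edge."""
    exceptions = []
    for r in readings:
        if abs(r - expected) > tolerance:
            exceptions.append(r)  # anomaly: send upstream
        # otherwise the reading is simply dropped
    return exceptions

readings = [10.0, 10.1, 9.9, 42.0, 10.05, 10.2, -3.0]
sent = filter_edge_readings(readings, expected=10.0, tolerance=1.0)
print(sent)  # → [42.0, -3.0]
print(f"retained {len(sent)}/{len(readings)} readings")
```

In a real deployment the comparison would be a trained inference model rather than a fixed threshold, but the economics are the same: the fraction of data retained, not the raw volume generated, is what the storage tier has to absorb.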

Published Date: May 1, 2019

