Phil Bullinger, Western Digital | CUBE Conversation, August 2020
>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We are in our Palo Alto studios, COVID is still going on, so all of the interviews continue to be remote, but we're excited to have a CUBE alumnus, he hasn't been on for a long time, and this guy has been in the weeds of the storage industry for a very, very long time, and we're happy to have him on and get an update because there continues to be a lot of exciting developments. He's Phil Bullinger, he is the SVP and general manager, data center business unit at Western Digital, joining us, I think, from Colorado, so Phil, great to see you, how's the weather in Colorado today? >> Hi Jeff, it's great to be here. Well, it's a hot, dry summer here, I'm sure like a lot of places. But yeah, enjoying the summer through these unusual times. >> It is unusual times, but fortunately there's great things like the internet and heavy duty compute and storage out there so we can get together this way. So let's jump into it. You've been in the business a long time, you've been at Western Digital, you were at EMC, you worked on Isilon, and you were at storage companies before that. And you've seen kind of this never-ending up-and-to-the-right slope that we see kind of ad nauseam in terms of the amount of storage demands. It's not going anywhere but up, plus increasing complexity in terms of unstructured data, sources of data, speed of data, you know, the kind of classic big V's of big data. So I wonder, before we jump into specifics, if you can kind of share your perspective 'cause you've been kind of sitting in the catbird seat, and Western Digital's a really unique company; you not only have solutions, but you also have media that feeds other people's solutions.
So you guys are really seeing it all, and ultimately all this compute's got to put this data somewhere, and a whole lot of it's sitting on Western Digital. >> Yeah, it's a great intro there. Yeah, it's been interesting; through my career, I've seen a lot of advances in storage technology. Speeds and feeds, like we often say, but the advancement through mechanical innovation, electrical innovation, chemistry, physics, just the relentless growth of data has been driven in many ways by the relentless acceleration and innovation of our ability to store that data, and that's been a very virtuous cycle through what, for me, has been 30 years in enterprise storage. There are some really interesting changes going on though, I think. If you think about it, in a relatively short amount of time, data has gone from this artifact of our digital lives to the very engine that's driving the global economy. Our jobs, our relationships, our health, our security, they all kind of depend on data now, and for most companies, kind of irrespective of size, how you use data, how you store it, how you monetize it, how you use it to make better decisions to improve products and services, it becomes not just a matter of whether your company's going to thrive or not, but in many industries, it's almost an existential question; is your company going to be around in the future? And it depends on how well you're using data. So this drive to capitalize on the value of data is pretty significant. >> It's a really interesting topic, we've had a number of conversations around trying to get a book value of data, if you will, and I think there's a lot of conversations, whether it's an accounting kind of way, or finance, or kind of goodwill, of how do you value this data?
But I think we see it intrinsically in a lot of the big companies that are really data-based, like the Facebooks and the Amazons and the Netflixes and the Googles, and those types of companies where it's really easy to see, and if you see the valuation that they have compared to their book value of assets, it's really baked in there. So it's fundamental going forward, and then we have this thing called COVID hit, which I'm sure you've seen all the memes on social media. Who drove your digital transformation, the CEO, the CMO, the board, or COVID-19? And it became this light switch moment where your opportunities to think about it are no more; you've got to jump in with both feet, and it's really interesting to your point that it's the ability to store this and think about it now differently, as an asset driving business value versus a cost that IT has to accommodate to put this stuff somewhere. So it's a really different kind of a mind shift and really changes the investment equation for companies like Western Digital about how people should invest in higher performance and higher capacity, and more unified, and kind of democratizing the accessibility of that data to a much greater set of people, with tools that can now start making many more business-line and in-line decisions than just the data scientists kind of on Mahogany Row. >> Yeah, as you mentioned, Jeff, here at Western Digital, we have such a unique kind of perch in the industry to see all the dynamics in the OEM space and the hyperscale space and the channel, really across all the global economies, about this growth of data. I have worked at several companies and have been familiar with what I would have called big data projects and fleets in the past. But at Western Digital, you have to move the decimal point quite a few digits to the right to get the perspective that we have on just the volume of data that the world is just relentlessly, insatiably consuming.
Just a couple examples: for the drive projects we're working on now, our capacity enterprise drive projects, you know, we used to do business case analysis and look at their lifecycle capacities, and we measured them in exabytes. Not anymore; now we're talking about zettabytes. We're actually measuring capacity enterprise drive families in terms of how many zettabytes they're going to ship in their lifecycle. If we look at just the consumption of this data, the last 12 months of industry TAM for capacity enterprise compared to the 12 months prior to that, that annual growth rate was north of 60%. And so it's rare to see industries that are growing at that pace. And so the world is just consuming immense amounts of data, and as you mentioned, the COVID dynamics have been both an accelerant in some areas, as well as a headwind in others, but it's certainly accelerated digital transformation. I think a lot of companies were talking about digital transformation and hybrid models, and COVID has really accelerated that, and it's certainly driving, continues to drive, just this relentless need to store and access and take advantage of data. >> Yeah, well Phil, in advance of this interview, I pulled up the old chart with all the different bytes: kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, and zettabytes, and just per the Wikipedia page, what is a zettabyte? It's as much information as there are grains of sand on all the world's beaches. For one zettabyte. You're talking about thinking in terms of those units, I mean, that is just mind-boggling to think that that is the scale at which we're operating. >> It's really hard to get your head wrapped around a zettabyte of storage, and I think a lot of the industry thinks when we say zettabyte-scale era that it's just a buzzword, but I'm here to say it's a real thing. We're measuring projects in terms of zettabytes now. >> That's amazing. Well, let's jump into some of the technology.
So I've been fortunate enough here at theCUBE to be there at a couple of major announcements along the way. We talked before we turned the cameras on about the helium announcement, and having the hard drive sit in the fish bowl to get all types of interesting benefits from this less dense gas that is helium versus air. I was down at the MAMR and HAMR announcement, which was pretty interesting; big, heavy technology moves there to, again, increase the capacity of hard drive-based systems. You guys are doing a lot of stuff on RISC-V, which I know is an open source project, so you guys have a lot of things happening, but now there's this new thing called zoned storage. So first off, before we get into it, why do we need zoned storage, and really, what does it now bring to the table in terms of a capability? >> Yeah, great question, Jeff. So why now, right? Because as I mentioned, I've been in storage for quite some time. In the last, let's just say in the last decade, we've seen the advent of the hyperscale model and certainly a whole other explosion level of data, and just the velocity with which the hyperscalers can create and consume and process and monetize data. And of course with that has also come a lot of innovation, frankly, in the compute space around how to process that data, moving from what was just a general purpose CPU model to GPUs and DPUs, and so we've seen a lot of innovation on that side. But frankly, on the storage side, we haven't seen much change at all in terms of how operating systems, applications, file systems, how they actually use the storage or communicate with the storage. And sure, we've seen advances in storage capacities; hard drives have gone from two to four, to eight, to 10 to 14, 16, and now our leading 18 and 20 terabyte hard drives. And similarly, on the SSD side, now we're dealing with capacities of seven, and 15, and 30 terabytes. So things have gotten larger, as you'd expect.
And some interfaces have improved; I think NVMe, which we'll talk about, has been a nice advance in the industry. It's really now brought a very modern, scalable, low latency, multi-threaded interface to NAND flash, to take advantage of the inherent performance of transistor-based persistent storage. But really, when you think about it, it hasn't changed a lot. But what has changed is workloads. One thing that definitely has evolved in the space of the last decade or so is that the thing driving a lot of this explosion of data in the industry is workloads that I would characterize as sequential in nature; they're serially captured and written. They also have a very consistent lifecycle, so you would write them in a big chunk, you would read them maybe in smaller pieces, but the lifecycle of that data we can treat more as a chunk of data. But the problem is applications, operating systems, file systems continue to interface with storage using paradigms that are many decades old. The old 512-byte or even 4K sector size constructs were developed in the hard drive industry just as convenient paradigms to structure what is an unstructured sea of magnetic grains into something structured that can be used to store and access data. But the reality is, when we talk about SSDs, structure really matters, and so what has changed in the industry is the workloads are driving very, very fresh looks at how more intelligence can be applied to that application-OS-storage device interface to drive much greater efficiency. >> Right, so there's two things going on here that I want to drill down on. On one hand, you talked about kind of the introduction of NAND and flash, and treating it, generically, like you did a regular hard drive. But you could get away with it and you could do some things because the interface wasn't taking full advantage of the speed that was capable in the NAND.
But NVMe has changed that, and now forced kind of getting rid of some of those inefficient processes that you could live with, so it's just kind of a classic next-level step up in capabilities. First you get the better media, and you just kind of plug it into the old way. Now you're actually starting to put in processes that take full advantage of the speed that that flash has. And I think obviously prices have come down dramatically since the first introduction, where before it was always kind of cordoned off for super high end, super low latency, super high value apps; it just continues to spread and proliferate throughout the data center. So what did NVMe force you to think about in terms of maximizing the return on NAND and flash? >> Yeah, NVMe, which we've been involved in the standardization of, I think it's been a very successful effort, but we have to remember NVMe is about a decade old, or even more when the original work started around defining this interface, but it's been very successful. The NVMe standards body is a very productive cross-company effort; it's really driven a significant change, and what we see now is the rapid adoption of NVMe in all data center architectures, whether it's very large hyperscale, to classic on-prem enterprise, to even smaller applications. It's just a very efficient interface mechanism for connecting SSDs into a server. So we continue to see evolution of NVMe, which is great, and we'll talk about ZNS today as one of those evolutions. We're also very keenly interested in the NVMe protocol over fabrics, and so one of the things that Western Digital has been talking about a lot lately is incorporating NVMe over fabrics as a mechanism for connecting shared storage into multiple host architectures.
We think this is a very attractive way to build shared storage architectures of the future that are scalable, that are composable, that really have a lot more agility with respect to rack-level infrastructure and applying that infrastructure to applications. >> Right. Now, one thing that might strike some people as kind of counterintuitive is, within zoned storage, in zoning off parts of the media, to think of the data also kind of in these big chunks; it feels contrary to the kind of atomization that we're seeing in the rest of the data center, right? So smaller units of compute, smaller units of storage, so that you can assemble and disassemble them in different quantities as needed. So what was the special attribute that you had to think about that actually comes back and provides a benefit in kind of re-chunking, if you will, into these zones, versus trying to get as atomic as possible? >> Yeah, it's a great question, Jeff, and I think it's maybe not intuitive in terms of why zoned storage actually creates a more efficient storage paradigm when you're storing stuff essentially in larger blocks of data, but this is really where the intersection of structure and workload and sort of the nature of the data all come together. Turn back the clock maybe four or five years, when host-managed SMR hard drives first emerged on the scene. This was really taking advantage of the fact that the write head on a hard disk drive is larger than the read head, or the read head can be much smaller, and so the notion of overlapping or shingling the data on the drive, giving the read head a smaller target to read, but the writer a larger write pad to write the data, could actually, what we found was, increase areal density significantly. And so that was really the emergence of this notion of sequentially written larger blocks of data being actually much more efficiently stored when you think about physically how it's being stored.
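As an aside for readers who want a concrete mental model: the sequential-write constraint Phil describes, which applies to both SMR zones and ZNS zones, can be sketched in a few lines of code. This is an illustrative toy model, not the behavior of any specific Western Digital device; the class name, zone size, and block counts are made up for the example.

```python
class Zone:
    """Toy model of a storage zone: writes must land exactly at the write
    pointer, and space is reclaimed by resetting the whole zone rather
    than by per-page garbage collection."""

    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.write_pointer = 0  # next writable block offset within the zone

    def write(self, offset, num_blocks):
        # Zoned devices reject writes that are not strictly sequential
        if offset != self.write_pointer:
            raise IOError("non-sequential write rejected")
        if self.write_pointer + num_blocks > self.size_blocks:
            raise IOError("write would exceed zone capacity")
        self.write_pointer += num_blocks

    def reset(self):
        # The whole zone is invalidated at once, matching the data's lifecycle
        self.write_pointer = 0


zone = Zone(size_blocks=4096)
zone.write(0, 128)    # OK: starts at the write pointer
zone.write(128, 64)   # OK: continues sequentially
try:
    zone.write(0, 8)  # rejected: no overwrite-in-place
except IOError as exc:
    print(exc)
zone.reset()          # one cheap operation reclaims the entire zone
```

The same constraint is what lets an SMR drive shingle tracks safely and lets a ZNS SSD map a zone directly onto the physical layout of the NAND.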
What's very new now and really gaining a lot of traction is the SSD corollary to SMR on the hard drive. On the SSD side, we have the ZNS specification, which is very similar: you divide up the namespace of an SSD into fixed-size zones, and those zones are written sequentially, but now those zones are intimately tied to the underlying physical architecture of the NAND itself: the dies, the planes, the read pages, the erase blocks. So that, in treating data as a block, you're actually eliminating a lot of the complexity and the work that an SSD has to do to emulate a legacy hard drive, and in doing so, you're increasing performance and endurance and the predictable performance of the device. >> I just love the way that you kind of twist the lens on the problem. On one hand, by rule, just looking at my notes here, zoned storage devices introduce a number of restrictions and limitations and rules that are outside the full capabilities of what you might do. But in doing so, in aggregate, the efficiency and the performance of the system as a whole is much, much better, even though when you first look at it, you think it's more of a limiter, but it actually opens things up. I wonder if there are any kind of performance stats you can share, or any kind of empirical data, just to give people kind of a feel for what that comes out as. >> So if you think about the potential of zoned storage in general, and again, when I talk about zoned storage, there are two components: there's an HDD component of zoned storage that we refer to as SMR, and there's an SSD version of that that we call ZNS. When we think about SMR, the value proposition there is additional capacity. So effectively in the same drive architecture, with roughly the same bill of materials used to build the drive, we can overlap or shingle the data on the drive, and generally give the customer additional capacity.
Today, with our 18, 20 terabyte offerings, that's on the order of just over 10%, but that delta is going to increase significantly going forward, to 20% or more. And when you think about a hyperscale customer that has not hundreds or thousands of racks, but tens of thousands of racks, a 10 or 20% improvement in effective capacity is a tremendous TCO benefit, and the reason we do that is obvious. I mean, the economic paradigm that drives large at-scale data centers is total cost of ownership, both acquisition costs and operating costs. And if you can put more storage in a square tile of data center space, you're going to generally use less power, you're going to run it more efficiently, and from an acquisition cost standpoint, you're getting a more efficient purchase of that capacity. And in doing that, through our innovation, we benefit from it and our customers benefit from it. So the value proposition for zoned storage in capacity enterprise HDD is very clear: it's additional capacity. The exciting thing is, on the SSD side of things, or ZNS, it actually opens up even more value proposition for the customer. Because SSDs have had to emulate hard drives, there's been a lot of inefficiency and complexity inside an enterprise SSD, dealing with things like garbage collection and write amplification reducing the endurance of the device. You have to over-provision; you have to insert as much as 20, 25, even 28% additional NAND bits inside the device just to allow for that extra space, that working space, to deal with deletes of data that are smaller than the erase block that the device supports. So you have to do a lot of reading and writing of data and cleaning up. It makes for a very complex environment. ZNS, by matching the zone size to the physical structure of the SSD, essentially eliminates garbage collection and reduces over-provisioning by as much as 10x. So if you were over-provisioning by 20 or 25% on an enterprise SSD, on a ZNS SSD that can be one or two percent.
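The percentages Phil quotes reduce to straightforward arithmetic. A rough sketch using the figures from the conversation; the rack count and per-rack capacity below are hypothetical round numbers for illustration, not Western Digital data:

```python
# SMR: roughly 10% more capacity today on about the same bill of materials
cmr_drive_tb = 18
smr_gain = 0.10
smr_drive_tb = cmr_drive_tb * (1 + smr_gain)  # ~19.8 TB effective

# Fleet-level effect for a hypothetical hyperscale deployment
racks = 20_000            # "tens of thousands of racks"
tb_per_rack = 10_000      # assumed 10 PB of raw capacity per rack
extra_capacity_pb = racks * tb_per_rack * smr_gain / 1_000
print(f"Extra fleet capacity from shingling: {extra_capacity_pb:,.0f} PB")

# ZNS: over-provisioning falls from as much as 28% to one or two percent
conventional_op = 0.28
zns_op = 0.02
reclaimed = conventional_op - zns_op
print(f"Raw NAND reclaimed by ZNS: {reclaimed:.0%}")
```

The point of the exercise is that even single-digit percentage gains per drive compound into exabyte-scale differences at fleet size, which is why this shows up directly in the TCO equation.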
The other thing to keep in mind is enterprise SSDs typically incorporate DRAM, and that DRAM is used to help manage all those dynamics that I just mentioned. But with a much simpler structure, where the pointers to the data can be managed without all the DRAM, we can actually reduce the amount of DRAM in an enterprise SSD by as much as 8x. And if you think about the bill of materials of an enterprise SSD, DRAM is number two on the list in terms of the most expensive BOM components. So ZNS SSDs actually have a significant customer total cost of ownership impact. It's an exciting standard, and now that we have the standard ratified through the NVMe working group, it can really accelerate the development of the software ecosystem around it. >> Right, so let's shift gears and talk a little bit less about the tech and more about the customers and the implementation of this. So you talked kind of generally, but are there certain types of workloads that you're seeing in the marketplace where this is a better fit, or is it just really the big heavy lifts where they just need more and this is better? And then secondly, within these hyperscale companies, as well as just regular enterprises that are also seeing their data demands grow dramatically, are you seeing that this is a solution that they want to bring in for kind of the marginal, kind of next data center, extension of their data center, or their next cloud region? Or are they doing lift and shift and ripping stuff out? Or do they have enough data growth organically that there's plenty of new stuff that they can put in these new systems? >> Yeah, I love that. The large customers don't rip and shift; they ride their assets for a long lifecycle, 'cause with the relentless growth of data, you're primarily investing to handle what's coming in over the transom. But we're seeing solid adoption. And SMR, you know, we've been working on that for a number of years.
We've got significant interest and investment, co-investment, our engineering and our customers' engineering, adapting the application environments to take advantage of SMR. The great thing is, now that we've got the ZNS standard ratified in the NVMe working group, we've got a very similar, and all approved now, situation: we've had SMR standards approved for some time in the SATA and SCSI standards, and now we've got the same thing in the NVMe standard. And the great thing is, once a company goes through the lift, so to speak, to adapt an application, file system, operating system, ecosystem to zoned storage, it pretty much works seamlessly between HDD and SSD, and so it's not an incremental investment when you're switching technologies. Obviously the early adopters of these technologies are going to be the large companies who design their own infrastructure, who have mega fleets of racks of infrastructure where these efficiencies really, really make a difference in terms of how they can monetize that data, how they compete against the landscape of competitors they have. For companies that are totally reliant on kind of off-the-shelf standard applications, that adoption curve is going to be longer, of course, because there are some software changes that you need to adapt to enable zoned storage. One of the things Western Digital has done and taken the lead on is creating a landing page for the industry with zonedstorage.io. It's a webpage that's actually an area where many companies can contribute open source tools, code, validation environments, technical documentation. It's not a marketing website; it's really a website built to host actual open source content that companies can use and leverage and contribute to, to accelerate the engineering work to adapt software stacks to zoned storage devices, and to share those things.
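To make the DRAM point from earlier in the conversation concrete: a conventional flash translation layer keeps a fine-grained logical-to-physical map in DRAM, while a zoned device only needs coarse per-zone state. The sizing below is a back-of-the-envelope sketch; the drive capacity, map entry sizes, page size, and zone size are all illustrative assumptions, not figures from the ZNS specification or any shipping product.

```python
ssd_bytes = 16 * 10**12  # a hypothetical 16 TB enterprise SSD

# Conventional FTL: roughly 4 bytes of map entry per 4 KiB logical page
page_map_bytes = ssd_bytes // 4096 * 4

# Zoned device: just a write pointer and a little state per zone
zone_bytes = 2**30                   # assume 1 GiB zones
num_zones = ssd_bytes // zone_bytes
zone_map_bytes = num_zones * 64      # assume ~64 bytes of state per zone

print(f"Page-level map: ~{page_map_bytes / 2**30:.1f} GiB of DRAM")
print(f"Zone-level map: ~{zone_map_bytes / 2**10:.0f} KiB")
```

Under these assumptions the fine-grained map needs gigabytes of DRAM while the per-zone state fits in under a megabyte, which is the mechanism behind the "reduce DRAM by as much as 8x" claim; real drives keep some DRAM for other purposes, so the full map does not disappear entirely.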
>> Let me just follow up on that 'cause, again, you've been around for a while, and I'd like to get your perspective on the power of open source. It used to be that the best secrets, the best IP, were closely guarded and held inside, and now really we're in an age where that's not necessarily so. There are brilliant minds and use cases and people out there; just by definition, there are more groups of engineers, more engineers, outside your building than inside your building. How has that really changed kind of the strategy in terms of development, when you can leverage open source? >> Yeah, open source clearly has accelerated innovation across the industry in so many ways, and it's the paradigm around which companies have built business models and innovated on top of. I think it's always important as a company to understand what value-add you're bringing, and what value-add the customers want to pay for. What unmet needs of your customers are you trying to solve for, and what's the best mechanism to do that? And do you want to spend your R&D recreating things, or leveraging what's available and innovating on top of it? It's all about ecosystem. I mean, the days where a single company could vertically integrate top to bottom a complete end solution, you know, those are fewer and further between. I think it's about collaboration and building ecosystems and operating within those. >> Yeah, it's such an interesting change, and one more thing, again, to get your perspective: you run the data center group, but there's this little thing happening out there that we see growing, IoT, the industrial internet of things, and edge computing, as we try to move more compute and storage and power kind of outside the pristine world of the data center and out towards where this data is being collected and processed, when you've got latency issues and all kinds of reasons to start to shift the balance of where the compute is, where the storage is, and the reliance on the network.
So when you look back from the storage perspective, in your history in this industry, and you start to see basically everything is now going to be connected, generating data, and a lot of it is even open source. I talked to somebody the other day doing kind of open source computer vision on surveillance video. So the amount of stuff coming off of these machines is growing in crazy ways. At the same time, it can't all be processed at the data center; it can't all be kind of shipped back and then have a decision made and then ship that information back out. So when you sit back and look at Edge from your kind of historical perspective, what goes through your mind, what gets you excited, what are some opportunities that you see that maybe the layman is not paying close enough attention to? >> Yeah, it's really an exciting time in storage. I get asked that question from time to time, having been in storage for more than 30 years: you know, what was the most interesting time? And there have been a lot of them, but I wouldn't trade today's environment for any other in terms of just the velocity with which data is evolving and how it's being used and where it's being used. A TCO equation may describe what a data center looks like, but data locality will determine where it's located, and we're excited about the Edge opportunity. We see that as a pretty significant, meaningful part of the TAM as we look out three to five years. Certainly 5G is driving much of that; I think any time you speed up the connected fabric, you're going to increase storage and increase the processing of data. So the Edge opportunity is very interesting to us. We think a lot of it is driven by low latency workloads, so the concept of NVMe is very appropriate for that. We think, in general, SSDs deployed in Edge data centers, defined as anywhere from a meter to a few kilometers from the source of the data, we think that's going to be a very strong paradigm.
The workloads you mentioned, especially IoT, just machine-generated data in general, now, I believe, has eclipsed human-generated data in terms of just the amount of data stored, and so we think that curve is just going to keep going in terms of machine-generated data. Much of that data is so well suited for zoned storage because it's sequential; it's sequentially written and captured, and it has a very consistent and homogeneous lifecycle associated with it. So we think what's going on with zoned storage in general, and ZNS and SMR specifically, is well suited for where a lot of the data growth is happening. And certainly we're going to see a lot of that at the Edge. >> Well, Phil, it's always great to talk to somebody who's been in the same industry for 30 years and is as excited about today and the future as they have been throughout their whole career. So that really bodes well for you, bodes well for Western Digital, and we'll just keep hoping the smart people that you guys have over there keep working on the software and the physics and the mechanical engineering, and keep moving this stuff along. It's really just amazing and just relentless. >> Yeah, it is relentless. What's exciting to me in particular, Jeff, is we've driven storage advancements largely through, as I said, a number of engineering disciplines, and those are still going to be important going forward: the chemistry, the physics, the electrical, the hardware capabilities. But I think, as is widely recognized in the industry, it's a diminishing curve. I mean, the amount of energy, the amount of engineering effort, investment, the cost and complexity of these products to get to that next capacity step is getting more difficult, not less. And so things like zoned storage, where we now bring intelligent data placement to this paradigm, are what I think makes this current juncture that we're at very exciting. >> Right, right, well, it's applied AI, right?
Ultimately you're going to have more and more compute power driving the storage process and how that stuff is managed. As more cycles become available and they're cheaper, and ultimately compute gets cheaper and cheaper, as you said, you guys just keep finding new ways to move the curve. And we didn't even get into the totally new material science, which is also coming down the pike at some point in time. >> Yeah, very exciting times. >> It's been great to catch up with you, I really enjoy the Western Digital story; I've been fortunate to sit in on a couple chapters, so again, congrats to you, and we'll continue to watch and look forward to our next update. Hopefully it won't be another four years. >> Okay, thanks Jeff, I really appreciate the time. >> All right, thanks a lot. All right, he's Phil, I'm Jeff, you're watching theCUBE. Thanks for watching, we'll see you next time.
Tracey Newell, Informatica | CUBEConversation, July 2018
(futuristic music) >> Welcome back everybody, Jeff Frick here with theCUBE. We're having a CUBE conversation in our Palo Alto studios, we're waiting for the crazy madness of the second half conference season to begin but before that it's nice to get a little bit of a break in the action and we can have people into our studio in Palo Alto. We're really excited to have our next guest really adding to this journey that we've been kind of watching over a course of many years with Informatica, she's Tracey Newell she's the newly announced President, global field operations from Informatica, Tracey great to meet you. >> Yeah nice to meet you. >> Absolutely. So we've following Informatica for a long time, I think our first visit to Informatica world was 2015 back when it was still a public company, I think it was Info which still has this legacy, that's the hashtag for this show. >> It certainly does. >> Which is kind of funny cause it's not really a stock ticker anymore. So it's been quite a journey and really well timed with kind of the big data revolution. You joined the board a couple years ago. >> I did in 2016. >> But you just decided to leave Mahogany Row and take off the board outfit and jump in and get on the field and get dirty. So why did you decide to get into the nitty gritty? >> Yeah so I joined the board because I really believed in the mission so. Digital transformation is something that's real, it's a boardroom discussion. Every enterprise and government around the world's trying to figure this out and so I wanted to be part of that and I've had a front row seat for a couple of years. >> Right right. >> I'm not one to sit on the sidelines for very long and I thought this is just too much fun and I want to get in the game so I asked to step down and I've recently joined as a president of Global Field ops. 
>> Great, so your background is a little bit confusing due to history: a lot of sales, you've been running sales for a lot of different companies, been in the valley for a while. But sales is really under you, so you haven't really left your sales hat; that's just part of now a bigger role that you're going to be doing with Informatica. >> Yeah that's right, it's a bigger and broader role, but my favorite thing is running sales organizations. So I've done other things too, I've run operations, and customer success, but I was thrilled to join and also run professional services as well, cause that's so important to the delivery and for our customers. >> So you're right, the digital transformation, it's the hot topic, it's what everybody is talking about, and it's true, and Informatica is in the middle of it; data is such a big piece of the digital transformation. As everybody, we used to joke, there are no companies except software companies. I think we're taking it to the next step; now there are no software companies, really everybody should be a data company, and Informatica is sitting right in the middle of that world. >> No that's right, yeah, data is the new currency, it's become one of the most important assets for enterprises; everyone's trying to transform, they're trying to disrupt, they're trying to take on the leader or they're trying to keep their lead. And they need all their information throughout their organization in order to do that, and so, you know, one of the stories that I really like: Graham Thompson's our CIO, and he talks to lots of CIOs, and he'll use this analogy in that, you know, he'll say, does your CFO have a good containment strategy around their most important asset, and that's revenue? Does your CFO, does he or she know what the data is? And inevitably the CIO will say, of course. Well that's great, does he or she know how they're spending the money and who's spending the money?
Do they have controls and compliance and security around that? And of course the answer is yes, yes, yes, and yes. And it inevitably turns to the CIO to say, well, if data is your most important asset, if that truly is the currency in your organization, do you know where all of your data is? And the answer's always no. And there's lots of reasons for that; most enterprises have hundreds if not thousands of databases and shadow IT projects everywhere. But if the answer's no, then how do you take advantage of and leverage that information to the company's advantage? How do you control it, how do you have compliance? And that's where we come in. >> So what's the Informatica special sauce? What's the secret sauce that you guys can bring to the party that nobody else can? >> Yeah so I think inevitably that it would be the platform, so our intelligent data platform is really important to the enterprise. The CIOs that I've been meeting with for the last decade have said, you know, I can't have ten widgets that are all solving a similar problem cause it's just too expensive. I need to bet with the leader in the space, and so what we're doing to provide that for enterprises is really important, and yet at the same time, you've got to be the best at what you do; you can't just be comprehensive, you have to have best-of-breed technology. We're spending 17 cents of every dollar in R&D and we're so focused on just this one thing; our mission is to lead in digital transformation for the large enterprise, and we've been doing this for 25 years, so we've spent billions of dollars in making sure our customers are invested in us and that we protect that investment. >> Right. So what is your charge as you're starting your new role? I think the press release just came out a couple days ago. You know, what does O'Neil say to you, you know, we want you, here's where we want you to go take down that next mountain; what are some of your short term priorities, what are some of your longer term priorities?
>> Yeah so we have a great opportunity in front of us. So stating the obvious, I'm here to drive growth and expansion, both in market share opportunities; we have over nine thousand customers globally, and yet we all know that there's a tremendous opportunity to continue to drive market share. This is a global phenomenon, and yet our largest customers, we have 85 of the Fortune 100, they certainly need a lot of support and we're here to help provide that leadership. And we do a lot of best practice sharing, we do a lot around helping customers on their journeys, cause we see these themes given that we do work with the largest companies around the world.
The data in the Cloud, the data that they have on premise, the data in all the SaaS applications, and that's where we come in and really help them, helping them to leverage all of their information and to do that in an intelligent way. And so we've seen several patterns emerge for how customers can get started, and we've created a series of workshops and summits and specialists that we can sell on a pro forma basis in helping customers figure out where those quick fixes are. There's a couple of key big buckets: we see most large enterprises moving from on premise to Cloud and they're trying to figure out a migration strategy, so we help a lot there. Most customers are trying to figure out how to get closer to their customers, so we do a lot of work around customer intimacy. Intimacy could be driving the top line, cross sale, up sale, or even customer retention. B&P Paraboss did a lot of work with us there around getting closer to, you know, in their wealth practice. And then we do quite a bit around governance, as you would expect. That's a hot topic with GDPR; again, if you can't say you know where all your data is, well then how can you be compliant? >> Right, how can you delete me? >> How can you delete me if you don't know where your data is. There's a number of practices that we've set up, and we'll do some not-for-fee consulting work to help customers try and figure this out. >> Yeah clearly when we first met Informatica in 2015, you know, the Cloud was moving, the public Cloud, but it wasn't near what it is today. And I guess you guys just had a recent announcement, Google Cloud Next is coming up in a couple of weeks, and so you guys are now doing some stuff with Google Cloud? >> We are, yeah, so we're pretty good listeners; I think that's important if you're going to be a business partner to your clients, you got to know what they want, and one of the things that clients have said to us is we need you to partner with our partners.
You know the days of proprietary and sole source, you know, we're going to be everything to you without working with anyone, you know, those days are over. And so the key Cloud partners our customers have asked us to work with include Amazon, Google, Microsoft, Azure, so you're right, last week we did make an announcement that we've done deep integration and we're spending our R&D dollars for customers that are investing with Google, to make those investments more valuable, and we announced API management and integration with Google to make that easy for customers. At Informatica World we announced native integration in our iPaaS platform for Microsoft, so over and over again you'll hear us continue to do more with the partners that our customers want us to, and that's a win-win for everybody. >> It's just so funny too, because when people talk about a company like, say, Coca Cola, which you brought up, they talk about it like it's a company. No, it's like not a company, it's many many companies, many many projects, many many challenges, you know; it's not just one entity that has a relationship with one other entity. >> That's right. >> But the other thing I think is interesting, and Coke's a good example, or Ford, or pick many old line industrial companies that used to have distribution, right, and what was the purpose of distribution? It's to break bulk, to communicate information, and to get the product close to the customer. But the manufacturer never knew what happened once they shipped that stuff off into their distribution. Now it's a whole different world; they have a direct connection with their end customer, they're collecting data from their end customer, and so they have a relationship and an opportunity and a challenge that they never had before. They just sent it off to the distributor and off it went, and hopefully it doesn't come back for repair. (laughing)
I don't care if it's a customer in the mid market, or it's a customer in large enterprise, or if it's a government organization. They need to know all aspects of their customer, partner, and supplier information, and how to communicate globally, if they're going to drive disruption. And one of the CIOs of a Fortune 500 made a comment that we decided that we were going to disrupt ourselves before someone else disrupted us. And that's, that's my comment on why this is a board level discussion, it's super important, and we can help solve those problems. >> It's funny, Dave Potrick, one of my favorite executives, used to be the number two guy at Charles Schwab, and I remember him speaking when they went to fixed-price trading back in the day, I'm aging myself unfortunately, but you know he said the same thing, we have to disrupt ourselves before somebody else disrupts us. And if you're not thinking that way you're going to get disrupted, so better it be you than someone that you don't even see, and usually it's not your side competitor, it's the one coming from a completely different direction that you weren't even paying attention to. >> That's right. And we see that over and over again, and you made the right comment in that it's not always easy; some of these Fortune 500s through consolidation, even the Global 2000, they've done all these acquisitions, and so you've got hundreds of BUs that don't have any systems tied together, and how do you start to create a common connection so that you can build your brand and you can drive differentiation? And that's the key, that's back to the intelligent data platform.
Can you start with the very simple question and a lot of people aren't really sure and can't even start from there. >> That's right. >> So good opportunities. >> Absolutely, there's no question. >> Alright Tracey, well thank you for stopping by, congratulations on your, on your new position and moving from Mahogany Row down into, down into the trenches. >> Down on the field. >> I'm sure they're going to be happy to have you down there on the field. >> Yeah no thanks Jeff I'm happy to be here and thanks for the time today. >> Thank you and we'll see you in Informatica world if not sooner. >> That's right. >> Alright she's Tracey Newell I'm Jeff Frick, you're watching theCube from Palo Alto, thanks for watching. (futuristic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Potrick | PERSON | 0.99+ |
Tracey Newell | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Ford | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
2015 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Palo Alto | LOCATION | 0.99+ |
Jeff | PERSON | 0.99+ |
17 cents | QUANTITY | 0.99+ |
July 2018 | DATE | 0.99+ |
Coca Cola | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Coke | ORGANIZATION | 0.99+ |
25 years | QUANTITY | 0.99+ |
Tracey | PERSON | 0.99+ |
85 | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
Graham Thompson | PERSON | 0.99+ |
O'Neil | PERSON | 0.99+ |
GDPR | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
over nine thousand customers | QUANTITY | 0.98+ |
Azure | ORGANIZATION | 0.98+ |
World Cup | EVENT | 0.98+ |
Mahogany Row | LOCATION | 0.98+ |
B&P Paraboss | ORGANIZATION | 0.98+ |
first | QUANTITY | 0.97+ |
last decade | DATE | 0.96+ |
first visit | QUANTITY | 0.96+ |
both | QUANTITY | 0.95+ |
couple days ago | DATE | 0.95+ |
billions of dollars | QUANTITY | 0.95+ |
Global 2000 | ORGANIZATION | 0.95+ |
Global Field | ORGANIZATION | 0.92+ |
one entity | QUANTITY | 0.92+ |
Cottonsmith | ORGANIZATION | 0.89+ |
ten widgets | QUANTITY | 0.89+ |
Google Cloud | TITLE | 0.88+ |
Ipass | TITLE | 0.87+ |
one thing | QUANTITY | 0.87+ |
one other entity | QUANTITY | 0.86+ |
Cloud | TITLE | 0.84+ |
Insentrum | ORGANIZATION | 0.8+ |
couple years ago | DATE | 0.79+ |
Charles Schwab | ORGANIZATION | 0.77+ |
thousands of databases | QUANTITY | 0.77+ |
Google Cloud Next | TITLE | 0.74+ |
single systems | QUANTITY | 0.72+ |
Capgemeni | ORGANIZATION | 0.67+ |
Mahogany | LOCATION | 0.66+ |
Delite | ORGANIZATION | 0.65+ |
Fortune 500 | ORGANIZATION | 0.64+ |
several weeks | DATE | 0.63+ |
theCUBE | ORGANIZATION | 0.62+ |
second half | QUANTITY | 0.58+ |
two guy | QUANTITY | 0.55+ |
couple of years | QUANTITY | 0.52+ |
C | ORGANIZATION | 0.51+ |
CUBEConversation | EVENT | 0.49+ |
Row | TITLE | 0.49+ |
CUBE | ORGANIZATION | 0.47+ |
Fortune | ORGANIZATION | 0.45+ |
theCube | ORGANIZATION | 0.42+ |
Fortune | TITLE | 0.42+ |
100 | QUANTITY | 0.34+ |
500s | QUANTITY | 0.3+ |
Kevin Bates, Fannie Mae | Corinium Chief Analytics Officer Spring 2018
>> From the Corinium Chief Analytics Officer Conference Spring San Francisco, it's The Cube >> Hey welcome back, Jeff Frick with The Cube. We're in downtown San Francisco at the Corinium Chief Analytics Officer Spring event. We go to Chief Data Officer, this is Chief Analytics Officer. There's so much activity around big data and analytics, and this one is really focused on the practitioners. Relatively small event, and we're excited to have another practitioner here today, and it's Kevin Bates. He's the VP of Enterprise Data Strategy Execution for Fannie Mae. Kevin, welcome. >> It's a mouthful. Thank you. >> You've got it all. You've got strategy, which is good, and then you've got execution. And you've been at Fannie Mae for 15 years according to your LinkedIn, so you've seen a lot of changes. Give us kind of your perspective as this train keeps rolling down the tracks. >> OK. Yeah, so it's been a wild ride. I've been there, like you say, for 15 years. When I started off there I was writing code, working on their underwriting systems. And I've been in different divisions including the credit loss division, which had a pretty exciting couple of years back around 2008. >> More exciting than you care to - >> Well, there was certainly a lot going on. Data's been sort of a consistent theme throughout my career, so the data, Fannie Mae not unlike most companies, is really the blood that keeps the entire organism functioning. So over the past few years I've actually moved into the Enterprise Data Division of the company, where I have responsibility for delivery, operations, platforms, the whole 9 yards. And that's really given me the unique view of what the company does. It's given me the opportunity to touch most of the different business areas and learn a lot about what we need to do better. >> So how has the perspective changed around the data? Before, data was almost a liability because you had to store it, keep it, manage it, and take good care of it.
Now it's a core asset, and we see the valuations up and down. It's probably the driver of some of the crazy valuations that you see in a lot of the companies. So how has that started to change, and what have you done to take advantage of that shift in attitude? >> Sure, it's a great question. So I think the data has always been the lifeblood and key ingredient to success for the company, but the techniques of managing the data have changed for sure, and with that the culture has to change, and how you think about the data has to change. If you go back 10 years ago, all of our data was stored in our data center, which means that we had to pay for all of those servers, and every time data kept getting bigger we had to buy more servers, and it almost became like a bad thing. >> That's what I said, almost like a liability. >> That's right. And as we've certainly started adopting the cloud and technologies associated with the cloud, you may step into that thinking "OK, now I don't have to manage my own data center, I'll let Amazon or whoever do it for me." But it's much more fundamental than that, because as you start embracing the cloud, and now storage is no longer a limitation and compute is no longer a limitation, the number of tools that you use is no longer really a limitation. So as an organization you have to change your way of thinking from "I'm going to limit the number of business intelligence tools that my users can take advantage of" to "How can I support them to use whatever tools they want?" So the mentality around the data I think really goes to how can I make sure the right data is available at the right time, with the right quality checks, so that everybody can say "yep, I can hang my hat on that data," but then get out of the way and let them self serve from there. It's very challenging, there's a lot of new tools and technologies involved.
>> And that's a huge piece of the old innovation game, to have the right data for the right people with the right tools and let more people play with it. But you've got this other pesky thing like governance. You've got a lot of legal restrictions and regulations and compliances. So how do you fold that into opening up the goodies, if you will? >> So I think one effort we have is we're building a platform we call the Enterprise Data Infrastructure. So for that 85 percent of data at Fannie Mae, what we do is loans, we create securities from the loans. And there's liabilities. There's a pretty finite set of data areas that are pretty much consistent at Fannie Mae, and everybody uses those data sets. So taking those and calling them enterprise data sets, they will be centralized, they will be presented to our customers in a uniform way with all of the data quality checks in place. That's the big effort. It means that you're standardizing your data. You're performing a consistent data quality approach on that data, and then you're making it available through any number of consumption patterns, so that can be applications that need it, so I'm integrating applications. It could be warehousing analytics. But it's the same data, and it comes from that promise that we've tagged it enterprise data and we've done that good stuff to make sure that it's good, that it's healthy. That we know where we stand, so if it's not a good data set we know how to tag it and make it such. For all the other data around, we have to let our business partners be accountable for how they're enriching that data and innovating and so forth. But governance is not a - I think in the past, another part of your question, governance used to be more of a, slow everybody down, but if we can incorporate governance and have implied governance in the platform, and then allow the customers to self serve off of that platform, governance becomes really that universal good.
That thing that allows you to be confident that you can take the data and innovate with that data. >> So I'm curious how much of the value add now comes from the non-enterprise data, the part outside the core, which you've had forever. What's the increasing importance and overlay of that exterior data to your enterprise data, to drive more value out of your enterprise data? >> So that enterprise data, like I say, may be the 85%; it's just the facts. These are the loans we brought in. Here's how we can aggregate risk, or how we can aggregate what we call UPB, or the value of our loans. That is pretty generic, and it's intended to be. The third party data sets that our business partners may bring in, that they bump up against that data, can give them strategic advantages. Also the data that those businesses generate, that our business lines generate within their local applications, which we would not call enterprise data, that's very much their special sauce. That's something that the broader organization doesn't need. Those things are all really what our data scientists and our business people combine to create the value added reports that they use for decisioning and so forth. >> And then I'm curious how the big data and the analytics environment has changed from the old days where you had some PhDs and some super bright guys that ran super hard algorithms, and it was on Mahogany Row, and you put in the request and maybe from on high someday you'll get your request, versus really trying to enable a broader set of analysts to have access to that data with a much broader set of tools, enabling a bunch of tools versus picking the one or two winners that are very expensive, you got to limit the seats, et cetera. How has that changed the culture of the company, as well as the way that you are able to deliver products and deliver new applications, if you will? >> So I think that's a work in progress. We still have all the PhDs, and they still really call the shots.
They're the ones that get the call from the Executive Vice President, and they want to see something today that tells them what decision they should make. We have to enable them. They were enabled in the past by having people basically hustle to get them what they need. The big change we're trying to make now is to present the data in a common platform where they really can take it and run with it, so there is a change in how we're delivering our systems to make sure we have the lowest level of granularity, that we have real time data; there's no longer waiting. And the technology tools that have come out in the past 10 years have enabled that. It's not just about implementing that, making it available to all those PhDs. There's another population of analysts that is now empowered where they were not before. The guys that suffered just using Excel or Access databases, that were, I would call them, not the power users but the empowered analysts. The ones who know the data, know how to query data, but they're not hard core quants and they're not developers. Those guys have access to a plethora of tools now that were never available before, that allow them to wrangle data from 20 different data sets, align it, ask questions of it. And they're really focused on operations and running our systems in a smoother, lower cost way. So I think the granularity, the timing, and support for that explosion of tools; we'll still have the big, heavy SAS and R users that are the quants. I think that's the combination: everything has to be supported, and we'll support it better with higher quality, with more recent data, but the culture change isn't going to happen even in a few years. It will be a longer term path for larger organizations to really see maybe possibilities where they can restructure themselves based on technology. Right now the technologies are early enough and young enough that I think they're going to wait and see.
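As a toy illustration of the "wrangle data from 20 different data sets, align it, ask questions of it" work Bates describes, here is a minimal sketch in Python; the data sets, field names, and the question asked are all invented for illustration, not anything from Fannie Mae's actual systems.

```python
# Illustrative only: align two small invented data sets on a shared key,
# then ask a simple question of the joined result -- the kind of task an
# empowered analyst might do without being a quant or a developer.

def inner_join(left, right, key):
    """Align two lists of dicts on `key`, merging fields from both sides."""
    index = {row[key]: row for row in right}
    return [{**row, **index[row[key]]} for row in left if row[key] in index]

loans = [
    {"loan_id": "A1", "upb": 250000.0},
    {"loan_id": "A2", "upb": 180000.0},
    {"loan_id": "A3", "upb": 95000.0},
]
risk_grades = [
    {"loan_id": "A1", "grade": "low"},
    {"loan_id": "A3", "grade": "high"},
]

aligned = inner_join(loans, risk_grades, "loan_id")
high_risk_upb = sum(r["upb"] for r in aligned if r["grade"] == "high")
print(high_risk_upb)  # aggregate UPB of the high-risk loans in this toy data
```

In practice this alignment step is what SQL joins, spreadsheet lookups, or data-wrangling tools do for the analyst; the point is only that it is this kind of joining and querying, not heavy statistics, that the newer self-serve tooling enables.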
>> Obviously you have a ton of legacy systems, you have all these tools. You have that core set, your enterprise data that doesn't really change that much. What's the objective down the road? Are you looking to expand on that core set? Is it such a fixture that you can't do anything with it in terms of flexibility? Where do you go from here? If we were to sit down three years from now, what are we going to be talking about? >> So two things. One, I hope I'll be looking back with excitement at my huge success at transforming those legacy systems. In particular, we have what we call the legacy warehouses that have been around well over 20 years, that are limited and have not been updated because we've been trying to retire them for many years. Folding all of that into my core enterprise data infrastructure, that will be fully aligned on terminology, on near-real time, all those things. That will be a huge success; I'll be looking back and glowing about how we did that and how we've empowered the business with that core data set that is uniquely available on this platform. They don't need to go anywhere else to find it. The other thing I think we'll see is enabling analysts to utilize cloud-based assets and really be successful working both with our on-premises data center, our own data center-supported applications, but also starting to move their heavy running quantitative modeling and all the sorts of things they do into the data lake, which will be cloud based, and really enabling that as a true kind of empowerment for them, so they can use a different set of tools. They can move all that heavy lifting, and the servers they sometimes bring down right now, into an environment where they can really manage their own performance. I think those are going to be the two big changes three years from now that will feel like we're in the next generation. >> All right. Kevin Bates, projecting the future, so we look forward to that day.
Thanks for taking a few minutes out of your day. >> Thank you. >> All right, thanks. He's Kevin, I'm Jeff. You're watching The Cube from the Corinium Chief Analytics Officer Event in San Francisco. Thanks for watching. (music)
Jeff Weidner, Director Information Management | Customer Journey
>> Welcome back everybody. Jeff Frick here with theCube. We're in the Palo Alto studio talking about customer journeys today. And we're really excited to have a professional who's been doing this for a long time: he's Jeff Weidner, an Information Management professional at this moment in time, and in the past and future as well. Jeff, welcome. >> Well, thank you for having me. >> So you've been playing in this sphere for a very long time, and we talked a little bit before we turned the cameras on about one of the great topics that I love in this area: the customer, the 360 view of the customer. There's that Nirvana that everyone says, you know, we're there: we're pulling in all these data sets, we know exactly what's going on, the person calls into the call center and they can pull up all their records, and there's this great vision that we're all striving for. How close are we to that? >> I think we're several years away from that perfect vision that we've talked about for the last, I would say, 10 to 15 years that I've dealt with, from folks who were doing catalogs, like Sears catalogs, all the way to today, where we're trying to mix and match all this information, but most companies are not turning that into actionable data, or actionable information, in any way that's reasonable. And it's just because of the historic kind of siloed nature of all these different systems. >> I mean, you know, I keep hearing, we're gonna do it, all these things can tie together, we can dump all the data in a single data lake and pull it out. What are some of the inhibitors, and what are some of the approaches to try to break some of those down? >> Most of it has been around getting that data lake in order to put the data in its spot, basically trying to make sure: do I have the environment to work in?
Many times a traditional enterprise warehouse doesn't have the right processing power for you, the individual who wants to do the work, or doesn't have the capacity that'll allow you to just bring all the data in and try to rectify it. That's really just trying to do the data cleansing and make some sense of it, 'cause many times there aren't those domain experts. I usually work in marketing, and our Customer 360 exercise was around direct mail, email, all the interactions from our Salesmaker, and the like. So when we look at the data, we go, I don't understand why the Salesmaker is forgetting X, of that behavior that we want to roll together. >> Right. >> But really, first it's finding that environment. Second is the harmonization: I have Bob Smith and Robert Smith, and Master Data Management systems are perhaps few and far between as real services that I can call, as a data scientist or as a data worker, to be able to say, how do I line these up? How can I make sure that all these customer touchpoints are really talking about the same individual, the company, or maybe just the consumer? >> Right. >> And finally, in those Customer 360 projects, it's getting those teams to want to play together, getting that crowdsourcing, either to change the data, such as, I have data, as you mentioned around chat, and I want you to tell me more about it, or I want you to tell me how I can break it down. >> Right, right. >> And if I wanna make changes to it, you go, well wait, where's your money, in order to make that change. >> Right, right. >> And there's so many aspects to it, right? So there's kind of the classic, you know, ingest: you gotta get the data, you gotta run it through the processes, as you said, to harmonize it and bring it together, and then you gotta present it to the person who's in a position, at the moment of truth, to do something with it. And those are three very, very different challenges.
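The harmonization problem described here, deciding whether "Bob Smith" and "Robert Smith" are the same individual, is at heart a record-linkage task. As a rough sketch of the idea (using only Python's standard library, and not what any actual MDM service does), nickname expansion plus string similarity goes a surprisingly long way:

```python
from difflib import SequenceMatcher

# A tiny, illustrative nickname table; a real MDM service would use a much
# larger dictionary plus address, email, and phone evidence.
NICKNAMES = {"bob": "robert", "rob": "robert", "bill": "william"}

def normalize(name: str) -> str:
    """Lowercase the name and expand known nicknames."""
    parts = name.lower().split()
    return " ".join(NICKNAMES.get(p, p) for p in parts)

def same_customer(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two name strings as one individual if, after normalization,
    they are sufficiently similar."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(same_customer("Bob Smith", "Robert Smith"))  # matches after nickname expansion
print(same_customer("Bob Smith", "Alice Jones"))
```

A production matcher would weigh far more evidence and be exposed as a callable service, which is exactly the gap Weidner is pointing at.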
They've been the same challenges forever, but now we're adding all this new stuff to it: are you pulling data from other sources outside of the system of record, are you pulling social data, are you pulling other system data that's not necessarily part of the transactional system? So we're making the job harder at the same time as we're trying to give more power to more people, and not just the data scientists but, as you said, the data worker. So how's that transformation taking place, where we're enabling more kind of data workers, if you will, who aren't necessarily data scientists, to have the power that's available with the analytics and an aggregated data set behind them? >> Right. Well, we created, or have created, the wild west: we gave them tools and said, go forth and make something out of it. Then we started having this decentralization of all the tools, and when we finally gave them the big tools, the quote unquote big data tools that process billions of records, it still was the wild west, but at least we got them centralized on certain tools. So we were able to at least standardize on the tool set, standardize on the data environment, so that when they're working in that space, we get to go, well, what are you working on? How are you working on that? What type of data are you working with? And how do we bring that back as a process, so that we can say, you did something on chat data? Great! Bob over here, he likes to work with that chat data. So there's that exposure and transparency because of this centralization of data. Now, new tools are adding on top of that: data catalogs, and tools that make it so you can actually record that known information in an all-in-one, wiki-like interface.
So we're trying to add more around putting the right permissions on top of that data, cataloging it in some way with either these worksheets or these information management tools, so that if you're starting to deal with privacy data, you've got a flag from its ingest all the way to the end. >> Right. >> But more controls are being seen as a way that a business is improving its maturity. >> Yeah. Now, the good news, bad news is, more and more of the actual interactions are electronic. People aren't picking up the phone as much as they're engaging with the company via a web browser, or more and more a mobile browser, a mobile app, whatever. So now the good news is, you can track all that. The bad news is, you can track all that. So as we add more complexity, there's this other little thing that everybody wants to do now, which is real time, right? With Kafka and Flink and Spark and all these new technologies that enable you to basically see all the data as it's flowing, versus a sampling of the data from the past: a whole new opportunity, and a whole new challenge. So how are you seeing it, and how are you gonna try to take advantage of that opportunity as well as address that challenge in your world? >> Well, in my data science world, I've said, hey, give me some more data, keep on going; but when I have to put on the data sheriff hat, I'm now having to ask the executives and our stakeholders, why streaming? Why do you really need to have all of this? >> It's the newest shiny toy. >> New shiny toy! So when you talk to a stakeholder and you say, you need a shiny toy, great, I can get you that shiny toy. But I need an outcome. I need a value. And that helps me in tempering the next statement I give to them: you want streaming, or you want real-time data? It's gonna cost you three X. Are you gonna pay for it? Great. Here's my shiny toy.
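The batch-versus-streaming distinction being priced out here comes down to one mechanical difference: a streaming job maintains its answer incrementally as each event arrives, instead of recomputing later over stored history. A toy illustration as a plain Python generator (not the actual Kafka, Flink, or Spark APIs):

```python
from collections import deque

def windowed_average(events, window: int = 3):
    """Yield a running average over the last `window` events,
    updated as each new value streams in."""
    buf = deque(maxlen=window)  # older events fall out automatically
    for value in events:
        buf.append(value)
        yield sum(buf) / len(buf)

# The answer is available after every event, not just at the end of a batch.
print(list(windowed_average([10, 20, 30, 40])))  # [10.0, 15.0, 20.0, 30.0]
```

Keeping that per-event state ordered, highly available, and fault tolerant at scale is where the "three X" cost tends to come from.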
But yes, with the influx of all of this data, you're having to change the architecture, and many times IT traditionally hasn't been able to make that rapid transition, which lends itself to shadow IT, or other folks trying to cobble something together to make that happen. >> And then there's this other pesky little thing that gets in the way, in the form of governance and security. >> Compliance, privacy, and finally marketability. I want you to feel that you're trusting me in handling your data, but also that when I respond back to you, I'm giving you a good customer experience; so called, don't be creepy. >> Right, right. >> Lately, the new compliance rule in Europe, GDPR, is a policy that comes with a, well, a shotgun, that says, if there are violations of this policy, which involves privacy, or the ability for me to be forgotten, of the information that a corporation collects, it can mean four percent of a company's total revenue. >> Right. >> And that's on every instance, so that's creating a lot of motivation for information governance today. >> Right. >> That's the risk, but the rules are around trying to be able to say: where did the data come from? How did the data flow through the system? Who's touched that data? And those information management tools are mostly about the human interaction: hey, what are you guys working on? How are you working on it? What type of assets are you actually driving? So that we can bring it together for that privacy, that compliance, and workflow, and then later, on top of that, the deliverability: how do you want to be contacted? What are the areas that you feel are the ways that we should engage with you? And of course, the thing that gets missed in any optimization exercise, the feedback loop. I get feedback from you that says you're interested in puppies, but your data set says you're interested in cats. How do I make that go into a Customer 360 product?
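The lineage questions raised here (where did the data come from, how did it flow through the system, who touched it) amount to attaching a provenance record, with a privacy flag, to every dataset. A minimal sketch, with invented field names rather than any real information-management tool's model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LineageRecord:
    """Provenance for one dataset: its source, every transformation step,
    everyone who touched it, and a PII flag carried from ingest onward."""
    dataset: str
    source: str
    contains_pii: bool = False
    steps: List[str] = field(default_factory=list)
    touched_by: List[str] = field(default_factory=list)

    def record(self, user: str, step: str) -> None:
        self.touched_by.append(user)
        self.steps.append(step)

    def audit_trail(self) -> str:
        # Answers: where did it come from, how did it flow, who touched it?
        flag = "PII" if self.contains_pii else "non-PII"
        return f"{self.dataset} [{flag}] from {self.source}: " + " -> ".join(self.steps)

rec = LineageRecord("chat_transcripts", "web_chat", contains_pii=True)
rec.record("bob", "cleanse")
rec.record("alice", "join:crm")
print(rec.audit_trail())
```

A trail like this is also what makes a right-to-be-forgotten request tractable: you can locate every step a person's data passed through.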
So, privacy, and coming at it saying, oh, here's an advertisement for hippos, and you go, what do you know about me that I don't know? >> Wrong browser. >> So you chose Datameer along the journey. Why did you choose them, how did you implement them, and how did they address some of these issues that we've just been discussing? >> Datameer was chosen primarily to take on that self-service data preparation layer from the beginning. Dealing with large amounts of online data, we moved from taking the digital intelligence tools that are out there, knowing about browser activities, the cookies that you have to get your identity, and said, we want the entire feed. We want all of that information, because we wanna make it actionable. I don't wanna just give it to a BI report, I wanna turn it into marketing automation. So we got the entire feed of data, and we worked on that with the usual SQL tools, but after a while it wasn't manageable, either because of all of the 450 to 950 columns of data, or the fact that there were multiple teams working on it and I had no idea what they were able to do. So I couldn't share in that value, I couldn't reuse the insights that they could have. So Datameer allowed for a visual interface that was not in a coding language, that allowed people to start putting all of their work inside one interface, where they didn't have to worry about saving it up to the server; it was all being done inside one environment. So it could take not only the digital data but the Salesforce CRM data, marry them together, and let people work with it. And it broadened into the other areas, again allowing that crowdsourcing of other people's analytics. Why? Mostly because the state we are in around IT is an inability to change rapidly, at least for us, in our field. >> Right. >> The biggest problem we had was there wasn't a scheduler.
We didn't have the ability to get value out of our work without having someone to press the button and run it, and if they ran it, it took eight hours; they'd walk away, it would fail, and you had to go back and do it all over again. >> Oh yeah. >> So Datameer allows us to have that self-service interface, with management that IT could agree upon, to let us have our own lab environment and execute our work. >> So what were the results when you suddenly gave people access to this tool? I mean, were they receptive, did you have to train them a lot? Did some people just get it, and some people just don't, they don't wanna act on data? What were kind of the real-world results of rolling this out within the population? >> Real-world results allowed us to get ten million dollars in uplift in our marketing activities across multiple channels. >> Ten million dollars in uplift? How did you measure that? >> That was measured, first, through the operating expenses, by not sending that work outside. Some of the management of the data was sent outside, and that team built their own models off of it; we said, we should be able to drink our own champagne. Second, it was on the uplift of a direct mail and email campaign: having a better response rate, and generally not sending out a bunch of app store messages that we weren't needing to. And then turning that into a list that could be sent out to our email and direct mail vendors, to say, this is what we believe this account or contact is engaged with on the site. Give those a little bit more context. So we added that in, so that we were hopefully resonating a better message. >> Right. >> And where did you start? What was the easiest way to provide an opportunity for people new to this type of tooling to have success?
>> Mostly it was taking pre-doctored worksheets, or already pre-packaged output. One of the challenges that we had was people saying, well, I don't wanna work in a visual language; while they're users of tools like Tableau or Qlik, and others that are happy to drag-and-drop in their data, many of the data workers, the tried-and-true, are saying, I wanna write it in SQL. >> Mm hm. >> So we had to give at least that last-mile analytical data set to them and say, okay, yeah, go ahead and move it over to your SQL environment, move it over into the space that you feel comfortable and confident to control, but let's come on back and we'll translate it back to this tool. We'll show you how easy it was to go from working with IT, which would take months, to doing it yourself, which would take weeks, and the processing and the cost of your siloed, shadow IT environment will go down to days. We were able to show them that acceleration of time to market of their data. >> What was your biggest surprise? An individual user, an individual use case, something that really you just didn't see coming, that's kind of a pleasant, you know, law of unintended consequences on the positive side. >> That was such a wide adoption. I mean honestly, coming back from the data science background, we thought it would just be, bring your data in, throw it on out there, and we're done. We went from maybe about 20 large datasets of AdTech and Martech (advertising technology, marketing technology) data, to CRM information, order activity, and many other categories, just within marketing alone, and I think perhaps the other big ah-ha moment was, since we brought in other divisions' data, those teams came in and said, hey, we can use this too. >> Right. >> The adoption really surprised me, that you would have people say, oh, I can work with this, I have this freedom to work with this data. >> Right, right.
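The ten-million-dollar figure described earlier combines cost avoidance with campaign uplift, that is, the response rate of a targeted list over a holdout baseline. A hypothetical calculation (all numbers invented for illustration, not the actual figures from this project):

```python
def campaign_uplift(treated_resp: int, treated_n: int,
                    control_resp: int, control_n: int) -> float:
    """Absolute uplift: treated response rate minus holdout response rate."""
    return treated_resp / treated_n - control_resp / control_n

def incremental_value(uplift: float, audience: int, value_per_resp: float) -> float:
    """Translate uplift into dollars across the whole targeted audience."""
    return uplift * audience * value_per_resp

# Invented example: 6% response in the targeted group vs 4% in the holdout.
up = campaign_uplift(1200, 20_000, 800, 20_000)
print(round(up, 3))
print(incremental_value(up, 500_000, 1000.0))
```

Measuring against a concurrent holdout, rather than against last year's campaign, is what separates real uplift from seasonality.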
>> Well, we see it time and time again; it's a recurring theme of all the things we cover, which is, you know, a really big piece of the innovation story is giving more people access to more data, and the tools to actually manipulate it, so that you can unlock that brain power, as opposed to keeping it with the data scientists on Mahogany Row and the super-big brains. So it sounds like that really validates that whole hypothesis. >> I went through reviewing, hands-on, 11 different tools when I chose Datameer. This was everything from big-name companies to small start-up companies that have wild artificial intelligence slogans in their marketing material, and we chose it mostly because it had the right fit as an end-to-end approach. It had the scheduler, it had the visual interface, it had enough management and other capabilities that IT would leave us alone. Some of the other products that we were looking at gave you Pig-El-Lee to work with data, or would allow you to schedule data, but they never came all together. And for the value we get out of it, we needed to have something all together. >> Right. Well, Jeff, thanks for taking a few minutes and sharing your story, really appreciate it, and it sounds like it was a really successful project. >> It was! >> All right. He's Jeff Weidner, I'm Jeff Frick, you're watching theCube from Palo Alto. Thanks for watching.
Mike Cordano, Western Digital | Western Digital the Next Decade of Big Data 2017
>> Announcer: Live from San Jose, California, it's The Cube. Covering Innovating to Fuel the Next Decade of Big Data. Brought to you by Western Digital. >> Hey, welcome back everybody. Jeff Frick here with The Cube. We're at the Western Digital headquarters in San Jose, the Great Oaks Campus, a really historic place in the history of Silicon Valley and computing. It's The Innovating to Fuel the Next Generation of Big Data event with Western Digital. We're really excited to be joined by our next guest, Mike Cordano. He's the president and chief operating officer of Western Digital. Mike, great to see you. >> Great to see you as well. Happy you guys could be here. It's an exciting day. >> Absolutely. First off, I think the whole merger thing is about done, right? That's got to feel good. >> Yeah, it's done, but there's legs to it, right? So we've combined these companies now, three of them, three large ones, so obviously Western Digital and Hitachi Global Storage, now we've added SanDisk into one Western Digital, so we're all together. Obviously more to do, as you expect in a large scale integration. There will be a year or two of bringing all those business processes and systems together, but I got to say, the teams are coming together great, showing up in our financial performance and our product execution, so things are really coming together. >> Yeah, not an easy task by any stretch of the imagination. >> No, not easy, but certainly a compliment to our team. I mean, we've got great people. You know, like anything, if you can harness the capabilities of your team, there's a lot you can accomplish, and it really is a compliment to the team. >> Excellent. Well, congratulations on that, and talking a bit about this event here today, you've even used "Big Data" in the title of the event, so you guys are obviously in a really unique place, Western Digital. You make systems, big systems. 
You also make the media that feeds a lot of other people's systems, but as big data grows, the demand for data grows, and it's got to live somewhere, so you're sitting right at the edge where this stuff's got to sit. >> Yeah, that's right, and it's central to our strategy, right? So if you think about it, there's three fundamental technologies that we think are just inherent in all of the evolution of compute and IT architecture. Obviously, there is compute, there is storage or memory, and then there's sort of movement, or interconnect. We obviously live in the storage or memory node, and we have a very broad set of capabilities, all the way from rotating magnetic media, which was our heritage, now including non-volatile memory and flash, and that's just foundational to everything that is going to come, and as you said, we're not going to stop there. It's not just a devices or components company; we're going to continue to innovate above that into platforms and systems, and why that becomes important to us is there's a lot of technology innovation we can do that enhances the offering we can bring to market when we control the entire technology stack. >> Right. Now, we've had some other guests on, and people can get more information on the nitty-gritty details of the announcement today, the main announcement. Basically, in a nutshell, enabling you to get a lot more capacity in hard drives. But I thought in your opening remarks this morning, there were some more high-level things I wanted to dig into with you, and specifically, you made an analogy of the data economy, and compared it to the petroleum economy. I've never...
A lot of times, they talk about big data, but no one really talks about it, that I've heard, in those terms, because when you think about the petroleum economy, it's so much more than fuel and cars, and the second-order impacts, and the third-order impacts on society are tremendous, and you're basically saying, "We're going to "do this all over again, but now it's based on data." >> Yeah, that's right, and I think it puts it into a form that people can understand, right? I think it's well-proven what happened around petroleum, so the discovery of petroleum, and then the derivative industries, whether it be automobiles, whether it be plastics, you pick it, the entire economy revolved around, and, to some degree, still revolves around petroleum. The same thing will occur around data. You're seeing it with investments, you hear now things like machine learning, or artificial intelligence, that is all ways to transform and mine data to create value. >> Right. >> And we're going to see industries change rapidly. Autonomous cars, that's going to be enabled by data, and capabilities here, so pick your domain. There's going to be innovation across a lot of fronts, across a lot of traditional vertical industries, that is all going to be about data and driven by data. >> It's interesting what Janet, Doctor Janet George talked about too a little bit is the types of data, and the nozzles of the data is also evolving very quickly from data at rest to data in motion, to real-time analytics, to, like you said, the machine learning and the AI, which is based on modeling prior data, but then ingesting new data, and adjusting those models so even the types and the rate and the speed of the data is under dramatic change right now. >> Yeah, that's right, and I think one of the things that we're helping enable is you kind of get to this concept of what do you need to do to do what you describe? There has to be an infrastructure there that actually enables it. 
So, when you think about the scale of data we're dealing with, that's one thing that we're innovating around, then the issue is, how do you allow multiple applications to simultaneously access and update and transform that? Those are all problems that need to be solved in the infrastructure to enable things like AI, right? And so, where we come into play, is creating that infrastructure layer that actually makes that possible. The other thing I talked about briefly in the Q and A was, think about the problem of a future where the data set is just too large to actually move it in a substantive way to the compute. We actually have to invert that model over time architecturally, and bring the compute to the data, right? Because it becomes too complicated and too expensive to move from the storage layer up to compute and back, right? That is a complex operation. That's why those three pillars of technology are so important. >> And you've talked, and we're seeing in the Cloud right, because this continuing kind of atomization, atomic, not automatic, but making these more atomic. A smaller unit that the Cloud has really popularized, so you need a lot, you need a little, really, by having smaller bits and bytes, it makes that that much more easy. But another concept that you delved into a little was fast data versus big data, and clearly flash has been the bright, shiny object for the last couple years, and you guys play in that market as well, but it is two very different ways to think of the data, and I thought the other statistic that was shared is you know, the amount of data coming off of the machines and people dwarfs the business data, which has been the driver of IT spend for the last several decades. >> Yeah, no, that's right, and sort of that... You think about that, and the best analogy is a broader definition of IOT, right? 
Where you've got all of these sensors, whether it be camera sensors, because that's just a sensor creating an image or a video, or whether it's more industrial; you've got all these sources of data, and they're going to proliferate at an exponential rate, and so will our ability to aggregate that in some sort of an organized way and then act upon it. Again, let's use the autonomous car as the example. You've got all these sensors that are in constant motion. You've got to be able to aggregate the data and make decisions on it at the edge, so that's not something... You can't deal with latency up to the Cloud and back if it's an automobile and it needs to make an instantaneous decision, so you've got to create that capability locally, and so when you think about the evolution of all this, it's really the integration of the Cloud, which, as Janet talked about, is the ability to tap into this historical or legacy data to help inform a decision, but then there are things happening out at the edge that are real time, and you have to have the capability to ingest the content, make a decision on it very quickly, and then act on it. >> Right. There's a great example. We went to the autonomous... just navigation for autonomous vehicles. It's its own subset, which I think Goldman Sachs said will be a seven billion dollar industry in the not-too-distant future, and the great example of this combination of the big data and the live data is when they're actually working on the road. So you've got maps that tell you, and are updated with, kind of what the road looks like, but on Tuesday they were shifting the lane, and that particular lane now has cones in it, so the combination of the two is such a powerful thing. >> That's right. >> I want to dive into another topic we talked about, which is really architecting for the future. Unlike oil, data doesn't get consumed and become no longer available, right?
It's a reusable asset, and you talked about the classic stovepiping of data within an application-centric world, where now you want that data available for multiple applications: a very different architecture, to be able to use it across many fronts, some of which you don't even know yet. >> That's right. I think that's a key point. One of the things that CEOs, or CIOs I should say, are realizing when we talk to them is, to the extent you can enable a cost-effective mechanism for me to store and keep everything, I don't know how I'll derive value from it some time in the future, because as applications evolve, we're finding new insights into what can help drive decisions or innovation, or, to take it to health care, some sort of innovation that cures disease. That's one of the things that everybody wants to do: I want to be able to aggregate everything. If I can do that cost effectively enough, I'll find a way to get value out of it over time, and that's something where, when we're thinking about big data and what we talked about today, that's central to the idea, and enabling it. >> Right, and digital transformation, right, the hot buzzword, but we hear, time and time again, that such a big piece of that is democratization. Democratization of the data, so more people have access to it; democratization of the tools to manipulate that data, not just the Mahogany Row super smart people; and then having a culture that lets people actually try, experiment, fail fast, and there's a lot of innovation that would be unlocked right within your four walls that probably is not being tapped into. >> Well, that's right, and innovation, an innovation culture, is something that we're working hard at, right? So if you think about Western Digital, you might think of us, legacy Western Digital, as sort of a fast-following, very operational-centric company.
We're still good at those things, but over the last five years, we've really pushed this notion of innovation, and really sort of pressed in to becoming more influential in those future architectures. That drives a culture where, if we think about the technical community, if we create the right sort of mix of opportunity and appetite for some risk, that allows the best creativity to come out of our technical... Innovating along these lines. >> Right, I'll give you the last word. I can't believe we're going to turn the calendar here on 2017, which is a little scary. As you look forward to 2018, what are some of your top priorities? What are you going to be working on as we come into the new calendar year? >> Yeah, so as we look into 2018 and beyond, we really want to drive this continued architectural shift. You'll see us be very active, and I think you talked about it, you'll see us getting increasingly active in this democratization. So we're going to have to figure out how we engage the broader open-source development world, whether it be hardware or software. We agree with that mantra, and we will support that. Obviously we can do unique development, but with some hooks and keys so that we can drive a broader ecosystem movement, so that's something that's central to us. And one last word would be, one of the things that Martin Fink has talked about, which is really part of our plans as we go into the new year, is really this inverting of the model, where we want to continue to drive an architecture that brings compute to the storage and enables some things that just can't be done today. >> All right, well Mike Cordano, thanks for taking a few minutes, and congratulations on the terrific event. >> Thank you. Appreciate it. >> He's Mike Cordano, I'm Jeff Frick, you're watching The Cube. We're at Western Digital headquarters in San Jose, Great Oaks Campus; it's historic. Check it out. Thanks for watching.
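The "compute to the storage" inversion Cordano closes with is the same intuition behind predicate pushdown: evaluate the filter where the data lives and ship only the matches, rather than hauling the whole set to the compute tier. A toy model of the difference, with invented sizes and no relation to Western Digital's actual designs:

```python
def ship_then_filter(records, predicate):
    """Naive model: move every record to the compute tier, filter there.
    Returns (records_moved, matches)."""
    moved = list(records)  # the whole set crosses the wire
    return len(moved), [r for r in moved if predicate(r)]

def filter_at_storage(records, predicate):
    """Pushdown model: run the predicate where the data lives and
    move only the matches."""
    matches = [r for r in records if predicate(r)]
    return len(matches), matches

data = range(1_000)
is_hot = lambda r: r % 100 == 0   # pretend 1-in-100 records are interesting
moved_a, hits_a = ship_then_filter(data, is_hot)
moved_b, hits_b = filter_at_storage(data, is_hot)
print(moved_a, moved_b)  # same answer, far less data moved in the second case
```

Same answer either way; the pushdown version simply moves two orders of magnitude less data in this example.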