Image Title

Search Results for Siva:

Kumaran Siva, AMD | VMware Explore 2022


 

>>Good morning, everyone. Welcome to the cubes day two coverage of VMware Explorer, 2022 live from San Francisco. Lisa Martin here with Dave Nicholson. We're excited to kick off day two of great conversations with VMware partners, customers it's ecosystem. We've got a V an alumni back with us Kumer on Siva corporate VP of business development from AMD joins us. Great to have you on the program in person. Great >>To be here. Yes. In person. Indeed. Welcome. >>So the great thing yesterday, a lot of announcements and B had an announcement with VMware, which we will unpack that, but there's about 7,000 to 10,000 people here. People are excited, ready to be back, ready to be hearing from this community, which is so nice. Yesterday am B announced. It is optimizing AMD PON distributed services card to run on VMware. Bsphere eight B for eight was announced yesterday. Tell us a little bit about that. Yeah, >>No, absolutely. The Ben Sando smart neck DPU. What it allows you to do is it, it provides a whole bunch of capabilities, including offloads, including encryption DEC description. We can even do functions like compression, but with, with the combination of VMware project Monterey and, and Ben Sando, we we're able to do is even do some of the vSphere, actual offloads integration of the hypervisor into the DPU card. It's, it's pretty interesting and pretty powerful technology. We're we're pretty excited about it. I think this, this, this could, you know, potentially, you know, bring some of the cloud value into, in terms of manageability, in terms of being able to take care of bare metal servers and also, you know, better secure infrastructure, you know, cloudlike techniques into the, into the mainstream on-premises enterprise. >>Okay. Talk a little bit about the DPU data processing unit. They talked about it on stage yesterday, but help me understand that versus the CPU GPU. >>Yeah. So it's, it's, it's a different, it's a different point, right? So normally you'd, you'd have the CPU you'd have we call it dumb networking card. Right. And I say dumb, but it's, it's, you know, it's just designed to go process packets, you know, put and put them onto PCI and have the, the CPU do all of the, kind of the, the packet processing, the, the virtual switching, all of those functions inside the CPU. What the DPU allows you to do is, is actually offload a bunch of those functions directly onto the, onto the deep view card. So it has a combination of these special purpose processors that are programmable with the language called P four, which is one, one of the key things that pan Sando brings. Here's a, it's a, it's a real easy to program, easy to use, you know, kind of set so that not some of, some of our larger enterprise customers can actually go in and, you know, do some custom coding depending on what their network infrastructure looks like. But you can do things like the V switch in, in the, in the DPU, not having to all have that done on the CPU. So you freeze up some of the CPU course, make sure, make sure infrastructure run more efficiently, but probably even more importantly, it provides you with more, with greater security, greater separation between the, between the networking side and the, the CPU side. >>So, so that's, that's a key point because a lot of us remember the era of the tonic TCP, I P offload engine, Nick, this isn't simply offloading CPU cycles. This is actually providing a sort of isolation. So that the network that's right, is the network has intelligence that is separate from the server. Is that, is that absolutely key? Is that absolutely >>Key? Yeah. That's, that's a good way of looking at it. Yeah. And that's, that's, I mean, if you look at some of the, the, the techniques used in the cloud, the, you know, this, this, this in fact brings some of those technologies into, into the enterprise, right. So where you are wanting to have that level of separation and management, you're able to now utilize the DPU card. So that's, that's a really big, big, big part of the value proposition, the manageability manageability, not just offload, but you know, kind of a better network for enterprise. Right. >>Right. >>Can you expand on that value proposition? If I'm a customer what's in this for me, how does this help power my multi-cloud organization? >>Yeah. >>So I think we have some, we actually have a number of these in real customer use cases today. And so, you know, folks will use, for example, the compression and the, sorry, the compression and decompression, that's, that's definitely an application in the storage side, but also on the, just on the, just, just as a, as a DPU card in the mainstream general purpose, general purpose server server infrastructure fleet, they're able to use the encryption and decryption to make sure that their, their, their infrastructure is, is kind of safe, you know, from point to point within the network. So every, every connected, every connection there is actually encrypted and that, that, you know, managing those policies and orchestrating all of that, that's done to the DPU card. >>So, so what you're saying is if you have DPU involved, then the server itself and the CPUs become completely irrelevant. And basically it's just a box of sheet metal at that point. That's, that's a good way of looking at that. That's my segue talking about the value proposition of the actual AMD. >>No, absolutely. No, no. I think, I think, I think the, the, the CPUs are always going to be central in this and look. And so, so I think, I think having, having the, the DPU is extremely powerful and, and it does allow you to have better infrastructure, but the key to having better infrastructure is to have the best CPU. Well, tell >>Us, tell >>Us that's what, tell us us about that. So, so I, you know, this is, this is where a lot of the, the great value proposition between VMware and AMD come together. So VMware really allows enterprises to take advantage of these high core count, really modern, you know, CPU, our, our, our, our epic, especially our Milan, our 7,003 product line. So to be able to take advantage of 64 course, you know, VMware is critical for that. And, and so what they, what they've been able to do is, you know, know, for example, if you have workloads running on legacy, you know, like five year old servers, you're able to take a whole bunch of those servers and consolidate down, down into a single node, right. And the power that VMware gives you is the manageability, the reliability brings all of that factors and allows you to take advantage of, of the, the, the latest, latest generation CPUs. >>You know, we've actually done some TCO modeling where we can show, even if you have fully depreciated hardware, like, so it's like five years old plus, right. And so, you know, the actual cost, you know, it's already been written off, but the cost just the cost of running it in terms of the power and the administration, you know, the OPEX costs that, that are associated with it are greater than the cost of acquiring a new set of, you know, a smaller set of AMD servers. Yeah. And, and being able to consolidate those workloads, run VMware, to provide you with that great, great user experience, especially with vSphere 8.0 and the, and the hooks that VMware have built in for AMD AMD processors, you actually see really, really good. It's just a great user experience. It's also a more efficient, you know, it's just better for the planet. And it's also better on the pocketbook, which is, which is, which is a really cool thing these days, cuz our value in TCO translates directly into a value in terms of sustainability. Right. And so, you know, from, from energy consumption, from, you know, just, just the cost of having that there, it's just a whole lot better >>Talk about on the sustainability front, how AMD is helping its customers achieve their sustainability goals. And are you seeing more and more customers coming to you saying, we wanna understand what AMD is doing for sustainability because it's important for us to work with vendors who have a core focus on it. >>Yeah, absolutely. You know, I think, look, I'll be perfectly honest when we first designed our CPU, we're just trying to build the biggest baddest thing that, you know, that, that comes out in terms of having the, the, the best, the, the number, the, the largest number of cores and the best TCO for our customers, but what it's actually turned out that TCO involves energy consumption. Right. And, and it involves, you know, the whole process of bringing down a whole bunch of nodes, whole bunch of servers. For example, we have one calculation where we showed 27, you know, like I think like five year old servers can be consolidated down into five AMD servers that, that ratio you can see already, you know, huge gains in terms of sustainability. Now you asked about the sustainability conversation. This I'd say not a week goes by where I'm not having a conversation with, with a, a, a CTO or CIO who is, you know, who's got that as part of their corporate, you know, is part of their corporate brand. And they want to find out how to make their, their infrastructure, their data center, more green. Right. And so that's, that's where we come in. Yeah. And it's interesting because at least in the us money is also green. So when you talk about the cost of power, especially in places like California, that's right. There's, there's a, there's a natural incentive to >>Drive in that direction. >>Let's talk about security. You know, the, the, the threat landscape has changed so dramatically in the last couple of years, ransomware is a household word. Yes. Ransomware attacks happened like one every 11 seconds, older technology, a little bit more vulnerable to internal threats, external threats. How is AMD helping customers address the security fund, which is the board level conversation >>That that's, that's, that's a, that's a great, great question. Look, I look at security as being, you know, it's a layered thing, right? I mean, if you talk to any security experts, security, doesn't, you know, there's not one component and we are an ingredient within the, the greater, you know, the greater scheme of things. A few things. One is we have partnered very closely with the VMware. They have enabled our SUV technology, secure encrypted virtualization technology into, into the vSphere. So such that all of the memory transactions. So you have, you have security, you know, at, you know, security, when you store store on disks, you have security over the network and you also have security in the compute. And when you go out to memory, that's what this SUV technology gives you. It gives you that, that security going, going in your, in your actual virtual machine as it's running. And so the, the, we take security extremely seriously. I mean, one of the things, every generation that you see from, from AMD and, and, you know, you have seen us hit our cadence. We do upgrade all of the security features and we address all of the sort of known threats that are out there. And obviously this threats, you know, kind of coming at us all the time, but our CPUs just get better and better from, from a, a security stance. >>So shifting gears for a minute, obviously we know the pending impossible acquisition, the announced acquisition of VMware by Broadcom, AMD's got a relationship with Broadcom independently, right? No, of course. What is, how's that relationship? >>Oh, it's a great relationship. I mean, we, we, you know, they, they have certified their, their, their niche products, their HPA products, which are utilized in, you know, for, for storage systems, sand systems, those, those type of architectures, the hardcore storage architectures. We, we work with them very closely. So they, they, they've been a great partner with us for years. >>And you've got, I know, you know, we are, we're talking about current generation available on the shelf, Milan based architecture, is that right? That's right. Yeah. But if I understand correctly, maybe sometime this year, you're, you're gonna be that's right. Rolling out the, rolling out the new stuff. >>Yeah, absolutely. So later this year, we've already, you know, we already talked about this publicly. We have a 96 core gen platform up to 96 cores gen platform. So we're just, we're just taking that TCO value just to the next level, increasing performance DDR, five CXL with, with memory expansion capability. Very, very neat leading edge technology. So that that's gonna be available. >>Is that NextGen P C I E, or has that shift already been made? It's >>Been it's NextGen. P C I E P C E gen five. Okay. So we'll have, we'll have that capability. That'll be, that'll be out by the end of this year. >>Okay. So those components you talk about. Yeah. You know, you talk about the, the Broadcom VMware universe, those components that are going into those new slots are also factors in performance and >>Yeah, absolutely. You need the balance, right? You, you need to have networking storage and the CPU. We're very cognizant of how to make sure that these cores are fed appropriately. Okay. Cuz if you've just put out a lot of cores, you don't have enough memory, you don't have enough iOS. That's, that's the key to, to, to, you know, our approach to, to enabling performance in the enterprise, make sure that the systems are balanced. So you get the experience that you've had with, let's say your, you know, your 12 core, your 16 core, you can have that same experience in the 96 core in a node or 96 core socket. So maybe a 192 cores total, right? So you can have that same experience in, in a tune node in a much denser, you know, package server today or, or using Melan technology, you know, 128 cores, super, super good performance. You know, its super good experience it's, it's designed to scale. Right. And especially with VMware as, as our infrastructure, it works >>Great. I'm gonna, Lisa, Lisa's got a question to ask. I know, but bear with me one bear >>With me. Yes, sir. >>We've actually initiated coverage of this question of, you know, just hardware matter right anymore. Does it matter anymore? Yeah. So I put to you the question, do you think hardware still matters? >>Oh, I think, I think it's gonna matter even more and more going forward. I mean just, but it's all cloud who cares just in this conversation today. Right? >>Who cares? It's all cloud. Yeah. >>So, so, so definitely their workloads moving to the cloud and we love our cloud partners don't get me wrong. Right. But there are, you know, just, I've had so many conversations at this show this week about customers who cannot move to the cloud because of regulatory reasons. Yeah. You know, the other thing that you don't realize too, that's new to me is that people have depreciated their data centers. So the cost for them to just go put in new AMD servers is actually very low compared to the cost of having to go buy, buy public cloud service. They still want to go buy public cloud services and that, by the way, we have great, great, great AMD instances on, on AWS, on Google, on Azure, Oracle, like all of our major, all of the major cloud providers, support AMD and have, have great, you know, TCO instances that they've, they've put out there with good performance. Yeah. >>What >>Are some of the key use cases that customers are coming to AMD for? And, and what have you seen change in the last couple of years with respect to every customer needing to become a data company needing to really be data driven? >>No, that's, that's also great question. So, you know, I used to get this question a lot. >>She only asks great questions. Yeah. Yeah. I go down and like all around in the weeds and get excited about the bits and the bites she asks. >>But no, I think, look, I think the, you know, a few years ago and I, I think I, I used to get this question all the time. What workloads run best on AMD? My answer today is unequivocally all the workloads. Okay. Cuz we have processors that run, you know, run at the highest performance per thread per per core that you can get. And then we have processors that have the highest throughput and, and sometimes they're one in the same, right. And Ilan 64 configured the right way using using VMware vSphere, you can actually get extremely good per core performance and extremely good throughput performance. It works well across, just as you said, like a database to data management, all of those kinds of capabilities, DevOps, you know, E R P like there's just been a whole slew slew of applications use cases. We have design wins in, in major customers, in every single industry in every, and these, these are big, you know, the big guys, right? >>And they're, they're, they're using AMD they're successfully moving over their workloads without, without issue. For the most part. In some cases, customers tell us they just, they just move the workload on, turn it on. It runs great. Right. And, and they're, they're fully happy with it. You know, there are other cases where, where we've actually gotten involved and we figured out, you know, there's this configuration of that configuration, but it's typically not a, not a huge lift to move to AMD. And that's that I think is a, is a key, it's a key point. And we're working together with almost all of the major ISV partners. Right. And so just to make sure that, that, that they have run tested certified, I think we have over 250 world record benchmarks, you know, running in all sorts of, you know, like Oracle database, SAP business suite, all of those, those types of applications run, run extremely well on AMD. >>Is there a particular customer story that you think really articulates the value of running on AMD in terms of enabling bus, big business outcome, safer a financial services organization or healthcare organization? Yeah. >>I mean we, yeah, there's certainly been, I mean, across the board. So in, in healthcare we've seen customers actually do the, the server consolidation very effectively and then, you know, take advantage of the, the lower cost of operation because in some cases they're, they're trying to run servers on each floor of a hospital. For example, we've had use cases where customers have been able to do that because of the density that we provide and to be able to, to actually, you know, take, take their compute more even to the edge than, than actually have it in the, in those use cases in, in a centralized matter. The another, another interesting case FSI in financial services, we have customers that use us for general purpose. It, we have customers that use this for kind of the, the high performance we call it grid computing. So, you know, you have guys that, you know, do all this trading during the day, they collect tons and tons of data, and then they use our computers to, or our CPUs to just crunch to that data overnight. >>And it's just like this big, super computer that just crunches it's, it's pretty incredible. They're the, the, the density of the CPUs, the value that we bring really shines, but in, in their general purpose fleet as well. Right? So they're able to use VMware, a lot of VMware customers in that space. We love our, we love our VMware customers and they're able to, to, to utilize this, they use use us with HCI. So hyperconverge infrastructure with V VSAN and that's that that's, that's worked works extremely well. And, and, and our, our enterprise customers are extremely happy with that. >>Talk about, as we wrap things up here, what's next for AMD, especially AMD with VMwares VMware undergoes its potential change. >>Yeah. So there there's a lot that we have going on. I mean, I gotta say VMware is one of the, let's say premier companies in terms of, you know, being innovative and being, being able to drive new, new, interesting pieces of technology and, and they're very experimentive right. So they, we have, we have a ton of things going with them, but certainly, you know, driving pin Sando is, is very, it is very, very important to us. Yeah. I think that the whole, we're just in the, the cusp, I believe of, you know, server consolidation becoming a big thing for us. So driving that together with VMware and, you know, into some of these enterprises where we can show, you know, save the earth while we, you know, in terms of reducing power, reducing and, and saving money in terms of TCO, but also being able to enable new capabilities. >>You know, the other part of it too, is this new infrastructure enables new workloads. So things like machine learning, you know, more data analytics, more sophisticated processing, you know, that, that is all enabled by this new infrastructure. So we, we were excited. We think that we're on the precipice of, you know, going a lot of industries moving forward to, to having, you know, the next level of it. It's no longer about just payroll or, or, or enterprise business management. It's about, you know, how do you make your, you know, your, your knowledge workers more productive, right. And how do you give them more capabilities? And that, that is really, what's exciting for us. >>Awesome Cooper. And thank you so much for joining Dave and me on the program today, talking about what AMD, what you're doing to supercharge customers, your partnership with VMware and what is exciting. What's on the, the forefront, the frontier, we appreciate your time and your insights. >>Great. Thank you very much for having me. >>Thank you for our guest and Dave Nicholson. I'm Lisa Martin. You're watching the cube live from VMware Explorer, 22 from San Francisco, but don't go anywhere, Dave and I will be right back with our next guest.

Published Date : Aug 31 2022

SUMMARY :

Great to have you on the program in person. So the great thing yesterday, a lot of announcements and B had an announcement with VMware, I think this, this, this could, you know, potentially, you know, bring some of the cloud value into, but help me understand that versus the CPU GPU. And I say dumb, but it's, it's, you know, it's just designed to go process So that the network that's right, not just offload, but you know, kind of a better network for enterprise. And so, you know, folks will use, for example, the compression and the, And basically it's just a box of sheet metal at that point. the DPU is extremely powerful and, and it does allow you to have better infrastructure, And the power that VMware gives you is the manageability, the reliability brings all of that factors the administration, you know, the OPEX costs that, that are associated with it are greater than And are you seeing more and more customers coming to you saying, And, and it involves, you know, the whole process of bringing down a whole bunch of nodes, How is AMD helping customers address the security fund, which is the board level conversation And obviously this threats, you know, kind of coming at us all the time, So shifting gears for a minute, obviously we I mean, we, we, you know, they, they have certified their, their, their niche products, available on the shelf, Milan based architecture, is that right? So later this year, we've already, you know, we already talked about this publicly. That'll be, that'll be out by the end of this year. You know, you talk about the, the Broadcom VMware universe, that's the key to, to, to, you know, our approach to, to enabling performance in the enterprise, I know, but bear with me one So I put to you the question, do you think hardware still matters? but it's all cloud who cares just in this conversation today. Yeah. But there are, you know, just, I've had so many conversations at this show this week about So, you know, I used to get this question a lot. around in the weeds and get excited about the bits and the bites she asks. Cuz we have processors that run, you know, run at the highest performance you know, running in all sorts of, you know, like Oracle database, SAP business Is there a particular customer story that you think really articulates the value of running on AMD density that we provide and to be able to, to actually, you know, take, take their compute more even So they're able to use VMware, a lot of VMware customers in Talk about, as we wrap things up here, what's next for AMD, especially AMD with VMwares So driving that together with VMware and, you know, into some of these enterprises where learning, you know, more data analytics, more sophisticated processing, you know, And thank you so much for joining Dave and me on the program today, talking about what AMD, Thank you very much for having me. Thank you for our guest and Dave Nicholson.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Lisa MartinPERSON

0.99+

Dave NicholsonPERSON

0.99+

BroadcomORGANIZATION

0.99+

AMDORGANIZATION

0.99+

DavePERSON

0.99+

San FranciscoLOCATION

0.99+

Kumaran SivaPERSON

0.99+

five yearQUANTITY

0.99+

12 coreQUANTITY

0.99+

VMwareORGANIZATION

0.99+

192 coresQUANTITY

0.99+

16 coreQUANTITY

0.99+

96 coreQUANTITY

0.99+

CaliforniaLOCATION

0.99+

five yearsQUANTITY

0.99+

CooperPERSON

0.99+

iOSTITLE

0.99+

7,003QUANTITY

0.99+

OracleORGANIZATION

0.99+

LisaPERSON

0.99+

128 coresQUANTITY

0.99+

yesterdayDATE

0.99+

AWSORGANIZATION

0.99+

MilanLOCATION

0.99+

todayDATE

0.99+

GoogleORGANIZATION

0.99+

this yearDATE

0.98+

Yesterday amDATE

0.98+

fiveQUANTITY

0.98+

one componentQUANTITY

0.98+

eightQUANTITY

0.98+

HPAORGANIZATION

0.98+

each floorQUANTITY

0.98+

oneQUANTITY

0.97+

this weekDATE

0.97+

vSphere 8.0TITLE

0.97+

later this yearDATE

0.97+

day twoQUANTITY

0.97+

10,000 peopleQUANTITY

0.96+

96 coreQUANTITY

0.95+

TCOORGANIZATION

0.95+

2022DATE

0.95+

OneQUANTITY

0.95+

27QUANTITY

0.94+

64 courseQUANTITY

0.94+

SandoORGANIZATION

0.94+

one calculationQUANTITY

0.94+

end of this yearDATE

0.93+

VMwaresORGANIZATION

0.93+

Kumaran Siva, AMD | IBM Think 2021


 

>>from around the globe. It's the >>cube >>With digital coverage of IBM think 2021 brought to you by IBM. Welcome back to the cube coverage of IBM Think 2021. I'm john for the host of the cube here for virtual event Cameron Siva who's here with corporate vice president with a M. D. Uh CVP and business development. Great to see you. Thanks for coming on the cube. >>Nice to be. It's an honor to be here. >>You know, love A. M. D. Love the growth, love the processors. Epic 7000 and three series was just launched. Its out in the field. Give us a quick overview of the of the of the processor, how it's doing and how it's going to help us in the data center and the edge >>for sure. No this is uh this is an exciting time for A. M. D. This is probably one of the most exciting times uh to be honest and in my 2020 plus years of uh working in sex industry, I think I've never been this excited about a new product as I am about the the third generation ethic processor that were just announced. Um So the Epic 7003, what we're calling it a series processor. It's just a fantastic product. We not only have the fastest server processor in the world with the AMG Epic 7763 but we also have the fastest CPU core so that the process of being the complete package to complete socket and then we also the fastest poor in the world with the the Epic um 72 F three for frequency. So that one runs run super fast on each core. And then we also have 64 cores in the CPU. So it's it's addressing both kind of what we call scale up and scale out. So it's overall overall just just an enormous, enormous product line that that I think um you know, we'll be we'll be amazing within within IBM IBM cloud. Um The processor itself includes 256 megabytes of L three cache, um you know, cash is super important for a variety of workloads in the large cache size. We have shown our we've seen scale in particular cloud applications, but across the board, um you know, database, uh java all sorts of things. This processor is also based on the Zen three core, which is basically 19% more instructions per cycle relative to ours, N two. So that was the prior generation, the second generation Epic Force, which is called Rome. So this this new CPU is actually quite a bit more capable. It runs also at a higher frequency with both the 64 4 and the frequency optimized device. Um and finally, we have um what we call all in features. So rather than kind of segment our product line and charge you for every little, you know, little thing you turn on or off. We actually have all in features includes, you know, really importantly security, which is becoming a big, big team and something that we're partnering with IBM very closely on um and then also things like 628 lanes of pc I E gen four, um are your faces that grew up to four terabytes so you can do these big large uh large um in memory databases. The pc I interfaces gives you lots and lots of storage capability so all in all super products um and we're super excited to be working with IBM honest. >>Well let's get into some of the details on this impact because obviously it's not just one place where these processes are going to live. You're seeing a distributed surface area core to edge um, cloud and hybrid is now in play. It's pretty much standard now. Multi cloud on the horizon. Company's gonna start realizing, okay, I gotta put this to work and I want to get more insights out of the data and civilian applications that are evolving on this. But you guys have seen some growth in the cloud with the Epic processors, what can customers expect and why our cloud providers choosing Epic processors, >>you know, a big part of this is actually the fact that I that am be um delivers upon our roadmap. So we, we kind of do what we say and say what we do and we delivered on time. Um so we actually announced I think was back in august of 2019, their second generation, Epic part and then now in March, we are now in the third generation. Very much on schedule. Very much um, intern expectations and meeting the performance that we had told the industry and told our customers that we're going to meet back then. So it's a really super important pieces that our customers are now learning to expect performance, jenin, Jenin and on time from A. M. D, which is, which is uh, I think really a big part of our success. The second thing is, I think, you know, we are, we are a leader in terms of the core density that we provide and cloud in particular really values high density. So the 64 cores is absolutely unique today in the industry and that it has the ability to be offered both in uh bare metal. Um, as we have been deployed in uh, in IBM cloud and also in virtualized type environment. So it has that ability to spend a lot of different use cases. Um and you can, you know, you can run each core uh really fast, But then also have the scale out and then be able to take advantage of all 64 cores. Each core has two threads up to 128 threads per socket. It's a super powerful uh CPU and it has a lot of value for um for the for the cloud cloud provider, they're actually about over 400 total instances by the way of A. M. D processors out there. And that's all the flavors, of course, not just that they're generation, but still it's it's starting to really proliferate. We're trying to see uh M d I think all across the cloud, >>more cores, more threads all goodness. I gotta ask you, you know, I interviewed Arvin the ceo of IBM before he was Ceo at a conference and you know, he's always been, I know him, he's always loved cloud, right? So, um, but he sees a little bit differently than just being like copying the clouds. He sees it as we see it unfolding here, I think Hybrid. Um, and so I can almost see the playbook evolving. You know, Red has an operating system, Cloud and Edge is a distributed system, it's got that vibe of a system architecture, almost got processors everywhere. Could you give us a sense of the over an overview of the work you're doing with IBM Cloud and what a M. D s role is there? And I'm curious, could you share for the folks watching too? >>For sure. For sure. By the way, IBM cloud is a fantastic partner to work with. So, so, first off you talked about about the hybrid, hybrid cloud is a really important thing for us and that's um that's an area that we are definitely focused in on. Uh but in terms of our specific joint partnerships and we do have an announcement last year. Um so it's it's it's somewhat public, but we are working together on Ai where IBM is a is an undisputed leader with Watson and some of the technologies that you guys bring there. So we're bringing together, you know, it's kind of this real hard work goodness with IBM problems and know how on the AI side. In addition, IBM is also known for um you know, really enterprise grade, yeah, security and working with some of the key sectors that need and value, reliability, security, availability, um in those areas. Uh and so I think that partnership, we have quite a bit of uh quite a strong relationship and partnership around working together on security and doing confidential computer. >>Tell us more about the confidential computing. This is a joint development agreement, is a joint venture joint development agreement. Give us more detail on this. Tell us more about this announcement with IBM cloud, an AMG confidential computing. >>So that's right. So so what uh you know, there's some key pillars to this. One of this is being able to to work together, define open standards, open architecture. Um so jointly with an IBM and also pulling in something assets in terms of red hat to be able to work together and pull together a confidential computer that can so some some key ideas here, we can work with work within a hybrid cloud. We can work within the IBM cloud and to be able to provide you with, provide, provide our joint customers are and customers with uh with unprecedented security and reliability uh in the cloud, >>what's the future of processors, I mean, what should people think when they expect to see innovation? Um Certainly data centers are evolving with core core features to work with hybrid operating model in the cloud. People are getting that edge relationship basically the data centers a large edge, but now you've got the other edges, we got industrial edges, you got consumers, people wearables, you're gonna have more and more devices big and small. Um what's the what's the road map look like? How do you describe the future of a. M. D. In in the IBM world? >>I think I think R I B M M D partnership is bright, future is bright for sure, and I think there's there's a lot of key pieces there. Uh you know, I think IBM brings a lot of value in terms of being able to take on those up earlier, upper uh layers of software and that and the full stack um so IBM strength has really been, you know, as a systems company and as a software company. Right, So combining that with the Andes Silicon, uh divided and see few devices really really is is it's a great combination, I see, you know, I see um growth in uh you know, obviously in in deploying kind of this, this scale out model where we have these very large uh large core count Cpus I see that trend continuing for sure. Uh you know, I think that that is gonna, that is sort of the way of the future that you want cloud data applications that can scale across multi multiple cores within the socket and then across clusters of Cpus with within the data center um and IBM is in a really good position to take advantage of that to go to, to to drive that within the cloud. That income combination with IBM s presence on prem uh and so that's that's where the hybrid hybrid cloud value proposition comes in um and so we actually see ourselves uh you know, playing in both sides, so we do have a very strong presence now and increasingly so on premises as well. And we we partner we were very interested in working with IBM on the on on premises uh with some of some of the key customers and then offering that hybrid connectivity onto, onto the the IBM cloud as well. >>I B M and M. D. Great partnership, great for clarifying and and sharing that insight come, I appreciate it. Thanks for for coming on the cube, I do want to ask you while I got you here. Um kind of a curveball question if you don't mind. As you see hybrid cloud developing one of the big trends is this ecosystem play right? So you're seeing connections between IBM and their and their partners being much more integrated. So cloud has been a big KPI kind of model. You connect people through a. P. I. S. There's a big trend that we're seeing and we're seeing this really in our reporting on silicon angle the rise of a cloud service provider within these ecosystems where hey, I could build on top of IBM cloud and build a great business. Um and as I do that, I might want to look at an architecture like an AMG, how does that fit into to your view as a doing business development over at A. M. D. I mean because because people are building on top of these ecosystems are building their own clouds on top of cloud, you're seeing data. Cloud, just seeing these kinds of clouds, specialty clouds. So I mean we could have a cute cloud on top of IBM maybe someday. So, so I might want to build out a whole, I might be a cloud. So that's more processors needed for you. So how do you see this enablement? Because IBM is going to want to do that, it's kind of like, I'm kind of connecting the dots here in real time, but what's your, what's your take on that? What's your reaction? >>I think, I think that's I think that's right and I think m d isn't, it isn't a pretty good position with IBM to be able to, to enable that. Um we do have some very significant osD partnerships, a lot of which that are leveraged into IBM um such as Red hat of course, but also like VM ware and Nutanix. Um this provide these always V partners provide kind of the base level infrastructure that we can then build upon and then have that have that A P I. And be able to build build um uh the the multi cloud environments that you're talking about. Um and I think that, I think that's right. I think that is that is one of the uh you know, kind of future trends that that we will see uh you know, services that are offered on top of IBM cloud that take advantage of the the capabilities of the platform that come with it. Um and you know, the bare metal offerings that that IBM offer on their cloud is also quite unique um and hyper very performance. Um and so this actually gives um I think uh the the kind of uh call the medic cloud that unique ability to kind of go in and take advantage of the M. D. Hardware at a performance level and at a um uh to take advantage of that infrastructure better than they could in another cloud environments. I think that's that's that's actually very key and very uh one of the one of the features of the IBM problems that differentiates it >>so much headroom there corns really appreciate you sharing that. I think it's a great opportunity. As I say, if you're you want to build and compete. Finally, there's no with the white space with no competition or be better than the competition. So as they say in business, thank you for coming on sharing. Great great future ahead for all builders out there. Thanks for coming on the cube. >>Thanks thank you very much. >>Okay. IBM think cube coverage here. I'm john for your host. Thanks for watching. Mm

Published Date : May 12 2021

SUMMARY :

It's the With digital coverage of IBM think 2021 brought to you by IBM. It's an honor to be here. You know, love A. M. D. Love the growth, love the processors. so that the process of being the complete package to complete socket and then we also the fastest poor some growth in the cloud with the Epic processors, what can customers expect Um and you can, you know, you can run each core uh Um, and so I can almost see the playbook evolving. So we're bringing together, you know, it's kind of this real hard work goodness with IBM problems and know with IBM cloud, an AMG confidential computing. So so what uh you know, there's some key pillars to this. In in the IBM world? in um and so we actually see ourselves uh you know, playing in both sides, Thanks for for coming on the cube, I do want to ask you while I got you here. I think that is that is one of the uh you know, So as they say in business, thank you for coming on sharing. Thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

ArvinPERSON

0.99+

Cameron SivaPERSON

0.99+

MarchDATE

0.99+

19%QUANTITY

0.99+

64 coresQUANTITY

0.99+

each coreQUANTITY

0.99+

Each coreQUANTITY

0.99+

august of 2019DATE

0.99+

628 lanesQUANTITY

0.99+

256 megabytesQUANTITY

0.99+

last yearDATE

0.99+

2020DATE

0.99+

64 coresQUANTITY

0.99+

NutanixORGANIZATION

0.99+

second thingQUANTITY

0.99+

2021DATE

0.99+

two threadsQUANTITY

0.99+

second generationQUANTITY

0.99+

AMDORGANIZATION

0.99+

both sidesQUANTITY

0.98+

OneQUANTITY

0.98+

bothQUANTITY

0.98+

third generationQUANTITY

0.98+

AMGORGANIZATION

0.98+

Epic 7003COMMERCIAL_ITEM

0.97+

JeninPERSON

0.97+

Andes SiliconORGANIZATION

0.97+

Zen threeCOMMERCIAL_ITEM

0.97+

third generationQUANTITY

0.97+

M. D.PERSON

0.94+

four terabytesQUANTITY

0.94+

firstQUANTITY

0.94+

todayDATE

0.94+

one placeQUANTITY

0.94+

EpicORGANIZATION

0.93+

Think 2021COMMERCIAL_ITEM

0.92+

IBM cloudORGANIZATION

0.92+

Epic 7763COMMERCIAL_ITEM

0.91+

oneQUANTITY

0.9+

jeninPERSON

0.9+

three seriesQUANTITY

0.89+

EpicCOMMERCIAL_ITEM

0.88+

A. M.ORGANIZATION

0.85+

A. M.PERSON

0.85+

RedPERSON

0.83+

CeoPERSON

0.82+

Mm Kumaran SivaPERSON

0.8+

about over 400 total instancesQUANTITY

0.79+

64 4QUANTITY

0.78+

johnPERSON

0.77+

up to 128 threadsQUANTITY

0.72+

Epic um 72 F threeCOMMERCIAL_ITEM

0.71+

javaTITLE

0.7+

7000COMMERCIAL_ITEM

0.7+

Epic ForceCOMMERCIAL_ITEM

0.69+

E gen fourCOMMERCIAL_ITEM

0.67+

M. DPERSON

0.67+

IBM29 Kumaran Siva VTT


 

>>from around the globe. It's the >>cube with >>Digital coverage of IBM think 2021 brought to you by IBM. Welcome back to the cube coverage of IBM Think 2021. I'm john for the host of the cube here for virtual event Cameron Siva who's here with corporate vice president with a M. D. Uh CVP and business development. Great to see you. Thanks for coming on the cube. >>Nice to be. It's an honor to be here. >>You know, love A. M. D. Love the growth, loved the processors. Epic 7000 and three series was just launched its out in the field. Give us a quick overview of the of the of the processor, how it's doing and how it's going to help us in the data center on the edge >>for sure. No this is uh this is an exciting time for A. M. D. This is probably one of the most exciting times uh to be honest and in my 2020 plus years of uh working in sex industry, I think I've never been this excited about a new product as I am about the the third generation Epic processor that we just announced. Um So the Epic 7003, what we're calling it a serious processor. It's just a fantastic product. We not only have the fastest server processor in the world with the AMG Epic 7763 but we also have the fastest CPU core so that the process of being the complete package, the complete socket and then we also the fastest poor in the world with the the Epic um 72 F three for frequency. So that one runs run super fast on each core. And then we also have 64 cores in the CPU. So it's it's addressing both kind of what we call scale up and scale out. So it's overall overall just just an enormous, enormous product line that that I think um you know, we'll be we'll be amazing within within IBM IBM cloud. Um The processor itself includes 256 megabytes of L three cache. Um you know, cash is super important for a variety of workloads in the large cat size. We have shown our we've seen scale in particular cloud applications, but across the board, um you know, database, uh java whole sorts of things. This processor is also based on the Zen three core, which is basically 19% more instructions per cycle relative to ours, N two. So that was the prior generation, the second generation Epic Force, which is called Rome. So this this new CPU is actually quite a bit more capable. It runs also at a higher frequency with both the 64 4 and the frequency optimized device. Um and finally, we have um we call all in features so rather than kind of segment our product line and charge you for every little, you know, little thing you turn on or off. We actually have all in features includes, you know, really importantly security, which is becoming a big, big team and something that we're partnering with IBM very closely on um and then also things like 628 lanes of pc I E gen four, um are your faces that grew up to four terabytes so you can do these big large uh large um in memory databases, the Pc I interfaces gives you lots and lots of storage capability. So all in all super products um and we're super excited to be working with IBM honest. >>Well, let's get into some of the details on this impact because obviously it's not just one place where these processes are gonna live. You're seeing a distributed surface area core to edge um cloud and hybrid is now in play. It's pretty much standard now. Multi cloud on the horizon. Company's gonna start realizing, okay, I gotta put this to work and I want to get more insights out of the data and civilian applications that are evolving on this. But you guys have seen some growth in the cloud with the Epic processors, what can customers expect and why our cloud providers choosing Epic processors, >>you know, a big part of this is actually the fact that I that am d um delivers upon our roadmap. So we we kind of do what we say and say what we do and we delivered on time. Um so we actually announced I think was back in august of 2019, their second generation. That big part and then now in March, we are now in the third generation, very much on schedule, very much um intent, expectations and meeting the performance that we had told the industry and told our customers that we're going to meet back then. So it's a really super important pieces that our customers are now learning to expect performance, jenin, jenin and on time from A. M. D, which is, which is uh, I think really a big part of our success. The second thing is, I think, you know, we are, we are a leader in terms of the core density that we provide and cloud in particular really values high density. So the 64 cores is absolutely unique today in the industry and that it has the ability to be offered both in uh, bare metal, um, as we have been deployed in uh, in IBM Club and also in virtualized type environment. So it has that ability to spend a lot of different use cases. Um And you can, you know, you can run each core really fast, But then also have the scale out and then be able to take advantage of all 64 cores. Each core has two threads up to 128 threads per socket. It's a super powerful uh CPU and it has a lot of value for um for the with a cloud cloud provider, they're actually about over 400 total instances by the way of A. M. D. Processors out there. And that's all the flavors, of course, not just that they're generation, but still it's it's starting to really proliferate. We're trying to see uh M d I think all across the cloud, >>more cores, more threads all goodness. I gotta ask you, you know, I interviewed Arvin the Ceo of IBM before he was Ceo at a conference and you know, he's always been I know him, he's always loved cloud, right? So, um but he sees a little bit differently than just being like copying the clouds. He sees it as we see it unfolding here. I think Hybrid. Um and so I can almost see the playbook evolving. You know, Red has an operating system. Cloud and Edge is a distributed system. It's got that vibe of a system architecture, you got processors everywhere. Could you give us a sense of the over an overview of the work you're doing with IBM Cloud and what a M. D s role is there? And I'm curious could you share for the folks watching too? >>For sure. For sure. By the way, IBM cloud is a fantastic partner to work with. So, so, first off you talked about about the hybrid, hybrid cloud is a really important thing for us and that's um that's an area that we are definitely focused in on, uh but in terms of our specific joint partnerships and we did an announcement last year, so it's it's it's somewhat public, but we are working together on ai where IBM is a is an undisputed leader with Watson and some of the technologies that you guys bring there. So we're bringing together, you know, it's kind of this real hard work goodness with IBM s progress and know how on the AI side. In addition, IBM is also known for um you know, really enterprise grade, yeah, security and working with some of the key sectors that need and value, reliability, security, availability um in those areas. Uh and so I think that partnership, we have quite a bit of uh quite a strong relationship and partnership around working together on security and doing confidential computer. >>Tell us more about the confidential computing. This is a joint development agreement, is a joint venture joint development agreement. Give us more detail on this. Tell us more about this announcement with IBM cloud, an AMG confidential computing. >>So that's right. So so what uh, you know, there's some key pillars to this. One of us is being able to to work together, define open standards, open architecture. Um so jointly with an IBM and also pulling in some of the assets in terms of red hat to be able to work together and pull together a confidential computer that can so some some key ideas here, we can work with, work within a hybrid cloud. We can work within the IBM cloud and to be able to provide you with, provide, provide our joint customers are and customers with with with unprecedented security and reliability uh in the cloud, >>what's the future of processors? I mean, what should people think when they expect to see innovation? Um Certainly data centers are evolving with core core features to work with hybrid operating model in the cloud. People are getting that edge relationship basically the data centers a large edge, but now you've got the other edges, we got industrial edges, you got consumers, people wearables. You're gonna have more and more devices big and small. Um What's the what's the road map look like? How do you describe the future of a. M. D. In in the IBM world? >>I think I think R I B M M. D partnership is bright, future is bright for sure, and I think there's there's a lot of key pieces there. Uh you know, I think IBM brings a lot of value in terms of being able to take on those up earlier, upper uh layers of software and that and the full stack um so IBM strength has really been, you know, as a systems company and as a software company. Right? So combining that with the Andes silicon, uh divide and see few devices really really is is it's a great combination. I see, you know, I see um growth in uh you know, obviously in in deploying kind of this, this scale out model where we have these very large uh large core count cpus, I see that trend continuing for sure. Uh you know, I think that that is gonna that is sort of the way of the future that you want cloud data applications that can scale across multi multiple cores within the socket and then across clusters of Cpus with within the data center. Um and IBM is in a really good position to take advantage of that to go to to to drive that within the cloud. That income combination with IBM s presence on prem. Uh and so that's that's where the hybrid hybrid cloud value proposition comes in. Um and so we actually see ourselves uh you know, playing in both sides. So we do have a very strong presence now and increasingly so on premises as well. And we we partner we were very interested in working with IBM on the on on premises uh with some of some of the key customers and then offering that hybrid connectivity onto, onto the the IBM cloud as >>well. I B M and M. D. Great partnership, great for clarifying and and sharing that insight come. I appreciate it. Thanks for for coming on the cube. I do want to ask you while I got you here. Um kind of a curveball question if you don't mind. You know, as you see hybrid cloud developing one of the big trends is this ecosystem play, right? So you're seeing connections between IBM and their and their partners being much more integrated. So cloud has been a big KPI kind of model. You connect people through a. P. I. S. There's a big trend that we're seeing and we're seeing this really in our reporting on silicon angle the rise of a cloud service provider within these ecosystems where hey, I could build on top of IBM cloud and build a great business. Um and as I do that, I might want to look at an architecture like an AMG, how does that fit into to your view as a doing business development over at AMG because because people are building on top of these ecosystems are building their own clouds on top of clouds, just seeing data cloud, just seeing these kinds of clouds, specialty clouds. So we could have a cute cloud on on top of IBM maybe someday. So, so I might want to build out a whole, I might be a cloud, so that's more processors needed for you. So how do you see this enablement? Because IBM is going to want to do that, it's kind of like, I'm kind of connecting the dots here in real time, but what's your, what's your take on that? What's your reaction? >>I think, I think that's I think that's right and I think m d isn't it isn't a pretty good position with IBM to be able to to enable that. Um we do have some very significant OsD partnerships, a lot of which that are leveraged into IBM um such as red hat of course, but also like VM ware and Nutanix. Um this provide these OS V partners provide kind of the base level infrastructure that we can then build upon and then have that have that A P. I. And be able to build, build um uh the the multi cloud environments that you're talking about. Um and I think that I think that's right, I think that is that is one of the uh you know, kind of future trends that that we will see uh you know, services that are offered on top of IBM cloud that take advantage of the the capabilities of the platform that come with it. Um and you know, the bare metal offerings that that IBM offer on their cloud is also quite unique um and hyper very performance. Um and so this actually gives um I think uh the the kind of uh I've been called a meta cloud, that unique ability to kind of go in and take advantage of the M. D. Hardware at a performance level and at a um uh to take advantage of that infrastructure better than they could in another crowd environments. I think that's that's that's actually very key and very uh one of the, one of the features of the IBM problems that differentiates it >>so much headroom there corns really appreciate you sharing that. I think it's a great opportunity. As I say, if you're you want to build and compete. Finally, there's no with the white space, with no competition or be better than the competition. So as they say in business, thank you for coming on sharing. Great, great future ahead for all builders out there. Thanks for coming on the cube. >>Thanks thank you very >>much. Okay. IBM think cube coverage here. I'm john for your host. Thanks for watching. Mm mm

Published Date : Apr 16 2021

SUMMARY :

It's the Digital coverage of IBM think 2021 brought to you by IBM. It's an honor to be here. You know, love A. M. D. Love the growth, loved the processors. so that the process of being the complete package, the complete socket and then we also the fastest poor some growth in the cloud with the Epic processors, what can customers expect I think, you know, we are, we are a leader in terms of the core density that we Um and so I can almost see the playbook evolving. So we're bringing together, you know, it's kind of this real hard work goodness with IBM s progress and know with IBM cloud, an AMG confidential computing. So so what uh, you know, there's some key pillars to this. Um What's the in. Um and so we actually see ourselves uh you know, playing in both sides. Um kind of a curveball question if you don't mind. Um and I think that I think that's right, I think that is that is one of the uh you know, So as they say in business, thank you for coming on sharing. Thanks for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

Cameron SivaPERSON

0.99+

MarchDATE

0.99+

august of 2019DATE

0.99+

64 coresQUANTITY

0.99+

19%QUANTITY

0.99+

each coreQUANTITY

0.99+

628 lanesQUANTITY

0.99+

Each coreQUANTITY

0.99+

AMGORGANIZATION

0.99+

256 megabytesQUANTITY

0.99+

ArvinPERSON

0.99+

last yearDATE

0.99+

64 coresQUANTITY

0.99+

OneQUANTITY

0.99+

both sidesQUANTITY

0.99+

second generationQUANTITY

0.99+

second thingQUANTITY

0.99+

third generationQUANTITY

0.98+

Kumaran SivaPERSON

0.98+

bothQUANTITY

0.98+

NutanixORGANIZATION

0.98+

two threadsQUANTITY

0.97+

Epic 7003COMMERCIAL_ITEM

0.96+

EpicCOMMERCIAL_ITEM

0.96+

M. D.PERSON

0.96+

four terabytesQUANTITY

0.95+

third generationQUANTITY

0.94+

todayDATE

0.94+

EpicORGANIZATION

0.93+

Think 2021COMMERCIAL_ITEM

0.93+

oneQUANTITY

0.93+

IBM ClubORGANIZATION

0.92+

one placeQUANTITY

0.92+

M. DPERSON

0.91+

RedPERSON

0.91+

A. M.PERSON

0.9+

Epic 7763COMMERCIAL_ITEM

0.9+

firstQUANTITY

0.9+

AndesORGANIZATION

0.88+

three seriesQUANTITY

0.86+

E gen fourCOMMERCIAL_ITEM

0.86+

jeninPERSON

0.86+

Zen three coreCOMMERCIAL_ITEM

0.85+

2020 plusDATE

0.85+

64 4QUANTITY

0.82+

CeoPERSON

0.81+

about over 400 totalQUANTITY

0.8+

javaTITLE

0.8+

A. M. D.PERSON

0.79+

IBM cloudORGANIZATION

0.76+

johnPERSON

0.75+

CloudTITLE

0.74+

WatsonORGANIZATION

0.73+

72QUANTITY

0.73+

twoQUANTITY

0.72+

Balaji Siva, OpsMx | CUBE Conversation, January 2020


 

(funky music) >> Everyone, welcome to theCUBE studios here in Palo Alto for a CUBE Conversation, I'm John Furrier, we're here with a great guest, Balaji Sivasubramanian, did I get it right? Okay, okay, VP of product and business development at OpsMx, formerly with Cisco doing networking, now you're doing a lot of DevOps, you guys got a great little business there. Realtime hardcore DevOps. >> Absolutely, so we help large enterprises do the digital transformation to help them achieve that transformation. >> You know, Stu Miniman and I were talking about cloud-native, one of the reasons I wanted to bring you in was, we've been talking about cloud-native going mainstream. And cloud-native essentially codewords for cloud, microservices, essentially DevOps 2.0, whatever you want to call it, it's the mainstream of DevOps, DevOps for the past 10 years has been kind of reserved for the pioneers who built out using open source, to the fast followers, building large startups, to now larger companies, now DevOps is turning into cloud-native where you see in the cloud, born in the cloud, on-premises cloud operations, which is hybrid, and now the advent of multicloud, which really brings the edge conversation into view, really a disruption around networking, and data, and this is impacting developers. And pioneers like Netflix, used Spinnaker to kind of deploy, that's what you guys do, this is the real thread for the next 10 years is data, software, is now part of everyday developer life. Now bring that into DevOps, that seems to be a real flashpoint. >> Yeah, so if you look at some of the challenges enterprises have to get the velocity that they have, the technology was a barrier. So with the Docker option, with the Cloudred option, cloud basically made the infrastructure on demand, and then the Docker really allowed, Microsoft architecture allowed people to have velocity in development. Now their bottleneck has been, "Now I can develop faster, I can bring up infra faster, "but how do I deploy things faster?" Because at the end of the day, that's what is the last mile, so to say, of solving the full puzzle. So I think that's where things like Spinnaker or some of the new tools like Tekton and all those things coming up, that allows these enterprises to take their container-based applications, and functions in some cases, and deploy to various clouds, AWS or Google or Azure. >> Balaji, tell me about your view on cloud-native, you look at, just look at the basic data out there, you got AWS, you got KubeCon, which is really the Linux Foundation, CNCF, I mean the vendors that are in there, and the commercialization is going crazy. Then you got the cloud followers from Amazon, you got Azure basically pivoting Office 365 and getting more cloud action. They're investing heavily in Google GCP, Google Cloud Platform. All of 'em talk about microservices. What's your view of the state of cloud-native? >> Yeah, I think, I probably talked to hundreds of customers this last year, and these are large, Fortune 100, 200 companies to smaller companies. 100% of them are doing containers, 100% of them are doing Kubernetes in some fashion or form. If you look at larger enterprises like the financial sectors, and other, what do you call the more Fortune 100 companies, they do actually do OpenShift. RedHat OpenShift for their Kubernetes, even though Kubernetes is free, whatever, but they definitely look at OpenShift as a way to deploy container-based applications. And many of them are obviously looking at AKS, EKS and other cloud form factors of the same thing. And the most thing I've seen is AWS. EKS is the most common one, Azure some parts, and GKS somewhat, so I mean you know the market trend that's there. So essentially, AWS is where most of the developments are happening. >> What do you think about the mainstream IT, typical IT company that's driven by IT, they're transforming just a few, I'd say about a year ago, most hands were like "Oh, the big cloud providers are going to be "not creating an opportunity for the Splunks in the world, "and other people," but now with that shifting, mainstream companies going to the cloud, it's actually been good for those companies, so you're seeing that collision between pure cloud-native and typical corporation enterprise, that are moving to the cloud or moving to at least hybrid. That's helping these Splunks of the world, the Datadogs, and all these other companies. >> I think there's two attacks on those companies that you talk about. One is obviously the open source movement, it's attacking everything. So anything you have in IT is attacked by open source. Software is eating the world, but open source is eating software. Because software is easy to be open source. Hardware, you can't eat it, there's no open source, nobody's doing free hardware for you. But open source software is eating the software, in some sense, but anyway, so any software vendors are fully, everybody's considering open source first. Many companies are doing open source first, so if you want to look at Datadog or Prometheus, I may look at Prometheus. If I look at IBM uDeploy or Spinnaker, I may look at Spinnaker, so everything Kubernetes or maybe some other forms of communities. So I think these vendors that you talk about, one is the open source part of it, the other is that when you go to the cloud, the providers all provide the basic things already. If you look at Google Cloud, I was actually reading about Google networking a lot of things, lot of the load balancers, and all those things are inbuilt as part of the fabric. Things that you typically use, a router or a firewall or those things, they're all inbuilt, so why would I use a F5 load balancer and things like that? So I would say that I don't think their life is that easy, but there's definitely-- >> All right, so here's the question, who's winning and who's losing with cloud-native? I mean what is really going on in that marketplace, what's the top story, what's the biggest thing people should pay attention to, and who's winning and who's losing? >> I think the channelization of the cloud-native technology is definitely helping vendors like AWS, and basically the cloud vendors. Because no longer you have to go to VMware to get anything done, they have proprietary software that they had and you don't have to go there anymore, everybody can provide it, so the vendors, I would say the customers, obviously, because now they have more choices, they're not vendor locked in, they can go to EKS or AKS in a heartbeat and nothing happens. So customers and vendors are big winners. And then I would say the code providers are big winners. open source is really hurting some of the vendors we talked about earlier, I would say the big guys are the-- >> Cloud's getting bigger, the cloud guys are getting bigger and bigger, more powerful. What about VMware, you mentioned VMware, anything to their proprietary, they also run on AWS, natively, so they're still hanging around, they got the operators. But they're not hitting the devs, but they have this new movement with the Kubernetes, they acquired a company to do that. >> I would say that the AWS, VMware on AWS, essentially is, I would say almost a no-op for VMware in some sense, in some sense. They have to be, it's almost like a place to sell their ware. They used to be on-prem vendors already have the infrastructure, then VMware goes, sells to that customer A. Now the customer says that "I'm not using it on on-prem server A, "I'm on AWS, can you provide me the same software." So essentially, number one, by moving to the cloud, they're essentially selling to the same customers, the same stuff, number one. Number two is but once now I'm in the cloud, I would obviously PWA my workload to native AWS or Google, so I think in the long run, I would say that it's a strategy to survive, but I don't think it's a long-term successful. >> Operators don't move that fast, devs move much faster. I got to ask you, in the developer world, and cloud-native and DevOps 2.0, 3.0, what are the biggest challenges that's slowing it down, why isn't it going faster, or is it going fast, what's your view on that? >> Yeah, I think I would say that the biggest change is obviously, I just said, the people. In some sense, people have to transform, and in large organizations, there's a lot of inertia that allows people, they are deploying existing services the way they're deploying services, some of them are custom-built, the guy who wrote it and they no longer exist, they've moved on, and so some of them are built like that, but I think the inertia is basically now "How do I transform them over to the new model?" If the application itself is getting more broken into more microservices, then it's a great opportunity for me to migrate, but if it's not, then I'm not going to touch something that's actually there. So I would say is that technology's complex. Actually, every day we have people, there's a lot of interest, there's a lot of people learning, learning, learning new stuff, but I cannot hire one Kubernetes good engineer if I want to try hard, independently at least. Because it's hard. >> 'Cause they're working somewhere else, right? >> Well they work somewhere else, or the technology is still early enough that people are learning in droves, don't get me wrong there, but I think it's still fairly complex for them to digest all of that. I think in five years, fast forward five years, you would see that technology, knowledge would be more, so it would be easier to hire those people, because if we want to transform internally, let's say I have my enterprise, I want to transform, I need to hire people to do that. >> What are the use cases, what are the top use cases that you're seeing in your work and out in the field in the business that people are rallying around, they can get some wins, top three use cases for end to end cloud-native development? >> I would say the use cases are like if I'm doing any kind of container-based applications, obviously, I would like to do through the new model of doing things, because I don't want to build based on legacy technology, for sure. I would say that the other ones are new age companies, they're definitely adopting cloud first, and they're able to leverage the existing models, the new models more quickly. I mean obviously there are two things, I think that if I'm doing something new, I take advantage of that. >> Do you think microservices is overrated right now, or is it hyped up, or is it? >> No, I think it's real, absurdly real. >> And what's the big use case there? >> The velocity that people get by adopting microservices. Before, I used to work at Cisco, and there's a software release I have planned for six months to release a software because there's so many engineers, and developing so many features, they develop it over a period of time, and then when they actually integrate, there's two, three months of testing before it gets out, because the guy who wrote the code probably left the company already, by the time the software actually sees the light of day. >> Give some data from your perspective, you don't have to name companies, but for the people that are successful with DevOps, at an operating level, what kind of frequency of updates are they doing per day, just give us some order of magnitude numbers on what is a success in terms of it? >> Yeah, I mean the great examples are something like Netflix and all, 7000 deployments a day, but obviously that's in the top of the pyramid, so to say. Many of the other customers are doing, some are bringing in one to two a week, these are very good companies. This is for the service level, I'm not talking about the whole application. Because the application may have 10, 20, 50 services in some way, so there's a lot of updates going on every week, so if you look at a week timeframe, you may have 50 updates for that service, but I think individual service level, essentially it could be one or two a week, and obviously the frequency varies depending on-- >> Just a lot of software being updated all the time. >> Absolutely, absolutely. >> Well Balaji, great to have you in, and I got to say, it's been, we could use your commentary and your insight in some CUBE interviews, love to invite you back, thanks for coming in, appreciate it. I'm John Furrier, here in the CUBE Conversation we have thought leader conversations with experts. From our expert network theCUBE, CUBE alumni, and again, all about bringing you the data here from theCUBE studios, I'm John Furrier, thanks for watching. (funky music)

Published Date : Jan 24 2020

SUMMARY :

here in Palo Alto for a CUBE Conversation, do the digital transformation one of the reasons I wanted to bring you in was, Because at the end of the day, that's what is the last mile, I mean the vendors that are in there, EKS and other cloud form factors of the same thing. "Oh, the big cloud providers are going to be the other is that when you go to the cloud, so the vendors, I would say the customers, obviously, Cloud's getting bigger, the cloud guys already have the infrastructure, then VMware goes, I got to ask you, in the developer world, is obviously, I just said, the people. or the technology is still early enough and they're able to leverage the existing models, before it gets out, because the guy who wrote the code and obviously the frequency varies depending on-- in some CUBE interviews, love to invite you back,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Balaji SivasubramanianPERSON

0.99+

10QUANTITY

0.99+

50 updatesQUANTITY

0.99+

CiscoORGANIZATION

0.99+

six monthsQUANTITY

0.99+

twoQUANTITY

0.99+

AWSORGANIZATION

0.99+

oneQUANTITY

0.99+

January 2020DATE

0.99+

Palo AltoLOCATION

0.99+

John FurrierPERSON

0.99+

AmazonORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

100%QUANTITY

0.99+

IBMORGANIZATION

0.99+

two thingsQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

CUBEORGANIZATION

0.99+

OpsMxORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

OpenShiftTITLE

0.99+

DevOpsTITLE

0.99+

CNCFORGANIZATION

0.99+

Office 365TITLE

0.99+

five yearsQUANTITY

0.99+

theCUBEORGANIZATION

0.99+

Balaji SivaPERSON

0.98+

KubernetesTITLE

0.98+

20QUANTITY

0.98+

AKSORGANIZATION

0.98+

NetflixORGANIZATION

0.98+

DevOps 2.0TITLE

0.98+

EKSORGANIZATION

0.98+

Linux FoundationORGANIZATION

0.97+

SpinnakerORGANIZATION

0.97+

BalajiPERSON

0.97+

last yearDATE

0.97+

VMwareORGANIZATION

0.96+

200 companiesQUANTITY

0.96+

two attacksQUANTITY

0.96+

DatadogsORGANIZATION

0.94+

KubernetesORGANIZATION

0.94+

three use casesQUANTITY

0.94+

DockerTITLE

0.94+

two a weekQUANTITY

0.92+

SpinnakerTITLE

0.91+

hundreds of customersQUANTITY

0.91+

AzureTITLE

0.91+

firstQUANTITY

0.89+

7000 deployments a dayQUANTITY

0.89+

Cloud PlatformTITLE

0.88+

CloudredTITLE

0.88+

3.0TITLE

0.87+

50 servicesQUANTITY

0.86+

KubeConORGANIZATION

0.84+

three monthsQUANTITY

0.83+

PrometheusTITLE

0.81+

about a year agoDATE

0.81+

BalajiTITLE

0.79+

Siva Sivakumar, Cisco and Rajiev Rajavasireddy, Pure Storage | Pure Storage Accelerate 2018


 

>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's The Cube, covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. (upbeat techno music) >> Welcome back to The Cube, we are live at Pure Accelerate 2018 at the Bill Graham Civic Auditorium in San Francisco. I'm Lisa Martin, moonlighting as Prince today, joined by Dave Vellante, moonlighting as The Who. Should we call you Roger? >> Yeah, Roger. Keith. (all chuckling) I have a moon bat. (laughing) >> It's a very cool concert venue, in case you don't know that. We are joined by a couple of guests, Cube alumnae, welcoming them back to The Cube. Rajiev Rajavasireddy, the VP of Product Management and Solutions at Pure Storage and Siva Sivakumar, the Senior Director of Data Center Solutions at Cisco. Gentlemen, welcome back. >> Thank you. >> Thank you. >> Rajiev: Happy to be here. >> So talk to us about, you know, lots of announcements this morning, Cisco and Pure have been partners for a long time. What's the current status of the Cisco-Pure partnership? What are some of the things that excite you about where you are in this partnership today? >> You want to take that, Siva, or you want me to take it? >> Sure, sure. I think if you look back at what brought us together, obviously both of us are looking at the market transitions and some of the ways that customers were adopting technologies from our site. The converged infrastructure is truly how the partnership started. We literally saw that the customers wanted simplification, wanted much more of a cloud-like experience. They wanted to see infrastructure come together in a much more easier fashion. That we bring the IT, make it easier for them, and we started, and of course, the best of breed technology on both sides, being a Flash leader from their side, networking and computer leader on our side, we truly felt the partnership brought the best value out of both of us. So it's a journey that started that way and we look back now and we say that this is absolutely going great and the best is yet to come. >> So from my side, basically Pure had started what we now call FlashStack, a converged infrastructure offering, roughly about four years ago. And about two and a half years ago, Cisco started investing a lot in this partnership. We're very thankful to them, because they kind of believed in us. We were growing, obviously. But we were not quite as big as we are right now. But they saw the potential early. So about roughly two-and-a-half years ago, I talked about them investing in us. I'm not sure how many people know about what a Cisco validated design is. It's a pretty exhaustive document. It takes a lot of work on Cisco's site to come up with one of those. And usually, a single CVD takes about two or three of their TMEs, highly technical resources and about roughly three to six months to build those. >> Per CVD? >> Per CVD. >> Wow. >> Like I said, it's very exhaustive, I mean you get your building materials, your versions, your interoperability, your, you can actually, your commands that you actually use to stand up that infrastructure and the applications, so on and so forth. So in a nine-month span, they kind of did seven CVDs for us. That was phenomenal. We were very, very thankful that they did that. And over time, that investment paid off. There was a lot of good market investment that Cisco and Pure jointly made, all those investments paid off really well in terms of the customer adoption, the acquisition. And essentially we are at a really good point right now. When we came out with our FlashArray X70 last April, Cisco was about the same time, they were coming out with the M5 servers. And so they invested again, and gave us five more CVDs. And just recently they've added FlashBlade to that portfolio. As you know, FlashBlade is a new product offering. Well not so new, but relatively new, product offering from PR, so we have a new CV that just got released that includes FlashArray and Flash Blade for Oracle. So FlashArray does the online transaction processing, FlashBlade does data warehousing, obviously Cisco networking and Cisco servers do everything OLTB and data warehouse, it's an end to an architecture. So that was what Matt Burr had talked about on stage today. We are also excited to announce that we had that we had introduced AIRI AI-ready infrastructure along with Nvidia at their expo recently. We are excited to say that Cisco is now part of that AIRI infrastructure that Matt Burr had talked about on stage as well. So as you can tell, in a two and half year period we've come a really long way. We have a lot of customer adoption every quarter. We keep adding a ton of customers and we are mutually benefiting from this partnership. >> So I want to ask you about, follow up on the Oracle solution. Oracle would obviously say, "Okay, you buy our database, "buy our SAS, buy the Red Stack, "single throat to choke, "You're going to run better, "take advantage of all the hooks we have." You've heard it before. And it's an industry discussion. >> Rajiev: Of course. >> Customer have it, Oracle comes in hard. So what's the advantage of working with you guys, versus going with an all-Red Stack? Let's talk about that a little bit. >> Sure. Do you want to do it? >> I think if you look at the Oracle databases being deployed, this is a, this really powers many companies. This is really the IT platform. And one of the things that customers, or major customers standardize on this. Again, if they have a standardization from an Oracle perspective, they have a standardization from an infrastructure perspective. Just a database alone is not necessarily easy to put on a different infrastructure, manage them, operate them, go through lifecycle. So they look for a architecture. They look for something that's a overall platform for IT. "I want to do some virtualization. "I want to run desktop virtualization. "I want to do Oracle. "I want to do SAP." So the typical IT operates as more of "I want to manage my infrastructure as a whole. "I want to manage my database and data as its own. "I want its own way of looking." So while there are way to make very appliancey behaviors, that actually operates one better, the approach we took is truly delivering a architecture for data center. The fact that the network as well as the computer is so programmable it makes it easy to expand. Really brings a value from a complete perspective. But if you look at Pure again, their FlashArrays truly have world-class performance. So the customer also looks at, "Well I can get everything from one vendor. "Am I getting the best of breed? "Am I getting the world-class technology from "every one of those aspects and perspectives?" So we certainly think there are a good class of customers who value what we bring to the table and who certainly choose us for what we are. >> And to add to what Siva has just said, right? So if you looked at pre-Flash, you're mostly right in the sense that, hey, if you built an application, especially if it was mission-vertical application, you wanted it siloed, you didn't want another application jumping in and kind of messing up the performance and response times and all that good stuff, right? So in those kind of cases, yeah, appliances made sense. But now, when you have all Flash, and then you have servers and networking that can actually elaborates the performance of Flash, you don't really have to worry about mixing different applications and messing up performance for one at the expense of the other. That's basically, it's a win-win for the customers to have much more of a consolidated platform for multiple applications as opposed to silos. 'Cause silos are always hard to manage, right? >> Siva, I want to ask you, you know, Pure has been very bullish, really, for many years now. Obviously Cisco works with a lot of other vendors. What was it a couple years ago? 'Cause you talked about the significant resource investment that Cisco has been making for a couple of years now in Pure Storage. What is it that makes this so, maybe this Flash tech, I'm kind of thinking of the three-legged stool that Charlie talked about this morning. But what were some of the things that you guys saw a few years ago, even before Pure was a public company, that really drove Cisco to make such a big investment in this? >> I think they, when you look at how Cisco has evolved our data center portfolio, I mean, we are a very significant part of the enterprise today powered by Cisco, Cisco networking, and then we grew into the computer business. But when you looked at the way we walked into this computer business, the traditional storage as we know today is something we actually led through a variety of partnerships in the industry. And our approach to the partnership is, first of all, technology. Technology choice was very very critical, that we bring the best of breed for the customers. But also, again, the customer themself, speaking to us, and then our channel partners, who are very critical for our enablement of the business, is very very critical. So the way we, and when Pure really launched and forayed into all Flash, and they created this whole notion that storage means Flash and that was never the patterning before. That was a game-changing, sort of a model of offering storage, not just capacity but also Flash as my capacity as well as the performance point. We really realized that was going to be a good set of customers will absorb that. Some select workloads will absorb that. But as Flash in itself evolved to be much more mainstream, every day's data storage can be in a Flash medium. They realize, customers realized, this technology, this partner, has something very unique. They've thought about a future that was coming, which we realized was very critical for us. When we evolved network from 10-gig fabric to 40-gig to 100-gig, the workloads that are the slowest part of any system is the data movement. So when Flash became faster and easier for data to be moved, the fabric became a very critical element for the eventual success of our customer. We realized a partnership with Pure, with all Flash and the faster network, and faster compute, we realized there is something unique that we can bring to bear for the customer. So our partnership minds had really said, "This is the next big one that we are going to "invest time and energy." And so we clearly did that and we continue to do that. I mean we continue to see huge success in the customer base with the joint solutions. >> This issue of "best of breed" versus a kind of integrated stacks, it's been around forever, it's not going to go away. I mean obviously Cisco, in the early days of converged infrastructure, put a lot of emphasis on integrating, and obviously partnerships. Since that time, I dunno what it was, 2009 or whatever it was, things have changed a lot. Y'know, cloud was barely a thought back then. And the cloud has pushed this sort of API economy. Pure talks about platforms and integrating through APIs. How has that changed your ability to integrate "best of breed" more seamlessly? >> Actually, you know, I've been working with UCS since it started, right? And it's perhaps, it was a first server system that was built on an API-first philosophy. So everything in the Cisco UCS system can be basically, anything you can do to it GUI or the command line, you can do it their XML API, right? It's an open API that they provide. And they kind of emphasized the openness of it. When they built the initial converged infrastructure stacks, right, the challenge was the legacy storage arrays didn't really have the same API-first programmability mentality, right? If you had to do an operation, you had a bunch of, a ton of CLI commands that you had to go through to get to one operation, right? So Pure, having the advantage of being built from scratch, when APIs are what people want to work with, does everything through rest APIs. All function features, right? So the huge advantage we have is with both Pure, Pure actually unlocks the potential that UCS always had. To actually be a programmable infrastructure. That was somewhat held back, I don't know if Siva agrees or not, but I will say it. That kind of was held back by legacy hardware that didn't have rest space APIs or XML or whatever. So for example, they have Python, and PowerShell-based toolkits, based on their XML APIs that they built around that. We have Python PowerShell toolkits that we built around our own rest APIs. We have puppet integration installed, and all the other stuff that you saw on the stage today. And they have the same things. So if you're a customer, and you've standardized, you've built your automation around any of these things, right, If you have the Intuit infrastructure that is completely programmable, that cloud paradigms that you're talking about is mainly because of programmability, right, that people like that stuff. So we offer something very similar, the joint-value proposition. >> You're being that dev-ops kind of infrastructure-as-code mentality to systems design and architecture. >> Rajiev: Yeah. >> And it does allow you to bring the cloud operating model to your business. >> An aspect of the cloud operating model, right. There's multiple different things that people, >> Yeah maybe not every single feature, >> Rajiev: Right. >> But the ones that are necessary to be cloud-like. >> Yeah, absolutely. >> Dave: That's kind of what the goal is. >> Let's talk about some customer examples. I think Domino's was on stage last year. >> Right. >> And they were mentioned again this morning about how they're leveraging AI. Are they a customer of Flash tech? Is that maybe something you can kind of dig into? Let's see how the companies that are using this are really benefiting at the business level with this technology. >> I think, absolutely, Domino's is one of our top examples of a Flash tech customer. They obviously took a journey to actually modernize, consolidate many applications. In fact, interestingly, if you look at many of the customer journeys, the place where we find it much much more valuable in this space is the customer has got a variety of workloads and he's also looking to say, "I need to be cloud ready. "I need to have a cloud-like concept, "that I have a hybrid cloud strategy today "or it'll be tomorrow. "I need to be ready to catch him and put him on cloud." And the customer also has the mindset that "While I certainly will keep my traditional applications, "such as Oracle and others, "I also have a very strong interest in the new "and modern workloads." Whether it is analytics, or whether it is even things like containers micro-services, things like that which brings agility. So while they think, "I need to have a variety "of things going." Then they start asking the question, "How can I standardize on a platform, "on an architecture, on something that I can "reuse, repeat, and simplify IT." That's, by far, it may sound like, you know, you got everything kind of thing, but that is by far the single biggest strength of the architecture. That we are versatile, we are multi-workload, and when you really build and deploy and manage, everything from an architecture, from a platform perspective looks the same. So they only worry about the applications they are bringing onboard and worry about managing the lifecycle of the apps. And so a variety of customers, so what has happened because of that is, we started with commercial or mid-size customers, to larger commercial. But now we are much more in enterprise. Large, many large IT shops are starting to standardize on Flash tech, and many of our customers are really measured by the number of repeat purchases they will come back and buy. Because once they like and they bought, they really love it and they come back and buy a lot more. And this is the place where it gets very exciting for all of us that these customers come back and tell us what they want. Whether we build automation or build management architecture, our customer speaks to us and says, "You guys better get together and do this." That's where we want to see our partners come to us and say, "We love this architecture but we want these features in there." So our feedback and our evolution really continues to be a journey driven by the demand and the market. Driven by the customers who we have. And that's hugely successful. When you are building and launching something into the marketplace, your best reward is when customer treats you like that. >> So to basically dovetail into what Siva was talking about, in terms of customers, so he brought up a very valid point. So what customers are really looking for is an entire stack, an infrastructure, that is near invisible. It's programmable, right? And it's, you can kind of cookie-cutter that as you scale. So we have an example of that. I'm not going to use the name of the customer, 'cause I'm sure they're going to be okay with it, but I just don't want to do it without asking their permission. It's a healthcare service provider that has basically, literally dozens of these Flash techs that they've standardized on. Basically, they have vertical applications but they also offer VM as a service. So they have cookie-cuttered this with full automation, integration, they roll these out in a very standard way because of a lot of automation that they've done. And they love the Flash tech just because of the programmability and everything else that Siva was talking about. >> With new workloads coming on, do you see any, you know, architectural limitations? When I say new workloads, data-driven, machine intelligence, AI workloads, do we see any architectural limitations to scale, and how do you see that being addressed in the near future? >> Rajiev: Yeah, that's actually a really good question. So basically, let's start with the, so if you look at Bare Metal VMs and containers, that is one factor. In that factor, we're good because, you know, we support Bare Metal and so does the entire stack, and when I say we, I'm talking about the entire Flash tech servers and storage and network, right. VMs and then also containers. Because you know, most of the containers in the early days were ephemeral, right? >> Yeah. >> Rajiev: Then persistent storage started happening. And a lot of the containers would deploy in the public cloud. Now we are getting to a point where customers are kind of, basically experimenting with large enterprises with containers on prem. And so, the persistent storage that connects to containers is kind of nascent but it's picking up. So there's Kubernetes and Docker are the primary components in there, right? And Docker, we already have Docker native volume plug-ins and Cisco has done a lot of work with Docker for the networking and server pieces. And Kubernetes has flex volumes and we have Kubernetes flex volume integration and Cisco works really well with Kubernetes. So there are no issues in that factor. Now if you're talking about machine learning and Artificial Intelligence, right? So it depends. So for example, Cisco's servers today are primarily driven by Intel-based CPUs, right? And if you look at the Nvidia DGXs, these are mostly GPUs. Cisco has a great relationship with Nvidia. And I will let Siva speak to the machine learning and artificial intelligence pieces of it, but the networking piece for sure, we've already announced today that we are working with Cisco in our AIRI stack, right? >> Dave: Right. >> Yeah, no, I think that the next generation workloads, or any newer workloads, always comes with a different set of, some are just software-level workloads. See typically, software-type of innovation, given the platform architecture is more built with programmability and flexibility, adopting our platforms to a newer software paradigm, such as container micro-services, we certainly can extend the architecture to be able to do that and we have done that several times. So that's a good area that covers. But when there are new hardware innovations that comes with, that is interconnect technologies, or that is new types of Flash models, or machine-learning GPU-style models, what we look at from a platform perspective is what can we bring from an integrated perspective. That, of course, allows IT to take advantage of the new technology, but maintain the operational and IT costs of doing business to be the same. That's where our biggest strength is. Of course Nvidia innovates on the GPU factor, but IT doesn't just do GPUs. They have to integrate into a data center, flow the data into the GPU, run compute along that, and applications to really get most out of this information. And then, of course, processing for any kind of real-time, or any decision making for that matter, now you're really talking about bringing it in-house and integrating into the data center. >> Dave: Right. >> Any time you start in that conversation, that's really where we are. I mean, that's our, we welcome more innovation, but we know when you get into that space, we certainly shine quite well. >> Yeah, it's secured, it's protected, it's move it, it's all kind of things. >> So we love these innovations but then our charter and what we are doing is all in making this experience of whatever the new be, as seamless as possible for IT to take advantage of that. >> Wow, guys, you shared a wealth of information with us. We thank you so much for talking about these Cisco-Pure partnership, what you guys have done with FlashStack, you're helping customers from pizza delivery with Domino's to healthcare services to really modernize their infrastructures. Thanks for you time. >> Thank you. >> Thank you very much. >> For Dave Vellante and Lisa Martin, you're watching the Cube live from Pure Accelerate 2018. Stick around, we'll be right back.

Published Date : May 23 2018

SUMMARY :

Brought to you by Pure Storage. Should we call you Roger? I have a moon bat. and Siva Sivakumar, the Senior Director So talk to us about, you know, We literally saw that the customers wanted simplification, and about roughly three to six months to build those. So that was what Matt Burr had talked about on stage today. "take advantage of all the hooks we have." So what's the advantage of working with you guys, Do you want to do it? The fact that the network as well as the computer that can actually elaborates the performance of Flash, of the three-legged stool "This is the next big one that we are going to And the cloud has pushed this sort of API economy. and all the other stuff that you saw on the stage today. You're being that dev-ops kind of And it does allow you to bring the cloud operating model An aspect of the cloud operating model, right. I think Domino's was on stage last year. Is that maybe something you can kind of dig into? but that is by far the single biggest strength So to basically dovetail into what Siva was talking about, and so does the entire stack, And a lot of the containers would deploy and integrating into the data center. but we know when you get into that space, it's move it, it's all kind of things. So we love these innovations but then what you guys have done with FlashStack, For Dave Vellante and Lisa Martin,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Lisa MartinPERSON

0.99+

Rajiev RajavasireddyPERSON

0.99+

CiscoORGANIZATION

0.99+

RogerPERSON

0.99+

DavePERSON

0.99+

RajievPERSON

0.99+

Matt BurrPERSON

0.99+

10-gigQUANTITY

0.99+

NvidiaORGANIZATION

0.99+

Siva SivakumarPERSON

0.99+

100-gigQUANTITY

0.99+

Pure StorageORGANIZATION

0.99+

40-gigQUANTITY

0.99+

OracleORGANIZATION

0.99+

last yearDATE

0.99+

DominoORGANIZATION

0.99+

KeithPERSON

0.99+

2009DATE

0.99+

six monthsQUANTITY

0.99+

nine-monthQUANTITY

0.99+

PureORGANIZATION

0.99+

CharliePERSON

0.99+

bothQUANTITY

0.99+

Bill Graham Civic AuditoriumLOCATION

0.99+

one factorQUANTITY

0.99+

PythonTITLE

0.99+

SivaPERSON

0.99+

Domino'sORGANIZATION

0.99+

dozensQUANTITY

0.99+

last AprilDATE

0.99+

threeQUANTITY

0.98+

PowerShellTITLE

0.98+

three-leggedQUANTITY

0.98+

tomorrowDATE

0.98+

todayDATE

0.98+

both sidesQUANTITY

0.98+

oneQUANTITY

0.98+

FlashTITLE

0.98+

Bill Graham AuditoriumLOCATION

0.97+

about two and a half years agoDATE

0.97+

first serverQUANTITY

0.97+

Nigel Moulton, Dell EMC & Siva Sivakumar | Cisco Live 2018


 

thanks Dave I'm Stu minimun and we're here at Cisco live 2018 in Barcelona Spain happy to be joined on the program by Nigel Moulton the AMIA CTO of Dell EMC and Siva Siva Kumar who is the senior director of data center solutions at Cisco gentlemen thanks so much for joining me thank you great so looking at you know a long partnership of Dell and Cisco Siva talk about the partnership first said absolutely I mean if you look back in time when we launched UCS the very first major partnership we brought and the converged infrastructure we brought at the market was we blocked it is it really set the trend for how customers should consume compute network and storage together and we continue to deliver world-class technologies on both sides and the partnership continues to thrive as we see tremendous adoption from our customers so we are here several years down still a very vibrant partnership in trying to get the best product for the customers yeah Nigel would love to get your perspective so I she was right I think I'd adds it defined a market if you think what true conversion infrastructure is it's different and we're going to discuss them all about that as we go through the UCS fabric is unique in the way that it ties a network fabric to a to compute fabric and when you bring those technologies together and converge them and you have a partnership like Cisco you have a partnership with us yeah it's gonna be a fantastic result for the market because the market moves on and I think Vblock the X block actually helped us achieve that all right so so Steve oh we understand there's billions of reasons why Cisco and Dell would want to keep this partnership going but talk about from an innovate innovation standpoint there's the new BX block 1000 what's new talk about what would what's the innovation here absolutely if you look at the VX block perspective the 1,000 perspective first of all it simplifies an extremely fast successful product to the next level it simplifies the the storage options and it provides a seamless way to consume those technologies from a Cisco perspective as you know we are in our fifth generation of UCS platform continues to be a world-class platform leading blade mark blade servers in the Indus but we also bring the innovation of rack mount servers as well as fatigue fabric larger-scale fibre channel technology as well as we bring our compute network as well as a SAN fabric technology together with world-class storage portfolio and then simplify that for a single pane of glass consumption model that's absolutely the highest level of innovation you're gonna fight Nigel I think back in the early days the joke was you can have a V block any way you want as long as it's black yeah it's obviously a lot of diversity product line but what's new and different here how is this impact new customers and existing custom so I think there's a couple of things to pick up on what Trey said what would shiver sets of a simplification piece the way in which we do release certification matrix the way in which you combine a single software image to manage these multiple discrete components that is greatly simplified in BX well in V Xbox one thousands secondly you remove a model number because historically you're right you bought a three series of five series of seven series and that sort of defined the architecture this is now a system-wide architecture so those technologies that you might have thought of as being discrete before or integrated at an RCM level that was perhaps a little complex for some people that's now dramatically simplified so those are the two things I think that we'd amplify one is a simplification and two you're moving a model number and moving to a system-wide architecture I want to give you both the opportunity give us a little bit you know what what's the future when you talk about the 1,000 system future innovations new use cases sure you know I think if you look at the very enterprise are consuming the demand for more powerful systems that will bring together more consolidation and also address the extensive data center migration opportunities we see is very critical that means the customers are really looking at whether it is a you know in-memory database that scales to much larger scale than before or in a large scale cluster databases or even newer workloads for that matter the appetite for a larger system and they need to have it in the market continues to grow we see a huge install base of our customers as well as new customers looking at options in the market truly realize the strength of the portfolio that each one of us bring to the table and bringing the Best of Breed whether it is today or in the future from our innovation standpoint is is absolutely the way that we are approaching building our partnership and building new solutions here Nigel I mean when you're talking to customers out there or they come in saying hey I'm gonna need this for a couple of months I mean if this is investment they're making for a couple years why is this a partnership built to last so an enterprise-class customer certainly is looking for a technology that's synonymous with reliability availability performance and if you look at what we x-block has traditionally done what the 1,000 offers you see that right but shippers write these application architectures are going to change so if you can make an investment in their technology set now that keeps the premise of reliability and available performance to you today but when you look at future application architectures around high-capacity memory adjacent to our high-performance CPU you're almost in a position where you are preparing the ground for what that application architecture will need and the investments that people make in the vx box system with the UCS power underneath it the computer is significant because it lays out a very clear path to how you will integrate future application architectures with existing application object Nigel Moulton Siva Siva Kumar thank you so much for joining talking about the partnership in the future so thank you pleasure sending it back to Dave in the u.s. st. thanks so much for watching the cube from Cisco live Barcelona thank you

Published Date : Feb 18 2018

SUMMARY :

days the joke was you can have a V block

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Nigel MoultonPERSON

0.99+

NigelPERSON

0.99+

CiscoORGANIZATION

0.99+

StevePERSON

0.99+

DellORGANIZATION

0.99+

DavePERSON

0.99+

Siva Siva KumarPERSON

0.99+

Siva SivakumarPERSON

0.99+

Siva Siva KumarPERSON

0.99+

Dell EMCORGANIZATION

0.99+

TreyPERSON

0.99+

both sidesQUANTITY

0.99+

two thingsQUANTITY

0.98+

Stu minimunPERSON

0.98+

bothQUANTITY

0.95+

fiveQUANTITY

0.95+

1,000QUANTITY

0.94+

todayDATE

0.94+

AMIAORGANIZATION

0.94+

firstQUANTITY

0.94+

Cisco live 2018EVENT

0.93+

twoQUANTITY

0.93+

UCSORGANIZATION

0.93+

billions of reasonsQUANTITY

0.92+

Barcelona SpainLOCATION

0.92+

thousandsQUANTITY

0.92+

fifth generationQUANTITY

0.91+

Dell EMCORGANIZATION

0.9+

SivaPERSON

0.89+

u.s. st.LOCATION

0.87+

each oneQUANTITY

0.85+

a couple yearsQUANTITY

0.84+

oneQUANTITY

0.83+

Xbox oneCOMMERCIAL_ITEM

0.81+

single software imageQUANTITY

0.8+

BarcelonaLOCATION

0.8+

BX block 1000COMMERCIAL_ITEM

0.8+

VblockCOMMERCIAL_ITEM

0.78+

couple of monthsQUANTITY

0.77+

seven seriesQUANTITY

0.77+

first major partnershipQUANTITY

0.76+

singleQUANTITY

0.76+

1,000 systemQUANTITY

0.73+

Cisco Live 2018EVENT

0.7+

Cisco liveORGANIZATION

0.69+

CTOPERSON

0.68+

three seriesQUANTITY

0.65+

VX blockCOMMERCIAL_ITEM

0.58+

thingsQUANTITY

0.55+

several yearsQUANTITY

0.53+

X blockCOMMERCIAL_ITEM

0.53+

VCOMMERCIAL_ITEM

0.5+

coupleQUANTITY

0.44+

Siva Sivakumar, Cisco & Lee Howard, NetApp | Cisco Live EU 2018


 

>> Live from Barcelona, Spain, it's theCUBE covering Cisco Live 2018. Brought to you by Cisco, Veeam and theCUBE's Ecosystem Partner. >> Welcome back to theCUBE coverage here in Barcelona, Spain. We are live at Cisco Live 2018 Europe. I'm John Furrier, the co-founder SiliconANGLE. My co-host Stu Miniman, analyst at WikiBon.com. Our next two guests is Siva Sivakumar, who's the Senior Director Data Center Solutions at Cisco and Lee Howard, Chief Technologist, Global Industry Solutions and Alliances at NetApp. Great partnership here to talk about the tech involved in the partnership. Obviously, in the industry, it's pretty well known that NetApp's doing really well with Cisco. Congratulations. You guys have been enabling great partner dynamics lately, but all the action's been on the intersection between a raise, better, faster, cheaper storage, but also enabling software defined stuff, value. What's the check involved in the partnership? Why is it going so well? Lee, can you start? >> I think offering choice out there is the best thing that we can do. You've got data fabric from a NetApp perspective is that super interconnected highway and as many on ramps as we can build for folks to get on that highway. The more successful you're going to be able to see. I mean, the IDC numbers speak for themselves, prolific, double digit growth. I think we were at 56% last quarter, listed together on there. That's how tight this partnership's been. Leveraging that combined portfolio has given us a very competitive offering out there in the industry. >> Siva, I want to get your thoughts because actually Cisco, we've been... Stu and I love talking about networking, in Cisco in particular because the old days, provision the network and good stuff happens. Apps get built. Things get done. But with the Cloud, you see the shift where you've got DevOps culture, you got cloud-native happening. The real enabling technologies have to be beyond the network, so you guys have been successful with a variety of other things. What's the key things that's making you guys key partners in the ecosystem? What are you guys truly enabling? Is it network programmability? What's the secret sauce from Cisco's standpoint? >> If you look at the way Data Center has evolved in the last decade or so, the way customers are consuming technology is much more at a platform level. They want things simplified. They want to, as you just said, the innovation that's happening in the above layer, in terms of the software's tech and use cases, is just tremendous. They really want the platform to become simple and that's what Cloud did to you anyways. That level of simplification, that level of optimization, but still a best of breed, it is what got us together. We have continued to build world class platforms that started one way, started mainly looking at virtualization in those place over time. In the last four or five years or so, the amount of innovations we have brought on top of a FlexPod, which is a joined solution together, has been right at the cutting edge of where technology is going and where applications are landing. That, in a very large way, has become the key for the success between the two of us. >> We had talked Brandon on here earlier and he validated our thesis and WikiBon actually had a report that came out last year, in the middle of the year, called "True Private Cloud." It was the only research analyst firm that actually got this one right, in my opinion, which validated by you guys is that... Certainly any (mumbles) would argue that everything is moving to the Cloud, tomorrow. Certainly there's some cloud migration and some stuff in the Public Cloud, no problem. But what WikiBon did is they looked at the true Private Cloud numbers, meaning that the action where the spend is and where the buyers are doing the most work both refreshing and retooling is on premises. Because they're actually changing the operating model on premises now as a way, as a way, as a sequence, to hybrid and then maybe full Multi-Cloud or full Public Cloud, whatever they want to do. So that being said, Lee, what does that mean? Because certainly, I understand what a Cloud operating model is, but I'm talking about storage and networking. >> Yeah. >> What does that look like? Is that a full transformation? How long is that going to take? Your thoughts? Comment on that. >> We're seeing, you saw on the key note this morning them referencing brand new titles and new personnel, new human capital that's coming in. I think that is, both you're enabling and your barring the factor to changing how you're consuming resources on site. Cloud architects as they're coming in to prominence enterprise architects. I think we're getting to a point where there's enough of a intuition to the software that's enabling those consumption trends to shift, that it's now a way for not just those that have the inside information, but it's something that's consumable for the masses. I think 2018, you guys hit on DevOps, highly versatile model going forward and I think Multi-Cloud is going to be the right answer. >> John: The roles are changing. >> Roles are changing and we have been seeking to be that technology provider that regardless of where you're at in that journey, you're able to leverage our portfolio to be able to do it. >> John: Does the product change? >> The product, the tenets behind the product, not so much but I think the way that it's being leveraged does end up changing. >> Siva, your thoughts on this. >> You know, if you start to think about the earlier generation of Cloud, it was mainly seen as a capacity argumentation, mainly on the IS. It really started people to think that everything is moving to Cloud, but if you look at the innovation that happens in the Cloud, the Cloud in itself is a massive ecosystem and people want to go do that. So there is a huge reason why the cloud is successful, but that's not necessarily just taking everything on. That's not the trend. What you really see is customers now starting to reach that level of maturity to say hey, there is a tremendous value in what I can do and on-prim, the data gravity and the latency and those things. >> So you agree with the "True Private Cloud" report, the on-prim action is where? >> We continue to see that from our customers, you see it as option and things like that. We absolutely see that is real as well. >> Let's go back to the data center for a second because some people look at it, and it's like oh, well CI's been happening now for gosh, almost a decade now. HCI has a lot of buzz out there. We want to hear what you're hearing from customers because first of all, what we see is there's still the majority of people, still building their own. They're taking the pieces. FlexPod is a little bit different than say hyper-converged from a single skew, but you've still got to build your own CI. Big partnership >> Absolutely. >> There's a huge revenue. HCI has both Cisco and NetApp have pieces there. Where are the customers today? Why is CI still a meaningful part of the discussion today? >> I think it all comes down to scale and how you want to be able to interface. What do you want your data center to be like today? How are you staffed and proficient at implementing a solution and where do you want that data center to go tomorrow? I think CI and HCI absolutely have a place together in the data center, but as we see RFPs fundamentally shift to reflect the new way that infrastructure's being consumed, a cookie cutter approach that you get with a lot of HCIs isn't always going to be the answer. You want to have that full modularity, that full flexibility. It's in the title, it's FlexPod. You want to be able to have that versatility to address not just the initial scoping project but with Flash and able data centers, assets are staying on the books longer and longer. Those depreciation schedules are getting stretched out. Having the versatility, not just to live in today's operating environment, but the operating environment of tomorrow, I think is what's really driving that main stay of CI. >> Siva, we heard in the key notes this morning a lot of discussion about Multi-Cloud and management. Talk about Cisco and NetApp. How do you view those together? Where do you go to market together, co-engineer, things like that? >> Absolutely. If you guys look at what we did in the FlexPod, we created what we would fundamentally call or say code platform for data center. That was the biggest success. We had a lot of work loads and news cases. But in the last two to three years, what we have both done, because individually we have portfolio products that allow a Cloud journey. Cisco is a big proponent of Multi-Cloud and the journey to Cloud and proving customer the right platform so they can pick and choose when to go to Cloud and how to go to Cloud. There are similar assets from NetApp. What we have done is we have built FlexPod solutions that builds on top of on that leverage, is the Cloud Center products, NetApp's data fabric, some of their technology that's call location within the equinox and so on and so forth. What that has allowed is FlexPod as a platform has blossomed as the Cloud has grown because we now offer the choice. That also brought more customers to realize while these guys really provide me the journey to Cloud model. That is more new solution that we are building that continues to drive that mindset from both companies. >> Stu: Lee, you want to build on that? >> Yeah, providing that operational excellence to where you're able to come in and leverage these assets, not just day zero but through the entire lifespan of that asset and that's the... Quality of life improvements is a big thing from NetApp and Cisco's perspective as we're coming together and we're planning what the future state is going to look like. It's not just hey, this is the specific drive capacity you're putting in, that's yesterday's infrastructure. Tomorrow is all about what quality of life, how much time can we give back to those end users out there? >> So I have a question for you guys both. Lee, we'll start with you. You got the storage compute and switching cause you're leaders in those areas, what's next? What's driving the partnership? You talk about how you present the partnership with Cisco to customers. What's in it for me? What's new? What's fresh? What's the deal? >> The conversation we have out there a lot of times there's perception issues that we are the old guard of technology. FlexPod's been around seven going on eight years and they say what's fresh out there? Well, we're so much more than just the infrastructure piece. It's a combined portfolio. Cisco recently announced their partnership with Google Cloud. We have our NFS Native on Azure going forward. Leveraging those better together stories and each other's Rolodex to be able to come in and truly engineer next generation solutions, that's what's getting people excited. How are you going to set me up for success tomorrow, not just how are we going to be successful today on today's technology? >> Siva, how are you guys successful with that? How do you talk about the relationship because they have a unique capabilities, been around the block for awhile in the storage business? Look at the history of NetApp. Very interesting, very engineering oriented, very customer focused. >> Lee: 25 years. >> What's your position in this? >> I think you have two companies who have a tremendous technology focus in building, but what keeps this partnership going together is easily our customers. We are not young anymore in the partnership. We have over $10 billion of install based customers. We have over 8,000 customers. Just keeping up with those customers and providing them the journey however they want to go, it absolutely becomes our, it's our prerogative to make these customers successful in wherever they want to go next. That's a big driver for how we look at innovation. We continue to provide the capabilities that allows our customers to continue their journey and at the same time, we bring our innovation to make this platform successful. >> So I'm going to put you on the spot here, both of you guys. I know Stu's got a question. I got a couple minutes left. Kubernetes has put a line in the sand and separates the two worlds of developers. App developers, really just looking as a fabric of resource, they're creative, doing cool things. Then you've got the network storage software engineering going on under the hood, it's like a car. You're now an engine. You got to work together. What are you guys doing specifically to make that work, make the engine really powerful? >> In the context of Kubernetes, we are-- >> Under the hood. What's under the hood? Kubernetes is the line there, but you got to sit with that app. You got to make the engine powerful. You guys are working together. What's the sound like for the customers? Why NetApp and Cisco together? >> If you look back at our containerization, micro services that journey, we certainly again, same logic, same model. We are building an ecosystem there. We are developing joint solution that optimizes how Kubernetes and Cisco and Google have made several announcements on how we are bringing innovation and infrastructure automation level, network scale level, that allows a massively scalable container environment of Kubernetes environment to be deployed on top of a Cisco infrastructure. NetApp's innovation around Kubernetes, around building the plug-ins for how the plug-ins interact with the storage subsystem that allows us to say if you are deploying a Kubernetes environment, if you are deploying the best of breed, you certainly need the platform that understands and scales with that. >> All right, Lee. Your differentiation for that power engine under the hood with Cisco. >> It's infrastructure is code. That's what we are together and I don't think that across the competitive landscape that they are, everybody else is really embracing it in such a fashion. It's speaking the language that these developers are wanting to do and we're marrying that up with the core tenets that made us an IT powerhouse together. >> It was the developer angle John- >> All right. (laughs) >> We've been doing so many of these together. Absolutely where we wanted to go. >> Stu and I get the-- Infrastructure is code. The great shows. We do the cloud-native, got Kubernetes, we do under the hood. This is a big journey for customers. There's a lot of fud out there and they want to know one thing. Who's going to be around in the future? Having the partnerships is really key. You guys have been very successful. I'll give you guys the final word. Each of you share what customers should expect from the relationship. Siva, we'll start with you. >> I think continued greatness, continued commitment to making customers successful with the innovation that keeps them worry much more about the above the layer, the application, the business critical elements and make the infrastructure as simple and as versatile as possible is absolutely our commitment. >> I'd boil it down to the human capital out there, the human element and that is bringing conviction to your decisions. We've both been here multiple decades together in our partnership. FlexPod's coming up on a decade. It's conviction and knowing that you can rely on the lifeblood of your business being secure with us together. >> Well, congratulations. Certainly, the developers are going to be testing the hardware under the hood and we got a DevOps culture developing all on-prim and in the Cloud hybrid. It's going to be an interesting couple years. Interesting times we live in. Lee Howard, Chief Technologist with NetApp and Siva Sivakumar, Senior Director Data Center Solutions. Here on theCUBE, I'm John Furrier. Stu Miniman. Live from Barcelona. Cisco Live 2018 in Europe. More live coverage from theCUBE after this short break. (techno music)

Published Date : Jan 30 2018

SUMMARY :

Brought to you by Cisco, Veeam but all the action's been on the intersection between I mean, the IDC numbers speak for themselves, What's the key things that's making you guys key partners the amount of innovations we have brought meaning that the action where the spend is How long is that going to take? and I think Multi-Cloud is going to be the right answer. Roles are changing and we have been seeking to be The product, the tenets behind the product, not so much the data gravity and the latency and those things. We continue to see that from our customers, They're taking the pieces. Why is CI still a meaningful part of the discussion today? in the data center, but as we see RFPs fundamentally shift Where do you go to market together, the journey to Cloud model. to where you're able to come in and leverage these assets, You got the storage compute and switching and each other's Rolodex to be able to come in been around the block for awhile in the storage business? and at the same time, we bring our innovation to make this and separates the two worlds of developers. What's the sound like for the customers? for how the plug-ins interact with the storage subsystem Your differentiation for that power engine that across the competitive landscape that they are, All right. Absolutely where we wanted to go. We do the cloud-native, got Kubernetes, and make the infrastructure as simple It's conviction and knowing that you can rely on Certainly, the developers are going to be testing

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Stu MinimanPERSON

0.99+

Siva SivakumarPERSON

0.99+

CiscoORGANIZATION

0.99+

Lee HowardPERSON

0.99+

JohnPERSON

0.99+

HCIORGANIZATION

0.99+

John FurrierPERSON

0.99+

GoogleORGANIZATION

0.99+

2018DATE

0.99+

two companiesQUANTITY

0.99+

last yearDATE

0.99+

LeePERSON

0.99+

eight yearsQUANTITY

0.99+

over $10 billionQUANTITY

0.99+

SivaPERSON

0.99+

BrandonPERSON

0.99+

25 yearsQUANTITY

0.99+

twoQUANTITY

0.99+

Barcelona, SpainLOCATION

0.99+

NetAppORGANIZATION

0.99+

BarcelonaLOCATION

0.99+

bothQUANTITY

0.99+

56%QUANTITY

0.99+

both companiesQUANTITY

0.99+

EuropeLOCATION

0.99+

VeeamORGANIZATION

0.99+

over 8,000 customersQUANTITY

0.99+

last quarterDATE

0.99+

tomorrowDATE

0.99+

WikiBonORGANIZATION

0.99+

theCUBEORGANIZATION

0.99+

TomorrowDATE

0.99+

EachQUANTITY

0.99+

WikiBon.comORGANIZATION

0.98+

todayDATE

0.98+

SiliconANGLEORGANIZATION

0.98+

FlexPodCOMMERCIAL_ITEM

0.98+

StuPERSON

0.98+

KubernetesTITLE

0.98+

three yearsQUANTITY

0.97+

CIORGANIZATION

0.97+

two guestsQUANTITY

0.97+

KubernetesORGANIZATION

0.97+

yesterdayDATE

0.97+

Mayur Dewaikar, Pure Storage & Siva Sivakumar, Cisco - Pure Accelerate 2017 - #PureAccelerate


 

>> Announcer: Live from San Francisco, it's theCUBE. Covering Pure Accelerate 2017. Brought to you by Pure Storage. >> Welcome back to Pier 70 in San Francisco everybody. I'm Dave Vellante with my co-host Stu Miniman and this is theCube. We go out to the events. We extract the signal from the noise. A lot going on here at Pure Accelerate 2017. Siva Sivakumar is here as the Senior Director of Data Center Solutions at Cisco, and Mayur Dewaikar is the Product Management Lead for Converge at Pure Storage. Gentlemen, welcome to theCUBE. >> Thank you. >> Glad to be here. We've heard a lot this morning about Converge, the Cisco partnership. We just had a couple customers on that are doing FlashStack. So Siva, let's start with you. Thoughts on Accelerate? >> This is probably the coolest event I've been in many years. >> Different venue, right? >> The ambience, the venue, and the fact that Warriors won last night, it's just joy, it's awesome today. >> Dave: Oh, you want to talk hoops for a little bit? You know, we can do that if you guys. We're Patriots fans so we know. We're not just winning fans. Two out of the last three, it's good. It's good to be a winner, isn't it. >> Yep, absolutely. >> Well, Mayur, give us your thoughts on Converge. You guys are talking about Converge a lot today in FlashStack. We just heard from some customers. Talk about the strategy. What are you guys trying to accomplish there. >> Yeah, so we launched the FlashStack program about three years ago and what we were starting to see in the industry was that there was a very clear preference from customers to buy full stack solutions. So we thought that was an opportunity for us to take our storage business and move it into an adjacent market, which was Converge. And we thought we had really addressed a lot of the storage pain points that people were seeing with the existing Converge solution so with our flash performance and the simplicity that Pure brings to the table, we thought we had an opportunity really to team up with Cisco and build a solution that can be sufficiently differentiated and something that people would really love to try out. >> Mayur, I wonder if you could help clarify something. A lot of times people hear converge and they think coming together. When I think about the solutions that both Cisco, UCS, and Pure, there's lots of software and it's really a distributed-type scalable architecture so how am I both converged and scalable? >> So what we're basically doing is we are trying to, we're bringing best of breed solutions together, right. So I think there's a lot of synergy between the way UCS is architectured and Pure is architectured. So we're both stateless architectures on the compute side and the storage side and what we're doing as part of the Pure Storage for or FlashStack Converge program is that we're really doing these things together with a unified management platform, which really brings everything together. So it really simplifies the deployment, it simplifies the day-to-day management of the entire stack, which is really what people are looking for. >> Yeah, so Siva, we've heard a lot today about Converge, we heard some comments about hyper-converge. What's the difference between converged and hyper-converged? >> I think if you look at the evolution in the industry, these are big ships or the big ways customers want to consume. The genesis of all the work around convergence, if you will, that started it all was the customer started to realize, "I have bigger problems to solve from an IT perspective. "I would rather not solve infrastructure "problems all by myself. "I want the vendors to solve this. "I want the vendors to give me an experience "that is far more turnkey so I invest my time "and resource on higher artifacts" that are more in a business critical from their perspective. That truly allowed us to look into convergence as a strategy and bring together certain use cases and value propositions that is very critical to IT. High availability, scalability, multi-side deployment, which are all critical for an IT to solve. We solve it first ourself as a joint architecture. We validate that and then we provide blueprints that both our customers can choose in and our partners can choose. We had a very big channel partner community. Lot of our partners leverage the work we do to deliver great value to our customers. While Convergence was heavily centered around heavy-based storages that the market was absorbing, the evolution of storage to include more in the software-defined work, created another set of categories that allows customers to say, you know what, my interest is much more on the simplification and start small and those types of models, it propped a new industry at a new paradigm in the industry. From our perspective, there's a huge value in convergence. It's a 7 billion dollar business and IDC thinks it continues to grow. And we absolutely believe we have a purpose built on a ground-up platform that was built for Flash, that's the Pure Storage architecture, is truly here and truly is a big part of our strategy-building dive. And of course, as more use cases are coming to the compute side, we are here to embrace technologies like hyper-convergence because that's obviously something that's great for a software vendor to embrace as well. >> So from your standpoint, I think of you guys as software heavy, software led, but you're not participating in the so-called hyper-converge. Is that because you don't want to own that part of the stack, you'd rather partner for it. What's your point of view there? >> Yeah, so I think from our standpoint, we believe that there is basically use cases both for hyper-converge and converge infrastructure, right. We believe that with the program we have at Cisco, we can basically provide a very good, a very compelling solution of FlashStack. And Cisco already has a solution in hyperflex that addresses the hyper-converge use case and we really see both of these co-existing in a lot of customer environments that are use cases where NCI absolutely shines and then there are use cases where we believe FlashStack is really the right solution. >> But it's interesting you haven't sort of chased that trend, you're more focused on your areas and you're doing very well with it. Is that fundamental to the strategy or is it just sort of you guys are focused elsewhere. >> Yeah, so I think for us, for Pure Storage, I think we are looking at the Converged market really as there is a lot of existing business there that can be had. Which is really tied to legacy storage platforms coming up for refresh as part of the Converged infrastructure deployments people already have. So that in itself is a fairly large opportunity for us and we believe that with the messaging we have, which is you can consolidate a lot of your workloads on FlashStack. I think the platform that FlashStack is providing is really very well-suited for the use cases that Pure Storage has traditionally played in. Which is really the enterprise workloads, in my opinion. >> Is it fair to say that Convergence 1 data, and of course Cisco was heavily involved in Convergence 1.0, you kind of arguably created it along with some partners, but is it fair to say it was just too complex for a lot of customers? And are you trying to take that to the next level? Can you add some color to that? >> Yeah, I can answer that. I think Convergence 1.0 was truly about idea operational simplification. Because they truly wanted to consume these best-of-breed technologies without having to deal with so much of technology consumption itself but as a system-level consumption. But apparently what happened in the industry is obviously the evolution of cloud. Cloud brought a completely different paradigm of how you consume an infrastructure in itself. I mean, email is an infrastructure now because you buy from a cloud winner, you get your VM in an email. So that's a very different way of consumption model which created additionally requirements for more simplification. The turnkey experiences and things like that led to another category. But if you look at FlashStack, what we are doing is we are bringing this simplification model into FlashStack as well. We recognize, while building the best-of-breed is a great idea, and great market for itself, simplification is never lost. People love that as well. So we're looking at bringing together as close to a single pane of glass as possible with such strong technology play to deliver some of the simplification in this model as well. So you're truly trying to bridge the gap and offering something that customers really want to see. >> Yeah, simplification's definitely a big piece of that wave of both converged and hyper-converged. When I think back, when we launched all of these solutions, it was, okay, that Day Zero, I should be able to speed that up and the Day One, the stuff afterwards, we should be able to make that easier. How are you measuring that these days? Any customers you can speak to as to how they dramatically shift that, kind of keeping the lights on versus really being able to focus on the business. >> Yeah, so I think if you really look at a Converged stack, there is three distinct pieces in it, right. So there's compute, storage, networking. And I think Cisco did a phenomenal job with the UCS and UCS manager platforms in helping really put a cookie cutter approach on deploying compute. So if you look at what was remaining, networking was always kind of the low-hanging fruit. Storage was very complex. So with Pure coming in to the picture, we have really simplified the overall deployment and management of storage. So we were talking from days down to a few hours to get storage going and get the entire FlashStack infrastructure going as a result. And then what we're doing is, we're using a lot of existing tools that exist in the ecosystem. So great example of that is UCS Director which is being used very prominently by customers to deploy their entire data centers. We are integrating with that and in addition to that, we're also integrating with a lot of hypervisor level tools like Recenter or hyper v-level tools. And the benefit is that customers are getting to use the tools that they're already used to with the simplicity of UCS and Pure to really simplify the overall deployment and also management of the entire stack. >> So really, the problem you're solving is one of IT labor intensity, right. IT labor is too much IT labor, it's too non-differentiated, it's too expensive. Is that fair? >> Well, yeah. So fundamentally what we are solving is providing you a platform. A platform and an experience that IT wants, IT desires, but that also is optimized so that it can easily provide a platform experience but then the workload and the diversification you see in the market and the one side is an article database. You don't touch for four years kind of a thing. On the other hand, you have a container which you use for two seconds. So you really have a complete range of use cases. Each demand something different from a platform. Our strategy and our goal is to provide a single cohesive platform that uniformly works across all of these use cases from an IT operations and management standpoint. You realize the challenge is quite complex but the solution is a huge value for our customers and that's really our journey in solving this problem. >> Can you share any, what should we expect to see from a kind of joint-engineering deployment going forward. We heard in the keynote this morning, said some really you know, the cloud native, AI, ML type deployments. We're talking less about virtualization, more about containers and microservices. Where should we look to Cisco and Pure in the future? >> So, I think there's an interesting demo on the floor. It really talks about something that's cutting edge. NVMe over Fabric, so the next big innovation from Pure is NVMe, all NVMe, right. That is, obviously, no performance goals there. It's absolutely a screaming box. We have a Cisco adaptor technology that can deliver high performance, low-latency iO transport on top of a fabric, on top of an Ethernet fabric to talk ENVme from the host. Just the power of how much you can do iO subsystem from a compute perspective onto the network and talking to the storage and the ability to bring a superclass performance on a storage perspective is absolutely a next generation cutting edge and vendors like this coming together truly solves the industry's next big problem. Who better to solve a fabric, network, bandwidth issue than Cisco? Partnering with best-of-breed from the storage. Then that's one, just sort of a technology and architectural play if you will. But on a use-case workload type of scenarios, we've done a lot of the traditional use cases quite a bit in the databases and the VDIs of the world. But we are now looking at the next generation of use cases. Containers, microservices. How do I make the docker environment integrate seamlessly with the FlashStack? Now, this is already different, this is a very different paradigm. How do I enable FlashStack to be very simple to consume kubernetes. Because these are use cases where the developer who is much more focused on clouds does not really think there is an infrastructure underneath. He doesn't even care about it. So we need to give him that experience so that it's a seamless way of deploying and managing these DevOps environments as well. So that's the next wave of work we are doing is to provide that agility factor coming out of the FlashStack. and if conditional architecture is being built for this, it obviously helps. >> And you see NVMe over Fabric as kind of one of those foundational aspects, right. >> That'll be another architectural cog in the same context of what we are trying to do. >> Are you, with FlashStack, able to preserve that same experience for customers? The Evergreen experience, the never have to migrate your day, I mean all that wonderful stuff. Does that translate into the partnership? >> We are. So, we are taking a lot of the same goodness we have with the storage platform and we're extending that into FlashStack. So we have, very similar to Pure, you can almost non-disruptively upgrade pretty much everything in the UCS stack and we have special programs now with Cisco to which we can provide people the option to also get new gear every couple of years. Very similar to the Evergreen Storage Program we have through Pure Storage. >> So is it fair to say, well, first of all, is that unique to Pure or is that something that Cisco sort of has innovated on? >> It's, from a storage perspective, Pure, I think truly created the easy button for storage which is nonexistent. It's one of the hardest problems to solve. >> But what about the other pieces? >> And Cisco obviously pioneered the fabric-based stateless compute, which is still a standard in the industry of how to do the easy button for compute is truly what we brought to the table that really revolutionized the industry. I absolutely think that's where the architecture individually are building technology that are great. When you combine that and jointly engineer the solution and provide the turnkey value for the customer then the absolute value is manifested in a very big way. And I think that's our journey. We are hear, obviously we are hearing a lot of great customers coming in but the more customers we hear, the more we learn. >> But you've substantially sort of recreated that experience to a great degree. >> Siva: Absolutely, absolutely. >> I think that's a huge differentiator for Pure. You don't hear a lot of other companies talking about it and when you talk to your customers, they always point to that. You know, the migrations are just such a painful, horrible experience. >> Yep. >> So, good stuff. Alright, we have to leave it there, gents. Thanks very much for coming on theCUBE. Really appreciate it. >> Mayur: Thank you. >> Pleasure, thank you. >> Alright, take care. Keep it right there, buddy. We'll be back with our next guest. This is theCUBE, we're live from Pure Accelerate 2017. Be right back.

Published Date : Jun 13 2017

SUMMARY :

Brought to you by Pure Storage. and Mayur Dewaikar is the Product Management Lead about Converge, the Cisco partnership. This is probably the coolest event The ambience, the venue, and the fact You know, we can do that if you guys. Talk about the strategy. a lot of the storage pain points that people were seeing and they think coming together. So it really simplifies the deployment, What's the difference between converged and hyper-converged? heavy-based storages that the market was absorbing, that part of the stack, you'd rather partner for it. that addresses the hyper-converge use case Is that fundamental to the strategy the messaging we have, which is you can consolidate and of course Cisco was heavily involved in Convergence 1.0, is obviously the evolution of cloud. of that wave of both converged and hyper-converged. And the benefit is that customers are getting to use So really, the problem you're solving On the other hand, you have a container We heard in the keynote this morning, Just the power of how much you can do iO subsystem And you see NVMe over Fabric as kind of in the same context of what we are trying to do. The Evergreen experience, the never have in the UCS stack and we have special programs now It's one of the hardest problems to solve. of great customers coming in but the more customers we hear, that experience to a great degree. and when you talk to your customers, Alright, we have to leave it there, gents. This is theCUBE, we're live from Pure Accelerate 2017.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

CiscoORGANIZATION

0.99+

DavePERSON

0.99+

two secondsQUANTITY

0.99+

UCSORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

Pure StorageORGANIZATION

0.99+

ConvergeORGANIZATION

0.99+

Mayur DewaikarPERSON

0.99+

San FranciscoLOCATION

0.99+

Siva SivakumarPERSON

0.99+

FlashStackTITLE

0.99+

SivaPERSON

0.99+

TwoQUANTITY

0.99+

Pier 70LOCATION

0.99+

WarriorsORGANIZATION

0.99+

PatriotsORGANIZATION

0.99+

NCIORGANIZATION

0.99+

7 billion dollarQUANTITY

0.99+

three distinct piecesQUANTITY

0.99+

four yearsQUANTITY

0.99+

bothQUANTITY

0.99+

MayurPERSON

0.99+

EachQUANTITY

0.98+

Day OneQUANTITY

0.98+

todayDATE

0.98+

last nightDATE

0.98+

one sideQUANTITY

0.97+

PureORGANIZATION

0.96+

IDCORGANIZATION

0.95+

Convergence 1.0TITLE

0.95+

Pure Accelerate 2017EVENT

0.93+

Day ZeroQUANTITY

0.9+

single paneQUANTITY

0.89+

convergeTITLE

0.89+

#PureAccelerateORGANIZATION

0.88+

this morningDATE

0.87+

three years agoDATE

0.86+

Center SolutionsORGANIZATION

0.82+

UCSTITLE

0.8+

ConvergeOTHER

0.79+

Evergreen Storage ProgramOTHER

0.79+

FlashTITLE

0.78+

oneQUANTITY

0.76+

waveEVENT

0.73+

single cohesive platformQUANTITY

0.73+

Convergence 1TITLE

0.72+

ConvergeTITLE

0.72+

Digging into HeatWave ML Performance


 

(upbeat music) >> Hello everyone. This is Dave Vellante. We're diving into the deep end with AMD and Oracle on the topic of mySQL HeatWave performance. And we want to explore the important issues around machine learning. As applications become more data intensive and machine intelligence continues to evolve, workloads increasingly are seeing a major shift where data and AI are being infused into applications. And having a database that simplifies the convergence of transaction and analytics data without the need to context, switch and move data out of and into different data stores. And eliminating the need to perform extensive ETL operations is becoming an industry trend that customers are demanding. At the same time, workloads are becoming more automated and intelligent. And to explore these issues further, we're happy to have back in theCUBE Nipun Agarwal, who's the Senior Vice President of mySQL HeatWave and Kumaran Siva, who's the Corporate Vice President Strategic Business Development at AMD. Gents, hello again. Welcome back. >> Hello. Hi Dave. >> Thank you, Dave. >> Okay. Nipun, obviously machine learning has become a must have for analytics offerings. It's integrated into mySQL HeatWave. Why did you take this approach and not the specialized database approach as many competitors do right tool for the right job? >> Right? So, there are a lot of customers of mySQL who have the need to run machine learning on the data which is store in mySQL database. So in the past, customers would need to extract the data out of mySQL and they would take it to a specialized service for running machine learning. Now, the reason we decided to incorporate machine learning inside the database, there are multiple reasons. One, customers don't need to move the data. And if they don't need to move the data, it is more secure because it's protected by the same access controlled mechanisms as rest of the data There is no need for customers to manage multiple services. But in addition to that, when we run the machine learning inside the database customers are able to leverage the same service the same hardware, which has been provisioned for OTP analytics and use machine learning capabilities at no additional charge. So from a customer's perspective, they get the benefits that it is a single database. They don't need to manage multiple services. And it is offered at no additional charge. And then as another aspect, which is kind of hard to learn which is based on the IP, the work we have done it is also significantly faster than what customers would get by having a separate service. >> Just to follow up on that. How are you seeing customers use HeatWaves machine learning capabilities today? How is that evolving? >> Right. So one of the things which, you know customers very often want to do is to train their models based on the data. Now, one of the things is that data in a database or in a transaction database changes quite rapidly. So we have introduced support for auto machine learning as a part of HeatWave ML. And what it does is that it fully automates the process of training. And this is something which is very important to database users, very important to mySQL users that they don't really want to hire or data scientists or specialists for doing training. So that's the first part that training in HeatWave ML is fully automated. Doesn't require the user to provide any like specific parameters, just the source data and the task which they want to train. The second aspect is the training is really fast. So the training is really fast. The benefit is that customers can retrain quite often. They can make sure that the model is up to date with any changes which have been made to their transaction database. And as a result of the models being up to date, the accuracy of the prediction is high. Right? So that's the first aspect, which is training. The second aspect is inference, which customers run once they have the models trained. And the third thing, which is perhaps been the most sought after request from the mySQL customers is the ability to provide explanations. So, HeatWave ML provides explanations for any model which has been generated or trained by HeatWave ML. So these are the three capabilities- training, inference and explanations. And this whole process is completely automated, doesn't require a specialist or a data scientist. >> Yeah, that's nice. I mean, training obviously very popular today. I've said inference I think is going to explode in the coming decade. And then of course, AI explainable AI is a very important issue. Kumaran, what are the relevant capabilities of the AMD chips that are used in OCI to support HeatWave ML? Are they different from say the specs for HeatWave in general? >> So, actually they aren't. And this is one of the key features of this architecture or this implementation that is really exciting. Um, there with HeatWave ML, you're using the same CPU. And by the way, it's not a GPU, it's a CPU for both for all three of the functions that Nipun just talked about- inference, training and explanation all done on CPU. You know, bigger picture with the capabilities we bring here we're really providing a balance, you know between the CPU cores, memory and the networking. And what that allows you to do here is be able to feed the CPU cores appropriately. And within the cores, we have these AVX instruc... extensions in with the Zen 2 and Zen 3 cores. We had AVX 2, and then with the Zen 4 core coming out we're going to have AVX 512. But we were able to with that balance of being able to bring in the data and utilize the high memory bandwidth and then use the computation to its maximum we're able to provide, you know, build pride enough AI processing that we are able to get the job done. And then we're built to build a fit into that larger pipeline that that we build out here with the HeatWave. >> Got it. Nipun you know, you and I every time we have a conversation we've got to talk benchmarks. So you've done machine learning benchmarks with HeatWave. You might even be the first in the industry to publish you know, transparent, open ML benchmarks on GitHub. I mean, I, I wouldn't know for sure but I've not seen that as common. Can you describe the benchmarks and the data sets that you used here? >> Sure. So what we did was we took a bunch of open data sets for two categories of tasks- classification and regression. So we took about a dozen data sets for classification and about six for regression. So to give an example, the kind of data sets we used for classifications like the airlines data set, hex sensors bank, right? So these are open data sets. And what we did was for on these data sets we did a comparison of what would it take to train using HeatWave ML? And then the other service we compared with is that RedShift ML. So, there were two observations. One is that with HeatWave ML, the user does not need to provide any tuning parameters, right? The HeatWave ML using RML fully generates a train model, figures out what are the right algorithms? What are the right features? What are the right hyper parameters and sets, right? So no need for any manual intervention not so the case with Redshift ML. The second thing is the performance, right? So the performance of HeatWave ML aggregate on these 12 data sets for classification and the six data sets on regression. On an average, it is 25 times faster than Redshift ML. And note that Redshift ML in turn involves SageMaker, right? So on an average, HeatWave ML provides 25 times better performance for training. And the other point to note is that there is no need for any human intervention. That's fully automated. But in the case of Redshift ML, many of these data sets did not even complete in the set duration. If you look at price performance, one of the things again I want to highlight is because of the fact that AMD does pretty well in all kinds of workloads. We are able to use the same cluster users and use the same cluster for analytics, for OTP or for machine learning. So there is no additional cost for customers to run HeatWave ML if they have provision HeatWave. But assuming a user is provisioning a HeatWave cluster only to run HeatWave ML, right? That's the case, even in that case the price performance advantage of HeatWave ML over Redshift ML is 97 times, right? So 25 times faster at 1% of the cost compared to Redshift ML And all these scripts and all this information is available on GitHub for customers to try to modify and like, see, like what are the advantages they would get on their workloads? >> Every time I hear these numbers, I shake my head. I mean, they're just so overwhelming. Um, and so we'll see how the competition responds when, and if they respond. So, but thank you for sharing those results. Kumaran, can you elaborate on how the specs that you talked about earlier contribute to HeatWave ML's you know, benchmark results. I'm particularly interested in scalability, you know Typically things degrade as you push the system harder. What are you seeing? >> No, I think, I think it's good. Look, yeah. That's by those numbers, just blow me, blow my head too. That's crazy good performance. So look from, from an AMD perspective, we have really built an architecture. Like if you think about the chiplet architecture to begin with, it is fundamentally, you know, it's kind of scaling by design, right? And, and one of the things that we've done here is been able to work with, with the HeatWave team and heat well ML team, and then been able to, to within within the CPU package itself, be able to scale up to take very efficient use of all of the course. And then of course, work with them on how you go between nodes. So you can have these very large systems that can run ML very, very efficiently. So it's really, you know, building on the building blocks of the chiplet architecture and how scaling happens there. >> Yeah. So it's you're saying it's near linear scaling or essentially. >> So, let Nipun comment on that. >> Yeah. >> Is it... So, how about as cluster sizes grow, Nipun? >> Right. >> What happens there? >> So one of the design points for HeatWave is scale out architecture, right? So as you said, that as we add more data set or increase the size of the data, or we add the number of nodes to the cluster, we want the performance to scale. So we show that we have near linear scale factor, or nearly near scale scalability for SQL workloads in the case of HeatWave ML, as well. As users add more nodes to the cluster so the size of the cluster the performance of HeatWave ML improves. So I was giving you this example that HeatWave ML is 25 times faster compared to Redshift ML. Well, that was on a cluster size of two. If you increase the cluster size of HeatWave ML to a larger number. But I think the number is 16. The performance advantage over Redshift ML increases from 25 times faster to 45 times faster. So what that means is that on a cluster size of 16 nodes HeatWave ML is 45 times faster for training these again, dozen data sets. So this shows that HeatWave ML skills better than the computation. >> So you're saying adding nodes offsets any management complexity that you would think of as getting in the way. Is that right? >> Right. So one is the management complexity and which is why by features like last customers can scale up or scale down, you know, very easily. The second aspect is, okay What gives us this advantage, right, of scalability? Or how are we able to scale? Now, the techniques which we use for HeatWave ML scalability are a bit different from what we use for SQL processing. So in the case of HeatWave ML, they really like, you know, three, two trade offs which we have to be careful about. One is the accuracy. Because we want to provide better performance for machine learning without compromising on the accuracy. So accuracy would require like more synchronization if you have multiple threads. But if you have too much of synchronization that can slow down the degree of patterns that we get. Right? So we have to strike a fine balance. So what we do is that in HeatWave ML, there are different phases of training, like algorithm selection, feature selection, hyper probability training. Each of these phases is analyzed. And for instance, one of the ways techniques we use is that if you're trying to figure out what's the optimal hyper parameter to be used? We start up with the search space. And then each of the VMs gets a part of the search space. And then we synchronize only when needed, right? So these are some of the techniques which we have developed over the years. And there are actually paper's filed, research publications filed on this. And this is what we do to achieve good scalability. And what that results to the customer is that if they have some amount of training time and they want to make it better they can just provision a larger cluster and they will get better performance. >> Got it. Thank you. Kumaran, when I think of machine learning, machine intelligence, AI, I think GPU but you're not using GPU. So how are you able to get this type of performance or price performance without using GPU's? >> Yeah, definitely. So yeah, that's a good point. And you think about what is going on here and you consider the whole pipeline that Nipun has just described in terms of how you get you know, your training, your algorithms And using the mySQL pieces of it to get to the point where the AI can be effective. In that process what happens is you have to have a lot of memory to transactions. A lot of memory bandwidth comes into play. And then bringing all that data together, feeding the actual complex that does the AI calculations that in itself could be the bottleneck, right? And you can have multiple bottlenecks along the way. And I think what you see in the AMD architecture for epic for this use case is the balance. And the fact that you are able to do the pre-processing, the AI, and then the post-processing all kind of seamlessly together, that has a huge value. And that goes back to what Nipun was saying about using the same infrastructure, gets you the better TCO but it also gets you gets you better performance. And that's because of the fact that you're bringing the data to the computation. So the computation in this case is not strictly the bottleneck. It's really about how you pull together what you need and to do the AI computation. And that is, that's probably a more, you know, it's a common case. And so, you know, you're going to start I think the least start to see this especially for inference applications. But in this case we're doing both inference explanation and training. All using the the CPU in the same OCI infrastructure. >> Interesting. Now Nipun, is the secret sauce for HeatWave ML performance different than what we've discussed before you and I with with HeatWave generally? Is there some, you know, additive engine additive that you're putting in? >> Right? Yes. The secret sauce is indeed different, right? Just the way I was saying that for SQL processing. The reason we get very good performance and price performance is because we have come up with new algorithms which help the SQL process can scale out. Similarly for HeatWave ML, we have come up with new IP, new like algorithms. One example is that we use meta-learn proxy models, right? That's the technique we use for automating the training process, right? So think of this meta-learn proxy models to be like, you know using machine learning for machine learning training. And this is an IP which we developed. And again, we have published the results and the techniques. But having such kind of like techniques is what gives us a better performance. Similarly, another thing which we use is adaptive sampling that you can have a large data set. But we intelligently sample to figure out that how can we train on a small subset without compromising on the accuracy? So, yes, there are many techniques that you have developed specifically for machine learning which is what gives us the better performance, better price performance, and also better scalability. >> What about mySQL autopilot? Is there anything that differs from HeatWave ML that is relevant? >> Okay. Interesting you should ask. So mySQL Autopilot is think of it to be an application using machine learning. So mySQL Autopilot uses machine learning to automate various aspects of the database service. So for instance, if you want to figure out that what's the right partitioning scheme to partition the data in memory? We use machine learning techniques to figure out that what's the right, the best column based on the user's workload to partition the data in memory Or given a workload, if you want to figure out what is the right cluster size to provision? That's something we use mySQL autopilot for. And I want to highlight that we don't aware of any other database service which provides this level of machine learning based automation which customers get with mySQL Autopilot. >> Hmm. Interesting. Okay. Last question for both of you. What are you guys working on next? What can customers expect from this collaboration specifically in this space? Maybe Nipun, you can start and then Kamaran can bring us home. >> Sure. So there are two things we are working on. One is based on the feedback we have gotten from customers, we are going to keep making the machine learning capabilities richer in HeatWave ML. That's one dimension. And the second thing is which Kamaran was alluding to earlier, We are looking at the next generation of like processes coming from AMD. And we will be seeing as to how we can more benefit from these processes whether it's the size of the L3 cache, the memory bandwidth, the network bandwidth, and such or the newer effects. And make sure that we leverage the all the greatness which the new generation of processes will offer. >> It's like an engineering playground. Kumaran, let's give you the final word. >> No, that's great. Now look with the Zen 4 CPU cores, we're also bringing in AVX 512 instruction capability. Now our implementation is a little different. It was in, in Rome and Milan, too where we use a double pump implementation. What that means is, you know, we take two cycles to do these instructions. But the key thing there is we don't lower our speed of the CPU. So there's no noisy neighbor effects. And it's something that OCI and the HeatWave has taken full advantage of. And so like, as we go out in time and we see the Zen 4 core, we can... we see up to 96 CPUs that that's going to work really well. So we're collaborating closely with, with OCI and with the HeatWave team here to make sure that we can take advantage of that. And we're also going to upgrade the memory subsystem to get to 12 channels of DDR 5. So it should be, you know there should be a fairly significant boost in absolute performance. But more important or just as importantly in TCO value for the customers, the end customers who are going to adopt this great service. >> I love their relentless innovation guys. Thanks so much for your time. We're going to have to leave it there. Appreciate it. >> Thank you, David. >> Thank you, David. >> Okay. Thank you for watching this special presentation on theCUBE. Your leader in enterprise and emerging tech coverage.

Published Date : Sep 14 2022

SUMMARY :

And eliminating the need and not the specialized database approach So in the past, customers How are you seeing customers use So one of the things of the AMD chips that are used in OCI And by the way, it's not and the data sets that you used here? And the other point to note elaborate on how the specs And, and one of the things or essentially. So, how about as So one of the design complexity that you would So in the case of HeatWave ML, So how are you able to get And the fact that you are Nipun, is the secret sauce That's the technique we use for automating of the database service. What are you guys working on next? And the second thing is which Kamaran Kumaran, let's give you the final word. OCI and the HeatWave We're going to have to leave it there. and emerging tech coverage.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

RomeLOCATION

0.99+

DavePERSON

0.99+

DavidPERSON

0.99+

OCIORGANIZATION

0.99+

Nipun AgarwalPERSON

0.99+

MilanLOCATION

0.99+

45 timesQUANTITY

0.99+

25 timesQUANTITY

0.99+

12 channelsQUANTITY

0.99+

OracleORGANIZATION

0.99+

AMDORGANIZATION

0.99+

Zen 4COMMERCIAL_ITEM

0.99+

KumaranPERSON

0.99+

HeatWaveORGANIZATION

0.99+

Zen 3COMMERCIAL_ITEM

0.99+

second aspectQUANTITY

0.99+

Kumaran SivaPERSON

0.99+

12 data setsQUANTITY

0.99+

first aspectQUANTITY

0.99+

97 timesQUANTITY

0.99+

Zen 2COMMERCIAL_ITEM

0.99+

bothQUANTITY

0.99+

firstQUANTITY

0.99+

OneQUANTITY

0.99+

two thingsQUANTITY

0.99+

oneQUANTITY

0.99+

EachQUANTITY

0.99+

1%QUANTITY

0.99+

two cyclesQUANTITY

0.99+

three capabilitiesQUANTITY

0.99+

third thingQUANTITY

0.99+

eachQUANTITY

0.99+

AVX 2COMMERCIAL_ITEM

0.99+

AVX 512COMMERCIAL_ITEM

0.99+

second thingQUANTITY

0.99+

Redshift MLTITLE

0.99+

six data setsQUANTITY

0.98+

HeatWaveTITLE

0.98+

mySQL AutopilotTITLE

0.98+

twoQUANTITY

0.98+

NipunPERSON

0.98+

two categoriesQUANTITY

0.98+

mySQLTITLE

0.98+

two observationsQUANTITY

0.98+

first partQUANTITY

0.98+

mySQL autopilotTITLE

0.98+

threeQUANTITY

0.97+

SQLTITLE

0.97+

One exampleQUANTITY

0.97+

single databaseQUANTITY

0.95+

16QUANTITY

0.95+

todayDATE

0.95+

about sixQUANTITY

0.95+

HeatWavesORGANIZATION

0.94+

about a dozen data setsQUANTITY

0.94+

16 nodesQUANTITY

0.93+

mySQL HeatWaveTITLE

0.93+

AMD Oracle Partnership Elevates MySQLHeatwave


 

(upbeat music) >> For those of you who've been following the cloud database space, you know that MySQL HeatWave has been on a technology tear over the last 24 months with Oracle claiming record breaking benchmarks relative to other database platforms. So far, those benchmarks remain industry leading as competitors have chosen not to respond, perhaps because they don't feel the need to, or maybe they don't feel that doing so would serve their interest. Regardless, the HeatWave team at Oracle has been very aggressive about its performance claims, making lots of noise, challenging the competition to respond, publishing their scripts to GitHub. But so far, there are no takers, but customers seem to be picking up on these moves by Oracle and it's likely the performance numbers resonate with them. Now, the other area we want to explore, which we haven't thus far, is the engine behind HeatWave and that is AMD. AMD's epic processors have been the powerhouse on OCI, running MySQL HeatWave since day one. And today we're going to explore how these two technology companies are working together to deliver these performance gains and some compelling TCO metrics. In fact, a recent Wikibon analysis from senior analyst Marc Staimer made some TCO comparisons in OLAP workloads relative to AWS, Snowflake, GCP, and Azure databases, you can find that research on wikibon.com. And with that, let me introduce today's guest, Nipun Agarwal senior vice president of MySQL HeatWave and Kumaran Siva, who's the corporate vice president for strategic business development at AMD. Welcome to theCUBE gentlemen. >> Welcome. Thank you. >> Thank you, Dave. >> Hey Nipun, you and I have talked a lot about this. You've been on theCUBE a number of times talking about MySQL HeatWave. But for viewers who may not have seen those episodes maybe you could give us an overview of HeatWave and how it's different from competitive cloud database offerings. >> Sure. So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. It's a single database, which can be used to run transactional processing, analytics and machine learning workloads. So, in the past, MySQL has been designed and optimized for transaction processing. So customers of MySQL when they had to run, analytics machine learning, would need to extract the data out of MySQL, into some other database or service, to run analytics or machine learning. MySQL HeatWave offers a single database for running all kinds of workloads so customers don't need to extract data into some of the database. In addition to having a single database, MySQL HeatWave is also very performant compared to one up databases and also it is very price competitive. So the advantages are; single database, very performant, and very good price performance. >> Yes. And you've published some pretty impressive price performance numbers against competitors. Maybe you could describe those benchmarks and highlight some of the results, please. >> Sure. So one thing to notice that the performance of any database is going to like vary, the performance advantage is going to vary based on, the size of the data and the specific workloads, so the mileage varies, that's the first thing to know. So what we have done is, we have published multiple benchmarks. So we have benchmarks on PPCH or PPCDS and we have benchmarks on different data sizes because based on the customer's workload, the mileage is going to vary, so we want to give customers a broad range of comparisons so that they can decide for themselves. So in a specific case, where we are running on a 30 terabyte PPCH workload, HeatWave is about 18 times better price performance compared to Redshift. 18 times better compared to Redshift, about 33 times better price performance, compared to Snowflake, and 42 times better price performance compared to Google BigQuery. So, this is on 30 Terabyte PPCH. Now, if the data size is different, or the workload is different, the characteristics may vary slightly but this is just to give a flavor of the kind of performance advantage MySQL HeatWave offers. >> And then my last question before we bring in Kumaran. We've talked about the secret sauce being the tight integration between hardware and software, but would you add anything to that? What is that secret sauce in HeatWave that enables you to achieve these performance results and what does it mean for customers? >> So there are three parts to this. One is HeatWave has been designed with a scale out architecture in mind. So we have invented and implemented new algorithms for skill out query processing for analytics. The second aspect is that HeatWave has been really optimized for cloud, commodity cloud, and that's where AMD comes in. So for instance, many of the partitioning schemes we have for processing HeatWave, we optimize them for the L3 cache of the AMD processor. The thing which is very important to our customers is not just the sheer performance but the price performance, and that's where we have had a very good partnership with AMD because not only does AMD help us provide very good performance, but the price performance, right? And that all these numbers which I was showing, big part of it is because we are running on AMD which provides very good price performance. So that's the second aspect. And the third aspect is, MySQL autopilot, which provides machine learning based automation. So it's really these three things, a combination of new algorithms, design for scale out query processing, optimized for commodity cloud hardware, specifically AMD processors, and third, MySQL auto pilot which gives us this performance advantage. >> Great, thank you. So that's a good segue for AMD and Kumaran. So Kumaran, what is AMD bringing to the table? What are the, like, for instance, relevance specs of the chips that are used in Oracle cloud infrastructure and what makes them unique? >> Yeah, thanks Dave. That's a good question. So, OCI is a great customer of ours. They use what we call the top of stack devices meaning that they have the highest core count and they also are very, very fast cores. So these are currently Zen 3 cores. I think the HeatWave product is right now deployed on Zen 2 but will shortly be also on the Zen 3 core as well. But we provide in the case of OCI 64 cores. So that's the largest devices that we build. What actually happens is, because these large number of CPUs in a single package and therefore increasing the density of the node, you end up with this fantastic TCO equation and the cost per performance, the cost per for deployed services like HeatWave actually ends up being extraordinarily competitive and that's a big part of the contribution that we're bringing in here. >> So Zen 3 is the AMD micro architecture which you introduced, I think in 2017, and it's the basis for EPIC, which is sort of the enterprise grade that you really attacked the enterprise with. Maybe you could elaborate a little bit, double click on how your chips contribute specifically to HeatWave's, price performance results. >> Yeah, absolutely. So in the case of HeatWave, so as Nipun alluded to, we have very large L3 caches, right? So in our very, very top end parts just like the Milan X devices, we can go all the way up to like 768 megabytes of L3 cache. And that gives you just enormous performance and performance gains. And that's part of what we're seeing with HeatWave today and that not that they're currently on the second generation ROM based product, 'cause it's a 7,002 based product line running with the 64 cores. But as time goes on, they'll be adopting the next generation Milan as well. And the other part of it too is, as our chip led architecture has evolved, we know, so from the first generation Naples way back in 2017, we went from having multiple memory domains and a sort of NUMA architecture at the time, today we've really optimized that architecture. We use a common I/O Die that has all of the memory channels attached to it. And what that means is that, these scale out applications like HeatWave, are able to really scale very efficiently as they go from a small domain of CPUs to, for example the entire chip, all 64 cores that scaling, is been a key focus for AMD and being able to design and build architectures that can take advantage of that and then have applications like HeatWave that scale so well on it, has been, a key aim of ours. >> And Gen 3 moving up the Italian countryside. Nipun, you've taken the somewhat unusual step of posting the benchmark parameters, making them public on GitHub. Now, HeatWave is relatively new. So people felt that when Oracle gained ownership of MySQL it would let it wilt on the vine in favor of Oracle database, so you lost some ground and now, you're getting very aggressive with HeatWave. What's the reason for publishing those benchmark parameters on GitHub? >> So, the main reason for us to publish price performance numbers for HeatWave is to communicate to our customers a sense of what are the benefits they're going to get when they use HeatWave. But we want to be very transparent because as I said the performance advantages for the customers may vary, based on the data size, based on the specific workloads. So one of the reasons for us to publish, all these scripts on GitHub is for transparency. So we want customers to take a look at the scripts, know what we have done, and be confident that we stand by the numbers which we are publishing, and they're very welcome, to try these numbers themselves. In fact, we have had customers who have downloaded the scripts from GitHub and run them on our service to kind of validate. The second aspect is in some cases, they may be some deviations from what we are publishing versus what the customer would like to run in the production deployments so it provides an easy way, for customers to take the scripts, modify them in some ways which may suit their real world scenario and run to see what the performance advantages are. So that's the main reason, first, is transparency, so the customers can see what we are doing, because of the comparison, and B, if they want to modify it to suit their needs, and then see what is the performance of HeatWave, they're very welcome to do so. >> So have customers done that? Have they taken the benchmarks? And I mean, if I were a competitor, honestly, I wouldn't get into that food fight because of the impressive performance, but unless I had to, I mean, have customers picked up on that, Nipun? >> Absolutely. In fact, we have had many customers who have benchmarked the performance of MySQL HeatWave, with other services. And the fact that the scripts are available, gives them a very good starting point, and then they've also tweaked those queries in some cases, to see what the Delta would be. And in some cases, customers got back to us saying, hey the performance advantage of HeatWave is actually slightly higher than what was published and what is the reason. And the reason was, when the customers were trying, they were trying on the latest version of the service, and our benchmark results were posted let's say, two months back. So the service had improved in those two to three months and customers actually saw better performance. So yes, absolutely. We have seen customers download the scripts, try them and also modify them to some extent and then do the comparison of HeatWave with other services. >> Interesting. Maybe a question for both of you how is the competition responding to this? They haven't said, "Hey, we're going to come up "with our own benchmarks." Which is very common, you oftentimes see that. Although, for instance, Snowflake hasn't responded to data bricks, so that's not their game, but if the customers are actually, putting a lot of faith in the benchmarks and actually using that for buying decisions, then it's inevitable. But how have you seen the competition respond to the MySQL HeatWave and AMD combo? >> So maybe I can take the first track from the database service standpoint. When customers have more choice, it is invariably advantages for the customer because then the competition is going to react, right? So the way we have seen the reaction is that we do believe, that the other database services are going to take a closer eye to the price performance, right? Because if you're offering such good price performance, the vendors are already looking at it. And, you know, instances where they have offered let's say discount to the customers, to kind of at least like close the gap to some extent. And the second thing would be in terms of the capability. So like one of the things which I should have mentioned even early on, is that not only does MySQL HeatWave on AMD, provide very good price performance, say on like a small cluster, but it's all the way up to a cluster size of 64 nodes, which has about 1000 cores. So the point is, that HeatWave performs very well, both on a small system, as well as a huge scale out. And this is again, one of those things which is a differentiation compared to other services so we expect that even other database services will have to improve their offerings to provide the same good scale factor, which customers are now starting to expectancy, with MySQL HeatWave. >> Kumaran, anything you'd add to that? I mean, you guys are an arms dealer, you love all your OEMs, but at the same time, you've got chip competitors, Silicon competitors. How do you see the competitive-- >> I'd say the broader answer and the big picture for AMD, we're very maniacally focused on our customers, right? And OCI and Oracle are huge and important customers for us, and this particular use cases is extremely interesting both in that it takes advantage, very well of our architecture and it pulls out some of the value that AMD bring. I think from a big picture standpoint, our aim is to execute, to build to bring out generations of CPUs, kind of, you know, do what we say and say, sorry, say what we do and do what we say. And from that point of view, we're hitting, the schedules that we say, and being able to bring out the latest technology and bring it in a TCO value proposition that generationally keeps OCI and HeatWave ahead. That's the crux of our partnership here. >> Yeah, the execution's been obvious for the last several years. Kumaran, staying with you, how would you characterize the collaboration between, the AMD engineers and the HeatWave engineering team? How do you guys work together? >> No, I'd say we're in a very, very deep collaboration. So, there's a few aspects where, we've actually been working together very closely on the code and being able to optimize for both the large L3 cache that AMD has, and so to be able to take advantage of that. And then also, to be able to take advantage of the scaling. So going between, you know, our architecture is chip like based, so we have these, the CPU cores on, we call 'em CCDs and the inter CCD communication, there's opportunities to optimize an application level and that's something we've been engaged with. In the broader engagement, we are going back now for multiple generations with OCI, and there's a lot of input that now, kind of resonates in the product line itself. And so we value this very close collaboration with HeatWave and OCI. >> Yeah, and the cadence, Nip, and you and I have talked about this quite a bit. The cadence has been quite rapid. It's like this constant cycle every couple of months I turn around, is something new on HeatWave. But for question again, for both of you, what new things do you think that organizations, customers, are going to be able to do with MySQL HeatWave if you could look out next 12 to 18 months, is there anything you can share at this time about future collaborations? >> Right, look, 12 to 18 months is a long time. There's going to be a lot of innovation, a lot of new capabilities coming out on in MySQL HeatWave. But even based on what we are currently offering, and the trend we are seeing is that customers are bringing, more classes of workloads. So we started off with OLTP for MySQL, then it went to analytics. Then we increased it to mixed workloads, and now we offer like machine learning as alike. So one is we are seeing, more and more classes of workloads come to MySQL HeatWave. And the second is a scale, that kind of data volumes people are using HeatWave for, to process these mixed workloads, analytics machine learning OLTP, that's increasing. Now, along the way we are making it simpler to use, we are making it more cost effective use. So for instance, last time, when we talked about, we had introduced this real time elasticity and that's something which is a very, very popular feature because customers want the ability to be able to scale out, or scale down very efficiently. That's something we provided. We provided support for compression. So all of these capabilities are making it more efficient for customers to run a larger part of their workloads on MySQL HeatWave, and we will continue to make it richer in the next 12 to 18 months. >> Thank you. Kumaran, anything you'd add to that, we'll give you the last word as we got to wrap it. >> No, absolutely. So, you know, next 12 to 18 months we will have our Zen 4 CPUs out. So this could potentially go into the next generation of the OCI infrastructure. This would be with the Genoa and then Bergamo CPUs taking us to 96 and 128 cores with 12 channels at DDR five. This capability, you know, when applied to an application like HeatWave, you can see that it'll open up another order of magnitude potentially of use cases, right? And we're excited to see what customers can do do with that. It certainly will make, kind of the, this service, and the cloud in general, that this cloud migration, I think even more attractive. So we're pretty excited to see how things evolve in this period of time. >> Yeah, the innovations are coming together. Guys, thanks so much, we got to leave it there really appreciate your time. >> Thank you. >> All right, and thank you for watching this special Cube conversation, this is Dave Vellante, and we'll see you next time. (soft calm music)

Published Date : Sep 14 2022

SUMMARY :

and it's likely the performance Thank you. and how it's different from So the advantages are; single and highlight some of the results, please. the first thing to know. We've talked about the secret sauce So for instance, many of the relevance specs of the chips that are used and that's a big part of the contribution and it's the basis for EPIC, So in the case of HeatWave, of posting the benchmark parameters, So one of the reasons for us to publish, So the service had improved how is the competition responding to this? So the way we have seen the but at the same time, and the big picture for AMD, for the last several years. and so to be able to Yeah, and the cadence, and the trend we are seeing is we'll give you the last and the cloud in general, Yeah, the innovations we'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Marc StaimerPERSON

0.99+

Dave VellantePERSON

0.99+

NipunPERSON

0.99+

OracleORGANIZATION

0.99+

2017DATE

0.99+

DavePERSON

0.99+

OCIORGANIZATION

0.99+

Zen 3COMMERCIAL_ITEM

0.99+

7,002QUANTITY

0.99+

KumaranPERSON

0.99+

second aspectQUANTITY

0.99+

Nipun AgarwalPERSON

0.99+

AMDORGANIZATION

0.99+

12QUANTITY

0.99+

64 coresQUANTITY

0.99+

768 megabytesQUANTITY

0.99+

twoQUANTITY

0.99+

MySQLTITLE

0.99+

third aspectQUANTITY

0.99+

12 channelsQUANTITY

0.99+

Kumaran SivaPERSON

0.99+

HeatWaveORGANIZATION

0.99+

96QUANTITY

0.99+

18 timesQUANTITY

0.99+

BergamoORGANIZATION

0.99+

three partsQUANTITY

0.99+

DeltaORGANIZATION

0.99+

three monthsQUANTITY

0.99+

MySQL HeatWaveTITLE

0.99+

42 timesQUANTITY

0.99+

bothQUANTITY

0.99+

18 monthsQUANTITY

0.99+

Zen 2COMMERCIAL_ITEM

0.99+

oneQUANTITY

0.99+

GitHubORGANIZATION

0.99+

OneQUANTITY

0.98+

second generationQUANTITY

0.98+

single databaseQUANTITY

0.98+

128 coresQUANTITY

0.98+

18 monthsQUANTITY

0.98+

three thingsQUANTITY

0.98+

Future of Converged infrastructure


 

>> Announcer: From the SiliconANGLE Media Office, in Boston, Massachusetts, it's The Cube. Now, here's your host, Dave Vellante. >> Hello everyone welcome to this special presentation, The Future of Converged Infrastructure, my name is David Vellante, and I'll be your host, for this event where the focus is on Dell EMC's converged infrastructure announcement. Nearly a decade ago, modern converged infrastructure really came to the floor in the marketplace, and what you had is compute, storage, and network brought together in a single managed entity. And when you talk to IT people, the impact was roughly a 30 to 50% total cost of ownership reduction, really depending on a number of factors. How much virtualization they had achieved, how complex their existing processees were, how much they could save on database and other software licenses and maintenance, but roughly that 30 to 50% range. Fast forward to 2018 and you're looking at a multibillion dollar market for converged infrastructure. Jeff Boudreau is here, he's the President of the Dell EMC Storage Division, Jeff thanks for coming on. >> Thank you for having me. >> You're welcome. So we're going to set up this announcement let me go through the agenda. Jeff and I are going to give an overview of the announcement and then we're going to go to Trey Layton, who's the Chief Technology Officer of the converged infrastructure group at Dell EMC. He's going to focus on the architecture, and some of the announcement details. And then, we're going to go to Cisco Live to a pre-recorded session that we did in Barcelona, and get the Cisco perspective, and then Jeff and I will come back to wrap it up. We also, you might notice we have a crowd chat going on, so underneath this video stream you can ask questions, you got to log in with LinkedIn, Twitter, or Facebook, I prefer Twitter, kind of an ask me anything crowd chat. We have analysts on, Stu Miniman is hosting that call. We're going to talk about what this announcement is all about, what the customer issues are that are being addressed by this announcement. So Jeff, let's get into it. From your perspective, what's the state of converged infrastructure today? >> Great question. I'm really bullish on CI, in regards to what converged infrastructure and kind of the way the market's going. We see continued interest in the growth of the market of our customers. Driven by the need for simplicity, agility, elasticity of those on-prem resources. Dell EMC pioneered the CI market several years ago, with the simple premise of simplify IT, and our focus and commitment to our customers has not changed of simplifying IT. As our customers continue to seek for new ways to simplify and consolidate infrastructure, we expect more and more of our customers to embrace CI, as a fast and easy way to modernize their infrastructure, and transform IT. >> You talk about transformation, we do a lot of events, and everybody's talking about digital transformation, and IT transformation, what role does converged infrastructure play in those types of transformations, maybe you could give us an example? >> Sure, so first I'd say our results speak for themselves. As I said we pioneered the CI industry, as the market leader, we enabled thousands of customers worldwide to drive business transformation and digital transformation. And when I speak to customers specifically, converged infrastructure is not just about the infrastructure, it's about the operating model, and how they simplify IT. I'd say two of the biggest areas of impact that customers highlight to me, are really about the acceleration of application delivery, and then the other big one is around the increase in operational efficiencies allowing customers to free up resources, to reinvest however they see fit. >> Now since the early days of converged infrastructure Cisco has been a big partner of yours, you guys were kind of quasi-exclusive for awhile, they went out and sought other partners, you went out and sought other partners, a lot of people have questions about that relationship, what's your perspective on that relationship. >> So our partnership with Cisco is strong as ever. We're proud of this category we've created together. We've been on this journey for a long time we've been working together, and that partnership will continue as we go forward. In full transparency there are of course some topics where we disagree, just like any normal relationship we have disagreements, an example of that would be HCI, but in the CI space our partnership is as strong as ever. We'll have thousands of customers between the two of us, that we will continue to invest and innovate together on. And I think later in this broadcast you're going to hear directly from Cisco on that, so we're both doubling down on the partnership, and we're both committed to CI. >> I want to ask you about leadership generally, and then specifically as it relates to converged infrastructure and hyper converged. My question is this, hyper converged is booming, it's a high growth market. I sometimes joke that Dell EMC is now your leader in the Gartner Magic Quadrants, 101 Gartner Magic Quadrants out of the 99. They're just leading everything and I think both the CI and the HCI categories, what's your take, is CI still relevant? >> First I'd say it's great to come from a leadership position so I thank you for bringing that up, I think it's really important. As Micheal talks about being the essential infrastructure company, that's huge for us as Dell Technologies, so we're really proud of that and we want to lean into that strength. Now on HCI vs CI, to me it's an AND world. Everybody wants to get stock that's in either or, to me it's about the AND story. All our customers are going on a journey, in regards to how they transform their businesses. But at the end of the day, if I took my macro view, and took a step back, it's about the data. The data's the critical asset. The good news for me and for our team is data always continues to grow, and is growing at an amazing rate. And as that critical asset, customers are really kind of thinking about a modern data strategy as they drive foreword. And as part of that, they're looking at how to store, protect, secure, analyze, move that data, really unleashing that data to provide value back to their businesses. So with all of that, not all data is going to be created equal, as part of that, as they build out those strategies, it's going to be a journey, in regards to how they do it. And if that's software defined, vs purpose built arrays, vs converged, or hyper converged, or even cloud, those deployment models, we, Dell EMC, and Dell Technologies want to be that strategic partner, that trusted advisor to help them on that journey. >> Alright Jeff, thanks for helping me with the setup. I want to ask you to hang around a little bit. >> Jeff: Sure. >> We're going to go to a video, and then we're going to bring back Trey Layton, talk about the architecture so keep it right there, we'll be right back. >> Announcer: Dell EMC has long been number one in converged infrastructure, providing technology that simplifies all aspects of IT, and enables you to achieve better business outcomes, faster, and we continue to lead through constant innovation. Introducing, the VxBlock System 1000, the next generation of converged infrastructure from Dell EMC. Featuring enhanced life cycle management, and a broad choice of technologies, to support a vast array of applications and resources. From general purpose to mission critical, big data to specialized workloads, VxBlock 1000 is the industry's first converged infrastructure system, with the flexible data services, power, and capacity to handle all data center workloads, giving you the ultimate in business agility, data center efficiency, and operational simplicity. Including best-of-breed storage and data protection from Dell EMC, and computer networking from Cisco. (orchestral music) Converged in one system, these technologies enable you to flexibly adapt resources to your evolving application's needs, pool resources to maximize utilization and increase ROI, deliver a turnkey system in lifecycle assurance experience, that frees you to focus on innovation. Four times storage types, two times compute types, and six times faster updates, and VME ready, and future proof for extreme performance. VxBlock 1000, the number one in converged now all-in-one system. Learn more about Dell EMC VxBlock 1000, at DellEMC.com/VxBlock. >> We're back with Trey Layton who's the Senior Vice President and CTO of converged at Dell EMC. Trey it's always a pleasure, good to see you. >> Dave, good to see you as well. >> So we're eight years into Vblock, take us back to the converged infrastructure early days, what problems were you trying to solve with CI. >> Well one of the problems with IT in general is it's been hard, and one of the reasons why it's been hard is all the variability that customers consume. And how do you integrate all that variability in a sustaining manner, to maintain the assets so it can support the business. And, the thing that we've learned is, the original recipe that we had for Vblock, was to go at and solve that very problem. We have referred to that as life cycle. Manage the life cycle services of the biggest inner assets that you're deploying. And we have created some great intellectual property, some great innovation around helping minimize the complexity associated with managing the life cycle of a very complex integration, by way of, one of the largest data center assets that people operate in their environment. >> So you got thousands and thousands of customers telling you life cycle management is critical. They're shifting their labor resource to more strategic activities, is that what's going on? Well there's so much variation and complexity in just maintaining the different integration points, that they're spending an inordinate amount of their time, a lot of nights and weekends, on understanding and figuring out which software combinations, which configuration combinations you need to operate. What we do as an organization, and have done since inception is, we manage that complexity for them. We delivery them an outcome based architecture that is pre-integrated, and we sustain that integration over it's life, so they spend less time doing that, and letting the experts who actually build the components focus on maintaining those integrations. >> So as an analyst I always looked at converged infrastructure as an evolutionary trend, bringing together storage, servers, networking, bespoke components. So my question is, where's the innovation underneath converged infrastructure. >> So I would say the innovation is in two areas. We're blessed with a lot of technology innovations that come from our partner, and our own companies, Dell EMC and Cisco. Cisco produces wonderful innovations in the space of networking compute, in the context of Vblock. Dell EMC, storage innovations, data protection, et cetera. We harmonize all of these very complex integrations in a manner where an organization can put those advanced integrations into solving business problems immediately. So there's two vectors of innovation. There are the technology components that we are acquiring, to solve business problems, and there's the method at which we integrate them, to get to the business of solving problems. >> Okay, let's get into the announcement. What are you announcing, what's new, why should we care. >> We are announcing the VxBlock 1000, and the interesting thing about Vblocks over the years, is they have been individual system architectures. So a compute technology, integrated with a particular storage architecture, would produce a model of Vblock. With VxBlock 1000, we're actually introducing an architecture that provides a full gamut of array optionality for customers. Both blade and rack server options, for customers on the UCS compute side, and before we would integrate data protection technologies as an extension or an add-on into the architecture, data protection is native to the offer. In addition to that, unstructured data storage. So being able to include unstructured data into the architecture as one singular architecture, as opposed to buying individualized systems. >> Okay, so you're just further simplifying the underlying infrastructure which is going to save me even more time? >> Producing a standard which can adapt to virtually any use case that a customer has in a data center environment. Giving them the ability to expand and grow that architecture, as their workload dictates, in their environment, as opposed to buying a system to accommodate one workload, buying another system to accommodate another workload, this is kind of breaking the barriers of traditional CI, and moving it foreword so that we can create an adaptive architecture, that can accommodate not only the technologies available today, but the technologies on the horizon tomorrow. >> Okay so it's workload diversity, which means greater asset leverage from that underlying infrastructure. >> Trey: Absolutely. >> Can you give us some examples, how do you envision customers using this? >> So I would talk specifically about customers that we have today. And when they deploy, or have deployed Vblocks in the past. We've done wonderful by building architectures that accommodate, or they're tailor made for certain types of workloads. And so a customer environment would end up acquiring a Vblock model 700, to accommodate an SAP workload for example. They would acquire a Vblock 300, or 500 to accommodate a VDI workload. And then as those workloads would grow, they would grow those individualized systems. What it did was, it created islands of stranded resource capacities. Vblock 1000 is about bringing all those capabilities into a singular architecture, where you can grow the resources based on pools. And so as your work load shifts in your environment, you can reallocate resources to accommodate the needs of that workload, as opposed to worrying about stranded capacity in the architecture. >> Okay where do you go from here with the architecture, can you share with us, to the extent that you can, a little roadmap, give us a vision as to how you see this playing out over the next several years. >> Well, one of the reasons why we did this was to simplify, and make it easier to operate, these very complex architectures that everyone's consuming around the world. Vblock has always been about simplifying complex technologies in the data center. There are a lot of innovations on the horizon in VME, for example, next generation compute platforms. There are new generation fabric services, that are emerging. VxBlock 1000 is the place at which you will see all of these technologies introduced, and our customers won't have to wait on new models of Vblock to consume those technologies, they will be resident in them upon their availability to the market. >> The buzz word from the vendor community is future proof, but your saying, you'll be able to, if you buy today, you'll be able to bring in things like NVME and these new technologies down the road. >> The architecture inherently supports the idea of adapting to new technologies as they emerge, and will consume those integrations, as a part of the architectural standard footprint, for the life of the architecture. >> Alright excellent Trey, thanks very much for that overview. Cisco obviously a huge partner of yours, with this whole initiative, many many years. A lot of people have questioned where that goes, so we have a segment from Cisco Live, Stu Miniman is out there, let's break to Stu, then we'll come back and pick it up from there. Thanks for watching. >> Thanks Dave, I'm Stu Miniman, and we're here at Cisco Live 2018 in Barcelona, Spain. Happy to be joined on the program by Nigel Moulton the EMEA CTO of Dell EMC, and Siva Sivakumar, who's the Senior Director of Data Center Solutions at Cisco, gentlemen, thanks so much for joining me. >> Thanks Stu. >> Looking at the long partnership of Dell and Cisco, Siva, talk about the partnership first. >> Absolutely. If you look back in time, when we launched UCS, the very first major partnership we brought, and the converged infrastructure we brought to the market was Vblock, it really set the trend for how customers should consume compute, network, and storage together. And we continue to deliver world class technologies on both sides and the partnership continues to thrive as we see tremendous adoption from our customers. So we are here, several years down, still a very vibrant partnership in trying to get the best product for the customers. >> Nigel would love to get your perspective. >> Siva's right I think I'd add, it defined a market, if you think what true conversion infrastructure is, it's different, and we're going to discuss some more about that as we go through. The UCS fabric is unique, in the way that it ties a network fabric to a compute fabric, and when you bring those technologies together, and converge them, and you have a partnership like Cisco, you have a partnership with us, yeah it's going to be a fantastic result for the market because the market moves on, and I think, VxBlock actually helped us achieve that. >> Alright so Siva we understand there's billions of reasons why Cisco and Dell would want to keep this partnership going, but talk about from an innovation standpoint, there's the new VxBlock 1000, what's new, talk about what's the innovation here. >> Absolutely. If you look at the VxBlock perspective, the 1000 perspective, first of all it simplifies an extremely fast successful product to the next level. It simplifies the storage options, and it provides a seamless way to consume those technologies. From a Cisco perspective, as you know we are in our fifth generation of UCS platform, continues to be a world class platform, leading blade service in the industry. But we also bring the innovation of rack mount servers, as well as 40 gig fabric, larger scale, fiber channel technology as well. As we bring our compute, network, as well as a sound fabric technology together, with world class storage portfolio, and then simplify that for a single pane of glass consumption model. That's absolutely the highest level of innovation you're going to find. >> Nigel, I think back in the early days the joke was you could have a Vblock anyway you want, as long as it's black. Obviously a lot of diversity in product line, but what's new and different here, how does this impact new customers and existing customers. >> I think there's a couple of things to pick up on, what Trey said, what Siva said. So the simplification piece, the way in which we do release certification matrix, the way in which you combine a single software image to manage these multiple discreet components, that is greatly simplified in VxBlock 1000. Secondly you remove a model number, because historically you're right, you bought a three series, a five series, and a seven series, and that sort of defined the architecture. This is now a system wide architecture. So those technologies that you might of thought of as being discreet before, or integrated at an RCM level that was perhaps a little complex for some people, that's now dramatically simplified. So those are two things that I think we amplify, one is the simplification and two, you're removing a model number and moving to a system wide architecture. >> Want to give you both the opportunity, gives us a little bit, what's the future when you talk about the 1000 system, future innovations, new use cases. >> Sure, I think if you look at the way enterprise are consuming, the demand for more powerful systems that'll bring together more consolidation, and also address the extensive data center migration opportunities we see, is very critical, that means the customers are really looking at whether it is a in-memory database that scales to, much larger scale than before, or large scale cluster databases, or even newer workloads for that matter, the appetite for a larger system, and the need to have it in the market, continues to grow. We see a huge install base of our customers, as well as new customers looking at options in the market, truly realize, the strength of the portfolio that each one of us brings to the table, and bringing the best-of-breed, whether it is today, or in the future from an innovation standpoint, this is absolutely the way that we are approaching building our partnership and building new solutions here. >> Nigel, when you're talking to customers out there, are they coming saying, I'm going to need this for a couple of months, I mean this is an investment they're making for a couple years, why is this a partnership built to last. >> An enterprise class customer certainly is looking for a technology that's synonymous with reliability, availability, performance. And if you look at what VxBlock has traditionally done and what the 1000 offers, you see that. But Siva's right, these application architectures are going to change. So if you can make an investment in a technology set now that keeps the premise of reliability, availability, and performance to you today, but when you look at future application architectures around high capacity memory, adjacent to a high performance CPU, you're almost in a position where you are preparing the ground for what that application architecture will need, and the investments that people make in the VxBlock system with the UCS power underneath at the compute layer, it's significant, because it lays out a very clear path to how you will integrate future application architectures with existing application architectures. >> Nigel Moulton, Siva Sivakumar, thank you so much for joining, talking about the partnership and the future. >> Siva: Thank you. >> Nigel: Pleasure. >> Sending back to Dave in the US, thanks so much for watching The Cube from Cisco Live Barcelona. >> Thank you. >> Okay thanks Stu, we're back here with Jeff Boudreau. We talked a little bit earlier about the history of conversion infrastructure, some of the impacts that we've seen in IT transformations, Trey took us through the architecture with some of the announcement details, and of course we heard from Cisco, was a lot of fun in Barcelona. Jeff bring it home, what are the take aways. >> Some of the key take aways I have is just I want to make sure everybody knows Dell EMC's continued commitment to modernizing infrastructure for conversion infrastructure. In addition to that was have a strong partnership with Cisco as you heard from me and you also heard from Cisco, that we both continue to invest and innovate in these spaces. In addition to that we're going to continue our leadership in CI, this is critical, and it's extremely important to Dell, and EMC, and Dell EMC's Cisco relationship. And then lastly, that we're going to continue to deliver on our customer promise to simplify IT. >> Okay great, thank you very much for participating here. >> I appreciate it. >> Now we're going to go into the crowd chat, again, it's an ask me anything. What make Dell EMC so special, what about security, how are the organizations affected by converged infrastructure, there's still a lot of, roll your own going on. There's a price to pay for all this integration, how is that price justified, can you offset that with TCO. So let's get into that, what are the other business impacts, go auth in with Twitter, LinkedIn, or Facebook, Twitter is my preferred. Let's get into it thanks for watching everybody, we'll see you in the crowd chat. >> I want IT to be dial tone service, where it's always available for our providers to access. To me, that is why IT exists. So our strategy at the hardware and software level is to ruthlessly standardize leverage in a converged platform technology. We want to create IT almost like a vending machine, where a user steps up to our vending machine, they select the product they want, they put in their cost center, and within seconds that product is delivered to that end user. And we really need to start running IT like a business. Currently we have a VxBlock that we will run our University of Vermont Medical Center epic install on. Having good performance while the provider is within that epic system is key to our foundation of IT. Having the ability to combine the compute, network, and storage in one aspect in one upgrade, where each component is aligned and regression tested from a Dell Technology perspective, really makes it easy as an IT individual to do an upgrade once or twice a year versus continually trying to keep each component of that infrastructure footprint upgraded and aligned. I was very impressed with the VxBlock 1000 from Dell Technologies, specifically a few aspects of it that really intrigued me. With the VxBlock 1000, we now have the ability to mix and match technologies within that frame. We love the way the RCM process works, from a converged perspective, the ability to bring the compute, the storage, and network together, and trust that Dell Technologies is going to upgrade all those components in a seamless manner, really makes it easier from an IT professional to continue to focus on what's really important to our organization, provider and patient outcomes.

Published Date : Feb 13 2018

SUMMARY :

Announcer: From the SiliconANGLE Media Office, Jeff Boudreau is here, he's the President of the Jeff and I are going to give an overview of the announcement and our focus and commitment to our customers as the market leader, we enabled Now since the early days of converged infrastructure but in the CI space our partnership is as strong as ever. both the CI and the HCI categories, But at the end of the day, if I took my macro view, I want to ask you to hang around a little bit. talk about the architecture so keep it right there, and capacity to handle all data center workloads, Trey it's always a pleasure, good to see you. what problems were you trying to solve with CI. and one of the reasons why it's been hard is all the and letting the experts who actually build the components So as an analyst I always looked at converged There are the technology components that we are acquiring, Okay, let's get into the announcement. and the interesting thing about and moving it foreword so that we can create from that underlying infrastructure. stranded capacity in the architecture. playing out over the next several years. There are a lot of innovations on the horizon in VME, and these new technologies down the road. for the life of the architecture. let's break to Stu, Nigel Moulton the EMEA CTO of Dell EMC, Siva, talk about the partnership first. and the converged infrastructure and when you bring those technologies together, Alright so Siva we understand That's absolutely the highest level of innovation you could have a Vblock anyway you want, and that sort of defined the architecture. Want to give you both the opportunity, and the need to have it in the market, continues to grow. I'm going to need this for a couple of months, and performance to you today, talking about the partnership and the future. Sending back to Dave in the US, and of course we heard from Cisco, Some of the key take aways I have is just I want to make how is that price justified, can you offset that with TCO. from a converged perspective, the ability to bring the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
CiscoORGANIZATION

0.99+

David VellantePERSON

0.99+

JeffPERSON

0.99+

DavePERSON

0.99+

Dave VellantePERSON

0.99+

Trey LaytonPERSON

0.99+

Nigel MoultonPERSON

0.99+

Jeff BoudreauPERSON

0.99+

DellORGANIZATION

0.99+

Siva SivakumarPERSON

0.99+

BarcelonaLOCATION

0.99+

thousandsQUANTITY

0.99+

NigelPERSON

0.99+

twoQUANTITY

0.99+

EMCORGANIZATION

0.99+

SivaPERSON

0.99+

TreyPERSON

0.99+

40 gigQUANTITY

0.99+

2018DATE

0.99+

MichealPERSON

0.99+

Stu MinimanPERSON

0.99+

30QUANTITY

0.99+

Dell EMCORGANIZATION

0.99+

Dell TechnologiesORGANIZATION

0.99+

StuPERSON

0.99+

two thingsQUANTITY

0.99+

LinkedInORGANIZATION

0.99+

billionsQUANTITY

0.99+

two areasQUANTITY

0.99+

The CubeTITLE

0.99+

VxBlock 1000COMMERCIAL_ITEM

0.99+

six timesQUANTITY

0.99+

University of Vermont Medical CenterORGANIZATION

0.99+

Vblock 1000COMMERCIAL_ITEM

0.99+

Barcelona, SpainLOCATION

0.99+

two timesQUANTITY

0.99+

Vblock 300COMMERCIAL_ITEM

0.99+

Boston, MassachusettsLOCATION

0.99+

two vectorsQUANTITY

0.99+

USLOCATION

0.99+

tomorrowDATE

0.99+

VxBlock System 1000COMMERCIAL_ITEM

0.99+

TwitterORGANIZATION

0.99+

bothQUANTITY

0.99+

500COMMERCIAL_ITEM

0.99+

Data Science for All: It's a Whole New Game


 

>> There's a movement that's sweeping across businesses everywhere here in this country and around the world. And it's all about data. Today businesses are being inundated with data. To the tune of over two and a half million gigabytes that'll be generated in the next 60 seconds alone. What do you do with all that data? To extract insights you typically turn to a data scientist. But not necessarily anymore. At least not exclusively. Today the ability to extract value from data is becoming a shared mission. A team effort that spans the organization extending far more widely than ever before. Today, data science is being democratized. >> Data Sciences for All: It's a Whole New Game. >> Welcome everyone, I'm Katie Linendoll. I'm a technology expert writer and I love reporting on all things tech. My fascination with tech started very young. I began coding when I was 12. Received my networking certs by 18 and a degree in IT and new media from Rochester Institute of Technology. So as you can tell, technology has always been a sure passion of mine. Having grown up in the digital age, I love having a career that keeps me at the forefront of science and technology innovations. I spend equal time in the field being hands on as I do on my laptop conducting in depth research. Whether I'm diving underwater with NASA astronauts, witnessing the new ways which mobile technology can help rebuild the Philippine's economy in the wake of super typhoons, or sharing a first look at the newest iPhones on The Today Show, yesterday, I'm always on the hunt for the latest and greatest tech stories. And that's what brought me here. I'll be your host for the next hour and as we explore the new phenomenon that is taking businesses around the world by storm. And data science continues to become democratized and extends beyond the domain of the data scientist. And why there's also a mandate for all of us to become data literate. Now that data science for all drives our AI culture. And we're going to be able to take to the streets and go behind the scenes as we uncover the factors that are fueling this phenomenon and giving rise to a movement that is reshaping how businesses leverage data. And putting organizations on the road to AI. So coming up, I'll be doing interviews with data scientists. We'll see real world demos and take a look at how IBM is changing the game with an open data science platform. We'll also be joined by legendary statistician Nate Silver, founder and editor-in-chief of FiveThirtyEight. Who will shed light on how a data driven mindset is changing everything from business to our culture. We also have a few people who are joining us in our studio, so thank you guys for joining us. Come on, I can do better than that, right? Live studio audience, the fun stuff. And for all of you during the program, I want to remind you to join that conversation on social media using the hashtag DSforAll, it's data science for all. Share your thoughts on what data science and AI means to you and your business. And, let's dive into a whole new game of data science. Now I'd like to welcome my co-host General Manager IBM Analytics, Rob Thomas. >> Hello, Katie. >> Come on guys. >> Yeah, seriously. >> No one's allowed to be quiet during this show, okay? >> Right. >> Or, I'll start calling people out. So Rob, thank you so much. I think you know this conversation, we're calling it a data explosion happening right now. And it's nothing new. And when you and I chatted about it. You've been talking about this for years. You have to ask, is this old news at this point? >> Yeah, I mean, well first of all, the data explosion is not coming, it's here. And everybody's in the middle of it right now. What is different is the economics have changed. And the scale and complexity of the data that organizations are having to deal with has changed. And to this day, 80% of the data in the world still sits behind corporate firewalls. So, that's becoming a problem. It's becoming unmanageable. IT struggles to manage it. The business can't get everything they need. Consumers can't consume it when they want. So we have a challenge here. >> It's challenging in the world of unmanageable. Crazy complexity. If I'm sitting here as an IT manager of my business, I'm probably thinking to myself, this is incredibly frustrating. How in the world am I going to get control of all this data? And probably not just me thinking it. Many individuals here as well. >> Yeah, indeed. Everybody's thinking about how am I going to put data to work in my organization in a way I haven't done before. Look, you've got to have the right expertise, the right tools. The other thing that's happening in the market right now is clients are dealing with multi cloud environments. So data behind the firewall in private cloud, multiple public clouds. And they have to find a way. How am I going to pull meaning out of this data? And that brings us to data science and AI. That's how you get there. >> I understand the data science part but I think we're all starting to hear more about AI. And it's incredible that this buzz word is happening. How do businesses adopt to this AI growth and boom and trend that's happening in this world right now? >> Well, let me define it this way. Data science is a discipline. And machine learning is one technique. And then AI puts both machine learning into practice and applies it to the business. So this is really about how getting your business where it needs to go. And to get to an AI future, you have to lay a data foundation today. I love the phrase, "there's no AI without IA." That means you're not going to get to AI unless you have the right information architecture to start with. >> Can you elaborate though in terms of how businesses can really adopt AI and get started. >> Look, I think there's four things you have to do if you're serious about AI. One is you need a strategy for data acquisition. Two is you need a modern data architecture. Three is you need pervasive automation. And four is you got to expand job roles in the organization. >> Data acquisition. First pillar in this you just discussed. Can we start there and explain why it's so critical in this process? >> Yeah, so let's think about how data acquisition has evolved through the years. 15 years ago, data acquisition was about how do I get data in and out of my ERP system? And that was pretty much solved. Then the mobile revolution happens. And suddenly you've got structured and non-structured data. More than you've ever dealt with. And now you get to where we are today. You're talking terabytes, petabytes of data. >> [Katie] Yottabytes, I heard that word the other day. >> I heard that too. >> Didn't even know what it meant. >> You know how many zeros that is? >> I thought we were in Star Wars. >> Yeah, I think it's a lot of zeroes. >> Yodabytes, it's new. >> So, it's becoming more and more complex in terms of how you acquire data. So that's the new data landscape that every client is dealing with. And if you don't have a strategy for how you acquire that and manage it, you're not going to get to that AI future. >> So a natural segue, if you are one of these businesses, how do you build for the data landscape? >> Yeah, so the question I always hear from customers is we need to evolve our data architecture to be ready for AI. And the way I think about that is it's really about moving from static data repositories to more of a fluid data layer. >> And we continue with the architecture. New data architecture is an interesting buzz word to hear. But it's also one of the four pillars. So if you could dive in there. >> Yeah, I mean it's a new twist on what I would call some core data science concepts. For example, you have to leverage tools with a modern, centralized data warehouse. But your data warehouse can't be stagnant to just what's right there. So you need a way to federate data across different environments. You need to be able to bring your analytics to the data because it's most efficient that way. And ultimately, it's about building an optimized data platform that is designed for data science and AI. Which means it has to be a lot more flexible than what clients have had in the past. >> All right. So we've laid out what you need for driving automation. But where does the machine learning kick in? >> Machine learning is what gives you the ability to automate tasks. And I think about machine learning. It's about predicting and automating. And this will really change the roles of data professionals and IT professionals. For example, a data scientist cannot possibly know every algorithm or every model that they could use. So we can automate the process of algorithm selection. Another example is things like automated data matching. Or metadata creation. Some of these things may not be exciting but they're hugely practical. And so when you think about the real use cases that are driving return on investment today, it's things like that. It's automating the mundane tasks. >> Let's go ahead and come back to something that you mentioned earlier because it's fascinating to be talking about this AI journey, but also significant is the new job roles. And what are those other participants in the analytics pipeline? >> Yeah I think we're just at the start of this idea of new job roles. We have data scientists. We have data engineers. Now you see machine learning engineers. Application developers. What's really happening is that data scientists are no longer allowed to work in their own silo. And so the new job roles is about how does everybody have data first in their mind? And then they're using tools to automate data science, to automate building machine learning into applications. So roles are going to change dramatically in organizations. >> I think that's confusing though because we have several organizations who saying is that highly specialized roles, just for data science? Or is it applicable to everybody across the board? >> Yeah, and that's the big question, right? Cause everybody's thinking how will this apply? Do I want this to be just a small set of people in the organization that will do this? But, our view is data science has to for everybody. It's about bring data science to everybody as a shared mission across the organization. Everybody in the company has to be data literate. And participate in this journey. >> So overall, group effort, has to be a common goal, and we all need to be data literate across the board. >> Absolutely. >> Done deal. But at the end of the day, it's kind of not an easy task. >> It's not. It's not easy but it's maybe not as big of a shift as you would think. Because you have to put data in the hands of people that can do something with it. So, it's very basic. Give access to data. Data's often locked up in a lot of organizations today. Give people the right tools. Embrace the idea of choice or diversity in terms of those tools. That gets you started on this path. >> It's interesting to hear you say essentially you need to train everyone though across the board when it comes to data literacy. And I think people that are coming into the work force don't necessarily have a background or a degree in data science. So how do you manage? >> Yeah, so in many cases that's true. I will tell you some universities are doing amazing work here. One example, University of California Berkeley. They offer a course for all majors. So no matter what you're majoring in, you have a course on foundations of data science. How do you bring data science to every role? So it's starting to happen. We at IBM provide data science courses through CognitiveClass.ai. It's for everybody. It's free. And look, if you want to get your hands on code and just dive right in, you go to datascience.ibm.com. The key point is this though. It's more about attitude than it is aptitude. I think anybody can figure this out. But it's about the attitude to say we're putting data first and we're going to figure out how to make this real in our organization. >> I also have to give a shout out to my alma mater because I have heard that there is an offering in MS in data analytics. And they are always on the forefront of new technologies and new majors and on trend. And I've heard that the placement behind those jobs, people graduating with the MS is high. >> I'm sure it's very high. >> So go Tigers. All right, tangential. Let me get back to something else you touched on earlier because you mentioned that a number of customers ask you how in the world do I get started with AI? It's an overwhelming question. Where do you even begin? What do you tell them? >> Yeah, well things are moving really fast. But the good thing is most organizations I see, they're already on the path, even if they don't know it. They might have a BI practice in place. They've got data warehouses. They've got data lakes. Let me give you an example. AMC Networks. They produce a lot of the shows that I'm sure you watch Katie. >> [Katie] Yes, Breaking Bad, Walking Dead, any fans? >> [Rob] Yeah, we've got a few. >> [Katie] Well you taught me something I didn't even know. Because it's amazing how we have all these different industries, but yet media in itself is impacted too. And this is a good example. >> Absolutely. So, AMC Networks, think about it. They've got ads to place. They want to track viewer behavior. What do people like? What do they dislike? So they have to optimize every aspect of their business from marketing campaigns to promotions to scheduling to ads. And their goal was transform data into business insights and really take the burden off of their IT team that was heavily burdened by obviously a huge increase in data. So their VP of BI took the approach of using machine learning to process large volumes of data. They used a platform that was designed for AI and data processing. It's the IBM analytics system where it's a data warehouse, data science tools are built in. It has in memory data processing. And just like that, they were ready for AI. And they're already seeing that impact in their business. >> Do you think a movement of that nature kind of presses other media conglomerates and organizations to say we need to be doing this too? >> I think it's inevitable that everybody, you're either going to be playing, you're either going to be leading, or you'll be playing catch up. And so, as we talk to clients we think about how do you start down this path now, even if you have to iterate over time? Because otherwise you're going to wake up and you're going to be behind. >> One thing worth noting is we've talked about analytics to the data. It's analytics first to the data, not the other way around. >> Right. So, look. We as a practice, we say you want to bring data to where the data sits. Because it's a lot more efficient that way. It gets you better outcomes in terms of how you train models and it's more efficient. And we think that leads to better outcomes. Other organization will say, "Hey move the data around." And everything becomes a big data movement exercise. But once an organization has started down this path, they're starting to get predictions, they want to do it where it's really easy. And that means analytics applied right where the data sits. >> And worth talking about the role of the data scientist in all of this. It's been called the hot job of the decade. And a Harvard Business Review even dubbed it the sexiest job of the 21st century. >> Yes. >> I want to see this on the cover of Vogue. Like I want to see the first data scientist. Female preferred, on the cover of Vogue. That would be amazing. >> Perhaps you can. >> People agree. So what changes for them? Is this challenging in terms of we talk data science for all. Where do all the data science, is it data science for everyone? And how does it change everything? >> Well, I think of it this way. AI gives software super powers. It really does. It changes the nature of software. And at the center of that is data scientists. So, a data scientist has a set of powers that they've never had before in any organization. And that's why it's a hot profession. Now, on one hand, this has been around for a while. We've had actuaries. We've had statisticians that have really transformed industries. But there are a few things that are new now. We have new tools. New languages. Broader recognition of this need. And while it's important to recognize this critical skill set, you can't just limit it to a few people. This is about scaling it across the organization. And truly making it accessible to all. >> So then do we need more data scientists? Or is this something you train like you said, across the board? >> Well, I think you want to do a little bit of both. We want more. But, we can also train more and make the ones we have more productive. The way I think about it is there's kind of two markets here. And we call it clickers and coders. >> [Katie] I like that. That's good. >> So, let's talk about what that means. So clickers are basically somebody that wants to use tools. Create models visually. It's drag and drop. Something that's very intuitive. Those are the clickers. Nothing wrong with that. It's been valuable for years. There's a new crop of data scientists. They want to code. They want to build with the latest open source tools. They want to write in Python or R. These are the coders. And both approaches are viable. Both approaches are critical. Organizations have to have a way to meet the needs of both of those types. And there's not a lot of things available today that do that. >> Well let's keep going on that. Because I hear you talking about the data scientists role and how it's critical to success, but with the new tools, data science and analytics skills can extend beyond the domain of just the data scientist. >> That's right. So look, we're unifying coders and clickers into a single platform, which we call IBM Data Science Experience. And as the demand for data science expertise grows, so does the need for these kind of tools. To bring them into the same environment. And my view is if you have the right platform, it enables the organization to collaborate. And suddenly you've changed the nature of data science from an individual sport to a team sport. >> So as somebody that, my background is in IT, the question is really is this an additional piece of what IT needs to do in 2017 and beyond? Or is it just another line item to the budget? >> So I'm afraid that some people might view it that way. As just another line item. But, I would challenge that and say data science is going to reinvent IT. It's going to change the nature of IT. And every organization needs to think about what are the skills that are critical? How do we engage a broader team to do this? Because once they get there, this is the chance to reinvent how they're performing IT. >> [Katie] Challenging or not? >> Look it's all a big challenge. Think about everything IT organizations have been through. Some of them were late to things like mobile, but then they caught up. Some were late to cloud, but then they caught up. I would just urge people, don't be late to data science. Use this as your chance to reinvent IT. Start with this notion of clickers and coders. This is a seminal moment. Much like mobile and cloud was. So don't be late. >> And I think it's critical because it could be so costly to wait. And Rob and I were even chatting earlier how data analytics is just moving into all different kinds of industries. And I can tell you even personally being effected by how important the analysis is in working in pediatric cancer for the last seven years. I personally implement virtual reality headsets to pediatric cancer hospitals across the country. And it's great. And it's working phenomenally. And the kids are amazed. And the staff is amazed. But the phase two of this project is putting in little metrics in the hardware that gather the breathing, the heart rate to show that we have data. Proof that we can hand over to the hospitals to continue making this program a success. So just in-- >> That's a great example. >> An interesting example. >> Saving lives? >> Yes. >> That's also applying a lot of what we talked about. >> Exciting stuff in the world of data science. >> Yes. Look, I just add this is an existential moment for every organization. Because what you do in this area is probably going to define how competitive you are going forward. And think about if you don't do something. What if one of your competitors goes and creates an application that's more engaging with clients? So my recommendation is start small. Experiment. Learn. Iterate on projects. Define the business outcomes. Then scale up. It's very doable. But you've got to take the first step. >> First step always critical. And now we're going to get to the fun hands on part of our story. Because in just a moment we're going to take a closer look at what data science can deliver. And where organizations are trying to get to. All right. Thank you Rob and now we've been joined by Siva Anne who is going to help us navigate this demo. First, welcome Siva. Give him a big round of applause. Yeah. All right, Rob break down what we're going to be looking at. You take over this demo. >> All right. So this is going to be pretty interesting. So Siva is going to take us through. So he's going to play the role of a financial adviser. Who wants to help better serve clients through recommendations. And I'm going to really illustrate three things. One is how do you federate data from multiple data sources? Inside the firewall, outside the firewall. How do you apply machine learning to predict and to automate? And then how do you move analytics closer to your data? So, what you're seeing here is a custom application for an investment firm. So, Siva, our financial adviser, welcome. So you can see at the top, we've got market data. We pulled that from an external source. And then we've got Siva's calendar in the middle. He's got clients on the right side. So page down, what else do you see down there Siva? >> [Siva] I can see the recent market news. And in here I can see that JP Morgan is calling for a US dollar rebound in the second half of the year. And, I have upcoming meeting with Leo Rakes. I can get-- >> [Rob] So let's go in there. Why don't you click on Leo Rakes. So, you're sitting at your desk, you're deciding how you're going to spend the day. You know you have a meeting with Leo. So you click on it. You immediately see, all right, so what do we know about him? We've got data governance implemented. So we know his age, we know his degree. We can see he's not that aggressive of a trader. Only six trades in the last few years. But then where it gets interesting is you go to the bottom. You start to see predicted industry affinity. Where did that come from? How do we have that? >> [Siva] So these green lines and red arrows here indicate the trending affinity of Leo Rakes for particular industry stocks. What we've done here is we've built machine learning models using customer's demographic data, his stock portfolios, and browsing behavior to build a model which can predict his affinity for a particular industry. >> [Rob] Interesting. So, I like to think of this, we call it celebrity experiences. So how do you treat every customer like they're a celebrity? So to some extent, we're reading his mind. Because without asking him, we know that he's going to have an affinity for auto stocks. So we go down. Now we look at his portfolio. You can see okay, he's got some different holdings. He's got Amazon, Google, Apple, and then he's got RACE, which is the ticker for Ferrari. You can see that's done incredibly well. And so, as a financial adviser, you look at this and you say, all right, we know he loves auto stocks. Ferrari's done very well. Let's create a hedge. Like what kind of security would interest him as a hedge against his position for Ferrari? Could we go figure that out? >> [Siva] Yes. Given I know that he's gotten an affinity for auto stocks, and I also see that Ferrari has got some terminus gains, I want to lock in these gains by hedging. And I want to do that by picking a auto stock which has got negative correlation with Ferrari. >> [Rob] So this is where we get to the idea of in database analytics. Cause you start clicking that and immediately we're getting instant answers of what's happening. So what did we find here? We're going to compare Ferrari and Honda. >> [Siva] I'm going to compare Ferrari with Honda. And what I see here instantly is that Honda has got a negative correlation with Ferrari, which makes it a perfect mix for his stock portfolio. Given he has an affinity for auto stocks and it correlates negatively with Ferrari. >> [Rob] These are very powerful tools at the hand of a financial adviser. You think about it. As a financial adviser, you wouldn't think about federating data, machine learning, pretty powerful. >> [Siva] Yes. So what we have seen here is that using the common SQL engine, we've been able to federate queries across multiple data sources. Db2 Warehouse in the cloud, IBM's Integrated Analytic System, and Hortonworks powered Hadoop platform for the new speeds. We've been able to use machine learning to derive innovative insights about his stock affinities. And drive the machine learning into the appliance. Closer to where the data resides to deliver high performance analytics. >> [Rob] At scale? >> [Siva] We're able to run millions of these correlations across stocks, currency, other factors. And even score hundreds of customers for their affinities on a daily basis. >> That's great. Siva, thank you for playing the role of financial adviser. So I just want to recap briefly. Cause this really powerful technology that's really simple. So we federated, we aggregated multiple data sources from all over the web and internal systems. And public cloud systems. Machine learning models were built that predicted Leo's affinity for a certain industry. In this case, automotive. And then you see when you deploy analytics next to your data, even a financial adviser, just with the click of a button is getting instant answers so they can go be more productive in their next meeting. This whole idea of celebrity experiences for your customer, that's available for everybody, if you take advantage of these types of capabilities. Katie, I'll hand it back to you. >> Good stuff. Thank you Rob. Thank you Siva. Powerful demonstration on what we've been talking about all afternoon. And thank you again to Siva for helping us navigate. Should be give him one more round of applause? We're going to be back in just a moment to look at how we operationalize all of this data. But in first, here's a message from me. If you're a part of a line of business, your main fear is disruption. You know data is the new goal that can create huge amounts of value. So does your competition. And they may be beating you to it. You're convinced there are new business models and revenue sources hidden in all the data. You just need to figure out how to leverage it. But with the scarcity of data scientists, you really can't rely solely on them. You may need more people throughout the organization that have the ability to extract value from data. And as a data science leader or data scientist, you have a lot of the same concerns. You spend way too much time looking for, prepping, and interpreting data and waiting for models to train. You know you need to operationalize the work you do to provide business value faster. What you want is an easier way to do data prep. And rapidly build models that can be easily deployed, monitored and automatically updated. So whether you're a data scientist, data science leader, or in a line of business, what's the solution? What'll it take to transform the way you work? That's what we're going to explore next. All right, now it's time to delve deeper into the nuts and bolts. The nitty gritty of operationalizing data science and creating a data driven culture. How do you actually do that? Well that's what these experts are here to share with us. I'm joined by Nir Kaldero, who's head of data science at Galvanize, which is an education and training organization. Tricia Wang, who is co-founder of Sudden Compass, a consultancy that helps companies understand people with data. And last, but certainly not least, Michael Li, founder and CEO of Data Incubator, which is a data science train company. All right guys. Shall we get right to it? >> All right. >> So data explosion happening right now. And we are seeing it across the board. I just shared an example of how it's impacting my philanthropic work in pediatric cancer. But you guys each have so many unique roles in your business life. How are you seeing it just blow up in your fields? Nir, your thing? >> Yeah, for example like in Galvanize we train many Fortune 500 companies. And just by looking at the demand of companies that wants us to help them go through this digital transformation is mind-blowing. Data point by itself. >> Okay. Well what we're seeing what's going on is that data science like as a theme, is that it's actually for everyone now. But what's happening is that it's actually meeting non technical people. But what we're seeing is that when non technical people are implementing these tools or coming at these tools without a base line of data literacy, they're often times using it in ways that distance themselves from the customer. Because they're implementing data science tools without a clear purpose, without a clear problem. And so what we do at Sudden Compass is that we work with companies to help them embrace and understand the complexity of their customers. Because often times they are misusing data science to try and flatten their understanding of the customer. As if you can just do more traditional marketing. Where you're putting people into boxes. And I think the whole ROI of data is that you can now understand people's relationships at a much more complex level at a greater scale before. But we have to do this with basic data literacy. And this has to involve technical and non technical people. >> Well you can have all the data in the world, and I think it speaks to, if you're not doing the proper movement with it, forget it. It means nothing at the same time. >> No absolutely. I mean, I think that when you look at the huge explosion in data, that comes with it a huge explosion in data experts. Right, we call them data scientists, data analysts. And sometimes they're people who are very, very talented, like the people here. But sometimes you have people who are maybe re-branding themselves, right? Trying to move up their title one notch to try to attract that higher salary. And I think that that's one of the things that customers are coming to us for, right? They're saying, hey look, there are a lot of people that call themselves data scientists, but we can't really distinguish. So, we have sort of run a fellowship where you help companies hire from a really talented group of folks, who are also truly data scientists and who know all those kind of really important data science tools. And we also help companies internally. Fortune 500 companies who are looking to grow that data science practice that they have. And we help clients like McKinsey, BCG, Bain, train up their customers, also their clients, also their workers to be more data talented. And to build up that data science capabilities. >> And Nir, this is something you work with a lot. A lot of Fortune 500 companies. And when we were speaking earlier, you were saying many of these companies can be in a panic. >> Yeah. >> Explain that. >> Yeah, so you know, not all Fortune 500 companies are fully data driven. And we know that the winners in this fourth industrial revolution, which I like to call the machine intelligence revolution, will be companies who navigate and transform their organization to unlock the power of data science and machine learning. And the companies that are not like that. Or not utilize data science and predictive power well, will pretty much get shredded. So they are in a panic. >> Tricia, companies have to deal with data behind the firewall and in the new multi cloud world. How do organizations start to become driven right to the core? >> I think the most urgent question to become data driven that companies should be asking is how do I bring the complex reality that our customers are experiencing on the ground in to a corporate office? Into the data models. So that question is critical because that's how you actually prevent any big data disasters. And that's how you leverage big data. Because when your data models are really far from your human models, that's when you're going to do things that are really far off from how, it's going to not feel right. That's when Tesco had their terrible big data disaster that they're still recovering from. And so that's why I think it's really important to understand that when you implement big data, you have to further embrace thick data. The qualitative, the emotional stuff, that is difficult to quantify. But then comes the difficult art and science that I think is the next level of data science. Which is that getting non technical and technical people together to ask how do we find those unknown nuggets of insights that are difficult to quantify? Then, how do we do the next step of figuring out how do you mathematically scale those insights into a data model? So that actually is reflective of human understanding? And then we can start making decisions at scale. But you have to have that first. >> That's absolutely right. And I think that when we think about what it means to be a data scientist, right? I always think about it in these sort of three pillars. You have the math side. You have to have that kind of stats, hardcore machine learning background. You have the programming side. You don't work with small amounts of data. You work with large amounts of data. You've got to be able to type the code to make those computers run. But then the last part is that human element. You have to understand the domain expertise. You have to understand what it is that I'm actually analyzing. What's the business proposition? And how are the clients, how are the users actually interacting with the system? That human element that you were talking about. And I think having somebody who understands all of those and not just in isolation, but is able to marry that understanding across those different topics, that's what makes a data scientist. >> But I find that we don't have people with those skill sets. And right now the way I see teams being set up inside companies is that they're creating these isolated data unicorns. These data scientists that have graduated from your programs, which are great. But, they don't involve the people who are the domain experts. They don't involve the designers, the consumer insight people, the people, the salespeople. The people who spend time with the customers day in and day out. Somehow they're left out of the room. They're consulted, but they're not a stakeholder. >> Can I actually >> Yeah, yeah please. >> Can I actually give a quick example? So for example, we at Galvanize train the executives and the managers. And then the technical people, the data scientists and the analysts. But in order to actually see all of the RY behind the data, you also have to have a creative fluid conversation between non technical and technical people. And this is a major trend now. And there's a major gap. And we need to increase awareness and kind of like create a new, kind of like environment where technical people also talks seamlessly with non technical ones. >> [Tricia] We call-- >> That's one of the things that we see a lot. Is one of the trends in-- >> A major trend. >> data science training is it's not just for the data science technical experts. It's not just for one type of person. So a lot of the training we do is sort of data engineers. People who are more on the software engineering side learning more about the stats of math. And then people who are sort of traditionally on the stat side learning more about the engineering. And then managers and people who are data analysts learning about both. >> Michael, I think you said something that was of interest too because I think we can look at IBM Watson as an example. And working in healthcare. The human component. Because often times we talk about machine learning and AI, and data and you get worried that you still need that human component. Especially in the world of healthcare. And I think that's a very strong point when it comes to the data analysis side. Is there any particular example you can speak to of that? >> So I think that there was this really excellent paper a while ago talking about all the neuro net stuff and trained on textual data. So looking at sort of different corpuses. And they found that these models were highly, highly sexist. They would read these corpuses and it's not because neuro nets themselves are sexist. It's because they're reading the things that we write. And it turns out that we write kind of sexist things. And they would sort of find all these patterns in there that were sort of latent, that had a lot of sort of things that maybe we would cringe at if we sort of saw. And I think that's one of the really important aspects of the human element, right? It's being able to come in and sort of say like, okay, I know what the biases of the system are, I know what the biases of the tools are. I need to figure out how to use that to make the tools, make the world a better place. And like another area where this comes up all the time is lending, right? So the federal government has said, and we have a lot of clients in the financial services space, so they're constantly under these kind of rules that they can't make discriminatory lending practices based on a whole set of protected categories. Race, sex, gender, things like that. But, it's very easy when you train a model on credit scores to pick that up. And then to have a model that's inadvertently sexist or racist. And that's where you need the human element to come back in and say okay, look, you're using the classic example would be zip code, you're using zip code as a variable. But when you look at it, zip codes actually highly correlated with race. And you can't do that. So you may inadvertently by sort of following the math and being a little naive about the problem, inadvertently introduce something really horrible into a model and that's where you need a human element to sort of step in and say, okay hold on. Slow things down. This isn't the right way to go. >> And the people who have -- >> I feel like, I can feel her ready to respond. >> Yes, I'm ready. >> She's like let me have at it. >> And the people here it is. And the people who are really great at providing that human intelligence are social scientists. We are trained to look for bias and to understand bias in data. Whether it's quantitative or qualitative. And I really think that we're going to have less of these kind of problems if we had more integrated teams. If it was a mandate from leadership to say no data science team should be without a social scientist, ethnographer, or qualitative researcher of some kind, to be able to help see these biases. >> The talent piece is actually the most crucial-- >> Yeah. >> one here. If you look about how to enable machine intelligence in organization there are the pillars that I have in my head which is the culture, the talent and the technology infrastructure. And I believe and I saw in working very closely with the Fortune 100 and 200 companies that the talent piece is actually the most important crucial hard to get. >> [Tricia] I totally agree. >> It's absolutely true. Yeah, no I mean I think that's sort of like how we came up with our business model. Companies were basically saying hey, I can't hire data scientists. And so we have a fellowship where we get 2,000 applicants each quarter. We take the top 2% and then we sort of train them up. And we work with hiring companies who then want to hire from that population. And so we're sort of helping them solve that problem. And the other half of it is really around training. Cause with a lot of industries, especially if you're sort of in a more regulated industry, there's a lot of nuances to what you're doing. And the fastest way to develop that data science or AI talent may not necessarily be to hire folks who are coming out of a PhD program. It may be to take folks internally who have a lot of that domain knowledge that you have and get them trained up on those data science techniques. So we've had large insurance companies come to us and say hey look, we hire three or four folks from you a quarter. That doesn't move the needle for us. What we really need is take the thousand actuaries and statisticians that we have and get all of them trained up to become a data scientist and become data literate in this new open source world. >> [Katie] Go ahead. >> All right, ladies first. >> Go ahead. >> Are you sure? >> No please, fight first. >> Go ahead. >> Go ahead Nir. >> So this is actually a trend that we have been seeing in the past year or so that companies kind of like start to look how to upscale and look for talent within the organization. So they can actually move them to become more literate and navigate 'em from analyst to data scientist. And from data scientist to machine learner. So this is actually a trend that is happening already for a year or so. >> Yeah, but I also find that after they've gone through that training in getting people skilled up in data science, the next problem that I get is executives coming to say we've invested in all of this. We're still not moving the needle. We've already invested in the right tools. We've gotten the right skills. We have enough scale of people who have these skills. Why are we not moving the needle? And what I explain to them is look, you're still making decisions in the same way. And you're still not involving enough of the non technical people. Especially from marketing, which is now, the CMO's are much more responsible for driving growth in their companies now. But often times it's so hard to change the old way of marketing, which is still like very segmentation. You know, demographic variable based, and we're trying to move people to say no, you have to understand the complexity of customers and not put them in boxes. >> And I think underlying a lot of this discussion is this question of culture, right? >> Yes. >> Absolutely. >> How do you build a data driven culture? And I think that that culture question, one of the ways that comes up quite often in especially in large, Fortune 500 enterprises, is that they are very, they're not very comfortable with sort of example, open source architecture. Open source tools. And there is some sort of residual bias that that's somehow dangerous. So security vulnerability. And I think that that's part of the cultural challenge that they often have in terms of how do I build a more data driven organization? Well a lot of the talent really wants to use these kind of tools. And I mean, just to give you an example, we are partnering with one of the major cloud providers to sort of help make open source tools more user friendly on their platform. So trying to help them attract the best technologists to use their platform because they want and they understand the value of having that kind of open source technology work seamlessly on their platforms. So I think that just sort of goes to show you how important open source is in this movement. And how much large companies and Fortune 500 companies and a lot of the ones we work with have to embrace that. >> Yeah, and I'm seeing it in our work. Even when we're working with Fortune 500 companies, is that they've already gone through the first phase of data science work. Where I explain it was all about the tools and getting the right tools and architecture in place. And then companies started moving into getting the right skill set in place. Getting the right talent. And what you're talking about with culture is really where I think we're talking about the third phase of data science, which is looking at communication of these technical frameworks so that we can get non technical people really comfortable in the same room with data scientists. That is going to be the phase, that's really where I see the pain point. And that's why at Sudden Compass, we're really dedicated to working with each other to figure out how do we solve this problem now? >> And I think that communication between the technical stakeholders and management and leadership. That's a very critical piece of this. You can't have a successful data science organization without that. >> Absolutely. >> And I think that actually some of the most popular trainings we've had recently are from managers and executives who are looking to say, how do I become more data savvy? How do I figure out what is this data science thing and how do I communicate with my data scientists? >> You guys made this way too easy. I was just going to get some popcorn and watch it play out. >> Nir, last 30 seconds. I want to leave you with an opportunity to, anything you want to add to this conversation? >> I think one thing to conclude is to say that companies that are not data driven is about time to hit refresh and figure how they transition the organization to become data driven. To become agile and nimble so they can actually see what opportunities from this important industrial revolution. Otherwise, unfortunately they will have hard time to survive. >> [Katie] All agreed? >> [Tricia] Absolutely, you're right. >> Michael, Trish, Nir, thank you so much. Fascinating discussion. And thank you guys again for joining us. We will be right back with another great demo. Right after this. >> Thank you Katie. >> Once again, thank you for an excellent discussion. Weren't they great guys? And thank you for everyone who's tuning in on the live webcast. As you can hear, we have an amazing studio audience here. And we're going to keep things moving. I'm now joined by Daniel Hernandez and Siva Anne. And we're going to turn our attention to how you can deliver on what they're talking about using data science experience to do data science faster. >> Thank you Katie. Siva and I are going to spend the next 10 minutes showing you how you can deliver on what they were saying using the IBM Data Science Experience to do data science faster. We'll demonstrate through new features we introduced this week how teams can work together more effectively across the entire analytics life cycle. How you can take advantage of any and all data no matter where it is and what it is. How you could use your favorite tools from open source. And finally how you could build models anywhere and employ them close to where your data is. Remember the financial adviser app Rob showed you? To build an app like that, we needed a team of data scientists, developers, data engineers, and IT staff to collaborate. We do this in the Data Science Experience through a concept we call projects. When I create a new project, I can now use the new Github integration feature. We're doing for data science what we've been doing for developers for years. Distributed teams can work together on analytics projects. And take advantage of Github's version management and change management features. This is a huge deal. Let's explore the project we created for the financial adviser app. As you can see, our data engineer Joane, our developer Rob, and others are collaborating this project. Joane got things started by bringing together the trusted data sources we need to build the app. Taking a closer look at the data, we see that our customer and profile data is stored on our recently announced IBM Integrated Analytics System, which runs safely behind our firewall. We also needed macro economic data, which she was able to find in the Federal Reserve. And she stored it in our Db2 Warehouse on Cloud. And finally, she selected stock news data from NASDAQ.com and landed that in a Hadoop cluster, which happens to be powered by Hortonworks. We added a new feature to the Data Science Experience so that when it's installed with Hortonworks, it automatically uses a need of security and governance controls within the cluster so your data is always secure and safe. Now we want to show you the news data we stored in the Hortonworks cluster. This is the mean administrative console. It's powered by an open source project called Ambari. And here's the news data. It's in parquet files stored in HDFS, which happens to be a distributive file system. To get the data from NASDAQ into our cluster, we used IBM's BigIntegrate and BigQuality to create automatic data pipelines that acquire, cleanse, and ingest that news data. Once the data's available, we use IBM's Big SQL to query that data using SQL statements that are much like the ones we would use for any relation of data, including the data that we have in the Integrated Analytics System and Db2 Warehouse on Cloud. This and the federation capabilities that Big SQL offers dramatically simplifies data acquisition. Now we want to show you how we support a brand new tool that we're excited about. Since we launched last summer, the Data Science Experience has supported Jupyter and R for data analysis and visualization. In this week's update, we deeply integrated another great open source project called Apache Zeppelin. It's known for having great visualization support, advanced collaboration features, and is growing in popularity amongst the data science community. This is an example of Apache Zeppelin and the notebook we created through it to explore some of our data. Notice how wonderful and easy the data visualizations are. Now we want to walk you through the Jupyter notebook we created to explore our customer preference for stocks. We use notebooks to understand and explore data. To identify the features that have some predictive power. Ultimately, we're trying to assess what ultimately is driving customer stock preference. Here we did the analysis to identify the attributes of customers that are likely to purchase auto stocks. We used this understanding to build our machine learning model. For building machine learning models, we've always had tools integrated into the Data Science Experience. But sometimes you need to use tools you already invested in. Like our very own SPSS as well as SAS. Through new import feature, you can easily import those models created with those tools. This helps you avoid vendor lock-in, and simplify the development, training, deployment, and management of all your models. To build the models we used in app, we could have coded, but we prefer a visual experience. We used our customer profile data in the Integrated Analytic System. Used the Auto Data Preparation to cleanse our data. Choose the binary classification algorithms. Let the Data Science Experience evaluate between logistic regression and gradient boosted tree. It's doing the heavy work for us. As you can see here, the Data Science Experience generated performance metrics that show us that the gradient boosted tree is the best performing algorithm for the data we gave it. Once we save this model, it's automatically deployed and available for developers to use. Any application developer can take this endpoint and consume it like they would any other API inside of the apps they built. We've made training and creating machine learning models super simple. But what about the operations? A lot of companies are struggling to ensure their model performance remains high over time. In our financial adviser app, we know that customer data changes constantly, so we need to always monitor model performance and ensure that our models are retrained as is necessary. This is a dashboard that shows the performance of our models and lets our teams monitor and retrain those models so that they're always performing to our standards. So far we've been showing you the Data Science Experience available behind the firewall that we're using to build and train models. Through a new publish feature, you can build models and deploy them anywhere. In another environment, private, public, or anywhere else with just a few clicks. So here we're publishing our model to the Watson machine learning service. It happens to be in the IBM cloud. And also deeply integrated with our Data Science Experience. After publishing and switching to the Watson machine learning service, you can see that our stock affinity and model that we just published is there and ready for use. So this is incredibly important. I just want to say it again. The Data Science Experience allows you to train models behind your own firewall, take advantage of your proprietary and sensitive data, and then deploy those models wherever you want with ease. So summarize what we just showed you. First, IBM's Data Science Experience supports all teams. You saw how our data engineer populated our project with trusted data sets. Our data scientists developed, trained, and tested a machine learning model. Our developers used APIs to integrate machine learning into their apps. And how IT can use our Integrated Model Management dashboard to monitor and manage model performance. Second, we support all data. On premises, in the cloud, structured, unstructured, inside of your firewall, and outside of it. We help you bring analytics and governance to where your data is. Third, we support all tools. The data science tools that you depend on are readily available and deeply integrated. This includes capabilities from great partners like Hortonworks. And powerful tools like our very own IBM SPSS. And fourth, and finally, we support all deployments. You can build your models anywhere, and deploy them right next to where your data is. Whether that's in the public cloud, private cloud, or even on the world's most reliable transaction platform, IBM z. So see for yourself. Go to the Data Science Experience website, take us for a spin. And if you happen to be ready right now, our recently created Data Science Elite Team can help you get started and run experiments alongside you with no charge. Thank you very much. >> Thank you very much Daniel. It seems like a great time to get started. And thanks to Siva for taking us through it. Rob and I will be back in just a moment to add some perspective right after this. All right, once again joined by Rob Thomas. And Rob obviously we got a lot of information here. >> Yes, we've covered a lot of ground. >> This is intense. You got to break it down for me cause I think we zoom out and see the big picture. What better data science can deliver to a business? Why is this so important? I mean we've heard it through and through. >> Yeah, well, I heard it a couple times. But it starts with businesses have to embrace a data driven culture. And it is a change. And we need to make data accessible with the right tools in a collaborative culture because we've got diverse skill sets in every organization. But data driven companies succeed when data science tools are in the hands of everyone. And I think that's a new thought. I think most companies think just get your data scientist some tools, you'll be fine. This is about tools in the hands of everyone. I think the panel did a great job of describing about how we get to data science for all. Building a data culture, making it a part of your everyday operations, and the highlights of what Daniel just showed us, that's some pretty cool features for how organizations can get to this, which is you can see IBM's Data Science Experience, how that supports all teams. You saw data analysts, data scientists, application developer, IT staff, all working together. Second, you saw how we support all tools. And your choice of tools. So the most popular data science libraries integrated into one platform. And we saw some new capabilities that help companies avoid lock-in, where you can import existing models created from specialist tools like SPSS or others. And then deploy them and manage them inside of Data Science Experience. That's pretty interesting. And lastly, you see we continue to build on this best of open tools. Partnering with companies like H2O, Hortonworks, and others. Third, you can see how you use all data no matter where it lives. That's a key challenge every organization's going to face. Private, public, federating all data sources. We announced new integration with the Hortonworks data platform where we deploy machine learning models where your data resides. That's been a key theme. Analytics where the data is. And lastly, supporting all types of deployments. Deploy them in your Hadoop cluster. Deploy them in your Integrated Analytic System. Or deploy them in z, just to name a few. A lot of different options here. But look, don't believe anything I say. Go try it for yourself. Data Science Experience, anybody can use it. Go to datascience.ibm.com and look, if you want to start right now, we just created a team that we call Data Science Elite. These are the best data scientists in the world that will come sit down with you and co-create solutions, models, and prove out a proof of concept. >> Good stuff. Thank you Rob. So you might be asking what does an organization look like that embraces data science for all? And how could it transform your role? I'm going to head back to the office and check it out. Let's start with the perspective of the line of business. What's changed? Well, now you're starting to explore new business models. You've uncovered opportunities for new revenue sources and all that hidden data. And being disrupted is no longer keeping you up at night. As a data science leader, you're beginning to collaborate with a line of business to better understand and translate the objectives into the models that are being built. Your data scientists are also starting to collaborate with the less technical team members and analysts who are working closest to the business problem. And as a data scientist, you stop feeling like you're falling behind. Open source tools are keeping you current. You're also starting to operationalize the work that you do. And you get to do more of what you love. Explore data, build models, put your models into production, and create business impact. All in all, it's not a bad scenario. Thanks. All right. We are back and coming up next, oh this is a special time right now. Cause we got a great guest speaker. New York Magazine called him the spreadsheet psychic and number crunching prodigy who went from correctly forecasting baseball games to correctly forecasting presidential elections. He even invented a proprietary algorithm called PECOTA for predicting future performance by baseball players and teams. And his New York Times bestselling book, The Signal and the Noise was named by Amazon.com as the number one best non-fiction book of 2012. He's currently the Editor in Chief of the award winning website, FiveThirtyEight and appears on ESPN as an on air commentator. Big round of applause. My pleasure to welcome Nate Silver. >> Thank you. We met backstage. >> Yes. >> It feels weird to re-shake your hand, but you know, for the audience. >> I had to give the intense firm grip. >> Definitely. >> The ninja grip. So you and I have crossed paths kind of digitally in the past, which it really interesting, is I started my career at ESPN. And I started as a production assistant, then later back on air for sports technology. And I go to you to talk about sports because-- >> Yeah. >> Wow, has ESPN upped their game in terms of understanding the importance of data and analytics. And what it brings. Not just to MLB, but across the board. >> No, it's really infused into the way they present the broadcast. You'll have win probability on the bottom line. And they'll incorporate FiveThirtyEight metrics into how they cover college football for example. So, ESPN ... Sports is maybe the perfect, if you're a data scientist, like the perfect kind of test case. And the reason being that sports consists of problems that have rules. And have structure. And when problems have rules and structure, then it's a lot easier to work with. So it's a great way to kind of improve your skills as a data scientist. Of course, there are also important real world problems that are more open ended, and those present different types of challenges. But it's such a natural fit. The teams. Think about the teams playing the World Series tonight. The Dodgers and the Astros are both like very data driven, especially Houston. Golden State Warriors, the NBA Champions, extremely data driven. New England Patriots, relative to an NFL team, it's shifted a little bit, the NFL bar is lower. But the Patriots are certainly very analytical in how they make decisions. So, you can't talk about sports without talking about analytics. >> And I was going to save the baseball question for later. Cause we are moments away from game seven. >> Yeah. >> Is everyone else watching game seven? It's been an incredible series. Probably one of the best of all time. >> Yeah, I mean-- >> You have a prediction here? >> You can mention that too. So I don't have a prediction. FiveThirtyEight has the Dodgers with a 60% chance of winning. >> [Katie] LA Fans. >> So you have two teams that are about equal. But the Dodgers pitching staff is in better shape at the moment. The end of a seven game series. And they're at home. >> But the statistics behind the two teams is pretty incredible. >> Yeah. It's like the first World Series in I think 56 years or something where you have two 100 win teams facing one another. There have been a lot of parity in baseball for a lot of years. Not that many offensive overall juggernauts. But this year, and last year with the Cubs and the Indians too really. But this year, you have really spectacular teams in the World Series. It kind of is a showcase of modern baseball. Lots of home runs. Lots of strikeouts. >> [Katie] Lots of extra innings. >> Lots of extra innings. Good defense. Lots of pitching changes. So if you love the modern baseball game, it's been about the best example that you've had. If you like a little bit more contact, and fewer strikeouts, maybe not so much. But it's been a spectacular and very exciting World Series. It's amazing to talk. MLB is huge with analysis. I mean, hands down. But across the board, if you can provide a few examples. Because there's so many teams in front offices putting such an, just a heavy intensity on the analysis side. And where the teams are going. And if you could provide any specific examples of teams that have really blown your mind. Especially over the last year or two. Because every year it gets more exciting if you will. I mean, so a big thing in baseball is defensive shifts. So if you watch tonight, you'll probably see a couple of plays where if you're used to watching baseball, a guy makes really solid contact. And there's a fielder there that you don't think should be there. But that's really very data driven where you analyze where's this guy hit the ball. That part's not so hard. But also there's game theory involved. Because you have to adjust for the fact that he knows where you're positioning the defenders. He's trying therefore to make adjustments to his own swing and so that's been a major innovation in how baseball is played. You know, how bullpens are used too. Where teams have realized that actually having a guy, across all sports pretty much, realizing the importance of rest. And of fatigue. And that you can be the best pitcher in the world, but guess what? After four or five innings, you're probably not as good as a guy who has a fresh arm necessarily. So I mean, it really is like, these are not subtle things anymore. It's not just oh, on base percentage is valuable. It really effects kind of every strategic decision in baseball. The NBA, if you watch an NBA game tonight, see how many three point shots are taken. That's in part because of data. And teams realizing hey, three points is worth more than two, once you're more than about five feet from the basket, the shooting percentage gets really flat. And so it's revolutionary, right? Like teams that will shoot almost half their shots from the three point range nowadays. Larry Bird, who wound up being one of the greatest three point shooters of all time, took only eight three pointers his first year in the NBA. It's quite noticeable if you watch baseball or basketball in particular. >> Not to focus too much on sports. One final question. In terms of Major League Soccer, and now in NFL, we're having the analysis and having wearables where it can now showcase if they wanted to on screen, heart rate and breathing and how much exertion. How much data is too much data? And when does it ruin the sport? >> So, I don't think, I mean, again, it goes sport by sport a little bit. I think in basketball you actually have a more exciting game. I think the game is more open now. You have more three pointers. You have guys getting higher assist totals. But you know, I don't know. I'm not one of those people who thinks look, if you love baseball or basketball, and you go in to work for the Astros, the Yankees or the Knicks, they probably need some help, right? You really have to be passionate about that sport. Because it's all based on what questions am I asking? As I'm a fan or I guess an employee of the team. Or a player watching the game. And there isn't really any substitute I don't think for the insight and intuition that a curious human has to kind of ask the right questions. So we can talk at great length about what tools do you then apply when you have those questions, but that still comes from people. I don't think machine learning could help with what questions do I want to ask of the data. It might help you get the answers. >> If you have a mid-fielder in a soccer game though, not exerting, only 80%, and you're seeing that on a screen as a fan, and you're saying could that person get fired at the end of the day? One day, with the data? >> So we found that actually some in soccer in particular, some of the better players are actually more still. So Leo Messi, maybe the best player in the world, doesn't move as much as other soccer players do. And the reason being that A) he kind of knows how to position himself in the first place. B) he realizes that you make a run, and you're out of position. That's quite fatiguing. And particularly soccer, like basketball, is a sport where it's incredibly fatiguing. And so, sometimes the guys who conserve their energy, that kind of old school mentality, you have to hustle at every moment. That is not helpful to the team if you're hustling on an irrelevant play. And therefore, on a critical play, can't get back on defense, for example. >> Sports, but also data is moving exponentially as we're just speaking about today. Tech, healthcare, every different industry. Is there any particular that's a favorite of yours to cover? And I imagine they're all different as well. >> I mean, I do like sports. We cover a lot of politics too. Which is different. I mean in politics I think people aren't intuitively as data driven as they might be in sports for example. It's impressive to follow the breakthroughs in artificial intelligence. It started out just as kind of playing games and playing chess and poker and Go and things like that. But you really have seen a lot of breakthroughs in the last couple of years. But yeah, it's kind of infused into everything really. >> You're known for your work in politics though. Especially presidential campaigns. >> Yeah. >> This year, in particular. Was it insanely challenging? What was the most notable thing that came out of any of your predictions? >> I mean, in some ways, looking at the polling was the easiest lens to look at it. So I think there's kind of a myth that last year's result was a big shock and it wasn't really. If you did the modeling in the right way, then you realized that number one, polls have a margin of error. And so when a candidate has a three point lead, that's not particularly safe. Number two, the outcome between different states is correlated. Meaning that it's not that much of a surprise that Clinton lost Wisconsin and Michigan and Pennsylvania and Ohio. You know I'm from Michigan. Have friends from all those states. Kind of the same types of people in those states. Those outcomes are all correlated. So what people thought was a big upset for the polls I think was an example of how data science done carefully and correctly where you understand probabilities, understand correlations. Our model gave Trump a 30% chance of winning. Others models gave him a 1% chance. And so that was interesting in that it showed that number one, that modeling strategies and skill do matter quite a lot. When you have someone saying 30% versus 1%. I mean, that's a very very big spread. And number two, that these aren't like solved problems necessarily. Although again, the problem with elections is that you only have one election every four years. So I can be very confident that I have a better model. Even one year of data doesn't really prove very much. Even five or 10 years doesn't really prove very much. And so, being aware of the limitations to some extent intrinsically in elections when you only get one kind of new training example every four years, there's not really any way around that. There are ways to be more robust to sparce data environments. But if you're identifying different types of business problems to solve, figuring out what's a solvable problem where I can add value with data science is a really key part of what you're doing. >> You're such a leader in this space. In data and analysis. It would be interesting to kind of peek back the curtain, understand how you operate but also how large is your team? How you're putting together information. How quickly you're putting it out. Cause I think in this right now world where everybody wants things instantly-- >> Yeah. >> There's also, you want to be first too in the world of journalism. But you don't want to be inaccurate because that's your credibility. >> We talked about this before, right? I think on average, speed is a little bit overrated in journalism. >> [Katie] I think it's a big problem in journalism. >> Yeah. >> Especially in the tech world. You have to be first. You have to be first. And it's just pumping out, pumping out. And there's got to be more time spent on stories if I can speak subjectively. >> Yeah, for sure. But at the same time, we are reacting to the news. And so we have people that come in, we hire most of our people actually from journalism. >> [Katie] How many people do you have on your team? >> About 35. But, if you get someone who comes in from an academic track for example, they might be surprised at how fast journalism is. That even though we might be slower than the average website, the fact that there's a tragic event in New York, are there things we have to say about that? A candidate drops out of the presidential race, are things we have to say about that. In periods ranging from minutes to days as opposed to kind of weeks to months to years in the academic world. The corporate world moves faster. What is a little different about journalism is that you are expected to have more precision where people notice when you make a mistake. In corporations, you have maybe less transparency. If you make 10 investments and seven of them turn out well, then you'll get a lot of profit from that, right? In journalism, it's a little different. If you make kind of seven predictions or say seven things, and seven of them are very accurate and three of them aren't, you'll still get criticized a lot for the three. Just because that's kind of the way that journalism is. And so the kind of combination of needing, not having that much tolerance for mistakes, but also needing to be fast. That is tricky. And I criticize other journalists sometimes including for not being data driven enough, but the best excuse any journalist has, this is happening really fast and it's my job to kind of figure out in real time what's going on and provide useful information to the readers. And that's really difficult. Especially in a world where literally, I'll probably get off the stage and check my phone and who knows what President Trump will have tweeted or what things will have happened. But it really is a kind of 24/7. >> Well because it's 24/7 with FiveThirtyEight, one of the most well known sites for data, are you feeling micromanagey on your people? Because you do have to hit this balance. You can't have something come out four or five days later. >> Yeah, I'm not -- >> Are you overseeing everything? >> I'm not by nature a micromanager. And so you try to hire well. You try and let people make mistakes. And the flip side of this is that if a news organization that never had any mistakes, never had any corrections, that's raw, right? You have to have some tolerance for error because you are trying to decide things in real time. And figure things out. I think transparency's a big part of that. Say here's what we think, and here's why we think it. If we have a model to say it's not just the final number, here's a lot of detail about how that's calculated. In some case we release the code and the raw data. Sometimes we don't because there's a proprietary advantage. But quite often we're saying we want you to trust us and it's so important that you trust us, here's the model. Go play around with it yourself. Here's the data. And that's also I think an important value. >> That speaks to open source. And your perspective on that in general. >> Yeah, I mean, look, I'm a big fan of open source. I worry that I think sometimes the trends are a little bit away from open source. But by the way, one thing that happens when you share your data or you share your thinking at least in lieu of the data, and you can definitely do both is that readers will catch embarrassing mistakes that you made. By the way, even having open sourceness within your team, I mean we have editors and copy editors who often save you from really embarrassing mistakes. And by the way, it's not necessarily people who have a training in data science. I would guess that of our 35 people, maybe only five to 10 have a kind of formal background in what you would call data science. >> [Katie] I think that speaks to the theme here. >> Yeah. >> [Katie] That everybody's kind of got to be data literate. >> But yeah, it is like you have a good intuition. You have a good BS detector basically. And you have a good intuition for hey, this looks a little bit out of line to me. And sometimes that can be based on domain knowledge, right? We have one of our copy editors, she's a big college football fan. And we had an algorithm we released that tries to predict what the human being selection committee will do, and she was like, why is LSU rated so high? Cause I know that LSU sucks this year. And we looked at it, and she was right. There was a bug where it had forgotten to account for their last game where they lost to Troy or something and so -- >> That also speaks to the human element as well. >> It does. In general as a rule, if you're designing a kind of regression based model, it's different in machine learning where you have more, when you kind of build in the tolerance for error. But if you're trying to do something more precise, then so much of it is just debugging. It's saying that looks wrong to me. And I'm going to investigate that. And sometimes it's not wrong. Sometimes your model actually has an insight that you didn't have yourself. But fairly often, it is. And I think kind of what you learn is like, hey if there's something that bothers me, I want to go investigate that now and debug that now. Because the last thing you want is where all of a sudden, the answer you're putting out there in the world hinges on a mistake that you made. Cause you never know if you have so to speak, 1,000 lines of code and they all perform something differently. You never know when you get in a weird edge case where this one decision you made winds up being the difference between your having a good forecast and a bad one. In a defensible position and a indefensible one. So we definitely are quite diligent and careful. But it's also kind of knowing like, hey, where is an approximation good enough and where do I need more precision? Cause you could also drive yourself crazy in the other direction where you know, it doesn't matter if the answer is 91.2 versus 90. And so you can kind of go 91.2, three, four and it's like kind of A) false precision and B) not a good use of your time. So that's where I do still spend a lot of time is thinking about which problems are "solvable" or approachable with data and which ones aren't. And when they're not by the way, you're still allowed to report on them. We are a news organization so we do traditional reporting as well. And then kind of figuring out when do you need precision versus when is being pointed in the right direction good enough? >> I would love to get inside your brain and see how you operate on just like an everyday walking to Walgreens movement. It's like oh, if I cross the street in .2-- >> It's not, I mean-- >> Is it like maddening in there? >> No, not really. I mean, I'm like-- >> This is an honest question. >> If I'm looking for airfares, I'm a little more careful. But no, part of it's like you don't want to waste time on unimportant decisions, right? I will sometimes, if I can't decide what to eat at a restaurant, I'll flip a coin. If the chicken and the pasta both sound really good-- >> That's not high tech Nate. We want better. >> But that's the point, right? It's like both the chicken and the pasta are going to be really darn good, right? So I'm not going to waste my time trying to figure it out. I'm just going to have an arbitrary way to decide. >> Serious and business, how organizations in the last three to five years have just evolved with this data boom. How are you seeing it as from a consultant point of view? Do you think it's an exciting time? Do you think it's a you must act now time? >> I mean, we do know that you definitely see a lot of talent among the younger generation now. That so FiveThirtyEight has been at ESPN for four years now. And man, the quality of the interns we get has improved so much in four years. The quality of the kind of young hires that we make straight out of college has improved so much in four years. So you definitely do see a younger generation for which this is just part of their bloodstream and part of their DNA. And also, particular fields that we're interested in. So we're interested in people who have both a data and a journalism background. We're interested in people who have a visualization and a coding background. A lot of what we do is very much interactive graphics and so forth. And so we do see those skill sets coming into play a lot more. And so the kind of shortage of talent that had I think frankly been a problem for a long time, I'm optimistic based on the young people in our office, it's a little anecdotal but you can tell that there are so many more programs that are kind of teaching students the right set of skills that maybe weren't taught as much a few years ago. >> But when you're seeing these big organizations, ESPN as perfect example, moving more towards data and analytics than ever before. >> Yeah. >> You would say that's obviously true. >> Oh for sure. >> If you're not moving that direction, you're going to fall behind quickly. >> Yeah and the thing is, if you read my book or I guess people have a copy of the book. In some ways it's saying hey, there are lot of ways to screw up when you're using data. And we've built bad models. We've had models that were bad and got good results. Good models that got bad results and everything else. But the point is that the reason to be out in front of the problem is so you give yourself more runway to make errors and mistakes. And to learn kind of what works and what doesn't and which people to put on the problem. I sometimes do worry that a company says oh we need data. And everyone kind of agrees on that now. We need data science. Then they have some big test case. And they have a failure. And they maybe have a failure because they didn't know really how to use it well enough. But learning from that and iterating on that. And so by the time that you're on the third generation of kind of a problem that you're trying to solve, and you're watching everyone else make the mistake that you made five years ago, I mean, that's really powerful. But that doesn't mean that getting invested in it now, getting invested both in technology and the human capital side is important. >> Final question for you as we run out of time. 2018 beyond, what is your biggest project in terms of data gathering that you're working on? >> There's a midterm election coming up. That's a big thing for us. We're also doing a lot of work with NBA data. So for four years now, the NBA has been collecting player tracking data. So they have 3D cameras in every arena. So they can actually kind of quantify for example how fast a fast break is, for example. Or literally where a player is and where the ball is. For every NBA game now for the past four or five years. And there hasn't really been an overall metric of player value that's taken advantage of that. The teams do it. But in the NBA, the teams are a little bit ahead of journalists and analysts. So we're trying to have a really truly next generation stat. It's a lot of data. Sometimes I now more oversee things than I once did myself. And so you're parsing through many, many, many lines of code. But yeah, so we hope to have that out at some point in the next few months. >> Anything you've personally been passionate about that you've wanted to work on and kind of solve? >> I mean, the NBA thing, I am a pretty big basketball fan. >> You can do better than that. Come on, I want something real personal that you're like I got to crunch the numbers. >> You know, we tried to figure out where the best burrito in America was a few years ago. >> I'm going to end it there. >> Okay. >> Nate, thank you so much for joining us. It's been an absolute pleasure. Thank you. >> Cool, thank you. >> I thought we were going to chat World Series, you know. Burritos, important. I want to thank everybody here in our audience. Let's give him a big round of applause. >> [Nate] Thank you everyone. >> Perfect way to end the day. And for a replay of today's program, just head on over to ibm.com/dsforall. I'm Katie Linendoll. And this has been Data Science for All: It's a Whole New Game. Test one, two. One, two, three. Hi guys, I just want to quickly let you know as you're exiting. A few heads up. Downstairs right now there's going to be a meet and greet with Nate. And we're going to be doing that with clients and customers who are interested. So I would recommend before the game starts, and you lose Nate, head on downstairs. And also the gallery is open until eight p.m. with demos and activations. And tomorrow, make sure to come back too. Because we have exciting stuff. I'll be joining you as your host. And we're kicking off at nine a.m. So bye everybody, thank you so much. >> [Announcer] Ladies and gentlemen, thank you for attending this evening's webcast. If you are not attending all cloud and cognitive summit tomorrow, we ask that you recycle your name badge at the registration desk. Thank you. Also, please note there are two exits on the back of the room on either side of the room. Have a good evening. Ladies and gentlemen, the meet and greet will be on stage. Thank you.

Published Date : Nov 1 2017

SUMMARY :

Today the ability to extract value from data is becoming a shared mission. And for all of you during the program, I want to remind you to join that conversation on And when you and I chatted about it. And the scale and complexity of the data that organizations are having to deal with has It's challenging in the world of unmanageable. And they have to find a way. AI. And it's incredible that this buzz word is happening. And to get to an AI future, you have to lay a data foundation today. And four is you got to expand job roles in the organization. First pillar in this you just discussed. And now you get to where we are today. And if you don't have a strategy for how you acquire that and manage it, you're not going And the way I think about that is it's really about moving from static data repositories And we continue with the architecture. So you need a way to federate data across different environments. So we've laid out what you need for driving automation. And so when you think about the real use cases that are driving return on investment today, Let's go ahead and come back to something that you mentioned earlier because it's fascinating And so the new job roles is about how does everybody have data first in their mind? Everybody in the company has to be data literate. So overall, group effort, has to be a common goal, and we all need to be data literate But at the end of the day, it's kind of not an easy task. It's not easy but it's maybe not as big of a shift as you would think. It's interesting to hear you say essentially you need to train everyone though across the And look, if you want to get your hands on code and just dive right in, you go to datascience.ibm.com. And I've heard that the placement behind those jobs, people graduating with the MS is high. Let me get back to something else you touched on earlier because you mentioned that a number They produce a lot of the shows that I'm sure you watch Katie. And this is a good example. So they have to optimize every aspect of their business from marketing campaigns to promotions And so, as we talk to clients we think about how do you start down this path now, even It's analytics first to the data, not the other way around. We as a practice, we say you want to bring data to where the data sits. And a Harvard Business Review even dubbed it the sexiest job of the 21st century. Female preferred, on the cover of Vogue. And how does it change everything? And while it's important to recognize this critical skill set, you can't just limit it And we call it clickers and coders. [Katie] I like that. And there's not a lot of things available today that do that. Because I hear you talking about the data scientists role and how it's critical to success, And my view is if you have the right platform, it enables the organization to collaborate. And every organization needs to think about what are the skills that are critical? Use this as your chance to reinvent IT. And I can tell you even personally being effected by how important the analysis is in working And think about if you don't do something. And now we're going to get to the fun hands on part of our story. And then how do you move analytics closer to your data? And in here I can see that JP Morgan is calling for a US dollar rebound in the second half But then where it gets interesting is you go to the bottom. data, his stock portfolios, and browsing behavior to build a model which can predict his affinity And so, as a financial adviser, you look at this and you say, all right, we know he loves And I want to do that by picking a auto stock which has got negative correlation with Ferrari. Cause you start clicking that and immediately we're getting instant answers of what's happening. And what I see here instantly is that Honda has got a negative correlation with Ferrari, As a financial adviser, you wouldn't think about federating data, machine learning, pretty And drive the machine learning into the appliance. And even score hundreds of customers for their affinities on a daily basis. And then you see when you deploy analytics next to your data, even a financial adviser, And as a data science leader or data scientist, you have a lot of the same concerns. But you guys each have so many unique roles in your business life. And just by looking at the demand of companies that wants us to help them go through this And I think the whole ROI of data is that you can now understand people's relationships Well you can have all the data in the world, and I think it speaks to, if you're not doing And I think that that's one of the things that customers are coming to us for, right? And Nir, this is something you work with a lot. And the companies that are not like that. Tricia, companies have to deal with data behind the firewall and in the new multi cloud And so that's why I think it's really important to understand that when you implement big And how are the clients, how are the users actually interacting with the system? And right now the way I see teams being set up inside companies is that they're creating But in order to actually see all of the RY behind the data, you also have to have a creative That's one of the things that we see a lot. So a lot of the training we do is sort of data engineers. And I think that's a very strong point when it comes to the data analysis side. And that's where you need the human element to come back in and say okay, look, you're And the people who are really great at providing that human intelligence are social scientists. the talent piece is actually the most important crucial hard to get. It may be to take folks internally who have a lot of that domain knowledge that you have And from data scientist to machine learner. And what I explain to them is look, you're still making decisions in the same way. And I mean, just to give you an example, we are partnering with one of the major cloud And what you're talking about with culture is really where I think we're talking about And I think that communication between the technical stakeholders and management You guys made this way too easy. I want to leave you with an opportunity to, anything you want to add to this conversation? I think one thing to conclude is to say that companies that are not data driven is And thank you guys again for joining us. And we're going to turn our attention to how you can deliver on what they're talking about And finally how you could build models anywhere and employ them close to where your data is. And thanks to Siva for taking us through it. You got to break it down for me cause I think we zoom out and see the big picture. And we saw some new capabilities that help companies avoid lock-in, where you can import And as a data scientist, you stop feeling like you're falling behind. We met backstage. And I go to you to talk about sports because-- And what it brings. And the reason being that sports consists of problems that have rules. And I was going to save the baseball question for later. Probably one of the best of all time. FiveThirtyEight has the Dodgers with a 60% chance of winning. So you have two teams that are about equal. It's like the first World Series in I think 56 years or something where you have two 100 And that you can be the best pitcher in the world, but guess what? And when does it ruin the sport? So we can talk at great length about what tools do you then apply when you have those And the reason being that A) he kind of knows how to position himself in the first place. And I imagine they're all different as well. But you really have seen a lot of breakthroughs in the last couple of years. You're known for your work in politics though. What was the most notable thing that came out of any of your predictions? And so, being aware of the limitations to some extent intrinsically in elections when It would be interesting to kind of peek back the curtain, understand how you operate but But you don't want to be inaccurate because that's your credibility. I think on average, speed is a little bit overrated in journalism. And there's got to be more time spent on stories if I can speak subjectively. And so we have people that come in, we hire most of our people actually from journalism. And so the kind of combination of needing, not having that much tolerance for mistakes, Because you do have to hit this balance. And so you try to hire well. And your perspective on that in general. But by the way, one thing that happens when you share your data or you share your thinking And you have a good intuition for hey, this looks a little bit out of line to me. And I think kind of what you learn is like, hey if there's something that bothers me, It's like oh, if I cross the street in .2-- I mean, I'm like-- But no, part of it's like you don't want to waste time on unimportant decisions, right? We want better. It's like both the chicken and the pasta are going to be really darn good, right? Serious and business, how organizations in the last three to five years have just And man, the quality of the interns we get has improved so much in four years. But when you're seeing these big organizations, ESPN as perfect example, moving more towards But the point is that the reason to be out in front of the problem is so you give yourself Final question for you as we run out of time. And so you're parsing through many, many, many lines of code. You can do better than that. You know, we tried to figure out where the best burrito in America was a few years Nate, thank you so much for joining us. I thought we were going to chat World Series, you know. And also the gallery is open until eight p.m. with demos and activations. If you are not attending all cloud and cognitive summit tomorrow, we ask that you recycle your

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Tricia WangPERSON

0.99+

KatiePERSON

0.99+

Katie LinendollPERSON

0.99+

RobPERSON

0.99+

GoogleORGANIZATION

0.99+

JoanePERSON

0.99+

DanielPERSON

0.99+

Michael LiPERSON

0.99+

Nate SilverPERSON

0.99+

AppleORGANIZATION

0.99+

HortonworksORGANIZATION

0.99+

TrumpPERSON

0.99+

NatePERSON

0.99+

HondaORGANIZATION

0.99+

SivaPERSON

0.99+

McKinseyORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

Larry BirdPERSON

0.99+

2017DATE

0.99+

Rob ThomasPERSON

0.99+

MichiganLOCATION

0.99+

YankeesORGANIZATION

0.99+

New YorkLOCATION

0.99+

ClintonPERSON

0.99+

IBMORGANIZATION

0.99+

TescoORGANIZATION

0.99+

MichaelPERSON

0.99+

AmericaLOCATION

0.99+

LeoPERSON

0.99+

four yearsQUANTITY

0.99+

fiveQUANTITY

0.99+

30%QUANTITY

0.99+

AstrosORGANIZATION

0.99+

TrishPERSON

0.99+

Sudden CompassORGANIZATION

0.99+

Leo MessiPERSON

0.99+

two teamsQUANTITY

0.99+

1,000 linesQUANTITY

0.99+

one yearQUANTITY

0.99+

10 investmentsQUANTITY

0.99+

NASDAQORGANIZATION

0.99+

The Signal and the NoiseTITLE

0.99+

TriciaPERSON

0.99+

Nir KalderoPERSON

0.99+

80%QUANTITY

0.99+

BCGORGANIZATION

0.99+

Daniel HernandezPERSON

0.99+

ESPNORGANIZATION

0.99+

H2OORGANIZATION

0.99+

FerrariORGANIZATION

0.99+

last yearDATE

0.99+

18QUANTITY

0.99+

threeQUANTITY

0.99+

Data IncubatorORGANIZATION

0.99+

PatriotsORGANIZATION

0.99+