Search Results for Cameraman:

Bhavesh Patel, Dell Technologies & Shreya Shah, Dell Technologies | SuperComputing 22


 

(upbeat jingle) >> Cameraman: Just look, Mike. >> Good afternoon everyone, and welcome back to Supercomputing. We're live here with theCUBE in Dallas. I'm joined by my cohost, David. Wonderful to be sharing the afternoon with you. And we are going to be kicking things off with a very thrilling discussion from two important thought leaders at Dell. Bhavesh and Shreya, thank you so much for being on the show. Welcome. How you doing? How does it feel to be at Supercomputing? >> Pretty good. We really enjoying the show and enjoying a lot of customer conversations ongoing. >> Yeah. Are most of your customers here? >> Yes. Most of the customers are, mostly in the Hyatt over there and a lot of discussions ongoing. >> Yeah. Must be nice to see everybody show off. Are you enjoying the show so far, Shreya? >> Yeah, I missed this for two years and so it's nice to be back and meeting people in person. >> Yeah, definitely. We all missed it. So, it's been a very exciting week for Dell. Do you want to talk about what you're most excited about in the announcement portfolio that we saw yesterday? >> Absolutely. >> Go for it, Shreya. >> Yeah, so, you know, before we get into the portfolio side of the house, you know, we really wanted to, kind of, share our thoughts, in terms of, you know, what is it that's, kind of, moving HPC and supercomputing, you know, for a long time- >> Stock trends >> For a long time HPC and supercomputing has been driven by packing the racks, you know, maximizing the performance. And as the work that Bhavesh and I have been doing over the last, you know, couple of generations, we're seeing an emerging trend and that is the thermal dissipated power is actually exploding. And so the idea of packing the racks is now turning into, how do you maximize your performance, but are able to deliver the infrastructure in that limited kilowatts per rack that you have in your data center. >> So I, it's been interesting walking around the show seeing how many businesses associated with cooling- >> Savannah: So many. >> are here. And it's funny to see, they open up the cabinet, and it's almost 19th-century-looking technology. It's pipes and pumps and- >> Savannah: And very industrial-like. >> Yeah, very, very industrial-looking. Yeah, and I think, so that's where the, the trends are more in the power and cooling. That is what everybody is trying to solve from an industry perspective. And what we did when we looked at our portfolio, what we want to bring up in this timeframe for targeting more the HPC and AI space. There are a couple of vectors we had to look at. We had to look at cooling, we had to look at power where the trends are happening. We had to look at, what are the data center needs showing up, be it in the cooler space, be it in the HPC space, be it in the large install happening out there. So, looking at those trends and then factoring in, how do you build a node out? We said, okay, we need to diversify and build out an infrastructure. And that's what me and Shreya looked into, not only looking at the silicon diversity showing up, but more looking at, okay, there is this power, there is this cooling, there is silicon diversity. Now, how do you start packing it up and bringing it to the marketplace? So, kind of, those are some of the trends that we captured. And that's what you see, kind of, in the exhibit floor today, even. >> And Dell technology supports both, liquid cooling, air cooling. Do you have a preference? Is it more just a customer-based? 
>> It is going to be, and Shreya can allude to it, it's more workload and application-focused. That is what we want to be thinking about. And it's not going to be siloed into, okay, is we going to be just targeting air-cooling, we wanted to target a breadth between air to liquid. And that's how we built into our portfolio when we looked at our GPUs. >> To add to that, if we look at our customer landscape, we see that there's a peak between 35 to 45 kilowatts per rack. We see another peak at 60, we see another peak at 80, and we've got selects, you know, very specialized customers above hundred kilowatts per rack. And so, if we take that 35 to 45 kilowatts per rack, you know, you can pack maybe three or four of these chassis, right? And so, to what Bhavesh is saying, we're really trying to provide the flexibility for what our customers can deliver in their data centers. Whether it be at the 35 end where air cooling may make complete sense. As you get above 45 and above, maybe that's the time to pivot to a liquid-cool solution. >> So, you said that there, so there are situations where you could have 90 kilowatts being consumed by a rack of equipment. So, I live in California where we are very, very closely attuned to things like the price for a kilowatt hour of electricity. >> Seriously. >> And I'm kind of an electric car nerd, so, for the folks who really aren't as attuned, 90 kilowatts, that's like over a hundred horsepower. So, think about a hundred horsepower worth of energy being used for compute in one of these racks. It's insane. So, we, you can kind of imagine a layperson can kind of imagine the variables that go into this equation of, you know, how do we, how do we bring the power and get the maximum bang for, per kilowatt hour. But, are there any, are there any kind of interesting odd twists in your equations that you find when you're trying to figure out. Do you have a- >> Yeah, and we, a lot of these trends when we look at it, okay, it's not, we think about it more from a power density that we want to try to go and solve. We are mindful about all the, from an energy perspective where the energy prices are moving. So, what we do is we try to be optimizing right at the node level and how we going to do our liquid-cooling and air cooled infrastructure. So, it's trying to, how do you keep a balance with it? That's what we are thinking about. And thinking about it is not just delivering or consuming the power that is maybe not needed for that particular node itself. So, that's what we are thinking about. The other way we optimize when we built this infrastructure out is we are thinking about, okay, how are we go going to deliver it at the rack level and more keeping in mind as to how this liquid-cooling plumbing will happen. Where is it coming into the data center? Is it coming in the bottom of the floor? Are we going to do it on the left hand side of your rack or the right hand side? It's a big thing. It's like it becomes, okay, yeah, it doesn't matter which side you put it on, but there is a piece of it going into our decision as to how we are going to build that, no doubt. So, there are multiple factors coming in and besides the power and cooling, which we all touched upon, But, Shreya and me also look at is where this whole GPU and accelerators are moving into. So, we're not just looking at the current set of GPUs and where they're moving from a power perspective. We are looking at this whole silicon diversity that is happening out there. 
So, we've been looking at multiple accelerators. There are multiple companies out there and we can tell you there'll be over three 30 to 50 silicon companies out there that we are actively engaged and looking into. So, our decision in building this particular portfolio out was being mindful about what the maturity curve is from a software point of view. From a hardware point of view and what can we deliver, what the customer really needs in it, yeah. >> It's a balancing act, yeah. >> Bhavesh: It is a balancing act. >> Let's, let's stay in that zone a little bit. What other trends, Shreya, let's go to you on this one. What other trends are you seeing in the acceleration landscape? >> Yeah, I think you know, to your point, the balancing act is actually a very interesting paradigm. One of the things that Bhavesh and I constantly think about, and we call it the Goldilocks syndrome, which is, you know, at that 90 and and a hundred, right? Density matters. >> Savannah: A lot. >> But, what we've done is we have really figured out what that optimal point is, 'cause we don't want to be the thinnest most possible. You lose a lot of power redundancy, you lose a lot of I/O capability, you lose a lot of storage capability. And so, from our portfolio perspective, we've really tried to think about the Goldilocks syndrome and where that sweet spot is. >> I love that. I love the thought of you all just standing around server racks, having a little bit of porridge and determining >> the porridge. Exactly the thickness that you want in terms of the density trade off there. Yeah, that's, I love that, though. I mean it's very digestible. Are you seeing anything else? >> No, I think that's pretty much, Shreya summed it up and we think about what we are thinking about, where the technology features are moving and what we are thinking, in terms of our portfolio, so it is, yeah. >> So, just a lesson, you know, Shreya, a lesson for us, a rudimentary lesson. You put power into a CPU or a GPU and you're getting something out and a lot of what we get out is heat. Is there a measure, is there an objective measure of efficiency in these devices that we look at? Because you could think of a 100 watt light bulb, an incandescent light bulb is going to give out a certain amount of light and a certain amount of heat. A 100 watt equivalent led, in terms of the lumens that it's putting out, in terms of light, a lot more light for the power going in, a lot less heat. We have led lights around us, thankfully, instead of incandescent lights. >> Savannah: Otherwise we would be melting. >> But, what is, when you put power into a CPU or a GPU, how do you measure that efficiency? 'Cause it's sort of funny, 'cause it's like, it's not moving, so it's not like measuring, putting power into a vehicle and measuring forward motion and heat. You're measuring this, sort of, esoteric thing, this processing thing that you can't see or touch. But, I mean, how much per watt of power, how do you, how do you measure it I guess? Help us out, from the base up understanding, 'cause people generally, most people have never been in a data center before. Maybe they've put their hand behind the fan in a personal computer or they've had a laptop feel warm on their lap. But, we're talking about massive amounts of heat being generated. Can you, kind of, explain the fundamentals of that? >> So, the way we think about it is, you know, there's a performance per dollar metric. 
There's a performance per dollar per watt metric and that's where the power kind of comes in. But, on the flip side, we have something called PUE, power utilization efficiency from a data center aspect. And so, we try to marry up those concepts together and really try to find that sweet spot. >> Is there anything in the way of harvesting that heat to do other worthwhile work, I mean? >> Yes. >> You know, it's like, hey, everybody that works in the data center, you all have your own personal shower now, water heated. >> Recirculating, too. >> Courtesy of Intel AMD. >> Or a heated swimming pool. >> Right, a heated swimming pool. >> I like the pool. >> So, that's the circulation of, or recycling of that thermal heat that you're talking about, absolutely. And we see that our customers in the, you know, in the Europe region, actually a lot more advanced in terms of taking that power and doing something that's valuable with it, right? >> Cooking croissant and, and making lattes, probably right? >> (laughing) Or heating your home. >> Makes me want to go on >> vacation, a pool, croissants. >> That would be a good use. But, do you, it's more on the PUE aspect of it. It's more thinking about how are we more energy efficient in our design, even, so we are more thinking about what's the best efficiency we can get, but what's the amount of heat capture we can get? Are we just kind of wasting any heat out there? So, that's always the goal when designing these particular platforms, so that's something that we had kept in mind with a lot of our power and cooling experts within Dell. When thinking about, okay, is it, how much can we get, can we capture? If we are not capturing anything, then what are we, kind of, recirculating it back in order to get much better efficiency when we think about it at a rack level and for the other equipment which is going to be purely air-cooled out there and what can we do about it, so. >> Do you think both of these technologies are going to continue to work in tandem, air cooling and liquid cooling? Yeah, so we're not going to see- >> Yeah, we don't, kind of, when we think about our portfolio and what we see the trends moving in the future, I think so, air-cooling is definitely going to be there. There'll be a huge amount of usage for customers looking into air-cooling. Air-cooling is not going to go away. Liquid-cooling is definitely something that a lot of customers are looking into adopting. PUE become the bigger factor for it. How much can I heat capture with it? That's a bigger equation that is coming into the picture. And that's where we said, okay, we have a transition happening. And that's what you see in our portfolio now. >> Yeah, Intel is, Intel, excuse me, Dell is agnostic when it comes to things like Intel, AMD, Broadcom, Nvidia. So, you can look at this landscape and I think make a, you know, make a fair judgment. When we talk about GPU versus CPU, in terms of efficiency, do you see that as something that will live on into the future for some applications? Meaning look, GPU is the answer or is it simply a question of leveraging what we think of as CPU cores differently? Is this going to be, is this going to ebb and flow back and forth? Shreya, are things going to change? 'Cause right now, a lot of what's announced recently, in the high performance computer area, leverages GPUs. But, we're right in the season of AMD and Intel coming out with NextGen processor architectures. >> Savannah: Great point. >> Shreya: Yeah >> Any thoughts? 
>> Yeah, so what I'll tell you is that it is all application dependent. If you rewind, you know, a couple of generations you'll see that the journey for GPU just started, right? And so there is an ROI, a minimum threshold ROI that customers have to realize in order to move their workloads from CPU-based to GPU-based. As the technology evolves and matures, you'll have more and more applications that will fit within that bucket. Does that mean that everything will fit in that bucket? I don't believe so, but as, you know, the technology will continue to mature on the CPU side, but also on the GPU side. And so, depending on where the customer is in their journey, it's the same for air versus liquid. Liquid is not an if, but it's a when. And when the environment, the data center environment is ready to support that, and when you have that ROI that goes with it is when it makes sense to transition to one way or the other. >> That's awesome. All right, last question for you both in a succinct phrase, if possible, I won't character count. What do you hope that we get to talk about next year when we have you back on theCUBE? Shreya, we'll start with you. >> Ooh, that's a good one. I'm going to let Bhavesh go first. >> Savannah: Go for it. >> (laughs) >> What do you think, Bhavesh? Next year, I think so, what you'll see more, because I'm in the CTI group, more talking about where cache coherency is moving. So, that's what, I'll just leave it at that and we'll talk about it more. >> Savannah: All right. >> Dave: Tantalizing. >> I was going to say, a little window in there, yeah. And I think, to kind of add to that, I'm excited to see what the future holds with CPUs, GPUs, smart NICs and the integration of these technologies and where that all is headed and how that helps ultimately, you know, our customers being able to solve these really, really large and complex problems. >> The problems our globe faces. Wow, well it was absolutely fantastic to have you both on the show. Time just flew. David, wonderful questions, as always. Thank you all for tuning in to theCUBE. Here live from Dallas where we are broadcasting all about supercomputing, high-performance computing, and everything that a hardware nerd, like I, loves. My name is Savannah Peterson. We'll see you again soon. (upbeat jingle)
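
To make the efficiency metrics from the conversation above concrete, here is a minimal sketch. The 35-45 kW per-rack peak and the roughly 45 kW air-versus-liquid pivot echo Shreya's figures; every other number, and the code itself, is a hypothetical illustration rather than anything Dell ships.

```python
# Hypothetical illustration of the metrics discussed in the conversation above.
# PUE ("power utilization efficiency" in the interview; commonly called power
# usage effectiveness) is total facility power divided by IT equipment power.
# None of the numbers below come from Dell.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data-center efficiency ratio; 1.0 would mean zero cooling/overhead power."""
    return total_facility_kw / it_equipment_kw

def perf_per_watt(benchmark_score: float, node_watts: float) -> float:
    """The per-watt side of the performance-per-dollar-per-watt metric."""
    return benchmark_score / node_watts

def suggested_cooling(rack_kw: float) -> str:
    """Rule of thumb from the interview: air up to roughly 45 kW per rack, liquid above."""
    return "air" if rack_kw <= 45 else "liquid"

if __name__ == "__main__":
    rack_kw = 38.0                                  # inside the 35-45 kW peak cited
    print(suggested_cooling(rack_kw))               # -> air
    print(round(pue(52.0, rack_kw), 2))             # e.g. 52 kW facility draw -> PUE 1.37
    print(round(perf_per_watt(1.8e6, 38_000), 3))   # arbitrary benchmark score per watt
```

In practice the thresholds vary by facility, but the point of the conversation stands: rack density, cooling choice, and efficiency metrics such as PUE have to be evaluated together rather than optimized in isolation.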

Published Date : Nov 15 2022

SUMMARY :

At Supercomputing 2022 in Dallas, Dell's Bhavesh Patel and Shreya Shah tell theCUBE how exploding thermal and power density is reshaping HPC and AI infrastructure design. Customer racks cluster around 35-45, 60, and 80-plus kilowatts, so Dell builds its portfolio to span air and liquid cooling, balance density against power redundancy, I/O, and storage (the "Goldilocks" sweet spot), and track silicon diversity across the 30-50 accelerator vendors it is engaged with. The conversation also covers performance-per-dollar-per-watt and PUE metrics, heat capture and reuse, and why the air-versus-liquid and CPU-versus-GPU choices ultimately come down to workload and ROI.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Shreya | PERSON | 0.99+
David | PERSON | 0.99+
Savannah | PERSON | 0.99+
Savannah Peterson | PERSON | 0.99+
Nvidia | ORGANIZATION | 0.99+
California | LOCATION | 0.99+
Dave | PERSON | 0.99+
100 watt | QUANTITY | 0.99+
two years | QUANTITY | 0.99+
35 | QUANTITY | 0.99+
Dell | ORGANIZATION | 0.99+
Shreya Shah | PERSON | 0.99+
Dallas | LOCATION | 0.99+
AMD | ORGANIZATION | 0.99+
Europe | LOCATION | 0.99+
60 | QUANTITY | 0.99+
Intel | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
Bhavesh | PERSON | 0.99+
Broadcom | ORGANIZATION | 0.99+
80 | QUANTITY | 0.99+
90 kilowatts | QUANTITY | 0.99+
next year | DATE | 0.99+
Bhavesh Patel | PERSON | 0.99+
Next year | DATE | 0.99+
Mike | PERSON | 0.99+
90 | QUANTITY | 0.99+
yesterday | DATE | 0.99+
four | QUANTITY | 0.99+
45 kilowatts | QUANTITY | 0.98+
Dell Technologies | ORGANIZATION | 0.98+
both | QUANTITY | 0.98+
two important thought leaders | QUANTITY | 0.98+
over a hundred horsepower | QUANTITY | 0.97+
first | QUANTITY | 0.97+
Goldilocks | OTHER | 0.96+
Supercomputing | ORGANIZATION | 0.96+
today | DATE | 0.96+
theCUBE | ORGANIZATION | 0.93+
CTI | ORGANIZATION | 0.92+
One | QUANTITY | 0.91+
50 silicon | QUANTITY | 0.9+
one way | QUANTITY | 0.89+
19th-century | DATE | 0.83+
a hundred | QUANTITY | 0.78+
above | QUANTITY | 0.77+
couple | QUANTITY | 0.76+
Cameraman | PERSON | 0.74+
over three 30 | QUANTITY | 0.74+
Hyatt | LOCATION | 0.73+
one of these | QUANTITY | 0.68+
a hundred horsepower | QUANTITY | 0.68+
hundred kilowatts per | QUANTITY | 0.67+
above 45 | QUANTITY | 0.6+

Mike Gilfix, IBM | AWS re:Invent 2020 Partner Network Day


 

>> Reporter: From around the globe. It's theCUBE with digital coverage of AWS re:Invent 2020. Special coverage sponsored by AWS global partner network. >> Hello, and welcome to theCUBE virtual and our coverage of AWS re:Invent 2020 and our special coverage of APN partner experience. We are theCUBE virtual and I'm your host, Justin Warren. And today I'm joined by Mike Gilfix who is the Chief Product Officer for IBM Cloud Paks. Mike, welcome to theCUBE. >> Thank you. Thanks for having me. Now, Cloud Paks is a new thing from IBM. I'm not particularly familiar with it, but it's related to IBM's partnership with AWS. So maybe you could just start us off quickly by explaining what is Cloud Paks and what's your role as Chief Product Officer there? >> Well, Cloud Paks is sort of our next generation platform. What we've been doing is bringing the power of IBM software really across the board and bringing it to a hybrid cloud environment. So making it really easy for our customers to consume it wherever they want, however, they want to choose to do it with a consistent skillset and making it really easy to kind of get those things up and running and deliver value quickly. And this is part of IBM's hybrid approach. So what we've seen is organizations that can leverage the same skillset and, you know basically take those workloads make them run where they need to yields about a two and a half times ROI and Caltech sit at the center of that running on the OpenShift platform. So they get consistent security, skills and powerful software to run their business running everywhere. And we've been partnering with AWS because we want to make sure that those customers that have made that choice, can get access to those capabilities easy and as fast as possible. >> Right. And the Cloud Paks and Built On the Red Hat open. Now, let me get this right. It's the open hybrid cloud platform. So is that OpenShift? >> It is OpenShift, yes. I mean IBM is incredibly committed to open software and OpenShift does provide that common layer. And the reason that's important is you want consistent security. You want to avoid lock-in, right? That gives you a very powerful platform, (indistinct) if you will, they can truly run anywhere with any workload. And we've been working very closely with AWS to make sure that is a premiere first-class experience on AWS. >> Yes so the OpenShift on AWS is relatively new from IBM. So could you explain what is OpenShift on AWS and how does that differ from the OpenShift that people may be already familiar with? Well, the kernel, if you will, is the same it's the same sort of central open source software but in working closely with AWS we're now making those things available as simple services that you can quickly provision and run. And that makes it really easy for people to get started, but again sort of carrying forward that same sort of skill sets. So that's kind of a key way in which we see that you can gain that sort of consistency, you know, no matter where you're running that workload. And we've been investing in that integration working closely with them, Amazon. >> Yeah, and we all know Red Hat's commitment to open source software in the open ecosystems. Red hat is rightly famous for it. And I am old enough to remember when it was a brand new thing, particularly in enterprise to allow open source to come in and have anything to do with workloads. And now it's all the rage and people are running quite critical workloads on it. 
So what are you seeing in the adoption within the enterprise of open software? >> The adoption is massive. I think, well first let me describe what's driving it. I mean, people want to tap into innovation and the beauty of open source is you're kind of crowdsourcing if you will, this massive community of developers that are creating just an incredible amount of innovation at incredible speed. And it's a great way to ensure that you avoid vendor lock-in. So enterprises of all types are looking to open solutions as a way, both of innovating faster and getting protection. And that commitment, is something certainly Red Hat has tapped into. It's behind the great success of Red Hat. And it's something that frankly is permeating throughout IBM in that we're very committed to driving this sort of open approach. And that means that, you know, we need to ensure that people can get access to the innovation they need, run it where they want and ensure that they feel that they have choice. >> And the choice I think is a key part of it that isn't really coming through in some of the narrative. There's a lot of discussion about how you should actually pick, should you go cloud? I remember when it was either you should stay on-site or should you go to cloud? And we had a long discussion there. Hybrid cloud really does seem to have come of age where it's a realistic kind of compromise is probably the wrong word, but it's a trade off between doing all the one thing or all another. And for most enterprises, that doesn't actually seem to be the choice that's actually viable for them. So hybrid seems like it's actually just the practical approach. Would that be accurate? >> Well our studies have shown that if you look statistically at the set of workload that's moved to cloud, you know something like 20% of workloads have only moved to cloud meaning the other 80% is experiencing barriers to move. And some of those barriers is figuring out what to do with all this data that's sitting on-prem or you know, these applications that have years and years of intelligence baked into them that can not easily be ported. And so organizations are looking at the hybrid approaches because they give them more choice. It helps them deal with fragmentation. Meaning as I move more workload, I have consistent skillset. It helps me extend my existing investments and bring it into the cloud world. And all those things again are done with consistent security. That's really important, right? Organizations need to make sure they're protecting their assets, their data throughout, you know leveraging a consistent platform. So that's really the benefit of the hybrid approach. It essentially is going to enable these organizations to unlock more workload and gain the acceleration and the transformative effect of cloud. And that's why it's becoming a necessity, right? Because they just can't get that 80% to move yet. >> Yeah and I've long said that the cloud is a state of mind rather than a particular location. It's more about an operational model of how you do things. So hearing that we've only got 20% of workloads have moved to this new way of doing things does rather suggest that there's a lot more work to be done. What, for those organizations that are just looking to do this now or they've done a bit of it and they're looking for those next new workloads, where do you see customers struggling the most and where do you think that IBM can help them there? >> Well,(indistinct) where are they struggling the most? First I think skills. 
I mean, they have to figure out a new set of technologies to go and transition from this old world to the new and at the heart of that is lots of really critical debates. Like how do they modernize the way that they do software delivery for many enterprises, right? Embrace new ways of doing software delivery. How do they deal with the data issues that arise from where the data sits, their obligations for data protection, what happens if the data spans multiple different places but you have to provide high quality performance and security. These are all parts of issues that, you know, span different environments. And so they have to figure out how to manage those kinds of things and make it work in one place. I think the benefit of partnering, you know, with Amazon is, clearly there's a huge customer base that's interested in Amazon. I think the benefit of the IBM partnership is, you know, we can help to go and unlock some of those new workloads and find ways to get that cloud benefit and help to move them to the cloud faster again with that consistency of experience. And that's why I think it's a good match partnership where we're giving more customers choice. We're helping them to unlock innovation substantially faster. >> Right. And so for people who might want to just get started without it, how would they approach this? People might have some experience with AWS, it's almost difficult not to these days, but for those who aren't familiar with the Red Hat on AWS with OpenShift on AWS, how would they get started with you to explore what's possible? >> Well, one of the things that we're offering to our clients is a service that we refer to as IBM garage. It's, you know, an engagement model if you will, within IBM, where we work with our clients and we really help them to do co-creation so help to understand their business problem or, you know, the target state of where they want their IT to get to. And in working with them in co-creation, you know, we help them to affect that transition. Let's say that it's about delivering business applications faster. Let's say it's about modernizing the applications they have or offering new services, new business models, again all in the spirit of co-creation. And we found that to be really popular. It's a great way to get started. We've leveraged design thinking and approach. They can think about the customer experience and their outcome. If they're creating new business processes, new applications, and then really help them to uplift their skills and, you know, get ready to adopt cloud technology and everything that they do. >> It sounds like this is a lot of established workloads that people already have in their organizations. It's already there, it's generating real money. It's not those experimental workloads that we saw early on which was a, well let's try this. Cloud is a fabulous way where we can run some experiments. And if it doesn't work, we just turn it off again. These sound like a lot more workloads are kind of more important to the business. Is that be true? >> Yeah. I think that's true. Now I wouldn't say they're just existing workloads because I think there's lots of new business innovation that many of our, you know, clients want to go and launch. And so this gives them an opportunity to do that new innovation, but not forget the past meaning they can bring it forward and bring it forward into an integrated experience. I mean, that's what everyone demands of a true digital business, right? 
They expect that your experience is integrated, that it's responsive, that it's targeted and personalized. And the only way to do that is to allow for experimentation that integrates in with the, you know, standard business processes and things that you did before. And so you need to be able to connect those things together seamlessly. >> Right. So it sounds like it's a transition more than creating new thing completely from scratch. It's well, look, we've done a lot of innovation over the past decade or so in cloud, we know what works but we still have workloads that people clearly know and value. How do we put those things together and do it in such a way that we maintain the flexibility to be able to make new changes as we learn new things. >> Yeah, leverage what you've got play to your strengths. I mean that's how you create speed. If you have to reinvent the wheel every time it's going to be a slow roll. >> Yeah and that does seem like an area where an organization probably at this point should be looking to partner with other people who have done the hard yards. They've already figured this out. Well, as you say, why can't we make all of these obvious areas yourself when you're starting from scratch, when there's a wealth of experience out there and particularly this whole ecosystem that exists around the open software? In fact maybe you could tell us a little bit about the ecosystem opportunities that are there because Red Hat has been part of this for a very long time. AWS has a very broad ecosystem as we're all familiar with being here at re:Invent yet again. How does that ecosystem play into what's possible? >> Well, let me explain why I think IBM brings a different dimension to that trio, right? IBM brings deep industry expertise. I mean, we've long worked with all of our clients, our partners on solving some of their biggest business problems and being embedded in the thing that they do. So we have deep knowledge of their enterprise challenges, deep knowledge of their business processes. deep knowledge of their business processes. We are able to bring that industry know how mixed with, you know, Red Hat's approach to an open foundational platform, coupled with, you know, the great infrastructure you can get from Amazon and, you know, that's a great sort of powerful combination that we can bring to each of our clients. And, you know, maybe just to bring it back a little bit to that idea, okay so what's the role in Cloud Paks in that? I mean, Cloud Paks are the kind of software that we've built to enable enterprises to run their essential business processes, right? In the central digital operations that they run everything from security to protecting their data or giving them powerful data tools to implement AI and you know, to implement AI algorithms in the heart of their business or giving them powerful automation capabilities so they can digitize their operations. And also we make sure those things are going to run effectively. It's those kinds of capabilities that we're bringing in the form of Cloud Paks think of that as that substrate that runs a digital business that now can be brought through right? Running on AWS infrastructure through this integration that we've done. >> Right. So basically taking things as a pre-packaged module that we can just grab that module drop it in and start using it rather than having to build it ourselves from scratch. >> That's right. And they can leverage those powerful capabilities and get focused on innovating the things that matter, right? 
So the huge accelerant to getting business value. >> And it does sound a lot easier than trying to learn how to do the complex sort of deep learning and linear algorithms that they're involved in machine learning. I have looked into it a bit and trying to manage that sort of deep masses. I think I'd much rather just grab one off the shelf plug it in and just use it. >> Yeah. It's also better than writing assembler code which was some of my first programming experiences as well. So I think the software industry has moved on just a little bit since then. (chuckles) >> I think we have is that I do not miss the days of handwriting assembly at all. Sometimes for this (indistinct) reasons. But if we want to get things done, I think I'd much rather work in something a little higher level. (Mike laughing) So thank you very much for joining me. My guest Mike Gilfix there from IBM, sorry, from IBM cloud. And this has been, sorry, go ahead. We'll cut that. Can we cut and reedit this outro? >> Cameraman: Yeah, you guys can or you can just go ahead and just start over again. >> I'll just do, I'll just do the outro. Try it again. >> Cameraman: Yeah, sounds good. >> So thank you so much for my guests there Mike Gilfix, Chief Product Officer for IBM Cloud Paks from IBM. This has been theCUBES coverage of AWS re:Invent 2020 and the APN partner experience. I've been your host, Justin Warren, make sure you come back and join us for more coverage later on.

Published Date : Nov 28 2020

SUMMARY :

Justin Warren talks with Mike Gilfix, Chief Product Officer for IBM Cloud Paks, during AWS re:Invent 2020 Partner Network Day coverage. Cloud Paks package IBM software on Red Hat OpenShift so customers get a consistent, open, hybrid platform, and the AWS partnership makes OpenShift available as easily provisioned services on AWS. Gilfix notes that only about 20% of workloads have moved to public cloud, with data gravity, skills, and security holding back the rest, and describes how IBM Garage co-creation, open source commitment, and deep industry expertise help customers modernize the remaining 80%.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Justin Warren | PERSON | 0.99+
Justin Warren | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
Mike Gilfix | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
IBM | ORGANIZATION | 0.99+
20% | QUANTITY | 0.99+
80% | QUANTITY | 0.99+
Caltech | ORGANIZATION | 0.99+
Mike | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
OpenShift | TITLE | 0.99+
each | QUANTITY | 0.99+
First | QUANTITY | 0.98+
Cloud Paks | TITLE | 0.98+
first | QUANTITY | 0.98+
Cameraman | PERSON | 0.97+
Red hat | TITLE | 0.96+
both | QUANTITY | 0.96+
today | DATE | 0.95+
one place | QUANTITY | 0.95+
APN | ORGANIZATION | 0.95+
one | QUANTITY | 0.93+
Red Hat | TITLE | 0.91+
theCUBE | ORGANIZATION | 0.91+
Invent 2020 Partner Network Day | EVENT | 0.88+
past | DATE | 0.85+
2020 | TITLE | 0.82+
about a two and a half times | QUANTITY | 0.81+
first programming experiences | QUANTITY | 0.77+
re: | EVENT | 0.69+
IBM cloud | ORGANIZATION | 0.67+
re:Invent 2020 | EVENT | 0.65+
years | QUANTITY | 0.64+
AWS | EVENT | 0.61+
first-class | QUANTITY | 0.6+
trio | QUANTITY | 0.56+

Chris Kaddaras, Nutanix & Phil Davis, Hewlett Packard Enterprise | Nutanix .NEXT Conference 2019


 

>> Narrator: Live from Anaheim, California, it's The CUBE covering Nutanix .NEXT 2019. Brought to you by Nutanix. >> Cameraman: Izzy! >> Welcome back, everyone, to The CUBES's live coverage of Nutanix .NEXT here in Anaheim, California. I'm your host, Rebecca Knight, along with my co-host, John Furrier. We have two guests for this segment, we have Phil Davis, he is the president of Hybrid IT Hewlett Packard Entrerprise. Thanks so much for coming on The CUBE, Phil? >> Great to be here. >> And we have Chris Kaddaras, he is the SVP America's Nutanix. Thank you so much, Chris. >> Right, thanks for having me. >> So, two weeks, this partnership between Nutanix and HPE, two weeks old, newly announced. Chris, I wanna ask you, explain to our viewers a little bit about it and how it came about. What is the partnership? >> Sure, now I think the way the partnership came about was really around customer and partner demand, right? The marketplace was really looking for two great companies to get together and provide a solution for what they wanted to kind of cure their problems. The two components of the partnership effectively is, one component is the Nutanix sales teams are gonna be selling their Nutanix solutions and appliances with a great HPE computing infrastructure involved in that appliance. So, that's the first big group part, and I'll let Phil talk about the second part of the relationship. >> Yeah, and the second part is really around how do we enable a consumption model for our customers? I mean, if you think about what's going on with the public cloud, customers wanna be able to scale up or scale down and kind of pay as they go. And so, HPE has been leading with an offering we call Green Lake. It's a couple-billion-dollar business growing over 50% a year, so it kind of shows you the interest in it, and we also, therefore, offer the Nutanix solution on our infrastructure and then wrap that with a consumption model service that allows customers that flexibility. So, those are the two elements of the partnership. >> So, you're selling Nutanix with your Green Lake. >> Embedded in the Green Lake offering, that's correct. >> And Nutanix has selling Compute with their sales worth. >> Phil: Exactly right. >> Chris: Yeah, so with our DX solution, yeah with HPE Compute. >> Got it. Now, you guys have indirect and direct sales, both sides, channel play, is it a channel partnership or both, can you just explain the go-to market? >> Yeah, and I think that what you'll see is there's just a lot of alignment, a lot of synergy. Both companies are very, very channel friendly. I mean, HPE's a 75 plus year old company and our very first sale as a company went through the channel, right? So, our whole DNA is wired towards the channel. Over 70% of our business goes through the channel. So, what we've really made sure is that we make this very, very easy for the channel to consume and also, be paid and compensated on. So, it flows through all the standard HPE channel compensation and programs that we have in play. So, absolutely, very friendly for the channel. >> Yeah, and I think this will work really well for both channel communities that we have. We have a lot of Nutanix channel partners that have not been, for whatever reason, have not been selling HPE and now, they have a perfect opportunity to sell HPE Compute platforms with our DX appliance. We also have a lot of great channel partners who want a better consumption model where customers are looking to flex up and down. 
We have not been able to provide that for Nutanix software solutions. So, to adopt Green Lake for some of these partners will be a fantastic offering for their customers. >> Maybe just a dove-tail on that comment, one of the things we've worked really hard in the last year is to make Green Lake more channel friendly. Channel reps tend to get paid as the margin comes in. So, if you spread that out over time, they don't make the same money. So, we've changed the rebate 17% up front for the channel partners, we've simplified the offering, we made it quicker, so we're doing a lot to make Green Lake much easier for our channel partners and a lot of excitement about being able to offer Nutanix with Green Lake as well. >> What's the timing on the channel rollout? Is it rolling out now? Is it instantly growing out? Is there timing on-- >> Phil: Instantly. >> Instantly? >> So, we've already briefed the channel, we are making it available, we're providing all the quotes, we have a ton of material available online through our online portals and tools for the channel partners, we have FAQs, we have marketing materials, we have, actually, letters already built up for the channel. So, it's now. >> So, I gotta ask the hard question here because I think one of the things I see that's really awesome is the channel's gonna love this because Nutanix has a channel generated opportunity. Their challenge in that opportunity is when they do a POC, they usually win the business. That's kind of a direct sales model that's favored Nutanix for their success. This is gonna bring a lot of mojo to the channel bringing HPE and Nutanix together for this unique solution. I'm sure the reaction's been positive. Are they seeing an up-step in more POCs and more action with customers? >> Phil: You wanna take that? >> Yeah, we're seeing a lot, actually. So, I was just there actually reviewing my team yesterday. We have a list of now starting to get towards 100 customers that we think we can align with together, right? And multiple go to markets. We have Green Lake opportunities, we have DX opportunities, which is Nutanix on HPE. We also have a lot of opportunities around Nutanix software only on HPE Compute that a lot of customers wanna consume as well in a different way. So, we're seeing that really start to scale. We haven't done the first POC of DX because it hasn't released to the market yet, right? We are doing POCs on software only on HPE servers, but the DX solution will be releasing in the next few months. So Phil, I know the HPE channel pretty well and they love services, wrapping services around an offering. Can you talk about how this impacts from the services side because I gotta be looking at my chops if I'm a dealer partner because I can bring this new solution in and I can wrap cloud-like capabilities around it. >> Yeah, and you look at a lot of our partners, the hardware-only business is getting pressure. And so, a lot of our partners are doing exactly what you just described. They're trying to move more and more into services. And you're right, there's a whole sweep of services the partners can wrap around this. Everything from advisory, upfront, because all of these workloads run on some sort of legacy environment. So, when they do bring in a hyperconverged, they need to move the workloads. So partners can help with that, supporting maintenance, implementation, all the way through to kind of day-to-day break fix. So, there's a range on services. Obviously, HPE has a pretty big services capability. 
We make those available through our channel partner as well, so if they wanna sell to HPE services they can do that, or if they wanna deliver 'em themselves, they can do that as well. >> I wanna ask you about the customers. You made this point on main stage that you, sort of, likened back to the Henry Ford quote where you can have any color, as long as it's black and the current marketplace was anything you want as long as it's in my stack, and this is how we're gonna do it. So, giving them more choice, more flexibility, what are you hearing so far? What was the problem in terms of their workload and why things were stiffeled or stunted, and now what do you hope this is going to do? >> Well, as I mentioned on main stage, everybody wants to make it easy to get on to their stack and really, really hard to move off of their stack, right? Whether you're a public cloud company, you want all your microservices, you want all the data trapped there, so it's not easy to move and some of our joint competitors are actually trying to lock you into the complete top-down stack. So, the feedback, so far, from customers and partners has been very, very, very positive because one of the things, I've been in the industry 29 years. One of the things that I can tell you is no one company is gonna out-innovate the entire industry. And so, what customers want is to be able to pick and choose the solutions that best meet their needs. And that's really what this partnership, I think, really embodies is the ability to give customers choice at multiple levels within that stack. Choice in the public cloud, choice on prem, choice of hypervisors, and that's really resonating. >> Yeah, and that's really Nutanix's design point, right? Is around choice, right? Choice at every level of a stack that you can have. And this provides us with the biggest choice in the marketplace at this point and time that was missing from our portfolio. The other piece that you mentioned that I'd like to point out is that the thing that a lot of people haven't been talking about is the services component. You know, Nutanix is a great company, we've grown a lot. But one place that we haven't grown to an extent is in the services side. We have a small services organization that really helps our customers, but we really need a services organization that can help our customers transform. And help our customers through a transformation of their underlying infrastructure and reduce the risk of change. And this HPE relationship will help us do that as well. >> And the other thing, too, that's interesting with Cloud and you guys are in the middle of demodernizing the data center, HPE's been there forever in the data center, is the private cloud has shown that the data center's still relevant. However, if you start going cloud-based stuff, integration's huge. So integrating, not just packaging our solutions, customers need to integrate all this stuff. This has been a key part of Nutanix and HPE. How do you guys see this going forward from an integration standpoint? Because on the product side, it's gotta integrate, and then in the customer environment you mentioned the consumption piece. Can you guys just expand on what that means? >> Sure. Yeah, we saw Dheeraj's presentation this morning, right? And Sunil's, our entire design point is how do we make everything invisible, right? How do we make those integration points invisible? 
Now, we all know that there's a traditional architecture you need to migrate from to take advantage of some of these things. And that's where the risk is, how do you get from A to B into these environments? As I mentioned, we do have a services organization that helps there, but we could use, now we have one of the largest partners in the industry that could help us do that. I think that's a key component. We will always try to innovate being Nutanix, we will always try to innovate in software, right? Let's try to figure out how we can make this so much easier, move it up the stack to make sure this is the easiest thing to migrate and have choice for customers. >> Yeah, and I think, maybe, just to add to that, if you think about it from a customer view in, right? A lot of customers moved a lot of things very quickly to the public cloud and the public cloud will continue to grow fast, but they're also learning some things. It's not quite as cheap as they thought it was gonna be, like twice as expensive. Moving data around is very expensive. The public cloud is charging you to get your own data back out. Data sovereignty matters a lot more than it used to with things like GDPR in Europe. More and more of the data's getting created at the edge. It's not in the cloud or the data center. And so, what we're seeing is customers are now thinking about things as you mentioned, we're kind of hybrid, and they're talking about the right mix. What's the right mix of public? What's the right mix of private? Where should the data live? And that's a tough story and that's a tough journey for them to go on, so they want help up front with the advisory services, they want help in being able to architect that, implement it, and then, in many cases, even kind of run that. And with nearly 25,000 services professionals around the globe, we have a unique footprint to help customers along that journey. >> It's an interesting deal, it's very, I think, gonna be pretty big. So, congratulations. >> Phil: Thank you. >> It was great having you both on The Cube, Phil and Chris. >> Thank you very much, thanks. >> Thanks for having us. >> I'm Rebecca Knight for John Furrier, we will have so much more from Nutanix .NEXT here in Anaheim, California, so stay with us. (electronic dance music)

Published Date : May 8 2019

SUMMARY :

At Nutanix .NEXT 2019 in Anaheim, Chris Kaddaras of Nutanix and Phil Davis of HPE describe their two-week-old partnership: Nutanix sells its software and DX appliances on HPE compute, and HPE embeds Nutanix in its GreenLake pay-as-you-go consumption offering. Both stress that the deal is channel friendly, that joint customer pipelines are already forming, and that HPE's services organization plus Nutanix's design point of customer choice address hybrid cloud, data sovereignty, and infrastructure transformation needs.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Rebecca Knight | PERSON | 0.99+
Chris | PERSON | 0.99+
Nutanix | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Chris Kaddaras | PERSON | 0.99+
Phil Davis | PERSON | 0.99+
Europe | LOCATION | 0.99+
Phil | PERSON | 0.99+
Green Lake | ORGANIZATION | 0.99+
17% | QUANTITY | 0.99+
first | QUANTITY | 0.99+
two guests | QUANTITY | 0.99+
second part | QUANTITY | 0.99+
29 years | QUANTITY | 0.99+
HPE | ORGANIZATION | 0.99+
two elements | QUANTITY | 0.99+
two weeks | QUANTITY | 0.99+
Anaheim, California | LOCATION | 0.99+
two components | QUANTITY | 0.99+
Both companies | QUANTITY | 0.99+
yesterday | DATE | 0.99+
one component | QUANTITY | 0.99+
both | QUANTITY | 0.99+
over 50% a year | QUANTITY | 0.99+
last year | DATE | 0.99+
Hewlett Packard Enterprise | ORGANIZATION | 0.99+
Henry Ford | PERSON | 0.98+
One | QUANTITY | 0.98+
both sides | QUANTITY | 0.98+
first sale | QUANTITY | 0.98+
twice | QUANTITY | 0.98+
75 plus year old | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Over 70% | QUANTITY | 0.98+
100 customers | QUANTITY | 0.97+
nearly 25,000 services | QUANTITY | 0.97+
Nutanix | EVENT | 0.94+
2019 | DATE | 0.93+
Hewlett Packard Entrerprise | ORGANIZATION | 0.91+
GDPR | TITLE | 0.9+
Hybrid IT | ORGANIZATION | 0.9+
Narrator | TITLE | 0.88+
both channel | QUANTITY | 0.87+

Vikram Murali, IBM | IBM Data Science For All


 

>> Narrator: Live from New York City, it's theCUBE. Covering IBM Data Science For All. Brought to you by IBM. >> Welcome back to New York here on theCUBE. Along with Dave Vellante, I'm John Walls. We're Data Science For All, IBM's two day event, and we'll be here all day long wrapping up again with that panel discussion from four to five here Eastern Time, so be sure to stick around all day here on theCUBE. Joining us now is Vikram Murali, who is a program director at IBM, and Vikram thank for joining us here on theCUBE. Good to see you. >> Good to see you too. Thanks for having me. >> You bet. So, among your primary responsibilities, The Data Science Experience. So first off, if you would, share with our viewers a little bit about that. You know, the primary mission. You've had two fairly significant announcements. Updates, if you will, here over the past month or so, so share some information about that too if you would. >> Sure, so my team, we build The Data Science Experience, and our goal is for us to enable data scientist, in their path, to gain insights into data using data science techniques, mission learning, the latest and greatest open source especially, and be able to do collaboration with fellow data scientist, with data engineers, business analyst, and it's all about freedom. Giving freedom to data scientist to pick the tool of their choice, and program and code in the language of their choice. So that's the mission of Data Science Experience, when we started this. The two releases, that you mentioned, that we had in the last 45 days. There was one in September and then there was one on October 30th. Both of these releases are very significant in the mission learning space especially. We now support Scikit-Learn, XGBoost, TensorFlow libraries in Data Science Experience. We have deep integration with Horton Data Platform, which is keymark of our partnership with Hortonworks. Something that we announced back in the summer, and this last release of Data Science Experience, two days back, specifically can do authentication with Technotes with Hadoop. So now our Hadoop customers, our Horton Data Platform customers, can leverage all the goodies that we have in Data Science Experience. It's more deeply integrated with our Hadoop based environments. >> A lot of people ask me, "Okay, when IBM announces a product like Data Science Experience... You know, IBM has a lot of products in its portfolio. Are they just sort of cobbling together? You know? So exulting older products, and putting a skin on them? Or are they developing them from scratch?" How can you help us understand that? >> That's a great question, and I hear that a lot from our customers as well. Data Science Experience started off as a design first methodology. And what I mean by that is we are using IBM design to lead the charge here along with the product and development. And we are actually talking to customers, to data scientist, to data engineers, to enterprises, and we are trying to find out what problems they have in data science today and how we can best address them. So it's not about taking older products and just re-skinning them, but Data Science Experience, for example, it started of as a brand new product: completely new slate with completely new code. Now, IBM has done data science and mission learning for a very long time. We have a lot of assets like SPSS Modeler and Stats, and digital optimization. 
And we are re-investing in those products, and we are investing in such a way, and doing product research in such a way, not to make the old fit with the new, but in a way where it fits into the realm of collaboration. How can data scientist leverage our existing products with open source, and how we can do collaboration. So it's not just re-skinning, but it's building ground up. >> So this is really important because you say architecturally it's built from the ground up. Because, you know, given enough time and enough money, you know, smart people, you can make anything work. So the reason why this is important is you mentioned, for instance, TensorFlow. You know that down the road there's going to be some other tooling, some other open source project that's going to take hold, and your customers are going to say, "I want that." You've got to then integrate that, or you have to choose whether or not to. If it's a super heavy lift, you might not be able to do it, or do it in time to hit the market. If you architected your system to be able to accommodate that. Future proof is the term everybody uses, so have you done? How have you done that? I'm sure API's are involved, but maybe you could add some color. >> Sure. So we are and our Data Science Experience and mission learning... It is a microservices based architecture, so we are completely dockerized, and we use Kubernetes under the covers for container dockerstration. And all these are tools that are used in The Valley, across different companies, and also in products across IBM as well. So some of these legacy products that you mentioned, we are actually using some of these newer methodologies to re-architect them, and we are dockerizing them, and the microservice architecture actually helps us address issues that we have today as well as be open to development and taking newer methodologies and frameworks into consideration that may not exist today. So the microservices architecture, for example, TensorFlow is something that you brought in. So we can just pin up a docker container just for TensorFlow and attach it to our existing Data Science Experience, and it just works. Same thing with other frameworks like XGBoost, and Kross, and Scikit-Learn, all these are frameworks and libraries that are coming up in open source within the last, I would say, a year, two years, three years timeframe. Previously, integrating them into our product would have been a nightmare. We would have had to re-architect our product every time something came, but now with the microservice architecture it is very easy for us to continue with those. >> We were just talking to Daniel Hernandez a little bit about the Hortonworks relationship at high level. One of the things that I've... I mean, I've been following Hortonworks since day one when Yahoo kind of spun them out. And know those guys pretty well. And they always make a big deal out of when they do partnerships, it's deep engineering integration. And so they're very proud of that, so I want to come on to test that a little bit. Can you share with our audience the kind of integrations you've done? What you've brought to the table? What Hortonworks brought to the table? >> Yes, so Data Science Experience today can work side by side with Horton Data Platform, HDP. And we could have actually made that work about two, three months back, but, as part of our partnership that was announced back in June, we set up drawing engineering teams. We have multiple touch points every day. 
We call it co-development. They have put resources in, we have put resources in, and today, especially with the release that came out on October 30th, Data Science Experience can authenticate using secured Knox, as I previously mentioned, and that is a direct example of our partnership with Hortonworks. So that is phase one. Phase two and phase three are going to be deeper integration: we are planning on making Data Science Experience an Ambari management pack. So for a Hortonworks customer, if you have HDP already installed, you don't have to install DSX separately. It's going to be a management pack; you just spin it up. And the third phase is going to be... We're going to be using YARN for resource management. YARN is very good at resource management, and for infrastructure as a service for data scientists, we can actually delegate that work to YARN. So Hortonworks, they are putting resources into YARN, doubling down actually, and they are making changes to YARN where it will act as the resource manager not only for the Hadoop and Spark workloads, but also for Data Science Experience workloads. So that is the level of deep engineering that we are engaged in with Hortonworks.
>> YARN stands for Yet Another Resource Negotiator. There you go for...
>> John: Thank you.
>> The trivia of the day. (laughing) Okay, so... But of course, Hortonworks are big on committers, and obviously a big committer to YARN. Probably wouldn't have YARN without Hortonworks. So you mentioned that's kind of what they're bringing to the table, and you guys primarily are focused on the integration as well as some other IBM IP?
>> That is true, as well as the Knox piece that I mentioned. We have a Knox committer. We have multiple Knox committers on our side, and that helps us as well, since Knox is part of the HDP package. We need that knowledge on our side to work with the Hortonworks developers, to make sure that we are contributing and making inroads into Data Science Experience. That way the integration becomes a lot easier. And from an IBM IP perspective... So Data Science Experience already comes with a lot of packages and libraries that are open source, but IBM Research has worked on a lot of these libraries. I'll give you a few examples: Brunel and PixieDust are something that our developers love. These are visualization libraries that were actually cooked up by IBM Research and then open sourced. And these are prepackaged into Data Science Experience, so there is IBM IP involved, and there are a lot of machine learning algorithms that we put in there. So that comes right out of the package.
>> And you guys, the development teams, are really both in the Valley? Is that right? Or are you really distributed around the world?
>> Yeah, so we are. The Data Science Experience development team is in North America, between the Valley and Toronto. The Hortonworks team, they are situated about eight miles from where we are in the Valley, so there's a lot of synergy. We work very closely with them, and that's what we see in the product.
>> I mean, what impact does that have? You know, you hear today, "Oh, yeah. We're a virtual organization. We have people all over the world: Eastern Europe, Brazil." How much of an impact is it to have people so physically proximate?
>> I think it has a major impact. I mean, IBM is a global organization, so we do have teams around the world, and we work very well. With the advent of IP telephony, and screen shares, and so on, yes, we make it work.
But it really helps being in the same timezone, especially working with a partner just eight or ten miles away. We have a lot of interaction with them, and that really helps.
>> Dave: Yeah. Body language?
>> Yeah.
>> Yeah. You talked about problems. You talked about issues. You know, customers. What are they now? Before it was like, "First off, I want to get more data." Now they've got more data. Is it figuring out what to do with it? Finding it? Having it available? Having it accessible? Making sense of it? I mean, what's the barrier right now?
>> The barrier, I think, for data scientists... The number one barrier continues to be data. There's a lot of data out there, a lot of data being generated, and the data is dirty. It's not clean. So the number one problem that data scientists have is: how do I get to clean data, and how do I access data? There are so many data repositories, data lakes, and data swamps out there. Data scientists don't want to be in the business of figuring out how to access data. They want instant access to data, and--
>> Well, if you would, let me interrupt you.
>> Yeah?
>> You say it's dirty. Give me an example.
>> So it's not structured data, so data scientists--
>> John: So unstructured versus structured?
>> Unstructured versus structured. And if you look at all the social media feeds that are being generated, the amount of data that is being generated, it's all unstructured data. So we need to clean up the data, because the algorithms need structured data, or data in a particular format, and data scientists don't want to spend too much time cleaning up that data. And access to data, as I mentioned. And that's where Data Science Experience comes in. Out of the box we have so many connectors available. It's very easy for customers to bring in their own connectors as well, and you have instant access to data. And as part of our partnership with Hortonworks, you don't have to bring the data into Data Science Experience. The data is becoming so big, you want to leave it where it is and instead push the analytics down to where it is. And you can do that: we can connect to remote Spark, and we can push analytics down through remote Spark. All of that is possible today with Data Science Experience. The second thing that I hear from data scientists is all the open source libraries. Every day there's a new one. It's a boon and a bane as well. The open source community is very vibrant, and there are a lot of data science competitions, machine learning competitions, that are helping move this community forward, and that's a good thing. The bad thing is data scientists like to work in silos on their laptops. How do you, from an enterprise perspective... How do you take that, and how do you move it? Scale it to an enterprise level? And that's where Data Science Experience comes in, because now we provide all the tools, the tools of your choice, open source or proprietary. You have them in there, and you can easily collaborate. You can do all the work that you need with open source packages and libraries, bring your own, as well as collaborate with other data scientists in the enterprise.
>> So, you're talking about dirty data. I mean, with Hadoop and no schema on write, right? We kind of knew this problem was coming. So technology sort of got us into this problem. Can technology help us get out of it? I mean, from an architectural standpoint. When you think about dirty data, can you architect things in to help?
>> Yes.
So, if you look at the machine learning pipeline, the pipeline starts with ingesting data and then cleansing or cleaning that data. And then you go into creating a model, training it, picking a classifier, and so on. So we have tools built into Data Science Experience, and we're working on more tools coming down our roadmap, which will help data scientists do that themselves. I mean, they don't have to be really in-depth coders or developers to do that. Python is very powerful. You can do a lot of data wrangling in Python itself, so we are enabling data scientists to do that within the platform, within Data Science Experience.
>> If I look at sort of the demographics of the development teams... We were talking about Hortonworks and you guys collaborating. What are they like? I mean, people picture IBM, you know, like this 100-plus-year-old company. What's the persona of the developers on your team?
>> The persona? I would say we have a very young, agile development team, and by that I mean... So we've had six releases this year in Data Science Experience, and that's just for the on-premises side of the product; the cloud side of the product has had a huge amount of delivery as well. We have releases coming out faster than we can code. And it's not about re-architecting it every time; it's about adding features, giving features that our customers are asking for, and not making them wait for three months, six months, one year. So our releases are becoming a lot more frequent, and customers are loving it. And that is, in part, because of the team. The team is able to evolve. We are very agile, and we have an awesome team. That's all. It's an amazing team.
>> But six releases in...
>> Yes. We had an initial release in April, and since then we've had about five revisions of the release where we added a lot more features to our existing releases: a lot more packages, libraries, functionality, and so on.
>> So you know what monster you're creating now, don't you? I mean, you know? (laughing)
>> I know, we are setting expectations.
>> You still have two months left in 2017.
>> We do.
>> These are not mainframe release cycles.
>> They are not, and that's the advantage of the microservices architecture. I mean, when a customer upgrades, right? They don't have to bring the entire system down to upgrade. You can target one particular part, one particular microservice. You componentize it, and just upgrade that particular microservice. It's become very simple, so...
>> Well, some of those microservices aren't so micro.
>> Vikram: Yeah, not always. So it's a balance.
>> You're growing, but yeah.
>> It's a balance you have to keep, making sure that you componentize it in such a way that when you're doing an upgrade, it affects just one small piece of it, and you don't have to take everything down.
>> Dave: Right.
>> But, yeah, I agree with you.
>> Well, it's been a busy year for you, to say the least, and I'm sure 2017-2018 is not going to slow down. So continued success.
>> Vikram: Thank you.
>> Wish you well with that. Vikram, thanks for being with us here on theCUBE.
>> Thank you. Thanks for having me.
>> You bet.
>> Back with more of Data Science For All here in New York City with IBM, coming up on theCUBE right after this.
>> Cameraman: You guys are clear.
>> John: All right. That was great.
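The conversation above touches on a few concrete, hands-on pieces: the PixieDust and Brunel visualization libraries, pushing analytics down to a remote Spark cluster, and doing data wrangling in Python. The short sketches that follow are hypothetical illustrations of those ideas in plain open-source Python; none of them are Data Science Experience code, and all file names, column names, and cluster details are made-up placeholders.

First, PixieDust, one of the IBM-open-sourced visualization libraries Vikram mentions. This is a minimal sketch, assuming a Jupyter notebook with the pixiedust package installed; importing it makes an interactive display() helper available in the notebook.

```python
import pandas as pd
import pixiedust  # in a notebook, this import makes display() available

# Made-up sample data, purely for illustration.
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "signups": [120, 135, 160, 170],
})

# Renders an interactive table with a chart picker (bar, line, and so on) in the notebook UI.
display(df)
```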
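Next, a sketch of the "leave the data where it is and push the analytics down" idea. The aggregation below runs on the Spark cluster, and only the small summarized result is pulled back. The YARN master setting and the HDFS path are assumptions about an HDP-style environment, not anything specific to Data Science Experience.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes a reachable YARN-managed Spark cluster and an HDFS dataset; both are placeholders.
spark = (SparkSession.builder
         .appName("pushdown-sketch")
         .master("yarn")
         .getOrCreate())

events = spark.read.parquet("hdfs:///data/events")   # the raw data never leaves the cluster
summary = (events.groupBy("event_type")
           .agg(F.count("*").alias("n"),
                F.avg("latency_ms").alias("avg_latency_ms")))

# Only the aggregated result (a handful of rows) comes back to the driver.
print(summary.toPandas())
```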
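Finally, a minimal sketch of the ingest, cleanse, and model pipeline Vikram describes, using only pandas and scikit-learn. The CSV file and column names are invented for the example; the point is simply that much of the "dirty data" work can be expressed in a few lines of Python.

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Ingest: a hypothetical raw extract with duplicates and dirty values.
df = pd.read_csv("customer_events.csv")
df = df.drop_duplicates()
df["age_days"] = pd.to_numeric(df["age_days"], errors="coerce")  # non-numeric strings become NaN
df = df.dropna(subset=["churned"])                               # rows without a label are unusable

X = df[["age_days", "num_contacts", "monthly_spend"]]
y = df["churned"].astype(int)

# Cleanse and model in one pipeline: impute remaining gaps, scale, then train a classifier.
model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```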

Published Date : Nov 1 2017
