Rajesh Pohani, Dell Technologies | SuperComputing 22
>>Good afternoon friends, and welcome back to Supercomputing. We're live here at the Cube in Dallas. I'm joined by my co-host, David. My name is Savannah Peterson and our fabulous guest. I feel like this is almost his show to a degree, given his role at Dell. He is the Vice President of HPC over at Dell. Rajesh Pohani, thank you so much for being on the show with us. How you doing? >>Thank you guys. I'm doing okay. Good to be back in person. This is a great show. It's really filled in nicely today and, and you know, a lot of great stuff happening. >>It's great to be around all of our fellow hardware nerds. The Dell portfolio grew by three products. It did, I believe. Can you give us a bit of an intro on that? >>Sure. Well, yesterday afternoon and yesterday evening, we had a series of events that announced our new AI portfolio, artificial intelligence portfolio, you know, which will really help scale where I think the world is going in the future with, with the creation of, of all this data and what we can do with it. So yeah, it was an exciting day for us. Yesterday we had a, a session over in a ballroom where we did a product announce and then in the evening had an unveil in our booth here at the Supercomputing conference, which was pretty eventful. Cupcakes, you know, champagne, drinks and, and most importantly-- >>Yeah, I know. Good time. Did you get the invite? >>No, I, most importantly, some really cool new servers for our customers. >>Well, tell us about them. Yeah, so what's, what's new? What's in the news? >>Well, you know, as you think about artificial intelligence and what customers are, are needing to do and the way artificial intelligence is gonna change how, you know, frankly, the world works. We have now developed and designed new purpose-built hardware, new purpose-built servers for a variety of AI and artificial intelligence needs. We launched our first eight-way, you know, Nvidia H100 and A100 SXM product. Yesterday we launched a 4U four-way H100 product and a 2U fully liquid cooled Intel Data Center GPU Max server as well. So, you know, a full range of portfolio for a variety of customer needs, depending on their use cases, what they're trying to do, their infrastructure, we're able to now provide, you know, servers and hardware that help, you know, meet those needs and those use cases. >>So I wanna double click, you just said something interesting, water cooled. >>Yeah. So >>Where does, at what point do you need to move in the direction of water cooling and, you know, I know you mentioned, you know, GPU centric, but, but, but talk about that, that balance between, you know, density and what you can achieve with the power that's going into the system. >>Well, it all depends on what the customers are trying to accommodate, right? I, I think that there's a dichotomy that's existing now between customers who have already or are planning liquid cooled infrastructures and power distribution to the rack. So you take those two together, and if you have the power distribution to the rack, you wanna take advantage of the density, and to take advantage of the density you need to be able to cool the servers, and therefore liquid cooling comes into play. Now you have other customers that either don't have the power to the rack or aren't ready for liquid cooling, and at that point, you know, they're not gonna want to take advantage. They can't take advantage of the density. 
So there's this dichotomy in products, and that's why we've got our XE9640, which is a 2U dense liquid cooled, but we also have our XE8640, which is a 4U air cooled, right? Or liquid-assisted air cooled, right? So depending on where you are on your journey, whether it's power infrastructure, liquid cooling infrastructure, we've got the right solution for you that, you know, meets your needs. You don't have to take advantage of the density, the expense of liquid cooling, unless you're ready to do that. Otherwise we've got this other option for you. And so that's really the dichotomy that's beginning to exist in our customers' infrastructures today. >>I was curious about that. So do you see, is there a category or a vertical that is more in the liquid cooling zone because that's a priority in terms of the density or >>Yeah, yeah. I mean, you've got your, your large HPC installations, right? Your large clusters that not only have the power but have, you know, the liquid cooling density that they've built in, you've got, you know, federal government installations, you've got financial tech installations, you've got colos that are built for sustainability and density and space that, that can also take advantage of it. Then you've got others that are, you know, more enterprises, more in the mainstream of what they do, where, you know, they're not ready for that. So it just, it just depends on the scale of the customer that we're talking about and what they're trying to do and, and where they're, and where they're doing it. >>So we're here, you know, we're here at the Supercomputing conference, and HPC is sort of the kind of trailing mini version of supercomputing in a way, where maybe you have someone who doesn't need 2 million CPU cores, but maybe they need a hundred thousand CPU cores. So it's all a matter of scale. What is, can you identify kind of an HPC sweet spot right now as, as Dell customers are adopting the kinds of things that you just announced? >>You know, I think >>How big are these clusters at this point? >>Well, let, let me, let me hit something else first. Yeah, I think people talk about HPC as, as something really specific, and what we're seeing now with the, you know, vast amount of data creation, the need for computational analytics, the need for artificial intelligence, the HPC is kind of morphing right into, into, you know, more and more general customer use cases. And so where before you used to think about HPC as research and academics and computational dynamics, now, you know, there's a significant Venn diagram overlap with just regular artificial intelligence, right? And, and so that is beginning to change the nature of how we think about HPC. You think about the vast data that's being created. You've got data driven HPC where you're running computational analytics on this data that's giving you insights or outcomes or information. It's not just, hey, I'm running, you know, physics calculations or astronomical, you know, calculations. It is now expanding in a variety of ways where it's democratizing into, you know, customers who wouldn't actually talk about themselves as HPC customers. And when you meet with them, it's like, well, yeah, but your compute needs are actually looking like HPC customers. So let's talk to you about these products. Let's talk to you about these solutions, whether it's software solutions, hardware solutions, or even purpose-built hardware like we talked about. That now becomes the new norm. 
>>Customer feedback and community engagement is big for you. I know this portfolio of products was developed based on customer feedback, correct? Yep. >>So everything we do at Dell is customer driven, right? We want to be, we want to drive, you know, customer driven innovation, customer driven value to meet our customers' needs. So yeah, we spent a while, right, researching these products, researching these needs, understanding is this one product? Is it two products? Is it three products? Talking to our partners, right? Driving our own innovation in IP and then where they're going with their roadmaps to be able to deliver kind of a harmonized solution to customers. So yeah, it was a good amount of customer engagement. I know I was on the road quite a bit talking to customers. You know, one of our products, you know, we almost named after one of our customers, right? I'm like, hey, we've talked about this. This is what you said you wanted. Now he, he was representative of a group of customers and we validated that with other customers, and it's also a way of me making sure he buys it. But great, great. Yeah, >>Sharing sales there, >>That was good. But you know, it's heavily customer driven, and that's where understanding those use cases and where they fit drove the various products. And, you know, in terms of, in terms of capability, in terms of size, in terms of liquid versus air cooling, in terms of things like number of PCIe lanes, right? What the networking infrastructure was gonna look like. All customer driven, all designed to meet where customers are going in their artificial intelligence journey, in their AI journey. >>It feels really collaborative. I mean, you've got both the Intel and the Nvidia GPUs on your new products. There's a lot of collaboration between academia and the private sector. What has you most excited today about supercomputing? >>What it's going to enable. If you think about what artificial intelligence is gonna enable, it's gonna enable faster medical research, right? Genomics. The next pandemic, hopefully not anytime soon, we'll be able to diagnose, we'll be able to track it so much faster through artificial intelligence, right? The data that was created in this last one is gonna be an amazing source of research to, to go address stuff like that in the future and get to the heart of the problem faster. If you think about manufacturing and, and process improvement, you can now simulate your entire manufacturing process. You don't have to run physical pilots, right? You can simulate it all, get 90% of the way there, which means either your factory process will get reinvented faster, or a new factory can get up and running faster. Think about retail, how retail products are laid out. >>You can use media analytics to track how customers go through the store, what they're buying. You can lay things out differently. You're not gonna have in the future people going, you know, to test cell phone reception. Can you hear me now? Can you hear me now? You can simulate where customers are, their patterns, to ensure that the 5G infrastructure is set up, you know, to the maximum advantage. All of that through digital simulation, through digital twins, through media analytics, through natural language processing. Customer experience is gonna be better, communication's gonna be better. 
All of this stuff with, you know, using this data, training it, and then applying it is probably what excites me the most about supercomputing and, and really compute in the future. >>So on the hardware front, kind of digging down below the, the covers, you know, the surface a little more, Dell has been well known for democratizing things in IT, making them available at a variety of levels. Never a one-size-fits-all company, right? These latest announcements, would it be fair to say, they represent sort of the tip of the spear in terms of high performance. What about, what about RPC, regular performance computing? Where's, where's the overlap? 'Cause you know, we're in this season where we've got AMD and Intel leapfrogging one another, new bus architectures. The, the, you know, the, the connectivity that's plugged into these things is getting faster and faster and faster. So from a Dell perspective, where does my term RPC, regular performance computing, end and, and HPC begin? Are you seeing people build stuff on kind of general purpose clusters also? >>Well, sure, I mean, you can run a, a good amount of artificial intelligence on, you know, high core count CPUs without acceleration, and you can do it with PCIe accelerators, and then, then you can do it with some of the, the, the very specific high performance accelerators like the Intel, you know, Data Center GPU Max, or Nvidia's A100 or H100. So there are these scale up opportunities. I mean, if you think about, you know, our mission to democratize compute, not just HPC, but general compute, is about making it easier for customers to implement, to get the value out of what they're trying to do. So we focus on that with, you know, reference designs or validated designs that take out a good amount of time that customers would have to do it on their own, right? We can cut by six to 12 months the ability for customers, in, in, I'm gonna use an HPC example and then I'll come back to your, your regular performance compute, by us doing the work. Us, you know, setting, you know, determining the configuration, determining the software packages, testing it, tuning it so that by the time it gets to the customer, they get to take advantage of the expertise of Dell engineers, Dell scale, and they are ready to go in a much faster point of view. >>The challenge with AI is, and you talk to customers, is they all know what it can lead to and the benefits of it. Sometimes they just dunno how to start. We are trying to make it easier for customers to start, whether it is using regular RPC, or you know, non optimized, non specialized compute, or as you move up the value stack into compute capability, our goal is to make it easier for customers to start to get on their journey and to get to what they're trying to do faster. So where do I see, you know, regular performance compute, you know, it's, it's, you know, they go hand in hand, right? As you think about what customers are trying to do. And I think a lot of customers, like we talked about, don't actually think about what they're trying to do as high performance computing. They don't think of themselves as one of those specialized institutions doing HPC, but they're on this glide path to greater and greater compute needs and greater and greater compute attributes that merge kind of regular performance computing and high performance computing to where it's hard to really draw the line, especially when you get to data driven HPC. Data's everywhere >>And so much data. 
And it sounds like a lot of people are very early in this journey. From our conversation with Travis, I mean, five AI programs per very large company or less at this point for 75% of customers, that's pretty wild. I mean you're, you're an educational coach, you're teachers, you're innovating on the hardware front, you're doing everything at Dell. Last question for you. You've been at Dell 24 years, >>25 this coming March. >>What has a company like that done to retain talent like you for more than two and a half decades? >>You know, for me, and I, I'd like to say I had an atypical journey, but I don't think I have, right? There, there has always been opportunity for me, right? You know, I started off as a quality engineer. A couple years later I'm living in Singapore running, or you know, running services for Enterprise in APJ. I come back, couple years in Austin, then I'm in our Bangalore development center helping set that up. Then I come back, then I'm in our Taiwan development center helping with some of the work out there. And then I come back. There has always been the next opportunity before I could even think about, am I ready for the next opportunity? And so for me, why would I leave, right? Why would I do anything different, given that there's always been the next opportunity? The other thing is jobs are what you make of it, and Dell embraces that. So if there's something that needs to be done or there was an opportunity, or even in the case of our AI ML portfolio, we saw an opportunity, we reviewed it, we talked about it, and then we went all in. So that innovation, that opportunity, and then most of all the people at Dell, right? I can't ask to work with a better set of folks from the top on down. >>That's fantastic. Yeah. So it's culture. >>It is culture, really. At the end of the day, it is culture. >>That's fantastic. Rajesh, thank you so much for being here with us. >>Thank you guys, the show, really appreciate it. >>Yeah, this was such a pleasure. And thank you for tuning into the Cube Live from Dallas here at Supercomputing. My name is Savannah Peterson, and we'll see y'all in just a little bit.
Robert Scoble, Transformation Group - SXSW 2017 - #IntelAI - #theCUBE
>> Narrator: Live from Austin, Texas, it's the Cube covering South by Southwest 2017. Brought to you by Intel. Now, here's John Furrier. >> Hey, welcome back everyone. We're live here in the Cube coverage of South by Southwest. We're at the Intel AI Lounge, hashtag Intel AI. And the theme is AI for social good. So if you really support that, go on Twitter and use the hashtag Intel AI and support our cause. I'm John Furrier with Silicon Angle, I'm here with Robert Scoble, @Scobleizer. Just announcing this week the new formation of his new company, the Transformation Group. I've known Robert for over 12 years now. Influencer, futurist. You've been out and about with the virtual reality, augmented reality, you're wearing the products. >> Yup. >> You've been all over the world, you were just at Mobile World Congress, we've been following you. You are the canary in the coalmine poking at all the new technology. >> Well, the next five years, you're going to see some mind blowing things. In fact, just the next year, I predict that this thing is going to turn into a three ounce pair of glasses that's going to put virtual stuff on top of the world. So think about coming back to South by Southwest, you're wearing a couple pairs of glasses, and you are going to see blue lines on the floor taking you to your next meeting or TV screens up here so I can watch the Cube while I walk around the streets here. It's going to be a lot of crazy stuff. >> So, we've been on our opening segment, we talked about it, we just had a segment on social good around volunteering, but the theme that's coming out is this counter culture where there's now this humanization aspect. They called it the consumerization of IT in the past. But in the global world, the human involvement now has these immersion experiences with technology, and now is colliding with and impacting lives. >> Well, absolutely true. >> This is a Microsoft HoloLens, first of all. And HoloLens puts virtual stuff on top of the real world. But at home, I have an HTC Vive, and I have an Oculus Rift for VR, and VR is that immersive media. This is augmented reality, or what we call mixed reality, where the images are put on top of the world. So I can see something pop off of you. In fact, last year at South by, I met a guy who started a company called Eyefluence, he showed me a pair of glasses and you look at a bottle like this and a little menu pops off the side of the bottle, tells you how much it is, tells you what's in the bottle, and lets you buy new versions of this bottle, like a case of it, and have it shipped to my house, all with my eyes. That's coming out from Google next year. >> So the big thing on the immersion, the AR, you look at what's going on with societal impact. What are the things that you see? Obviously, we've been seeing at Mobile World Congress before Peelers came out, autonomous vehicles is game changing, smart cities, media and entertainment, the world that we know close to our world, and then smart home. >> Oh yeah. >> Smart home's been around for years, but autonomous vehicles truly is a societal change. >> Yes. >> The car is a data center now. It's got experiences. And there's three new startups you should pay attention to, in the new cars that are coming in the next 18 months. Quanergy is one. They make a new kind of lidar, a new sensor. In fact, there's sensors here that are sensing the world as I walk around and seeing all the surfaces. The car works the same way. 
It has to see ahead to know that there's a kid in front of your car, the car needs to stop, right. And Quanergy is making a focusable semiconductor lidar, that's going to be one to watch. And then there's a new kind of brain, a new kind of AI coming, and DeepScale is the one that I'm watching. The DeepScale brain uses a new third company called Luminar Technologies, which is making a new kind of 3D map of the world. So think about going down the street. This new map is going to know every pot hole, every piece of paint, every bridge on the street, and it's going to, the brain, the AI, is going to compare the virtual map to the real map, to the real world and see if there's anything new, like a kid crossing across the street. Then the car needs to do something and make a new decision. So 3D startups are going to really change the car. But the reason I'm so focused on mixed reality, is mixed reality is the user interface for the self-driving car, for the smart city, for the internet of things, the fields in your farm or what not, and for your robot, and for your drone. You're going to have drones that are going to know this space, and you can fly it right, I've seen drones already in the R & D labs at Intel. You can fly them straight at the wall, it'll stop an inch from the wall because it knows where the wall is. >> 'Cause it's got the software, it's got the sensors, the internet of things. We are putting out a new research report at Wikibon called IoT and P, Internet of Things and People. And this is the key point. I want to get your thoughts on this because you nailed a bunch of things, and I want you to define for the folks watching what you mean by mixed reality because this is not augmented reality. >> Well it is. >> John: You're talking about mixed reality. >> It is augmented reality, it's just-- >> John: But why mixed reality? >> We came up with the new term called mixed reality because we have augmented reality on phones. But the augmented reality you have on phones, like the Pokemon we've been talking about, they're not locked to the world. So when I'm wearing this, there's actually a shark right here on this table, and it's locked on the table, and I can walk around that shark. And it seems like it's sitting here just like this bottle of water is sitting on the table. This is mind blowing. And now we can actually change the table itself and make it something else. Because every pixel in this space is going to be mapped by these new sensors on it. >> So, let's take that to the next level. You had mentioned earlier in your talk just now about user interface to cars. You didn't just say user interface to cars, you didn't say just smart, you kind of implied, I think you meant it's the interface to all the environments. >> Robert: Yes. >> Can you expand on your thoughts on that? >> You're going to be wearing glasses that look like yours in about a year, much smaller than this. This is too dorky and too big for an average consumer to wear around, right, but if they're three ounces and they look something like what you're wearing right now. >> Some nice Ray-Bans, yup. >> And they're coming. I've seen them in the R & D labs. They're coming from a variety of different companies. Google, Facebook, Loomis, Magic Leap, all sorts of different companies are coming with these lightweight small glasses. You're going to wear them around and it's going to lay interface elements on everything. So think about my watch. 
Why, if I do this gesture, why do I have to look at a little tiny screen right here? Why isn't the whole screen of my calendar popping up right here? They could do that, that's a gesture. This computer in here can sense that I'm doing a gesture and can put a new user interface on top of that. Now, I've seen tractors that have sensors in them. Now, using glasses like this, it shows me what the pumps are doing in the tractor on the glasses. I can walk around a factory floor and see the sensors in the pipes on the factory floor and see the sensors in my electric motors on the factory. All with one pair of glasses. >> So this is why the Intel AI thing interests me, this whole theme. Because what you just described requires data. So one, you need to have the data available. >> Robert: Yes. >> The data's got to be frictionless, it can't be locked in some schema as they say in the database world. It's got to be free to be addressed by software. >> Yes. >> You need software that understands what that is. And then you need horsepower, compute power, chips to make it all happen. >> Yeah, think about a new kind of TV that's coming soon. I'm going to look at a TV like this one, a physical TV. But it's too small and it's at the wrong angle. So I can just grab the image off the TV and virtually move it over here. And I'll see it, nobody else will see it. But I can put that TV screen right here, so I can watch my TV the way I want to watch it. >> Alright so this is all sci-fi great stuff, which actually-- >> It's not sci-fi, it's here already. You just don't have it. I have it (laughs). >> Well, you can see it's kind of dorky, but I'm not going to say you're a dork 'cause I know you. To mainstream America, mainstream world, it's a bit sci-fi but people are grokking this now. Certainly the younger generation that are digital native all are coming in post-9/11, they understand that this is a native world to them, and they take to it like a fish to water. >> Yes. >> Us old guys, but we are the software guys, we're the tech guys. So continue to the mainstream America, what has to happen in your mind to mainstream this stuff? Obviously self driving cars is coming. It's in fleets first, and then cars. >> We have to take people on a journey away from computing like this or computing like this to computing on glasses. So how do we do that? Well, you have to show deep utility. And these glasses show that. Wearing a HoloLens, I see aliens coming out of the walls. Blowing holes in this physical wall. >> John: Like right now? >> Yeah. >> What are you smoking (laughs)? >> Nothing yet. And then I can shoot them with my fingers because the virtual things are mixing with the real world. It's a mind blowing experience. >> So do you see this being programmed by users or being a library of stuff? >> Some are going to be programmed by users like Minecraft is today on a phone or on a tablet. Most of it is going to be built by developers. So there's a huge opportunity coming for developers. >> Talk about the developer angle, because that's huge. We're seeing massive changes in the developer ecosystems. Certainly, open source is going to be around for a while. But which trends do you see in open source, I mean, I'm sorry, in the developer community, with this new overlay of 5G connectivity, all this amazing cloud technology? >> There's a new 3D mapping and it's a SLAM-based map. So think about this space, this physical space. 
These sensors that are on the front of these new kinds of glasses that are coming out are going to sense the world in a new way and put it into a new kind of database, one that we can put programmatic information into. So think about me walking around a shopping mall. I walk in the front door of a shopping mall, I cross a geofence in that shopping mall. And the glasses then show me information about the shopping mall 'cause it knows it's in the shopping mall. And then I say, hey Intel, can you show me, or Siri, or Alexa, or Cortana, or whoever you're talking to. >> Mostly powered by Intel (laughs). >> Most of it is powered by Intel 'cause Intel's in all the data centers and all these glasses. In fact, Intel is the manufacturer of the new kind of controller that's inside this new HoloLens. And when I ask it, I can say, hey, where's the blue jeans in this shopping mall? And all of a sudden, three new pairs of blue jeans will appear in the air, virtual blue jeans, and it'll say this one's a Guess, this one's a Levi's, this one's a whatever. And I'll say, oh I want the Levi's 501, and I'll click on it, and a blue line will appear on the floor taking me right to the product. You know, the shopping mall companies already have the data. They already know where the jeans are in the shopping mall and these glasses are going to take you right to it. >> Robert, so AI is the theme, it's hot, but AI, I mean I love AI, don't get me wrong. AI is a mental model in my mind for people to kind of figure out that this futuristic world's here and it's moving fast. But machine learning is a big part of what AI is becoming. >> Yes. >> So machine learning is becoming automated. >> Well it's becoming a lot faster. >> Faster and available. >> Because it used to take 70,000 images of something like a bottle to train the system that this is a bottle versus a can, bottle versus can. And the scientists have figured out how to make it two images now. So all I need is two images of something new to train the system that we have a bottle versus a can. >> And also the fact that compute's available. There's more and more faster processors so this stuff can get crunched, the data can be crunched. >> Absolutely, but it's the data that trains these things. So let's talk about the bleeding edge of AI. I've seen AIs coming out of Israel that are just mind blowing. They take a 3D image of this table, they separate everything into an object. So this is an object. It's separate from the table that it's on. And it then lets me do AI look-ups on the object. So this is a Roxanne bottle of water. The 3D sensor can see the logo on this bottle of water, can look to the cloud, find all sorts of information about the manufacturer here, what the product is, all sorts of stuff. It might even pull down a CAD drawing like the computer that you're on. Pull down a CAD drawing, overlay it on top of the real product, and now we can put videos on the back of your Macintosh or something like that. You can do mind blowing stuff coming soon. That's one angle. Let's talk about medical. In Israel, I went to the AI manufacturers. They're training the MRI machines to recognize cancers. So you're going to be lying in an MRI machine and it's going to tell the people around the machine whether you have cancer or not and which cancer. And it's already faster than the doctor, cheaper than the doctor, and obviously doesn't need a doctor. And that's going to lead into a whole discussion-- >> The CRISPR thing. These are societal problems by the way. 
The policy is the issue, not the technology. How do you deal with the ethical issues around gene sequencing and gene editing? >> That's a whole other thing. I'm just recognizing whether you have cancer in this example. But now we need to talk about jobs. How do we make new jobs in massive quantities. Because we're going to decimate a lot of people's jobs with these new technologies, so we need to talk about that, probably on a future Cube. But I think mixed reality is going to create millions of jobs because think about this bottle. In the future, I'm going to be wearing a pair of glasses and Skrillex is going to jump out of the bottle, on to the table, and give a performance, and then jump back into the bottle. That's only four years away according to the guy who's running a new startup called 8i. He's making a new volumetric camera, it's a camera rig, 40 or 50 cameras around-- >> If you don't like Skrillex, Martin Garrix can come on. >> Whatever you want. Remember, this media's going to be personalized to your liking. Spotify is already doing that. Do you listen to Spotify? >> John: Yeah, of course. >> Do you listen to the discovery weekly feature on that? >> No. >> You should. It's magical. It brings you the best music based on what you've already listened to and it's personalized. So your discovery weekly on your phone is different than the discovery weekly on my phone. And that's run by AI. >> So these are new collaborative filters. This is all about software? >> Yeah. Software and a little bit of hardware. Because you still need to sense the world in a new way. You're going to get new watches this year that have many more sensors that are looking in your veins for whether you have high blood pressure, whether you're in shape for running. By the way, you're going to have an artificial coach when you go running in the morning, running next to you, just like when you see Mark Zuckerberg. He can afford to pay a real coach, I can't. So he has a real coach running with him every morning and saying hey, we're going to do some interval training today, we're going to do some sprints to get your cardio up. Well, now the glasses are going to do that for you. It's going to say, let's do some intervals today and you're going to wear the watch that's going to sense your blood pressure and your heart rate, and the artificial coach is running next to you. And that's only two years away. >> Of course, great stuff. Robert Scoble, we have to close the segment. Quickly, how has South by changed in ten years? >> Well, 20, I've been coming for 20 years. I've been coming since it was 500 people and now it's 50,000, 70,000 people, it's crazy. >> How has it changed this year? What's going on this year? >> This is the VR year. Every year we have a year, right. There was the Twitter year, there was the Foursquare year. This is the VR year, so if you're over at Capital Factory, you're going to see dozens of VR experiences. In fact, my co-author's playing The Mummy right now. I had to come on your show, I got the short straw (laughs). Sit in the sun instead of playing some cool stuff. But there's VR all over the place. Next year is going to be the mixed reality year, and this is a predictor of the next year that's coming. >> Alright, Robert Scoble, futurist right here on the Cube. Also, congratulations on your new company. You're going out on your own, Transformation Group. >> Yeah, we're helping brands figure out this mixed reality world. >> Congratulations of course. 
As always, it is a transformational time in the history of our world and certainly the computer industry is going to a whole other level that we haven't seen before. And this is going to be exciting. Thanks for spending the time with us. It's the Cube here live at South by Southwest special Cube coverage, sponsored by Intel. And the hashtag is Intel AI. If you like it, tweet us at Twitter. We'll be happy to talk to you online. I'm John Furrier. More after this short break. (electronic music)
Nick Pentreath, IBM STC - Spark Summit East 2017 - #sparksummit - #theCUBE
>> Narrator: Live from Boston, Massachusetts, this is The Cube, covering Spark Summit East 2017. Brought to you by Databricks. Now, here are your hosts, Dave Vellante and George Gilbert. >> Boston, everybody. Nick Pentreath is here, he's a principal engineer at the IBM Spark Technology Center, based in South Africa. Welcome to The Cube. >> Thank you. >> Great to see you. >> Great to see you. >> So let's see, it's a different time of year here than you're used to. >> I've flown from, I don't know the Fahrenheit equivalent, but 30 degrees Celsius heat and sunshine to snow and sleet, so. >> Yeah, yeah. So it's a lot chillier there. Wait until tomorrow. But, so we were joking. You probably get the T-shirt for the longest flight here, so welcome. >> Yeah, I actually need the parka, or like a beanie. (all laugh) >> Little better. Long sleeve. So Nick, tell us about the Spark Technology Center, STC is its acronym, and your role there. >> Sure, yeah, thank you. So the Spark Technology Center was formed by IBM a little over a year ago, and its mission is to focus on the Open Source world, particularly Apache Spark and the ecosystem around that, and to really drive forward the community and to make contributions to both the core project and the ecosystem. The overarching goal is to help drive adoption, yeah, and particularly enterprise customers, the kind of customers that IBM typically serves. And to harden Spark and to make it really enterprise ready. >> So why Spark? I mean, we've watched IBM do this now for several years. The famous example that I like to use is Linux. When IBM put $1 billion into Linux, it really went all in on Open Source, and it drove a lot of IBM value, both internally and externally for customers. So what was it about Spark? I mean, you could have made a similar bet on Hadoop. You decided not to, you sort of waited to see that market evolve. What was the catalyst for having you guys all go in on Spark? >> Yeah, good question. I don't know all the details, certainly, of what the internal drivers were because I joined the STC a little under a year ago, so I'm fairly new. >> Translate the hallway talk, maybe. (Nick laughs) >> Essentially, I think you raise very good parallels to Linux and also Java. >> Absolutely. >> So Spark, sorry, IBM, made these investments in Open Source technologies that it sees as transformational and kind of game-changing. And I think, you know, most people will probably admit within IBM that they maybe missed the boat, actually, on Hadoop and saw Spark as the successor and actually saw a chance to really dive into that and kind of almost leapfrog and say, "We're going to back this as the next generation analytics platform and operating system for analytics and big data in the enterprise." >> Well, I don't know if you happened to watch the Super Bowl, but there's a saying that it's sometimes better to be lucky than good. (Nick laughs) And that sort of applies, and so, in some respects, maybe missing the window on Hadoop was not a bad thing for IBM. >> Yeah, exactly, because not a lot of people made a ton of dough on Hadoop and they're still sort of struggling to figure it out. And now along comes Spark, and you've got this more real time nature. IBM talks a lot about bringing analytics and transactions together. They've made some announcements about that and affecting business outcomes in near real time. I mean, that's really what it's all about, and one of your areas of expertise is machine learning. 
And so, talk about that relationship and what it means for organizations, your mission. >> Yeah, machine learning is a key part of the mission. And you've seen the kind of big data in the enterprise story, starting with the kind of Hadoop and data lakes. And that's evolved into, now we've, before we just dumped all of this data into these data lakes and these silos and maybe we had some Hadoop jobs and so on. But now we've got all this data we can store, what are we actually going to do with it? So part of that is the traditional data warehousing and business intelligence and analytics, but more and more, we're seeing there's a rich value in this data, and to unlock it, you really need intelligent systems. You need machine learning, you need AI, you need real time decision making that starts transcending the boundaries of all the rule-based systems and human-based systems. So we see machine learning as one of the key tools and one of the key unlockers of value in these enterprise data stores. >> So Nick, perhaps paint us a picture of someone who's advanced enough to be working with machine learning with IBM, and we know that the tool chain's kind of immature. Although, IBM with DataWorks or DataFirst has a fairly broad end-to-end sort of suite of tools, but what are the early use cases? And what needs to mature to go into higher volume production apps or higher-value production apps? >> I think the early use cases for machine learning in general, and certainly at scale, are numerous and they're growing, but classic examples are, let's say, recommendation engines. That's an area that's close to my heart. In my previous life before IBM, I built a startup that had a recommendation engine service targeting online stores and new commerce players and social networks and so on. So this is a great kind of example use case. We've got all this data about, let's say, customer behavior in your retail store or your video-sharing site, and in order to serve those customers better and make more money, if you can make good recommendations about what they should buy, what they should watch, or what they should listen to, that's a classic use case for machine learning and unlocking the data that is there, so that is one of the drivers of some of these systems. Players like Amazon, they're sort of good examples of the recommendation use case. Another is fraud detection, and that is a classic example in financial services, enterprise, which is a kind of staple of IBM's customer base. So these are a couple of examples of the use cases, but the tool sets, traditionally, have been kind of cumbersome. So Amazon built everything from scratch themselves using customized systems, and they've got teams and teams of people. Nowadays, you've got this built into Apache Spark, you've got, in Spark, a machine learning library, you've got good models to do that kind of thing. So I think from an algorithmic perspective, there's been a lot of advancement and there's a lot of standardization and almost commoditization of the model side. So what is missing? >> George: Yeah, what else? >> And what are the shortfalls currently? So there's a big difference between the current view, I guess the hype of machine learning: you've got data, you apply some machine learning, and then you get profit, right? But really, there's a hugely complex workflow that involves this end-to-end story. 
You've got data coming from various data sources, you have to feed it into one centralized system, transform and process it, extract your features and do your sort of hardcore data science, which is the core piece that everyone sort of thinks about as the only piece, but that's kind of in the middle and it makes up a relatively small proportion of the overall chain. And once you've got that, you do model training and selection testing, and you now have to take that model, that machine-learning algorithm, and you need to deploy it into a real system to make real decisions. And that's not even the end of it because once you've got that, you need to close the loop, what we call the feedback loop, and you need to monitor the performance of that model in the real world. You need to make sure that it's not deteriorating, that it's adding business value. All of these kind of things. So I think that is the real, the piece of the puzzle that's missing at the moment is this end-to-end, delivering this end-to-end story and doing it at scale, securely, enterprise-grade. >> And the business impact of that presumably will be a better-quality experience. I mean, recommendation engines and fraud detection have been around for a while, they're just not that good. Retargeting systems are too little too late, and kind of cumbersome fraud detection. Still a lot of false positives. Getting much better, certainly compressing the time. It used to be six months, >> Yes, yes. >> Now it's minutes or seconds, but a lot of false positives still, so, but are you suggesting that by closing that gap, that we'll start to see from a consumer standpoint much better experiences? >> Well, I think that's imperative because if you don't see that from a consumer standpoint, then the mission is failing because ultimately, it's not magic that you just simply throw machine learning at something and you unlock business value and everyone's happy. You have to, you know, there's a human in the loop there. You have to fulfill the customer's need, you have to fulfill consumer needs, and the better you do that, the more successful your business is. You mentioned the time scale, and I think that's a key piece here. >> Yeah. >> What makes better decisions? What makes a machine-learning system better? Well, it's better data and more data, and faster decisions. So I think all of those three are coming into play with Apache Spark, the end-to-end story, streaming systems, and the models are getting better and better because they're getting more data and better data. >> So I think we've, the industry has pretty much attacked the time problem. Certainly for fraud detection and recommendation systems, the quality issue. Are we close? I mean, are we talking about 6-12 months before we really sort of start to see a major impact to the consumer and ultimately, to the company who's providing those services? >> Nick: Well, >> Or is it further away than that, you think? >> You know, it's always difficult to make predictions about timeframes, but I think there's a long way to go. To go from, yeah, as you mentioned where we are, the algorithms and the models are quite commoditized. The time gap to make predictions is kind of down to this real-time nature. >> Yeah. >> So what is missing? I think it's actually less about the traditional machine-learning algorithms and more about making the systems better and getting better feedback, better monitoring, so improving the end user's experience of these systems. >> Yeah. 
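[Editor's aside, not part of the interview: a minimal sketch of the kind of recommendation-engine training Nick describes, using Spark MLlib's ALS on Spark 2.x. The file path and the userId/itemId/rating column names are assumptions made for the example, not anything from IBM or the STC.]

```python
# Illustrative sketch: training a collaborative-filtering recommender with Spark MLlib ALS.
# The data source and column names below are assumed for the example.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("recommender-sketch").getOrCreate()

# Assumed schema: userId (int), itemId (int), rating (double)
ratings = spark.read.csv("ratings.csv", header=True, inferSchema=True)
train, test = ratings.randomSplit([0.8, 0.2], seed=42)

als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=10, maxIter=10, regParam=0.1)
model = als.fit(train)

# Score held-out data; drop rows the model cannot predict (cold-start users/items give NaN).
predictions = model.transform(test).na.drop()
rmse = RegressionEvaluator(metricName="rmse", labelCol="rating",
                           predictionCol="prediction").evaluate(predictions)
print("Held-out RMSE:", rmse)
```

[As the conversation notes, this training step is only the middle of the workflow; deployment and feedback monitoring sit around it.]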
>> And that's actually, I don't think it's, I think there's a lot of work to be done. I don't think it's a 6-12 month thing, necessarily. I don't think that in 12 months, certainly, you know, everything's going to be perfectly recommended. I think there's areas of active research in the kind of academic fields of how to improve these things, but I think there's a big engineering challenge to bring in more disparate data sources, to better, to improve data quality, to improve these feedback loops, to try and get systems that are serving customer needs better. So improving recommendations, improving the quality of fraud detection systems. Everything from that to medical imaging and cancer detection. I think we've got a long way to go. >> Would it be fair to say that we've done a pretty good job with traditional application lifecycle in terms of DevOps, but we now need the DevOps for the data scientists and their collaborators? >> Nick: Yeah, I think that's >> And where is IBM along that? >> Yeah, that's a good question, and I think you kind of hit the nail on the head, that the enterprise applied machine learning problem has moved from the kind of academic to the software engineering and actually, DevOps. Internally, someone mentioned the word train ops, so it's almost like, you know, the machine learning workflow and actually professionalizing and operationalizing that. So recently, IBM, for one, has announced Watson Data Platform and now, Watson Machine Learning. And that really tries to address that problem. So really, the aim is to simplify and productionize these end-to-end machine-learning workflows. So that is the product push that IBM has at the moment. >> George: Okay, that's helpful. >> Yeah, and right. I was at the Watson Data Platform announcement, you called it DataWorks. I think they changed the branding. >> Nick: Yeah. >> It looked like there were numerous components that IBM had in its portfolio that are now strung together to create that end-to-end system that you're describing. Is that a fair characterization, or is it underplaying, I'm sure it is, the work that went into it? But help us maybe understand that better. >> Yeah, I should caveat it by saying we're fairly focused, very focused at the STC on the Open Source side of things, so my work is predominately within the Apache Spark project and I'm less involved in the data bank. >> Dave: So you didn't contribute specifically to Watson Data Platform? >> Not to the product line, so, you know, >> Yeah, so it's really not an appropriate question for you? >> I wouldn't want to kind of, >> Yeah. >> To talk too deeply about it >> Yeah, yeah, so that, >> Simply because I haven't been involved. >> Yeah, that's, I don't want to push you on that because it's not your wheelhouse, but then, help me understand how you will commercialize the activities that you do, or is that not necessarily the intent? >> So the intent with the STC particularly is that we focus on Open Source, and a core part of that is that we, being within IBM, have the opportunity to interface with other product groups and customer groups. >> George: Right. >> So while we're not directly focused on, let's say, the commercial aspect, we want to effectively leverage the ability to talk to real-world customers and find the use cases, talk to other product groups that are building this Watson Data Platform and all the product lines and the features, Data Science Experience, it's all built on top of Apache Spark and the platform. 
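[Editor's aside, not part of the interview: to make the "productionize the end-to-end workflow" idea concrete, here is a minimal, hypothetical sketch of a Spark ML pipeline that is trained, evaluated, and persisted so a separate scoring job can reload it. The fraud-flavored column names, label, and save path are assumptions for illustration, not details of IBM's products.]

```python
# Illustrative sketch: a small end-to-end Spark ML pipeline, trained, evaluated, and saved.
# Schema, label, and paths are assumed for the example.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Assumed schema: amount (double), country (string), label (0/1, 1 = fraud)
txns = spark.read.parquet("transactions.parquet")
train, test = txns.randomSplit([0.8, 0.2], seed=42)

pipeline = Pipeline(stages=[
    StringIndexer(inputCol="country", outputCol="countryIdx", handleInvalid="skip"),
    VectorAssembler(inputCols=["amount", "countryIdx"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(train)

# Evaluate on held-out data (area under ROC by default).
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print("Held-out AUC:", auc)

# Persist the fitted pipeline (feature steps and model together) so it can be reloaded
# by a separate serving or monitoring job, closing the loop Nick describes.
model.write().overwrite().save("/tmp/fraud-pipeline")
reloaded = PipelineModel.load("/tmp/fraud-pipeline")
```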
>> Dave: So your role is really to innovate? >> Exactly, yeah. >> Leverage Open Source and innovate. >> Both innovate and kind of improve, so improve performance, improve efficiency. When you are operating at the scale of a company such as IBM and other large players, your customers and you as product teams and builders of products will come into contact with all the kind of little issues and bugs >> Right. >> And performance >> Make it better. Problems, yeah. And that is the feedback that we take on board and we try and make it better, not just for IBM and their customers. Because it's an Apache project and everyone benefits. So that's really the idea. Take all the feedback and learnings from enterprise customers and product groups and centralize that in the Open Source contributions that we make. >> Great. Would it be, so would it be fair to say you're focusing on making the core Spark, Spark ML and Spark MLlib capabilities, sort of the machine learning libraries and the pipelines, more robust? >> Yes. >> And if that's the case, we know there needs to be improvements in its ability to serve predictions in real time, like high speed. We know there's a need to take the pipeline and sort of share it with other tools, perhaps. Or collaborate with other tool chains. >> Nick: Yeah. >> What are some of the things that the Enterprise customers are looking for along those lines? >> Yeah, that's a great question and very topical at the moment. So both from an Open Source community perspective and an Enterprise customer perspective, this is one of the, if not the key, I think, kind of missing pieces within the Spark machine-learning kind of community at the moment, and it's one of the things that comes up most often. So it is a missing piece, and we as a community need to work together and decide, is this something that we build within Spark and provide that functionality? Is it something where we try and adopt open standards that will benefit everybody and that provides a kind of one standardized format, or way of serving models? Or is it something where there's a few Open Source projects out there that might serve for this purpose, and do we get behind those? So I don't have the answer because this is ongoing work, but it's definitely one of the most critical kind of blockers, or, let's say, areas that needs work at the moment. >> One quick question, then, along those lines. IBM, the first thing IBM contributed to the Spark community was Spark ML, which is, as I understand it, it was an ability to, I think, create an ensemble sort of set of models to do a better job or create a more, >> So are you referring to SystemML, I think it is? >> SystemML. >> SystemML, yeah, yeah. >> What are they, I forgot. >> Yeah, so, so. >> Yeah, where does that fit? >> SystemML started out as an IBM research project, and perhaps the simplest way to describe it is as a kind of SQL optimizer: a SQL optimizer is to take SQL queries and decide how to execute them in the most efficient way; SystemML takes a kind of high-level mathematical language and compiles it down to an execution plan that runs in a distributed system. So in much the same way as your SQL operators allow this very flexible and high-level language, you don't have to worry about how things are done, you just tell the system what you want done. SystemML aims to do that for mathematical and machine learning problems, so it's now an Apache project. It's been donated to Open Source and it's an incubating project under very active development. 
>> So I wonder if we can close, Nick, with just a few questions on the STC. So the Spark Technology Center in Cape Town, is that a global expertise center? Is the STC a virtual sort of IBM community, or?
>> I'm the only member based in Cape Town,
>> David: Okay.
>> So I'm fairly lucky from that perspective, to be able to live at home. The rest of the team is mostly in San Francisco, so there's an office there that's co-located with the Watson West office
>> Yeah.
>> And Watson teams
>> Sure.
>> That are based there on Howard Street, I think it is.
>> Dave: How often do you get there?
>> I'll be there next week.
>> Okay.
>> So typically, sort of two or three times a year, I try to get across there
>> Right.
>> And interface with the team,
>> So,
>> But we are a fairly, I mean, IBM is obviously a global company, and I've been surprised, actually pleasantly surprised: there are team members pretty much everywhere. Our team has a few scattered around, including me, but in general, when we interface with various teams, they pop up in all kinds of geographic locations, and I think it's great, you know, a huge diversity of people and locations.
>> Anything, I mean, it's early days here, early on day one, but anything you saw in the morning keynotes or things you hope to learn here? Anything that's excited you so far?
>> A couple of the morning keynotes, but I had to dash out to prepare. I'm doing a talk later, actually, on feature hashing for scalable machine learning. That's at 12:20, please come and see it.
>> Dave: A breakout session, it's at what, 12:20?
>> Twenty past 12:00, yeah.
>> Okay.
>> So in room 302, I think,
>> Okay.
>> I'll be talking about that, so I needed to prepare. But I think some of the key exciting things that I have seen, and that I would like to go and take a look at, are related to deep learning on Spark. That's been a hot topic recently, and it's one of the areas where, again, Spark perhaps hasn't been the strongest contender, let's say, but there's some really interesting work coming out of Intel, it looks like.
>> They're talking here on The Cube in a couple of hours.
>> Yeah.
>> Yeah.
>> I'd really like to see their work.
>> Yeah.
>> And that sounds very exciting, so yeah. I think every time I come to a Spark Summit there are always new projects from the community, various companies, some of them big, some of them startups, that are pushing the envelope, whether it's research projects in machine learning, whether it's adding deep learning libraries, whether it's improving performance for commodity clusters or for single, very powerful nodes. There are always people pushing the envelope, and that's what's great about being involved in an Open Source community project and being part of those communities. So yeah, that's one of the talks that I would like to go and see. And I think I, unfortunately, had to miss some of the Netflix talks on their recommendation pipeline. That's always interesting to see.
>> Dave: Right.
>> But I'll have to catch them on the video. (laughs)
>> Well, there's always another project in Open Source land. Nick, thanks very much for coming on The Cube, and good luck.
>> Cool, thanks very much. Thanks for having me.
>> Have a good trip, stay warm, hang in there. (Nick laughs) Alright, keep it right there. My buddy George and I will be back with our next guest. We're live. This is The Cube from Spark Summit East, #sparksummit. We'll be right back. (upbeat music) (gentle music)