Manya Rastogi, Dell Technologies & Abdel Bagegni, Telecom Infra Project | MWC Barcelona 2023
>> TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Spain, everybody. We're here at the Theater Live at MWC 23. You're watching theCUBE's continuous coverage. This is day two. I'm Dave Vellante with my co-host, Dave Nicholson. Lisa Martin is also in the house. John Furrier is out of our Palo Alto studio covering all the news. Check out siliconangle.com. Okay, we're going to dig into the core infrastructure here. We're going to talk a little bit about servers. Manya Rastogi is here. She's in technical marketing at Dell Technologies. And Abdel Bagegni is technical program manager at the Telecom Infra Project. Folks, welcome to theCUBE. Good to see you. >> Thank you. >> Abdel, what is the Telecom Infra Project? Explain it to our audience. >> Yeah. So the Telecom Infra Project is a US-based non-profit community that brings together different participants, suppliers, vendors, operators and SIs to accelerate the adoption of Open RAN and open interface solutions across the globe. >> Okay. So the mission is Open RAN adoption. And when was it formed? Give us the background and some of the milestones so far. >> Yeah. So the Telecom Infra Project was established five years ago by different vendor leaders and operators across the globe. And the mission was to bring different players in to work together to accelerate the adoption of Open RAN. Now, Open RAN has a lot of potential and opportunities, but at the same time there are challenges that we work together as a community to facilitate and overcome. >> And we've been covering all week the disaggregation of the network. And you know, we've seen this movie before, playing out now in telecom. And Manya, this is obviously a compute-intensive environment. We were at the Dell booth earlier this morning poking around, beautiful booth, lots of servers. Tell us what your angle is here in this marketplace. >> Yeah, so I would just like to say that Dell is leading and accelerating the innovation at the telecom edge with all these ruggedized servers that we are offering. So just continuing the mission, like Abdel just mentioned, for Open RAN, that's where a lot of the focus for these servers will be. So the XR8000 is going to be one of the star servers for telecom, supporting various workloads. It can run vRAN, Open RAN, multi-access edge compute. It has all these different features, and we can talk more about the performance gains, how it is based on the Intel CPUs, and how it tries to solve this challenge for Open RAN along with various vendors, the whole ecosystem. >> So Manya mentioned some of those infrastructure parts. And do you say TIP or T-I-P for short? >> Abdel: We say TIP. >> TIP. >> Abdel: T-I-P is fine as well. >> Does TIP have a certification process, or a set of guidelines that someone like Dell would either adhere to or follow to be sort of TIP certified? What does that look like? >> Yeah, of course. So what TIP does is accredit solutions that actually work in a real commercial-grade environment. We bring the different players together to come up with the most efficient, optimized solution. And then it goes through a process that the community sets the criteria for and accepts.
And then once this is accredited it goes into TIP Exchange for other operators, the participants and the industry to adopt. So it's a well-structured process, and it's all about how we orchestrate the industry to come together and set those requirements and guidelines. Everything starts with a use case from the beginning. It's based on operators' requirements and use cases, and then those use cases will be translated into a solution that the industry will approve. >> So when you say operator, I think of that traditionally as the customer side of things versus the vendor side of things. Typically when organizations get together like TIP, the operator customer side is seeking a couple of things. They want perfect substitutes in all categories so that they can grind vendors down from a price perspective, but they also want amazing innovation. How do you deliver both? >> Yeah, I mean that's an excellent question. We are pragmatic and we bring all players to one table to discuss. MNOs want this, vendors can provide a certain level, and we bring them together and they discuss and come up with something that can be deployed today and is future-proof for what comes next. >> So I've been an enterprise technology observer for a long time and, you know, I saw the attempt to take network function virtualization, which never really made much of an impact, but it was the beginning of the enterprise players really getting into this market. And then I would see companies, whether it was Dell or HPE or Cisco, take an x86 server, put a cool name on it, edge something, and throw it over the fence, and that didn't work so well. Now it's like, Manya, we're starting to get serious. You're building relationships. >> Manya: Totally. >> I mentioned we were at the Dell booth, you're actually building purpose-built systems now for this segment. Tell us what's different about this market and the products that you're developing for this market than, say, the commercial enterprise. >> So you are absolutely right. Thinking about the journey, it has been going on for a long time, all these improvements towards a more open, disaggregated environment, and it's what Dell brings together with our various partners, particularly Intel. So these servers are powered by the latest 4th Gen Intel Xeon processors. And what Intel is doing right now is providing us with great accelerators like vRAN Boost. So it increases performance, like doubling what it was able to do before. And power efficiency, it has been an issue for a long, long time and it still continues, but there is some improvement. For example, a 20% reduction overall with the power savings. So that's a step forward in that direction. And then we have done some of our own testing as well with these servers, and continuing that, it's not just telecom but also going towards edge or inferencing. All of these come together, not just the XR8000 but, for example, the XR5610 and the XR7620. So these are three servers which combine to cover telecom and edge altogether. So that's what it is. >> Great, thank you. So Abdel, I mean I think generally people agree that in the fullness of time all radio access networks are going to be open, right? It's just a matter of, okay, how do we get there? How do we make sure that it has the same, you know, quality of service characteristics?
So where are we on that journey from your perspective? And maybe you could project what it's going to look like over this decade, 'cause it's going to take, you know, years. >> It's going to take a bit of time to mature and become the kind of plug-and-play where you put different units together. I think there was a bit of over-promising in the last few years on the acceleration of Open RAN deployment. What TIP is trying to do is realize a pragmatic approach to Open RAN deployment. Now, we know that innovation cannot happen when you have closed interfaces. When you allow small players to be within the market and bring value to the RAN, this is where the innovation happens. I think what will happen on the RAN side of things is that it will be driven by use cases and the operators. The minute that the operators can no longer depend on the closed-interface vendors, because there are use cases to fulfill that require some Open RAN functionality, be it the RIC or the SMO layers and the different configurations of the RUs, getting the servers to the DU side of things, this kind of modular scalability at this layer is when Open RAN will take off. This would happen probably, yeah. >> Go ahead. >> Yeah, it would happen in the next few years. Not next year or the year after, but definitely something within four to five years from now. >> I think it does feel like it's the second half of the decade, and you feel like the RAN Intelligent Controller is going to be a catalyst to actually sort of force the world into this open environment. >> Let's say that the RIC, and the promises that were given to the SON 10 years ago, the RIC is realizing them, and the closed RAN vendors are developing a lot on the RIC side, more than on the other parts of Open RAN. So it will be a catalyst that would drive the innovation of Open RAN, but only time will tell. >> And there are some naysayers. I mean, I've seen some, you know, very, very few, but I've seen some works that say, oh, the economics aren't there, it'll never get there. What do you say to that? That Open RAN won't ever be as cost-effective as, you know, closed networks. >> Open RAN will open innovations, where small players have the opportunity to contribute to the RAN space. This opportunity is not given to small players today. Open RAN provides this kind of opportunity, and given that it's a path for innovation, I would say that, you know, there are different perspectives. Some people are making sure that the status quo is the way forward. But that would certainly put barriers on innovation, and this is not the way forward. >> Yeah. You can't protect the past from the future. My own personal opinion is that it doesn't have to be comparable from a TCO perspective, it can be close enough. It's the innovation, same thing with, like, you watch the adoption of cloud. >> Exactly. >> Like, cloud was more expensive, it's always more expensive to rent, but people seem to be doing public cloud, you know, because of the innovation capabilities and the developer capabilities. Is that a fair analogy in this space, do you think? >> I mean, this is what happens with all technologies. >> Yeah. >> Right? It starts out quite costly and then the cost will start dropping down.
I mean, the cost of a megabyte two decades ago is probably higher than what a terabyte costs today. So this is how technology evolves, and any kind of comparison, either copper or even the old generation, the legacy generations, could be a valid comparison. However, there needs to be a market demand for something like that. And I think the use cases today, with what the industry is looking for, have that kind of opportunity to pull this kind of demand. But again, it needs to work closely with what happens in the technology space. You know, when we used to talk about 5G, there was a lot of hype going on there. But I think once it's realized in a pragmatic, real-life situation, the minute that governments decide to go for autonomous vehicles, then you would have limitations on the current closed RAN infrastructures and you would definitely need something to top it up on the- >> I mean, 5G needs Open RAN. I mean, that's, you know, not going to happen without it. >> Exactly. >> Yeah, yeah. But what would you say the most significant friction is between here and the Open RAN nirvana? What are the real hurdles that need to be overcome? There's obviously just the, I don't want to change, we've been doing this the same way forever, but what are the real, legitimate concerns that people have when we start talking about Open RAN? >> So I think from a technology perspective it will be solved. I mean, there are smart engineers in the world today that will fix these kinds of problems and all of the interoperability issues and all of that. I think it's about the mindset. The interfaces between the legacy core and RAN have become more fluid today. We don't have that kind of hard line between these different aspects. We have the MEC coming closer to the RAN, we have the RAN coming closer to the core, and we have service-based architectures in the core. So these kinds of things mean it needs a paradigm shift in how operators tackle the Open RAN space. >> Are there specific deployment requirements for Open RAN that you can speak to from your perspective? >> For sure, and going in this direction, the evolution of the technology and how different players are coming together, that's something I wanted to comment on from the previous question. And that's where these servers that Dell is offering right now come in. Specific functionality requirements, for example: it's a small server, it's short depth, just 430 millimeters of depth, and it can fit anywhere. So things like small form factor are crucial, because it can replace multiple servers from 10 years ago with just one server, and you can place it near a baseband unit or at a cell site, on top of a roof, wherever. You know, if it's a small company and you need this kind of 5G connection, it kind of solves that challenge with this server. And then there are various things like increasing thermals, for example, temperatures. It is compliant with negative 5 to 55 degrees Celsius. And then we are also moving towards, for example, negative 20 to 65 degrees Celsius. Which is kind of great, because in situations that are out of our hands and you need specific thermals, that's where it can solve that problem.
>> Are those statistics and those measurements different from the old NEBS standards, the network equipment building standards? Or are they in line with that? >> It is a next step. So most of our servers that we have right now are negative 5 to 55 degrees Celsius, especially the extremely rugged server series and this one, the XR8000, which is telecom-inspired, so it's focused on those customers. So we are trying to go a step ahead and also offer this additional temperature testing and compliance. So it is. >> Awesome. So, I said we were at the booth early today. Looks like some good traffic, people poking around at different, you know, innovations you've got going. Some of the private network stuff is kind of cool. I'm like, how much does that cost? I think I might like one of those, you know, but- >> Private 5G home network. >> Right? Why not? Guys, great to have you on the show. Thanks so much for sharing. Appreciate it. >> Thank you. >> Thank you so much. >> Okay. For Dave Nicholson and Lisa Martin, this is Dave Vellante, theCUBE's coverage, MWC 23, live from the Fira in Barcelona. We'll be right back. (outro music)
Breaking Analysis: NFTs, Crypto Madness & Enterprise Blockchain
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> When a piece of digital art sells for $69.3 million, more than has ever been paid for works by Gauguin or Salvador Dali, making its creator the third most expensive living artist in the world, one can't help but take notice and ask, what is going on? The latest craze around NFTs may feel a bit bubblicious, but it's yet another sign that the digital age is now fully upon us. Hello and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we want to take a look at some of the trends that may be difficult for observers and investors to understand, but we think they offer significant insights into the future and possibly some opportunities for young investors, many of whom are fans of this program. And how the trends may relate to enterprise tech. Okay, so this guy Beeple is now the hottest artist on the planet. That's his Twitter profile, that picture on the inset. His name is Mike Winkelmann. He is actually a normal looking dude, but that's the picture he chose for his Twitter. This collage reminds me of the Million Dollar Homepage. You may already know the story, but many of you may not. Back in 2005 a college kid from England named Alex Tew, T-E-W, created The Million Dollar Homepage to fund his education. And his idea was to create a website with a million pixels, and sell ads at a dollar for each pixel. Guess how much money he raised. A million bucks, right? No, wrong. He raised $1,037,100. How so, you ask? Well, he auctioned off the last 1,000 pixels on eBay, which fetched an additional $38,000. Crazy, right? Well, maybe not. Pretty creative, and a way, way early sign of things to come. Now, I'm not going to go deep into NFTs and explain the justification behind them. There's a lot of material that's been published that can do justice to the topic better than I can. But here are the basics. NFT stands for Non-Fungible Token. They are digital representations of assets that exist on a blockchain. Now, each token has a unique and immutable identifier, and it uses cryptography to ensure its authenticity. NFTs, as the name says, are not fungible. So, unlike Bitcoin, Ethereum or other cryptocurrencies, which can be traded on a like-for-like basis, in other words, if you and I each own one bitcoin we know exactly how much each of our bitcoins is worth at any point in time, Non-Fungible Tokens each have their own unique values. So, they're not comparable on a like-to-like basis. But what's the point of this? Well, NFTs can be applied to any property, identities, tweets, videos, we're seeing collectables, digital art, pretty much anything. The use cases are unlimited. And NFTs can streamline transactions, and they can be bought and sold very efficiently without the need for a trusted third party involved. Now, the other benefit is the probability of fraud is greatly reduced. So where do NFTs fit as an asset class? Well, they're definitely a new type of asset. And again, I'm not going to try to justify their existence, but I want to talk about the choices that investors have in the market today. The other day, I was on a call with Jay Po. He is a VC and a Principal at a company called Stage 2 Capital. He's a former Bessemer VC and one of the sharper investors around. And he was talking about the choices that investors have, and he gave a nice example that I want to share with you and try to apply here.
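Now, if you want to make that fungible versus non-fungible distinction concrete, here's a little sketch. To be clear, this is a toy, in-memory illustration in Python, not the ERC-721 standard or anything running on an actual chain, and the token ID, owner addresses and asset bytes are all made up. But it shows the three properties that matter: a unique identifier, a single owner of record, and a cryptographic fingerprint that anyone can verify.

```python
import hashlib


class SimpleNFTRegistry:
    """Toy illustration of non-fungible tokens: each token ID is unique,
    maps to exactly one owner, and carries a content hash so the asset's
    authenticity can be checked later."""

    def __init__(self):
        self.owner_of = {}          # token_id -> owner address
        self.content_hash_of = {}   # token_id -> fingerprint of the asset

    def mint(self, token_id, owner, asset_bytes):
        if token_id in self.owner_of:
            raise ValueError("token IDs are unique; this one already exists")
        self.owner_of[token_id] = owner
        self.content_hash_of[token_id] = hashlib.sha256(asset_bytes).hexdigest()

    def transfer(self, token_id, current_owner, new_owner):
        if self.owner_of.get(token_id) != current_owner:
            raise PermissionError("only the current owner can transfer")
        self.owner_of[token_id] = new_owner

    def verify(self, token_id, asset_bytes):
        # Anyone can confirm the token still points at the original work.
        return self.content_hash_of.get(token_id) == hashlib.sha256(asset_bytes).hexdigest()


registry = SimpleNFTRegistry()
registry.mint("beeple-5000-days", "0xWinkelmann", b"<collage image bytes>")
registry.transfer("beeple-5000-days", "0xWinkelmann", "0xCollector")
print(registry.verify("beeple-5000-days", b"<collage image bytes>"))  # True
```

On a real blockchain, that ownership table is replicated and agreed on by the whole network rather than sitting in one process, and that's what removes the trusted third party.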
Now, as an investor, you have alternatives, of course we're showing here a few with their year to date charts. Now, as an example, you can buy Amazon stock. Now, if you bought just about exactly a year ago you did really well, you probably saw around an 80% return or more. But if you want to jump in today, your mindset might be, hmm, well, okay. Amazon, they're going to be around for a long time, so it's kind of low risk and I like the stock, but you're probably going to get, well let's say, maybe a 10% annual return over the longterm, 15% or maybe less maybe single digits, but, maybe more than that but it's unlikely that any kind of reasonable timeframe within any reasonable timeframe you're going to get a 10X return. In order to get that type of return on invested capital, Amazon would have to become a $16 trillion valued company. So, you sit there, you asked yourself, what's the probability that Amazon goes out of business? Well, that's pretty low, right? And what are the chances it becomes a $16 trillion company over the next several years? Well, it's probably more likely that it continues to grow at that more stable rate that I talked about. Okay, now let's talk about Snowflake. Now, as you know, we've covered the company quite extensively. We watched this company grow from an early stage startup and then saw its valuation increase steadily as a private company, but you know, even early last year it was valued around $12 billion, I think in February, and as late as mid September right before the IPO news hit that Marc Benioff and Warren Buffett were going to put in $250 million each at the IPO or just after the IPO and it was projected that Snowflake's valuation could go over $20 billion at that point. And on day one after the IPO Snowflake, closed worth more than $50 billion, the stock opened at 120, but unless you knew a guy, you had to hold your nose and buy on day one. And you know, maybe got it at 240, maybe you got it at 250, you might have got it at higher and at the time you might recall, I said, You're likely going to get a better price than on day one, which is usually the case with most IPOs, stock today's around 230. But you look at Snowflake today and if you want to buy in, you look at it and say, Okay, well I like the company, it's probably still overvalued, but I can see the company's value growing substantially over the next several years, maybe doubling in the near to midterm [mumbles] hit more than a hundred billion dollar valuation back as recently as December, so that's certainly feasible. The company is not likely to flame out because it's highly valued, I have to probably be patient for a couple of years. But you know, let's say I liked the management, I liked the company, maybe the company gets into the $200 billion range over time and I can make a decent return, but to get a 10X return on Snowflake you have to get to a valuation of over a half a trillion. Now, to get there, if it gets there it's going to become one of the next great software companies of our time. And you know, frankly if it gets there I think it's going to go to a trillion. So, if that's what your bet is then you know, you would be happy with that of course. But what's the likelihood? As an investor you have to evaluate that, what's the probability? 
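Just to make that 10X math explicit, here's a quick back-of-the-envelope calculation. The current valuations are the rough, rounded figures from the discussion above, so treat them as illustrative inputs, not price targets.

```python
def required_valuation(current_valuation, target_multiple):
    """Valuation an asset must reach for a buyer at today's price to earn
    the target multiple (ignoring dilution, fees and taxes)."""
    return current_valuation * target_multiple


# Rough, rounded figures from the discussion above, in US dollars.
assets = {
    "Amazon (~$1.6T today)":    1.6e12,
    "Snowflake (~$55B today)":  55e9,
    "Compound (~$2B today)":    2e9,
}

for name, value in assets.items():
    needed = required_valuation(value, 10)
    print(f"{name}: needs roughly ${needed / 1e9:,.0f}B for a 10X return")
```

Same arithmetic in every case, very different probabilities of getting there, and that's really the whole decision.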
So, it's a lower risk investment in Snowflake, but maybe more likely that Snowflake, you know, they run into competition or the market shifts. Maybe they get into the $200 billion range, but it really has to transform the industry and execute for you to get into that 10-bagger territory. Okay, now let's look at a different asset, a cryptocurrency called Compound, way more risky. Compound is a decentralized protocol that allows you to lend and borrow cryptocurrencies. Now, I'm not saying go out and buy Compound, but just as a thought exercise, it's an asset with a lower valuation, probably much higher upside, but much higher risk. So for Compound to get to a 10X return it's got to get to a $20 billion valuation. Now, maybe Compound isn't your cup of tea, but there are many cryptos that have made it that far, and if you do your research and your homework you could find a project that's much, much earlier stage that, yes, is higher risk but has a much higher upside that you can participate in. So, this is how investors, all investors, really look at their choices and make decisions. And the more sophisticated investors, they're going to use detailed metrics and analyze things like MOIC, Multiple on Invested Capital, and IRR, which is Internal Rate of Return, do TAM analysis, Total Available Market. They're going to look at competition. They're going to look at detailed company models, ARR and churn rates and so forth. But one of the things we really want to talk about today, and we brought this up at the Snowflake IPO, is if you were Buffett or Benioff and you had, you know, a quarter of a billion dollars to put in, you could get an almost guaranteed return with your late-in-the-game but pre-IPO money. Or look, if you were Mike Speiser or one of the earlier VCs, or even someone like Jeremy Burton who was part of the inside network, you could get stock or options much cheaper. You get a 5X, 10X, 50X or even north of a hundred X return like the early VCs who took a big risk. But chances are, you're not in one of these categories. So how can you as a little guy participate in something big? And you might remember at the time of the Snowflake IPO we showed you this picture. Who are these people? Olaf Carlson-Wee, Chris Dixon, this girl Sono. And of course Tim Berners-Lee. You know, these are some of the folks that inspired me personally to pay attention to crypto. And I want to share the premise that caught my attention. It was this. Think about the early days of the internet. If you saw what Berners-Lee was working on, or Linus Torvalds, and wanted to invest in the internet, you really couldn't. I mean, you couldn't invest in Linux or TCP/IP or HTTP. I suppose you could have invested in Cisco after its IPO, that would have paid off pretty big time, for sure. You know, or you could have waited for the Netscape IPO, but the core infrastructure of the internet was fundamentally not directly a candidate for investment by you or really, you know, by anybody. And Satya Nadella said the other day we have reached maximum centralization. The main protocols of the internet were largely funded by the government and they've been co-opted by the giants. But with crypto, you actually can invest in core infrastructure technologies that are building out a decentralized internet, a new internet, you know, call it Web 3.0. It's a big part of the investment thesis behind what Carlson-Wee is doing. And Andreessen Horowitz, they have two crypto funds.
They've raised more than $800 million to invest, and you should read the firm's crypto investment thesis and maybe even take their crypto startup classes; there's some great content there. Now, one of the people that I haven't mentioned in this picture is Camila Russo. She's a journalist turned hardcore crypto author who is doing a great job explaining the white-hot DeFi space, or decentralized finance. Just go read her work and educate yourself and learn more about the future, and perhaps you'll find some 10X or even hundred X opportunities. So look, there's so much innovation going on around blockchain and crypto. I mean, you could listen to Warren Buffett and Janet Yellen, who implied this is all going to end badly. But look, while these individuals are smart people, I don't think they would be my go-to source on understanding the potential of the technology and the future of what it could bring. Now, we talked earlier at the start here about NFTs. DeFi is one of the most interesting and disruptive trends in FinTech, names like Celsius, Nexo, BlockFi. BlockFi actually lets the average person participate in liquidity pools, which is quite interesting. Crypto is going mainstream, Tesla and MicroStrategy putting Bitcoin on their balance sheets. In 2017 Jamie Dimon called Bitcoin a tulip-bulb-like fraud, yet just the other day JPM announced a structured investment vehicle to give its clients a basket of stocks that have exposure to crypto. PayPal is allowing customers to buy, sell, and HODL crypto. You can trade crypto on Robinhood. Central banks are talking about launching digital currencies. I talked about the Fedcoin for a number of years, and why not? Coinbase is doing an IPO that will give it a value of over a hundred billion. Wow, that sounds frothy, but still, big names like Mark Cuban and Chamath Palihapitiya have been active in crypto for a while. Gronk is getting into NFTs. So it's starting to have a little bit of that bubble feel to it. But look, often when tech bubbles burst they shake out the pretenders, but if there's real tech involved, some contenders emerge. And they often do so as dominant players. And I really believe that the innovation around crypto is going to be sustained. Now, there is a new web being built out. So if you want to participate, you've got to do some research, figure out things like how Polkadot works, make a call on whether you think Avalanche is an Ethereum killer, dig in and find out about new projects and form a thesis. And you may, as a small player, be able to find some big winners, but look, you do have to be careful. There was a lot of fraud during the ICO craze, so there is your risk. So understand the tokenomics and, maybe as importantly, the pump-a-nomics, because they certainly loom as dangers. This is not for the faint of heart, but because I believe it involves real tech, I like it way better than Reddit stocks like GameStop, for example. Now, not to diss Reddit. There's some good information on Reddit. If you're patient, you can find it. And there's lots of good information flowing on Discord. There are people flocking to Telegram as a hedge against big tech. Maybe this all sounds crazy. And you know what, if you've grown up in a privileged household and you have a US education, you know, maybe it is nuts and a bit too risky for you.
But if you're one of the many people who haven't been able to participate in these elite circles, there are things going on, especially outside of the US, that are democratizing investment opportunities. And I think that's pretty cool. You just have to be careful. So, this is a bit off topic from our typical focus and ETR survey analysis. So let's bring this back to the enterprise, because there's a lot going on there as well with blockchain. Now let me first share some quotes on blockchain from a few ETR Venn Roundtables. The first comment is from a CIO at a diversified holdings company who says, correctly, blockchain will hit the finance industry first, but there are use cases in healthcare, given the privacy and security concerns, and in logistics to ensure provenance and reduce fraud. And to that individual's point about finance, this is from the CTO of a major financial platform: we're really taking a look at payments. Yeah. Do you think traditional banks are going to lose control of the payment systems? Well, not without a fight, I guess, but look, there are some real disruption possibilities here. And just a last comment, from a government CIO: we're going to wait until the big platform players get it into their software. And so that is happening. Oracle, IBM, VMware, Microsoft, AWS, Cisco, they all have blockchain initiatives going on. Now, by the way, none of these tech companies wants to talk about crypto. They try to distance themselves from that topic, which is understandable, I guess, but I'll tell you, there's far more innovation going on in crypto than there is in enterprise tech companies at this point. But I predict that the crypto innovations will absolutely be seeping into enterprise tech players over time. But for now the cloud players, they want to support developers who are building out this new internet. The database is certainly a logical place to support immutable transactions, which allow people to do business one-on-one and have total confidence that the source hasn't been hacked or changed, and infrastructure to support smart contracts. We've seen that. The use cases in the enterprise are endless: asset tracking, data access, food tracking, maintenance, KYC or know your customer. There are applications in different industries, telecoms, oil and gas, on and on and on. So look, think of NFTs as a signal. Crypto craziness is a signal. It's a signal as to how IT, other parts of companies and their data might be organized, managed, tracked and protected, and very importantly, valued. Look, today there's a lot of memes. CryptoKitties, art, of course money as well. Money is the killer app for blockchain, but in the future the underlying technology of blockchain and the many percolating innovations around it could become, I think will become, a fundamental component of a new digital economy. So get on board, do some research and learn for yourself. Okay, that's it for today. Remember, all of these episodes are available as podcasts, wherever you listen. I publish weekly on wikibon.com and siliconangle.com. Please feel free to comment on my LinkedIn posts or tweet me @dvellante or email me at david.vellante@siliconangle.com. Don't forget to check out etr.plus for all the survey action and data science. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, be careful out there in crypto land. Thanks for watching. We'll see you next time. (soft music)
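One footnote on those enterprise use cases: they all lean on the same property, immutability, the confidence that a recorded transaction can't be quietly altered after the fact. Here's a stripped-down sketch of why a hash-chained ledger makes tampering detectable. It's a toy, single-process example with no consensus, signatures or distribution, and the asset and party names are made up, but the core idea carries over to asset tracking, provenance and KYC records.

```python
import hashlib
import json


def block_hash(block):
    # Deterministic fingerprint of a block's full contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain, transaction):
    """Each new block commits to the hash of the previous one, so editing any
    earlier record breaks every link that follows it."""
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"prev_hash": prev, "tx": transaction})
    return chain


def verify_chain(chain):
    # Recompute every link; any rewritten history shows up as a mismatch.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True


ledger = []
append_block(ledger, {"asset": "pallet-42", "from": "factory", "to": "warehouse"})
append_block(ledger, {"asset": "pallet-42", "from": "warehouse", "to": "retailer"})
print(verify_chain(ledger))               # True

ledger[0]["tx"]["to"] = "somewhere-else"  # tamper with history
print(verify_chain(ledger))               # False
```

Real blockchains add distribution and consensus on top of this so no single party can rewrite and re-hash the whole chain themselves, which is exactly the guarantee those enterprise provenance and payments use cases are after.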
VxRail Taking HCI to Extremes, Dell Technologies
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman, and welcome to this special presentation. We have a launch from Dell Technologies, updates to the VxRail family. We're going to do things a little bit different here. We actually have a launch video from Shannon Champion of Dell Technologies, and the way we do things a lot of times is analysts get a little preview, or when you're watching things you might have questions on it. So rather than me just having you watch it yourself, I actually brought in a couple of Dell Technologies experts, two of our CUBE alumni. Happy to welcome back to the program Jon Siegal, he is the vice president of product marketing, and Chad Dunn, who's the vice president of product management, both of them with Dell Technologies. Gentlemen, thanks so much for joining us. >> It's great to be here. >> All right, and so what we're going to do is we're going to be rolling the video here. I've got a button I'm going to press, Andrew will stop it, and then we'll kind of dig in a little bit, go into some questions. When we're all done, we're actually holding a CrowdChat, where you will be able to ask your questions, talk to the experts and everything. And so, a little bit different way to do a product announcement. Hope you enjoy it, and with that, it's VxRail, taking HCI to the extremes, that is the theme. We'll see, you know, what that means and everything, but without any further ado, let's have Shannon take the video away. >> Hello and welcome. My name is Shannon Champion, and I'm looking forward to taking you through what's new with VxRail. Let's get started. We have a lot to talk about. Our launch covers new announcements addressing use cases across the core, edge and cloud, and spans both new hardware platforms and options as well as the latest in software innovations. So let's jump right in. Before we talk about our announcements, let's talk about where customers are adopting VxRail today. First of all, on behalf of the entire Dell Technologies and VxRail teams, I want to thank each of our over 8,000 customers, big and small, in virtually every industry, who have chosen VxRail to address a broad range of workloads, deploying nearly a hundred thousand nodes to date. Thank you. Our promise to you is that we will add new functionality, improve serviceability and support new use cases so that we deliver the most value to you, whether in the core, at the edge or for the cloud. In the core, VxRail from day one has been a catalyst to accelerate IT transformation. Many of our customers started here, and many will continue to leverage VxRail to simply extend and enhance your VMware environment. Now we can support even more demanding applications, such as in-memory databases like SAP HANA, and more AI and ML applications, with support for more and more powerful GPUs. At the edge, video surveillance, which also uses GPUs by the way, is an example of a popular use case leveraging VxRail alongside external storage. And right now we all know the enhanced role that IT is playing, and as it relates to VDI, VxRail has always been a great option for that. In the cloud, it's all about Kubernetes, and how Dell Technologies Cloud Platform, which is VCF on VxRail, can deliver consistent infrastructure for both traditional and cloud-native applications, and we're doing that together with VMware. VxRail is the only jointly engineered HCI system built with VMware for VMware environments, designed to enhance the
native VMware experience this joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers all right so Shannon talked a bit about you know the important role of IP of course right now with the global pandemic going on it's really you know calling in you know essential things you know putting you know platforms to the test so I'd really love to hear what both of you are hearing from customers also you know VDI of course you know in the early days it was HDI only does VDI now we know there are many solutions but remote work is you know putting that back front and center so John why don't we start with you is you know what you're absolutely so first of all us - thank you I want to do a shout out to our BX real customers around the world it's really been humbling inspiring and just amazing to see the impact of our bx real customers around the world and what they're having on on human progress here you know just for a few examples there are genomics companies that we have running the X rail that have a row about testing at scale we also have research universities out in the Netherlands on doing the antibody detection the US Navy has stood up a hosta floating Hospital >> of course care for those in need so look we are here to help that's been our message to our customers but it's amazing to see how much they're helping society during this so just just a pleasure there but as you mentioned just to hit on the the VDI comments so it's your points do you know HCI and vxr8 EDI that was initially use case years ago and it's been great to see how many of our existing VX real customers have been able to inhibit very quickly leveraging via trail to add and to help bring their remote workforce you know online and support them with your existing VX rail because V it really is flexible it is agile to be able to support those multiple workloads and in addition to that we've also rolled out some new VDI bundles to make it simpler for customers more cost-effective catered to everything from knowledge workers to multimedia workers you name it you know from 250 desktops up to a thousand but again back to your point BX rail ci is well beyond video it had crossed the chasm a couple years ago actually and you know where VDI now is less than a third of the typical workloads any of our customers out there it supports now a range of workloads as you heard from Shannon whether it's video surveillance whether it's general purpose only to mission-critical applications now with SAV ha so you know this is this has changed the game for sure but the range of workloads and the flexibility of yet rail is what's really helping our existing customers from this pandemic we've seen customers really embrace HCI for a number of workloads in their environments from the ones that we serve all knew and loved back in the the initial days of of HCI now the mission-critical things now to cloud native workloads as well and you know sort of the efficiencies that customers are able to get from HCI and specifically VX rail gives them that ability to pivot when these you know shall we say unexpected circumstances arise and I think if that's informing their their decisions and their opinions on what their IT strategies look like as they move forward they want that same level of agility and the ability to react quickly with our overall infrastructure excellent want to get into the announcements what I want my team actually your team gave me access to 
the CIO from the city of Amarillo so maybe they can dig up that footage talk about how fast they pivoted you know using VX rail to really spin up things fast so let's hear from the announcements first and then definitely want to share that that customer story a little bit later so let's get to the actual news that and it's gonna share okay now what's new I am pleased to announce a number of exciting updates and new platforms to further enable IT modernization across core edge and cloud I will cover each of these announcements in more detail demonstrating how only the X rail can offer the breadth of platform configurations automation orchestration and lifecycle management across a fully integrated hardware and software full stack with consistent simple side operations to address the broadest range of traditional and modern applications I'll start with hybrid cloud and recap what you may have seen in the Dell technologies cloud announcements just a few weeks ago related to VMware cloud foundation on the X rail then I'll cover two brand new VX rail hardware platforms and additional options and finally circle back to talk about the latest enhancements to our VX rail HCI system software capabilities for lifecycle management let's get started with our new cloud offerings based on the ex rail you xrail is the HCI foundation for dell technologies cloud platform bringing automation and financial models similar to public cloud to on-premises environments VMware recently introduced cloud foundation for dotto which is based on vSphere 7 as you likely know by now vSphere 7 was definitely an exciting and highly anticipated release in keeping with our synchronous release commitment we introduced the XR l 7 based on vSphere 7 in late April which was within 30 days of VMware's release two key areas that VMware focused on were embedding containers and kubernetes into vSphere unifying them with virtual machines and the second is improving the work experience for vSphere administrators with vSphere lifecycle manager or VL CM I'll address the second point a bit in terms of how the X rail fits in in a moment for V cf4 with tansu based on vSphere 7 customers now have access to a hybrid cloud platform that supports native kubernetes workloads and management as well as your traditional vm based workloads and this is now available with VCF 4 on the ex rel 7 the X rails tight integration with VMware cloud foundation delivers a simple and direct path not only to the hybrid cloud but also to deliver kubernetes a cloud scale with one complete automated platform the second cloud announcement is also exciting recent VCF for networking advancements have made it easier than ever to get started with hybrid cloud because we're now able to offer a more accessible consolidated architecture and with that Dell technologies cloud platform can now be deployed with a four node configuration lowering the cost of an entry-level hybrid cloud this enables customers to start smaller and grow their cloud deployment over time VCF on the x rail can now be deployed in two different ways for small environments customers can utilize a consolidated architecture which starts with just four nodes since the management and workload domains share resources in this architecture it's ideal for getting started with an entry-level cloud to run general-purpose virtualized workloads with a smaller entry point both in terms of required infrastructure footprint as well as cost but still with a consistent cloud operating model for larger environments we're 
dedicated resources and role based access control to separate different sets of workloads is usually preferred you can choose to deploy a standard architecture which starts at 8 nodes for independent management and workload domains a standard implementation is ideal for customers running applications that require dedicated workload domains that includes horizon VDI and vSphere with kubernetes all right John there's definitely been a lot of interest in our community around everything that VMware's doing with vSphere 7 understand if you wanted to use the kubernetes piece you know it's it's VCF as that so we you know we've seen the announcements delt partnering there helped us connect that story between you know really the the VMware strategy and how they've talked about cloud and how you know where does the X rail fit in that overall Delta cloud story absolutely so so first of all is through the x-ray of course is integral to the Delta cloud strategy you know it's been VCF on bx r l equals the delta cloud platform and this is our flagship on-prem cloud offering that we've been able to enable operational consistency across any cloud right whether it's on prem in the edge or in a public cloud and we've seen the delta cloud platform embraced by customers for a couple key reasons one is it offers the fastest hybrid cloud deployment in the market and this is really you know thanks to a new subscription on offer that we're now offering out there we're at less than 14 days it can be set up and running and really the deltek cloud does bring a lot of flexibility in terms of consumption models overall comes to the extra secondly I would say is fast and easy upgrades I mean this is this is really this is what VX real brings to the table for all our clothes if you will and it's especially critical in the cloud so the full automation of lifecycle management across the hardware and software stack boss the VMware software stack and in the Dell software however we're supporting that together this enables essentially the third thing which is customers can just relax right they can be rest assured that their infrastructure will be continuously validated and always be in a continuously validated state and this this is the kind of thing that you know those three value propositions together really fit well with with any on print cloud now you take what Shannon just mentioned and the fact that now you can build and run modern applications on the same the x-ray link structure alongside traditional applications this is a game changer yeah it I love you know I remember in the early days that about CI how does that fit in with cloud discussion and align I've used the last couple years this you know modernize the platform then you can modernize the application though as companies are doing their full modernization this plays into what you're talking about all right let's get you know can't let ran and continue get some more before we dig into some more analysis that's good let's talk about new hardware platforms and updates that result in literally thousands of potential new configuration options covering a wide breadth of modern and traditional application needs across a range of the actual use cases first up I am incredibly excited to announce a brand new delhi MCB x rail series the DS series this is a ruggedized durable platform that delivers the full power of the x rail for workloads at the edge in challenging environments or for space constrained areas the X ray LD series offers the same compelling benefits as 
the rest of the BX rail portfolio with simplicity agility and lifecycle management but in a lightweight short depth at only 20 inches it's a durable form factor that's extremely temperature resilient shock resistant and easily portable it even meets mil spec standards that means you have the full power of lifecycle automation with VX rail HCI system software and 24 by 7 single point of support enabling you to rapidly react to business needs no matter the location or how harsh the conditions so whether you're deploying a data center at a mobile command base running real-time GPS mapping on-the-go or implementing video surveillance in remote areas you can ensure availability integrity and confidence for every workload with the new VX Rail ruggedized D series had would love for you to bring us in a little bit you know that what customer requirement bringing bringing this to market I I remember seeing you know Dell servers ruggedized of course edge you know really important growth to build on what John was talking about clouds so yeah Chad bring us inside what was driving this piece of the offering sure Stu yeah you know having the the hardware platforms that can go out into some of these remote locations is really important and that's being driven by the fact that customers are looking for compute performance and storage out at some of these edges or some of the more exotic locations you know whether that's manufacturing plants oil rigs submarine ships military applications in places that we've never heard of but it's also been extending that operational simplicity of the the sort of way that you're managing your data center that has VX rails you're managing your edges the same way using the same set of tools so you don't need to learn anything else so operational simplicity is is absolutely key here but in those locations you can take a product that's designed for a data center where you're definitely controlling power cooling space and take it to some of these places where you get sand blowing or sub-zero temperatures so we built this D series that was able to go to those extreme locations with extreme heat extreme cold extreme altitude but still offer that operational simplicity if you look at the the resistance that it has to heat it can go from around operates at a 45 degrees Celsius or 113 degrees Fahrenheit range but it can do an excursion up to 55 °c or 131 degrees Fahrenheit for up to eight hours it's also resisted the heats and dust vibration it's very lightweight short depth in fact it's only 20 inches deep this is a smallest form factor obviously that we have in the BX rail family and it's also built to to be able to withstand sudden shocks it's certified it was stand 40 G's of shock and operation of the 15,000 feet of elevation it's pretty high and you know this is this is sort of like where were skydivers go to when they weren't the real real thrill of skydiving where you actually the oxygen to to be a put that out to their milspec certified so mil-std 810g which i keep right beside my bed and read every night and it comes with a VX rail stick hardening package is packaging scripts so that you can auto lock down the rail environment and we've got a few other certifications that are on the roadmap now for for naval chakra quirements EMI and radiation immunity of all that yeah you know it's funny I remember when weights the I first launched it was like oh well everything's going to white boxes and it's going to be you know massive you know no differentiation between everything out 
VxRail: Taking HCI to Extremes
>> Announcer: From theCube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi, I'm Stu Miniman, and welcome to this special presentation. We have a launch from Dell Technologies: updates from the VxRail family. We're going to do things a little bit different here. We actually have a launch video from Shannon Champion of Dell Technologies. And the way we do things a lot of times is, analysts get a little preview, or when you're watching things you might have questions on it. So rather than me just watching it, or you watching it yourself, I actually brought in a couple of Dell Technologies experts, two of our Cube alumni. Happy to welcome you back to the program: Jon Siegal, he is the Vice President of Product Marketing, and Chad Dunn, who's the Vice President of Product Management, both of them with Dell Technologies. Gentlemen, thanks so much for joining us. >> Good to see you, Stu. >> Great to be here. >> All right, and so what we're going to do is we're going to be rolling the video here. I've got a button I'm going to press, Andrew will stop it there, and then we'll dig in a little bit and go into some questions when we're all done. We're also holding a CrowdChat, where you will be able to ask your questions, talk to the experts and everything. So it's a little bit different way to do a product announcement. Hope you enjoy it. And with that, it's VxRail: Taking HCI to the Extremes is the theme. We'll see what that means and everything. But without any further ado, let's let Shannon take the video away. >> Hello, and welcome. My name is Shannon Champion, and I'm looking forward to taking you through what's new with VxRail. Let's get started. We have a lot to talk about. Our launch covers new announcements addressing use cases across the Core, Edge and Cloud, and spans both new hardware platforms and options as well as the latest in software innovations. So let's jump right in. Before we talk about our announcements, let's talk about where customers are adopting VxRail today. First of all, on behalf of the entire Dell Technologies and VxRail teams, I want to thank each of our over 8,000 customers, big and small, in virtually every industry, who've chosen VxRail to address a broad range of workloads, deploying nearly 100,000 nodes today. Thank you. Our promise to you is that we will add new functionality, improve serviceability, and support new use cases, so that we deliver the most value to you, whether in the Core, at the Edge or for the Cloud. In the Core, VxRail from day one has been a catalyst to accelerate IT transformation. Many of our customers started here, and many will continue to leverage VxRail to simply extend and enhance their VMware environment. Now we can support even more demanding applications, such as in-memory databases like SAP HANA, and more AI and ML applications, with support for more and more powerful GPUs. At the Edge, video surveillance, which also uses GPUs, by the way, is an example of a popular use case leveraging VxRail alongside external storage. And right now we all know the enhanced role that IT is playing. And as it relates to VDI, VxRail has always been a great option for that. In the Cloud, it's all about Kubernetes, and how Dell Technologies Cloud Platform, which is VCF on VxRail, can deliver consistent infrastructure for both traditional and Cloud native applications. And we're doing that together with VMware.
VxRail is the only jointly engineered HCI system built with VMware for VMware environments, designed to enhance the native VMware experience. This joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers. >> Alright, so Shannon talked a bit about the important role of IT. Of course right now, with the global pandemic going on, it's really calling on IT for essential things and putting platforms to the test. So I'd really love to hear what both of you are hearing from customers. Also VDI, of course: in the early days it was HCI-only-does-VDI. Now we know there are many solutions, but remote work is putting that back front and center. So, Jon, why don't we start with you? (muffled speaking) >> Absolutely. So first of all, Stu, thank you. I want to do a shout out to our VxRail customers around the world. It's really been humbling, inspiring, and just amazing to see the impact of our VxRail customers around the world and what they're having on human progress here. Just for a few examples, there are genomics companies that we have running VxRail that have rolled out testing at scale. We also have research universities out in the Netherlands doing antibody detection. The US Navy has stood up a floating hospital to, of course, care for those in need. So "we are here to help," that's been our message to our customers, but it's amazing to see how much they're helping society during this. So just a pleasure there. But as you mentioned, just to hit on the VDI comments, so to your point too: HCI, VxRail, VDI, that was an initial use case years ago. And it's been great to see how many of our existing VxRail customers have been able to pivot very quickly, leveraging VxRail to help bring their remote workforce online and support them with their existing VxRail. Because VxRail is flexible, it is agile, to be able to support those multiple workloads. And in addition to that, we've also rolled out some new VDI bundles to make it simpler and more cost effective for customers, catering to everything from knowledge workers to multimedia workers. You name it, from 250 desktops up to 1,000. But again, back to your point, VxRail and HCI are well beyond VDI; it crossed the chasm a couple years ago actually. And VDI now is less than a third of the typical workloads of our customers out there. It supports now a range of workloads that you heard from Shannon, whether it's video surveillance, whether it's general purpose, all the way to mission critical applications now with SAP HANA. So this has changed the game for sure. But the range of workloads and the flexibility of VxRail are really helping our existing customers during this pandemic. >> Yeah, I agree with you, Jon, we've seen customers really embrace HCI for a number of workloads in their environments, from the ones that we all knew and loved back in the initial days of HCI, now to the mission critical things, now to Cloud native workloads as well, and the sort of efficiencies that customers are able to get from HCI. And specifically, VxRail gives them that ability to pivot when these, shall we say, unexpected circumstances arise. And I think that that's informing their decisions and their opinions on what their IT strategies look like as they move forward. They want that same level of agility and ability to react quickly with their overall infrastructure. >> Excellent.
Now I want to get into the announcements. And my team, well actually your team, gave me access to the CIO from the city of Amarillo, so maybe they can dig up that footage where he talks about how fast they pivoted, using VxRail to really spin things up fast. So let's hear from the announcement first, and then we definitely want to share that customer story a little bit later. So let's get to the actual news that Shannon's going to share. >> Okay, now what's new? I am pleased to announce a number of exciting updates and new platforms to further enable IT modernization across Core, Edge and Cloud. I will cover each of these announcements in more detail, demonstrating how only VxRail can offer the breadth of platform configurations, automation, orchestration and lifecycle management across a fully integrated hardware and software full stack, with consistent, simplified operations, to address the broadest range of traditional and modern applications. I'll start with hybrid Cloud and recap what you may have seen in the Dell Technologies Cloud announcements just a few weeks ago related to VMware Cloud Foundation on VxRail. Then I'll cover two brand new VxRail hardware platforms and additional options. And finally I'll circle back to talk about the latest enhancements to our VxRail HCI system software capabilities for lifecycle management. Let's get started with our new Cloud offerings based on VxRail. VxRail is the HCI foundation for Dell Technologies Cloud Platform, bringing automation and financial models similar to public Cloud to on-premises environments. VMware recently introduced Cloud Foundation 4.0, which is based on vSphere 7.0. As you likely know by now, vSphere 7.0 was definitely an exciting and highly anticipated release. In keeping with our synchronous release commitment, we introduced VxRail 7.0 based on vSphere 7.0 in late April, which was within 30 days of VMware's release. Two key areas that VMware focused on were embedding containers and Kubernetes into vSphere, unifying them with virtual machines, and the second is improving the work experience for vSphere administrators with vSphere Lifecycle Manager, or vLCM. I'll address the second point a bit in terms of how VxRail fits in in a moment. With VCF 4 with Tanzu, based on vSphere 7.0, customers now have access to a hybrid Cloud platform that supports native Kubernetes workloads and management, as well as your traditional VM-based workloads. So containers are now first class citizens of your private Cloud alongside traditional VMs, and this is now available with VCF 4.0 on VxRail 7.0. VxRail's tight integration with VMware Cloud Foundation delivers a simple and direct path not only to the hybrid Cloud, but also to deliver Kubernetes at Cloud scale with one complete automated platform. The second Cloud announcement is also exciting. Recent VCF networking advancements have made it easier than ever to get started with hybrid Cloud, because we're now able to offer a more accessible consolidated architecture. And with that, Dell Technologies Cloud Platform can now be deployed with a four-node configuration, lowering the cost of an entry level hybrid Cloud. This enables customers to start smaller and grow their Cloud deployment over time. VCF and VxRail can now be deployed in two different ways. For small environments, customers can utilize a consolidated architecture, which starts with just four nodes.
Since the management and workload domains share resources in this architecture, it's ideal for getting started with an entry level Cloud to run general purpose virtualized workloads with a smaller entry point, both in terms of required infrastructure footprint as well as cost, but still with a consistent Cloud operating model. For larger environments, where dedicated resources and role-based access control to separate different sets of workloads are usually preferred, you can choose to deploy a standard architecture, which starts at eight nodes, for independent management and workload domains. A standard implementation is ideal for customers running applications that require dedicated workload domains; that includes Horizon VDI and vSphere with Kubernetes. >> Alright, Jon, there's definitely been a lot of interest in our community around everything that VMware is doing with vSphere 7.0. We understand if you want to use the Kubernetes piece, it's VCF that gets you there. So we've seen the announcements, with Dell partnering in there. Help us connect that story between really the VMware strategy, how they talk about Cloud, and where VxRail fits in that overall Dell Technologies Cloud story. >> Absolutely. So first of all, Stu, VxRail of course is integral to the Dell Technologies Cloud strategy. It's simply VCF on VxRail equals the Dell Technologies Cloud Platform. And this is our flagship on-prem Cloud offering, with which we've been able to enable operational consistency across any Cloud, whether it's on-prem, at the Edge or in the public Cloud. And we've seen the Dell Tech Cloud Platform embraced by customers for a couple key reasons. One is it offers the fastest hybrid Cloud deployment in the market. And this is really thanks to a new subscription offer that we're now offering out there, where in less than 14 days it can be stood up and running. And really, the Dell Tech Cloud does bring a lot of flexibility in terms of consumption models overall when it comes to VxRail. Secondly, I would say, is fast and easy upgrades. This is what VxRail brings to the table for all workloads, if you will, and it's especially critical in the Cloud. So the full automation of lifecycle management across the hardware and software stack, across the VMware software stack and the Dell software and hardware supporting that, together this enables essentially the third thing, which is customers can just relax. They can rest assured that their infrastructure will be continuously validated and always be in a continuously validated state. Those three value propositions together really fit well with any on-prem Cloud. Now you take what Shannon just mentioned, and the fact that now you can build and run modern applications on the same VxRail infrastructure alongside traditional applications: this is a game changer. >> Yeah, I love it. I remember in the early days talking with Dunn about HCI, how does that fit in with the Cloud discussion, and the line I've used the last couple years is: modernize the platform, then you can modernize the application. So as companies are doing their full modernization, this plays into what you're talking about. All right, we can let Shannon continue, and we can get some more before we dig into some more analysis. >> That's good. >> Let's talk about new hardware platforms and updates that result in literally thousands of potential new configuration options, covering a wide breadth of modern and traditional application needs across a range of use cases.
First up, I am incredibly excited to announce a brand new Dell EMC VxRail series, the D series. This is a ruggedized, durable platform that delivers the full power of VxRail for workloads at the Edge, in challenging environments, or for space constrained areas. VxRail D series offers the same compelling benefits as the rest of the VxRail portfolio, with simplicity, agility and lifecycle management, but in a lightweight, short-depth (only 20 inches), durable form factor that's extremely temperature-resilient, shock resistant, and easily portable. It even meets milspec standards. That means you have the full power of lifecycle automation with VxRail HCI system software and 24 by 7 single point of support, enabling you to rapidly react to business needs no matter the location or how harsh the conditions. So whether you're deploying a data center at a mobile command base, running real-time GPS mapping on the go, or implementing video surveillance in remote areas, you can ensure availability, integrity and confidence for every workload with the new VxRail ruggedized D series. >> All right, Chad, we would love for you to bring us in a little bit on the customer requirements for bringing this to market. I remember seeing ruggedized Dell servers before; of course the Edge is a really important growth area, to build on what Jon was talking about with Cloud. So, Chad, bring us inside: what was driving this piece of the offering? >> Sure, Stu. Yeah, having hardware platforms that can go out into some of these remote locations is really important. And that's being driven by the fact that customers are looking for compute performance and storage out at some of these Edges or some of the more exotic locations, whether that's manufacturing plants, oil rigs, submarine ships, military applications, places that we've never heard of. But it's also about extending that operational simplicity: the same way that you're managing your data center that has VxRails, you're managing your Edges, using the same set of tools. You don't need to learn anything else. So operational simplicity is absolutely key here. But in those locations, you can take a product that's designed for a data center, where you're definitely controlling power, cooling and space, and take it to some of these places where you get sand blowing or sub-zero temperatures; it could be Baghdad or it could be Ketchikan, Alaska. So we built this D series that was able to go to those extreme locations with extreme heat, extreme cold, extreme altitude, but still offer that operational simplicity. Now, military is one of those applications for the rugged platform. If you look at the resistance that it has to heat, it operates in a 45 degrees Celsius, or 113 degrees Fahrenheit, range, but it can do an excursion up to 55 C, or 131 degrees Fahrenheit, for up to eight hours. It's also resistant to heat, sand, dust and vibration, and it's very lightweight and short depth; in fact, it's only 20 inches deep. This is the smallest form factor, obviously, that we have in the VxRail family. And it's also built to be able to withstand sudden shocks: certified to withstand 40 G's of shock and operation at 15,000 feet of elevation. Pretty high. And this is sort of like where skydivers go when they want the real thrill of skydiving, where you actually need oxygen to breathe at that altitude. They're milspec-certified: MIL-STD-810G, which I keep right beside my bed and read every night.
And it comes with a VxRail STIG hardening package, which is packaged scripts so that you can automatically lock down the VxRail environment. And we've got a few other certifications that are on the roadmap now for naval shock requirements, EMI and radiation immunity. >> Yeah, it's funny, I remember when this first launched it was like, "Oh, well, everything's going to white boxes, and it's going to be massive, no differentiation between everything out there." If you look at what you're offering, and if you look at how public Clouds build their things, there's a pure optimization: you need scale, you need similarities, but you also need to fit some very specific requirements in lots of places. So, interesting stuff. Yeah, certifications always keep your teams busy. Alright, let's get back to Shannon for more of the announcement. >> We are also introducing three other hardware-based additions. First, a new VxRail E Series model based, for the first time, on AMD EPYC processors. These single-socket 1U nodes offer dual-socket performance, with CPU options that scale from eight to 64 cores, up to a terabyte of memory, and multiple storage options, making it an ideal platform for desktop VDI, analytics and computer-aided design. Next, the addition of the latest Nvidia Quadro RTX GPUs brings the most significant advancement in computer graphics in over a decade to professional workflows. Designers and artists across industries can now expand the boundary of what's possible, working with the largest and most complex graphics rendering, deep learning and visual computing workloads. And Intel Optane DC persistent memory is here, and it offers high performance and significantly increased memory capacity with data persistence at an affordable price. Data persistence is a critical feature that maintains data integrity even when power is lost, enabling quicker recovery and less downtime. With support for Intel Optane DC persistent memory, customers can expand memory intensive workloads and use cases like SAP HANA. Alright, let's finally dig into our HCI system software, which is the core differentiation for VxRail regardless of your workload or platform choice. Our joint engineering with VMware and investments in VxRail HCI system software innovation together deliver an optimized operational experience at reduced risk for our customers. Under the covers, VxRail offers best-in-class hardware married with VMware HCI software, either vSAN or VCF. But what makes us different stems from our investments to integrate the two. Dell Technologies has a dedicated VxRail team of about 400 people to build, market, sell and support a fully integrated hyperconverged system. That team has also developed our unique VxRail HCI system software, which is a suite of integrated software elements that extend VMware native capabilities to deliver a seamless, automated operational experience that customers cannot find elsewhere. The key components of VxRail HCI system software are shown around the arc here; they include VxRail Manager, full stack lifecycle management, ecosystem connectors, and support. I don't have time to get into all the details of these elements today, but if you're interested in learning more, I encourage you to meet our experts, and I will tell you how to do that in a moment. I touched on vLCM being a key feature of vSphere 7.0 earlier, and I'd like to take the opportunity to expand on that a bit in the context of VxRail lifecycle management.
vLCM adds valuable automation to the execution of updates for customers, but it doesn't eliminate the manual work still needed to define and package the updates and validate all of the components prior to applying them. With VxRail, customers have all of these areas addressed automatically on their behalf, freeing them to put their time into other important functions for their business. Customers tell us that lifecycle management continues to be a major source of the maintenance effort they put into their infrastructure, that it tends to lead to overburdened IT staff, that it can cause disruptions to the business if not managed effectively, and that it isn't the most efficient economically. Automation of lifecycle management in VxRail results in the utmost simplicity from a customer experience perspective, and offers operational freedom from maintaining infrastructure. But as shown here, our customers not only realize greater IT team efficiencies, they have also reduced downtime with fewer unplanned outages, and reduced overall cost of operations. With VxRail HCI system software, intelligent lifecycle management upgrades of the fully integrated hardware and software stack are automated, keeping clusters in continuously validated states while minimizing risks and operational costs. How do we ensure continuously validated states for VxRail? The VxRail labs execute an extensive, automated, repeatable process on every firmware and software upgrade and patch to ensure clusters are in continuously validated states of the customer's choosing across their VxRail environment. The VxRail labs are constantly testing, analyzing, optimizing, and sequencing all of the components in the upgrade to execute in a single package for the full stack. All the while, VxRail is backed by Dell EMC's world class services and support, with a single point of contact for both hardware and software. IT productivity skyrockets with single-click, non-disruptive upgrades of the fully integrated hardware and software stack, without the need to do extensive research and testing, taking you to the next VxRail version of your choice while always in a continuously validated state. You can also confidently execute automated VxRail upgrades no matter what hardware generation or node types are in the cluster; they don't have to all be the same. And upgrades with VxRail are faster and more efficient with leapfrogging: simply choose any VxRail version you desire, and be assured you will get there in a validated state while seamlessly bypassing any other release in between. Only VxRail can do that. >> All right, so Chad, the lifecycle management piece that Shannon was just talking about is not the sexiest; it's often underappreciated. There's not only the years of experience, but the continuous work you're doing. It reminds me back to the early vSAN deployments versus VxRail, jointly developed and jointly tested between Dell and VMware. So bring us inside: why, in 2020, is lifecycle management still a very important piece, especially in the VxRail family? >> Yes, Stu, I think it's sexy, but I'm a pretty big nerd. (all laughing) Yeah, this has really always been our bread and butter. And in fact, it gets even more important the larger the deployments become, when you start to look at data centers full of VxRails and all the different hardware, software and firmware combinations that could exist out there.
It's really the value that you get out of that VxRail HCI system software that Shannon was talking about, and how it's optimized around the VMware use case. It's very tightly integrated with each VMware component, of course, and the intelligence of being able to do all the firmware, all of the drivers, all the software altogether is tremendous value to our customers. But to deliver that, we really need to make a fairly large investment. So as Shannon mentioned, we run about 25,000 hours of testing across each major release. For patches and express patches, that's about 7,000 hours for each of those. So obviously there's a lot of parallelism, and we're always developing new test scenarios for each release that we need to build in as we introduce new functionality. And one of the key things that we're able to do, as Shannon mentioned, is to be able to leapfrog releases and get you to that next validated state. We've got about 100 engineers just working on creating and executing those test cases on a continuous basis, and obviously a huge amount of automation. And when we talk about that investment to execute those tests, that's well north of $60 million of investment in our lab. In fact, we've got just over 2,000 VxRail units in our testbed across the US, Shanghai, China and Cork, Ireland. So a massive amount of testing of each of those components to make sure that they operate together in a validated state. >> Yeah, well, absolutely, it's super important not only for the day one but the day two deployments. But I think this is actually a great place for us to bring in that customer that Dell gave me access to. So we've got the CIO of Amarillo, Texas; he was an existing VxRail customer, and he's going to explain what happened and how he needed to react really fast to support the work-from-home initiative, as well as, we get to hear in his words the value of what lifecycle management means. So Andrew, if we could queue up that customer segment, please? >> It's been massive, and it's been interesting to see the IT team absorb it. As we mature, I think they embrace the ability to be innovative and to work with our departments. But this instance really justified why I was driving progress so fervently, why it was so urgent. Three years ago, the answer would have been no; we wouldn't have been in a place where we could adapt. With VxRail in place, in a week we spun up hundreds of instant clones. We spun up a 75-person call center in a day and a half for our public health. We rolled out multiple applications for public health so they could do remote clinics. It's given us the flexibility to be able to roll out new solutions very quickly and be very adaptive. And it's not only been apparent to my team, but it's really made an impact on the business. And now what I'm seeing is those of my customers that were a little lagging or a little conservative are understanding the impact of modernizing the way they do business, because it makes them adaptable as well. >> Alright, so great, Richard. You talked a bunch about the efficiencies that IT put in place, and about how fast you spun up these new VDI instances; you need to be able to do things much simpler. So how does the overall lifecycle management fit into this discussion? >> It makes it so much easier. And in the old environment, one, it took a lot of man hours to make change, and it was very disruptive when we did make change. It overburdened, I guess that's the word I'm looking for.
It really overburdened our staff, it caused disruption to the business, and it wasn't cost efficient. And then simple things like, I've worked for multi-billion-dollar companies where we had massive QA environments that replicated production; we simply can't afford that at local government. Having this sort of environment lets me do a scaled down QA environment and still get the benefit of rolling out non-disruptive change. As I said earlier, it's allowed us to take all of those cycles that we were spending on lifecycle management, because it's greatly simplified, and move those resources and reskill them in other areas where we can actually have more impact on the business. It's hard to be innovative when 100% of your cycles are just keeping the ship afloat. >> All right, well, nothing better than hearing it straight from the end user: public sector reacting very fast to COVID-19. And you heard him say that if this had hit before he had run this project, he would not have been able to respond. So I think everybody out there understands: if I didn't actually have access to the latest technology, it would be much harder. All right, I'm looking forward to doing the CrowdChat and letting everybody else dig in with questions and follow up, but I believe Shannon's got one more announcement for us. Let's roll the final video clip. >> In our latest software release, VxRail 4.7.510, we continue to add new automation and self-service features. New functionality enables you to schedule and run upgrade health checks in advance of upgrades, to ensure clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade windows, as they can be assured the clusters will seamlessly upgrade within that window. Of course, running health checks on a regular basis also helps ensure that your clusters are always ready for unscheduled patches and security updates. We are also offering more flexibility in getting all nodes or clusters to a common release level, with the ability to reimage nodes or clusters to a specific VxRail version, or down-rev one or more nodes that may be shipped at a higher rev than the existing cluster. This enables you to easily choose your validated state when adding new nodes or repurposing nodes in a cluster. To sum up all of our announcements: whether you are accelerating data center modernization, extending HCI to harsh Edge environments, or deploying an on-premises Dell Technologies Cloud Platform to create a developer-ready Kubernetes infrastructure, VxRail is there, delivering a turnkey experience that enables you to continuously innovate, realize operational freedom and predictably evolve. VxRail provides an extensive breadth of platform configurations, automation and lifecycle management across the integrated hardware and software full stack, and consistent hybrid Cloud operations, to address the broadest range of traditional and modern applications across Core, Edge and Cloud. I now invite you to engage with us. First, the virtual passport program is an opportunity to have some fun while learning about VxRail's new features and functionality, and score some sweet digital swag while you're at it, delivered via an augmented reality app. All you need is your device, so go to vxrail.is/passport to get started.
And secondly, if you have any questions about anything I talked about or want a deeper conversation, we encourage you to join one of our exclusive VxRail Meet The Experts sessions, available for a limited time, first come, first served. Just go to vxrail.is/expertsession to learn more. >> All right, well, obviously, with everyone being remote, there's different ways we're looking to engage. So we've got the CrowdChat right after this. But Jon, give us a little bit more as to how Dell's making sure to stay in close contact with customers and what you've got for options for them. >> Yeah, absolutely. So as Shannon said, in lieu of not having Dell Tech World this year in person, where we could have those great in-person interactions and answer questions, whether it's in the booth or in meeting rooms, we are going to have these Meet The Experts sessions over the next couple of weeks, and we're going to put our best and brightest from our technical community out there and make them accessible to everyone. So again, I definitely encourage you; we're trying new things here in this virtual environment to ensure that we can still stay in touch, answer questions, be responsive, and we're really looking forward to having these conversations over the next couple of weeks. >> All right, well, Jon and Chad, thank you so much. We definitely look forward to continuing the conversation here. If you're here live, definitely go down below and do it; if you're watching this on demand, you can see the full transcript of it at crowdchat.net/vxrailrocks. For myself, Shannon on the video, Jon, Chad, and Andrew, our man in the booth there, thank you so much for watching, and go ahead and join the CrowdChat.
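As a purely illustrative aside on the lifecycle management behavior described in the episode above, namely pre-upgrade health checks and leapfrog upgrades that skip intermediate releases while always landing on a validated state, here is a minimal sketch of that selection logic. It is a toy model only: the function names, version numbers and health checks are assumptions made for this example, not Dell's actual VxRail Manager implementation or API.

```python
# Toy model of the "leapfrog to a validated state" idea discussed above.
# Purely illustrative: version numbers, checks and function names are assumptions,
# not Dell's actual VxRail Manager logic or API.

from typing import Callable, Dict, List

# Hypothetical set of releases that a (fictional) lab has validated end to end.
VALIDATED_STATES: List[str] = ["4.7.410", "4.7.510", "7.0.000"]

def pick_leapfrog_target(current: str, desired: str, validated: List[str]) -> str:
    """Return the upgrade target: jump straight to the desired release if it is a
    validated state, bypassing any releases in between (the 'leapfrog')."""
    if desired not in validated:
        raise ValueError(f"{desired} is not a validated state; choose one of {validated}")
    if desired == current:
        raise ValueError("Cluster is already at the desired release")
    return desired  # no intermediate hops required

def run_upgrade_health_checks(checks: Dict[str, Callable[[], bool]]) -> bool:
    """Run scheduled pre-upgrade health checks and report whether the cluster is
    in a ready state for the next upgrade or patch."""
    failures = [name for name, check in checks.items() if not check()]
    if failures:
        print("Cluster not ready, failed checks:", ", ".join(failures))
        return False
    print("Cluster is in a ready state for the next upgrade")
    return True

if __name__ == "__main__":
    # Example: a cluster on 4.7.410 leapfrogging straight to 7.0.000.
    checks = {
        "capacity_headroom": lambda: True,
        "node_connectivity": lambda: True,
    }
    if run_upgrade_health_checks(checks):
        print("Upgrade target:", pick_leapfrog_target("4.7.410", "7.0.000", VALIDATED_STATES))
```

The design point the sketch tries to capture is simply that the target of an upgrade is chosen from a pre-validated set and reached directly, rather than by stepping through every intermediate release, with readiness checks run before the change is applied.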
VxRail: Taking HCI to Extremes
>> Announcer: From the Cube studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCube Conversation. >> Hi, I'm Stu Miniman. And welcome to this special presentation. We have a launch from Dell Technologies updates from the VxRail family. We're going to do things a little bit different here. We actually have a launch video Shannon Champion, of Dell Technologies. And the way we do things a lot of times, is, analysts get a little preview or when you're watching things. You might have questions on it. So, rather than me just wanting it, or you wanting yourself I actually brought in a couple of Dell Technologies expertS two of our Cube alumni, happy to welcome you back to the program. Jon Siegal, he is the Vice President of Product Marketing, and Chad Dunn, who's the Vice President of Product Management, both of them with Dell Technologies. Gentlemen, thanks so much for joining us. >> Good to see you Stu. >> Great to be here. >> All right, and so what we're going to do is we're going to be rolling the video here. I've got a button I'm going to press, Andrew will stop it here and then we'll kind of dig in a little bit, go into some questions when we're all done. We're actually holding a crowd chat, where you will be able to ask your questions, talk to the experts and everything. And so a little bit different way to do a product announcement. Hope you enjoy it. And with that, it's VxRail. Taking HCI to the extremes is the theme. We'll see what that means and everything. But without any further ado, let's let Shannon take the video away. >> Hello, and welcome. My name is Shannon Champion, and I'm looking forward to taking you through what's new with VxRail. Let's get started. We have a lot to talk about. Our launch covers new announcements addressing use cases across the Core, Edge and Cloud and spans both new hardware platforms and options, as well as the latest in software innovations. So let's jump right in. Before we talk about our announcements, let's talk about where customers are adopting VxRail today. First of all, on behalf of the entire Dell Technologies and VxRail teams, I want to thank each of our over 8000 customers, big and small in virtually every industry, who've chosen VxRail to address a broad range of workloads, deploying nearly 100,000 nodes today. Thank you. Our promise to you is that we will add new functionality, improve serviceability, and support new use cases, so that we deliver the most value to you, whether in the Core, at the Edge or for the Cloud. In the Core, VxRail from day one has been a catalyst to accelerate IT transformation. Many of our customers started here and many will continue to leverage VxRail to simply extend and enhance your VMware environment. Now we can support even more demanding applications such as In-Memory databases, like SAP HANA, and more AI and ML applications, with support for more and more powerful GPUs. At the Edge, video surveillance, which also uses GPUs, by the way, is an example of a popular use case leveraging VxRail alongside external storage. And right now we all know the enhanced role that IT is playing. And as it relates to VDI, VxRail has always been a great option for that. In the Cloud, it's all about Kubernetes, and how Dell Technologies Cloud platform, which is VCF on VxRail can deliver consistent infrastructure for both traditional and Cloud native applications. And we're doing that together with VMware. 
VxRail is the only jointly engineered HCI system built with VMware for VMware environments, designed to enhance the native VMware experience. This joint engineering with VMware and investments in software innovation together deliver an optimized operational experience at reduced risk for our customers. >> Alright, so Shannon talked a bit about, the important role of IT Of course right now, with the global pandemic going on. It's really, calling in, essential things, putting, platforms to the test. So, I really love to hear what both of you are hearing from customers. Also, VDI, of course, in the early days, it was, HCI-only-does-VDI. Now, we know there are many solutions, but remote work is putting that back front and center. So, Jon, why don't we start with you as the what is (muffled speaking) >> Absolutely. So first of all, Stu, thank you, I want to do a shout out to our VxRail customers around the world. It's really been humbling, inspiring, and just amazing to see The impact of our VxRail customers around the world and what they're having on on human progress here. Just for a few examples, there are genomics companies that we have running VxRail that have rolled out testing at scale. We also have research universities out in the Netherlands, doing the antibody detection. The US Navy has stood up a floating hospital to of course care for those in need. So we are here to help that's been our message to our customers, but it's amazing to see how much they're helping society during this. So just just a pleasure there. But as you mentioned, just to hit on the VDI comments, so to your points too, HCI, VxRail, VDI, that was an initial use case years ago. And it's been great to see how many of our existing VxRail customers have been able to pivot very quickly leveraging VxRail to add and to help bring their remote workforce online and support them with their existing VxRail. Because VxRail is flexible, it is agile, to be able to support those multiple workloads. And in addition to that, we've also rolled out some new VDI bundles to make it simpler for customers more cost effective cater to everything from knowlEdge workers to multimedia workers. You name it, you know from 250, desktops up to 1000. But again, back to your point VxRail, HCI, is well beyond VDI, it crossed the chasm a couple years ago actually. And VDI now is less than a third of the typical workloads, any of our customers out there, it supports now a range of workloads that you heard from Shannon, whether it's video surveillance, whether it's general purpose, all the way to mission critical applications now with SAP HAN. So, this has changed the game for sure. But the range of work loads and the flexibility of the actual rules which really helping our existing customers during this pandemic. >> Yeah, I agree with you, Jon, we've seen customers really embrace HCI for a number of workloads in their environments, from the ones that we sure all knew and loved back in the initial days of HCI. Now, the mission critical things now to Cloud native workloads as well, and the sort of the efficiencies that customers are able to get from HCI. And specifically, VxRail gives them that ability to pivot. When these, shall we say unexpected circumstances arise? And I think that that's informing their their decisions and their opinions on what their IP strategies look like as they move forward. They want that same level of agility, and ability to react quickly with their overall infrastructure. >> Excellent. 
Now I want to get into the announcements. What I want my team actually, your team gave me access to the CIO from the city of Amarillo, so maybe they can dig up that footage, talk about how fast they pivoted, using VxRail to really spin up things fast. So let's hear from the announcement first and then definitely want to share that that customer story a little bit later. So let's get to the actual news that Shannon's going to share. >> Okay, now what's new? I am pleased to announce a number of exciting updates and new platforms, to further enable IT modernization across Core, Edge and Cloud. I will cover each of these announcements in more detail, demonstrating how only VxRail can offer the breadth of platform configurations, automation, orchestration and Lifecycle Management, across a fully integrated hardware and software full stack with consistent, simplified operations to address the broadest range of traditional and modern applications. I'll start with hybrid Cloud and recap what you may have seen in the Dell Technologies Cloud announcements just a few weeks ago, related to VMware Cloud foundation on VxRail. Then I'll cover two brand new VxRail hardware platforms and additional options. And finally circle back to talk about the latest enhancements to our VxRail HCI system software capabilities for Lifecycle Management. Let's get started with our new Cloud offerings based on VxRail. VxRail is the HCI foundation for Dell Technologies, Cloud Platform, bringing automation and financial models, similar to public Cloud to On-premises environments. VMware recently introduced Cloud foundation for Delta, which is based on vSphere 7.0. As you likely know by now, vSphere 7.0 was definitely an exciting and highly anticipated release. In keeping with our synchronous release commitment, we introduced VxRail 7.0 based on vSphere 7.0 in late April, which was within 30 days of VMware's release. Two key areas that VMware focused on we're embedding containers and Kubernetes into vSphere, unifying them with virtual machines. And the second is improving the work experience for vSphere administrators with vSphere Lifecycle Manager or VLCM. I'll address the second point a bit in terms of how VxRail fits in in a moment for VCF 4 with Tom Xu, based on vSphere 7.0 customers now have access to a hybrid Cloud platform that supports native Kubernetes workloads and management, as well as your traditional VM-based workloads. So containers are now first class citizens of your private Cloud alongside traditional VMs and this is now available with VCF 4.0, on VxRail 7.0. VxRail's tight integration with VMware Cloud foundation delivers a simple and direct path not only to the hybrid Cloud, but also to deliver Kubernetes at Cloud scale with one complete automated platform. The second Cloud announcement is also exciting. Recent VCF for networking advancements have made it easier than ever to get started with hybrid Cloud, because we're now able to offer a more accessible consolidated architecture. And with that Dell Technologies Cloud platform can now be deployed with a four-node configuration, lowering the cost of an entry level hybrid Cloud. This enables customers to start smaller and grow their Cloud deployment over time. VCF and VxRail can now be deployed in two different ways. For small environments, customers can utilize a consolidated architecture which starts with just four nodes. 
Since the management and workload domains share resources in this architecture, it's ideal for getting started with an entry level Cloud to run general purpose virtualized workloads with a smaller entry point, both in terms of required infrastructure footprint as well as cost, but still with a consistent Cloud operating model. For larger environments, where dedicated resources and role-based access control to separate different sets of workloads is usually preferred, you can choose to deploy a standard architecture, which starts at eight nodes, for independent management and workload domains. A standard implementation is ideal for customers running applications that require dedicated workload domains; that includes Horizon VDI and vSphere with Kubernetes. >> Alright, Jon, there's definitely been a lot of interest in our community around everything that VMware is doing with vSphere 7.0. We understand if you want to use the Kubernetes piece, it's VCF as the path, so we've seen the announcements, Dell partnering in there. Help us connect that story between really the VMware strategy and how they talk about Cloud, and where does VxRail fit in that overall Dell Tech Cloud story? >> Absolutely. So first of all Stu, VxRail of course is integral to the Dell Tech Cloud strategy. It's VCF on VxRail equals the Dell Tech Cloud Platform. And this is our flagship on-prem Cloud offering, where we've been able to enable operational consistency across any Cloud, whether it's On-prem, in the Edge or in the public Cloud. And we've seen the Dell Tech Cloud Platform embraced by customers for a couple key reasons. One is it offers the fastest hybrid Cloud deployment in the market. And this is really thanks to a new subscription offer that we're now offering out there where, in less than 14 days, it can be stood up and running. And really, the Dell Tech Cloud does bring a lot of flexibility in terms of consumption models overall when it comes to VxRail. Secondly, I would say is fast and easy upgrades. This is what VxRail brings to the table for all workloads, if you will, and it's especially critical in the Cloud. So the full automation of Lifecycle Management across the hardware and software stack, across the VMware software stack and the Dell software and hardware supporting that; together, this enables essentially the third thing, which is customers can just relax. They can rest assured that their infrastructure will be continuously validated, and always be in a continuously validated state. And this is the kind of thing, those three value propositions together really fit well with any on-prem Cloud. Now you take what Shannon just mentioned, and the fact that now you can build and run modern applications on the same VxRail infrastructure alongside traditional applications. This is a game changer. >> Yeah, I love it. I remember in the early days talking with Chad Dunn about HCI, how does that fit in with the Cloud discussion, and the line I've used the last couple years is, modernize the platform, then you can modernize the application. So as companies are doing their full modernization, then this plays into what you're talking about. All right, we can let Shannon continue, we can get some more before we dig into some more analysis. >> That's good. >> Let's talk about new hardware platforms and updates that result in literally thousands of potential new configuration options, covering a wide breadth of modern and traditional application needs across a range of the actual use cases.
First up, I am incredibly excited to announce a brand new Dell EMC VxRail series, the D series. This is a ruggedized, durable platform that delivers the full power of VxRail for workloads at the Edge, in challenging environments, or for space constrained areas. VxRail D series offers the same compelling benefits as the rest of the VxRail portfolio, with simplicity, agility and lifecycle management, but in a lightweight, short-depth (only 20 inches), durable form factor that's extremely temperature resilient, shock resistant, and easily portable. It even meets milspec standards. That means you have the full power of lifecycle automation with VxRail HCI system software and 24 by seven single point of support, enabling you to rapidly react to business needs, no matter the location or how harsh the conditions. So whether you're deploying a data center at a mobile command base, running real-time GPS mapping on the go, or implementing video surveillance in remote areas, you can ensure availability, integrity and confidence for every workload with the new VxRail ruggedized D series. >> All right, Chad, we would love for you to bring us in a little bit on the customer requirements for bringing this to market. I remember seeing Dell servers ruggedized, of course; Edge, really important growth to build on what Jon was talking about, Cloud. So, Chad, bring us inside, what was driving this piece of the offering? >> Sure Stu. Yeah, yeah, having the hardware platforms that can go out into some of these remote locations is really important. And that's being driven by the fact that customers are looking for compute performance and storage out at some of these Edges or some of the more exotic locations, whether that's manufacturing plants, oil rigs, submarines, ships, military applications, places that we've never heard of. But it's also about extending that operational simplicity, the sort of way that you're managing your data center that has VxRails, so you're managing your Edges the same way, using the same set of tools. You don't need to learn anything else. So operational simplicity is absolutely key here. But in those locations, you can't just take a product that's designed for a data center, where you're definitely controlling power, cooling and space, and take it to some of these places where you get sand blowing or sub-zero temperatures; it could be Baghdad or it could be Ketchikan, Alaska. So we built this D series that was able to go to those extreme locations with extreme heat, extreme cold, extreme altitude, but still offer that operational simplicity. Now military is one of those applications for the rugged platform. If you look at the resistance that it has to heat, it operates at a 45 degrees Celsius or 113 degrees Fahrenheit range, but it can do an excursion up to 55 C or 131 degrees Fahrenheit for up to eight hours. It's also resistant to heat, sand, dust, vibration; it's very lightweight, short depth, in fact, it's only 20 inches deep. This is the smallest form factor, obviously, that we have in the VxRail family. And it's also built to be able to withstand sudden shocks, certified to withstand 40 G's of shock, and operation at 15,000 feet of elevation. Pretty high. And this is sort of like where skydivers go when they want the real thrill of skydiving, where you actually need oxygen to breathe at that altitude. They're milspec-certified. So, MIL-STD-810G, which I keep right beside my bed and read every night.
And it comes with a VxRail STIG hardening package, packaged scripts so that you can auto lock down the VxRail environment. And we've got a few other certifications that are on the roadmap now for naval shock requirements, EMI and radiation immunity, and so on. >> Yeah, it's funny, I remember when we first launched it was like, "Oh, well everything's going to white boxes, and it's going to be massive, no differentiation between everything out there." If you look at what you're offering, if you look at how public Clouds build their things, what I've called it for a few years is, there's pure optimization. So you need to scale, you need similarities, but you know you need to fit some very specific requirements, lots of places, so, interesting stuff. Yeah, certifications always keep your teams busy. Alright, let's get back to Shannon for more on the announcements. >> We are also introducing three other hardware-based additions. First, a new VxRail E Series model based on, for the first time, AMD EPYC processors. These single socket 1U nodes offer dual socket performance with CPU options that scale from eight to 64 cores, up to a terabyte of memory and multiple storage options, making it an ideal platform for desktop VDI, analytics and computer aided design. Next, the addition of the latest Nvidia Quadro RTX GPUs brings the most significant advancement in computer graphics in over a decade to professional workflows. Designers and artists across industries can now expand the boundary of what's possible, working with the largest and most complex graphics rendering, deep learning and visual computing workloads. And Intel Optane DC persistent memory is here, and it offers high performance and significantly increased memory capacity with data persistence at an affordable price. Data persistence is a critical feature that maintains data integrity, even when power is lost, enabling quicker recovery and less downtime. With support for Intel Optane DC persistent memory, customers can expand in-memory intensive workloads and use cases like SAP HANA. Alright, let's finally dig into our HCI system software, which is the core differentiation for VxRail, regardless of your workload or platform choice. Our joint engineering with VMware and investments in VxRail HCI system software innovation together deliver an optimized operational experience at reduced risk for our customers. Under the covers, VxRail offers best in class hardware, married with VMware HCI software, either vSAN or VCF. But what makes us different stems from our investments to integrate the two. Dell Technologies has a dedicated VxRail team of about 400 people to build, market, sell and support a fully integrated hyper converged system. That team has also developed our unique VxRail HCI system software, which is a suite of integrated software elements that extend VMware native capabilities to deliver a seamless, automated operational experience that customers cannot find elsewhere. The key components of VxRail HCI system software, shown around the arc here, include VxRail Manager, full stack lifecycle management, ecosystem connectors, and support. I don't have time to get into all the details of these elements today, but if you're interested in learning more, I encourage you to meet our experts. And I will tell you how to do that in a moment. I touched on vLCM being a key feature of vSphere 7.0 earlier, and I'd like to take the opportunity to expand on that a bit in the context of VxRail Lifecycle Management.
vLCM adds valuable automation to the execution of updates for customers, but it doesn't eliminate the manual work still needed to define and package the updates and validate all of the components prior to applying them. With VxRail, customers have all of these areas addressed automatically on their behalf, freeing them to put their time into other important functions for their business. Customers tell us that Lifecycle Management continues to be a major source of the maintenance effort they put into their infrastructure, that it tends to lead to overburdened IT staff, that it can cause disruptions to the business if not managed effectively, and that it isn't the most efficient economically. Automation of Lifecycle Management in VxRail results in the utmost simplicity from a customer experience perspective, and offers operational freedom from maintaining infrastructure. But as shown here, our customers not only realize greater IT team efficiencies, they have also reduced downtime with fewer unplanned outages, and reduced overall cost of operations. With VxRail HCI system software, intelligent Lifecycle Management upgrades of the fully integrated hardware and software stack are automated, keeping clusters in continuously validated states while minimizing risks and operational costs. How do we ensure continuously validated states for VxRail? VxRail labs execute an extensive, automated, repeatable process on every firmware and software upgrade and patch to ensure clusters are in continuously validated states of the customer's choosing across their VxRail environment. The VxRail labs are constantly testing, analyzing, optimizing, and sequencing all of the components in the upgrade to execute in a single package for the full stack. All the while, VxRail is backed by Dell EMC's world class services and support, with a single point of contact for both hardware and software. IT productivity skyrockets with single click, non disruptive upgrades of the fully integrated hardware and software stack, without the need to do extensive research and testing, taking you to the next VxRail version of your choice, while always in a continuously validated state. You can also confidently execute automated VxRail upgrades no matter what hardware generation or node types are in the cluster. They don't have to all be the same. And upgrades with VxRail are faster and more efficient with leapfrogging: simply choose any VxRail version you desire, and be assured you will get there in a validated state while seamlessly bypassing any other release in between. Only VxRail can do that. >> All right, so Chad, the lifecycle management piece that Shannon was just talking about is not the sexiest; it's often underappreciated. There's not only the years of experience, but the continuous work you're doing; it reminds me back to the early vSAN deployments versus VxRail, jointly developed, jointly tested between Dell and VMware. So bring us inside why, in 2020, Lifecycle Management is still a very important piece, especially in the VxRail family line. >> Yes, Stu, I think it's sexy, but I'm a pretty big nerd. (all laughing) Yeah, this has really always been our bread and butter. And in fact, it gets even more important the larger the deployments become, when you start to look at data centers full of VxRails and all the different hardware, software, firmware combinations that could exist out there.
It's really the value that you get out of that VxRail HCI system software that Shannon was talking about, and how it's optimized around the VMware use case. Very tightly integrated with each VMware component, of course, and the intelligence of being able to do all the firmware, all of the drivers, all the software all together is tremendous value to our customers. But to deliver that we really need to make a fairly large investment. So as Shannon mentioned, we run about 25,000 hours of testing across each major release; for patches, express patches, that's about 7,000 hours for each of those. So, obviously, there's a lot of parallelism. And we're always developing new test scenarios for each release that we need to build in as we introduce new functionality. And one of the key things that we're able to do, as Shannon mentioned, is to be able to leapfrog releases and get you to that next validated state. We've got about 100 engineers just working on creating and executing those test cases on a continuous basis and, obviously, a huge amount of automation. And we've talked about the investment to execute those tests; that's over $60 million of investment in our lab. In fact, we've got just over 2000 VxRail units in our testbed across the US, Shanghai, China and Cork, Ireland. So a massive amount of testing of each of those components to make sure that they operate together in a validated state. >> Yeah, well, absolutely, it's super important not only for the day one, but the day two deployments. But I think this is actually a great place for us to bring in that customer that Dell gave me access to. So we've got the CIO of Amarillo, Texas; he was an existing VxRail customer. And he's going to explain what happened as to how he needed to react really fast to support the work-from-home initiative, as well as we'll get to hear in his words the value of what Lifecycle Management means. So Andrew, if we could queue up that customer segment, please? >> It's been massive and it's been interesting to see the IT team absorb it. As we mature, I think they embrace the ability to be innovative and to work with our departments. But this instance really justified why I was driving progress so fervently, why it was so urgent today. Three years ago, the answer would have been no. We wouldn't have been in a place where we could adapt. With VxRail in place, in a week we spun up hundreds of instant clones. We spun up a 75-person call center in a day and a half for our public health. We rolled out multiple applications for public health so they could do remote clinics. It's given us the flexibility to be able to roll out new solutions very quickly and be very adaptive. And it's not only been apparent to my team, but it's really made an impact on the business. And now what I'm seeing is those of my customers that were a little lagging or a little conservative are now understanding the impact of modernizing the way they do business, because it makes them adaptable as well. >> Alright, so great, Richard, you talked a bunch about the efficiencies that the IT team put in place, and about how fast you spun up these new VDI instances, the need to be able to do things much simpler. So how does the overall Lifecycle Management fit into this discussion? >> It makes it so much easier. And in the old environment, one, it took a lot of man hours to make change. It was very disruptive when we did make change. It overburdened, I guess that's the word I'm looking for.
It really overburdened our staff and caused disruption to the business. That wasn't cost efficient. And then simple things like, I've worked for multi billion dollar companies where we had massive QA environments that replicated production; we simply can't afford that in local government. Having this sort of environment lets me do a scaled down QA environment and still get the benefit of rolling out non disruptive change. As I said earlier, it's allowed us to take all of those cycles that we were spending on Lifecycle Management, because it's greatly simplified, and move those resources and reskill them in other areas where we can actually have more impact on the business. It's hard to be innovative when 100% of your cycles are just keeping the ship afloat. >> All right, well, nothing better than hearing it straight from the end user, public sector reacting very fast to COVID-19. And if you heard him, he said that before he had run this project, he would not have been able to respond. So I think everybody out there understands, if I didn't actually have access to the latest technology, it would be much harder. All right, I'm looking forward to doing the CrowdChat, letting everybody else dig in with questions and follow up a little bit more, but I believe there's one more announcement for us though. Let's roll the final video clip. >> In our latest software release, VxRail 4.7.510, we continue to add new automation and self service features. New functionality enables you to schedule and run upgrade health checks in advance of upgrades, to ensure clusters are in a ready state for the next upgrade or patch. This is extremely valuable for customers that have stringent upgrade windows, as they can be assured the clusters will seamlessly upgrade within that window. Of course, running health checks on a regular basis also helps ensure that your clusters are always ready for unscheduled patches and security updates. We are also offering more flexibility in getting all nodes or clusters to a common release level, with the ability to reimage nodes or clusters to a specific VxRail version, or down-rev one or more nodes that may be shipped at a higher release than the existing cluster. This enables you to easily choose your validated state when adding new nodes or repurposing nodes in a cluster. To sum up all of our announcements: whether you are accelerating data center modernization, extending HCI to harsh Edge environments, or deploying an on-premises Dell Technologies Cloud Platform to create a developer ready Kubernetes infrastructure, VxRail is there, delivering a turn-key experience that enables you to continuously innovate, realize operational freedom and predictably evolve. VxRail provides an extensive breadth of platform configurations, automation and Lifecycle Management across the integrated hardware and software full stack and consistent hybrid Cloud operations to address the broadest range of traditional and modern applications across Core, Edge and Cloud. I now invite you to engage with us. First, the virtual passport program is an opportunity to have some fun while learning about VxRail new features and functionality, and score some sweet digital swag while you're at it, delivered via an augmented reality app. All you need is your device. So go to vxrail.is/passport to get started.
And secondly, if you have any questions about anything I talked about or want a deeper conversation, we encourage you to join one of our exclusive VxRail Meet The Experts sessions, available for a limited time. First come, first served; just go to vxrail.is/expertsession to learn more. >> All right, well, obviously, with everyone being remote, there's different ways we're looking to engage. So we've got the CrowdChat right after this. But Jon, give us a little bit more as to how Dell's making sure to stay in close contact with customers and what you've got for options for them. >> Yeah, absolutely. So as Shannon said, since we didn't get to do Dell Technologies World in person this year, where we could have those great in-person interactions and answer questions, whether it's in the booth or in meeting rooms, we are going to have these Meet The Experts sessions over the next couple weeks, and we're going to put our best and brightest from our technical community out there and make them accessible to everyone. So again, definitely encourage you. We're trying new things here in this virtual environment to ensure that we can still stay in touch, answer questions, be responsive, and we're really looking forward to having these conversations over the next couple of weeks. >> All right, well, Jon and Chad, thank you so much. We definitely look forward to the conversation here and continuing it. If you're here live, definitely go down below and join in; if you're watching this on demand, you can see the full transcript of it at crowdchat.net/vxrailrocks. For myself, Shannon on the video, Jon, Chad, Andrew, man in the booth there, thank you so much for watching, and go ahead and join the CrowdChat.
The Data-Driven Prognosis
>> Narrator: Hi, everyone, thanks for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Toward Zero Unplanned Downtime of Medical Imaging Systems using Big Data. My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Mauro Barbieri, lead architect of analytics at Philips. Before we begin, I want to encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer offline. Alternatively, you can also visit the Vertica forums to post your question there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded, and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Mauro, over to you. >> Thank you, good day everyone. So medical imaging systems such as MRI scanners, interventional guided therapy machines, CT scanners, and X-ray systems need to provide hospitals optimal clinical performance but also predictable cost of ownership. Clinicians understand the need for maintenance of these devices, but they just want it to be non-intrusive and scheduled. And whenever there is a problem with the system, the hospital expects Philips service to resolve it fast, at the first interaction with them. In this presentation you will see how we are using big data to increase the uptime of our medical imaging systems. I'm sure you have heard of the company Philips. Philips is a company that was founded 129 years ago, in 1891, in Eindhoven in the Netherlands, and it started by manufacturing light bulbs and other electrical products. The two brothers Gerard and Anton took an investment from their father Frederik, and they set up to manufacture and sell light bulbs. And as you may know, a key technology for making light bulbs was glass and vacuum. So when you're good at making glass products and vacuum and light bulbs, then it is an easy step to start making radio valves, like they did, but also X-ray tubes. So Philips actually entered very early in the market of medical imaging and healthcare technology. And this is our core as a company, and it's also our future. So, healthcare, I mean, we are in a situation now in which everybody recognizes the importance of it. And we see incredible trends in a transition from what we call Volume Based Healthcare to Value Based, where the clinical outcomes are driving improvements in the healthcare domain. Where it's not enough to respond to healthcare challenges, but we need to be involved in preventing and maintaining the population's wellness; and from a situation in which we are episodically in touch with healthcare, we need to continuously monitor and continuously take care of populations. And from healthcare facilities and technology available to a few select and rich countries, we want to make healthcare accessible to everybody throughout the world. And this of course poses incredible challenges.
And this is why we are transforming Philips to become a healthcare technology leader. So Philips has been a concern active in many sectors, and realizing what kind of technologies we have, we've been focusing on healthcare. And we have been transitioning from creating and selling products to making solutions that address these challenges, and from selling boxes to creating long term relationships with our customers. And so, if you have known the Philips brand from shavers, from televisions to light bulbs, you probably now also recognize the involvement of Philips in the healthcare domain: in diagnostic imaging, in ultrasound, in image guided therapy systems, in digital pathology, non invasive ventilation, as well as patient monitoring, intensive care, telemedicine, but also radiology, cardiology and oncology informatics. Philips has become a powerhouse of healthcare technology. To give you an idea of this, these are the numbers from 2019: almost 20 billion in sales, 4% comparable sales growth with respect to the previous year, and about 10% of the sales reinvested in R&D. This is also shown in the number of patent rights: last year we filed more than 1000 patents in the healthcare domain. And the company has about 80,000 employees, active globally in over 100 countries. So, let me focus now on the type of products that are in the scope of this presentation. This is a Philips Magnetic Resonance Imaging scanner, also called Ingenia 3.0 Tesla. It is an incredible machine. Apart from being very beautiful, as you can see, it's a very powerful technology. It can make high resolution images of the human body without harmful radiation. And it's a complex machine. First of all, it's massive: it weighs 4,600 kilograms. And it has superconducting magnets cooled with liquid helium at -269 degrees Celsius. And it's actually full of software, millions and millions of lines of code. And it occupies three rooms. What you see in this picture is the examination room, but there is also a technical room which is full of equipment, custom hardware, and machinery that is needed to operate this complex device. This is another system; it's an interventional guided therapy system where X-ray is used during interventions, with the patient on the table. You see on the left what we call the C-arm, a robotic arm that moves and can take images of the patient while they are being operated on; it's used for cardiology interventions, neurological interventions, cardiovascular interventions. There's a table that moves in very complex ways, and again it occupies two rooms: this room that we see here but also a room full of cabinets and hardware and computers. Another characteristic of this machine is that it is used during medical interventions, and so it has to interact with all kinds of other equipment. This is another system, a Computed Tomography scanner, the IQon, which is unique due to its spectral detection technology. It has an image resolution up to 0.5 millimeters, making thousand by thousand pixel images. And it is also a complex machine. This is a picture of the inside of a comparable device, not really an IQon, but it has, again, a rotating part which weighs two and a half tons. So it's a combination of an X-ray tube on top, high voltage generators to power the X-ray tube, and an array of detectors to create the images.
And this rotates at 220 revolutions per minute, making 50 frames per second to make 3D reconstructions of the body. So a lot of technology, complex technology, and this technology is made for this situation. We make it for clinicians, who are busy saving people's lives. And of course, they want optimal clinical performance. They want the best technology to treat the patients. But they also want predictable cost of ownership. They want predictable system operations. They want their clinical schedules not interrupted. So, they understand these machines are complex, full of technology. And these machines may require maintenance, may require software updates, sometimes they may even require some parts, hardware parts, to be replaced, but they don't want to have it unplanned. They don't want to have unplanned downtime. They would hate having to send patients home and to have to reschedule visits. So they understand maintenance. They just want to have it scheduled, predictable and non-intrusive. So already a number of years ago, we started a transition from what we call Reactive Maintenance services of these devices to proactive. So, let me show you what we mean with this. Normally, if a system on the field has an issue, a traditional reactive workflow would be that the customer calls a call center and reports the problem. The company servicing the device would dispatch a field service engineer; the field service engineer would go on site, do troubleshooting, literally smell, listen to noise, watch for lights, for blinking LEDs or other unusual issues, and would troubleshoot the issue, find the root cause and perhaps decide that a spare part needs to be replaced. He would order the spare part. The part would have to be delivered at the site, either immediately, or the engineer would need to come back another day when the part is available, and perform the repair. That means replacing the part, doing all the needed tests and validations, and finally releasing the system for clinical use. So as you can see, there are a lot of steps, and also handover of information between different people, between different organizations even. Would it be better to actually keep monitoring the installed base, keep observing the machine and, based on the information collected, detect or even predict when an issue is going to happen? And then, instead of reacting to a customer calling, proactively approach the customer, schedule preventive service, and therefore avoid the problem. So this is actually what we call Proactive Service. And this is what we have been transitioning to using Big Data, and Big Data is just one ingredient. In fact, there are more things that are needed. The devices themselves need to be designed for reliability and predictability. If the device is a black box that does not communicate its status to the outside world, if it does not transmit data, then of course it is not possible to observe and therefore predict issues. This of course requires a remote service infrastructure, or an IoT infrastructure as it is called nowadays: the possibility to connect the medical device with a data center and enterprise infrastructure, collect the data, and perform the remote troubleshooting and the predictions.
Also the right processes and the right organization need to be in place, because an organization that is, you know, waiting for the customer to call, and then has a number of field service engineers available and a certain amount of spare parts in stock, is a different organization from an organization that is continuously observing the installed base and is scheduling actions to prevent issues. And another pillar is knowledge management. So in order to realize predictive models and to have predictive service actions, it's important to manage knowledge about failure modes and about maintenance procedures very well, to have it standardized and digitalized and available. And last but not least, of course, the predictive models themselves. So we talked about transmitting data from the installed base, from the medical device, to an enterprise infrastructure that would analyze the data and generate predictions; the predictive models are exactly the last ingredient that is needed. So this is not something that, you know, I'm telling you for the first time; it is actually a strategic intent of Philips, where we aim for zero unplanned downtime. And we market it that way. It is also not a secret that we do it by using big data. And, of course, there could be other methods to achieve the same goal. But we started using big data already, well, quite many years ago. And one of the reasons is that our medical devices are already wired to collect lots of data about their functioning. So they collect events, error logs, and all kinds of sensor data. And to give you an idea, for example, just as an order of magnitude of the size of the data, one MRI scanner can log more than 1 million events per day, hundreds of thousands of sensor readings and tens of thousands of many other data elements. And so this is truly big data. On the other hand, this data was actually not designed for predictive maintenance; you have to think that a medical device of this type stays in the field for about 10 years. Some a little bit longer, some a bit shorter. So these devices were designed 10 years ago, and not all components were designed with predictive maintenance in mind, with IoT, and with the latest technology; at that time, you know, we were not so forward looking. So the key challenge is taking the data which is already available, which is already logged by the medical devices, integrating it, and creating predictive models. And if we dive a little bit more into the research challenges, this is one of the challenges: how to integrate diverse data sources, especially how to automate the costly process of data provisioning and cleaning? But also, once you have the data, let's say, how to create these models that can predict failures and the degradation of performance of a single medical device? Once you have these models and alerts, another challenge is how to automatically recommend service actions based on the probabilistic information on these possible failures. And once you have the insights, even if you can recommend an action, recommending an action should still be done with the goal of planning maintenance for generating value. That means balancing costs and benefits: preventing unplanned downtime without, of course, scheduling unnecessary interventions, because every intervention, of course, is a disruption for the clinical schedule.
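The cost-benefit balance described here can be pictured with a small sketch like the one below; the function names and figures are purely illustrative assumptions, not Philips numbers or methods.

```python
# Illustrative sketch only: schedule a proactive intervention when the
# expected cost of doing nothing exceeds the cost of a planned service action.
# All names and numbers below are hypothetical.

def should_schedule_maintenance(p_failure: float,
                                cost_unplanned_downtime: float,
                                cost_planned_intervention: float) -> bool:
    """Return True when acting now is cheaper in expectation than waiting."""
    expected_cost_of_waiting = p_failure * cost_unplanned_downtime
    return expected_cost_of_waiting > cost_planned_intervention

# Example: a model estimates a 30% chance of a part failing within the
# planning horizon; an unplanned outage is assumed to be far more expensive
# than a scheduled replacement, so the alert becomes a planned service action.
if should_schedule_maintenance(p_failure=0.30,
                               cost_unplanned_downtime=50_000.0,
                               cost_planned_intervention=12_000.0):
    print("Schedule proactive part replacement")
```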
And there are many more applications that can be built, such as the optimal management of spare parts supplies. So how do you approach this problem? Our approach was to collect into one database, Vertica, a large amount of historical data: first of all, historical data coming from the medical devices, so event logs, parameter values, system configurations, sensor readings, all the data that we have at our disposal, in the same database together with records of failures, maintenance records, service work orders, part replacements, contracts, so basically the evidence of failures. And once you have data from the medical devices and data from the failures in the same database, it becomes possible to correlate event logs, errors, signals and sensor readings with records of failures and records of part replacement and maintenance operations. And we did that also with a specific approach. So we created integrated teams, and every integrated team had three figures, not necessarily three people; they were actually multiple people. But there was at least one business owner from the service organization. And this business owner is the person who knows what is relevant, which use cases are relevant to solve for a particular type of product or a particular market, what basically is generating value or is worthwhile tackling as an organization. And we have data scientists; data scientists are the ones who actually can manipulate data. They can write the queries, they can write the models and robust statistics. They can create visualizations, and they are the ones who really manipulate the data. Last but not least, very important, is subject matter experts. Subject matter experts are the people who know the failure modes, who know about the functioning of the medical devices; perhaps they even designed them, they come from the design side, or they come from the service innovation side or even from the field. People who have been servicing the machines in real life for many, many years. So, they are familiar with the failure modes, but also familiar with the type of data that is logged, and the processes, and how the systems actually behave, if you allow me, in the wild, in the field. So the combination of these three figures was key. Because data scientists alone, basically statisticians or people who can do machine learning, are not very effective, because the data is too complicated, too complex, so they will spend a huge amount of time just trying to figure out the data. Or perhaps they will spend the time tackling things that are useless, because a subject matter expert knows much quicker which data points are useful, which phenomena can be found in the data or probably not found. So the combination of subject matter experts and data scientists is very powerful, and together, guided by a business owner, we could tackle the most useful use cases first. So, these teams set to work and they developed three things mainly. First of all, they developed insights on the failure modes. So, by looking at the data and analyzing information about what happened in the field, they find out exactly how things fail, in a very pragmatic and quantitative way. Also, they of course set out to develop the predictive models with associated alerts and service actions. And a predictive model is not just an alert, is not just a flag that turns on like a traffic light, you know; there's much more than that.
Such an alert is to be interpreted and used by a highly skilled and trained engineer, for example in a call center, who needs to evaluate that alert and plan a service action. A service action may involve ordering a replacement for an expensive part; it may involve calling up the customer hospital and scheduling a period of downtime, downtime to replace a part. So it has an impact on the clinical practice, could have an impact. So, it is important that the alert is coupled with sufficient evidence and information for such a highly skilled, trained engineer to plan the service action efficiently. So, it's a lot of work in terms of preparing data, preparing visualizations, and making sure that all information is represented correctly and in a compact form. Additionally, these teams get insight into the failure modes, and so they can provide input to the R&D organization to improve the products. So, to summarize this graphically: we took a lot of historical data coming from the medical devices, but also data from relational databases, with the service work orders, the part replacements, the contract information; we integrated it, and we set up the data analytics. From there we don't have value yet; value only starts appearing when we use the insights of data analytics, the models, on live data. When we process live data with the models we can generate alerts, and the alerts can be used to plan the maintenance; the planned maintenance, replacing downtime, is creating value. To give an idea of the type: I cannot show you the details of these modules, all of these predictive models. But to give you an idea, this is just a picture of some of the components of our medical devices for which we have models, for which we cover the failure modes: hard disks, clinical grade monitors, X-ray tubes, and so forth. This is for MRI machines: a lot of custom hardware and other types of amplifiers and electronics. The alerts are then displayed in a dashboard, what we call a remote monitoring dashboard. We have a team of remote monitoring engineers that basically surveys the install base, looks at this dashboard, picks up these alerts. And an alert, as I said before, is not just one flag; it contains a lot of information about the failure and about the medical device. And the remote monitoring engineers basically will pick up these alerts, review them, and create cases for the market organizations to handle. So, they see an alert coming in, they create a case, so that the particular call center in some country can call the customer and make an appointment to schedule a service action, or it can add a preventive action to the schedule of the field service engineer who is already supposed to go visit the customer, for example. This is a high-level picture of the overall data platform architecture. On the bottom we have the install base; the install base is formed by all our medical devices that are connected to our Philips remote service network. Data is transmitted in a secure way to our enterprise infrastructure, where we have a so-called Data Lake, which is basically an archive where we store the data as it comes from the customers; it is scrubbed and protected.
From there, we have ETL processes, Extract, Transform and Load, that in parallel analyze this information, parse all these files and all this data, and extract the relevant parameters. The reason is that the data coming from the medical devices is very verbose, and in legacy formats, sometimes in binary formats, in strange legacy structures. And therefore, we parse it and we structure it and we make it magically usable by data science teams. And the results are stored in a Vertica cluster, in a data warehouse. In the same data warehouse, we also store information from other enterprise systems, from all kinds of databases: SQL, Microsoft SQL Server, Teradata, SAP, from Salesforce applications. So the enterprise IT systems are also connected to Vertica, and their data is inserted into Vertica. And then from Vertica, the data is pulled by our predictive models, which are Python and R scripts that run on our proprietary analytics environment. From this proprietary environment we generate the alerts, which are then used by the remote monitoring application. This is the case of remote monitoring, but it's not the only application. We also have applications for reactive remote service. So whenever we cannot predict an issue from happening, or we cannot prevent an issue from happening, and we need to react on a customer call, then we can still use the data to very quickly troubleshoot the system, find the root cause, and advise on the best service action. Additionally, there are reliability dashboards, because all this data can also be used to perform reliability studies and improve the design of the medical devices, and it is used by R&D. And the access is with all kinds of tools. So Vertica gives the flexibility to connect with JDBC, to connect dashboards using Power BI or QlikView, or to just simply use R and Python directly to perform analytics. A little summary of the size of the data: for the moment we have integrated about 500 terabytes worth of data tables, about 30 trillion data points, more than eighty different data sources, for our complete connected install base, including our customer relationship management system, SAP. We have also integrated data from the factory and repair shops; this is very useful because having information from the factory allows us to characterize components and devices when they are new, when they are still not used. So we can model degradation, excuse me, predict failures much better. Also, we have many years of historical data and of course 24/7 live feeds. So, to get all this going, we have chosen very simple designs from the very beginning; the first system was developed back in 2015. At that time, we went from scratch to production in eight months, and it is also a very stable system. To achieve that, we apply what we call Exhaustive Error Handling. Most of the people attending this conference probably know that when you are dealing with Big Data, you face all kinds of corner cases you feel will never happen. But just because of the sheer volume of the data, you find all kinds of strange things. And that's what you need to take care of if you want to have a stable platform, a stable data pipeline.
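As a rough illustration of the kind of correlation described here, joining device event logs with part-replacement records that sit in the same Vertica cluster to label training data might look like the sketch below. The table names, columns, and connection details are hypothetical, and the snippet assumes the open-source vertica-python client rather than the actual Philips tooling.

```python
# Hypothetical sketch: correlate device event logs with part-replacement
# records in the same Vertica cluster to label training windows for a
# predictive model. Table and column names are invented for illustration.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",  # placeholder connection details
    "port": 5433,
    "user": "analyst",
    "password": "********",
    "database": "service_dwh",
}

# Label an event as "positive" when it falls in the 14 days preceding a
# replacement of the (hypothetical) part type we want to predict.
LABELLING_QUERY = """
    SELECT e.device_id,
           e.event_ts,
           e.error_code,
           CASE WHEN r.replacement_ts IS NOT NULL THEN 1 ELSE 0 END AS label
    FROM   device_events e
    LEFT JOIN part_replacements r
           ON  r.device_id = e.device_id
           AND r.part_type = 'xray_tube'
           AND e.event_ts BETWEEN r.replacement_ts - INTERVAL '14 days'
                              AND r.replacement_ts
    WHERE  e.event_ts >= '2019-01-01'
"""

with vertica_python.connect(**conn_info) as connection:
    cursor = connection.cursor()
    cursor.execute(LABELLING_QUERY)
    labelled_events = cursor.fetchall()  # rows feed the model-training step

print(f"Fetched {len(labelled_events)} labelled events")
```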
Another characteristic is that we need to handle live data, but we also need to be able to reprocess large historical datasets, because insights into the data are generated over time by the teams that are using the data. And very often they find not only defects, but they also have change requests for new data to be extracted, to be extracted in a different way, to be aggregated in a different way. So basically, the platform is continuously crunching data. Also, components have built-in monitoring capabilities. Transparency builds trust by showing how the platform behaves. People actually trust that they have all the data which is available, or, if they don't see the data or if something is not functioning, they can see why and where the processing has stopped. A very important point is documentation of data sources: every data point has so-called Data Provenance Fields. That is not only the medical device it comes from, with all its identifiers, but also from which file, from which moment in time, from which row, from which byte offset that data point comes. And not only that, but also when this data point was created and by whom, by whom meaning which version of the platform and of the ETL created the data point. This allows us to identify issues; and when an issue is identified and fixed, it's possible then to fix only the subset of the data that is impacted by that issue. Again, this builds trust in data, essential for this type of application. We actually have different environments in our analytics solution. One that we call the data science environment is more or less what I've shown so far; it's deployed in our Philips private cloud, but it can also be deployed in a public cloud such as Amazon. It contains the years of historical data, it allows interactive data exploration, human queries; therefore, it is a highly variable load. It is used for the training of machine learning algorithms, and this design has been made for allowing rapid prototyping and for large data volumes. Another environment is the so-called Production Environment, where we actually score the models with live data for the generation of the alerts. So this environment does not require years of data, just months, because a model, to make a prediction, does not necessarily need years of data; maybe some models even need a couple of weeks or a few months, three months, six months, depending on the type of data and on the failure which is being predicted. And this has highly optimized queries, because the applications are stable. They only change when we deploy new models or new versions of the models. And it is designed and optimized for low latency, high throughput and reliability: no human intervention, no human queries. And of course, there are development and staging environments. Another characteristic of all this work is what we call Data Driven Service Innovation. In all this work, we use the data in every step of the process. The first is business case creation. So, basically, some people ask how did you manage to unlock the investment to create such a platform and to work on it for years, you know, how did you start? Basically, we started with a business case, and for the business case, again, we used data.
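A small sketch of what per-data-point provenance of this kind could look like follows; the field names are illustrative assumptions, not the actual Philips schema.

```python
# Illustrative sketch of per-data-point provenance as described above.
# Field names are hypothetical, not the actual Philips schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class SensorReading:
    device_id: str          # which medical device produced the value
    parameter: str          # which parameter or sensor was read
    value: float
    measured_at: datetime
    # --- provenance fields ---
    source_file: str        # raw log file the value was parsed from
    source_row: int         # row within that file
    byte_offset: int        # byte offset of the record in the file
    loaded_at: datetime     # when the ETL created this data point
    etl_version: str        # which version of the platform/ETL created it

# If a defect is found in, say, ETL release "2.3.1", only rows with
# etl_version == "2.3.1" need to be reprocessed, not the whole table.
```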
Of course, you need to start somewhere, you need to have some data, but basically you can use data to make a quantitative analysis of the current situation and also make an as-accurate-as-possible quantitative estimate of value creation. If you have that, basically, you can justify the investments and you can start building. Next to that, data is used to decide where to focus your efforts. In this case, we decided to focus on the use cases that had the maximum estimated business impact, with business impact meaning here customer value as well as value for the company. So we want to reduce unplanned downtime, we want to give value to our customers. But it would not be sustainable if, for creating value, we would start replacing, you know, parts without any consideration for the cost of it. So it needs to be sustainable. Also, we then use data to analyze the failure modes, to actually dig into the data and understand how things fail, for visualization, and to do reliability analysis. And of course, data is key to do feature engineering for the development of the predictive models, for training the models, and for the validation with historical data. So data is all over the place. And last but not least, this architecture generates new data about the alerts: about how good the alerts are and how well they can predict failures, how much downtime is being saved, how many issues have been prevented. So this is also data that needs to be analyzed; it provides insights on the performance of these models and can be used to improve the models further. And once you have the performance of the models, you can use data to quantify as much as possible the value which is created. And then you go back to the first step: you created the first business case with estimates; can you actually show that you are creating value? The more you can close this feedback loop and quantify, the better it is for having more and more impact. Among the key elements that are needed for realizing this, I want to mention one about data documentation, a practice that we started already six years ago and that has proven to be very valuable. We always document how data is extracted and how it is stored, in data model documents. Data model documents specify how data goes from one place to the other, in this case from device logs, for example, to a table in Vertica. And they include things such as the definition of duplicates, queries to check for duplicates, and of course the logical design of the tables, the physical design of the tables, and the rationale. Next to it, there is a data dictionary that explains, for each column in the data model, from a subject matter expert perspective, what it means, such as its definition and meaning; if it's a measurement, the unit of measure and the range; or, if it's some sort of label, the expected values; or whether the value is raw or calculated. This is essential for maximizing the value of data, for allowing people to use the data. Last but not least, there is also an ETL design document; it explains how the transformation happens from the source to the destination, including, very important, the failure handling strategy. For example, when you cannot parse part of a file, should you load only what you can parse or drop the entire file completely?
So, import best effort or do all or nothing; how to populate records for which there is no value, what the default values are; and, you know, how the data is normalized or transformed, and also how to avoid duplicates. This again is very important to provide to the users of the data a full picture of the data itself. And this is not just a formal process: the documents are reviewed and approved by all the stakeholders, including the subject matter experts, the data scientists, and also a function that we have started called Data Architect. And of course the documents are available to the end users of the data. We even have links to the documents from the data warehouse. So if you get access to the database, and you're doing your research and you see a table or a view, you think, well, that could be interesting, it looks like something I could use for my research. Well, the data itself has a link to the document. So from the database, while you're exploring data, you can retrieve a link to the place where the document is available. This is just a quick summary of some of the results that I'm allowed to share at this moment. This is about image guided therapy: using our remote service infrastructure, for remotely connected systems with the right contracts, we have reduced downtime by 14%. More than one out of three cases are resolved remotely, without an engineer having to go on site. 82% is the first time right fix rate; that means that the issue is fixed either remotely or, if a visit to the site is needed, only one visit is needed. So at that moment, the engineer has the right part and fixes it straightaway. And this results on average in 135 hours more operational availability per year, and therefore the ability to treat more patients for the same costs. I'd like to conclude by citing some nice testimonials from some of our customers, showing that the value that we've created is really high impact, and this concludes my presentation. Thanks for your attention so far. >> Thank you Mauro, very interesting. And we've got a number of questions that have come in. So let's get to them. The first one: how many devices has Philips connected worldwide? And how do you determine which related sensor data workloads get analyzed with Vertica? >> Okay, so this is actually two questions. So the first question, how many devices are connected worldwide? Well, actually, I'm not allowed to tell you the precise number of connected devices worldwide, but what I can tell is that we are in the order of tens of thousands of devices. And of all types, actually. And then, how do we determine which related sensor data gets analyzed with Vertica? Well, a little bit like I said in the presentation, it is a combination of two approaches: a data driven approach and a knowledge driven approach. A knowledge driven approach because we make maximum use of our knowledge of the failure modes and the behavior of the medical devices and of their components to select what we think are promising data points and promising features. However, from that moment on, data science kicks in, and data science is used to look at the actual data and come up with quantitative information on what is really happening. So, it could be that an expert is convinced that a particular range of values of a sensor is indicative of a particular failure.
And it turns out that maybe it was too optimistic, or the other way around: that in practice there are many other situations he was not aware of that could happen. So thanks to the data we, you know, get a better understanding of the phenomenon and we get better modeling. I hope I answered that. Any other question? >> Yeah, we have another question. Do you have plans to perform any analytics at the edge? >> Now that's a good question. So I can't disclose our plans on this right now, but edge devices are certainly one of the options we look at to help our customers towards Zero Unplanned Downtime. Not only that, but also to facilitate the integration of our solution with existing and future hospital IT infrastructure. I mean, we're talking about advanced security and privacy, and guaranteeing that the data always stays safe, that patient data and clinical data do not go outside the perimeter of the hospital, of course, while we enhance our functionality and provide more value with our services. Yeah, so edge is definitely a very interesting area of innovation. >> Another question, what are the most helpful Vertica features that you rely on? >> I would say the first that comes to mind at this moment is ease of integration. Basically, with Vertica we are able to load any data source in a very easy way. And it can also be interfaced very easily with all types of analysis applications. And this, of course, is not unique to Vertica. Nevertheless, the added value here is that this is coupled with incredible speed, incredible speed for loading and for querying. So it's basically a very versatile tool to innovate fast for data science. Another thing is multiple projections, advanced encoding and compression. This allows us to perform the optimizations only when we need them and without having to touch applications or queries. So if we want to achieve high performance, we basically spend a little effort on improving the projections, and we can very often achieve dramatic increases in performance. Another feature is Eon mode. This is great for cloud deployment. >> Okay, another question. What is the number one lesson learned that you can share? >> I think my advice would be: document and control your entire data pipeline, end to end, and create positive feedback loops. What I hear often is that enterprises that are not digitally native, and I mean Philips is one of them, Philips is 129 years old as a company, so you can imagine the legacy that we have; we were not born with the Web, like web companies are, with, you know, everything online and everything digital. So enterprises that are not digitally native sometimes struggle to innovate in big data or to do data-driven innovation, because, you know, the data is not available or is in silos. Data is controlled by different parts of the organization with different processes. There is not a super strong enterprise IT system providing all the data, you know, for everybody with APIs. So my advice is, from the very beginning, to create as soon as possible an end-to-end solution, from data creation to consumption, that creates value for all the stakeholders of the data pipeline. It is important that everyone in the data pipeline, from the producer of the data to the consumers, gets a piece of value, a piece of the cake.
When the value is proven to all stakeholders, everyone will naturally contribute to keep the data pipeline running and to keep the quality of the data high. That's the lesson there. >> Yeah, thank you. And in the area of machine learning, what types of innovations do you plan to adopt to help with your data pipeline? >> So, in the area of machine learning, we're looking at things like automatically detecting the deterioration of models to trigger improvement actions, as well as, connected with that, active learning. Again, focused on improving the accuracy of our predictive models. So active learning is when additional human intervention, the labeling of difficult cases, is triggered. The machine learning classifier may not be able to, you know, classify correctly all the time, and instead of just randomly picking some cases for a human to review, you want the costly humans to only review the most valuable cases from a machine learning point of view, the ones that would contribute the most to improving the classifier. Another area is deep learning, and also applications of more generic anomaly detection algorithms. The challenge of anomaly detection is that we are not only interested in finding anomalies but also in recommending the proper service actions. Because without a proper service action, an alert generated because of an anomaly loses most of its value. So, this is where I think we, you know. >> Go ahead. >> No, that's, that's it, thanks. >> Okay, all right. So that's all the time that we have today for questions. I want to thank the audience for attending Mauro's presentation and also for your questions. If we weren't able to answer your question today, we'll respond via email. And again, our engineers will be on the Vertica forums awaiting your other questions. It would help us greatly if you could give us some feedback and rate the session before you sign off. Your rating will help guide us when we're looking at content to provide for the next Vertica BDC. Also, note that a replay of today's event and a PDF copy of the slides will be available on demand; we'll let you know when that will be by email, hopefully later this week. And of course, we invite you to share the content with your colleagues. Again, thank you for your participation today. This concludes this breakout session, and I hope you have a wonderful day. Thank you. >> Thank you
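To illustrate the active-learning idea Mauro describes (spend scarce expert labeling only on the cases the current model is least sure about), here is a small, generic Python sketch using uncertainty sampling. It is not Philips' implementation; the model, features and data are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pick_cases_for_review(model, unlabeled_features, batch_size=10):
    """Return indices of the unlabeled cases the classifier is least certain about."""
    proba = model.predict_proba(unlabeled_features)       # shape: (n_cases, n_classes)
    uncertainty = 1.0 - proba.max(axis=1)                 # low top-class probability = unsure
    return np.argsort(uncertainty)[::-1][:batch_size]     # most uncertain first

# One round of the loop: fit on what is labeled, ask experts about the rest.
rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
X_pool = rng.normal(size=(1000, 5))

model = LogisticRegression().fit(X_labeled, y_labeled)
to_review = pick_cases_for_review(model, X_pool)
# ...send X_pool[to_review] to the service experts, add their labels, retrain.
```

Each retraining round then moves the decision boundary where the experts' labels add the most information, which is the "most valuable cases" point made in the answer above.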
SUMMARY :
in the lower right corner of the slide. and perhaps decide that the spare part needs to be replaced. So let's get to them. and the behavior of the medical devices Do you have plans to perform any analytics at the edge? and guarantee that the data is always safe remains. on improving the projection. What is the number one lesson learned that you can share? from the producer of the data to the to the consumers, And in the area of machine learning, what types the deterioration of models to trigger improvement action, and a PDF copy of the slides will be available on demand,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mauro Barbieri | PERSON | 0.99+ |
Philips | ORGANIZATION | 0.99+ |
Gerard | PERSON | 0.99+ |
Frederik | PERSON | 0.99+ |
Phillips | ORGANIZATION | 0.99+ |
Sue LeClaire | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
two questions | QUANTITY | 0.99+ |
Mauro | PERSON | 0.99+ |
Eindhoven | LOCATION | 0.99+ |
4.6 thousand kilograms | QUANTITY | 0.99+ |
two rooms | QUANTITY | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
14% | QUANTITY | 0.99+ |
six months | QUANTITY | 0.99+ |
Anton | PERSON | 0.99+ |
4% | QUANTITY | 0.99+ |
135 hours | QUANTITY | 0.99+ |
three months | QUANTITY | 0.99+ |
2019 | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
82% | QUANTITY | 0.99+ |
two approaches | QUANTITY | 0.99+ |
eight months | QUANTITY | 0.99+ |
three people | QUANTITY | 0.99+ |
three rooms | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
first question | QUANTITY | 0.99+ |
more than 1000 patents | QUANTITY | 0.99+ |
1891 | DATE | 0.99+ |
Today | DATE | 0.99+ |
Power BI | TITLE | 0.99+ |
Netherlands | LOCATION | 0.99+ |
one ingredient | QUANTITY | 0.99+ |
three figures | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
over 100 countries | QUANTITY | 0.99+ |
later this week | DATE | 0.99+ |
tens of thousands | QUANTITY | 0.99+ |
SQL | TITLE | 0.98+ |
about 10% | QUANTITY | 0.98+ |
about 80,000 employees | QUANTITY | 0.98+ |
six years ago | DATE | 0.98+ |
Python | TITLE | 0.98+ |
three | QUANTITY | 0.98+ |
two brothers | QUANTITY | 0.98+ |
millions | QUANTITY | 0.98+ |
first step | QUANTITY | 0.98+ |
about 30 trillion data points | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
about 500 terabytes | QUANTITY | 0.98+ |
Microsoft | ORGANIZATION | 0.98+ |
first time | QUANTITY | 0.98+ |
each column | QUANTITY | 0.98+ |
hundreds of thousands | QUANTITY | 0.98+ |
this week | DATE | 0.97+ |
Salesforce | ORGANIZATION | 0.97+ |
first | QUANTITY | 0.97+ |
tens of thousands of devices | QUANTITY | 0.97+ |
first system | QUANTITY | 0.96+ |
about 10 years | QUANTITY | 0.96+ |
10 years ago | DATE | 0.96+ |
one visit | QUANTITY | 0.95+ |
Morrow | PERSON | 0.95+ |
up to 0.5 millimeters | QUANTITY | 0.95+ |
More than eighty different data sources | QUANTITY | 0.95+ |
129 years ago | DATE | 0.95+ |
first interaction | QUANTITY | 0.94+ |
one flag | QUANTITY | 0.94+ |
three things | QUANTITY | 0.93+ |
thousand | QUANTITY | 0.93+ |
50 frames per second | QUANTITY | 0.93+ |
First business | QUANTITY | 0.93+ |
Max Schulze, NBF | KubeCon 2018
>> From Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners. (upbeat music) >> Hello everyone and welcome back to live CUBE coverage here at Seattle for KubeCon, CloudNativeCon 2018. I'm John Furrier with Stu Miniman, breaking down all the action here for CloudNativeCon: a lot of ecosystem partners, a lot of new developers, a lot of great open-source action in theCUBE here covering it. We've been there from the beginning. Our next guest and user, Max Schulze, Advisor and Founder of NBF, welcome to theCUBE, thanks for coming on. >> Thank you, thank you for having me. >> So tell me about what you're working on. You are doing something pretty compelling with Kubernetes and CloudNative, take a minute to explain what you do. >> Yeah actually, we are advising a very large energy utility in the Nordics, and what we're trying to do with OpenShift and Kubernetes is actually to shift loads between different data centers based on power availability. So if you have wind and solar power, you know that you only get energy when the wind is blowing, so you really need to be able to match that load of the data center with the actual energy production, which is quite challenging to be honest. >> Max, you have a different take on 'follow-the-sun' than we used to talk about in IT, I'm guessing, yes? >> Yes >> Take us inside a little bit. The sustainability is really interesting, and some of the power, you know, usage and heat and everything; maybe you can explain that a little bit before we get into the data. >> Of course, so generally how we got to sustainable data centers was that in the Nordics you see a big growth of data centers in general, so all the hyperscalers: Google, Microsoft, AWS. They are all coming to build data centers in the Nordics. It's cold, power is cheap, you have lots of renewable energy available, and we started to think, 'Okay, but they have two problems essentially.' They generate a lot of heat, which is just emitted into the atmosphere so it's wasted, and the second problem is that they want 100% reliable power, and reliable power you only get from nuclear, gas or coal-fired power plants, not from renewables. So we looked into this, and we started to think about, okay, can we maybe get the heat out? Can we extract the heat from a data center and inject it into district heating grids and actually heat homes? With a hyperscale data center from Microsoft, 300 megawatts, you can heat about 150,000 homes, that's quite significant. >> Yeah and how are you doing that? I mean I talked to a company once that was like 'Oh well we're going to, you know, we'll just distribute the servers different places and there will be ambient heat off of it.' But you're extracting the heat and sharing it. Explain that a little bit more. >> So most existing data center projects, they extract the heat out of the air, but that's really inefficient. You get to about 100 degrees Fahrenheit, which is not high quality heat. So what we want is 140 degrees Fahrenheit, about 60 degrees Celsius, which means that we have to use liquid. So we have to use water in this case, and we use a cooling system, which is quite ironic, from a start-up in Germany called Cloud & Heat that uses hot water to cool servers.
So the water really flows at a very, very high speed through the data center and on its way picks up a very small amount of temperature, and we get the water out at 140 degrees Fahrenheit and we put it in at 120 degrees Fahrenheit. So it's not a big difference, but it flows at a very high speed. >> So it makes it work? Makes the numbers work. >> Exactly. >> And so what's the home count again? You mentioned one hyperscale data center, like a Microsoft data center, powers heat for how many homes? >> About 150,000 homes from 300 megawatts worth of data center. >> And you guys put this into a grid, so does the location of the homes need to be nearby, is there a co-location kind of map or? >> Yeah actually, in order to do this we have to move data centers closer to cities. But luckily, data centers actually want to be closer to cities because you're closer to peering points, and one of the reasons why they usually can't come closer to cities is because power is not available near a city. So we can give them both. Right, they can come closer to the city and we can give them power, and we get the heat in return. So, so everybody wins. >> Yeah so I mean, a lot of the discussion we've had is the interaction between software and my data center infrastructure. You've got a story of software with, you know, an actual city underneath the infrastructure. Maybe you can help explain how that was built out, what tools you're using, and walk us through this all. >> So we originally started with OpenStack, which was the first test, because in order to do this heat extraction we need to also steer the software, the workloads that run on the data center, because you know a chip only gets hot when the server actually does something, so we really had to figure this out. We started with OpenStack and then we started looking into load shifting, which immediately brought us to Kubernetes and then OpenShift, because you can use the internal scheduler to basically force loads across different locations. We connect it to our energy systems, to our forecasting systems and to our heat load management systems and then basically push workloads around. Right now we have two sites where we test this and it's not as easy as it sounds. And we basically want to move workloads, concentrate them where we want the heat. So um yeah, Red Hat is helping us a lot doing this but still it's not that easy. >> Yeah yeah, it's interesting. You know, I think back, you know, virtualization was about, you know, how can we drive some utilization and get some out? You really want to, you know, concentrate and run things hot. >> Yeah, exactly. >> Quite inter- Alright, tell us about your involvement in this ecosystem, you know, what brings you to the show this week, what do you get out of coming to a show like this? >> Yeah, actually I came because Red Hat invited us to talk at the OpenShift gathering at the beginning of the conference. And generally, we don't really have a commercial interest in making data centers or data infrastructure sustainable, we, we don't gain anything from that, but we believe it's necessary. If you look at the growth curve of data centers you can really see that they will consume more and more power, and the power they consume is not compatible with renewable energy. So we are hoping that we can influence people, and we come here to tell people our story, and we actually get great feedback from most of the nerds. >> Well it's a great story.
It's one of those things where you're starting to see data centers trying to solve these problems. It's great with the renewable energy; having that kind of success story is really huge. Um, you mentioned that data centers want to be close to cities. I got to ask the question, in Europe, well, you've lived around a lot of places, is there a city that's more cloud oriented, like is it London, you got Paris, you got... I know Amazon's got data centers in Ireland. Are there certain cities that have more of a CloudNative culture? How would you break down the affinity towards CloudNative? If you had to map Europe, which major countries and cities would you think are advanced, cloud thinking vs. tire kickers or, you know, people just kind of trying it? >> In Europe there is a region called the FLAP region, that's Frankfurt, London, Amsterdam and Paris. Those are where you have the highest concentration of data centers, but in terms of CloudNative adoption, I would say that probably the UK and the Netherlands have the highest adoption rates. Germany is always, I am German so I can say this, we are always a bit behind in terms of cloud technology because we're a bit scared and we don't know- >> You'll watch everyone test it out and then you guys will make it go faster. (John laughs) >> Maybe, maybe, maybe a bit more efficient, but uh, generally I think the cloud adoption rate in Germany is the lowest and the UK and the Netherlands is the highest, I would say, yeah. >> Awesome, well thanks so much. Congratulations on your success, we'll keep following you, and when we're in Europe we're going to come by and say hello. Thanks for coming and sharing the stories. TheCUBE, breaking down all the action at KubeCon, CloudNativeCon. I'm John with Stu Miniman. Day 2, we got three days of wall-to-wall coverage. Thanks for watching. (upbeat techno music)
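The load-shifting idea Schulze describes, picking which site runs a batch workload based on how much renewable power and heat demand each location has at that moment, can be sketched in a few lines of Python. This is only an illustration of the decision logic, not NBF's system; the site names, forecast numbers and thresholds are made up, and in a real OpenShift or Kubernetes setup the chosen site would then be targeted through the scheduler (for example with node or cluster labels) rather than by this function.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    renewable_forecast_kw: float   # expected wind/solar output over the next hour
    heat_demand_kw: float          # how much district-heating demand the site can feed
    current_load_kw: float         # power already drawn by running workloads

def choose_site(sites, job_power_kw):
    """Pick the site where the job's power draw is covered by renewables
    and where the resulting heat is most useful; return None if nowhere fits."""
    candidates = [
        s for s in sites
        if s.renewable_forecast_kw - s.current_load_kw >= job_power_kw
    ]
    if not candidates:
        return None   # hold the job until wind or sun comes back
    return max(candidates, key=lambda s: s.heat_demand_kw)

sites = [
    Site("stockholm-north", renewable_forecast_kw=900.0, heat_demand_kw=400.0, current_load_kw=700.0),
    Site("oslo-east", renewable_forecast_kw=1200.0, heat_demand_kw=650.0, current_load_kw=800.0),
]
target = choose_site(sites, job_power_kw=250.0)
print(target.name if target else "defer job")   # -> oslo-east
```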
SUMMARY :
2018, brought to you by in the cubes here covering it. minute to explain what you do. load of the data center with some of the power, you know, and the second problem is Yeah and how are you doing that? So we have to use water in this case Makes the numbers work. you mentioned one hyperscale data center, of data center. the city and we can give them with you know, actual like So we originally started You really want to you know, and we actually get great How would you break down the in the UK you have the most it out and then you guys will Netherlands is the highest I would we'll keep following you
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ireland | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Max Schulze | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Germany | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
100% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
UK | LOCATION | 0.99+ |
Paris | LOCATION | 0.99+ |
two sites | QUANTITY | 0.99+ |
John | PERSON | 0.99+ |
Netherlands | LOCATION | 0.99+ |
Amsterdam | LOCATION | 0.99+ |
London | LOCATION | 0.99+ |
Frankfurt | LOCATION | 0.99+ |
300 megawatts | QUANTITY | 0.99+ |
120 degrees Fahrenheit | QUANTITY | 0.99+ |
second problem | QUANTITY | 0.99+ |
140 degrees Fahrenheit | QUANTITY | 0.99+ |
Nordics | LOCATION | 0.99+ |
two problems | QUANTITY | 0.99+ |
Redhat | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
Seattle | LOCATION | 0.99+ |
Seattle, Washington | LOCATION | 0.99+ |
KubeCon | EVENT | 0.99+ |
both | QUANTITY | 0.99+ |
three days | QUANTITY | 0.98+ |
CloudNative | ORGANIZATION | 0.98+ |
this week | DATE | 0.98+ |
john | PERSON | 0.98+ |
first test | QUANTITY | 0.98+ |
CloudNativeCon | EVENT | 0.98+ |
about 150,000 homes | QUANTITY | 0.98+ |
NBF | ORGANIZATION | 0.98+ |
about 100 degrees Fahrenheit | QUANTITY | 0.97+ |
CloudNativeCon2018 | EVENT | 0.97+ |
about 60 degrees celsius | QUANTITY | 0.97+ |
About 150,000 homes | QUANTITY | 0.97+ |
CloudNativeCon North America 2018 | EVENT | 0.96+ |
KubeCon 2018 | EVENT | 0.92+ |
Redhat | PERSON | 0.92+ |
Day 2 | QUANTITY | 0.89+ |
Openshift | EVENT | 0.87+ |
CUBE | ORGANIZATION | 0.86+ |
Openstack | ORGANIZATION | 0.84+ |
FLAP | LOCATION | 0.79+ |
Heat | ORGANIZATION | 0.74+ |
German | LOCATION | 0.72+ |
Kubernetes | ORGANIZATION | 0.7+ |
CloudNative | OTHER | 0.6+ |
Openshift | ORGANIZATION | 0.57+ |
Kubernetes | TITLE | 0.51+ |
data | QUANTITY | 0.5+ |
Madhu Matta, Lenovo & Dr. Daniel Gruner, SciNet | Lenovo Transform 2018
>> Live from New York City it's theCube. Covering Lenovo Transform 2.0. Brought to you by Lenovo. >> Welcome back to theCube's live coverage of Lenovo Transform, I'm your host Rebecca Knight along with my co-host Stu Miniman. We're joined by Madhu Matta; He is the VP and GM High Performance Computing and Artificial Intelligence at Lenovo and Dr. Daniel Gruner the CTO of SciNet at University of Toronto. Thanks so much for coming on the show gentlemen. >> Thank you for having us. >> Our pleasure. >> So, before the cameras were rolling, you were talking about the Lenovo mission in this area to use the power of supercomputing to help solve some of society's most pressing challenges; and that is climate change, and curing cancer. Can you talk a little bit, tell our viewers a little bit about what you do and how you see your mission. >> Yeah so, our tagline is basically, Solving humanity's greatest challenges. We're also now the number one supercomputer provider in the world as measured by the rankings of the top 500 and that comes with a lot of responsibility. One, we take that responsibility very seriously, but more importantly, we work with some of the largest research institutions, universities all over the world as they do research, and it's amazing research. Whether it's particle physics, like you saw this morning, whether it's cancer research, whether it's climate modeling. I mean, we are sitting here in New York City and our headquarters is in Raleigh, right in the path of Hurricane Florence, so the ability to predict the next anomaly, the ability to predict the next hurricane is absolutely critical to get early warning signs and a lot of survival depends on that. So we work with these institutions jointly to develop custom solutions to ensure that all this research one it's powered and second to works seamlessly, and all their researchers have access to this infrastructure twenty-four seven. >> So Danny, tell us a little bit about SciNet, too. Tell us what you do, and then I want to hear how you work together. >> And, no relation with Skynet, I've been assured? Right? >> No. Not at all. It is also no relationship with another network that's called the same, but, it doesn't matter. SciNet is an organization that's basically the University of Toronto and the associated research hospitals, and we happen to run Canada's largest supercomputer. We're one of a number of computer sites around Canada that are tasked with providing resources and support, support is the most important, to academia in Canada. So, all academics, from all the different universities, in the country, they come and use our systems. From the University of Toronto, they can also go and use the other systems, it doesn't matter. Our mission is, as I said, we provide a system or a number of systems, we run them, but we really are about helping the researchers do their research. We're all scientists. All the guys that work with me, we're all scientists initially. We turned to computers because that was the way we do the research. You can not do astrophysics other than computationally, observationally and computationally, but nothing else. Climate science is the same story, you have so much data and so much modeling to do that you need a very large computer and, of course, very good algorithms and very careful physics modeling for an extremely complex system, but ultimately it needs a lot of horsepower to be able to even do a single simulation. 
So, what I was showing with Madhu at that booth earlier was results of a simulation that was done just prior us going into production with our Lenovo system where people were doing ocean circulation calculations. The ocean is obviously part of the big Earth system, which is part of the climate system as well. But, they took a small patch of the ocean, a few kilometers in size in each direction, but did it at very, very high resolution, even vertically going down to the bottom of the ocean so that the topography of the ocean floor can be taken into account. That allows you to see at a much smaller scale the onset of tides, the onset of micro-tides that allow water to mix, the cold water from the bottom and the hot water from the top; The mixing of nutrients, how life goes on, the whole cycle. It's super important. Now that, of course, gets coupled with the atmosphere and with the ice and with the radiation from the sun and all that stuff. That calculation was run by a group from, the main guy was from JPL in California, and he was running on 48,000 cores. Single runs at 48,000 cores for about two- to three-weeks and produced a petabyte of data, which is still being analyzed. That's the kind of resolution that's been enabled... >> Scale. >> It gives it a sense of just exactly... >> That's the scale. >> By a system the size of the one we have. It was not possible to do that in Canada before this system. >> I tell you both, when I lived on the vendor side and as an analyst, talking to labs and universities, you love geeking out. Because first of all, you always have a need for newer, faster things because the example you just gave is like, "Oh wait." "If I can get the next generation chipset." "If the networking can be improved." You know you can take that petabyte of data and process it so much faster. >> If I could only get more money to buy a bigger one. >> We've talked to the people at CERN and JPL and things like that. - Yeah. >> And it's like this is where most companies are it's like, yeah it's a little bit better, and it might make things a little better and make things nice, but no, this is critical to move along the research. So talk a little bit more about the infrastructure and what you look for and how that connects to the research and how you help close that gap over time. >> Before you go, I just want to also highlight a point that Danny made on solving humanity's greatest challenges which is our motto. He talked about the data analysis that he just did where they are looking at the surface of the ocean, as well as, going down, what is it, 264 nautical layers underneath the ocean? To analyze that much data, to start looking at marine life and protecting marine life. As you start to understand that level of nautical depth, they can start to figure out the nutrients value and other contents that are in that water to be able to start protecting the marine life. There again, another of humanity's greatest challenge right there that he's giving you... >> Nothing happens in isolation; It's all interconnected. >> Yeah. >> When you finally got a grant, you're able to buy a computer, how do you buy the computer that's going to give you the most bang for your buck? The best computer to do the science that we're all tasked with doing? It's tough, right? We don't fancy ourselves as computer architects; we engage the computer companies who really know about architecture to help us do it. 
The way we did our procurement was, 'OK vendors, we have a set pot of money, we're willing to spend every last penny of this money, you give us the biggest and the baddest for our money.' Now, it has to meet a certain set of criteria. You have to be able to solve a number of benchmarks, some sample calculations that we provided. The one that gives us the best performance, that's a bonus. It also has to be able to do it with the least amount of power, so we don't have to heat up the world and pay through the nose for power. Those are objective criteria that anybody can understand. But then, there are also the other criteria, so, how well will it run? How is it architected? How balanced is it? Did we get the I/O subsystem for all the storage, the one that actually meets the criteria? What other extras do we have that will help us make the system run in a much smoother way and for a wide variety of disciplines, because we run the biologists together with the physicists and the engineers and the humanitarians, the humanities people. Everybody uses the system. To make a long story short, the proposal that we got from Lenovo won the bid both in terms of what we got in terms of hardware and also the way it was put together, which was quite innovative. >> Yeah. >> I want to hear about, you said give us the biggest, the baddest, we're willing to empty our coffers for this, so then where do you go from there? How closely do you work with SciNet, how does the relationship evolve and do you work together to innovate and kind of keep going? >> Yeah. I see it as not a segment or a division. I see High Performance Computing as a practice, and with any practice, it's many pieces that come together; you have a conductor, you have the orchestra, but at the end of the day the delivery of all those systems is the concert. That's the way to look at it. To deliver this, our practice starts with multiple teams; one's a benchmarking team that understands the application that Dr. Gruner and SciNet will be running, because they need to tune the performance of the cluster to the application. The second team is a set of solution architects that are deep engineers and understand our portfolio. Those two work together to say, against this application, "Let's build," like he said, "the biggest, baddest, best-performing solution for that particular application." So, those two teams work together. Then we have the third team that kicks in once we win the business, which is coming on site to deploy, manage, and install. When Dr. Gruner talks about the infrastructure, it's a combination of hardware and software that all comes together, and the software is open-source based and we built it ourselves, because we just felt there weren't the right tools in the industry to manage this level of infrastructure at that scale. All this comes together to essentially rack and roll onto their site.
Now, this can backfire, some data centers are very square they will only prescribe what they want. We're not prescriptive at all, we said, "Give us ideas about what can make this work better." These are the intangibles in a procurement process. You also have to believe in the team. If you don't know the team or if you don't know their track record then that's a no-no, right? Or, it takes points away. >> We brought innovations like DragonFly, which Dr. Dan will talk about that, as well as, we brought in for the first time, Excelero, which is a software-defined storage vendor and it was a smart part of the bid. We were able to flex muscles and be more creative versus just the standard. >> My understanding, you've been using water cooling for about a decade now, maybe? - Yes. >> Maybe you could give us a little bit about your experiences, how it's matured over time, and then Madhu will talk and bring us up to speed on project Neptune. >> Okay. Our first procurement about 10 years ago, again, that was the model we came up with. After years of wracking our brains, we could not decide how to build a data center and what computers to buy, it was like a chicken and egg process. We ended up saying, 'Okay, this is what we're going to do. Here's the money, here's is our total cost of operation that we can support." That included the power bill, the water, the maintenance, the whole works. So much can be used for infrastructure, and the rest is for the operational part. We said to the vendors, "You guys do the work. We want, again, the biggest and the baddest that we can operate within this budget." So, obviously, it has to be energy efficient, among other things. We couldn't design a data center and then put in the systems that we didn't know existed or vice-versa. That's how it started. The initial design was built by IBM, and they designed the data center for us to use water cooling for everything. They put rear door heat exchanges on the racks as a means of avoiding the use of blowing air and trying to contain the air which is less efficient, the air, and is also much more difficult. You can flow water very efficiently. You open the door of one of these racks. >> It's amazing. >> And it's hot air coming out, but you take the heat, right there in-situ, you remove it through a radiator. It's just like your car radiator. >> Car radiator. >> It works very well. Now, it would be nice if we could do even better by doing the hot water cooling and all that, but we're not in a university environment, we're in a strip mall out in the boonies, so we couldn't reuse the heat. Places like LRZ they're reusing the heat produced by the computers to heat their buildings. >> Wow. >> Or, if we're by a hospital, that always needs hot water, then we could have done it. But, it's really interesting how the option of that design that we ended up with the most efficient data center, certainly in Canada, and one of the most efficient in North America 10 years ago. Our PUE was 1.16, that was the design point, and this is not with direct water cooling through the chip. >> Right. Right. >> All right, bring us up to speed. Project Neptune, in general? >> Yes, so Neptune, as the name suggests, is the name of the God of the Sea and we chose that to brand our entire suite of liquid cooling products. Liquid cooling products is end to end in the sense that it's not just hardware, but, it's also software. 
The other key part of Neptune is that a lot of these, in fact most of these, products were built not in a vacuum but designed and built in conjunction with key partners like the Barcelona Supercomputing Center and LRZ in Germany, in Munich. These were real-life customers working with us jointly to design these products. Neptune essentially, very simplistically put, is an entire suite of hardware and software that allows you to run very high-performance processors at a level of power and cooling utilization that's like using a much lower-power processor, in terms of how much heat it dissipates. The other key part is, you know, the normal way of cooling anything is to run chilled water; we don't use chilled water. You save the money of chillers. We use ambient temperature, up to 50 degrees, 90% efficiency, 50 degrees goes in, 60 degrees comes out. It's really amazing, the entire suite. >> It's 50 Celsius, not Fahrenheit. >> It's Celsius, correct. >> Oh. >> Dr. Gruner talked about SciNet with the rear-door heat exchanger. You actually got to stand in front of it to feel the magic of this, right? As geeky as that is. You open the door and it's this hot 60-, 65-degree C air. You close the door and it's this cool 20-degree air that's coming out. So, the costs of running a data center drop dramatically with either the rear-door heat exchanger, our direct-to-node product, which we just released, the SE650, or something we call the thermal-transfer module, which replaces a normal heat sink, where we bring water-cooling goodness to an air-cooled product. >> Danny, I wonder if you can give us the final word, just the climate science in general, how's the community doing? Any technological things that are holding us back right now or anything that excites you about the research right now? >> Technology holds you back by the sheer size of the calculations that you need to do, but it's also physics that holds you back. >> Yes. Because doing the actual modeling is very difficult and you have to be able to believe that the physics models actually work. This is one of the interesting things that Dick Peltier, who happens to be our scientific director and is also one of the top climate scientists in the world, has proven through some of his calculations: that the models are actually pretty good. The models were designed for current conditions, with current data, so that they would reproduce the evolution of the climate that we can measure today. Now, what about climate that started happening 10,000 years ago, right? The climate was going on; it's been going on forever and ever. There have been glaciations; there have been all these events. It turns out that it has been recorded in history that there are some oscillations in temperature and other quantities that happen about every 1,000 years, and nobody had been able to prove why they would happen. It turns out that the same models that we use for climate calculations today, if you take them back and do what's called paleoclimate, you start with approximating the conditions that happened 10,000 years ago, and then you move it forward, these things reproduce those oscillations exactly. It's very encouraging that the climate models actually make sense. We're not talking in a vacuum. We're not predicting the end of the world just because. These calculations are right. They're correct. They're predicting the temperature of the earth is climbing and it's true, we're seeing it, but it will continue unless we do something. Right? It's extremely interesting.
Now he's he's beginning to apply those results of the paleoclimate to studies with anthropologists and archeologists. We're trying to understand the events that happened in the Levant in the Middle East thousands of years ago and correlate them with climate events. Now, is that cool or what? >> That's very cool. >> So, I think humanity's greatest challenge is again to... >> I know! >> He just added global warming to it. >> You have a fun job. You have a fun job. >> It's all the interdisciplinarity that now has been made possible. Before we couldn't do this. Ten years ago we couldn't run those calculations, now we can. So it's really cool. - Amazing. Great. Well, Madhu, Danny, thank you so much for coming on the show. >> Thank you for having us. >> It was really fun talking to you. >> Thanks. >> I'm Rebecca Knight for Stu Miniman. We will have more from the Lenovo Transform just after this. (tech music)
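As a rough worked example of the warm-water numbers quoted above (water going in around 50 degrees C and coming out around 60 degrees C), here is a small Python calculation of the flow rate needed to carry away the heat of a hypothetical 30 kW rack. The rack power is an assumption made for illustration, not a Lenovo or SciNet specification.

```python
# Heat carried by a water loop: Q = m_dot * c_p * dT
# Solve for the flow rate needed to remove a rack's heat with a 10 C rise.
RACK_HEAT_W = 30_000.0      # hypothetical 30 kW rack
C_P_WATER = 4186.0          # J/(kg*K), specific heat of water
DELTA_T = 10.0              # K, e.g. 50 C in, 60 C out

mass_flow = RACK_HEAT_W / (C_P_WATER * DELTA_T)     # kg/s
litres_per_minute = mass_flow * 60                  # ~1 kg of water per litre

print(f"{mass_flow:.2f} kg/s  (~{litres_per_minute:.0f} L/min)")
# -> about 0.72 kg/s, roughly 43 L/min for the whole rack
```

For comparison, moving the same heat in air with the same 10-degree rise would take roughly four times the mass flow and, since air is around 800 times less dense than water, thousands of times the volume, which is why the conversation keeps coming back to water.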
SUMMARY :
Brought to you by Lenovo. and Dr. Daniel Gruner the CTO of SciNet and that is climate change, and curing cancer. so the ability to predict the next anomaly, and then I want to hear how you work together. and the hot water from the top; The mixing of nutrients, By a system the size of the one we have. and as an analyst, talking to labs and universities, to buy a bigger one. and things like that. and what you look for and how that connects and other contents that are in that water and the humanitarians, the humanities people. of that many systems is the concert. With Lenovo, at least the team, as we know it now, and it was a smart part of the bid. for about a decade now, maybe? and then Madhu will talk and bring us up to speed and the rest is for the operational part. And it's hot air coming out, but you take the heat, by the computers to heat their buildings. that we ended up with the most efficient data center, Right. Project Neptune, in general? is the name of the God of the Sea You open the door and it's this hot 60-, 65-degree C air. by the virtual size of the calculations that you need to do, of the paleoclimate to studies with anthropologists You have a fun job. It's all the interdisciplinarity We will have more from the Lenovo Transform just after this.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dick Peltier | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Canada | LOCATION | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Danny | PERSON | 0.99+ |
60 | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Raleigh | LOCATION | 0.99+ |
SciNet | ORGANIZATION | 0.99+ |
48,000 cores | QUANTITY | 0.99+ |
Madhu | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
Bruner | PERSON | 0.99+ |
New York City | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
University of Toronto | ORGANIZATION | 0.99+ |
20-degree | QUANTITY | 0.99+ |
Skynet | ORGANIZATION | 0.99+ |
Munich | LOCATION | 0.99+ |
50 degree | QUANTITY | 0.99+ |
CERN | ORGANIZATION | 0.99+ |
two teams | QUANTITY | 0.99+ |
Califo | LOCATION | 0.99+ |
North America | LOCATION | 0.99+ |
JPL | ORGANIZATION | 0.99+ |
Madhu Matta | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Dan | PERSON | 0.99+ |
third team | QUANTITY | 0.99+ |
60 degree | QUANTITY | 0.99+ |
50 Celsius | QUANTITY | 0.99+ |
second team | QUANTITY | 0.99+ |
iOS | TITLE | 0.99+ |
65-degree C | QUANTITY | 0.99+ |
iXsystems | ORGANIZATION | 0.99+ |
LRZ | ORGANIZATION | 0.99+ |
Ten years ago | DATE | 0.99+ |
10,000 years ago | DATE | 0.98+ |
thousands of years ago | DATE | 0.98+ |
Daniel Gruner | PERSON | 0.98+ |
both | QUANTITY | 0.98+ |
264 nautical layers | QUANTITY | 0.98+ |
Middle East | LOCATION | 0.98+ |
one | QUANTITY | 0.98+ |
earth | LOCATION | 0.98+ |
first time | QUANTITY | 0.98+ |
Single | QUANTITY | 0.98+ |
each direction | QUANTITY | 0.98+ |
Earth | LOCATION | 0.98+ |
10 years ago | DATE | 0.98+ |
Gruner | PERSON | 0.98+ |
twenty-four seven | QUANTITY | 0.97+ |
three-weeks | QUANTITY | 0.97+ |
Neptune | LOCATION | 0.96+ |
Barcelona Supercomputer | ORGANIZATION | 0.96+ |
single simulation | QUANTITY | 0.96+ |
today | DATE | 0.95+ |
SE650 | COMMERCIAL_ITEM | 0.94+ |
Dr. | PERSON | 0.94+ |
theCube | COMMERCIAL_ITEM | 0.94+ |
Hurricane Florence | EVENT | 0.94+ |
this morning | DATE | 0.93+ |
up to 50 degrees | QUANTITY | 0.92+ |
Levant | LOCATION | 0.92+ |
Alex Mashinsky, Celsius | Blockchain week NYC 2018
>> Announcer: From New York, it's theCUBE covering Blockchain Week. Now here's John Furrier. >> Hello everyone, welcome back. I'm John Furrier, the host of theCUBE. We're here in New York City for on the ground coverage for three days, wall-to-wall for Blockchain Week, New York's part of Consensus 2018. Sold out show, we're out in the open. Open (mumbles) to all the cons here. Next guest is Alex Mashinsky, Founder and CEO of Celsius. Seasoned entrepreneur, great debater on stage, great brawl recently at the Milken Institute. We'll talk about that. But more importantly, he's got a great project called Celsius, welcome to theCUBE. Thanks. >> Thanks for having us, John. >> So, I love that we just chatted before the camera turned on about some of the things you've done. You've gotten into a little bit of a great heated panel discussion. With someone who actually doesn't even hold cryptocurrency. He's saying it's all bullshit. >> Yes >> Right, so tell us about the story. It was written up by Bloomberg; what was this famous brawl at the Milken Institute? >> Yeah, so the Milken Institute, they've been having conferences for the last 25 years and they're trying to combine making money with doing good in the world, right? So, it's doing well and doing good at the same time. And that's what crypto is all about, right? And so, they had a panel about crypto with me and Nouriel Roubini, who's like Doctor Doom, who predicted the last 15 recessions. There were only two, but he predicted all 15 of them. So, I was telling him, even a broken clock is right twice a day, you know? He was going at me, he was going at the community, he was calling it a scam. And when you don't own any coin and you have not come to an event like this and seen 8,000 people celebrate this innovation, power to the people, then what are you talking about? So, I was there to really defend the community. It wasn't about me or him. >> Yeah, you did a good job. Well, thank you for doing that. Also, you're on a great project. I've been talking about a lot of other things I want to get to in the industry that you have a view and opinion on I would like to get. But your project Celsius. Take a minute to explain that, because I think this highlights really what's going on. I chatted earlier today about token economics. This is a new way, a new infrastructure, a new capability, a new mechanism that's really becoming powerful, of a network effect. >> Yes. >> So, the old world was DNS, a 30-year-old stack on ecommerce, search engines; they're not accurate for network effects. A new dynamic, a new data source is happening and it's creating new value, new data. >> Yes. >> Talk about Celsius the project and your value proposition. >> Right, so Celsius Network is basically trying to create an algorithmic cloud-based solution that does everything in your best interest. So, you have to think of it as a basket of financial services that do simple things like give you a loan or allow you to earn interest, give you access to a lot of great financial products, insurance and other things, that altogether do everything in your best interest. And what we're doing is we're enabling 100 million new people to come into the cryptocommunity and enabling them to benefit from all these things, both from the increase in the value of the coins but also by allowing their money to earn money for them. And today, if you think about banks, right? They take your money, right?
You make a deposit, they take your money, they'll lend it to me on my credit card, they charge me 25%, they give you 1%. So, they take all that margin that you talked about. They squeeze all of that and keep it to themselves. >> And they're representing two people. It's like a realtor, who do you represent, the buyer or the seller? >> They're a toll collector in the middle, exactly. They're not adding any value. >> So, the new shift is on user value-- >> Exactly. >> And you see real-world examples of this. The whole Facebook debacle, who owns your data, and Mark Zuckerberg was testifying in front of the Senate in Congress, saying, "No, we don't sell your data." But they license the data and they use it. >> They extract all the value from it. >> They don't actually sell the data, true. But they license the shit out of it, to target you. >> They squeeze every last penny out of it. >> This is now obvious to people. >> Yes. >> That problem. >> Yes. >> Talk about the cryptobenefits, where is this shift happening, users, the power to the people, I get the phrase, but where is it happening? The token level-- >> So for example, yeah, let's take an example: most of the people here on this floor, they take their coins, they put them in exchanges, they celebrate the fact that the coin went up 50, 100% or whatever, but they don't realize that they're leaving a lot of money on the table, because these exchanges do shorting, front-running, all kinds of other stuff that should be illegal, but they do it. So they announce these amazing earnings, Binance announced amazing earnings, and a lot of those earnings come from money that should be given back to you and me. So, if you think about the credit card company giving you two percent back, this is kind of the same thing. We are basically taking all of those earnings and giving them back to the coin-holders, and we're saying, "Don't keep your money on exchanges, keep your money in a wallet that represents your best interest." It extracts all that value and gives it back to you. >> And so, what's your value proposition? You know what, you should say, "Use our wallet, use our system." >> Right. >> And then you represent their currency? >> So, we huddle together, we create a giant pool of BTC, a giant pool of ETH, or other coins, and we lend against that. So, we can do loans to the community, we charge nine percent for asset-backed loans, basically, so you need a loan against your crypto. This way you don't have to pay taxes, you can defer your tax, you can get liquidity without triggering all the tax that today you have to-- or you can just earn interest. So, without selling the coins, you can basically generate five to nine percent income that's continuous on top of that appreciation; you still get all the appreciation of the coin, but you're also generating income. >> So, you can bring contextual services around the crypto-holder interest. >> Yeah, so we find people willing to pay that. For example, other crypto-holders who want a loan, and they pay us nine percent, we give five percent to the community. Hedge funds who short BTC or ETH, they pay us ten or 15%, we give most of it back to the community. But the beauty is that the coin-holder doesn't have to do anything. They don't have to move from this account to that account. They don't do transactions. All they have to do is decide if Celsius Network is doing everything in their best interest or not.
And the point is is that the next 100 million people that are going to join crypto, they're not speculators or anarchists or libertarians like most of the people here on the floor. They're people who kind of look at all this, saying, "It's too complicated, I don't know what to do, I'm not going to get in at the right time, I'm not going to get out at the right time." They don't have anyone they can trust. >> So, I'm going to be able to ask the Average Joe six-pack question, "Hey that's all fine, I love what you're doing. Come on, sign me up. But wait a minute. If you put all this crypto in one spot, the frickin' hackers are going to get it. >> Right. >> Because, how do you protect me against-- I heard, see, Mt. Gox was in the-- and all this stuff's going on, I'm worried that it's going to get hacked. Even wherever I put it." >> Exactly. And then Nouriel basically asked me the same question. So, in 10 years since BitCoin was created, there hasn't been a single instance of anyone cracking the blockchain itself. All the theft, everything that happened was because we gave somebody our private key and we entrusted them with it, and they screwed up. Mt. Gox, it basically broke into the exchange and so on. So, we keep everything in cold storage. And it's not ours, we have a custodian that is a giant company that is willing to accept all that, keep it in cold storage and we lend against it. We lend against the pull. >> So the private key's going in cold storage? >> Everything is staying in cold storage, which is the safest way to keep your crypto. It's much safer than keeping it on an exchange or keeping it in a different place. >> And it's all through--it's encryption, it's never safe to--a private key's a private key. Right, I mean, we've seen this before. >> Exactly. >> It's not rocket science. >> But even if you keep it in your home, in your safe, that's not as safe as putting it in a facility that is resistant to nuclear attack and has four layers of security and no human can get into the last room. It's a physical connection. >> I've heard this problem, just estate planning, someone dies, where's his cryptokey? >> Exactly. >> Unlocking, say 30 to 100 million dollars' worth of crypto. >> Exactly. >> It's not obvious. Well, the guy was smart, he put it in lock boxes all around the country. Wait a minute, no one knows where they are. >> But as a custodian, if you show us that you are the ultimate heir and you have the legal representation, then we can handle it, right? We can transfer that. But really, you're protecting it against a hacker coming in and stealing it from you. All the legal ramifications still apply. >> So, let's talk about the industry. What do you like about the industry right now, and what do you think that needs more work on, faster, or behavior-wise, what's your general temperature-taking of the current community? A lot of back-end work being done. Some complaints I heard about the demos, where some people say the front end was pretty sucky. >> Yes. >> But I think that's because a lot of back end work's being done. >> Well, this reminds me of 95 through 2000, I wrote some of the original Void protocols and everybody told me it's not going to work, the Internet is too slow, you can't scale, it's not safe. >> Yeah. >> I hear the same arguments again and again. >> Exactly. >> Today a billion people use Void every day and they don't even know who created it or how it works. I go in a room, I do speeches, right? And I ask, "Who here knows how Void works?" 
Not a single hand goes up. So, we need to get to the point where blockchain and crypto works the same way, no one needs to understand how it works, they just need to use it and trust it. So, the biggest thing I think holding us up right now is actually not technical. Because there's over 130 different blockchains. And some of them solves the scalability issues and security issues. The problem is is that we kind of have the early adopter phase, but we cannot leapfrog into the mass adoption phase. Because we're still at the early phase of operation. >> Exactly, is this just evolution or is it something specific? >> Well, the applications that we have today are not things that most of the people on the planet can use. That's what I'm saying, like for example, lending and borrowing is much more attractive than trading coins with each other. >> Yeah, it's like the Web, and Web 1.0, I mean-- >> Exactly. >> Search was the first application, and then everyone went to there, check their stock quotes. >> Looking at travel-- >> Travel, buy your car-- >> Exactly. >> Basic Maslow's hierarchy of needs kind of things. >> Yes. >> So, but that was interesting, because it was a whole new way. And by the way, same arguments I heard in the Web. "It's so slow. A mini-computer's so much faster than this AOL thing at 9600 bot modem." But the apples weren't being compared to other apples. It was replacing direct mail where I used to put stamps on envelopes and mail things. >> That's right, look. The bank gives you one percent. We pay five percent. So, that is a very attractive reason to switch from the bank to Celsius. Also, most people don't realize that the power the bank has is because we make all the deposits there. We stop depositing money there, they will have to pay us five percent, because as the money leaves them, they will have to raise the rates, they're going to have to attract you with more interest. So, it's a win-win, the community wins on the crypto side, and we're forcing the banks to do the right thing. >> Alright, I want to get your opinion, Alex, on ICOs. Did you guys do an ICO? How much did you raise? And what's your general take of the ICO market? I mean, certainly, blockchain, I've said this before, takes inefficiencies and makes them highly efficient, and we know the capital markets are very inefficient, so it's a bubble, okay. I have a choice. Tokens or VC, it's a no-brainer, go tokens. >> So look, I've had coins since 2013, I've invested in over 30 ICOs myself, and then when I couldn't find what Celsius does, I decided to start a new company, this is my eighth company as a founder. And so, I raised a billion dollars on the VC side, I know how that world works, had plenty of exits, and here we went to the community, we excluded all the VCs, we did not take money from a single venture guy because this is all about building the community. So, we just closed our round, about a month ago, we raised $15 million. We had 15,000 people sign up, 95% men. And it just drove me crazy, because half of our company's women, I thought that at least half of the people would be female. And I realized how big the problem is that we do not-- I mean, if you look at the floor here, we do not include the stronger sex. So, she's female, exactly. >> I'm promoting it here. >> I agree, I'm a big supporter too, so, I think when you think about it, if we want to be inclusive and we want this revolution to take hold, we have to solve these problems. 
What is the killer app, where are the female participants, how do we make it global, how do we make it inclusive, and how do we make the user interface and everything else so simple that you don't have to understand anything to use it every day. >> And what's your vision on how the ICOs are going to trend? >> Right. >> More stability, obviously. It'll level out, the bubble will-- I don't think it'll be a massive pop, I think it's going to be a small squeeze, so I think there's enough community involvement that self-governance will kick, in my opinion, but what's your take on the ICO? >> So, we definitely, this is like a Cambrian explosion. So, we are throwing money at everything. So, we're throwing money at good projects, bad projects, it's like a spray-and-pray mentality of the old days in 95 to 2000, we've seen that before. But from this some great companies are going to be born and I think the winners here are going to be bigger than Google, bigger than Apple, because the market is bigger. Money is the biggest market in the world, right? There's nothing bigger than all the money in the world, by definition. So, it's bigger than advertising, it's bigger than the social networks and it's bigger than Apple and whatever they're making. So, I believe that out of these companies, there are several thousand companies here, 8,000 participants, there were 4,000 ICOs that already took place or that are coming to be and out of that you're going to have your giant winners. And obviously Celsius is hoping to be one of them, but it's whoever builds the biggest community is the one that's going to win. And for us, it's all about giving back everything to the community. >> Your mission is awesome, I love your mission, and I love your expertise, love your experience. I think the community really is great to have you being a champion, being a mentor, I know you're doing a lot of paying it forward, great job. What's your view for the young entrepreneur out there, or someone who's got a growing opportunity that says, "Hey, you know what? I'm actually tailor-made for decentralization, I have a network community, network effect, I have all these great things going on, I want to scale." >> That's a great question because-- >> What's the playbook? >> A lot of people come to me and say, Oh, I'm too late to the game." No one is too late to the game. The experts have a six month experience. So, you talk to most of the people here, this is the first event, this is the first show. So, what I say to a lot of entrepreneurs is that if you pick the right vertical, you can very quickly become the best in the world at it. And I think the first phase of evolution here in the blockchain is all about financial products and financial solutions. I wouldn't go after healthcare, I wouldn't go after-- so like, insurance, or solving financial problems that currently have giant toll collectors who collect all the value, like the banks, or like the financial service providers, the insurance and so on. So, if you can solve those areas, you can scale very quickly, because Interen already has six or seven billion people on it, so now you can just bring them all in and haggle on their behalf in the cryptocommunity. >> I feel like I should lie down on the couch and ask Dr. Alex for some more advice. So, I'm actually going to ask you some specific questions. >> No couch here, man! There's no off switch here. >> I'll pass out, so much action going on. I mean, the vibe here is amazing. 
So, theCUBE, we're doing an open token model, got a great community, we want to grow and be number one at digital media, covering events with a network effect, video and media. We see token as a great opportunity. What's your advice? You're on our advisory team, what do you tell us to do? >> So, the curation is excellent, I think you guys do a great job at pulling the content. And what's missing in this community is really an automated process that kind of asks the community, "What do you guys believe in?" It's very hard for most people here to figure out which of these thousands of projects are trending right now, for example. And we can all vote on our app, for example. If you could create an app that allowed all of us to vote during the show on what's trending, and you had those guys being interviewed instead of me, you would have the killer app. All of us know what they are and are not, but we should vote on it. >> So, use collective intelligence of the data-- >> Yes. >> And make a content operating system-- >> Exactly, use your metadata that you're already producing to do real-time input and bring those guys here, interview them and ask them about why their projects are hot. Celsius, people ask me all the time, "How do I get involved? How do I get involved? I saw you on Rubena, I saw you on this show." And so, we manage to create a lot of buzz around us and there are a few other projects like that, the community needs to get around the good projects and support them, because when we spend a lot of money on bad projects, we're not giving enough support to the good projects. >> You got to close-loop that data, make it a community brand. That's what you're doing, that's what we're trying to do here, covering the events. So, we're going to build a content operating system. >> There we go! >> Run-time assembly, whatever the votes-- >> Let everybody vote in real-time, yes. >> Give me 50 times I see the hashtag-- >> Right, and the size of the name grows based on the adoption. >> You would have to have, like, clips instantly available, you would have to have all the metadata-- >> It's all real-time. >> You'd have to have all that stuff available. >> And the community will post it for you, you just do the final interviews, just bring these guys and say, "Okay, you won number one, number two, number three," and you give them the awards. >> Awesome, I love this conversation, even though we're kind of riffing, having fun. But the point of it is-- >> It's a new start-up, let's do an ICO. >> Let's do an ICO, we can (mumbles) with that. No, but this is really fundamental for the entrepreneurs in the tech culture, we're talking about basically dev ops. >> Yes. >> Using cloud computing, we can have unlimited-- >> You can spin it up in a few days. >> You can apply automation, AI, that's your point, trust the software. >> Yes. If you're doing it for the community, they will recognize it and adopt you very quickly. >> They'll apply a human curation layer on top of it. >> With full transparency, you've got to show that you're doing everything for the community, like what we're trying to do, right? We're showing, when we tell you you're going to earn 5.1%, you can dig in and see who's getting paid and why they're getting this much money, what's the allocation, every token that's being given to anyone, all the math behind it is fully transparent, right? >> Final question-- >> Try to ask the bank for that. See what they say. >> Transparency? Go find another bank.
Final question, your summary of the show. What's your take, was it good? Good vibes? What was the content agenda? What was the most exciting thing you saw, what's your summary of Consensus 2018? >> So, Consensus, when they organized it, they were bragging that 4,000 people are going to show up, and that's why they moved to the Hilton from the Marriott. And then 8,000 people show up, the lines were outside the whole hotel, so it proves that the demand is there. Everybody wants to come and learn about it, they want to know why this is so hot, why this revolution is here to stay, so what I'm taking out of the show is that this innovation is just in its infancy and there's a lot of people who are still yet to join. And the best ideas, the winners, have not yet been decided. So, watch out for all those new ideas that we haven't heard about yet. >> And it's accelerated from other trends. >> Yes, it definitely accelerated. >> Alex Mashinsky, CEO of Celsius, founder of multiple startups. See, he knows the old way, he sees the new way, he's been a successful entrepreneur, seasoned community member. Thanks for coming on, we appreciate it. >> Thanks for having us. I appreciate it. >> I'm John Furrier here with theCUBE on the ground out in the open, in the community, CUBE coverage here, Blockchain Week 2018, New York. Thanks for watching. (electronic-based music)
Lisa Fetterman, Nomiku | Samsung Developers Conference 2017
>> Voiceover: Live from San Francisco, it's theCUBE, covering Samsung Developer Conference 2017, brought to you by Samsung. >> Welcome back, we're live here in San Francisco. We're here at the SDC, the Samsung Developer Conference. I'm John Furrier, the co-founder of SiliconANGLE and co-host of theCUBE. My next guest, Lisa Fetterman, who is with Nomiku, and she's a three-time, triple-star winner, Forbes Under 30-- >> Inc 30 Under 30, and Zagat 30 under 30. That's a weird one. >> That's a great one. You're likely to get the Michelin Star soon. Tell us about your company. It's a really super story here. You have this new device you guys started. Tell the story. >> Well, speaking of Michelin Stars, I used to work under the best chefs in the nation. I worked under Mario Batali and Jean-Georges at the three Michelin Star restaurants and I saw this huge, hulking piece of laboratory equipment. We would cook so many of our components in it and I'd lusted after one for myself, but they were $2000 and up, so that was like, you know what, I'm going to save up money, and then I went on a date with a plasma physicist and he said, "Hey, you know what, we could just make it on our own." We run to the hardware store, we make a prototype. We travel all across the United States and teach people how to make their own DIY open-source sous vide kits to the point where we amassed so much attention that Obama invited us to the White House. And then we put it on Kickstarter and it becomes the #1 most-funded project in our category, and we are here today with our connected home sous vide immersion circulator that interacts with Samsung's Smart Fridge. >> That's a fantastic story of all in a very short time. Well done, so let me just back up. You guys have the consumer device that all the top chefs have. >> That's right. >> That's the key thing, right? >> It's consumable, low-priced, what's the price point? >> We do hardware, software, and goods. Right now the price of our machine is $49 on souschef.nomiku.com because it interacts with the food program. So there's food that comes with the machine. You weigh the food in front of the machine. It automatically recognizes the time and temperature. It interacts with the different times and temperatures of different bags of food, and you just drop it in. In 30 minutes, you have a gourmet chef-prepared meal just the way that we would do it in Michelin Star restaurants. >> And now you're connecting it to Samsung, so they have this SmartThings messaging. That's kind of the marketing, SmartCloud, SmartThings. What does that mean, like it's connected to the wifi, does it connect to an app? Take us through how it connects to the home. >> We're connected through Family Hub, which is the system inside of the Samsung Smart Fridge. Every single Samsung Smart Fridge ships with a Nomiku app pre-downloaded inside of it, and the fridge and the Nomiku talk to each other so there's inventory management potential. There's learning consumer behaviors to help them. Let's say you cook a piece of chicken at 4:00 AM. You go to a subset of people who also do that, like wow, and then we recognize that those folks do CrossFit. They will eat again at 7:00 AM because they eat more little meals rather than full meals, and then we can recommend things for them as their day goes along, and help manage things for them, like a personal assistant. >> So it's like a supply chain of your personal refrigerator. So can you tell if the chicken's going to go bad so you cook the chicken now, kind of thing?
That would be helpful. >> You can actually tell if the chicken's going to go bad. If the chicken, if there's a recall or the chicken's expired and you tap it with the machine, the machine will tell you to throw it out. >> So tell us about some of the travels you've been on. You said you've traveled the world. You also have done a lot of writing, best-selling author. Tell us about your books and what you're writing about. >> I wrote the book called Sous Vide at Home. It's an international best-seller and it's sous vide recipes. Everybody has been lusting after sous vide since we invented the technology in 2012, so much actually that the market for it grows 2.5x every single year, so the adoption rate is insane. The adoption rate for sous vide actually has surpassed that of the internet, the cell phone, and the personal computer. >> Why is the excitement on the Kickstarter, obviously, the record-breaking, and the sales, and the trend, why is it so popular? Is it 'cause it's a convenience? Is it the ease of use, all of the above? What's the main driver? >> All of the above. If you ever cooked in the kitchen and you've lost your confidence, it was mostly because you messed something up in the kitchen. This is foolproof cooking. So at 57 degrees Celsius, that's when the fat and the collagen melt into the muscle of steak, making each bite so juicy, tender, and delicious. We can set it at exactly that magic temperature, drop a steak in, and then put it in the water. When you cook it like that, there's no overcooking the muscle and it becomes effectively marbled by all that juicy, fat deliciousness. >> Aw, I'm kind of hungry already. >> Yeah. >> Lenny wants a steak. I can hear Leonard moaning over there. Okay, let's get down to the science here because a lot of people might not understand what temperatures to cook anything at. Do you guys provide some best practices? Because this is a game-changer for my family of four. We want a meal cooked fast, but you want to have meals staged potentially and then recook them. How does someone use it? Is there a playbook? Is there a cookbook? >> Like we say in the industry, there's an app for that. The app is on the Smart Fridge and it's also on your smartphones. Moreover, the machine acts as a stand-alone sous vide machine for you to cook your own recipes, and it also reads RFID tags from our meals. If you use our meals, then it's a no-brainer. You just tap and then put it in the water. There's nothing more. Actually, people get flustered that it's so easy. They're like, "That's it? That was all that was?" But I hate smart devices that actually make people stupider. Being a stand-alone sous vide machine, you can create any of your recipes, whether it's from my cookbook or the app, which is community-focused, so we have over 1000 recipes inside there from our community. People make it and they share it with the world. >> So with the Kickstarter, I'm just going to ask that next question. I'll say community layer. >> Sure. >> Kind of like, is it a Reddit page? Do you have your own pages? What's going on with the community? Tell us about the community. >> Oh, the community. Everybody who has a Nomiku downloads our app called Tender and inside you can make your own-- >> Not to be confused with Tinder. >> Correct. >> Tender. >> Although I wouldn't mind if you confused it and instead of going out, I guess you're making dinner. >> Swipe left for the steak and right for the chicken. >> (laughing) Exactly, exactly. We love the play on the word. >> That's great.
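To make the tap-and-cook flow Lisa describes a little more concrete: each meal bag carries an RFID tag, and the cooker maps that tag to a time and temperature program, with the 57 degrees Celsius steak being one example. Below is a minimal sketch of that lookup; the tag IDs, temperatures, and times are invented for illustration and are not Nomiku's actual values or firmware.

```python
# Hypothetical tag-to-program lookup for the tap-and-cook flow described above.
# Tag IDs, temperatures, and times are made up for illustration only.
MEAL_PROGRAMS = {
    "tag-flank-steak":    {"temp_c": 57.0, "minutes": 120},  # the steak example above
    "tag-chicken-breast": {"temp_c": 62.0, "minutes": 75},
    "tag-salmon-fillet":  {"temp_c": 50.0, "minutes": 40},
}

def start_cook(tag_id: str) -> dict:
    """Look up the program for a scanned meal tag and report the bath settings."""
    program = MEAL_PROGRAMS.get(tag_id)
    if program is None:
        raise ValueError(f"Unknown meal tag: {tag_id}")
    print(f"Holding bath at {program['temp_c']} C for {program['minutes']} minutes")
    return program

if __name__ == "__main__":
    start_cook("tag-flank-steak")  # e.g. a steak bag tapped against the cooker
```

A real device would also hold the bath at that set point with a feedback controller and a temperature probe, which is what makes the cooking "foolproof" in the sense described above.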
>> When you make your own little profile, it encourages you to share. It's really fun because you can keep your recipes in there so you never have to look it up ever again. You can bing it and it goes directly to your machine. It's great for professional chefs, too, 'cause you can share it with your entire team. >> So maybe we should start a Cube food channel. You can get a dedicated recipe channel. Exciting. >> That's great. Will you be my sous chef? >> (laughing) Course, I'm a great guest to have do that. If I can do it, anyone can do it. How do I get one? How do people buy? What's the deal? >> It's nomiku.com for just our hardware, and in California, we've launched our food program on souschef.nomiku.com. Right now our machines for the food program are only $49. That is such a great value considering that sous vide machines are usually $200 and up right now. >> Awesome, well thank you so much for coming on. I really appreciate it. Lisa Fetterman is CEO, entrepreneur of Nomiku, entrepreneur of great stuff here in the Cube. Of course, we're bringing the food, tech, and remember, farming tech is big, too, so as the culture gets connected, the food from the farm to the table is being changed with data and IT. More after this short break. (innovative tones)
Nick Pentreath, IBM STC - Spark Summit East 2017 - #sparksummit - #theCUBE
>> Narrator: Live from Boston, Massachusetts, this is The Cube, covering Spark Summit East 2017. Brought to you by Databricks. Now, here are your hosts, Dave Vellante and George Gilbert. >> Boston, everybody. Nick Pentreath is here, he's a principal engineer at the IBM Spark Technology Center, in from South Africa. Welcome to The Cube. >> Thank you. >> Great to see you. >> Great to see you. >> So let's see, it's a different time of year here than you're used to. >> I've flown from, I don't know the Fahrenheit equivalent, but 30 degrees Celsius heat and sunshine to snow and sleet, so. >> Yeah, yeah. So it's a lot chillier here. Wait until tomorrow. But, so we were joking. You probably get the T-shirt for the longest flight here, so welcome. >> Yeah, I actually need the parka, or like a beanie. (all laugh) >> Little better. Long sleeve. So Nick, tell us about the Spark Technology Center, STC is its acronym, and your role there. >> Sure, yeah, thank you. So the Spark Technology Center was formed by IBM a little over a year ago, and its mission is to focus on the Open Source world, particularly Apache Spark and the ecosystem around that, and to really drive forward the community and to make contributions to both the core project and the ecosystem. The overarching goal is to help drive adoption, particularly with enterprise customers, the kind of customers that IBM typically serves. And to harden Spark and to make it really enterprise ready. >> So why Spark? I mean, we've watched IBM do this now for several years. The famous example that I like to use is Linux. When IBM put $1 billion into Linux, it really went all in on Open Source, and it drove a lot of IBM value, both internally and externally for customers. So what was it about Spark? I mean, you could have made a similar bet on Hadoop. You decided not to, you sort of waited to see that market evolve. What was the catalyst for having you guys all go in on Spark? >> Yeah, good question. I don't know all the details, certainly, of what the internal drivers were, because I joined STC a little under a year ago, so I'm fairly new. >> Translate the hallway talk, maybe. (Nick laughs) >> Essentially, I think you raise very good parallels to Linux and also Java. >> Absolutely. >> So Spark, sorry, IBM, made these investments in Open Source technologies that proved to be transformational and kind of game-changing. And I think, you know, most people will probably admit within IBM that they maybe missed the boat, actually, on Hadoop and saw Spark as the successor and actually saw a chance to really dive into that and kind of almost leapfrog and say, "We're going to back this as the next generation analytics platform and operating system for analytics and big data in the enterprise." >> Well, I don't know if you happened to watch the Super Bowl, but there's a saying that it's sometimes better to be lucky than good. (Nick laughs) And that sort of applies, and so, in some respects, maybe missing the window on Hadoop was not a bad thing for IBM >> Yeah, exactly because not a lot of people made a ton of dough on Hadoop and they're still sort of struggling to figure it out. And now along comes Spark, and you've got this more real time nature. IBM talks a lot about bringing analytics and transactions together. They've made some announcements about that and affecting business outcomes in near real time. I mean, that's really what it's all about and one of your areas of expertise is machine learning.
And so, talk about that relationship and what it means for organizations, your mission. >> Yeah, machine learning is a key part of the mission. And you've seen the kind of big data in the enterprise story, starting with the kind of Hadoop and data lakes. And that's evolved into, now we've, before we just dumped all of this data into these data lakes and these silos and maybe we had some Hadoop jobs and so on. But now we've got all this data we can store, what are we actually going to do with it? So part of that is the traditional data warehousing and business intelligence and analytics, but more and more, we're seeing there's rich value in this data, and to unlock it, you really need intelligent systems. You need machine learning, you need AI, you need real time decision making that starts transcending the boundaries of all the rule-based systems and human-based systems. So we see machine learning as one of the key tools and one of the key unlockers of value in these enterprise data stores. >> So Nick, perhaps paint us a picture of someone who's advanced enough to be working with machine learning with IBM, and we know that the tool chain's kind of immature. Although, IBM with Data Works or Data First has a fairly broad end-to-end sort of suite of tools, but what are the early use cases? And what needs to mature to go into higher volume production apps or higher-value production apps? >> I think the early use cases for machine learning in general and certainly at scale are numerous and they're growing, but classic examples are, let's say, recommendation engines. That's an area that's close to my heart. In my previous life before IBM, I built a startup that had a recommendation engine service targeting online stores and new commerce players and social networks and so on. So this is a great kind of example use case. We've got all this data about, let's say, customer behavior in your retail store or your video-sharing site, and in order to serve those customers better and make more money, if you can make good recommendations about what they should buy, what they should watch, or what they should listen to, that's a classic use case for machine learning and unlocking the data that is there, so that is one of the drivers of some of these systems, players like Amazon, they're sort of good examples of the recommendation use case. Another is fraud detection, and that is a classic example in financial services, enterprise, which is a kind of staple of IBM's customer base. So these are a couple of examples of the use cases, but the tool sets, traditionally, have been kind of cumbersome. So Amazon built everything from scratch themselves using customized systems, and they've got teams and teams of people. Nowadays, you've got this built into Apache Spark, you've got, in Spark, a machine learning library, you've got good models to do that kind of thing. So I think from an algorithmic perspective, there's been a lot of advancement and there's a lot of standardization and almost commoditization of the model side. So what is missing? >> George: Yeah, what else? >> And what are the shortfalls currently? So there's a big difference between the current view, I guess the hype of machine learning, as: you've got data, you apply some machine learning, and then you get profit, right? But really, there's a hugely complex workflow that involves this end-to-end story.
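As a rough illustration of the recommendation use case Nick describes, here is a minimal sketch using the ALS collaborative-filtering algorithm that ships with Spark's machine learning library in recent PySpark versions. The ratings data and column names are invented; a production recommender would involve far more data preparation, tuning, and evaluation.

```python
# Rough sketch of a collaborative-filtering recommender with Spark ML's ALS.
# The toy ratings and column names are invented for illustration only.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("recs-sketch").getOrCreate()

ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 3.0), (2, 11, 4.0)],
    ["userId", "itemId", "rating"],
)

als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=8, maxIter=5, coldStartStrategy="drop")
model = als.fit(ratings)

# Top-3 item recommendations per user, e.g. products to surface in a store.
model.recommendForAllUsers(3).show(truncate=False)
```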
You've got data coming from various data sources, you have to feed it into one centralized system, transform and process it, extract your features and do your sort of hardcore data science, which is the core piece that everyone sort of thinks about as the only piece, but that's kind of in the middle and it makes up a relatively small proportion of the overall chain. And once you've got that, you do model training and selection testing, and you now have to take that model, that machine-learning algorithm, and you need to deploy it into a real system to make real decisions. And that's not even the end of it because once you've got that, you need to close the loop, what we call the feedback loop, and you need to monitor the performance of that model in the real world. You need to make sure that it's not deteriorating, that it's adding business value. All of these kinds of things. So I think that is the real, the piece of the puzzle that's missing at the moment is this end-to-end, delivering this end-to-end story and doing it at scale, securely, enterprise-grade. >> And the business impact of that presumably will be a better-quality experience. I mean, recommendation engines and fraud detection have been around for a while, they're just not that good. Retargeting systems are too little too late, and kind of cumbersome fraud detection. Still a lot of false positives. Getting much better, certainly compressing the time. It used to be six months, >> Yes, yes. >> Now it's minutes or seconds, but a lot of false positives still, so, but are you suggesting that by closing that gap, that we'll start to see from a consumer standpoint much better experiences? >> Well, I think that's imperative because if you don't see that from a consumer standpoint, then the mission is failing because ultimately, it's not magic that you just simply throw machine learning at something and you unlock business value and everyone's happy. You have to, you know, there's a human in the loop, there. You have to fulfill the customer's need, you have to fulfill consumer needs, and the better you do that, the more successful your business is. You mentioned the time scale, and I think that's a key piece, here. >> Yeah. >> What makes better decisions? What makes a machine-learning system better? Well, it's better data and more data, and faster decisions. So I think all of those three are coming into play with Apache Spark, the end-to-end story, streaming systems, and the models are getting better and better because they're getting more data and better data. >> So I think we've, the industry has pretty much attacked the time problem. Certainly for fraud detection and recommendation systems, the quality issue. Are we close? I mean, are we talking about 6-12 months before we really sort of start to see a major impact to the consumer and ultimately, to the company who's providing those services? >> Nick: Well, >> Or is it further away than that, you think? >> You know, it's always difficult to make predictions about timeframes, but I think there's a long way to go to go from, yeah, as you mentioned where we are, the algorithms and the models are quite commoditized. The time gap to make predictions is kind of down to this real-time nature. >> Yeah. >> So what is missing? I think it's actually less about the traditional machine-learning algorithms and more about making the systems better and getting better feedback, better monitoring, so improving the end user's experience of these systems. >> Yeah.
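A minimal sketch of the middle of that workflow with Spark ML Pipelines follows: assemble features, train a model, and evaluate it before any deployment. The schema and the tiny dataset are invented; a real system would also hold out a proper test set and close the feedback loop in production, as Nick describes.

```python
# Illustrative sketch of a Spark ML Pipeline: featurize, train, evaluate.
# The toy fraud-style records and column names are invented.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("workflow-sketch").getOrCreate()

df = spark.createDataFrame(
    [(120.0, 3.0, 0.0), (950.0, 1.0, 1.0), (40.0, 5.0, 0.0), (700.0, 2.0, 1.0)],
    ["amount", "tenure_years", "label"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["amount", "tenure_years"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(df)                                    # model training step
auc = BinaryClassificationEvaluator().evaluate(model.transform(df))
print("AUC on the toy data:", auc)                          # offline evaluation step
```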
>> And that's actually, I don't think it's, I think there's a lot of work to be done. I don't think it's a 6-12 month thing, necessarily. I don't think that in 12 months, certainly, you know, everything's going to be perfectly recommended. I think there's areas of active research in the kind of academic fields of how to improve these things, but I think there's a big engineering challenge to bring in more disparate data sources, to better, to improve data quality, to improve these feedback loops, to try and get systems that are serving customer needs better. So improving recommendations, improving the quality of fraud detection systems. Everything from that to medical imaging and cancer detection. I think we've got a long way to go. >> Would it be fair to say that we've done a pretty good job with traditional application lifecycle in terms of DevOps, but we now need the DevOps for the data scientists and their collaborators? >> Nick: Yeah, I think that's >> And where is IBM along that? >> Yeah, that's a good question, and I think you kind of hit the nail on the head, that the enterprise applied machine learning problem has moved from the kind of academic to the software engineering and actually, DevOps. Internally, someone mentioned the word train ops, so it's almost like, you know, the machine learning workflow and actually professionalizing and operationalizing that. So recently, IBM, for one, has announced Watson Data Platform and now, Watson Machine Learning. And that really tries to address that problem. So really, the aim is to simplify and productionize these end-to-end machine-learning workflows. So that is the product push that IBM has at the moment. >> George: Okay, that's helpful. >> Yeah, and right. I was at the Watson Data Platform announcement, what you used to call Data Works. I think they changed the branding. >> Nick: Yeah. >> It looked like there were numerous components that IBM had in its portfolio that's now strung together. And to create that end-to-end system that you're describing. Is that a fair characterization, or is it underplaying? I'm sure it is. The work that went into it, but help us maybe understand that better. >> Yeah, I should caveat it by saying we're fairly focused, very focused at STC on the Open Source side of things. So my work is predominately within the Apache Spark project and I'm less involved in the data bank. >> Dave: So you didn't contribute specifically to Watson Data Platform? >> Not to the product line, so, you know, >> Yeah, so it's really not an appropriate question for you? >> I wouldn't want to kind of, >> Yeah. >> To talk too deeply about it >> Yeah, yeah, so that, >> Simply because I haven't been involved. >> Yeah, that's, I don't want to push you on that because it's not your wheelhouse, but then, help me understand how you will commercialize the activities that you do, or is that not necessarily the intent? >> So the intent with STC particularly is that we focus on Open Source, and a core part of that is that we, being within IBM, we have the opportunity to interface with other product groups and customer groups. >> George: Right. >> So while we're not directly focused on, let's say, the commercial aspect, we want to effectively leverage the ability to talk to real-world customers and find the use cases, talk to other product groups that are building this Watson Data Platform and all the product lines and the features, Data Science Experience, it's all built on top of Apache Spark as the platform.
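One small, concrete piece of productionizing such a workflow is persisting a fitted pipeline so a separate scoring job can pick it up. The sketch below assumes a Spark PipelineModel was previously saved to the placeholder path shown and that it produces a "prediction" column; it is purely illustrative and not a description of Watson Data Platform internals.

```python
# Sketch of handing a fitted pipeline from the data-science step to a batch
# scoring job. Paths are placeholders and assume a PipelineModel was saved there.
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("scoring-sketch").getOrCreate()

model = PipelineModel.load("/models/fraud_pipeline")        # placeholder path
new_events = spark.read.parquet("/data/new_events")          # placeholder input
scored = model.transform(new_events)                         # batch scoring
scored.select("prediction").write.parquet("/data/scored")    # placeholder output
```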
>> Dave: So your role is really to innovate? >> Exactly, yeah. >> Leverage Open Source and innovate. >> Both innovate and kind of improve, so improve performance, improve efficiency. When you are operating at the scale of a company such as IBM and other large players, your customers and you as product teams and builders of products will come into contact with all the kind of little issues and bugs >> Right. >> And performance >> Make it better. Problems, yeah. And that is the feedback that we take on board and we try and make it better, not just for IBM and their customers. Because it's an Apache product and everyone benefits. So that's really the idea. Take all the feedback and learnings from enterprise customers and product groups and centralize that in the Open Source contributions that we make. >> Great. Would it be, so would it be fair to say you're focusing on making the core Spark, Spark ML and Spark MLlib capabilities, sort of machine learning libraries and in the pipeline, more robust? >> Yes. >> And if that's the case, we know there needs to be improvements in its ability to serve predictions in real time, like high speed. We know there's a need to take the pipeline and sort of share it with other tools, perhaps. Or collaborate with other tool chains. >> Nick: Yeah. >> What are some of the things that the Enterprise customers are looking for along the lines? >> Yeah, that's a great question and very topical at the moment. So both from an Open Source community perspective and Enterprise customer perspective, this is one of the, if not the key, I think, kind of missing pieces within the Spark machine-learning kind of community at the moment, and it's one of the things that comes up most often. So it is a missing piece, and we as a community need to work together and decide, is this something that we build within Spark and provide that functionality? Is it something where we try and adopt open standards that will benefit everybody and that provides a kind of one standardized format, or way of serving models? Or is it something where there's a few Open Source projects out there that might serve for this purpose, and do we get behind those? So I don't have the answer because this is ongoing work, but it's definitely one of the most critical kind of blockers, or, let's say, areas that needs work at the moment. >> One quick question, then, along those lines. IBM, the first thing IBM contributed to the Spark community was Spark ML, which is, as I understand it, it was an ability to, I think, create an ensemble sort of set of models to do a better job or create a more, >> So are you referring to System ML, I think it is? >> System ML. >> System ML, yeah, yeah. >> What are they, I forgot. >> Yeah, so, so. >> Yeah, where does that fit? >> System ML started out as an IBM research project and perhaps the simplest way to describe it is as a kind of SQL optimizer: just as an optimizer takes SQL queries and decides how to execute them in the most efficient way, System ML takes a kind of high-level mathematical language and compiles it down to an execution plan that runs in a distributed system. So in much the same way as your SQL operators allow this very flexible and high-level language, you don't have to worry about how things are done, you just tell the system what you want done. System ML aims to do that for mathematical and machine learning problems, so it's now an Apache project. It's been donated to Open Source and it's an incubating project under very active development.
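The serving gap George raises can be made concrete: a fitted Spark pipeline can score a single record today by wrapping it in a one-row DataFrame, but that path still goes through the Spark engine and is not designed for low-latency request/response serving, which is part of why a standardized model-serving story is described above as a missing piece. Paths and column names below are placeholders.

```python
# Sketch of single-record scoring with a previously saved PipelineModel.
# Illustrates why this path, while it works, is not a low-latency serving layer.
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("single-record-sketch").getOrCreate()
model = PipelineModel.load("/models/fraud_pipeline")          # placeholder path

request = spark.createDataFrame([(250.0, 4.0)], ["amount", "tenure_years"])
prediction = model.transform(request).select("prediction").first()[0]
print("prediction:", prediction)
```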
And that is really, there's a couple of different aspects to it, but that's the high-level goal. The underlying execution engine is Spark. It can run on Hadoop and it can run locally, but really, the main focus is to execute on Spark and then expose these kind of higher level APIs that are familiar to users of languages like R and Python, for example, to be able to write their algorithms and not necessarily worry about how do I do large scale matrix operations on a cluster? System ML will compile that down and execute that for them. >> So really quickly, follow up, what that means is it's a higher level way for people who aren't sort of cluster aware to write machine-learning algorithms that are cluster aware? >> Nick: Precisely, yeah. >> That's very, very valuable. When it works. >> When it works, yeah. So it does, again, with the caveat that I'm mostly focused on Spark and not so much the System ML side of things, so I'm definitely not an expert. I don't claim to be an expert in it. But it does, you know, it works at the moment. It works for a large class of machine-learning problems. It's very powerful, but again, it's a young project and there's always work to be done, so exactly the areas that I know that they're focusing on are these areas of usability, hardening up the APIs and making them easier to use and easier to access for users coming from the R and Python communities who, again are, as you said, they're not necessarily experts on distributed systems and cluster awareness, but they know how to write a very complex machine-learning model in R, for example. And it's really trying to enable them with a set of API tools. So in terms of the underlying engine, there are, I don't know how many hundreds of thousands, millions of lines of code and years and years of research that's gone into that, so it's an extremely powerful set of tools. But yes, a lot of work still to be done there and ongoing to make it, in a way to make it user ready and Enterprise ready in a sense of making it easier for people to use it and adopt it and to put it into their systems and production. >> So I wonder if we can close, Nick, just a few questions on STC, so the Spark Technology Center, in Cape Town, is that a global expertise center? Is STC a virtual sort of IBM community, or? >> I'm the only member based in Cape Town, >> David: Okay. >> So I'm kind of fairly lucky from that perspective, to be able to kind of live at home. The rest of the team is mostly in San Francisco, so there's an office there that's co-located with the Watson west office >> Yeah. >> And Watson teams >> Sure. >> That are based there in Howard Street, I think it is. >> Dave: How often do you get there? >> I'll be there next week. >> Okay. >> So I typically, sort of two or three times a year, I try and get across there >> Right. >> And interface with the team, >> So, >> But we are a fairly, I mean, IBM is obviously a global company, and I've been surprised actually, pleasantly surprised there are team members pretty much everywhere. Our team has a few scattered around including me, but in general, when we interface with various teams, they pop up in all kinds of geographical locations, and I think it's great, you know, a huge diversity of people and locations, so. >> Anything, I mean, these early days here, early day one, but anything you saw in the morning keynotes or things you hope to learn here? Anything that's excited you so far?
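For a feel of the "declare the math, let the system plan the execution" idea, here is a sketch assuming the SystemML Python bindings of that era (MLContext and dml); treat the exact API and the toy DML script as assumptions for illustration rather than an authoritative SystemML example.

```python
# Rough sketch: write high-level, R-like linear algebra and let System ML decide
# how to execute it on Spark. Assumes the systemml Python package is installed;
# API names and the toy script are illustrative assumptions, not verified usage.
from pyspark.sql import SparkSession
from systemml import MLContext, dml

spark = SparkSession.builder.appName("systemml-sketch").getOrCreate()
ml = MLContext(spark)

# Declarative script: the user states the matrix math, not how to distribute it.
script = dml("""
    X = rand(rows=10000, cols=100)
    G = t(X) %*% X      # Gram matrix, computed however the optimizer decides
    print(sum(G))
""")
ml.execute(script)
```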
>> A couple of the morning keynotes, but I had to dash out to kind of prepare for, I'm doing a talk later, actually, on feature hashing for scalable machine learning, so that's at 12:20, please come and see it. >> Dave: A breakout session, it's at what, 12:20? >> 20 past 12:00, yeah. >> Okay. >> So in room 302, I think, >> Okay. >> I'll be talking about that, so I needed to prepare, but I think some of the key exciting things that I have seen that I would like to go and take a look at are kind of related to the deep learning on Spark. I think that's been a hot topic recently and it's one of the areas where, again, Spark perhaps hasn't been the strongest contender, let's say, but there's some really interesting work coming out of Intel, it looks like. >> They're talking here on The Cube in a couple hours. >> Yeah. >> Yeah. >> I'd really like to see their work. >> Yeah. >> And that sounds very exciting, so yeah. I think every time I come to a Spark summit, there are always new projects from the community, various companies, some of them big, some of them startups, that are pushing the envelope, whether it's research projects in machine learning, whether it's adding deep learning libraries, whether it's improving performance for kind of commodity clusters or for single, very powerful single nodes, there's always people pushing the envelope, and that's what's great about being involved in an Open Source community project and being part of those communities, so yeah. That's one of the talks that I would like to go and see. And I think I, unfortunately, had to miss some of the Netflix talks on their recommendation pipeline. That's always interesting to see. >> Dave: Right. >> But I'll have to check them on the video (laughs). >> Well, there's always another project in Open Source land. Nick, thanks very much for coming on The Cube and good luck. >> Cool, thanks very much. Thanks for having me. >> Have a good trip, stay warm, hang in there. (Nick laughs) Alright, keep it right there. My buddy George and I will be back with our next guest. We're live. This is The Cube from Spark Summit East, #sparksummit. We'll be right back. (upbeat music) (gentle music)
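As a closing footnote on the talk topic Nick mentions: feature hashing maps raw terms into a fixed-size feature vector with a hash function, so no vocabulary has to be built or broadcast, which is what makes it attractive at scale. A small sketch using PySpark's HashingTF, with invented data, follows.

```python
# Sketch of feature hashing with Spark ML: terms are hashed straight into a
# fixed-size vector, avoiding a vocabulary-building pass. Data is invented.
from pyspark.sql import SparkSession
from pyspark.ml.feature import HashingTF, Tokenizer

spark = SparkSession.builder.appName("hashing-sketch").getOrCreate()
docs = spark.createDataFrame([(0, "spark ml feature hashing"),
                              (1, "hashing keeps memory bounded")], ["id", "text"])

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
hashed = HashingTF(inputCol="words", outputCol="features",
                   numFeatures=1 << 18).transform(tokens)
hashed.select("id", "features").show(truncate=False)
```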