Round table discussion


 

>>Thank you for joining us for the Accelerate Next event. I hope you're enjoying it so far. I know you've heard about the industry challenges, the IT trends, and the HPE strategy from leaders in the industry, so today we want to focus on going deep on workload solutions: the most important ones, the ones we always get asked about. We want to share with you some best practices, some examples of how we've helped other customers, and how we can help you. All right, with that, I'd like to start our panel and introduce Chris Idler, who's the vice president and general manager of the Element. Chris has extensive solution expertise; he's led HPE solution engineering programs in the past. Welcome, Chris. And Mark Nickerson, who is the director of product management; his team is responsible for solution offerings, making sure we have the right solutions for our customers. Welcome, guys. Thanks for joining me. >>Thanks for having us, Christa. >>Yeah, so I'd like to start off with one of the big ones, the ones that we get asked about all the time, what we've all been experiencing in the last year: remote work, remote education, and all the challenges that go along with that. So let's talk a little bit about the challenges that customers have had in transitioning to these remote work and remote education environments. >>So I really think there are a couple of things that have stood out for me when we're talking with customers about VDI. First, obviously, there was an unexpected and unprecedented level of interest in that area about a year ago, and we all know the reasons why. But what it really uncovered was how little planning had gone into this space around a couple of key dynamics. One is scale. It's one thing to say, I'm going to enable VDI for a part of my workforce in a pre-pandemic environment where the office was still the central hub of activity for work. It's a completely different scale
when you think about, okay, I'm going to have 50, 60, 80, maybe 100 percent of my workforce now distributed around the globe. Whether that's in an educational environment where you're now trying to accommodate staff and students in virtual learning, or in an area like Formula One racing, where we had the desire to still have events going on but the need for a lot more social distancing: not as many people able to be trackside, but still needing that real-time experience. This really manifested in a lot of ways, and scale was something that I think a lot of customers hadn't put as much thought into initially. The other area is planning for experience. A lot of times the VDI experience was planned out with very specific workloads or very specific applications in mind. And when you take it to a more broad-based environment, where you're going to support multiple functions and multiple lines of business, there hasn't been as much planning or investigation on the application side. So think about how graphically intense some applications are. One customer that comes to mind would be Tyler ISD, who did a fairly large rollout pre-pandemic, and as part of their big modernization effort, what they uncovered was that even standard Windows applications had become so much more graphically intense, with Windows 10, with the latest updates, with programs like Adobe, that they really needed an accelerated experience for a much larger percentage of their install base than they had counted on. So, in addition to planning for scale, you also need visibility into the actual applications that are going to be used by these remote users, how graphically intense those might be, and what the login experience is going to be, as well as the operating experience.
And so really planning through that experience side, as well as the scale and the number of users, is kind of really two of the biggest, most important things that I've seen. >>You know, Mark, I'll just jump in real quick. I think you covered that pretty comprehensively, and it was well done. A couple of observations I've made: one is just that VDI has suddenly become mission critical. For sales, it's the front line; for schools, it's the classroom. This isn't a cost-cutting measure or an IT optimization measure anymore. This is about running the business. In a way, it's a digital transformation, one aspect of about a thousand aspects of what it means to completely change how your business operates. And I think what that translates to is that there's no margin for error, right? You really need to deploy this in a way that performs, that understands what you're trying to use it for, that gives that end user the experience they expect on their screen or on their handheld device or wherever they might be, whether it's a racetrack, a classroom, or the other end of a conference call or a boardroom. So what we do on the engineering side of things when it comes to VDI is really understand: what's a task worker, what's a knowledge worker, what's a power worker, what's a GPU user really going to look like? What does time of day look like? Who's using it in the morning, who's using it in the evening? When do you power up, when do you power down? Does the system behave, does it just have that "it works" quality? And what our clients can get from HPE is a worldwide set of experiences that we can apply to making sure the solution delivers on its promises. So we're seeing the same thing you are, Christa. We see it all the time on VDI and in the way businesses are changing how they do business. >>Yeah.
It's funny, because when I talk to customers, one of the good tips I've heard is to roll it out to small groups first, so you can really get a sense of what the experience is before you roll it out to a lot of other people. And then there's the expertise: it's not like every other workload people have done before, so if you're new at it, make sure you're getting the right advice and expertise so that you're doing it the right way. Okay. One of the other things we've been talking a lot about today is digital transformation and moving to the edge. So now I'd like to shift gears and talk a little bit about how we've helped customers make that shift, and this time I'll start with Chris. >>All right, hey, thanks. Okay, so it's funny when it comes to edge, because the edge is different for every customer and every client, and every single client of HPE's that I've ever spoken to has an edge somewhere. Just like we were talking about, the classroom might be the edge. But I think when the industry talks about edge, it's talking about the Internet of Things, if you remember that term from not too long ago, and the fact that everything is getting connected, and how do we turn that into telemetry? And I think Mark is going to be able to talk through a couple of examples of clients that we have in things like racing and automotive. But what we're learning about edge is that it's not just how you make the edge work; it's how you integrate the edge into what you're already doing. And nobody is just the edge, right? So if it's AI, ML, or DL out there, that's one way you want to use the edge. If it's a customer experience point of service, that's another. There's yet another way to use the edge.
So it turns out that having a broad set of expertise like HPE does, to be able to understand the different workloads that you're trying to tie together, including the ones running at the edge, often involves really making sure you understand the data pipeline. What information is at the edge? How does it flow to the data center? And then which data center, which private cloud, which public cloud are you using? I think those are the areas where we really shine: we understand the interconnectedness of these things. So, for example, Red Bull, and I know you're going to talk about that in a minute, Mark, the racing company. For them the edge is the racetrack, and milliseconds or partial seconds are winning and losing races. But then there's also an edge of workers doing the design for the cars, and how do they get quick access? So we have a broad variety of infrastructure form factors and compute form factors to help with the edge, and this is another real advantage we have: we know how to put the right piece of equipment with the right software. We also have great containerized software with our Ezmeral Container Platform. So we're really becoming a perfect platform for hosting edge-centric workloads, applications, and data processing, all the way up to things like a Superdome Flex in the background if you have some really, really big data that needs to be processed, and of course our workhorse ProLiants, which can be configured to support almost every combination of workload you have.
So I know you started with edge, Christa, and we nail the edge with those different form factors, but let's make sure, if you're listening to this show right now, that you don't isolate the edge; make sure you integrate it with the rest of your operation. Mark, what did I miss? >>Yeah, to that point, Chris, and this kind of ties together the two things we've been talking about here: the edge has become more critical as we've seen more work moving to the edge, as where we do work changes and evolves. And the edge has also become that much closer, because it has to be that much more connected. To your point about where that edge exists, the edge can be a lot of different places, but the one commonality really is that the edge is an area where work still needs to get accomplished. It can't just be a collection point where everything gets shipped back to a data center or some other area for the work; it's where the work actually needs to get done. Whether that's edge work in a use case like VDI, or edge work in the case of doing real-time analytics. You mentioned Red Bull Racing, so I'll bring that up. You talk about an area where time is of the essence: everything about that sport comes down to time. You're talking about wins and losses that are measured, as you said, in milliseconds. And that applies not just to how performance is happening on the track, but to how you're able to adapt and modify the needs of the car, to adapt to the evolving conditions on the track itself. And so when you talk about putting together a solution for an edge like that, you're right, it can't just be: here's a product that's going to allow us to collect data, ship it back someplace else, and wait for it to be processed in a couple of days. You have to have the ability to analyze that in real time.
When we pull together a solution involving our compute products, our storage products, and our networking products, when we're able to deliver that full package solution at the edge, what you see are results like a 50 percent decrease in processing time to make real-time analytic decisions about configurations for the car and to adapt to real-time test and track conditions. >>Yeah, really great point there. And I really love the example of edge and racing, because that is where every millisecond counts, and it's so important to process that at the edge. Now, switching gears just a little bit, let's talk about some examples of how we've helped customers when it comes to business agility and optimizing the workload for maximum outcome. Let's talk about some things we've done to help customers with that. >>Mark, give it a shot. >>So when we think about business agility, what you're really talking about is the ability to implement on the fly, to scale up and scale down, and to adapt to real-time changing situations. And I think the last year has been an excellent example of exactly how so many businesses have been forced to do that. One of the areas where we've probably been able to help customers the most with agility is around the space of private and hybrid clouds. If you take a look at the need customers have to migrate workloads and data between public cloud environments and app development environments that may be hosted on site or in the cloud, the ability to move out of development and into production, and the agility to then scale those application rollouts up, having some of that private cloud flexibility in addition to a public cloud environment is becoming increasingly crucial for a lot of our customers.
>>All right, well, we could keep going on and on, but I'll stop it there. Thank you so much, Chris and Mark. This has been a great discussion. Thanks for sharing how we've helped other customers, along with some tips and advice for approaching these workloads. I thank you all for joining us, and I'd remind you to look at the on-demand sessions if you want to double-click a little more into what we've been covering all day today; you can learn a lot more in those sessions. Thank you for your time, and thanks for tuning in today.

Published: Apr 23, 2021


HPE Spotlight Segment v2


 

>>From around the globe, it's theCUBE, with digital coverage of HPE Green Lake Day, made possible by Hewlett Packard Enterprise. >>Okay, we're now going to dive right into some of the news and get into the Green Lake announcement details, and with me to do that is Keith White, the senior vice president and general manager for Green Lake Cloud Services at Hewlett Packard Enterprise. Keith, thanks for your time. Great to see you. >>Hey, thanks so much for having me. I'm really excited to be here. >>You're welcome. So listen, before we get into the hard news, can you give us an update on Green Lake and the business? How's it going? >>You bet. It's fantastic, and thanks for the opportunity again. And hey, I hope everyone's at home staying safe and healthy. It's been a great year for HPE Green Lake. There's a ton of momentum in the marketplace: we've booked over $4 billion in total contract value to date, across over 1,000 customers worldwide, in 50 different countries, spanning a variety of solutions and a variety of workloads. So really just tons of momentum. But it's not just about accelerating the current momentum; it's about listening to our customers, staying ahead of their demands, delivering more value to them, and really executing on the HPE Green Lake promise. >>Great, thanks for that, and really great detail. Congratulations on the progress, but I know you're not done, so let's get to the news. What do people need to know? >>Awesome. Yeah, there are three things we want to share with you today. First is all about HPC, and I can go into some details on that. Second, we're delivering new industry workloads, which I think will be exciting for a lot of the major industries out there. And then we're expanding our HPE capabilities to make things easier and more effective.
So first off, we're excited to announce today the acceleration of mainstream adoption of high performance computing through HPE Green Lake. In essence, what we're really excited about is this unique opportunity to provide customers with the power of an agile, elastic, pay-per-use cloud experience on HPE's market-leading HPC systems. So pretty soon any enterprise will be able to tackle their most demanding compute- and data-intensive workloads, and power artificial intelligence and machine learning initiatives, to provide better business insights and outcomes, with things like faster time to insight and accelerated innovation. Today's news is really going to help speed up deployment of HPC projects by 75 percent and reduce TCO by up to 40 percent for customers. >>That's awesome. I'm excited to learn more about the HPC piece especially. So tell us, what's really different about today's news, from your perspective? >>That's a great question. The idea is to really help customers with their business outcomes: from building safer cars, to improving manufacturing lines with sustainable materials, to advancing discovery for drug treatments, especially in this time of COVID, to making critical millisecond decisions for the financial markets. So you'll see a lot of benefits and a lot of differentiation for customers in a variety of different scenarios and industries. >>Yeah, so I wonder if you could talk a little more about exactly what's new. Can you unpack some of that for us? >>You bet. Well, what's key is that any enterprise will be able to run their modeling and simulation workloads in a fully managed way, because we manage everything for them, pre-bundled. So we'll give folks small, medium, and large HPE HPC services to operate in any data center or in a colocation facility.
These workloads are almost impossible to move to the public cloud, because the data is so large or it needs to be close by for latency reasons. Oftentimes people have concerns about IP protection, or about their applications and how they run within that local environment. So if customers are betting their business on this insight and analytics, which many of them are, they need business-critical performance and experts to help them with implementation and migration, and they want resiliency. >>So is this a do-it-yourself model? In other words, do the customers have to manage it on their own, or how are you helping there? >>That's a great question. The fantastic thing about HPE Green Lake is that we manage it all for the customer. In essence, they don't have to worry about anything on the back end: we manage capacity, we manage performance, we manage updates, and all of those types of things. So we really make it super simple. And we're offering these bundled solutions featuring our HPE Apollo systems, which are purpose-built for running things like modeling and simulation workloads. And again, because it's Green Lake, and because it's cloud services, this provides self-service and automation, and customers can manage it however they want: we can do it all for them, or they can do some on their own. It's really super easy, and it's really up to them how they want to manage that system. >>What about analytics? A lot of people want to dig deeper into the data. How are you supporting that? >>Yeah, analytics is key. One of the best things about this HPC offering is that we provide an open platform, so customers have the ability to leverage whatever tools they want for analytics, and they can manage whatever systems they want to pull data from. So they really have a ton of flexibility.
But the key is that because it's HPE Green Lake, and because these are HPE's market-leading HPC systems, they get the fastest systems, they get it all managed for them, and they only pay for what they use, so they don't need to write a huge check up front. Frankly, they get the best of all those worlds together, in order to come up with what matters to them: true business outcomes, true analytics, so that they can make the decisions they need to run their business. >>Yeah, that's awesome. You guys are clearly making some good progress here. Actually, I see it as a real game changer for the types of customers you described, particularly those folks who, like you said, think they can't move stuff into the cloud and have to stay on-prem, but want that cloud experience. That's really exciting. We're going to have you back in a few minutes to talk about the Green Lake cloud services and some of the new industry platforms that you see evolving. >>Awesome. Thanks so much. I look forward to it. >>Yeah, us too. Okay, right now we're going to check out the conversation I had earlier with Pete Ungaro and Addison Snell on HPC. Let's watch. >>Welcome, everybody, to the spotlight session here at Green Lake Day. We're going to dig into high performance computing. Let me first bring in Pete Ungaro, who's the GM for HPC and Mission Critical Solutions at Hewlett Packard Enterprise, and then we're going to pivot to Addison Snell, who is the CEO of the research firm Intersect360. So, Pete, starting with you: welcome, and it's really a pleasure to have you here. I want to start off by asking you: what are the key trends you see in the HPC and supercomputing space? And I'd really appreciate it if you could talk about how customer consumption patterns are changing. >>Yeah, I appreciate that, David, and thanks for having me. I think the biggest thing we're seeing is just the massive growth of data.
As we get larger and larger data sets and larger and larger models, we're finding more and more new ways to compute on that data; new algorithms like AI would be a great example of that. And as people start to see this, especially as they go through digital transformations, I believe more and more of them can take advantage of HPC but maybe don't know how, and don't know how to get started. So they're looking at how to get going in this environment. Many longtime HPC customers just consume it in their own data centers; they have that capability. But many don't, and so they're asking: how can I do this? Do I need to build up that capability myself? Do I go to the cloud? What about my data and where it resides? So there are a lot of things that go into thinking through how to start taking advantage of this new infrastructure. >>Excellent. We all know HPC workloads support research and discovery for some of the toughest and most complex problems, particularly those affecting society. So I'm interested in your thoughts on how you see Green Lake helping in these endeavors specifically. >>Yeah, one of the most exciting things about HPC is just the impact that it has, everywhere from building safer cars and airplanes, to looking at climate change, to finding new vaccines for things like COVID that we're all dealing with right now. So one of the biggest things is how we take advantage of that and use it to benefit society overall. And as we think about implementing HPC: how do we get started, and then how do we grow and scale as we add more and more capability? Those are the biggest things that we're seeing on that front. >>Yes. Okay, so just about a year ago you guys launched the Green Lake initiative and the complete focus on as-a-service.
So I'm curious as to how the new Green Lake services, the HPC services specifically, relate to Green Lake overall. How do they fit into HPE's high performance computing portfolio and strategy? >>Yeah, great question. Green Lake is a new consumption model for us, and it's very exciting. We keep our entire HPC portfolio that we have today, but we extend it with Green Lake and offer customers expanded consumption choices. So customers that are dealing with the growth of their data, or that are moving to digital transformation applications, can use Green Lake to easily scale up from workstations, to manage their system costs or operational costs, or, if they don't have the staff to expand their environment, Green Lake provides all of that in a managed infrastructure for them. So if they're going from a pilot environment up into a production environment over time, Green Lake enables them to do that very simply and easily, without having to have all that internal infrastructure: people, computers, data centers, et cetera. Green Lake provides all of that for them, so they can have a turnkey solution for HPC. >>So a lot easier entry. A key word that you used there was choice. So basically you're providing optionality; you're not necessarily forcing them into a particular model. Is that correct? >>Yeah, 100 percent, Dave. What we want to do is just expand the choices, so customers can acquire and use this technology to their advantage, whether they're large or small, whether they're a startup or a Fortune 500 company, whether they have their own data centers or they want to use a colo facility, whether they have their own staff or not. We want to provide them the opportunity to take advantage of this leading-edge resource. >>Very interesting, Pete. I really appreciate the perspective you guys bring to the market.
I mean, it seems to me it's really going to accelerate broader adoption of high performance computing, bringing it to the masses and giving them an easier entry point. I want to bring Addison Snell into the discussion now. He's the CEO, as I said, of Intersect360, which in my view is the world's leading market research company focused on HPC. Addison, you've been following this space for a while; you're an expert, and you've seen a lot of changes over the years. What do you see as the critical aspect of the market, specifically as it relates to the as-a-service delivery we were just discussing with Pete? And I wonder if you could work in the benefits in terms of, in your view, how it's going to affect HPC usage broadly. >>Yeah, good morning, David. Thanks very much for having me. Pete, it's great to see you again. So we've been tracking a lot of these utility computing models in high performance computing for years, particularly as most of the usage by revenue is actually by commercial endeavors using high performance computing for their R&D and engineering projects and the like. And cloud computing has been a major portion of that; it has the highest growth rate in the market right now, and that double-digit growth accounted for about $1.4 billion of the high performance computing industry last year. But the bigger trend, which makes Green Lake really interesting, is that we saw an additional roughly billion dollars' worth of spending outside what was directly measured in the cloud portion of the market, in areas that we deemed to be cloud-like: as-a-service types of contracts that were still utility computing.
But that spending might be under a software-as-a-service portion of the budget, under software, or under some other managed-services type of contract that the user didn't report directly as cloud, even though it was certainly influenced by utility computing. And I think that's going to be a really dominant portion of the market going forward, when we look at growth rates and where the market's been evolving. >>So that's interesting. I mean, basically you're saying the utility model is not brand new; we've seen it for years. Cloud was obviously a catalyst that gave it a boost. What is new, you're saying, and I'll say it this way because I'd love to get your independent perspective on it, is that the definition of cloud is expanding. People always say it's not a place, it's an experience, and I couldn't agree more. But I wonder if you could give us your independent perspective, both on what I just said and also on how you would rate HPE's position in this market. >>Well, you're right, absolutely, that the definition of cloud is expanding, and that's a challenge when we run our surveys: we try to be pedantic, in a sense, and define exactly what we're talking about. That's how we're able to measure both the direct usage of a typical public cloud and the more flexible notion of as-a-service. Now, you asked about HPE in particular, and that's extremely relevant, not only with Green Lake but with their broader presence in high performance computing. HPE is the number one provider of systems for high performance computing worldwide, and that's largely based on the breadth of HPE's offerings, in addition to their performance in various segments.
So they pick up a lot of the commercial market with their HPE Apollo systems, they hit a lot of big-memory configurations with Superdome Flex, and they scale up to some of the most powerful supercomputers in the world with the HPE Cray EX platforms that go into some of the leading national labs. Now, Green Lake gives them an opportunity to offer this kind of flexibility to customers: rather than committing all at once to a particular purchase price, they can position those systems on a utility computing basis and pay for them as a service, without committing to a particular public cloud. I think that's an interesting role for Green Lake to play in the market. >>Yeah, it's interesting. I mean, earlier this year we celebrated Exascale Day with support from HPE, and it really is all about a community and an ecosystem; there's a lot of camaraderie going on in the space that you guys are deep into. Addison, as we wrap, what should observers expect in the HPC market over the next few years? >>Yeah, that's a great question. What to expect: if 2020 has taught us anything, it's the hazards of forecasting where we think the market is going. When we put out a market forecast, we tend not to account for huge things like unexpected pandemics or wars. But it's relevant to the topic here because, as I said, we were already forecasting cloud and as-a-service models growing. Any time you get into uncertainty, where it becomes less easy to plan for where you want to be in two years, three years, five years, that speaks well to models that are cloud or as-a-service and can flex. And therefore, when we look at the market and plan out where we think it is in 2020 and 2021, anything that accelerates uncertainty is actually going to increase the need for something like Green Lake, or an as-a-service or cloud type of environment.
So we're expecting those sorts of deployments to come in over and above where we had previously expected them in 2020, 2021, because as a service deals well with uncertainty, and that's just the world we've been in recently. >>I think those are great comments and a really good framework. And we've seen this with the pandemic, the pace at which the technology industry in particular, and of course HPE specifically, have responded to support that, your point about agility and flexibility being crucial. And I'll go back to something earlier that Pete said around the data: the sooner we can get to the data to analyze things, whether it's compressing the time to a vaccine or pivoting our businesses, the better off we are. So I want to thank Pete and Addison for your perspectives today. Really great stuff, guys. Thank you. >>Yeah, thank you. >>Alright, keep it right there for more great insights and content. You're watching GreenLake Day. Alright, great discussion on HPC. Now we're going to get into some of the new industry examples, some of the case studies, and new platforms. Keith, HPE GreenLake is moving forward, that's clear. You're picking up momentum with customers, but can you give us some examples of platforms for industry use cases and some specifics around that? >>You know, you bet. And actually, you'll hear more details from Arwa Qadoura, who leads our GreenLake go-to-market efforts, in just a little bit. But specifically, I want to highlight some examples where we provide cloud services to help solve some of the most demanding workloads on the planet. So first off, in financial services, for example, traditional banks are facing increased competition and evolving customer expectations. They need to transform so that they can reduce risk, manage costs, and provide a differentiated customer experience. We'll talk about a platform for Splunk that does just that. 
Second, health care institutions face a growing list of challenges, some due to the COVID-19 pandemic and others years in the making, like our aging population and the rise in chronic disease, which is really driving up demand and straining capital budgets. These global trends create a critical need for transformation to improve the patient experience and their business outcomes. Another example is in manufacturing. Manufacturers are facing many challenges in order to remain competitive, right? They need to be able to identify new revenue streams, run more efficiently from an operations standpoint, and scale their resources. So you'll hear more about how we're optimizing delivery for manufacturing with SAP HANA. I'm also going to highlight in a little more detail today's news on how we're delivering supercomputing through HPE GreenLake at scale, and finally, how we have a robust ecosystem of partners to help enterprises easily deploy these solutions. For example, I think today you're going to be talking to Skip Bacon from Splunk. >>Yeah, absolutely, we sure are. And some really great examples there, especially a couple of industries that stood out. I mean, financial services and health care, they're ripe for transformation, and maybe disruption if they don't move fast enough. So, Keith, we'll be coming back to you a little later today to wrap things up. So thank you. Now we're going to take a look at how HPE is partnering with Splunk and how GreenLake complements data-rich workloads. Let's watch. Now we're going to dig deeper into a data-oriented workload and how HPE GreenLake fits into this use case. And with me is Skip Bacon, vice president of product management at Splunk. Skip, good to see you. >>Good to see you as well. >>So let's talk a little bit about Splunk. 
I mean, you guys are a dominant player in security and analytics, and you know, it's funny, Skip, I used to comment that during the rise of big data, Splunk never really positioned itself as this big data player, with all that hype, but you became kind of the leader in big data without really even, you know, promoting it. It just happened overnight. And you're really now rapidly moving toward a subscription model, and you're making some strategic moves on the M&A front. Give us your perspective on what's happening at the company and why customers are so passionate about your software. >>Sure, a great setup, Dave. Thanks. So, yeah, let's start with the data that's underneath big data, right? As usual, the industry sort of seizes on a term and never stops to think about what it really means. Sure, one big part of big data is your transactional stuff, right? The things that get generated by all of your Oracles and SAPs that reflect how the business actually occurred. But a much bigger part is all of your digital artifacts, all of the machine-generated data that tells you the whole story about what led up to the things that actually happened, right, within the systems, within the interactions between those systems. That's where Splunk is focused. And I think what the market as a whole is really validating is that that machine-generated data, those digital artifacts, are at least as important, if not more so, than the transactional artifacts to this whole digital transformation problem. They're critical to showing IT how to get better at developing and deploying and operating software, how to get better at securing these systems, and then how to take this real-time view of what the business looks like as it's executing in the software right now, hold that up to and inform the business, and close that feedback loop, right? 
So what is it we want to do differently digitally in order to do different and better on the transformation side of the house? So I think a lot of Splunk's general growth is proof of the value prop and the need here, for sure, as we're seeing play out specifically in the domains of IT operations, DevOps, and cybersecurity, as well as, more broadly, in closing that business loop. Splunk's been on a tear growing our footprint overall with our customers and across many new customers, and we've been on a tear with moving parts of that footprint to an as-a-service offering in Splunk Cloud. But a lot of that overall growth is really fueled by just making it simpler, quicker, faster, cheaper, easier to operate Splunk at scale, because the data is certainly not slowing down, right? There's more and more and more of it every day, more latent potential value locked up in it. So anything that we can do, and that our partners can do, to improve the cost economics, to improve the agility, to improve the responsiveness of these systems is huge for that customer value prop, and that's where we get so excited about what's going on with GreenLake. >>Yeah, so that makes sense. I mean, the digital business is a data business, and that means putting data at the core, and Splunk is obviously a key part of that. So, as I said earlier, Splunk is a leader in this space. What's the deal with your HPE relationship? You touched on that. What should we know about your partnership, and what's the solution with HPE? What's the customer sweet spot? >>Yep, all good questions. So we've been working with HPE for quite a while on a number of different fronts. This GreenLake piece is the most interesting, sort of the purest intersection of both of these threads, if you will. So we've been working to take our core data platform, deployed with an enterprise operator for Kubernetes. 
We stick that atop HPE GreenLake, which is really a Kubernetes-as-a-service platform, and go prove performance, scalability, agility, flexibility, and cost economics, starting with some of Splunk's biggest customers. And we've proven a lot of those things in great measure. I think the opportunity, the ability to vertically scale Splunk in containers atop beefy boxes and really streamline the automation, the orchestration, the operations, all of that yields what, in the words of one of our mutual customers, is literally a transformational platform for deploying and operating Splunk. So we're hard at work on the engineering side, hard at work on the architectural reference, sizing, and capacity-planning sides, and then increasingly rolling up our sleeves and taking this stuff to market together. >>Yeah, I mean, we're seeing just the idea of cloud, the definition of cloud, expanding. Hybrid brings in on-prem. We talked about the edge. And we've seen Splunk rapidly transitioning its pricing model to a subscription platform, if you will. And of course, that's what GreenLake is all about. What makes Splunk a good fit for GreenLake and vice versa? What does it mean for customers? >>Sure. So a couple of different parts, I think, make this a perfect marriage. Splunk at its core, if you're using it well, you're using it on a very iterative, discovery-driven, follow-the-path-to-value kind of basis. That makes it a little hard to plan the infrastructure and size these things right. We really want customers to be focused on how to get more data in and how to get more value out, and if you're doing it well, those things are going to go up and up and up over time. You don't want to be constrained by sizing and capacity planning, by procurement cycles for infrastructure. 
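The capacity-planning bind Skip describes can be made concrete with a toy calculation. This is a minimal sketch with purely illustrative numbers of my own (nothing here is Splunk or HPE data): if you provision fixed infrastructure for peak demand, average utilization is only the mean-to-peak ratio, which is the idle headroom an elastic, as-a-service model avoids paying for.

```python
# Toy illustration of peak provisioning vs. actual usage.
# The demand figures are invented for illustration, not vendor data.
demand = [10, 12, 18, 25, 40, 41, 43, 60]  # hypothetical TB/day ingested, per quarter

peak = max(demand)               # fixed infrastructure must be sized for this
avg = sum(demand) / len(demand)  # what is actually used on average
utilization = avg / peak         # idle headroom is 1 - utilization

print(f"peak-provisioned utilization: {utilization:.0%}")
```

Under these assumed numbers, peak-provisioned gear runs at roughly 52% average utilization; the as-a-service argument is that the remaining headroom is capacity you paid for up front but did not use.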
So with the GreenLake model, customers have got already-deployed systems, already-deployed capacity, available on an as-a-service basis, very fast, very agile. If they need the next tranche of capacity to bring in that next data set or run that next set of analytics, it's available immediately as a service, not "hey, we've got to kick off the procurement cycle for a whole bunch more hardware boxes." So that flexibility, that agility, are key to the general pattern for using Splunk. And again, that ability to vertically scale, to stick multiple Splunk instances into containers and load more and more of those up on these physical boxes, gives you great cost economics. You know, Splunk has a voracious appetite for data, for doing analytics against that data. The less expensive we can make that processing, the better. And the ability to really fully sweat the assets, to fully utilize those assets, that kind of vertical scale is the other great element of the GreenLake solution. >>Yes. I mean, when you think about the value prop for customers with Splunk and HPE GreenLake, it gets a lot of what you would expect from what we used to talk about in the early days of cloud: that flexibility. It takes away a lot of the sort of mundane capacity planning, you can shift resources, as you talked about, and scale in a number of use cases. So that's sort of another interesting angle, isn't it? >>Yeah, it's the classic tech story. Faster, quicker, cheaper, easier, right? Just taken to whole new levels and whole new extremes with these technologies. >>What do you see as the differentiators with Splunk and HPE? Maybe what's different from sort of the way we used to do things, but also sort of, you know, modern-day competition? >>Yeah, all good questions. So I think the general attributes of Splunk are differentiated, and GreenLake's are differentiated. 
I think when you put them together, you get this classic one-plus-one-equals-three story. So what I hear from a lot of our target customers, big enterprises, big public sector customers, is that they can see the path to these benefits. They understand in theory how these different technologies would work together, but they're concerned about their own skills and abilities to go build and run those. And the real beauty of GreenLake and Splunk is that this all comes sort of pre-designed, pre-integrated, right? Pre-built. HPE is then there providing these running containers as a service. So it's taking a lot of the skills and the concerns off the customer's plate, right, allowing them to fast-forward to, you know, cutting-edge technology without any of the risk. And then, most importantly, allowing customers to focus their very finite resources, their people, their time, their money, their cycles, on the things that are going to drive differentiated value back to the business. You know, let's face facts: buying and provisioning hardware is not a differentiating activity. Running containers successfully? Not differentiating. Running the core of Splunk? Not that differentiating. They can take all of those cycles and focus them instead on the simple mechanics: how do we get more data in, run more analytics on it, and get more value out? Right? Then you're on the path to really delivering differentiated, you know, sustainable-competitive-advantage type stuff back to the business, back to that digital transformation effort. So taking the skills out, taking the worries out, taking the concerns about new tech out, taking the procurement cycles out, improving scalability. Again: quicker, faster, cheaper, better, for sure. >>It's kind of interesting when you look at how the parlance has evolved, from cloud, and then you had private cloud. 
We talk a lot about hybrid, but I'm interested in your thoughts on why Splunk and HPE GreenLake now. I mean, what's happening in the market that makes this the right place and the right time, so to speak? >>Yeah, again, I put cloud right up there with big data as one of those really overloaded terms we keep redefining as we go. If we define it one way, it's an experience, a set of outcomes that customers are looking for, right? What does any one of our mutual customers really want? Well, they want capabilities that are quick to get up and running, that are fast to get to value, that are aligned, price-wise, with how they deliver value to the business, and that they can quickly change, right, as the needs of the business and the operation shift. I think that's the outcome set that people are looking to. Certainly, the early days of cloud we thought were synonymous with public cloud: hey, the way that you get those outcomes is you push things out to the public cloud providers. You know, what we saw is a lot of that motion in cases where there wasn't the best of alignment, right? You didn't get all those outcomes that you were hoping for; the cost savings weren't there. Or again, these big enterprises, these big organizations, have a whole bunch of other workloads that aren't necessarily public-cloud amenable, but what they want is that same cloud experience. And this is where you see the evolution into hybrid clouds and into private clouds. Yeah, any one of our customers is looking across the entirety of this landscape: things that are on-prem that are probably going to be on-prem forever, things that they're moving into private cloud environments, things that they're moving into, or growing, or expanding, or landing net-new in public cloud. They want those same outcomes, the same characteristics, across all of that. That's a lot of Splunk's value prop as a provider, right? 
We can go monitor and help you operate and develop and secure exactly all of that, no matter where it's located. Splunk on GreenLake is all about that stack, you know, working in that very cloud-native way, even where it made sense for customers to deploy and operate their own software. Even the Splunk they're running over here themselves is helping them monitor and secure the other workloads that they put into their public cloud environments. >>Well, it's another key proof point that we're seeing throughout the day here: a software leader, you know, and HPE bringing together its ecosystem partners to actually deliver tangible value to customers. Skip, great to hear your perspective today. Really appreciate you coming on the program. >>My pleasure. And thanks so much for having us. Take care. Stay well. >>Yeah, cheers, you too. Okay, keep it right there. We're going to go back to Keith now to have him close out this segment of the program. You're watching HPE GreenLake Day on theCUBE. All right, so we're seeing some great examples of how GreenLake is supporting a lot of different industries and a lot of different workloads. We just heard from Splunk, really part of the ecosystem, with a really data-heavy workload. And we're seeing the progress: the HPC example, manufacturing, and we talked about healthcare and financial services, critical industries that are really driving toward the subscription model. So, Keith, thanks again for joining us. Is there anything else that we haven't hit that you feel our audience should know about? >>Yeah, you bet. You know, we didn't cover some of the new capabilities that are really providing customers with a holistic experience to address their most demanding workloads with HPE GreenLake. So first is our GreenLake managed security services. 
This provides customers with an enterprise-grade managed security solution that delivers lower costs and frees up a lot of their resources. The second is our HPE Advisory and Professional Services group, which helps provide customers with tools and resources to explore their needs for their digital transformation. Think about workshops and trials and proofs of concept, and all of that implementation. So you get the strategy piece, you get the advisory piece, and then you get the implementation piece that's required to help them get started really quickly. And then third would be our HPE Ezmeral software portfolio. This provides customers with the ability to modernize their apps and data, unify hybrid cloud and edge computing, and operationalize artificial intelligence, machine learning, and analytics. >>You know, I'm glad that you brought in the sort of machine intelligence piece, the machine learning, because a lot of times that's the reason why people want to go to the cloud. At the same time, you bring in the security piece, and there are a lot of reasons why people want to keep things on-prem. And of course, the use cases we're talking about here are really bringing that cloud experience, that consumption model, on-prem. I think it's critical for companies because they're expanding their notion of cloud computing, really extending into hybrid and the edge with that similar experience, or substantially the same experience. So I think folks are going to look at today's news as real progress. We're pushing you guys on some milestones and some proof points toward this vision. It's a critical juncture for organizations, especially those that are looking for comprehensive offerings to drive their digital transformations. Your thoughts, Keith? >>Yeah, you know, we know as many as 70% of current and future apps and data are going to remain on-prem. 
They're going to be in data centers, they're going to be in colos, they're going to be at the edge, and, you know, really for critical reasons. And so hybrid is key. As you mentioned a number of times, we want to help customers transform their businesses and really drive business outcomes in this hybrid, multi-cloud world with HPE GreenLake and our targeted solutions. >>Excellent. Keith, thanks again for coming on the program. Really appreciate your time. >>Always. Thanks so much for having me, and take care. Stay healthy, please. >>Alright, keep it right there. Everybody, you're watching HPE GreenLake Day on theCUBE.

Published Date : Dec 2 2020



Pete Ungaro & Addison Snell


 

>> Announcer: From around the globe, it's theCUBE with digital coverage of HPE GreenLake Day, made possible by Hewlett Packard Enterprise. >> Welcome everybody to this spotlight session here at GreenLake Day. We're going to dig into high-performance computing. Let me first bring in Pete Ungaro, who's the GM for HPC and Mission Critical Solutions at Hewlett Packard Enterprise. And then we're going to pivot to Addison Snell, who's the CEO of research firm Intersect360. So Pete, let's start with you. Welcome, and it's really a pleasure to have you here. I want to first start off by asking you what are the key trends that you see in the HPC and supercomputing space. And I'd really appreciate it if you could talk about how customer consumption patterns are changing. >> Yeah, appreciate that, Dave, and thanks for having me. I think the biggest thing that we're seeing is just the massive growth of data. And as we get larger and larger data sets, larger and larger models happen, and we're having more and more new ways to compute on that data, so new algorithms; AI would be a great example of that. And as people are starting to see this, especially as they're going through digital transformations, more and more people, I believe, can take advantage of HPC but maybe don't know how and don't know how to get started. And so they're looking for how to get going in this environment. And many customers that are long-time HPC customers just consume it in their own data centers; they have that capability. But many don't. And so they're looking at: how can I do this? Do I need to build up that capability myself? Do I go to the cloud? What about my data and where that resides? So there's a lot of things that are going into thinking through how do I start to take advantage of this new infrastructure? >> Excellent. I mean, we all know HPC workloads. You're talking about furthering research and discovery for some of the toughest and most complex problems, particularly those that are affecting society. 
So I'm interested in your thoughts on how you see GreenLake helping in these endeavors specifically. >> Yeah, one of the most exciting things about HPC is just the impact that it has, everywhere from building safer cars and airplanes, to looking at climate change, to finding new vaccines for things like COVID that we're all dealing with right now. So one of the biggest things is how do we take advantage of that and use it to benefit society overall. And as we think about implementing HPC, how do we get started, and then how do we grow and scale as we get more and more capabilities? Those are the biggest things that we're seeing on that front. >> Yeah, okay. So just about a year ago, you guys launched the GreenLake initiative and the complete focus on as a service. So I'm curious how the new GreenLake services, the HPC services specifically, fit into HPE's overall high-performance computing portfolio and strategy. >> Yeah, great question. GreenLake is a new consumption model for us, so it's very exciting. We keep our entire HPC portfolio that we have today but extend it with GreenLake and offer customers expanded consumption choices. So customers that are dealing with the growth of their data, or that are moving to digital transformation applications, can use GreenLake to just easily scale up from workstations, to manage their system costs or operational costs. Or if they don't have staff to expand their environment, GreenLake provides all of that in a managed infrastructure for them. So if they're going from, like, a pilot environment to a production environment over time, GreenLake enables them to do that very simply and easily, without having to have all that internal infrastructure: people, computers, data centers, et cetera. GreenLake provides all of that for them, so they can have a turnkey solution for HPC. 
So, a lot easier entry. A key word that you used there was choice, though. So basically you're providing optionality; you're not necessarily forcing them into a particular model. Is that correct? >> Yeah, 100%, Dave. What we want to do is just expand the choices so customers can buy and acquire and use the technology to their advantage, whether they're large or small, whether they're a startup or a Fortune 500 company, whether they have their own data centers or they want to use a colo facility, whether they have their own staff or not. We want to just provide them the opportunity to take advantage of this leading-edge resource. >> Very interesting, Pete. I really appreciate the perspectives that you guys are bringing to the market. I mean, it seems to me it's going to really accelerate broader adoption of high-performance computing to the masses, really giving them an easier entry point. I want to bring Addison Snell into the discussion now. He's the CEO, as I said, of Intersect360, which in my view is the world's leading market research company focused on HPC. Addison, you've been following this space for a while. You're an expert; you've seen a lot of changes over the years. What do you see as the critical aspects in the market, specifically as it relates to this as-a-service delivery that we were just discussing with Pete? And I wonder if you could sort of work in the benefits, in terms of, in your view, how it's going to affect HPC usage broadly. >> Yeah, good morning, Dave, and thanks very much for having me. Pete, it's great to see you again. So we've been tracking a lot of these utility computing models in high-performance computing for years, particularly as most of the usage by revenue is actually by commercial endeavors using high-performance computing for their R&D and engineering projects and the like. 
And cloud computing has been a major portion of that, and it has the highest growth rate in the market right now, where we're seeing this double-digit growth that accounted for about $1.4 billion of the high-performance computing industry last year. But the bigger trend, which makes GreenLake really interesting, is that we saw an additional billion dollars or so worth of spending outside what was directly measured in the cloud portion of the market, in areas that we deemed to be cloud-like, which were as-a-service types of contracts that were still utility computing, but they might be under a software-as-a-service portion of a budget, under software or some other managed-services type of contract that the user wasn't reporting directly as cloud but that was certainly influenced by utility computing. And I think that's going to be a really dominant portion of the market going forward when we look at growth rates and where the market's been evolving. >> So that's interesting. I mean, basically you're saying this utility model is not brand new; we've seen that for years. Cloud was obviously a catalyst that gave that a boost. What is new, you're saying, and I'll say it this way, I'd love to get your independent perspective on this, is that the definition of cloud is expanding, where, as people always say, it's not a place, it's an experience, and I couldn't agree more. But I wonder if you could give us your independent perspective on that, both on the thoughts of what I just said but also: how would you rate HPE's position in this market? 
Well, you're right, absolutely, that the definition of cloud is expanding. And that's a challenge when we run our surveys, where we try to be pedantic in a sense and define exactly what we're talking about. And that's how we're able to measure both the direct usage of a typical public cloud but also a more flexible notion of as a service. Now, you asked about HPE in particular, and that's extremely relevant, not only with GreenLake but with their broader presence in high-performance computing. HPE is the number one provider of systems for high-performance computing worldwide, and that's largely based on the breadth of HPE's offerings, in addition to their performance in various segments. So they pick up a lot of the commercial market with the HPE Apollo Gen10 Plus, they hit a lot of big-memory configurations with the Superdome Flex, and they scale up to some of the most powerful supercomputers in the world with the HPE Cray EX platforms that go into some of the leading national labs. Now GreenLake gives them an opportunity to offer this kind of flexibility to customers rather than committing all at once to a particular purchase price. But if you want to position those on a utility computing basis, paying for them as a service without committing to a particular public cloud, I think that's an interesting role for GreenLake to play in the market. >> Yeah, it's interesting. I mean, earlier this year we celebrated Exascale Day with the support from HPE, and it really is all about a community and an ecosystem. There's a lot of camaraderie going on in the space that you guys are deep into. Addison, as we wrap, what should observers expect in this HPC market, in this space, over the next few years? >> Yeah, that's a great question, what to expect, because if 2020 has taught us anything, it's the hazards of forecasting where we think the market is going. Like, when we put out a market forecast, we tend not to look at huge things like unexpected pandemics or wars, but it's relevant to the topic here. Because, as I said, we were already forecasting cloud and as-a-service models growing. Anytime you get into uncertainty, where it becomes less easy to plan for where you want to be in two years, three years, five years, that model speaks well: things that are cloud or as a service do very well, flexibly. 
And therefore, when we look at the market and plan out where we think it's going in 2020 and 2021, anything that increases uncertainty actually increases the need for something like GreenLake, or an as-a-service or cloud type of environment. So we're expecting those sorts of deployments to come in over and above where we had previously expected them in 2020 and 2021, because as a service deals well with uncertainty, and that's just the world we've been in recently. >> I think those are great comments and a really good framework. And we've seen this with the pandemic: the pace at which the technology industry in particular, and of course HPE specifically, have responded to support that, your point about agility and flexibility being crucial. And I'll go back to something Pete said earlier around the data: the sooner we can get to the data to analyze things, whether it's compressing the time to a vaccine or pivoting our businesses, the better off we are. So I want to thank Pete and Addison for your perspectives today. Really great stuff, guys, thank you. >> Yeah, thank you. >> Thank you. >> All right, keep it right there for more great insights and content. You're watching GreenLake Day. (ambient music)
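As an aside, Addison's split between directly measured cloud spending and the additional "cloud-like" as-a-service spending can be sketched as a simple aggregation. Only the two approximate totals, about $1.4 billion of cloud and about $1 billion of cloud-like spending, come from the conversation; the segment names and per-segment figures below are illustrative assumptions, not Intersect360's actual taxonomy:

```python
# Illustrative aggregation of HPC spending by procurement model.
# Only the rough totals echo the conversation; the segment
# breakdown itself is a made-up assumption.
spending = [
    {"segment": "public cloud", "model": "cloud", "usd_billions": 1.4},
    {"segment": "SaaS-budgeted contracts", "model": "cloud-like", "usd_billions": 0.6},
    {"segment": "managed services", "model": "cloud-like", "usd_billions": 0.4},
]

def total_by(models, items):
    """Sum spending across every segment whose model is in `models`."""
    return sum(i["usd_billions"] for i in items if i["model"] in models)

direct_cloud = total_by({"cloud"}, spending)
all_utility = total_by({"cloud", "cloud-like"}, spending)
print(f"directly measured cloud: ${direct_cloud:.1f}B")  # -> $1.4B
print(f"all utility-style spend: ${all_utility:.1f}B")   # -> $2.4B
```

The point of the sketch is simply that how you classify the cloud-like segments changes which total you report, which is exactly the measurement challenge Addison describes.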

Published Date : Nov 23 2020


Barbara Hallmans, HPE | Microsoft Ignite 2019


 

>>Live from Orlando, Florida, it's theCUBE, covering Microsoft Ignite, brought to you by Cohesity. >>Welcome back, everyone, to theCUBE's live coverage of Microsoft Ignite. 26,000 people are here, and theCUBE is in the middle of the show floor. It's an exciting time. I'm your host, Rebecca Knight, along with my co-host, Stu Miniman. We're joined by Barbara Hallmans. She is the director of global ecosystem strategy and the Microsoft ecosystem lead at HPE. Thank you so much for coming on theCUBE direct from Munich. Yes, Rebecca, glad to be here. So you have two roles, global ecosystem strategy and Microsoft ecosystem lead. Explain how those work and how there is synergy between those two roles. >>Yeah, I mean, I started off with the Microsoft role, but what we figured out is that actually the world is much bigger than just one alliance, and that's why we call ourselves the ecosystem. So it's all about driving alliances with different partner types, SIs, ISVs, or also smaller partners in different segments, and building a whole ecosystem play. That's what I'm attempting to do. >>So how do HPE and Microsoft work together? >>We've been partnering for 30 years, a strong, strong relationship with Microsoft, and it was really nice to see some of the HPE solutions on stage today as we deepen our partnership. We have several areas, probably three or four I can talk about in the next few minutes, on how we work together with Microsoft specifically. >>So Barbara, you know, I think most of us remember back early on, if you're talking about Windows and Office and, you know, HP, or what's now part of HP Inc. I'm not sure as many people know about all of the places that HPE partners. Obviously on the server side it makes sense, but Azure is something else, and the Azure Arc announcement, help us understand, you know, Azure Stack and beyond, where HPE fits with Microsoft on the enterprise side. >>Perfect, absolutely. We still have an OEM business with Microsoft where we actually have servers attached with licenses. That's not going away, it's a strong business for us. We work very closely around SQL with Microsoft, and that's also where this whole Azure Arc announcement fits in. But it's more than just SQL, right, with this Azure Arc. For me, it's an announcement around deepening relationships. We're both interested in a hybrid strategy; I really liked hearing from Satya today how important hybrid is for Microsoft. And this announcement, Azure Arc, that's in public preview now, right? We'll give some more details on that, so we'd love to work with customers on it. We are actually part of the public preview, and if any customers are interested, I'd love to hear from them. Please come to me, Barbara Hallmans, and we'll hook you up and get you into the program. It's really about the hybrid piece, right, that we both work on. >>And Barbara, HPE, if my understanding is right, plays on both sides of it. It's not just in the data center with some gear there, but as you said, there's SQL, the application side, you know, hybrid. HPE plays across the board. >>Indeed. I don't know if you know, but HPE is actually an Azure Expert MSP partner. We got that last year, and we're very proud of it; I think we're one of about 50 such partners worldwide. That also means we can offer managed services and migration services, helping people to move to an Azure-based cloud. That came partially because of our position with CTP, Cloud Technology Partners, but also RedPixie in the UK, and they're now all part of our Pointnext services group. As such, we have numerous customers we've actually helped into the public cloud, helped them to find the right place.
I don't know if you've seen the video from Eric Poodle that was part of the announcement today as well around Azure Arc; this is all about finding the right mix for your applications, and this is where we work together, a perfect fit. >>What are some of the biggest challenges you're seeing from your customers, in terms of how Azure Arc might be the solution for them? >>With Azure Arc, it's hard to say at this stage, because I really don't work for Microsoft, so we'd have to ask them. But again, what I understand the vision to be is that we will be able to manage hybrid environments in a better way, and again, this is where HPE fits. We have a lot of our own tools, of course, but we also announced that our hardware, all of it, will be available as a service within the next few years. So we're moving in that direction in addition to Azure, and I think this will help customers take advantage in the end. But it's hard to say, right? This is very new at this stage. >>And this is a Microsoft show, not an HPE show, but I read somewhere that you had done a talk, "Fear No Cloud," with HPE. Are companies afraid? I mean, how would you describe the atmosphere with the companies that you work with? >>I've worked in the cloud space for the last 10 years or longer, you know, in different parts of the industry, and from the early adoption days people were really looking into, you know, should I trust my data with this specific cloud provider, or which applications am I going to move? I think today people have lost the fear a little bit, but they still don't know what to put where. There are applications you do not want to move to a cloud; there are others that you, for your specific company, don't want to move, while another company may do that. And that's where we're trying to help them, right? So you don't have to fear the cloud.
We can actually help you to adopt it at your pace, in your way, so that you take the most advantage out of it. >>But Barbara, I would love to hear any color you could give on joint HPE and Microsoft customers. The announcement today feels like an update on the hybrid message, but HPE and Microsoft have been working together on solutions like Azure Stack for a number of years. So what's working well today, and what do you think this will mean down the road as some of these solutions start to mature even further? >>Maybe moving to another area where HPE and Microsoft work very well together, which is around the modern workplace practice: we just had a really nice win with Porsche, actually in Austria, with plans to roll this out further, and HPE's team has helped them move from their current environment to an up-to-date Microsoft 365 environment. There's the MoD in the UK, and I asked twice if I could talk about the MoD on stage here, and they said yes, another customer that we've helped move to a Microsoft 365 environment. So there are numerous customers that trust HPE with Microsoft in moving their information to the cloud. And that's one example; with Azure Stack we have several customers we've won as well. It's difficult to talk about those customers because a lot of them are in the government sector, so there are a few we can talk about, mostly service providers, but the really big names, unfortunately, we can't talk about because of confidentiality. >>Trust is one of the things that we keep hearing so much about at this conference. Satya Nadella talked about it on the main stage this morning. In terms of the relationship that you have, and HPE's standing in the technology world, how do you build trust with customers?
And how do you make sure you are maintaining that, that bond of trust, and also the reputation of being a trustworthy partner? >>Yeah, I love Satya's point on trust, because that actually makes the difference between just delivering hardware and walking away. And this is probably coming back to Azure Stack Hub, as it's called now, right? You know, we've actually been told by Microsoft that we've excelled with customers from a delivery standpoint. We don't just walk away and say good luck with the equipment, you're on your own; we really help them and make sure it's working for them. So for me, that's the key: that you can come back to a customer afterwards and the customer will actually have you in their office again. >>Well, Barbara, I think back, and for most of my career one of the hallmarks of an HPE solution was the turnkey offering, from ordering through delivery through up and running, and HPE has been streamlining that. Thinking back over my entire career, cloud has not necessarily been the simplest solution out there. So maybe give us, directionally, how does HPE partner with Microsoft and your customers to make this easier as we go through this journey? >>So, as I said, as an Azure Expert MSP partner, we have done several trainings with Microsoft, and we make sure that our people are educated on it. We have, you know, RedPixie in the UK; it's now part of Pointnext, but I love to say the name because people still associate it with a specific, strong, and trustworthy team. They really built up a very good practice with Microsoft. There are, you know, local deal clinics where we really work on a specific deal, deal by deal, on how we can make it better for the customer. So a lot of local engagement, but for me, that all happens in country. Right, at a global level I can only help and steer a little bit. But that's also trust for me.
It's a person-to-person relationship that happens in country. >>And would you say there are big differences country to country in terms of how willingly people trust you, and then how long it takes to build that relationship? >>So I'm going to get in trouble now with some of the countries. No, you know, I personally lived in Canada for a while, and so for me, some people are harder, you know, you need to get to know them, but then the trust is even deeper than with some of the others. But I have to say, from all those who look at HPE, we're really a global company, right? And this goes from Japan to the South Pacific, to many countries in Asia that will be very successful with Azure Stack specifically, and obviously Europe, the Middle East, all the way to North America and South America. So that's the nice thing about HPE, I would say, for the customers as well: they really get a global view and a global company that they can trust. >>So you're here at Ignite from Germany. What are the kinds of conversations you're having, and what do you think you're going to take back with you when you go back to the office next week? >>We have a quite big booth here at the event, right? We have a very nice Edgeline 8000 with us, which is kind of a ruggedized, smaller version, almost small enough to carry along, and it has caught a lot of interest from the customers. So I've just been standing there, watching the customers asking, what is it, can you tell me more about it? The rest is, you know, I love the buzz. I'm actually part of the Microsoft Advisory Council for Inspire, which is the partner event, right? But I love the buzz, to see here what's going on, and I always like to see what other people do at these events, and then just Microsoft.
I think it's a wonderful, wonderful company. The inspiration, the story today was just end-to-end a great story, with great customer stories as well. So kudos to the Microsoft team, well done. >>Congratulations, your gear was highlighted in the keynote this morning, so I'm sure that's driving a lot of traffic through for people to see the latest. >>I would hope so. Superdome Flex was there, and the Azure Stack, both of them were there, and we worked hard for that. Thank you, Microsoft, for giving us the opportunity to be present in the keynote today. >>Well, thank you so much for coming on theCUBE. It was a pleasure having you on, Barbara. >>Thank you, Rebecca. Thank you, Stu.

Published Date : Nov 4 2019


Michael Woodacre, HPE | Micron Insight 2019


 

>>Live from San Francisco, it's theCUBE, covering Micron Insight 2019, brought to you by Micron. >>Welcome back to Pier 27 in San Francisco, a beautiful day here. You're watching theCUBE, the leader in live tech coverage, covering Micron Insight 2019, hashtag MicronInsight. My co-host, David Floyer, and I are pleased to welcome Michael Woodacre, Cube alum and a fellow at Hewlett Packard Enterprise. Michael, good to see you again. Thanks for coming on. >>Thanks for having me. >>You're welcome. So you're talking about HPC on a panel today, but of course, your role inside of HPE has a wider scope. Talk about that a little bit. >>So I'm the lead technologist in our Compute Solutions business unit at Hewlett Packard Enterprise. I've come from the group that worked on in-memory computing, the Superdome Flex platform, around things like traditional enterprise computing, SAP HANA. But I'm now responsible not only for that mission-critical solutions platform but also for our blades and Edgeline businesses as well, so a broader technology scope. >>Okay. And then, of course, today we're talking a lot about data, the growth of data, and as you say, you're sitting on a panel talking about high performance computing and the impact on science. What are you seeing? What are the big trends in terms of the intersection between data and the collision with HPC and science? >>What we're seeing is just this explosion of data, and really a move from how science has traditionally been done, where you put equations into supercomputers, run simulations, test your theories out, and look at the results. >>Come back in a couple weeks, >>Exactly, or potentially years. Now we're seeing a lot of work around collecting data from instruments, whether it's genomic analysis or satellite observations of the planet or of the universe. These are all generating data in vast quantities, at very high rates.
And so we need to rethink how we're doing our science to gain insights from this massive data increase we're seeing. >>You know, this is the 10th year of the Cube, and when we first started covering this in 2010, you could look at the high performance computing market as sort of an indicator of some of the things that were going to happen in so-called big data, and some of those things have played out; I think it probably still is a harbinger. I wonder, how are you seeing machine intelligence applied to all this data, and what can we learn from that, in your opinion, in terms of its commercial applications? >>So as we all know, with this massive data explosion, the question is how do we gain insights from this data? As I mentioned, we used to solve equations for things like computational fluid dynamics, but now things are progressing, so we need to use other techniques to gain understanding. We're using artificial intelligence, and particularly today deep learning techniques, to basically gain insights from the data. We don't have equations that we can use to mine this information, so we're using these AI techniques to effectively generate the algorithms that can then bring patterns of interest to our attention, so we can focus on them and really understand what is the scientific phenomenon driving the particular pattern we're seeing within the data. It's just beyond the ability of the number of HPC programmers we have, with the sort of traditional equation-based methodologies and algorithms, to gain insight. We're moving into this world where we've just outstripped our knowledge and capabilities to gain insight. >>So how is that being made possible? What are the differences in the architecture that you've had to put in, for example, to make this sort of thing possible? >>Yeah, it's a really interesting time. Actually, a few years ago it seemed like computing was starting to get boring.
Now we've got this explosion of new hardware devices being built, basically moving into a more heterogeneous world, because we have this exponential growth of data but traditional computing techniques are slowing down, so people are looking at accelerators to close that gap, and all sorts of heterogeneous devices. So we've really been thinking, how do we change the whole computing infrastructure to move from a compute-centric world to a memory-centric world, and how can we use memory-driven computing techniques to close that gap to gain insight? It's kind of rethinking the whole architectural direction, sort of collapsing down the traditional hierarchy you have from storage to memory to the CPU, to get rid of the legacy bottlenecks of converting protocols from processor to memory to storage, down to a simple, basically memory-driven architecture where you have access to the entire data set you're looking at, which could be many terabytes to petabytes to exabytes, so that you can do simple programming, just directly load-store to that huge data set to gain insights. So that's really changed. >>Fascinating, isn't it? So the hope of Gen-Z is actually taking place now. >>Yes, Gen-Z is an industry-led consortium around a memory fabric, and Hewlett Packard Enterprise and a whole host of industry partners are part of the ecosystem looking at building a memory fabric where people can bring different innovations to operate, whether it's processing types or memory types, on that common infrastructure. There's other work in the industry too, the Compute Express Link Consortium, so there's a lot of interest now in getting memory semantics out of the processor into a common fabric for people to innovate on. >>Do you have some examples of where this is making a difference now, from the work in HPC and your commercial work? >>Certainly.
Yeah, we're working with customers in areas like precision medicine and genomics, basically accelerating the ability to gain insights into, you know, what medical pathway to go down for a particular disease. We're working in cybersecurity; we're all worried about the security of our data and things like network intrusion, so we're looking at how you can gain insights not only into known attack patterns on a network but into the unknown patterns that are just appearing. We're actually applying machine learning techniques to sort of graph data to understand those things. So there's really a very broad spectrum where you can apply these techniques to data analytics. >>Are all scientists now data scientists? What's the relationship between the classic data scientist, where you think of somebody with stats and math and maybe a little bit of coding expertise, and a scientist that has much more domain expertise? Are you seeing data scientists sort of traverse domains? How are those two worlds coming together? >>It's funny you mention that; I had that exact conversation with one of the members of the Cosmos Group in Cambridge, Stephen Hawking's cosmology team, and he said he actually realized a couple of years ago that maybe he should call himself a data scientist, not a cosmologist, because it seemed like what he was doing was exactly what you said. In their case, they're taking their theoretical ideas about the early universe and the measurements from surveys of the sky, the cosmic background radiation, and trying to pair these together. So I think data science is tremendously important right now to accelerate gaining insights into data, but it's not something you can really do in isolation, because a data scientist in isolation is just pointing out peaks or troughs or trends. How do you relate that to the underlying scientific phenomenon?
So you need experts in whatever area you're looking at to work with the data scientists to really bridge that gap. >>Well, with all this data, all this performance computing capacity, and all this memory, it will be fascinating to see what kind of insights come out in the next 10 years. Michael, thanks so much for coming on theCUBE, it's great to have you. >>Thank you very much. >>You're welcome, and thank you for watching. Everybody, we'll be right back at Micron Insight 2019 from San Francisco. You're watching theCUBE.
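As a small-scale illustration of the load/store programming model Woodacre describes, memory-mapped files on a conventional machine let a program address on-disk data as if it were memory, with no explicit read calls. This is only an analogy for a memory-driven architecture, sketched here with Python's standard mmap module:

```python
import mmap
import os
import struct
import tempfile

# Build a small binary data set of 64-bit floats on disk.
path = os.path.join(tempfile.mkdtemp(), "dataset.bin")
values = [float(i) for i in range(1_000)]
with open(path, "wb") as f:
    f.write(struct.pack(f"{len(values)}d", *values))

# Map the file into the address space. From here on there are no
# explicit read() calls: element access is just a load at an offset,
# the programming model a memory-driven architecture generalizes
# to terabyte- and petabyte-scale data sets.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    (x,) = struct.unpack_from("d", mm, 500 * 8)  # "load" element 500

print(x)  # -> 500.0
```

The operating system pages data in on demand, so the program never stages the file through an application buffer; a memory fabric extends the same idea across nodes and storage tiers.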

Published Date : Oct 24 2019


Randy Meyer, HPE & Paul Shellard, University of Cambridge | HPE Discover 2017 Madrid


 

>> Announcer: Live from Madrid, Spain, it's the Cube, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, Spain everybody, this is the Cube, the leader in live tech coverage. We're here covering HPE Discover 2017. I'm Dave Vellante with my cohost for the week, Peter Burris. Randy Meyer is back, he's the vice president and general manager of Synergy and Mission Critical Solutions at Hewlett Packard Enterprise, and Paul Shellard is here, the director of the Center for Theoretical Cosmology at Cambridge University, thank you very much for coming on the Cube. >> It's a pleasure. >> Good to see you again. >> Yeah, good to be back for the second time this week. >> Talking about computing meets the cosmos. >> Well it's exciting. Yesterday we talked about the Superdome Flex that we announced, we talked about it in the commercial space, where it's taking HANA and Oracle databases to the next level, but there's a whole different side to what you can do with in-memory compute. It's all in this high performance computing space. Think about the problems people want to solve in fluid dynamics, in forecasting, in all sorts of analytics problems. High performance compute, one of the things it does is generate massive amounts of data that people then want to do things with. They want to compare that data to what their model said, okay, can I run that against the model? They want to take that data and visualize it, okay, how do I go do that?
The more you can do that in memory, the faster it is to deal with, because you're not going and writing this stuff off to disk, you're not moving it to another cluster and back and forth. So we're seeing this burgeoning of what the HPC guys would call fat nodes, where you want to put in lots of memory and eliminate the IO to make those jobs easier, and Professor Shellard will talk about a lot of that in terms of what they're doing at the Cosmos group, but this is a trend, you don't have to be a university. We're seeing this inside of oil and gas companies, aerospace engineering companies, anybody that's solving these complex computational problems that have an analytical element, whether it's compare to the model, visualize, or do something with the data once you've generated it. >> Paul, explain more about what it is you do. >> Well in the Cosmos Group, of which I'm the head, we're interested in two things: cosmology, which is trying to understand where the universe comes from, the whole big bang, and then black holes, particularly their collisions, which produce gravitational waves. So those are the two main areas, relativity and cosmology. >> That's a big topic. I don't even know where to start, I just want to know, okay, what have you learned, and can you summarize it for a lay person? Where are you today, what can you share with us that we can understand? >> What we do is we take our mathematical models and we make predictions about the real universe, and then we try and compare those to the latest observational data. We're in a particularly exciting period of time at the moment because of a flood of new data about the universe and about black holes. In the last two years, gravitational waves were discovered, there's a Nobel prize this year, so lots of things are happening.
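Randy's fat-node point above, keep the simulation output resident in memory and run the follow-on analysis against it directly rather than staging it through disk to another cluster, can be sketched as follows. This is a pure-Python toy with made-up functions; a real HPC code would use MPI and NumPy arrays:

```python
# Toy version of the "fat node" pattern: the solver's output stays
# resident in memory and the analysis step runs against it directly,
# with no intermediate write to disk or transfer to another cluster.
# Both functions here are invented purely for illustration.

def simulate(n):
    """Stand-in for an HPC solver: produce n samples of a field in [0, 1)."""
    return [((i * 31) % 97) / 97.0 for i in range(n)]

def compare_to_model(samples, model):
    """In-memory analysis step: mean absolute deviation from a model."""
    return sum(abs(s - model(i)) for i, s in enumerate(samples)) / len(samples)

data = simulate(10_000)                      # output stays in RAM...
err = compare_to_model(data, lambda i: 0.5)  # ...analysis runs with no I/O
print(f"mean abs deviation from model: {err:.3f}")
```

The design choice is the one Randy describes: when the working set fits in one fat node's memory, the compare and visualize steps become function calls over live data instead of a round trip through the storage system.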
It's a very data driven science so we have to try and keep up with this flood of new data which is getting larger and larger and also with new types of data, because suddenly gravitational waves are the latest thing to look at. >> What are the sources of data and new sources of data that you're tapping? >> Well, in cosmology we're mainly interested in the cosmic microwave background. >> Peter: Yeah the sources of data are the cosmos. >> Yeah right, so this is relic radiation left over from the big bang fireball, it's like a photograph of the universe, a blueprint and then also in the distribution of galaxies, so 3D maps of the universe and we've only, we're in a new age of exploration, we've only got a tiny fraction of the universe mapped so far and we're trying to extract new information about the origin of the universe from that data. In relativity, we've got these gravitational waves, these ripples in space time, they're traversing across the universe, they're essentially earthquakes in the universe and they're sound waves or seismic waves that propagate to us from these very violent events. >> I want to take you to the gravitational waves because in many respects, it's an example of a lot of what's here in action. Here's what I mean, that the experiment and correct me if I'm wrong, but it's basically, you create a, have two lasers perpendicular to each other, shooting a signal about two or three miles in that direction and it is the most precise experiment ever undertaken because what you're doing is you're measuring the time it takes for one laser versus another laser and that time is a function of the slight stretching that comes from the gravitational waves. That is an unbelievable example of edge computing, where you have just the tolerances to do that, that's not something you can send back to the cloud, you gotta do a lot of the compute right there, right?
>> That's right, yes so a gravitational wave comes by and you shrink one way and you stretch the other. >> Peter: It distorts the space time. >> Yeah you become thinner and these tiny, tiny changes are what's measured and nobody expected gravitational waves to be discovered in 2015, we all thought, oh another five years, another five years, they've always been saying, we'll discover them, we'll discover them, but it happened. >> And since then, it's been used two or three times to discover new types of things and there's now a whole, I'm sure this is very centric to what you're doing, there's now a whole concept of gravitational information, which in fact becomes an entirely new branch of cosmology, have I got that right? >> Yeah you have, it's called multimessenger astronomy now because you don't just see the universe in electromagnetic waves, in light, you hear the universe. This is qualitatively different, it's sound waves coming across the universe and so combining these two, the latest event was where they heard the event first, then they turned their telescope and they saw it. So much information came out of that, even information about cosmology, because these signals are traveling hundreds of billions of light years across to us, we're getting a picture of the whole universe as they propagate all that way, so we're able to measure the expansion rate of the universe from that point. >> The techniques for the observational, the technology for observation, what is that, how has that evolved? >> Well you've got the wrong guy here. I'm from the theory group, we're doing the predictions and these guys with their incredible technology, are seeing the data, and it's amazing, the whole point is you've gotta get the predictions and then you've gotta look in the data for a needle in the haystack which is this signature of these black holes colliding.
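To put illustrative numbers on the "shrink one way, stretch the other" measurement described above (the figures below are typical published LIGO values, not numbers from this conversation): the strain h is the fractional change in arm length, so even a multi-kilometre arm moves by far less than the width of a proton.

```python
# Back-of-the-envelope for the arm stretching described above.
# Illustrative figures: published detections have strain h ~ 1e-21,
# and each interferometer arm is about 4 km (roughly the "two or three
# miles" mentioned in the conversation).

def arm_length_change(strain, arm_length_m):
    """Length change of one arm: dL = h * L / 2 (one arm stretches
    while the perpendicular arm shrinks)."""
    return strain * arm_length_m / 2

delta_l = arm_length_change(1e-21, 4_000)
print(f"{delta_l:.1e} m")  # ~2e-18 m, around a thousandth of a proton's width
```

That sub-proton-width displacement is why the timing comparison between the two laser paths has to be computed right at the instrument, as Peter notes.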
>> You think about that, I have a model, I'm looking for the needle in the haystack, that's a different way to describe an in memory analytic search pattern recognition problem, that's really what it is. This is the world's largest pattern recognition problem. >> Most precise, and literally. >> And that's an observation that confirms your theory right? >> Confirms the theory, maybe it was your theory. >> I'm actually a cosmologist, so in my group we have relativists who are actively working on the black hole collisions and making predictions about this stuff. >> But they're dampening vibration from passing trucks and these things and correcting it, it's unbelievable. But coming back to the technology, the technology is, one of the reasons why this becomes so exciting and becomes practical is because for the first time, the technology has gotten to the point where you can assume that the problem you're trying to solve, that you're focused on and you don't have to translate it in technology terms, so talk a little bit about, because in many respects, that's where business is. Business wants to be able to focus on the problem and how to think the problem differently and have the technology to just respond. They don't want to have to start with the technology and then imagine what they can do with it. >> I think from our point of view, it's a very fast moving field, things are changing, new data's coming in. The data's getting bigger and bigger because instruments are getting packed tighter and tighter, there's more information, so we've got a computational problem as well, so we've got to get more computational power but there's new types of data, like suddenly there's gravitational waves. 
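The "world's largest pattern recognition problem" is, at its core, matched filtering: slide a theoretically predicted waveform template across the noisy data stream and look for a correlation spike. A toy NumPy sketch of the idea, on synthetic data (nothing like a production gravitational-wave pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "chirp" template standing in for a predicted merger waveform.
t = np.linspace(0.0, 1.0, 256)
template = np.sin(2 * np.pi * (5 + 20 * t) * t) * np.exp(-((t - 0.8) / 0.1) ** 2)

# Bury the template in noise at a known offset: the needle in the haystack.
data = rng.normal(0.0, 1.0, 4096)
true_offset = 1500
data[true_offset:true_offset + template.size] += 3.0 * template

# Matched filter: cross-correlate the data against the template and
# take the lag with the strongest response.
response = np.correlate(data, template, mode="valid")
found = int(np.argmax(np.abs(response)))
print(found)  # lands at, or within a sample or two of, true_offset
```

The real search differs in almost every detail (noise whitening, template banks over masses and spins, frequency-domain filtering), but the "predict, then correlate against data" shape is the same.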
There's new types of analysis that we want to do so we want to be able to look at this data in a very flexible way and ingest it and explore new ideas more quickly because things are happening so fast, so that's why we've adopted this in memory paradigm for a number of years now and the latest incarnation of this is the HPE Superdome Flex and that's a shared memory system, so you can just pull in all your data and explore it without carefully programming how the memory is distributed around. We find this is very easy for our users to develop data analytic pipelines to develop their new theoretical models and to compare the two on the single system. It's also very easy for new users to use. You don't have to be an advanced programmer to get going, you can just stay with the science in a sense. >> You gotta have a PhD in Physics to do great Physics, you don't have to have a PhD in both Physics and technology. >> That's right, yeah it's a very flexible program. A flexible architecture with which to program so you can more or less take your laptop pipeline, develop your pipeline on a laptop, take it to the Superdome and then scale it up to these huge memory problems. >> And get it done fast and you can iterate. >> You know these are the most brilliant scientists in the world, bar none, I made the analogy the other day. >> Oh, thanks. >> You're supposed to say aw, shucks. >> Peter: Aw, shucks. >> Present company excepted. >> Oh yeah, that's right. >> I made the analogy of, imagine I.M. Pei or Frank Lloyd Wright or someone had to be their own general contractor, right? No, they're brilliant at designing architectures and imagining things that no one else could imagine and then they had people to go do that. This allows the people to focus on the brilliance of the science without having to go become the expert programmer, we see that in business too.
Parallel programming techniques are difficult, spoken like an old Tandem guy, parallelism is hard but to the extent that you can free yourself up and focus on the problem and not have to mess around with that, it makes life easier. Some problems parallelize well, but a lot of them don't need to be and you can allow the data to shine, you can allow the science to shine.
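A toy illustration of why the shared-memory model is easier on the programmer (my own sketch, not HPE code): the same reduction written both ways. On a fat node the whole dataset is one array in one address space; in a hand-partitioned world the programmer owns the decomposition and the recombination.

```python
import numpy as np

data = np.random.default_rng(1).normal(size=1_000_000)

# Shared-memory / fat-node style: the dataset sits in one address
# space and the analysis is a single expression.
in_memory_mean = float(data.mean())

# Hand-partitioned style: the same reduction, but the programmer must
# manage how the data is split and how partial results are combined.
chunks = np.array_split(data, 8)
partial = [(c.sum(), c.size) for c in chunks]
chunked_mean = sum(s for s, _ in partial) / sum(n for _, n in partial)

print(round(in_memory_mean, 6), round(chunked_mean, 6))  # same answer
```

For a mean the bookkeeping is trivial; for the irregular, data-dependent access patterns of a relativity code it is not, which is the point being made above.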
>> It's interesting, we hear the same, you talk about it at the outer reaches of the universe, I hear it at the inner reaches of the universe from the life sciences companies, we want to map the genome and we want to understand the interaction of various drug combinations with that genetic structure to say can I tune exactly a vaccine or a drug or something else for that patient's genetic makeup to improve medical outcomes? The same kind of problem, I want to have all this data that I have to run against a complex genome sequence to find the one that gets me to the answer. From the macro to the micro, we hear this problem in all different sorts of languages. >> One of the things we have our clients, mainly in business asking us all the time, is with each, let me step back, as analysts, not the smartest people in the world, as you'll attest I'm sure for real, as analysts, we like to talk about change and we always talked about mainframe being replaced by minicomputer being replaced by this or that. I like to talk in terms of the problems that computing's been able to take on, it's been able to take on increasingly complex, challenging, more difficult problems as a consequence of the advance of technology, very much like you're saying, the advance of technology allows us to focus increasingly on the problem. What kinds of problems do you think physicists are gonna be able to attack in the next five years or so as we think about the combination of increasingly powerful computing and an increasingly simple approach to use it? >> I think the simplification you're indicating here is really going to more memory. 
Holding your whole workload in memory, so that you, one of the biggest bottlenecks we find is ingesting the data and then writing it out, but if you can do everything at once, then that's the key element, so one of the things we've been working on a great deal is in situ visualization for example, so that you see the black holes coming together and you see that you've set the right parameters, they haven't missed each other or something's gone wrong with your simulation, so that you do the post-processing at the same time, you never need the intermediate data products, so larger and larger memory and the computational power that balances with that large memory. It's all very well to get a fat node, but it's no good if you don't have the computational power to use all those terabytes, so that's why this in memory architecture of the Superdome Flex is much more balanced between the two. What are the problems that we're looking forward to in terms of physics? Well, in cosmology we're looking for these hints about the origin of the universe and we've made a lot of progress analyzing the Planck satellite data about the cosmic microwave background. We're honing in on theories of inflation, which is where all the structure in the universe comes from, from Heisenberg's uncertainty principle, rapid period of expansion just like inflation in the financial markets in the very early universe, okay and so we're trying to identify can we distinguish between different types and are they gonna tell us whether the universe comes from a higher dimensional theory, ten dimensions, gets reduced to three plus one or lots of clues like that, we're looking for statistical fingerprints of these different models.
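The in-situ idea described above can be sketched as a streaming reduction: fold each snapshot into the analysis while it is still in memory, so the intermediate data products are never written out. (The simulation below is a hypothetical stand-in; the real relativity codes are rather more involved.)

```python
import numpy as np

def simulate(n_steps, n_points, seed=0):
    """Stand-in for a simulation: yields one field snapshot per
    timestep instead of writing snapshots to disk."""
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        yield rng.normal(0.0, 1.0, n_points)

# In-situ post-processing: consume each snapshot the moment it is
# produced, keep only running statistics, and let the snapshot go.
count, total, total_sq = 0, 0.0, 0.0
for field in simulate(100, 10_000):
    count += field.size
    total += float(field.sum())
    total_sq += float((field ** 2).sum())

mean = total / count
variance = total_sq / count - mean ** 2
print(round(mean, 3), round(variance, 2))
```

Nothing but the running totals survives each step, which is exactly the trade the speaker describes: spend memory and compute together, skip the intermediate I/O.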
In gravitational waves of course, this whole new area, we think of the cosmic microwave background as a photograph of the early universe, well in fact gravitational waves look right back to the earliest moment, fractions of a nanosecond after the big bang and so it may be that the answers, the clues that we're looking for come from gravitational waves and of course there's so much in astrophysics that we'll learn about compact objects, about neutron stars, about the most energetic events there are in the whole universe. >> I never thought about the idea, because cosmic radiation background goes back what, about 300,000 years if that's right. >> Yeah that's right, you're very well informed, 400,000 years because 300 is. >> Not that well informed. >> 370,000. >> I never thought about the idea of gravitational waves as being noise from the big bang and you make sense with that. >> Well with the cosmic microwave background, we're actually looking for a primordial signal from the big bang, from inflation, so it's yeah. Well anyway, what were you gonna say Randy? >> No, I just, it's amazing the frontiers we're heading down, it's kind of an honor to be able to enable some of these things, I've spent 30 years in the technology business and heard customers tell me you transformed my business or you helped me save costs, you helped me enter a new market. Never before in 30 plus years of being in this business have I had somebody tell me the things that you're providing are helping me understand the origins of the universe. It's an honor to be affiliated with you guys. >> Oh no, the honor's mine Randy, you're producing the hardware, the tools that allow us to do this work. >> Well now the honor's ours for coming onto the Cube. >> That's right, how do we learn more about your work and your discoveries and conclusions. >> In terms of looking at. >> Are there popular authors we could read other than Stephen Hawking?
Well, read Stephen's books, they're very good, he's got a new one called A Briefer History of Time so it's more accessible than A Brief History of Time. >> So your website is. >> Yeah our website is ctc.cam.ac.uk, the Centre for Theoretical Cosmology, and we've got some popular pages there, we've got some news stories about the latest things that have happened like the HP partnership that we're developing and some nice videos about the work that we're doing actually, very nice videos of that. >> Certainly, there were several videos run here this week that if people haven't seen them, go out, they're available on Youtube, they're available at your website, they're on Stephen's Facebook page also I think. >> Can you share that website again? >> Well, actually you can get the beautiful videos of Stephen and the rest of his group on the Discover website, is that right? >> I believe so. >> So that's at HP Discover website, but your website is? >> Is ctc.cam.ac.uk and we're just about to upload those videos ourselves. >> Can I make a marketing suggestion. >> Yeah. >> Simplify that. >> Ctc.cam.ac.uk. >> Yeah right, thank you. >> We gotta get the Cube at one of these conferences, one of these physics conferences and talk about gravitational waves. >> Bone up a little bit, you're kind of embarrassing us here, 100,000 years off. >> He's better informed than you are. >> You didn't need to remind me sir. Thanks very much for coming on the Cube, great pleasure having you today. >> Thank you. >> Keep it right there everybody, Mr. Universe and I will be back after this short break. (upbeat techno music)

Published Date : Nov 29 2017


Sharad Singhal, The Machine & Michael Woodacre, HPE | HPE Discover Madrid 2017


 

>> Man: Live from Madrid, Spain, it's the Cube! Covering HPE Discover Madrid, 2017. Brought to you by: Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is The Cube, the leader in live tech coverage. My name is Dave Vellante, I'm here with my co-host, Peter Burris, and this is our second day of coverage of HPE's Madrid Conference, HPE Discover. Sharad Singhal is back, Director of Machine Software and Applications, Hewlett Packard Labs >> Good to be back. And Mike Woodacre is here, a distinguished engineer from Mission Critical Solutions at Hewlett-Packard Enterprise. Gentlemen, welcome to the Cube, welcome back. Good to see you, Mike. >> Good to be here. >> Superdome Flex is all the rage here! (laughs) At this show. You guys are happy about that? You were explaining off-camera that this is the first jointly-engineered product from SGI and HPE, so you hit a milestone. >> Yeah, and I came into Hewlett Packard Enterprise just over a year ago with the SGI acquisition. We're already working on our next generation in memory computing platform. We basically hit the ground running, integrated the engineering teams immediately after we closed the acquisition so we could drive through the finish line and with the product announcement just recently, we're really excited to get that out into the market. Really represent the leading in memory computing system in the industry. >> Sharad, a high performance computer, you've always been big data, needing big memories, lots of performance... How has, or has, the acquisition of SGI shaped your agenda in any way or your thinking, or advanced some of the innovations that you guys are coming up with? >> Actually, it was truly like a meeting of the minds when these guys came into HPE. We had been talking about memory-driven computing, the machine prototype, for the last two years. Some of us were aware of it, but a lot of us were not aware of it. These guys had been working essentially in parallel on similar concepts.
Some of the work we had done, we were thinking in terms of our road maps and they were looking at the same things. Their road maps were looking incredibly similar to what we were talking about. As the engineering teams came about, we brought both the Superdome X technology and the UV300 technology together into this new product that Mike can talk a lot more about. From my side, I was talking about the machine and the machine research project. When I first met Mike and I started talking to him about what they were doing, my immediate reaction was, "Oh wow wait a minute, this is exactly what I need!" I was talking about something where I could take the machine concepts and deliver products to customers in the 2020 time frame. With the help of Mike and his team, we are able to now do essentially something where we can take the benefits we are describing in the machine program and make those ideas available to customers right now. I think to me that was the fun part of this journey here. >> So what are the key problems that your team is attacking with this new offering? >> The primary use case for the Superdome Flex is really high-performance in memory database applications, typically SAP HANA is sort of the industry leading solution in that space right now. One of the key things with the Superdome Flex, you know, Flex is the active word, it's the flexibility. You can start with a small four socket, three terabyte building block, and then you just connect these boxes together. The memory footprint just grows linearly. The latency across our fabric just stays constant as you add these modules together. We can deliver up to 32 processors, 48 terabytes of in-memory data in a single rack. So it's really the flexibility, sort of a pay as you grow model. As their needs grow, they don't have to throw out the infrastructure. They can add to it.
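The pay-as-you-grow arithmetic is linear in the number of four-socket building blocks. Note the per-chassis memory depends on DIMM configuration, which is why the quoted three-terabyte starter block and the 48 TB maximum at 32 sockets imply different chassis configurations; the sketch below keeps it as a parameter.

```python
def flex_capacity(n_chassis, sockets_per_chassis=4, tb_per_chassis=3):
    """Total sockets and memory for n building blocks chained together.
    tb_per_chassis is configuration-dependent: the 48 TB quoted at
    32 sockets implies denser chassis than the 3 TB starter block."""
    return n_chassis * sockets_per_chassis, n_chassis * tb_per_chassis

# Starter block, as quoted:
print(flex_capacity(1))                     # (4, 3)
# Eight chassis, assuming 6 TB each, reach the quoted maximum:
print(flex_capacity(8, tb_per_chassis=6))   # (32, 48)
```

The point of the design is that the left-hand number (sockets) and the right-hand number (terabytes) grow together, with latency across the fabric staying flat.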
>> So when you take a look ultimately at the combination, we talked a little bit about some of the new types of problems that can be addressed, but let's bring it practical to the average enterprise. What can the enterprise do today, as a consequence of this machine, that they couldn't do just a few weeks ago? >> So it sort of builds on the modularity, as Lance explained. If you ask a CEO today, "what's my database requirement going to be in two or three years?" they're like, "I hope my business is successful, I hope I'm gonna grow my needs," but I really don't know where that side is going to grow, so the flexibility to just add modules and scale up the capacity of memory to bring that- so the whole concept of in-memory databases is basically bringing your online transaction processing and your data-analytics processing together. So then you can do this in real time and instead of your data going to a data warehouse and looking at how the business is operating days or weeks or months ago, I can see how it's acting right now with the latest updates of transactions. >> So this is important. You mentioned two different things. Number one is you mentioned you can envision- or three things. You can start using modern technology immediately on an extremely modern platform. Number two, you can grow this and scale this as needs follow, because Hana in memory is not gonna have the same scaling limitations that you know, Oracle on a bunch of spinning discs had. >> Mike: Exactly. >> So, you still have the flexibility to learn and then very importantly, you can start adding new functions, including automation, because now you can put the analytics and the transaction processing together, close that loop so you can bring transactions, analytics, boom, into a piece of automation, and scale that in unprecedented ways. That's kind of three things that the business can now think about. Have I got that right? >> Yeah, that's exactly right. 
It lets people really understand how their business is operating in real time, look for trends, look for new signatures in how the business is operating. They can basically build on their success and basically having this sort of technology gives them a competitive advantage over their competitors so they can out-compute or out-compete and get ahead of the competition. >> But it also presumably leads to new kinds of efficiencies because you can converge, that converge word that we've heard so much. You can not just converge the hardware and converge the system software management, but you can now increasingly converge tasks. Bring those tasks in the system, but also at a business level, down onto the same platform. >> Exactly, and so moving in memory is really about bringing real time to the problem instead of batch mode processing, you bring in the real-time aspect. Humans, we're interactive, we like to ask a question, get an answer, get on to the next question in real time. When processes move from batch mode to real time, you just get a step change in the innovation that can occur. We think with this foundation, we're really enabling the industry to step forward. >> So let's create a practical example here. Let's apply this platform to a sizeable system that's looking at customer behavior patterns. Then let's imagine how we can take the e-commerce system that's actually handling order, bill, fulfillment and all those other things. We can bring those two things together not just in a way that might work, if we have someone online for five minutes, but right now. Is that kind of one of those examples that we're looking at? >> Absolutely, you can basically- you have a history of the customers you're working with. In retail when you go in a store, the store will know your history of transactions with them. They can decide if they want to offer you real time discounts on particular items. 
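The converged transactions-plus-analytics pattern just described can be caricatured in a few lines, with SQLite's in-memory mode standing in for an in-memory database like HANA: the analytical query sees each transaction the instant it lands, with no batch load to a separate warehouse.

```python
import sqlite3

# One in-memory store serves both the transactional writes and the
# analytical reads.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (store TEXT, item TEXT, qty INTEGER)")

# OLTP side: transactions arrive continuously.
db.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("madrid", "ice cream", 3),
    ("madrid", "mittens", 1),
    ("london", "ice cream", 5),
])

# OLAP side: the very next query already reflects those rows.
total = db.execute(
    "SELECT SUM(qty) FROM sales WHERE item = 'ice cream'"
).fetchone()[0]
print(total)  # 8
```

At scale the interesting part is that both workloads share one coherent memory image, so there is no ETL lag between "how the business is operating" and "what the reports say."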
They'll also be taking in other data, weather conditions to drive their business. Suddenly there's going to be a heat wave, I want more ice cream in the store, or it's gonna be freezing next week, I'm gonna order in more coats and mittens for everyone to buy. So taking in lots of transactional data, not just the actual business transaction, but environmental data, you can accelerate your ability to provide consumers with the things they will need. >> Okay, so I remember when you guys launched Apollo. Antonio Neri was running the server division, he might have had networking too. He did a little reveal on the floor. Antonio's actually in the house over there. >> Mike: (laughs) Next door. There was an astronaut at the reveal. We covered it on the Cube. He's always been very focused on this part of the business, the high-performance computing, and obviously the machine has been a huge project. How has the leadership been? We had a lot of skeptics early on that said you were crazy. What was the conversation like with Meg and Antonio? Were they continuously supportive, were they sometimes skeptical too? What was that like? >> So if you think about the total amount of effort we've put in the machine program, and truly speaking, that kind of effort would not be possible if the senior leadership was not behind us inside this company. Right? A lot of us in HP labs were working on it. It was not just a labs project, it was a project where our business partners were working on it. We brought together engineering teams from the business groups who understood how projects were put together. We had software people working with us who were working inside the business, we had researchers from labs working, we had supply chain partners working with us inside this project. A project of this scale and scope does not succeed if it's a handful of researchers doing this work. We had enormous support from the business side and from our leadership team.
I give enormous thanks to our leadership team for allowing us to do this, because it's an industry thing, not just an HP Enterprise thing. At the same time, with this kind of investment, there's clearly an expectation that we will make it real. It's taken us three years to go from, "here is a vague idea from a group of crazy people in labs," to something which actually works and is real. Frankly, the conversation in the last six months has been, "okay, so how do we actually take it to customers?" That's where the partnership with Mike and his team has become so valuable. At this point in time, we have a shared vision of where we need to take the thing. We have something where we can on-board customers right now. We have something where, frankly, even I'm working on the examples we were talking about earlier today. Not everybody can afford a 16-socket, giant machine. The Superdome Flex allows my customer, or anybody who is playing with an application to start small, something that is reasonably affordable, try that application out. If that application is working, they have the ability to scale up. This is what makes the Superdome Flex such a nice environment to work in for the types of applications I'm worrying about because it takes something which when we had started this program, people would ask us, "when will the machine product be?" From day one, we said, "the machine product will be something that might become available to you in some form or another by the end of the decade." Well, suddenly with Mike, I think I can make it happen right now. It's not quite the end of the decade yet, right? So I think that's what excited me about this partnership we have with the Superdome Flex team. The fact that they had the same vision and the same aspirations that we do. It's a platform that allows my current customers with their current applications like Mike described within the context of say, SAP HANA, a scalable platform, they can operate it now.
It's also something that allows them to evolve towards the future and start putting new applications that they haven't even thought about today. Those were the kinds of applications we were talking about. It makes it possible for them to move into this journey today. >> So what is the availability of Superdome Flex? Can I buy it today? >> Mike: You can buy it today. Actually, I had the pleasure of installing the first early-access system in the UK last week. We've been delivering large memory platforms to Stephen Hawking's team at Cambridge University for the last twenty years because they really like the in-memory capability to allow them, as they say, to be scientists, not computer scientists, in working through their algorithms and data. Yeah, it's ready for sale today. >> What's going on with Hawking's team? I don't know if this is fake news or not, but I saw something come across that said he says the world's gonna blow up in 600 years. (laughter) I was like, uh-oh, what's Hawking got going now? (laughs) That's gotta be fun working with those guys. >> Yeah, I know, it's been fun working with that team. Actually, what I would say following up on Sharad's comment, it's been really fun this last year, because I've sort of been following the machine from outside when the announcements were made a couple of years ago. Immediately when the acquisition closed, I was like, "tell me about the software you've been developing, tell me about the photonics and all these technologies," because boy, I can now accelerate where I want to go with the technology we've been developing. Superdome Flex is really the first step on the path. It's a better product than either company could have delivered on their own. Now over time, we can integrate other learnings and technologies from the machine research program. It's a really exciting time. >> Excellent. Gentlemen, I always loved the SGI acquisition. Thought it made a lot of sense.
Great brand, kind of put SGI back on the map in a lot of ways. Gentlemen, thanks very much for coming on the Cube. >> Thank you again. >> We appreciate you. >> Mike: Thank you. >> Thanks for coming on. Alright everybody, we'll be back with our next guest right after this short break. This is the Cube, live from HPE Discover Madrid. Be right back. (energetic synth)

Published Date : Nov 29 2017

Sharad Singhal, The Machine & Matthias Becker, University of Bonn | HPE Discover Madrid 2017


 

>> Announcer: Live from Madrid, Spain, it's theCUBE, covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is theCUBE, the leader in live tech coverage, and my name is Dave Vellante, and I'm here with Peter Burris. This is day two of Hewlett Packard Enterprise Discover in Madrid, their European version of a show that we also cover in Las Vegas, a kind of six-month cadence of innovation and organizational evolution of HPE that we've been tracking now for several years. Sharad Singhal is here, he covers software architecture for The Machine at Hewlett Packard Enterprise, and Matthias Becker, who's a postdoctoral researcher at the University of Bonn. Gentlemen, thanks so much for coming on theCUBE. >> Thank you. >> No problem. >> You know, we talk a lot on theCUBE about how technology helps people make money or save money, but now we're talking about, you know, something even more important, right? We're talking about lives and the human condition and >> Peter: Hard problems to solve. >> Specifically, yeah, hard problems like Alzheimer's. So Sharad, why don't we start with you, maybe talk a little bit about what this initiative is all about, what the partnership is all about, what you guys are doing. >> So we started on a project called the Machine Project about three, three and a half years ago, and frankly, at that time, the response we got from a lot of my colleagues in the IT industry was "You guys are crazy", (Dave laughs) right.
We said we are looking at an enormous amount of data coming at us, we are looking at real-time requirements on larger and larger processing coming up in front of us, and there is no way that the current architectures of the computing environments we create today are going to keep up with this huge flood of data. We have to rethink how we do computing, and the real question for those of us who are in research in Hewlett Packard Labs was, if we were to design a computer today, knowing what we do today, as opposed to what we knew 50 years ago, how would we design the computer? And this computer should not be something which solves problems for the past, this should be a computer which deals with problems in the future. So we are looking for something which would take us through the next 50 years, in terms of computing architectures and what we will do there. In the last three years we have gone from ideas and paper studies, paper designs, and things which were made out of plastic, to a real working system. Around Las Vegas time, we basically announced that we had the entire system working with actual applications running on it: 160 terabytes of memory, all addressable from any processing core in 40 computing nodes around it. And although we call it memory-driven computing, it's really thinking in terms of data-driven computing. The reason is that the data is now at the center of this computing architecture, as opposed to the processor, and any processor can refer to any part of the data directly, as if it were addressing local memory. This provides us with a degree of flexibility and freedom in compute that we never had before, and as a software person, I work in software, as a software person, when we started looking at this architecture, our answer was, well, we didn't know we could do this.
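The addressing model Sharad describes, any core referring to one shared pool of data in place rather than shuffling copies through a storage stack, can be loosely illustrated on a single machine with Python's standard shared-memory facility. This is only a toy analogy under stated assumptions (one node, a named shared block, invented function names); the fabric-attached memory he describes spans 40 nodes, which no single-machine sketch can show.

```python
from multiprocessing import shared_memory

import numpy as np

def make_pool(n):
    """Create one shared block of n float64s that any worker can address."""
    shm = shared_memory.SharedMemory(create=True, size=n * 8)
    arr = np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
    arr[:] = np.arange(n, dtype=np.float64)  # fill with sample data
    return shm, arr

def attach_view(name, n):
    """A second 'core' attaches to the same bytes by name; no copy is made."""
    shm = shared_memory.SharedMemory(name=name)
    return shm, np.ndarray((n,), dtype=np.float64, buffer=shm.buf)
```

A write through one view is immediately visible through every other view, which is the property that lets algorithms drop the serialize-to-storage round trip.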
Now, given that I can do this, all of us programmers started thinking differently, writing code differently, and we suddenly had essentially a toy to play with, if you will, as programmers, where we said, you know, this algorithm I had written off decades ago because it didn't work, but now I have enough memory that if I were to think about this algorithm today, I would do it differently. And all of a sudden, a new set of algorithms, a new set of programming possibilities opened up. We worked with a number of applications, ranging from just Spark on this kind of an environment, to how you do large-scale simulations, Monte Carlo simulations. And people talk about improvements in performance on the order of, oh, I can get you a 30% improvement. In the example applications, we saw anywhere from five, 10, 15 times better, to financial analysis and risk-management problems, which we can do 10,000 times faster. >> So many orders of magnitude. >> Many, many orders. >> When you don't have to wait for the horrible storage stack. (laughs) >> That's right, right. And these kinds of results gave us the hope that, as we look forward, these new computing architectures that we are thinking through right now will take us through this data mountain, this data tsunami that we are all facing, in terms of bringing all of the data back and essentially doing real-time work on it. >> Matthias, maybe you could describe the work that you're doing at the University of Bonn, specifically as it relates to Alzheimer's, and how this technology gives you possible hope to solve some problems. >> So at the University of Bonn, we work very closely with the German Center for Neurodegenerative Diseases, and in their mission they are facing diseases like Alzheimer's, Parkinson's, Multiple Sclerosis, and so on.
And in particular, Alzheimer's is a really serious disease, and for many diseases like cancer, for example, the mortality rates improve, but for Alzheimer's, there's no improvement in sight. So there's a large population that is affected by it. There is really not much we currently can do, so the DZNE is focusing its research efforts, together with the German government, in this direction, and one thing about Alzheimer's is that if you show the first symptoms, the disease has already been present for at least a decade. So if you really want to identify sources or biomarkers that will point you in this direction, once you see the first symptoms, it's already too late. So at the DZNE they have started a cohort study. In the area around Bonn, they are now collecting data from 30,000 volunteers. They are planning to follow them for 30 years, and in this process we generate a lot of data. Of course we do the usual surveys to learn a bit about them, we learn about their environments, but we also do much more detailed analysis: we take blood samples and we analyze the complete genome, and we also acquire imaging data from the brain, so we do an MRI at an extremely high resolution with some very advanced machines we have. And all this data is accumulated, because we do not only have to do this once, but we try to do it repeatedly for every one of the participants in the study, so that we can later analyze the time series. When, in 10 years, someone develops Alzheimer's, we can go back through the data and see, maybe there's something interesting in there, maybe there was one biomarker that we were looking for, so that we can predict the disease better in advance. And with this pile of data that we are collecting, we basically need something new to analyze this data and to deal with it, and when we heard about the machine, we thought immediately, this is a system that we would need. >> Let me see if I can put this in a little bit of context.
So Dave lives in Massachusetts, I used to live there, in Framingham, Massachusetts, >> Dave: I was actually born in Framingham. >> You were born in Framingham. And one of the more famous studies is the Framingham Heart Study, which tracked people over many years and discovered things about heart disease and the relationship between smoking and cancer, and other really interesting problems. But they used a paper-based study with an interview base, so for each of those kind of people, they might have collected, you know, maybe a megabyte, maybe a megabyte and a half of data. You just described a couple of gigabytes of data per person, 30,000 people, multiple years. So we're talking about being able to find patterns in data about individuals that would number in the petabytes over a period of time. Very rich detail that's possible, but if you don't have something that can help you do it, you've just collected a bunch of data that's just sitting there. So is that basically what you're trying to do with the machine, the ability to capture all this data, to then do something with it, so you can generate those important inferences? >> Exactly, so with all these large amounts of data, we do not only compare the data sets for a single person, but once we find something interesting, we also have to compare across the whole population that we have captured. So there's really a lot of things we have to parse and compare. >> This brings together the idea that it's not just the volume of data. I also have to do analytics across all of that data together, right? So every time a scientist, one of the people who is doing biology studies or informatics studies, asks a question, and they say, I have a hypothesis that this might be a reason for this particular evolution of the disease, or occurrence of the disease, they then want to go through all of that data, and analyze it as they are asking the question.
Now if the amount of compute it takes to actually answer their questions takes me three days, I have lost my train of thought. But if I can get that answer in real time, then I get into this flow where I'm asking a question, seeing the answer, making a different hypothesis, seeing a different answer, and this is what my colleagues here were looking for. >> But if I think about, again, going back to the Framingham Heart Study, you know, I might do a query on a couple of related questions, and use a small amount of data. The technology to do that's been around, but when we start looking for patterns across brain scans with time series, we're not talking about a small problem, we're talking about an enormous sum of data that can be looked at in a lot of different ways. I got one other question for you related to this, because I gotta presume that there's the quid pro quo for getting people into the study, is that, you know, 30,000 people, is that you'll be able to help them and provide prescriptive advice about how to improve their health as you discover more about what's going on, have I got that right? >> So, we're trying to do that, but also there are limits to this, of course. >> Of course. >> For us it's basically collecting the data and people are really willing to donate everything they can from their health data to allow these large studies. >> To help future generations. >> So that's not necessarily quid pro quo. >> Okay, there isn't, okay. But still, the knowledge is enough for them. >> Yeah, their incentive is they're gonna help people who have this disease down the road. >> I mean if it is not me, if it helps society in general, people are willing to do a lot. >> Yeah of course. >> Oh sure. >> Now the machine is not a product yet that's shipping, right, so how do you get access to it, or is this sort of futures, or... >> When we started talking to one another about this, we actually did not have the prototype with us. 
But remember that when we started down this journey for the machine three years ago, we knew back then that we would have hardware somewhere in the future, but as part of my responsibility, I had to deal with the fact that software has to be ready for this hardware. It does me no good to build hardware when there is no software to run on it. So we have actually been working on the software stack, how to think about applications on that software stack, using emulation and simulation environments, where we have some simulators, essentially an instruction-level simulator for what the machine, or what that prototype, would have done, and we were running code on top of those simulators. We also had performance simulators, where we'd say, if we write the application this way, this is how much we think we would gain in terms of performance, and all of those applications, all of that code we were writing, was actually on our large-memory machines, Superdome X to be precise. So by the time we started talking to them, we had these emulation environments available, we had experience using these emulation environments on our Superdome X platform. So when they came to us and started working with us, we took the software that they brought to us, and started working within those emulation environments to see how fast we could make those problems, even within those emulation environments. So that's how we started down this track, and most of the results we have shown in the study are all measured results that we are quoting inside this forum on the Superdome X platform. So even in that emulated environment, which is emulating the machine, of course, in the emulation on Superdome X, for example, I can only hold 24 terabytes of data in memory. I say only 24 terabytes >> Only! because I'm looking at much larger systems, but an enormously large number of workloads fit very comfortably inside the 24 terabytes.
And for those particular workloads, the programming techniques we are developing work at that scale, right, they won't scale beyond the 24 terabytes, but they'll certainly work at that scale. So between us we then started looking for problems, and I'll let Matthias comment on the problems that they brought to us, and then we can talk about how we actually solved those problems. >> So we work a lot with genomics data, and usually what we do is we have a pipeline, so we connect multiple tools, and we thought, okay, this architecture sounds really interesting to us, but if we want to get started with this, we should pose them a challenge, to see if they could convince us. We went through the literature and took a tool that was advertised as the new optimal solution. Prior work was taking up to six days for processing, and they were able to cut it to 22 minutes, so we thought, okay, this is a perfect challenge for our collaboration. We went ahead and took this tool, we put it on the Superdome X that was already running, and it took five minutes instead of 22, and then we started modifying the code, and in the end we were able to shrink the time down to just 30 seconds, so that's two orders of magnitude faster. >> We took something which was... They were able to run it in 22 minutes, and that had already been optimized by people in the field to say "I want this answer fast", and then when we moved it to our Superdome X platform, the platform is extremely capable. Hardware-wise it compares really well to other platforms which are out there. That time came down to five minutes, but that was just the beginning. And then, as we modified the software based on the emulation results we were seeing underneath, we brought that time down to 13 seconds, which is a hundred times faster. We started this work with them in December of last year. It takes time to set up all of this environment, so the serious coding started around March.
By June we had a 9X improvement, which is already about a factor of 10, and since June up to now, we have gotten another factor of 10 on that application. So I'm now at 100X faster than what the application was able to do before. >> Dave: Two orders of magnitude in a year? >> Sharad: In a year. >> Okay, we're out of time, but where do you see this going? What is the ultimate outcome that you're hoping for? >> For us, we're really aiming to analyze our data in real time. Oftentimes, when we have biological questions that we address, we analyze our data set, and then in a discussion a new question comes up, and we have to say, "Sorry, we have to process the data, come back in a week", and our idea is to be able to generate these answers instantaneously from our data. >> And those answers will lead to what? Just better care for individuals with Alzheimer's, or potentially, as you said, making Alzheimer's a memory? >> So the idea is to identify Alzheimer's long before the first symptoms are shown, because then you can start an effective treatment, and you can have the biggest impact. Once the first symptoms are present, it's not getting any better. >> Well, thank you for your great work, gentlemen, and best of luck on behalf of society. >> Thank you very much. >> Really appreciate you coming on theCUBE and sharing your story. >> You're welcome. >> All right, keep it right there, buddy. Peter and I will be back with our next guest right after this short break. This is theCUBE, you're watching live from Madrid, HPE Discover 2017. We'll be right back.
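The restructuring Matthias describes, replacing file-based hand-offs between pipeline tools with in-memory hand-offs, can be sketched in miniature. The three stage functions below are invented stand-ins, not the actual genomics tools; the point is only that the in-memory variant skips every intermediate serialize-and-reparse cycle.

```python
import json
import os
import tempfile

# Hypothetical three-stage pipeline. Real genomics tools exchange far
# larger intermediates, which is where the serialization cost dominates.
def stage_parse(raw):
    return [int(x) for x in raw.split(",")]

def stage_filter(reads):
    return [r for r in reads if r > 10]

def stage_count(reads):
    return {"n": len(reads), "total": sum(reads)}

def pipeline_via_files(raw):
    # Conventional chaining: each tool writes a file the next tool re-reads.
    with tempfile.TemporaryDirectory() as d:
        parsed_path = os.path.join(d, "parsed.json")
        with open(parsed_path, "w") as f:
            json.dump(stage_parse(raw), f)
        with open(parsed_path) as f:
            filtered = stage_filter(json.load(f))
        filtered_path = os.path.join(d, "filtered.json")
        with open(filtered_path, "w") as f:
            json.dump(filtered, f)
        with open(filtered_path) as f:
            return stage_count(json.load(f))

def pipeline_in_memory(raw):
    # Large-memory chaining: intermediates stay resident between stages.
    return stage_count(stage_filter(stage_parse(raw)))
```

Both variants compute the same answer; only the hand-off mechanism differs.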

Published Date : Nov 29 2017


Alain Andreoli, HPE | HPE Discover Madrid 2017


 

>> Announcer: Live from Madrid, Spain. It's the Cube. Covering HPE Discover Madrid 2017, brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid everybody. This is the Cube, the leader in live tech coverage, and this is day two of HPE Discover 2017. I'm Dave Vellante with my co-host, Peter Burris, and Alain Andreoli is here. He's the Senior Vice President and general manager of the hybrid IT group at HPE. Great to see you again. >> Great to see you David, great to see you Peter. >> So, a lot of good energy here, the story, Alain, is coming together. >> Alain: Yes. >> We've seen it over the last five years, but you've really fine-tuned the organization, and it seems like things are going well. >> We have more clarity on our strategy than I've ever seen in a company, and this was not easy to do because the market is changing so fast. We are addressing a $120 billion market in hybrid IT, we lead the market in compute, we lead the market in storage, we lead the market with private cloud, we have invented composable, we are ramping up our hyperconverged offering, and now, on top of the infrastructure, we are building these layers of OneSphere, which is managing a multi-cloud environment for the data, and we are adjusting our services to become advisory and consumption models. This is having such an impact on our customers; 74 percent of our customers are going on a hybrid IT journey. So we have organized ourselves to make this journey, to be basically the partner of choice for our customers as they go through that. >> I mean, over the last five, seven years, cloud and open-source software have really disrupted our industry. You've had to respond to that, basically bringing cloud-like operating models to your customers. >> Alain: Yes. >> How have you done that, how do you rate your progress, and where are you to date in that regard? >> So the first decision we had to make is, are we a neutral party to our customers? (laughing) >> Dave: Yeah. >> We need to redo it.
(laughing) >> They're getting you back, right? So, I don't know if you can see that, alright? Alain came by on his scooter, here we go, let's catch this. Here we go, this is called payback. (laughing) During Dr. Tom's interview, Alain came by with his scooter. (laughing) >> I will get you, I will get you for this. (laughing) >> It's great fun on the Cube. >> We can kid, that's alright. >> That's good. >> So the decision we had to make is, are we the partner for our customers to go to the cloud, or are we saying on-prem is better? >> Dave: Yeah. >> And we've decided to be this partner. Because we believe there is value for everyone, and we believe it is not a one-way street. We see, actually, that 32 percent of the customers who have moved workloads to the cloud are bringing these workloads back on-prem. So we had to advise them. We helped them go through this journey, we really mean it, we helped them to go on Amazon, we helped them to go on Azure, we helped them to go on Google, and we helped them make it work, and this is why it's a service-led journey. The problem if you go on the public cloud is that you don't really know how much it is going to cost you, and you don't really have a single pane of glass to have all your data being managed across what is now an ecosystem. We enabled them to do that. And the market we are directly addressing on-prem is not shrinking. We still see huge pockets of growth, in flash storage, in HPC, you've seen the results we have in HPC. In mission-critical x86, in hyperconverged, so we are basically moving from the one-size-fits-all type of organization, of generic x86 and standard storage, to become a company that offers value to customers in specialized pools of compute, of storage, of networking, and offering them the end-to-end journey across the different stack.
What I think is going to make a huge difference, if you look at the five-year horizon, is the growth of The Edge and the fact that 70 percent of the data are going to come from The Edge, and then you will really see the power of our strategy of hybrid IT, which goes from The Edge, to the core, to the cloud, because we will be able to enable our customers to have their data moving seamlessly across this journey. And we have organized the company exactly that way. >> One of the obvious use cases for what I like to call machine intelligence or artificial intelligence is really infusing artificial intelligence into infrastructure for predictive analytics and predictive maintenance, IT operations management. InfoSight, you got through the acquisition of Nimble, and I've been impressed with the pace at which you've pushed that throughout the portfolio. I wondered if you could address that. >> We've been almost surprised. We looked at, we wanted to become the flash company, because we saw that the market, over three years, would completely move to flash. And when there is such a pendulum shift, you want to be at the forefront. >> Dave: Right. >> So we looked at all these companies who had very strong positions in flash, and Nimble intrigued us because they had, by far, when we talked to their customers, the highest customer satisfaction, I think it was something like 87 percent. >> The NPS is off the charts. >> The NPS is off the charts, right? And then we peeled the onion and we saw InfoSight, which almost threw us off at the start, because it was not part of our list, right? Initially, of our list of this is how we are gonna select a company we want to acquire. And when we got into InfoSight, how it works, how easily we can actually port it to 3PAR, and then to SimpliVity, and then to the rest of the portfolio, we felt this is the crown jewel that is going to be the foundation of us making >> Dave: And not just the storage portfolio.
>> No, end to end, so we're gonna do this for everything, now, we cannot do it in one day. The priority was to give a seamless experience to customers running 3PAR or Nimble, so we've done that very quickly. We acquired the company six months ago and it's already there for 3PAR. The next one will be SimpliVity, very soon, in a few weeks, then we go to the whole compute platform as well, then finally to networking. I hope, it's not a commitment, but I hope that by the end of next year, in under a year, we will be done for the whole infrastructure portfolio. >> And explain the benefit to customers. >> And then the benefit is that you basically eliminate the need for level one and level two support, because it's proactive. Now, you have to be willing to have your device calling home, right? Because otherwise, if you want your device to be in the data center and insulated from communicating over the network, that is not going to work. But assuming you want your device to be connected centrally, so that it can be monitored centrally, the artificial intelligence that is embedded in InfoSight is basically going to monitor the behavior of your device compared with hundreds of thousands of other ones, and therefore anything that is deviant will be flagged as a potential problem and resolved before you even know about it. That's one. So when you eventually end up having a problem, which is becoming very, very rare, then you directly call the level three engineer, who is an expert and who has, on the screen, the behavior of your device for the last month compared to others, and the resolution is in less than a minute. So it's a revolution in the way to do service. >> So, one of the things that we've observed as we've talked to customers is that the characteristics of the problems that they're now trying to solve have real-world elements, and that's really what The Edge is about in many respects.
For the first 50 years of IT, we were doing accounting, and HR, and supply chain, and we were able to define what the data models looked like, so we could say, the data's going to be here, the processing is going to be here, we could build data centers. Now, as you said, 70 percent of the data is going to be coming from The Edge. It's not clear, necessarily, where the best place to process that data is. Where's the compute going to be? How's it going to integrate with people? In many respects, hybrid IT is about diminishing the degree to which infrastructure dictates the way the problem gets solved. Would you agree with that? It's kind of like, let the data reside where it needs to reside, and make sure that the business has a natural infrastructure that reflects and corresponds to the work that needs to get done. >> I totally agree with your problem statement, and the way you position the question. In terms of semantics, I would just say we need to make infrastructure invisible. It's still there, because it's all running on infrastructure. The iPhone is infrastructure, your PC is infrastructure, your camera is infrastructure, it's all there. >> A C.I.O. said to me not too long ago... >> But you know what? We are having this interview, we are not thinking about what makes it happen. >> Peter: Right, right, right. >> Our business is to talk and communicate right now, this all has got to be seamless, and that's how we need to make IT, seamless. >> I had a conversation with a C.I.O. >> Invisible. >> Yeah, who said that the value of my infrastructure is inversely proportional to the degree to which anybody knows anything about it. So, is that kind of what the HPE promise is, that we're gonna let the data and the workloads define where the infrastructure goes and ensure we have those options? >> It's exactly right, and the vehicle to do that, we call it autonomous data centers. Your phone is a data center. Your data center is a data center.
Your off-frame cloud is a data center that you are subcontracting, right? So we want all of these to be autonomous, in terms of self-healing and everything else, and then the intelligence of where these data are being moved, and how you use what and when, is the single pane of glass that we are developing around OneSphere. And how to get the customers to move their workloads and their business around that is what we do with Pointnext, with services. This is our strategy. >> So let me break that down a little bit. So, we've got devices that are powerful enough that we could put new types of control, new types of workloads there if we wanted to, we've got now the ability to package infrastructure, and have a single pane of glass, and have a common management framework. >> Right. >> But when you say the autonomous data center, we have a common business approach thinking about policy, thinking about value, thinking about how we're gonna do things, and we can put that into this entire vision, and let it actually execute how that manifests itself from a business standpoint. >> Exactly right. >> Have I got that right? >> It's exactly right. I love the way you put it. That's exactly what we are trying to do. It's not going to be done in one day, but that is our strategy, and we have organized, once again, the whole company around it, to execute this strategy and to make it happen for our customers. >> So if we think about what an HPE customer is gonna look like in, you know, a really good HPE customer in 2023, what... >> Alain: That's a long time. >> That's five years, but I'm giving you that much runway, because you're right, it's not there yet, and if it's too ambitious then so be it. But how is a business person going to think differently about working, about the role that IT is going to play in the business, and what it means to have a great partnership with a company like HP?
>> Yeah, so basically our motto is "one size doesn't fit all," so we are first trying to understand the business of the customer, and then we will apply solutions to enhance this business, or to empower this business, right? So, we have the biggest breadth of infrastructure that you can think of. Think about this infrastructure becoming self-healing, but this infrastructure is more and more specialized: there is HPC, there is Mission-Critical, we just launched Superdome Flex for SAP, we have all these specializations that allow those customers to optimize their business outcome. Then we have the single pane of glass that allows everything to seamlessly operate the data around, and then our Pointnext services are going to work with the customers to architect their IT model in a way that their workloads are optimized. And one of the keys is the right mix. The right mix of what you do yourself, what you get from multi-cloud, how much you pay for it, how much you anticipate that you're gonna pay for it, do you want this to be CAPEX, do you want this to be OPEX? And then how do you manage The Edge, with Aruba and with Edgeline, and then with all your IT platforms that can manage the data across The Edge. We have the capability to also let the customer decide: do I want a lot of analytics and decisions to be made at The Edge, in my devices, and this is highly valuable depending on what customer business model we are talking about, or do I want all the data from the analog world, through the sensors, to come straight back to the ranch? All of these decisions, we are gonna have platforms to allow customers to make these decisions, to decide, kind of templates if you want, this is how I want it to run and to be executed, and then to be automatically, autonomously operated. That's our vision of how we can help our customers moving forward.
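The edge-versus-central choice described above, analyze at the edge or ship every raw reading "back to the ranch", can be illustrated with a toy sketch. This is not an HPE product API; the function names and the aggregate fields are hypothetical, chosen only to show the bandwidth trade-off.

```python
# Illustrative sketch only -- hypothetical names, not a product API.
# Two policies for a window of sensor readings: forward everything raw,
# or do the analytics at the edge and forward a compact summary.

def ship_raw(readings):
    """Central-processing model: forward every reading unchanged."""
    return list(readings)

def ship_aggregate(readings):
    """Edge-analytics model: forward only a summary of the window."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

window = [20.1, 20.3, 19.9, 20.0, 35.7]  # e.g. temperature samples
print(len(ship_raw(window)))   # 5 payload items over the wire
print(ship_aggregate(window))  # one small record instead
```

Which policy wins depends on exactly the business questions raised above: how much the raw detail is worth centrally versus what it costs to move it.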
>> Last question, so the attendees of Discover, your customers, when they go back and talk to their boss, what do you want them to say about Discover 2018? >> I invested two or three days of my time to come to HPE Discover. It was really exciting because I felt that it's like a new company, it's the company I know. I know they are customer first and customer last, and they are the ones who help me when I have a problem, whether they created it or not, they are here to help me. This is not going away, but they are taking us to the new world. They are gonna help us to build our hybrid IT model, and I think we need to trust them to have a seat at the table when we make these decisions, boss. >> Intimacy, innovation... >> Alain: Yeah, innovation. >> Trust. >> HPE's no longer wandering in the desert. (laughing) >> Alain Andreoli, thanks so much for coming on the Cube, it is always a pleasure. >> It was a pleasure. Take care, thanks Peter. >> Keep it right there, everybody, Peter and I will be back with our next guest, right after this short break, we're live from Madrid. You're watching the Cube. (techno music)

Published Date : Nov 29 2017

Randy Meyer & Alexander Zhuk | HPE Discover 2017 Madrid


 

>> Announcer: Live from Madrid, Spain. It's the Cube. Covering HPE Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise. >> Good afternoon from Madrid everybody. Good morning on the East Coast. Good really early morning on the West Coast. This is the Cube, the leader in live tech coverage. We're here day one at HPE Discover Madrid 2017. My name is Dave Vellante, I'm here with my cohost Peter Burris. Randy Meyer is here, the Vice President and General Manager of the Mission Critical business unit at Hewlett Packard Enterprise. And he's joined by Alexander Zhuk, who is the SAP practice lead at Eldorado. Welcome to the Cube, thanks for coming on. >> Thanks for having us. >> Thank you. >> Randy, we were just reminiscing about the number of times you've been on the Cube, consecutive years, it's like the Patriots winning the AFC East, it just keeps happening. >> Or Cal Ripken would probably be you. >> Me and Tom Brady. >> You're the Cal Ripken of the Cube. So give us the update, what's happening in the Mission Critical business unit, what's going on here at Discover. >> Well, actually just lots of exciting things going on, in fact we just finished the main general session keynote. And that was the coming out party for our new Superdome Flex product. So, we've been in the Mission Critical space for quite some time now. Driving the HANA business, we've got 2500 customers around the world, small, large. And with our acquisition last year of SGI, we got this fabulous technology, that not only scales up to the biggest and most baddest thing that you can imagine, to the point where we're talking about Stephen Hawking using that to explore the universe. But it scales down, four sockets, one terabyte, for lots of customers doing various things.
So I look at that part of the Mission Critical business, and it's just so exciting to take technology, and watch it scale both directions, to the biggest problems that are out there, whether they are commercial and enterprise, and Alexander will talk about lots of things we're doing in that space, or even high performance computing now, so we've kind of expanded into that arena. So, that's really the big news, Superdome Flex coming out, and really expanding that customer base. >> Yeah, Superdome Flex, any memory in that baby? (laughing) >> 32 sockets, 48 terabytes if you want to go that big, and it will get bigger and bigger and bigger over time as we get more density that's there. And we really do have customers in the commercial space using that. I've got customers that are building massive ERP systems, massive data warehouses, to address that kind of memory. >> Alright, let's hear from the customer. Alexander, first of all, tell us about your role, and tell us about Eldorado. >> I'm responsible for SAP Basis and infrastructure. I'm working at Eldorado, which is one of the largest consumer electronics networks in Russia. We have more than 600 shops all over the country, in more than 200 cities and towns, and have more than 16,000 employees. We have more than 50,000 stock keeping units, and we are processing over three and a half million orders, primarily through our internet channel. >> You're the SAP practice lead, so obviously this is a HANA story. Can you take us through your HANA journey? Maybe give us the before, during and after. Leading up to the decision to move to HANA, what was life like, and why HANA? >> We first moved our business warehouse system to HANA back in 2011. At that time we got strong business requirements to have quick reporting. Retail business, it's a business which needs very rapid decision making. So after we moved to HANA, we got a speed increase in reporting of 15 times.
We got stock replenishment reports nine times faster. We got 50 minute sales reports every hour, instead of two hours. May I repeat this? >> No, it makes sense. So, the move to HANA was really precipitated by a need to get more data faster, so in-memory allows you to do that. What about the infrastructure platform underneath, was it always HP? At the time, that was 2011. What's HP's role, HPE's role in that, HANA? >> Initially we were running our business systems in Germany, primarily on IBM solutions. But then, according to legal requirements, we had to move to Russia. And here we chose HP solutions as the main platform for our HANA database and traditional databases. >> Okay, data residency forced you to move this whole solution back to Russia. If I may, Dave, one of the things that we're talking about, and I want to test this with you, Alexander, is businesses not only have to be able to scale, but we talk about plastic infrastructure, where they have to be able to change their workloads. They have to be able to go up and down, but they also have to be able to add quickly. As you went through the migration process, how were you able to use the technology to introduce new capabilities into the systems to help your business to grow even faster? >> At that time, before migration, we had strong business requirements for our business growth, and had some forecasts of how HANA would grow. So we represented to our possible partners our needs. For example, our main requirement was the possibility to scale our CRM system up to nine terabytes of memory. So, at that time, there was only HP who could provide that kind of solution. >> So, you migrated from a traditional RDBMS environment, your data warehouse previously was a traditional database, is that right? And then you moved to HANA? >> Not all systems, but the most critical, the most speed-critical systems: our business warehouse and our CRM system. >> How hard was that?
So, the EDW and the CRM, how difficult was that migration, did you have to freeze code, was it a painful migration? >> Yes, from the application point of view it was very painful, because we had to change everything. Some of our reports had to be completely changed, reviewed, we had to adapt some ABAP code for the new database. Also, we got some HANA-level troubles, because it was very early. >> Early days of HANA, I think it was announced in 2011. Maybe 2012... (laughing) >> That's one of the things for most customers that we talk to, it's a journey. You're moving from a tried and true environment that you've run for years, but you want the benefits of in-memory: speed, massive data that you can use to change your business. But you have to plan that. It was a great point. You have to plan that it's gonna scale up, some things might have to scale out, and at the same time you have to think about the application migration, the data migration, the data residency rules, different countries have different rules on what has to be there. And I think that's one of the things we try to take into account as HPE when we're designing systems. I want to let you partition them. I want to let you scale them up or down depending on the work load that's there. Because you don't just have one, you have BW and CRM, you have development environments, test environments, staging environments. The more we can help that look similar, and give you flexibility, the easier that is for customers. And then I think it's incumbent on us also to make sure we support our customers with knowledge, service, expertise, because it really is a journey, but you're right, 2011 it was the Wild West. >> So, give us the HPE HANA commercial. Everybody always tells us, we're great at HANA, we're best at HANA. What makes HPE best at HANA, different with HANA?
>> What makes us best at HANA, one, we're all in on this, we have a partnership with SAP, we're designing for the large scale, as you said, that nobody else is building up into this space. Lots of people are building one terabyte things, okay. But when you really want to get real, when you want to get to 12 terabytes, when you want to get to 24, to 48. We're not only building systems capable of that, we're doing co-engineering and co-innovation work with SAP to make that work, to test that. I put systems on site in Waldorf, Germany, to allow them to go do that. We'll go diagnose software issues in the HANA code jointly, and say, here's where you're stressing that, and how we can go leverage that. You couple that with our services capability, and our move towards letting you consume HANA in a lot of different ways. There will be some of it that you want on premise, in house, there will be some things that you say, that part of it might want to be in the Cloud. Yes, my answer to all of those things is yes. How do I make it easy to fit your business model, your business requirements, and the way you want to consume things economically? How do I allow you to say yes to that? 2500 customers, more than half of the installed base of all HANA systems worldwide, reside on Hewlett Packard Enterprise. I think we're doing a pretty good job of enabling customers to say, that's a real choice that we can go forward with, not just today, but tomorrow. >> Alexander, are you doing things in the Cloud? I'm sure you are, what are you doing in the Cloud? Are you doing HANA in the Cloud? >> We have not a traditional cloud, so to say; we have a private cloud. Due to some circumstances, we got all the hardware into our property. Now, it's operated by our partner. Between the two companies, they are responsible for all those layers, from the hardware layer, service contracts, hardware maintenance, to the basic operating systems support, SAP support.
>> So, if you had to do it all over again, what might you do differently? What advice would you give to other customers going down this journey? >> My advice is, at first, choose the right team and the right service provider. Because when you go to a solution, some technical overview, architectural overview, you should get confirmation from the vendor. At first, it should be confirmed by HP. It should be confirmed by SAP. Also, there is a financial question, how to sponsor all this. And we got all these things from HP and our service partner. >> Randy, I'll give you the last word. >> So, one, it's an exciting time. We're watching this explosion of data happening. I believe we've only just scratched the surface. Today, we're looking at tens of thousands of SKUs for a customer, and looking at the velocity of that going through a retail chain. But every device that we have is gonna have a sensor in it, it's gonna be connected all the time. It's gonna be generating data to the point where you say, I'm gonna keep it, and I'm gonna use it, because it's gonna let me take real time action. Some day they will be able to know that the mobile phone they care about is in their store, and pop up an offer to a customer that's exactly meaningful to do that. That confluence of sensor data, location data, all the things that we will generate over time. The ability to take action on that in real time, whether it's fix a part before it fails, create a marketing offer to the person that's already in the store, that allows them to buy more. That allows us to search the universe, in search for how did we all get here. That's what's happening with data. It is exploding. We are at the very front edge of what I think is gonna be transformative for businesses and organizations everywhere. It is cool. I think the advent of in-memory, data analytics, real time, it's gonna change how we work, it's gonna change how we play.
Frankly, it's gonna change humankind when we watch some of these researchers doing things on a massive level. It's pretty cool. >> Yeah, and the key is being able to do that wherever the data lives. >> Randy: Absolutely. >> Gentlemen, thanks very much for coming on the Cube. >> Thank you for having us. >> You're welcome, great to see you guys again. Alright, keep it right there everybody, Peter and I will be back with our next guest, right after this short break. This is the Cube, we're live from HPE Discover Madrid 2017. We'll be right back. (upbeat music)

Published Date : Nov 28 2017

Eric Herzog, IBM | VMworld 2015


 

>> Announcer: From the noise, it's the Cube, covering VMworld 2015. Brought to you by VMware and its ecosystem sponsors. And now, your host, Dave Vellante. >> We're back at Moscone, everybody. This is the Cube, SiliconANGLE Wikibon's continuous production of VMworld 2015, and we're riding the data wave. Eric Herzog is here, he's a vice president of marketing, IBM Storage, in the Hawaiian shirt. Great to see you again, my friend. >> Well, Dave, thank you very much. As I keep telling people, it's not about data lakes, people have oceans of data these days. >> Yes, oceans of data today. >> Oceans of data now. >> So what's the story? You've got the Hawaiian shirt on, what do you got going on? >> Our big thing really is oceans of data. So between all the solutions we have, from a storage solution set, a platform computing environment, our joint deal that we do with Cisco, with what we call the VersaStack, and our Spectrum family of software, now our customers are saying everything's going digital. And it doesn't matter whether you're a global enterprise, a midsize company, or even an SMB, with everything going digital it isn't about lakes of data, it's about oceans of data. >> So let's start maybe at the VersaStack, as hyperconverged has sort of taken the world by storm. You're seeing VMware obviously talking about it, you've got a bunch of startups talking about it. When you guys made the move to sell the server business, the x86 server business, to Lenovo, and BNT, the acquisition of BNT, went with it, it opened up whole new opportunities for IBM from a partnership standpoint, and one of the first guys you went to was Cisco. So talk about that. >> Well, we've had a great partnership with Cisco. We deliver the VersaStack through our mutual channel partners, so globally, we have channel partners in all of the geos that are selling the VersaStack solution. We started originally with our V7000 product, which allows us to not only provide a strong mid-tier offering, but, because of our integration of our Spectrum Virtualize, actually will
virtualize heterogeneous storage, so over 300 arrays from our competitors can be virtualized, giving any data center or cloud deployment a single way to replicate, a single way to snapshot, and of course a single way to actually migrate data, which is a huge issue obviously in big deployments. >> Well, and the SAN Volume Controller was really the first platform to do that. That was the gold standard, and the original, you know, tier one, tier two storage sort of was defined by the SAN Volume Controller. Now you've built those capabilities into the array. >> So we started with our V7000 Storwize, which was the first with a VersaStack. We announced last week two new versions. One, our V9000, which incorporates that same value of the SAN Volume Controller, but in an all-flash array. That product has been incredibly successful for us. We have thousands of customers, we have deployed more petabytes than anyone in the industry, and more units than anyone in the industry, for, you know, some of those analysts that track the numbers side of the business, we've done more than anyone. >> Pricing it right is what you're telling me. >> We are definitely pricing it right. We do more petabytes and more units than anybody by far, but not the most revenue, second most revenue. Well, we're a fair price for a fair job, as opposed to a high price for an okay job. That's what we believe in, delivering more value for the money. So we've got that, so that opens up heavy virtualized environments, heavy cloud environments, big data analytics, all those applications where all-flash, high-end Oracle deployments, SAP HANA configs, all those sort of things are ideal. At the same time, we brought in the V5000 at the lower entry place of the mid-tier, and it's with the UCS Mini from Cisco, so it gives you a lower entry price and allows a couple things. One, you can go into departmental deployments at a big enterprise. Two, you can go into remote office deployments, also of a large enterprise. But three, it allows you to
take the value of a converged infrastructure down into smaller customers, because it's a lower entry price point. It's got all the value of the virtualization engine we have in all of our V family of products, the V5000, the V7000, and the V9000 all-flash, but it's at a much lower price point, with a lower cost UCS Mini and a lower cost switch infrastructure from Cisco. So it's a great solution for those big offices, but again, remote and department level, and ideal to move converged infrastructure down into smaller companies. >> So Cisco has been incredibly successful in that space. When Cisco first came out, I misunderstood, I said they're going to fall flat on their face in servers, and I was totally wrong about that, because I didn't understand that they were trying to change the game. What's it like partnering with those guys, and how has it added value to your business? >> Well, it's been very strong for us. One, they've got an excellent channel. Two, they have a great direct sales model, as does IBM. Three, we've been partnering with them for ages and ages and ages. In fact, in the '90s we sold a bunch of our networking technology to Cisco, and it's now deployed by Cisco. So some of the networking technology that Cisco puts out there, to their end users, to their channel partners, and to, you know, their big telcos, actually came from IBM, when we sold our networking division to Cisco in the mid-'90s. So, strong partnership ever since then. >> So let's talk more about the portfolio. I'm particularly interested in the whole TSM business. TSM came over to the storage group, which thrilled me. I think that was a great move by IBM to do that, whoever made that decision, smart move. How has that affected, having that storage software capability embedded into the storage business, how has that affected your ability to go to market? >> Well, it's been great. So that's our Spectrum family. There are six elements to that: Spectrum Protect, which used to be TSM; Spectrum Control, which used to be the TPC product; Spectrum
virtualized which is a software version of the sand volume controller so you can get as a software-only solution spectrum archive spectrum accelerate which is a scale-out block solution think of it as a software version of our XIV platform but software only and spectrum scale which gives incredible scale-out nas capability in fact spectrum scale has a number of customers in the enterprise side not in the HPC market but in global enterprises over 100 petabytes and we even have one customer that has one exabyte in production under spectrum scale exabyte one exabyte in production and not an hpc customer or not not one of the big universities not one of the think tanks but a commercial large global fortune 500 company we an exabyte with spectrum scale so so talk a little bit more about the strategy I think people all times misunderstand IBM's approach they say okay IBM getting out of the hardware business which they think Inferno must get another storage business you're not get out of the storage business obviously they hired hogging store oh so talk more about the strategy and how you're you know pursuing that yeah well I'd say a couple things so first of all our commitment to storage is very strong we're investing a billion in all flash technology and a billion in spectrum software in addition to our normal engineering development for our store wise family and our other members of our products that we've already had so a billion extra in flash and a billion extra in our software family in addition to that we've got a method of consumption that we're looking at so some end users want a full storage solution our ds8000 our flash systems are storwize some customers want to move to the software-defined storage and in several cases such as XIV software only spectrum virtualize okay we've got a number of different ways that you can consume the product and then lastly in several of the products such as spectrum scale spectrum accelerate and a lite version of spectrum 
Control that we call Spectrum Control Storage Insights, are available through a cloud consumption model. So if the customer wants a comprehensive solution, we have it. If the customer wants software-defined storage, we have it. If the customer wants integrated infrastructure, with our VersaStack, we have it. And if the customer wants a cloud storage model of consumption, we have that too. And quite honestly, we think in bigger accounts they may have multiple consumption models. For example, the core data center might go for a full storage solution, but guess what, the cloud solutions would be ideal for a remote or branch office. >> So talk to me more about the cloud. You're talking about SoftLayer? When we go to the IBM shows, you hear SoftLayer, Bluemix, you know, a lot of the DevOps crowd. What's going on? >> Spectrum Accelerate, Spectrum Scale, and Spectrum Control are all available as a SoftLayer offering. They are not targeting test and dev, they are not targeting, you know, just the Bluemix crowd. These are targeting the core data center. They could be test and dev, or they could be remote office, branch office opportunities for large enterprises that want a full storage solution, and spend that money on the core data center, but for the remote office, have Spectrum Scale delivered over SoftLayer, an ideal solution. And various consumption models, whichever fits their need. >> So David Floyer just wrote a piece on Wikibon.com talking about latency and capacity storage, at a very high level sort of segmenting the market those ways, sizing it up and projecting some of the trends. And obviously latency storage, he's thinking, you know, more flash-oriented; capacity storage, more spinning disk and tape. Is that a reasonable way to look at the business, and how does it apply to your portfolio? >> So we do think that's a reasonable way to look at it. You have, if you will, a performance segment and a capacity segment. There are a number of things that people need to really look at when they
buy storage first of all I'm a storage guy for 30 years no one cares about storage it's all about the data it's all about the data that your storage optimizes it's about the workload the activation the use case for me I do too but unfortunately almost every time you know see how it's going to say almost every CIO is a software guy so it's how does the storage optimize my software environment and that's what's critical to them so we see certain applications that are very performance exit certain SLA s they need to meet we have some that are medium sensitive and we have some that of course are very capacity oriented which is our spectrum scale one exabyte with a single customer now that's capacity that's an ocean of data but we also have solutions we're able to put it together so for example in a lot of data analytics workloads that would run in spectrum scale we actually sell a lot of our all flash flash systems use the flash to ingest the data use flash to manage the metadata use the flash to run the search engine in a big giant config such as that and when you're running an analytics workload you run the analytics workload on that flash yet you're really doing a very large deployment hundreds of petabytes to an exabyte with our spectrum scale so we see if you will a continuum and the key thing as IBM offers all of the various piece parts to any level of the continuum and in that example I just gave combining high performance and deep high capacity software in a single solution to meet a business I mean IBM is an unbelievable company think about Watson cloud bluemix the analytics business deep deep heavy rd z mainframe so you got all the pieces how is the storage business how can it better leverage those other pieces and and is it or is it is it relevant or is it just just take the storage hill so we see our storage products as integrating with our other so for example we do a lot of deals where they buy a mainframe in our ds8000 sure we offer integrated 
infrastructure not only with cisco but actually with the power family as well it's called pure power and that has an integrated v7000 with a power server and we're looking at deepening that relationship as well a lot of analytics were lot alex workloads going scale so whether they buy the big insights whether they use in Watson we've got several customers use Watson but by flash systems because it's obviously very compute intensive so they use flash systems to do that so you know we fit in at the same time we have plenty of customers that don't buy anything else from IBM and just buy storage so we are appealing to a very broad audience those that are traditional IBM shops that by a lot of different products from IBM and those that go in fact one of our public references general mills they had not bought anything from any division of IBM for 50 years and one of our channel partners in Minnesota we are able to get in there with our XIV product and now not only do they buy XIV and some spectrum protect for backup but they've actually started to buy some other technology from IBM and for 50 years they bought nothing from IBM from any division so in that case storage led the way so again in certain accounts we're in there with the ds8000 and Z or were in there with Watson and flash systems and other accounts were pioneering and in some cases we're the only product they buy they don't buy from IBM we will meet whichever need they have now in periods in the last I mean it's been Evan flow in the storage business for IBM periods the last decade IBM deep rd but the products couldn't seem to go to market now you shared with me under under NDA so we can't talk about it in detail but shared with me the roadmap and and the product roadmap is accelerating from release maybe it's just my impression from what I'm used to should we expect to see a much more you know steady cadence of product delivery from IBM going forward absolutely so keeping in our spirit of oceans we ride the 
wave we don't fight the way and in today's era in any era of high-tech not just in store it doesn't matter whether storage whether its servers whether it's web to know whatever it is it's all about innovation and doing it quickly so we're going to ride that wave of innovation we're going to have a regular cadence of releases we released four different members of spectrum plus two verses stocks and next quarter you'll see five really five major product releases in one quarter and then in q1 you're going to see another three so we're making sure that as this trajectory of innovation hits all of high tech in all segments that IBM storage is not going to be left behind and we're going to continue to innovate on an accelerated pace that pace is is really important you know IBM again spends a lot of money on R&D it's key to get that product into the pipeline let's talk about vmware and vmworld obviously we're here at vmworld so on vmware very important constituency a lot of customers you got a you got to talk to vmware if you want to be in the data center today what is your strategy around vmware specifically but also generally as it relates to multi cloud environments whether it's your own cloud or other clouds OpenStack or what if you could talk about those so let's take virtualization first so we support a number of different hypervisors we support VMware extensively we support hyper-v we support kvm we support ovm we support open initiatives like OpenStack cinder we support Hadoop we have Hadoop connectors in many of our products so whether it's a cloud deployment or a virtual deployment we want to make sure we support everybody for example spectrum protect was announced last week with support for softlayer as a target device basically a tier well guess what in 1h we're going to support amazon and as you're not just softlayer so again we want to make sure we support everything with VMware specifically for the first time ever VMware has invited IBM storage on stave at 
three questions iBM has done things in the server world in the past but we have never ever ever been invited by VMware to their technical sessions in fact when is it five o'clock today it's called Project capstone which they publicly announced last week and it's about deploying Oracle environments in VMware virtualization it's a partnership with VMware with IBM flash systems all flash and with HP superdome servers and that's going to be on stage at five o'clock today here at moscone center awesome so we're starting to see a tighter relationship with with VMware building out the portfolio what do you say to the customer says yeah I hear you but vmware's doing all this sort of interesting stuff around things like v san what do you what do you tell a customer you know what about that so we see the San as it you know in this era of behemoths everyone is your partner everyone is your competitor but we work with Intel all the time other divisions of IBM think Intel's a major competitor some of our server division work with some of our storage competitors so we think you know we will work with everyone and while we work with VMware a number of angles so if he sounds a little bit of a competitor that's fine and we see an open space for all of the solutions in the market today we got to leave it there the last question so take us through sort of your objectives for IBM storage over the you know near and midterm what do you what should we be well so our big thing is to make sure we keep the cadence up there's so much development going on whether that be in software defined and integrated infrastructure in all flash in all the areas that we are going to make sure that we continue to develop in every area we've got the billion dollars in all flash in the billion dollars in software to find we are going to spend it and we're going to bring those products to market that fit the need so that the oceans of data that everyone is dealing with can be handled appropriately 
cost-effectively and quite honestly that oceans of data it's about the business value of the data not the storage underneath so we're going to make sure that for all those oceans a data we will allow them to drive real business value and make sure that those data oceans are protected meet their SLA s and are always available to their end user base I love it yet the Steve Mills billion-dollar playbook obviously worked in Linux it was well over a billion in analytics business IBM's a leader they're applying it to flash great acquisition of Texas memory systems you become a leader they're now going after the software to find Eric Herzog thanks very much for coming to the cubes great very much we love to have all right everybody will be back with our next guest right after this World we're live from vmworld and Moscone keep right there you

Published Date : Sep 1 2015

