Mike Banic, Vectra | AWS re:Inforce 2019
>> Live from Boston, Massachusetts, it's theCUBE, covering AWS re:Inforce 2019, brought to you by Amazon Web Services and its ecosystem partners. >> Okay, welcome back, everyone. theCUBE's live coverage here in Boston, Massachusetts, of AWS re:Inforce, Amazon Web Services' first inaugural conference around cloud security. I'm John Furrier with Dave Vellante. One of the top stories being announced here at re:Inforce is VPC traffic mirroring, and we wanted to bring in alumni and friend Mike Banic, the VP of marketing at Vectra, who specializes in networking. Welcome to theCUBE. We go way back, HP Networking, and now a hot startup here, so we wanted to really bring you in to help unpack this. VPC traffic mirroring is probably the meatiest announcement of everything on stage. The other stuff was general availability of Security Hub, which is a great product, absolutely, and GuardDuty, and all this other stuff. But VPC traffic mirroring is a killer feature for a lot of reasons, and it brings some challenges and some opportunities that might be downstream. I want to get your thoughts: what is your take on VPC traffic mirroring? >> At the highest level it brings a lot of value, because it allows you to get visibility into something that's really opaque, which is the traffic within the cloud. In the past, the way people were solving this was they had to put an agent on the workload, and nobody wants that. One, it's hard to manage: you don't want dozens to hundreds or thousands of agents. Two, it's going to slow things down. And third, it can be subverted: you get the advanced attacker in there, he knows how to get below that level and operate in a way where he can hide his communication, and his behavior isn't seen. With traffic mirroring, we're getting a copy of the packet from below the hypervisor. It cannot be subverted, so we're seeing everything, and we're also not slowing down the traffic in the virtual private cloud. So it allows us to extract just the right data for a security application, which in our case is metadata, and enrich it with information that's necessary for detecting threats and also for performing an investigation. >> Yeah, it was definitely the announcement that everybody has been talking about, it has the buzz. So from a partner perspective, how do you guys tie into that? What do you do? What's the value that you bring to the customer? >> So the value that we're bringing really stems from what you can do with our platform. There are two things everybody is looking to do at the highest level, which is detect threats and respond to threats. On the detection side, we take the metadata that we've extracted and enriched and run it through machine learning algorithms, and from there we not only get a detection, we can correlate it to the workloads we're seeing it on. So we can present much more of an incident report rather than just a security alert saying, hey, something bad happened over there. It's not just that something bad happened, but that these four bad things happened, they happened in this time sequence over this period of time, and they involved these other workloads. We can give you a sense of what the attack campaign looks like. So you get a sense of, like with cancer, you have bad cells in your liver, but they've metastasized to these other places. We also keep that metadata in something we call Cognito Recall, which is in AWS.
And it has pre-built analytics and saved searches, so that once you get that early warning signal from Cognito Detect, you know exactly where to start looking. You can peel back all the unrelated metadata and look specifically at what happened during the time of that incident, in order to perform your threat investigation and respond rapidly to that threat. >> So you guys do have a lot of machine intelligence, AI chops. How close are we to being able to use that AI to not just identify and detect, but begin to automate responses? Are we there yet? Is it something that people want or don't want? >> We're getting close to being there, to answer your first question, and people aren't sure that they want it yet. Here's some of the rationale behind it. We generally say that AI is pretty smart, but security operations people are still the brains of the operation. There's so much human intelligence, so much contextual knowledge, that a security operations person can apply to the threats that we detect. They can look at something and say, oh yeah, I see the user account this service is being turned on from, you know, this particular workload, I know exactly what's happening with that. They add so much value. So we look at what we're doing as augmenting the security operations team. We're reducing their workload by taking all the mundane work and automating that, and putting the right details at their fingertips so they can take action. Now, there are some things that are highly repeatable that they do like to use playbooks for, so we partner with companies like Phantom, which got bought by Splunk, and Demisto, which Palo Alto Networks acquired. They've built some really good playbooks for some of those well-defined situations. And there were a couple of presentations on the floor that talked about those use >> cases. Phantom was pretty good, a solid product, built into Security Hub; it helps, a nice product. But I'll get back to the VPC traffic mirroring. It makes so much sense, it's about time; finally they got it done. Does it make any sense that it wasn't done before? But I gotta ask first, with the analytics: you said on theCUBE before, the network doesn't lie. >> The network is no liar. >> The network doesn't lie, and the subversion piece is a key piece; it better be at the lowest level possible. That's a great spot for the data, so I totally agree. Where do you guys create value? Because now that everyone's got VPC traffic mirroring available, how do you guys take advantage of that? What's next for you guys? Where does the differentiation come from? Where does the value go next? >> Yeah, there are really three things that I tend to focus on. One is we enrich the metadata that we're extracting with a lot of important data, and that really accelerates the threat investigation. So things like directionality, things like building a notion of the identity of the workload, or, when you're running us on-prem, the device, because IP addresses change. There are dynamic things in there, so having a sense of consistency over a period of time is extremely valuable for performing a threat investigation, and that information gets put into Recall, the metadata store. If people have a data lake that they want to have it sent to, whether it's Elastic or Splunk or Kafka, then that is included in what we send to them, in Zeek formatting, so they can use other Zeek tooling and they're not wasting any money there. And the second piece is around the way that we build analytics.
There's always a pairing of somebody from security research with a data scientist. The security researcher explains the tools, the tactics, and the techniques of the attacker, so the data scientist isn't being completely random about what features they want to find in the network traffic. They're being really specific about which features are actually going to pair to that tool, tactic, and technique, so the efficacy of the algorithm is better. We've been doing this for five-plus years, and that history counts for something, because of the learning we've had. In the beginning there were maybe a couple of different supervised techniques to apply; now we're pairing those supervised techniques with some deep learning techniques, so the performance of the algorithms is actually 90% more effective than it was five years ago. >> So with the software, you get the data, extract the metadata, which you're doing anyway, and now it's more efficient, correct? No slowdowns, no problems with the performance hit from the agents you mentioned earlier, and it's better data. What's the impact on the customers? What's the revelation here? At the end of the day, for your customers, and Amazon's customers through you, what do they get out of it? What's the benefit to them? >> So it's all about reducing the time to detect and the time to respond. We had one of our Fortune 250 customers present last week at the Gartner Security Summit. A gentleman from Parker Hannifin talked on stage about an incident where they got an urgent alert from Cognito. It told him about an attack campaign. He was immediately alerted to the 45 different machines that were sending data to the cloud. He automatically knew what the patterns of data were, the volume of data; they immediately knew exactly what services were being used within the cloud. They were able to respond to this and get it all under control in less than 24 hours, because they had the right data at their fingertips to make rapid decisions before there was any risk. What they ended up finding was that it was actually a new application, but somebody had not followed the procedures of the organization that keep them compliant with so many of their end users. In the end it saved tremendous time and money, and if that had been a real breach, it would have prevented them from losing proprietary information. >> Well, historically it would take 250 days to even find out that there was a breach, right? And by then, who knows what's been exfiltrated? >> Yeah, we had a couple of firms that run red team exercises for a living come by, and I said to them, do you know who we are? And they said, of course we know who you are; there's one tool out there that finds us, and it's Vectra. >> That's kind of the historical, on-prem side. So what do you do for on-prem versus this all running in AWS? Is it cloud only? >> It's actually both. We know that there are a lot of companies that come here that have never owned a server, and everything's been in AWS from day one, even for IT. For them we can run everything: we have the sensor attached to the VPC traffic mirroring in AWS, and we can have the brain of the Cognito platform in AWS, so they don't need anything on-prem. There are also a lot of people that are in the lift-and-shift mode; it can be on-prem and in AWS.
So they can choose where they want the brain, and they can have sensors in both places. And we have people coming to this event that are hybrid cloud: they've got IT infrastructure in Azure, but they have production in AWS, and they have stuff that's on-prem. We can meet that need too, because we work with the vTAP from Azure, so we're not religious about that. It's all about getting the right data to the right place and reducing the time to detect and respond. >> Mike, thanks for coming on and sharing the insights; your perspective on VPC traffic mirroring is appreciated. Give a quick plug for the company. What are you guys working on? What's the key focus? Are you hiring? You just got some big funding news. Take a minute to get the plug in for Vectra. >> Yeah, so we've gone through several years of consecutively more than doubling annual recurring revenue. We've been really fortunate to be earning a lot of customer business from the largest enterprises in the world. We recently had $100,000,000 in funding led by TCV out of Menlo Park; total capitalization is over $220,000,000 right now, and we're on the path to continue that doubling. But, you know, we've been really focused on skating to where the puck is going by working with Amazon in advance on traffic mirroring. We know that today people are using containers in the VM environment, and we know where they want to go is more serverless and leveraging containers more, and we're already going in that direction. >> Great to see you, congratulations. We've known each other for many, many years; it's the 10th anniversary of theCUBE, and you were on in year one. Great to know you, and congratulations on the success of Vectra and a great announcement. Amazon gives you a tailwind. >> Thanks a lot. It's great to see your growth as well. Congratulations. >> Thanks, Mike. Mike Banic, unpacking the relevance of the VPC traffic mirroring feature. This is the kind of conversation we're having here: deep conversations around stuff that matters in security and cloud security. Of course, theCUBE is bringing you coverage from the inaugural re:Inforce event. We'll be right back after this short break.
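The mirroring setup Banic describes, copying packets from a workload's network interface to a sensor below the guest OS, maps onto a handful of EC2 API calls. Here is a minimal sketch using boto3; the region, ENI IDs, and filter choices are illustrative placeholders, not details from the interview:

```python
# Hypothetical sketch: mirror traffic from a monitored workload ENI to a sensor ENI.
# All resource IDs below are placeholders, not real infrastructure.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Target: the network interface of the sensor that receives the mirrored packets.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0aaaaaaaaaaaaaaaa",   # placeholder sensor ENI
    Description="sensor target for mirrored packets",
)["TrafficMirrorTarget"]

# 2. Filter: decide which traffic is copied (here, everything in both directions).
mirror_filter = ec2.create_traffic_mirror_filter(
    Description="mirror all traffic",
)["TrafficMirrorFilter"]
for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# 3. Session: attach mirroring to the workload ENI being monitored.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0bbbbbbbbbbbbbbbb",   # placeholder workload ENI
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    SessionNumber=1,
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```

With a session like this in place, copied packets arrive on the sensor's interface without installing or trusting anything on the monitored workload, which is the property the interview keeps coming back to.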
Infrastructure For Big Data Workloads
>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Hi, everybody, welcome to this special CUBE Conversation. You know, big data workloads have evolved, and the infrastructure that runs big data workloads is also evolving. Big data, AI, other emerging workloads need infrastructure that can keep up. Welcome to this special CUBE Conversation with Patrick Osborne, who's the vice president and GM of big data and secondary storage at Hewlett Packard Enterprise, @patrick_osborne. Great to see you again, thanks for coming on. >> Great, love to be back here. >> As I said up front, big data's changing. It's evolving, and the infrastructure has to also evolve. What are you seeing, Patrick, and what's HPE seeing in terms of the market forces right now driving big data and analytics? >> Well, some of the things that we see in the data center, there is a continuous move from bare metal to virtualized. Everyone's on that train. To containerization of existing apps, your apps of record, business, mission-critical apps. But really, what a lot of folks are doing right now is adding additional services to those applications, those data sets, so, new ways to interact, new apps. A lot of those are being developed with a lot of techniques that revolve around big data and analytics. We're definitely seeing the pressure to modernize what you have on-prem today, but you know, you can't sit there and be static. You gotta provide new services around what you're doing for your customers. A lot of those are coming in the form of this Mode 2 type of application development. >> One of the things that we're seeing, everybody talks about digital transformation. It's the hot buzzword of the day. To us, digital means data first. Presumably, you're seeing that. Are organizations organizing around their data, and what does that mean for infrastructure? >> Yeah, absolutely. We see a lot of folks employing not only technology to do that. They're doing organizational techniques, so, peak teams. You know, bringing together a lot of different functions. Also, too, organizing around the data has become very different right now, in that you've got data out on the edge, right? It's coming into the core. A lot of folks are moving some of their edge to the cloud, or even their core to the cloud. You gotta make a lot of decisions and be able to organize around a pretty complex set of places, physical and virtual, where your data's gonna lie. >> There's a lot of talk, too, about the data pipeline. The data pipeline used to be, you had an enterprise data warehouse, and the pipeline was, you'd go through a few people that would build some cubes and then they'd hand off a bunch of reports. The data pipeline, it's getting much more complex. You've got the edge coming in, you've got, you know, core. You've got the cloud, which can be on-prem or public cloud. Talk about the evolution of the data pipeline and what that means for infrastructure and big data workloads. >> For a lot of our customers, and we've got a pretty interesting business here at HPE. We do a lot with the Intelligent Edge, so, our Edgeline servers and Aruba, where a lot of the data is sitting outside of the traditional data center.
Then we have what's going on in the core, which, for a lot of customers, they are moving from either traditional EDW, right, or even Hadoop 1.0 if they started that transformation five to seven years ago, to, a lot of things are happening now in real time, or a combination thereof. The data types are pretty dynamic. Some of that is always getting processed out on the edge. Results are getting sent back to the core. We're also seeing a lot of folks move to real-time data analytics, or some people call it fast data. That sits in your core data center, so utilizing things like Kafka and Spark. A lot of the techniques for persistent storage are brand new. What it boils down to is, it's an opportunity, but it's also very complex for our customers. >> What about some of the technical trends behind what's going on with big data? I mean, you've got sprawl, with both data sprawl, you've got workload sprawl. You got developers that are dealing with a lot of complex tooling. What are you guys seeing there, in terms of the big mega-trends? >> We have, as you know, HPE has quite a few customers in the mid-range in enterprise segments. We have some customers that are very tech-forward. A lot of those customers are moving from this, you know, Hadoop 1.0, Hadoop 2.0 system to a set of essentially mixed workloads that are very multi-tenant. We see customers that have, essentially, a mix of batch-oriented workloads. Now they're introducing these streaming type of workloads to folks who are bringing in things like TensorFlow and GPGPUs, and they're trying to apply some of the techniques of AI and ML into those clusters. What we're seeing right now is that that is causing a lot of complexity, not only in the way you do your apps, but the number of applications and the number of tenants who use that data. It's getting used all day long for various different, so now what we're seeing is it's grown up. It started as an opportunity, a science project, the POC. Now it's business-critical. Becoming, now, it's very mission-critical for a lot of the services that drives. >> Am I correct that those diverse workloads used to require a bespoke set of infrastructure that was very siloed? I'm inferring that technology today will allow you to bring those workloads together on a single platform. Is that correct? >> A couple of things that we offer, and we've been helping customers to get off the complexity train, but provide them flexibility and elasticity is, a lot of the workloads that we did in the past were either very vertically-focused and integrated. One app server, networking, storage, to, you know, the beginning of the analytics phase was really around symmetrical clusters and scaling them out. Now we've got a very rich and diverse set of components and infrastructure that can essentially allow a customer to make a data lake that's very scalable. Compute, storage-oriented nodes, GPU-oriented nodes, so it's very flexible and helps us, helps the customers take complexity out of their environment. >> In thinking about, when you talk to customers, what are they struggling with, specifically as it relates to infrastructure? Again, we talked about tooling. I mean, Hadoop is well-known for the complexity of the tooling. But specifically from an infrastructure standpoint, what are the big complaints that you hear? >> A couple things that we hear is that my budget's flat for the next year or couple years, right? 
We talked earlier in the conversation about, I have to modernize, virtualize, containerizing my existing apps, that means I have to introduce new services as well with a very different type of DevOps, you know, mode of operations. That's all with the existing staff, right? That's the number one issue that we hear from the customers. Anything that we can do to help increase the velocity of deployment through automation. We hear now, frankly, the battle is for whether I'm gonna run these type of workloads on-prem versus off-prem. We have a set of technology as well as services, enabling services with Pointnext. You remember the acquisition we made around cloud technology partners to right-place where those workloads are gonna go and become like a broker in that conversation and assist customers to make that transition and then, ultimately, give them an elastic platform that's gonna scale for the diverse set of workloads that's well-known, sized, easy to deploy. >> As you get all this data, and the data's, you know, Hadoop, it sorta blew up the data model. Said, "Okay, we'll leave the data where it is, "we'll bring the compute there." You had a lot of skunk works projects growing. What about governance, security, compliance? As you have data sprawl, how are customers handling that challenge? Is it a challenge? >> Yeah, it certainly is a challenge. I mean, we've gone through it just recently with, you know, GDPR is implemented. You gotta think about how that's gonna fit into your workflow, and certainly security. The big thing that we see, certainly, is around if the data's residing outside of your traditional data center, that's a big issue. For us, when we have Edgeline servers, certainly a lot of things are coming in over wireless, there's a big buildout in advent of 5G coming out. That certainly is an area that customers are very concerned about in terms of who has their data, who has access to it, how can you tag it, how can you make sure it's secure. That's a big part of what we're trying to provide here at HPE. >> What specifically is HPE doing to address these problems? Products, services, partnerships, maybe you could talk about that a little bit. Maybe even start with, you know, what's your philosophy on infrastructure for big data and AI workloads? >> I mean, for us, we've over the last two years have really concentrated on essentially two areas. We have the Intelligent Edge, which is, certainly, it's been enabled by fantastic growth with our Aruba products in the networks in space and our Edgeline systems, so, being able to take that type of compute and get it as far out to the edge as possible. The other piece of it is around making hybrid IT simple, right? In that area, we wanna provide a very flexible, yet easy-to-deploy set of infrastructure for big data and AI workloads. We have this concept of the Elastic Platform for Analytics. It helps customers deploy that for a whole myriad of requirements. Very compute-oriented, storage-oriented, GPUs, cold and warm data lakes, for that matter. And the third area, what we've really focused on is the ecosystem that we bring to our customers as a portfolio company is evolving rapidly. As you know, in this big data and analytics workload space, the software development portion of it is super dynamic. If we can bring a vetted, well-known ecosystem to our customers as part of a solution with advisory services, that's definitely one of the key pieces that our customers love to come to HP for. 
>> What about partnerships around things like containers and simplifying the developer experience? >> I mean, we've been pretty public about some of our efforts in this area around OneSphere, and some of these, the models around, certainly, advisory services in this area with some recent acquisitions. For us, it's all about automation, and then we wanna be able to provide that experience to the customers, whether they want to develop those apps and deploy on-prem. You know, we love that. I think you guys tag it as true private cloud. But we know that the reality is, most people are embracing very quickly a hybrid cloud model. Given the ability to take those apps, develop them, put them on-prem, run them off-prem is pretty key for OneSphere. >> I remember Antonio Neri, when you guys announced Apollo, and you had the astronaut there. Antonio was just a lowly GM and VP at the time, and now he's, of course, CEO. Who knows what's in the future? But Apollo, generally at the time, it was like, okay, this is a high-performance computing system. We've talked about those worlds, HPC and big data coming together. Where does a system like Apollo fit in this world of big data workloads? >> Yeah, so we have a very wide product line for Apollo that helps, you know, some of them are very tailored to specific workloads. If you take a look at the way that people are deploying these infrastructures now, multi-tenant with many different workloads. We allow for some compute-focused systems, like the Apollo 2000. We have very balanced systems, the Apollo 4200, that allow a very good mix of CPU, memory, and now customers are certainly moving to flash and storage-class memory for these type of workloads. And then, Apollo 6500 were some of the newer systems that we have. Big memory footprint, NVIDIA GPUs allowing you to do very high calculations rates for AI and ML workloads. We take that and we aggregate that together. We've made some recent acquisitions, like Plexxi, for example. A big part of this is around simplification of the networking experience. You can probably see into the future of automation of the networking level, automation of the compute and storage level, and then having a very large and scalable data lake for customers' data repositories. Object, file, HTFS, some pretty interesting trends in that space. >> Yeah, I'm actually really super excited about the Plexxi acquisition. I think it's because flash, it used to be the bottleneck was the spinning disk, flash pushes the bottleneck largely to the network. Plexxi gonna allow you guys to scale, and I think actually leapfrog some of the other hyperconverged players that are out there. So, super excited to see what you guys do with that acquisition. It sounds like your focus is on optimizing the design for I/O. I'm sure flash fits in there as well. >> And that's a huge accelerator for, even when you take a look at our storage business, right? So, 3PAR, Nimble, All-Flash, certainly moving to NVMe and storage-class memory for acceleration of other types of big data databases. Even though we're talking about Hadoop today, right now, certainly SAP HANA, scale-out databases, Oracle, SQL, all these things play a part in the customer's infrastructure. >> Okay, so you were talking before about, a little bit about GPUs. What is this HPE Elastic Platform for big data analytics? What's that all about? >> I mean, we have a lot of the sizing and scalability falls on the shoulders of our customers in this space, especially in some of these new areas. 
What we've done is, we have, it's a product/a concept, and what we do is we have this, it's called the Elastic Platform for Analytics. It allows, with all those different components that I rattled off, all great systems in of their own, but when it comes to very complex multi-tenant workloads, what we do is try to take the mystery out of that for our customers, to be able to deploy that cookie-cutter module. We're even gonna get to a place pretty soon where we're able to offer that as a consumption-based service so you don't have to choose for an elastic type of acquisition experience between on-prem and off-prem. We're gonna provide that as well. It's not only a set of products. It's reference architectures. We do a lot of sizing with our partners. The Hortonworks, CloudEra's, MapR's, and a lot of the things that are out in the open source world. It's pretty good. >> We've been covering big data, as you know, for a long, long time. The early days of big data was like, "Oh, this is great, "we're just gonna put white boxes out there "and off the shelf storage!" Well, that changed as big data got, workloads became more enterprise, mainstream, they needed to be enterprise-ready. But my question to you is, okay, I hear you. You got products, you got services, you got perspectives, a philosophy. Obviously, you wanna sell some stuff. What has HPE done internally with regard to big data? How have you transformed your own business? >> For us, we wanna provide a really rich experience, not just products. To do that, you need to provide a set of services and automation, and what we've done is, with products and solutions like InfoSight, we've been able to, we call it AI for the Data Center, or certainly, the tagline of predictive analytics is something that Nimble's brought to the table for a long time. To provide that level of services, InfoSight, predictive analytics, AI for the Data Center, we're running our own big data infrastructure. It started a number of years ago even on our 3PAR platforms and other products, where we had scale-up databases. We moved and transitioned to batch-oriented Hadoop. Now we're fully embedded with real-time streaming analytics that come in every day, all day long, from our customers and telemetry. We're using AI and ML techniques to not only improve on what we've done that's certainly automating for the support experience, and making it easy to manage the platforms, but now introducing things like learning, automation engines, the recommendation engines for various things for our customers to take, essentially, the hands-on approach of managing the products and automate it and put into the products. So, for us, we've gone through a multi-phase, multi-year transition that's brought in things like Kafka and Spark and Elasticsearch. We're using all these techniques in our system to provide new services for our customers as well. >> Okay, great. You're practitioners, you got some street cred. >> Absolutely. >> Can I come back on InfoSight for a minute? It came through an acquisition of Nimble. It seems to us that you're a little bit ahead, and maybe you say a lot a bit ahead of the competition with regard to that capability. How do you see it? Where do you see InfoSight being applied across the portfolio, and how much of a lead do you think you have on competitors? >> I'm paranoid, so I don't think we ever have a good enough lead, right? You always gotta stay grinding on that front. But we think we have a really good product. You know, it speaks for itself. 
A lot of the customers love it. We've applied it to 3PAR, for example, so we came out with VMVision for 3PAR that's based on InfoSight. We've got some things in the works for other product lines that are imminent pretty soon. You can think about what we've done for Nimble and 3PAR, we can apply similar logic to the Elastic Platform for Analytics, like running at that type of cluster scale to automate a number of items that are pretty pedantic for the customers to manage. There's a lot of work going on within HPE to scale that as a service that we provide with most of our products. >> Okay, so where can I get more information on your big data offerings and what you guys are doing in that space? >> Yeah, so, we have, you can always go to hp.com/bigdata. We've got some really great information out there. We're in our run-up to our big end user event that we do every June in Las Vegas. It's HPE Discover. We have about 15,000 of our customers and trusted partners there, and we'll be doing a number of talks. I'm doing some work there with a British telecom. We'll give some great talks. Those'll be available online virtually, so you'll hear about not only what we're doing with our own InfoSight and big data services, but how other customers like BTE and 21st Century Fox and other folks are applying some of these techniques and making a big difference for their business as well. >> That's June 19th to the 21st. It's at the Sands Convention Center in between the Palazzo and the Venetian, so it's a good conference. Definitely check that out live if you can, or if not, you can watch online. Excellent, Patrick, thanks so much for coming on and sharing with us this big data evolution. We'll be watching. >> Yeah, absolutely. >> And thank you for watching, everybody. We'll see you next time. This is Dave Vellante for theCUBE. (fast techno music)
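The "fast data" core Osborne mentions earlier, with streams landing in Kafka and being processed by Spark rather than batch-loaded into a warehouse, can be sketched with Spark Structured Streaming. The broker address, topic name, and telemetry schema below are assumptions for illustration, and the spark-sql-kafka connector package is assumed to be on the Spark classpath:

```python
# Minimal sketch of a Kafka -> Spark Structured Streaming "fast data" path.
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("edge-telemetry-fast-data").getOrCreate()

# Illustrative schema for JSON telemetry records coming in from the edge.
schema = (
    StructType()
    .add("device_id", StringType())
    .add("metric", StringType())
    .add("value", DoubleType())
)

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "edge-telemetry")               # placeholder topic
    .load()
    # Kafka delivers bytes; decode the payload and parse the JSON.
    .select(from_json(col("value").cast("string"), schema).alias("event"))
    .select("event.*")
)

# Continuously maintained rolling aggregate per device and metric.
rollup = events.groupBy("device_id", "metric").agg(avg("value").alias("avg_value"))

query = (
    rollup.writeStream.outputMode("complete")
    .format("console")  # in practice this would feed a persistent store or alerting topic
    .start()
)
query.awaitTermination()
```

Swapping the console sink for a real store keeps the shape of the pipeline the same; only the output stage changes.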
Arun Varadarajan, Cognizant | Informatica World 2018
>> Voiceover: Live from Las Vegas, it's theCUBE, covering Informatica World 2018, brought to you by Informatica. >> Hey, welcome back everyone, we're here live at the Venetian, at the Sands Convention Center, the Venetian, the Palazzo, for Informatica World 2018. I'm John Furrier, with Peter Burris, my co-host. Our next guest is Arun Varadarajan, who's the VP of AI and Analytics at Cognizant. Great to see you. It's been a while. Thanks for coming on. >> Thank you. Thank you John, it's wonderful meeting you again. >> So, the last time you were on was 2015 on theCUBE. We were in San Francisco, where the event was. You kind of nailed the real-time piece; also, the disruption of data. Looking forward, right now, we're kind of right at the spot you were talking about there. What's different? What's new for you? Data's at the center of the value proposition. >> Arun: Yep. >> People are now realizing, I need to have a strategic data plan, not just store it and go do analytics on it. GDPR is a signal; obviously we're seeing that. What's new? >> So, I think a couple of things, John. One is, I think the customers have realized that there is a need to have a very deliberate approach. Last time, when we spoke, we spoke about digital transformation; it was a cool thing. It had this nice feel to it. But I think what has happened in the last couple of years is that we've been able to help our clients understand what exactly digital transformation is, apart from it being a very simple competitive tactic to deal with the fact that digital natives are, you know, bearing down on your path. It also is an opportunity for you to really reimagine your business architecture. So, what we're telling our clients is that when you're thinking about digital transformation, think of it from a three-layer standpoint, the first layer being your business model itself, right? Because, if you're a traditional taxi service, and you're dealing with the Uber war, you better reimagine your business model. It starts there. And then, if your business model has to change to compete in the digital world, your operating model has to be extremely aligned to that new business model paradigm that you've defined. And, to that, if you don't have a technology model that is adapting to that change, none of this is going to happen. So, we're telling our clients, when you think about digital transformation, think of it from these three dimensions. >> It's interesting, because back in the old days, your technology model dictated what you could do. It's almost flipped around, where the business model is dictating the direction. So, business model, operating model, technology model. Is that because technology is more versatile? Or, as Peter says, processes are known, and you can manage it? It used to be, hey, let's pick a technology decision. Which database, and we're off to the races. Now it seems to be flipped around. >> There are two reasons for that. One is, I think, technology itself has proliferated so much that there are so many choices to be made. And if you start looking at technology first, you get kind of burdened by the choices you need to make. Because, at the end of the day, the choice you make on technology has to have a very strong alignment and impact to business. So, what we're telling our clients is, choices are there; there are plenty of choices. There are compute strategies available that are out there. There are new analytical capabilities. There's a whole lot of that. But if you do not purpose and engineer your technology model to a specific business objective, it's lost. So, when we think about business architecture, and really competing in the digital space, it's really about you saying, how do I make sure that my business model is such that I can thwart the competition that is likely to come from digital natives? You saw Amazon the other day, right? They bought an insurance company. Who knows what they're going to buy next? My view is that Uber may buy one of the auto companies, and completely change the car industry. So, what does Ford do? What does General Motors do? And, if they're going to go about this in a very incremental fashion, my view is that they may not exist. >> So, we have been arguing in our research that digital transformation does mean something. We think that the difference between a business and a digital business is the role that data plays in a digital business, and whether or not a business treats data as an asset. Now, in every business, in every business strategy, the most simple, straightforward, bottom-line thing you can acknowledge is that businesses organize work around assets. >> John: Yep. >> So, does it comport with your observation that, in many respects, what we're talking about here is how we are reinstitutionalizing work around data, and what impact does that have on our business model, our operating model, and our technology selection? Does that line up for you? >> Totally, totally. So, if you think about business model change, to me, it starts by re-imagining your engagement process with your customers. Re-imagining customer experience. Now, how are you going to be able to re-imagine customer experience and customer engagement if you don't know your customer? Right? So, the first building block in my mind is, do you have customer intelligence? So, when you're talking about data as an asset, to me, the asset is intelligence, right? So, customer intelligence, to me, is the first analytical building block for you to start re-imagining your business model. The second block, very clearly, is, fantastic, I've re-imagined customer experience, I've re-imagined how I am going to engage with my customer; is your product, and service, intelligent enough to deliver that experience? Because experience has to change with customers wanting new things. You know, today I was okay with buying that item online and getting the shipment delivered to me in four days. But that may change; I may need overnight shipping. How do you know that, right? Are you really aware of my preferences, and how quickly is your product and service aligning to that change? And, to your point, if I have customer intelligence and product intelligence sorted out, I better make sure that my business processes are equally capable of institutionalizing intelligence. Right? So, my process orchestration, whether it's my supply chain, whether it's my order management, whether it's my, you know, let's say fulfillment process; all of these must be equally intelligent. So, in my mind, these are three intelligence blocks: there's customer intelligence, product intelligence, and operations intelligence. If you have these three building blocks in place, then I think you can start thinking about what your new data foundation should look like. >> I want to take that and overlay kind of what's going on in the landscape of the industry. You have the infrastructure world, where you buy some racks and stack the servers; cloud's now on the scene, so there's overlapping there. We used to have a big data category, you know, Hadoop; but that's now AI and machine learning and data warehousing. It's kind of its own category, call it AI. And then you have kind of emerging tech, whether you call it blockchain, these kinds of... the confluence of all these things. But there's a data component that sits in the center of all these things. Security, data, IoT traverse infrastructure, cloud, the classic data industry, analytics, AI, and emerging tech. You need data that traverses all these new environments. How does someone set up their architecture so that, because now I say, okay, I've got a big data analytics package over here, I'm doing some analytics, next-gen analytics, but now I've got to move data around for the cloud services, or for an application? So you're seeing data being architected to be addressable across multiple environments. >> Great point, John. In fact, that leads logically to the next thing that my team and I are working on. We are calling it the Adaptive Data Foundation. Right? The reason why we chose the word adaptive is because in my mind it's all about adapting to change. I think Charles Darwin, or somebody, said that it's not the survival of the fittest, or the survival of the species that is most intelligent, but the survival of those who can adapt to change, right? To me, your data foundation has to be super adaptive. So what we've done is, in fact, my notion, and I keep throwing this at you every time I meet you, is that in my opinion, big data is legacy. >> John: Yeah, I would agree with that. >> And it's coming. >> John: The debate. >> It's pretty much legacy in my mind. Today it's all about scale-out, responsive compute, the data world. Now, if you looked at most of the architectures of the past in the data world, it was all about store and forward, right? It's a left-to-right architecture. To me it's become a multi-directional architecture. Therefore what we have done is, and this is where I think the industry is still struggling, and so are our customers: I understand I need to have a new, modern data foundation, but what does that look like? What does it feel like? So with the Adaptive Data Foundation... >> They've never seen it before, by the way. >> They have not seen it. >> This is new. >> They are not able to envision it. >> It is net new. >> Exactly. They're not able to envision it. So what I tell my clients is, if you really want to reimagine, just as you're reimagining your business model and your operating model, you better reimagine your data model. Is your data model capable of high-velocity resolutions? Whether it's identity resolution of a client who's calling in, whether it's the resolution of the right product and service to deliver to the client, whether it's your process orchestration being able to quickly resolve that this distribution center is better capable of servicing their customer need. You better have that kind of environment, right? So, somebody told me the other day that Amazon can identify an analytical opportunity, deliver a new experience, and productionize it in 11.56 seconds. Today my customers, on average, the enterprise customers, barely get a reasonable release on a monthly basis. Forget about 11.56 seconds. So if they have to move at that kind of velocity, and that kind of responsiveness, they need to reimagine their data foundation. What we have done is, we have tried to break it down into three broad components. The first component, as we're saying, is that you need a highly responsive architecture, the question that you asked. And a highly responsive architecture, we've defined; we've got about seven to eight attributes that define what a responsive architecture is. And in my mind, I've been hearing a lot of this, even in today's conference; people are saying, oh, it's going to be a hybrid world, there's going to be on-prem, there's going to be cloud, there's going to be multicloud. My view is, if you're going to have all of that mess, you're going to die, right? So I know I'm being a little harsh on this subject, but my view is you've got to move to a very simplified, responsive architecture right up front. >> Well, you'd be prepared for any architecture. >> I've always said, and we've debated this many times, I think it's a cloud world, public cloud, everything, where the data center on premise is a huge edge. Right? So if you think of the data center as an edge, you can say, okay, it's a large edge. It's a big fat edge. >> As fundamentalists, I don't think it exists. Our fundamental position is that, increasingly, the physical realities of data, the legal realities of data, the intellectual property control realities of data, the cost realities of data are going to dictate where the processing actually takes place. There's going to be a tendency to try to move the activity as close to the data as possible so you don't have to move the data. It's not in opposition, but we think increasingly people are going to not move the data to the cloud, but move the cloud to the data. That's how we think. >> That's an interesting notion. My view is that the data has to be really close to the point of decision and execution, right? >> Peter: Yeah. Data has got to be close to the activity. >> It has to be very close to the activity. >> The locality matters. >> Exactly, exactly, and my view is, if you can, and I know it's tough, a lot of our clients are struggling with that, I'm pushing them to move their data to the cloud, only for one purpose: it gives them accessibility to a wide range of compute and analytical options. >> And also microservices. >> Oh yeah. >> We had a customer on earlier who's moved to the cloud. This is what we're saying about the data center being an edge. Hybrid cloud just means you're running cloud operations, which just means you've got to have a data architecture that supports cloud operations, which means orchestration, not having siloed systems, but essentially having this kind of data traversal with workload management, and I think that seems to be the consistency there. This plays right into what you're saying. That adaptive platform has to enable that. >> Exactly. >> If it forecloses it, then you're missing an opportunity. I guess, how do you... Okay, tell me about a customer where you had the opportunity to do the adaptive platform, and they say, no, I want a silo inside my network, I've got the cloud for that, I've got the proprietary system here, which is eventually foreclosing their future revenue. How do you handle that scenario? >> So the way we handle that scenario is, again, by focusing on the end objective that the client has from an analytical opportunity. What I mean by that is, say a customer says, I need to be significantly more responsive in my service management, right? So if he says, I want to get that achieved, then what we start thinking about is, what is the responsive data architecture that can deliver that better outcome? Because, like you said, there's stuff in the data center, there's stuff all over the place; it's going to be difficult to take that all away. But can I create a purpose for change? Many times you need a purpose for change. So the purpose being, if I can get to a much more intelligent service management framework, I will be able to either take cost out or increase my revenue through services. It has to be tied to an outcome. So then the conversation becomes very easy, because you're building a business case for investing in change, resulting in a measurable business outcome. So that engineer-to-purpose is the way I'm finding it easier to have that conversation. And I'm telling the client, keep what you have; you've got all the spaghetti mess, as somebody said, right? You've got all of the spaghetti mess out there. Let us focus on, if there are 15 data sets that we think are relevant for us to deliver service management intelligence, let's focus on those 15 data sets. Let's get that into a new, scalable, hyper-responsive, modern architecture. Then it becomes easier. Then I can tell the customer, now we have created an ecosystem where we can truly get to that 11.56-second analytical opportunity getting productionized. Move to experiment-as-a-service; that's another concept. So all of that, in my opinion, John, is if we can put a purpose around it, as opposed to saying let's rip and replace, let's do this large-scale transformation program; those things cost a lot of money. >> Well, the good news is containers and Kubernetes are showing a way to get those projects moving to cloud-native as fast as possible. Love the architecture vision. Love to follow up with you on that. Great conversation. I think that's a path, in my opinion. Now, short-term, the house is on fire in many areas. I want to get your thoughts on this final question. GDPR: the house is on fire, it's kind of critical, it's kind of tactical. People are, I don't want to say freaking out, but they're saying, okay, what does this mean? Okay, it's a signal, it is important. I think it's a technical mess. I mean, where's the data? What schema? John Furrier, am I J Furrier, or Furrier, John? There's data on me everywhere inside the company. It's hard. >> Arun: It is. >> So, how are you guys helping customers navigate the landscape of GDPR? >> GDPR is actually a much bigger problem than we all thought it was. It is securing things at the source system, because there are vulnerabilities in the source systems; forget about it even entering into any sort of mastering or data lake. Securing it at the source, that is so critical. Then, as you said, the same John Furrier, who is probably exposed to GDPR, is defined in ten different ways. How do I make sure that those ten definitions are managed? >> Tells you, you need an adaptive data platform to understand it. >> So right now most of our work is just doing that impact analysis, right? Whether it's at a source system level, whether it has data governance issues, data security issues, mastering issues. So it's a fairly complex problem. I think customers are still grappling with it. They're barely, in my opinion, getting to the point of having that plan, because May of 2018 was when you were supposed to show evidence of a plan. So I think there... >> The plan is we have no plan. >> Right, the plan of the plan, I guess, is what they're going to show, as opposed to the plan. >> Well, I'm sure it's keeping you guys super busy. I know it's on everyone's mind. We've been talking a lot about it. Great to have you on again. Great to see you. Live here at Informatica World, day one of two days of coverage at theCUBE here in Las Vegas. I'm John Furrier here with Peter Burris, with more coverage after this short break. (techno music)
SUMMARY :
Arun Varadarajan of Cognizant joins John Furrier and Peter Burris on theCUBE at Informatica World 2018 in Las Vegas. He argues that data has to live close to the activity it serves, including the edge, and that getting there takes an adaptive, responsive data architecture tied to a clear business purpose: rather than a rip-and-replace transformation program, focus the handful of data sets that matter for service management intelligence on a modern, scalable architecture and build the business case around a measurable outcome. The conversation closes on GDPR, where securing data at the source, reconciling the many definitions of the same person, and data governance and mastering issues mean many customers are still only at the stage of showing a plan.
Caitlin Gordon, Dell EMC | Dell Technologies World 2018
>> Announcer: Live from Las Vegas, it's the Cube. Covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Well welcome back. Glad to have you live here on the Cube as we continue our coverage of Dell Technologies World 2018. We are live in Las Vegas. We're in the Sands Exposition Center. I'm with Keith Townsend who had a heck of a night last night. Just a good chicken-and-waffle Las Vegas night. >> You know what? One o'clock in the morning is chicken and waffles here in the Grand Lux, and the view of Venetian, I have to eat at Palazzo because the one in the Venetian closes at 11. >> Oh my, well you know how to live. You know how to live. And I've always said that about you. (laughs) It's a pleasure to welcome as our first guest of the day, Caitlin Gordon, who is the Director of Storage Marketing at Dell EMC. And good afternoon, Caitlin. Thanks for joining us. >> Thank you so much for having me. >> John: A Cube vet, right? You're a Cube veteran. >> I mean as three, is that like, is you're over the hump as a veteran? >> John: Oh absolutely. >> All right, then yes, I'm in. >> You deserve a varsity letter now. >> Aw, do I get a letter jacket too? >> Well, we'll work on that later. We'll give you a Cube sticker for now how 'about that? >> Okay, I'll take a sticker. >> All right, so you've given, you've launched I would say given birth, but you've launched a brand new product today, PowerMax. Tell us all about that. First off, paint us the big picture, and we'll drill down a little bit and find out what's so new about this. >> Yeah, absolutely. So hot off the presses. Announced just two hours ago in the keynote this morning. So PowerMax is, really, the future of storage. The way we're talking about it, it is fast. It is smart and it's efficient. So we could kind of go through each one of those, but the headline here, this is modern tier zero storage. It's designed for traditional applications of today, but also next gen applications like real-time analytics. We have some metrics that show us that up to 70% of companies are going to have these mission-critical, real-time analytic workloads. And they're going to need a platform to support those and why shouldn't it be the same platform that they already have for those traditional workloads. >> So let's just go back. What makes it smarter? And what makes it more efficient? You know, what makes it faster? >> Caitlin: Can we start with fast? >> Yeah sure. >> Okay, that's my favorite one. So fast. I've got some good hero numbers for ya. So we'll start there. 10 million IOPS. That makes it the world's fastest storage array. Full stop. No caveats to that. 150 gigabytes a second throughput. We've got under 300 microseconds latency. That's up to 50% faster than what we already have with VMAX All Flash. So that's great. That's wicked fast, as Bob said, right? But how do we actually do that is a little bit more interesting. So the architecture behind that, it is a multi-controller, scale out architecture. Okay, that's good. That's check. You had a good start with that. But the next thing we did is we built that with end-to-end NVME. So end-to-end NVME means it's NVME-based drives, flash drives now, SCM drives, next generation media coming soon. It's also NVME over Fabric ready. So we're going to have a non-disruptive upgrade in the very near future to add support for NVME over Fabric. So that means you can get all the way from server across the network, to your storage array with NVME. It's really NVME done right. 
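As a quick back-of-the-envelope illustration of what those headline figures imply, the sketch below applies Little's Law to the numbers quoted above; the 4 KB block size is an assumption added purely for the example, not a published spec.

```python
# Rough, illustrative arithmetic only: it uses the headline figures quoted
# above plus an assumed 4 KB block size, and is not a benchmark of any array.

iops = 10_000_000        # 10 million IOPS
latency_s = 300e-6       # under 300 microseconds per IO
throughput_gbs = 150     # 150 GB/s of throughput for larger IO

# Little's Law: average IOs in flight = arrival rate x time each IO spends in the system.
outstanding_ios = iops * latency_s
print(f"Concurrency needed to sustain that rate: ~{outstanding_ios:,.0f} IOs in flight")

# Bandwidth if every IO were a 4 KB block (an assumption for illustration).
small_block_gbs = iops * 4 * 1024 / 1e9
print(f"4 KB small-block IO alone would move ~{small_block_gbs:.0f} GB/s; "
      f"the {throughput_gbs} GB/s figure reflects larger-block throughput")
```

In other words, the array has to keep on the order of a few thousand IOs in flight at once to hit both the latency and the IOPS numbers at the same time.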
>> So let's talk about today. We've NVME, Fabric ready, which I love NVME over Fabric. Connectivity getting 10 million IOPS to the server in order to take care of that. What are the practical use cases for that much performance? What type of workloads are we seeing? >> Where we see this going in is to data centers where they want to consolidate all of their workloads, all of their practices, all of their processes, on a single platform. 10 million IOPS means you will never have to think about if that array can support that workload. You will be able to support everything. And again, traditional apps, but also these emerging apps, but also mainframe. IBM i, file, all on the same system. >> So can we talk about that as opposed to, let's say you even compare it to another Dell family technology. We just had the team Sean Amay and his VMware customer talking about SAP HANA on XtremIO. XtremIO is really great for one-to-one application mapping, so that's as SAP HANA. So are you telling me that PowerMax is positioned that I can run SAP HANA and in addition to my other data center workloads and get similar performance? >> Absolutely, it is the massive consolidator. It's kind of an app hoarder. You can put anything on it that you've got. And it's block, it's file, and then it's also got support for mainframe and IBM i, which there's still a significant amount of that out there. >> So that's an interesting thing. You're having all of these traditional data services. Usually when we see tier zero type of arrays, Dell EMC had one just last year, there's no services because you just, it's either go really fast or moderately fast and data services. How do you guys do that? >> Yeah well the benefit of where we're coming from is that we built this on the platform of the flagship storage array that's been leading the industry for decades. So what we did is we took the foundation of what we had with VMAX, and we built from that this end-to-end NVME PowerMax. So you get all of that best-in-class hardware, that optimized software, but it comes with all the data services. So you get six nines availability, best-in-class data protection, resiliency, everything that you'd need, so you never have to worry. So this is truly built for your mission-critical applications. >> Yeah, so really interesting speeds and feeds. Let's talk about managing this box. VMAX has come a long way from the Symmetrix days, so much easier to manage. However, we're worried today about data tiering, moving workloads from one area to another. These analytics workloads move fast. How does PowerMax help with day two operations? >> So you've heard the mention of autonomous infrastructure, right? Really PowerMax is autonomous storage. So what is has is it has a built-in, real-time, machine learning engine. And that's designed to use pattern recognition. It actually looks at the IOs and it can determine in a sub-millisecond time, what data is hot, what data should be living where, which data should be compressed. It can optimize the data placement. It can optimize the data reduction. And we see this as a critical enabler to actually leveraging next-generation media in the most effective way. We see some folks out there talking about SCM and using it more as a cache. We're going to have SCM in the array, side-by-side with Flash. Now we know that the price point on that when it comes out the door is going to be more than Flash. So how do you cost-effectively use that? 
You have a machine learning engine that can analyze that data set and automatically place the data on that when it gets hot or before it even gets hot, and then move it off it when it needs to. So you can put in just as much as you need and no more than that. >> So let's talk about scale. You know I'm a typical storage admin. I have my spreadsheet. I know what LUNs I map to what data and to what application. And I've statically managed this for the past 15 years. And it's served me well. How much better is PowerMax than my storage admin? I can move two or three data sets a day from cache to Flash. >> Really what this enables from a storage administrator perspective, you can focus on much more strategic initiatives. You don't have to do the day-to-day management. You don't have to worry about what data's sitting where. You don't have to worry about how much of the different media types you've put into that array. You just deploy it and it manages itself. You can focus on more tasks. The other part I wanted to mention is the fact that you heard Jeff mention this morning that we have Cloud.IQ in the portfolio. Cloud.IQ we're going to be bringing across the entire storage portfolio, including to PowerMax. So that will also really enable this cloud-based monitoring and predictive analytics to really take that to the next level as well. Simplify that even more. >> You know, I'd like to step back to the journey. More or less. When you start out on a project like this and you're reinventing, right, in a way. Do you set, how do you set the specs? You just rattled off a really impressive array of capability. >> Caitlin: Yeah. >> Was that the initial goal line or how was that process, how do you manage that? How do you set those kinds of goals? And how do you get your teams to realize that kind of potential, and some people might look at you a little cross-eyed and say, are you kidding? >> Caitlin: Right, right. >> How are we going to get there? I don't know. (laughs) >> We always shoot for the moon. >> John: Right. >> So we always, this type of product takes well over a year to get into market. So you saw PowerMax Bob on stage there talking about it. So his team is the one that really brings this to market. They developed those requirements two years ago. And they were really looking to make sure that at this time, as soon as the technology curve is ready on NVME, we were there, right? So this is just shipping with enterprise class, dual port, NVME drives. Those were not ready until right now. Right, those boxes start shipping next week. They are ready next week, right? So we're at the cutting edge of that. And that takes an extraordinary world-class engineering team. A product management team that understands our customers' requirements that we have today, 'cause we have thousands of customers, but more importantly is looking to what's also coming in the future. And then at some point in the process things do fall off, right? So we have even more coming in future releases as well. >> So let's talk connectivity into the box. How do I connect to this? Is this iSCSI, is this fiber channel? What connectivity-- >> So this is definitely fiber channel. And so our NVME over Fabric will be supported over fiber channel with this array. But we find with the install base, with our VMAX install base especially, they're very heavily invested in fiber channel today. So right now that's where we're still focused. 'Cause that's going to enable the most people to leverage it as quickly as possible. 
We're obviously looking at when it makes sense to have an IP-based protocol supported as well. >> So this storage is expensive on the back end. Talk to me about if data efficiency, dedup, are we coming out with. 'Cause a lot of these tier zero solutions don't have dedup out the box. >> Or they have it, but if you use it you can't actually get the performance that you paid for, right? >> There's no point in turning it on. >> Yeah, it's like yeah, we checked the box, but there's really no point. Yeah, so VMAX had compression. VMAX also had compression, and what we've done with PowerMax is we now have inline deduplication and compression. The secret to that is that it's a hardware-assisted. So it's designed to, that card actually will take in, it'll compress the data, and it also passes out the hashes you need for dedup. So that it's inline, it will not have a performance impact on the system. It can also be turned on and off by application and it can give you up to five-to-one data reduction. And you can leverage it with all your data services. Some competitive arrays, if you want to use encryption, sorry you can't actually use dedup. The way we've implemented it, you can actually do both the data reduction and the data services you need, especially encryption. >> So before we say goodbye, I'm just, I'm curious, when you see something like this get launched, right. Huge project. Year-long as you've been saying. And even further back in the making. Just from a personal standpoint, you get pumped? Are you, I would imagine-- >> Caitlin: I got to tell ya-- >> This is the end of a really long road for you. >> We have been worked, for the marketing team, we've been working on this for months. It is the best product I've ever launched. It's the best team I've ever worked with. In the past two days since I landed here to getting that keynote out the door has been so much adrenaline, built up, that we're just so excited to get this out there and share it with customers. >> And what's this done to the bar in your mind? Because you were here, now you're here. But tell me about this. What have you jumped over in your mind? >> We have set a very high bar. I'm not really sure what we're going to do at this point, right? From a product standpoint it is in a class by itself. There is just nothing else like it And from an overall what the team has delivered, from engineering all the way from my team, what we've brought together, what we've gotten from the executive, we've never done anything like it before. So we've set a high bar for ourselves, but we've jumped over some high bars before. So we've got some other plans in the future. >> I'm sorry go ahead. >> Let's not end the conversation too quickly. >> All right, all right, sure, all right. >> There is some-- >> He's got some burning questions. >> Yeah, I have burning, this is a big product. So I still have a lot of questions from a customer perspective. Let's talk data protection. You can't have mission-critical all this consolidation without data protection. >> Caitlin: Absolutely. >> What are the data protection features of the PowerMax? >> I'm so glad you asked. I spent a decade in data protection. It is a passionate topic of mine, right? So you look at data protection and kind of think of it as layered, within the array, so we have very efficient snapshot technology. You can take as many snaps as you need. Very, very efficient to take those. They don't take any extra space on them when you make those copies. 
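A rough software analogue of the inline data reduction described just above, hash each block to spot duplicates and compress only the unique ones, is sketched below. On the array that work is offloaded to a hardware-assist card, so treat this purely as a conceptual illustration; the sample blocks are artificial and highly compressible, which exaggerates the ratio.

```python
import hashlib
import zlib

# Conceptual sketch only: hash-then-compress on fixed-size blocks.
store = {}     # fingerprint -> compressed unique block
written = 0    # logical bytes received
stored = 0     # physical bytes kept after dedup and compression

def ingest(block: bytes) -> str:
    """Deduplicate and compress one block, returning its fingerprint."""
    global written, stored
    written += len(block)
    fingerprint = hashlib.sha256(block).hexdigest()
    if fingerprint not in store:                 # only unique blocks consume space
        store[fingerprint] = zlib.compress(block)
        stored += len(store[fingerprint])
    return fingerprint

blocks = [b"A" * 8192, b"A" * 8192, b"payroll batch 42 " * 512, b"A" * 8192]
refs = [ingest(b) for b in blocks]
print(f"data reduction: {written / stored:.1f}:1")   # exaggerated by the toy data
```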
>> Then can I use those as tertiary copies to actually perform, to point to workloads such as refreshing, QA, DAB, et cetera? >> Yeah, absolutely. You can mount those snapshots and leverage those for any type of use case. So it's not just for data protection. It's absolutely for active use as well. So it's kind of the on the array, and then the next level out is okay, how do I make a copy of that off the array? So the first one would be well do that to another PowerMax. So as you probably know, the VMAX really pioneered the entire primary storage replication concept. So we have certainly asynch if you have a longer distance, but a synchronous replication, but also Metro, if you have that truly active active-use case so, truly the gold standard in replication technologies. And our customers, it's one of the number one reasons why they say there is no other platform on the planet that they would ever use. And then, you go to the next level of we're really talking about backup. We have built in to PowerMax the capabilities to do a direct backup from PowerMax to a data domain. And that gets you that second protection copy also on a protection storage. So you have those multiple layers of protection. All the copies across all of the different places to ensure that have that operational recovery, disaster recovery in that array, and that the data's accessible at all times no matter what the scenario. >> So let's talk about what else we see. When we look at it, we go into our data center and you see a VMAX array, there's a big box with cabinets of shelves, and you're thinking, wow, this thing is rock solid. Look at the PowerMax. That thing is what about a six-- >> Caitlin: I think it's pretty cute, right? >> Yeah it's pretty cute. I love, that's a pretty array. (laughs) >> Yeah. >> You have one over there. So when you see a VMAX, it just gives you this feeling of comfort. PowerMax, let's talk about resiliency. Do we still have that same VMAX, rock solid, you go into a data center and you see two VMAX, and you're thinking this company's never going to go down. >> Caitlin: Right. >> What about PowerMax? >> Guess what? It is the same system. It's just a lot more compact. We have people consolidating from either VMAXs or competitive arrays, but they're in four racks and they come down into maybe half a rack. But you have all the same operating system, all the same data services, so you have non-disruptive upgrades. If you have to do a code upgrade across the whole array at the same time. You don't have to do rolling reboots of all the controllers. You can just upgrade that all at the same time. We have component-level fault isolation. So if a component fails, the whole controller doesn't go down. All you lose is that one little component on there until you're able to swap that out. So you have all of the resiliency that over six nines availability built into this array. Just like you did with the ones that used to be taking up a bit more floor tile space. The PowerMax is about 40% lower power consumption than you have with VMAX All Flash 'cause it can be supported in such a small footprint. >> So are we going to see PowerMax and converge system configurations? >> Yeah, absolutely. So if you're familiar with the VxBlock 1000, which we launched back in February, it will be available in a VxBlock 1000. And of course the big news on that is you have the flexibility to really choose any array. So it could be an X2 and a PowerMax in a VxBlock 1000. >> So that's curious. 
What is the, now that we have PowerMax, where's the position of the VMAX 250? >> So the, I'm glad you asked, 'cause it's an important thing to remember. VMAX All Flash is absolutely still around and we expect people to buy it for a good amount of time. The main reason being that the applications, the workloads, the customers, the data centers, that are buying these arrays, they have a very strict qualification policy. They take six, nine months, sometimes a year, to really qualify, even a new operating system. >> Keith: Right. >> Let alone a new platform. So we absolutely will be selling a lot of VMAX All Flash for the foreseeable future. >> Well, Caitlin, it's been a long time in the making, right? >> Absolutely. >> Huge day for you. >> Yes. >> So congratulations on that. >> Thank you, thank you. >> Great to have you here on the Cube. And best of luck, I'm sure, well you don't need it. Like I said, superior product, great start. And I wish you all the best down the road. >> Thank you. Hope to see you guys again soon. >> Caitlin Gordon. Now that'd be four. >> Yes, it'd be four. >> We'd love to have you back. Caitlin Gordon joining us from Dell EMC. PowerMax, the big launch coming just a couple hours ago here at Dell Technologies World 2018. Back with more live coverage here on the Cube after this short time out. (upbeat music)
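The machine-learning placement engine Caitlin describes earlier in the conversation is proprietary, but the general technique, score data by recent access heat and keep only the hottest extents on the scarcer, faster media, can be sketched in a few lines. The tier names, decay factor, and capacity below are invented for illustration and are not PowerMax's actual algorithm.

```python
from collections import defaultdict

# Illustrative sketch of heat-based placement: decay old access counts so the
# score tracks recent activity, then promote the hottest extents to the faster
# (and smaller) tier. Tier names and capacity are invented for the example.
DECAY = 0.5          # weight given to history each interval
FAST_TIER_SLOTS = 2  # how many extents fit on the fast media

heat = defaultdict(float)

def record_interval(access_counts):
    """Fold one monitoring interval's access counts into the heat scores."""
    for extent in set(heat) | set(access_counts):
        heat[extent] = DECAY * heat[extent] + access_counts.get(extent, 0)

def placement():
    """Return {extent: tier} with the hottest extents on the fast tier."""
    ranked = sorted(heat, key=heat.get, reverse=True)
    return {e: ("fast" if i < FAST_TIER_SLOTS else "capacity")
            for i, e in enumerate(ranked)}

record_interval({"extent_a": 900, "extent_b": 40, "extent_c": 5})
record_interval({"extent_a": 20, "extent_b": 800, "extent_d": 600})
print(placement())   # extent_b and extent_d now rank hottest
```

Swapping in a real learned model would change how the heat score is produced, not the placement step that follows it.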
SUMMARY :
Caitlin Gordon, Director of Storage Marketing at Dell EMC, joins John and Keith Townsend on theCUBE at Dell Technologies World 2018 in Las Vegas to unpack the PowerMax launch. She walks through the headline numbers, up to 10 million IOPS, 150 GB/s of throughput, and under 300 microseconds of latency on an end-to-end NVMe, multi-controller scale-out design, along with the built-in machine learning engine for data placement, hardware-assisted inline deduplication and compression, snapshots, replication, and backup to Data Domain. The conversation also covers fiber channel connectivity with NVMe over Fabric readiness, VxBlock 1000 availability, support for block, file, mainframe, and IBM i workloads, and how PowerMax sits alongside VMAX All Flash.
Sushil Kumar, CA Technologies | AWS re:Invent
>> Announcer: Live from Las Vegas, it's the Cube, covering AWS Reinvent 2017, presented by AWS, Intel and our ecosystem of partners. (ambient music) >> We're back live here on the Cube, along with Stu Miniman, I am John Walls and we're live here right smack dab in the middle of the show floor. A giant show floor here at the Sands Expo between the Palazzo and the Venetian, Las Vegas. Reinvent AWS putting on a four day extravaganza. Keynotes this morning, they were jammed pack. The show floor continues to be just all a buzz with a lot of positive vibe and activity and here to talk about not only what's happening here but what's happening at CA Technologies is Sushil Kumar, who's the SVP of Product Management there, or an SVP. Sushil, nice to have you with us. Thanks for being here on the Cube. >> Thank you it is my pleasure, thanks for having me here. >> Let's talk about first off your idea about what's going on here, as far as the show goes, you said this particular event has a different feel to you than others you been at in the past. How so? >> Absolutely, you know, and that starts with the name itself, right? It's called Reinvent and what is very different about this show is this is all about creation, so all the conversation is about how can we use the latest and greatest technologies to build something new, right? And kudos to Amazon for creating that environment where they position themself as enabler rather than creators and the power of creation lies in masses, right, so it's amazing to see the energy and the creativity and truly it's infectious, right, so. >> So Sushil, it's interesting, I've seen a lot of AWS ads at airports. They're targeting, you know, their audience are builders and that's why I think exactly what you're saying. Want to hear what you're hearing from customers because you know, for some customers that's super exciting. You talk about the developer community. For some customers it's like wait, hold on, are they talking to me? Are they leaving me behind? I'm curious what you're hearing from the customer. >> You know that's a great question because look, we have been a 40 year old company and the only way you survive for so long is you constantly adapt yourself to the changing needs that customers have. And one of the biggest challenges that customers have today is around reinvention, right? Which is in, whether you call it digital transformation, whether you call it software-defined business, businesses across all industries and segments, they're all trying to reinvent and become as creative as, you know, some of the start-ups from Silicon Valley. And one of the biggest challenges that they are facing is how to go about that process, right? And we know a thing or two about that because we ourself have been a 40 year old company. So, our motto as a company is around helping our customers become modern software factories, which is all about how to become an agile and iterative, and essentially be obsessively focused around customer experience, because that's one thing, you know, people may have a lot of definitions of digital transformation, but one thing that separates Amazon and Netflix of the world is obsession on customer experience. And, we are playing a part, right? The whole model of software factory is about making these businesses much more customer-centric. 
And the part of business that I come from is all about how can we provide our customers the ability to measure customer experience, use that information to improve their product and always constantly iterate so that they meet or exceed customer experience. So one of the products that we have is called Digital Experience Insights, right. It's a product that essentially provides a holistic overview of what I call the entire digital delivery chain, which starts all the way with the user's device, which could be a mobile device, could be web, all the way to the layers of business transactions, and then the maze of infrastructure, which may involve cloud, maybe more than one cloud, maybe even in some cases even mainframes, so that's another thing that we see customers struggling with because many times, as geeks, we tend to paint the picture black and white. Either you are modern or you are legacy. A lot of customers fall somewhere in the middle. And, you know, they are looking up to us to help successfully navigate that transformation. And that's exactly what we are focused on. That's why Amazon has been such a great partner. >> I want to, you've used the term reinvent a bunch, and when I think about the analytics space, we've gone through a bunch of waves here. Big data, one, lot of discussion, some mixed results from customers. Real renaissance in what's happening really in the analytics space. Amazon, of course, participating in that. What are you seeing, what's new at CA in working with AWS on that? >> You know, again, if you look at the problem that I described, you know, the problem statement is very simple. We all want to understand customer experience. We all want to put ourselves in the shoes that customers have. We live in a world where the customer attention span is three seconds. If within three seconds, your page doesn't load, or something that they expect doesn't happen, you have lost that customer forever. But in order to solve that simple problem, which is to be proactive and, you know, and have empathy for customers, the challenge is that you need so much of data, right? You need data from the customer's handheld devices, you need data from all the servers, all the applications. You need log data, you need metric data, you need event data, and all of the sudden you realize that there is so much of data that the conventional way of monitoring, which is around dashboards and alerting, doesn't work anymore, right? What you truly need to do is to take all of these signals and automatically analyze to extract insight, that one or two actionable insights that help you stay ahead of the curve. And that's all about analytics. In fact, I cannot think of a better use case for analytics than this whole digital experience management, because it's not that the customers haven't had these data points. They have had all the data points, but mostly they have been siloed, and like we saw this morning in Andy Jassy's announcement, the machine learning and AI field is constantly evolving, but the machines can only do so much unless you help feed them the data, right? And that's one of the things that we are trying to do with our operational intelligence solutions as well as Digital Experience Insights, is to bring all the data together, feed it to the machines and algorithms, so that they can make sense out of that, and extract holistic insight that helps customers stay ahead of the curve. I'll give you three examples. 
For example, you know, we have a major broadcast partner, who did a mobile app during the election time, and they needed to engage their customers better. They needed to understand what customers were going through, and through the use of the application experience analytics, they were able to iterate applications and within three months their user retention increased threefold. There's another customer which is a major broadcaster based out of Europe. They use our product to essentially get the data from the cycling events to provide their customers a very unique second screen experience. And that's I think the exciting part. As much as we all love our products and tools, it's all about what unique opportunities we provide to our customers to innovate and succeed in the marketplace. >> Now, we've talked about reinvent as a term. It's necessary, right, but it's scary too. I mean, if I'm a company and I'm just moving along, my life's fine, right? I don't want to have to upset my applecart if I don't have to. And yet, when you bring these notions to me of new capabilities, of putting OI into practice for me, new analytics, my eyes start to roll a little bit, my head starts to spin a little bit. So what kind of hand holding do you have to do at the end of the day to show them that there is a better mousetrap, there is a better way, and that if you don't change, your success today is going to be gone tomorrow. >> And that's a really great question. I think what we see from all customers, almost all of them want to change. But oftentimes the magnitude of the exercise is so big that they're daunted. They don't even know where to start. >> John: Like where do I bite first? >> Yeah, and that's exactly what we're doing as part of a modern software factory, because this process involves cultural transformation. It's not just about tools. It's not just about technology. It's about how do you become iterative, right? Going from this year-long development process to being able to build something in two weeks, right? So that's why what we have done is to bring together a set of solutions that definitely includes our product but as a company, we have one of the largest groups of agile coaches. So we essentially meet customers halfway in terms of handholding them, and doing everything that we can do to walk the journey with them step by step. The same thing we do in terms of providing customers the flexibility on how they want to navigate this journey. So for example, now we have both SaaS as well as on-prem products. Some customers are ready to go all into cloud, other customers need time, and that's why we have adopted a very practical approach of maintaining a single code base. You can use our product as SaaS, or we can deliver that product in a more private single-tenant mode, or we can give you the same code on-prem. And the whole goal is to walk the journey with you at the pace that you are comfortable. And I think our partnership with Amazon is also of equal importance, because customers now need to take their technology from Amazon, products from us, kind of marry it all together, and the more alliances and the closer partnerships that we in the industry can build, like we have with Amazon, the easier it becomes for our customers. >> That's kind of how you characterize this show too, right? Collaborative, collegial, enabling. >> It's very collaborative, very collaborative, very energetic and very inspiring too. >> John: Alright, thanks for the time. >> Thank you so much, thank you. 
>> John: Thank you for being with us and good luck down the road. Thanks for being here on the Cube. >> My pleasure being here. >> We're back with more live coverage here from Reinvent. We're at AWS in Las Vegas and Stu and I will be back with more right after this. (ambient music)
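As a loose illustration of the "take all of these signals and automatically analyze to extract insight" idea from the conversation above, and not CA's product code, the sketch below pools metric samples from several layers of a delivery chain and flags any signal whose latest reading sits far outside its own recent baseline. The signal names and threshold are invented.

```python
from statistics import mean, stdev

# Illustrative only: each entry is a recent window of samples for one signal
# somewhere in the delivery chain (device, application, infrastructure).
signals = {
    "mobile.page_load_ms":   [310, 295, 330, 305, 1450],   # last sample spikes
    "app.checkout_tx_per_s": [120, 118, 125, 122, 121],
    "infra.db_cpu_pct":      [55, 60, 58, 57, 61],
}

def actionable_insights(window, z_threshold=3.0):
    """Return the signals whose latest value deviates sharply from baseline."""
    flagged = []
    for name, samples in window.items():
        baseline, latest = samples[:-1], samples[-1]
        spread = stdev(baseline)
        if spread and abs(latest - mean(baseline)) / spread > z_threshold:
            flagged.append((name, latest))
    return flagged

for name, value in actionable_insights(signals):
    print(f"investigate {name}: latest sample {value}")
```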
SUMMARY :
Sushil Kumar, SVP of Product Management at CA Technologies, joins John Walls and Stu Miniman on theCUBE at AWS re:Invent in Las Vegas. He describes CA's "modern software factory" approach to digital transformation and its Digital Experience Insights product, which monitors the entire digital delivery chain, from the user's device through business transactions down to cloud and mainframe infrastructure, and applies analytics and machine learning to turn log, metric, and event data into actionable insight. He shares customer examples from broadcasting, and explains how agile coaching, a single code base delivered as SaaS or on-prem, and the partnership with AWS help customers navigate the journey at their own pace.
Scott Masepohl, Intel PSG | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE covering AWS re:Invent 2017. Presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone. We are here live at AWS re:Invent in Las Vegas. This is 45,000 people are here inside the Sands Convention Center at the Venetian, the Palazzo, and theCUBE is here >> Offscreen: I don't have an earpiece, by the way. >> for the fifth straight year, and we're excited to be here, and I wanna say it's our fifth year, we've got two sets, and I wanna thank Intel for their sponsorship, and of course our next guest is from Intel. Scott Macepole, director of the CTO's office at Intel PSG. Welcome to theCUBE. >> Thank you. >> Thanks for coming on. So, had a lot of Intel guests on, lot of great guests from customers of Amazon, Amazon executives, Amy Jessup coming on tomorrow. The big story is all this acceleration. of software development. >> Scott: Right. >> You guys at the FPGA within intel are doing acceleration at a whole nother level. 'Cause these clouds have data centers, they have to power the machines even though it's going serverless. What's going on with FPGAs, and how does that relate to the cloud world? >> Well, FPGAs I think have a unique place in the cloud. They're used in a number of different areas, and I think the great thing about them is they're inherently parallel. So you know, they're programmable hardware, so instead of something like a GPU or a purpose-built accelerator, you can make them do a whole bunch of different things, so they can do computer acceleration, they can do network acceleration, and they can do those at the same time. They can also do things like machine learning, and there's structures built inside of them that really help them achieve all of those tasks. >> Why is it gonna pick up lately? Because what are they doing differently now with FPGAs than they were before? Because there's more talk of that now more than ever. >> You know, I mean, I think it's just finally come to a confluence where the programmability is finally really needed. It's very difficult to actually create customized chips for specific markets, and it takes a long time to actually go do that. So by the time you actually create this chip, you may have not had the right solution. FPGAs are unique in that they're programmable, and you can actually create the solution on the fly, and if the solution's not correct you can go and you can actually change that, and they're actually pretty performant now. So the performance has continued to increase generation to generation, and I think that's really what sets them apart. >> So what's the relationship with Amazon? Because now I'm kinda connecting the dots in my head. Amazon's running full speed ahead. >> Scott: Yeah. And they're moving fast, I mean thousands of services. Does FPGAs give you guys faster time to market when they do joint designs with Intel? And how does your relationship with Amazon connect on all this? >> Absolutely, we have a number of relationships with Amazon, clearly the Xeon processors being one of them. The FPGAs are something that we continue to try to work with them on, but we're also in a number of their other applications, such as Alexa, so and there's actually technologies within Alexa that we could take and implement either in Xeon CPUs or actually in FPGAs to further accelerate those, so a lot of the speech processing, a lot of the AI that's behind that, and that's something that, it's not very prevalent now, but I think it'll be in the future. 
>> So, all that speech stuff matters for you guys, right? That helps you guys, the speech, all the voice stuff that's happening, and the Alexa news, machine learning. >> Right. >> That's good for you, right? I mean, that, I mean... >> It's very good, and it's actually, it's really in the FPGA sweet spot. There's a lot of structures within the FPGAs that make them a lot better for AI than a GPU. So for instance, they have a lot of memory on the inside of the device, and you can actually do the compute and the memory right next to where it needs to be, and that's actually very important, because you want the latency to be very low so that you can process these things very quickly. And there's just a phenomenal amount of bandwidth inside of an FPGA today. There's over 60 terabytes a second of bandwidth in our mid-range Stratix 10 device. And when you couple that together with the unique math capabilities, you can really build exactly what you want. So when you look at GPUs, they're kinda limited to double precision floating pointers, single precision, or integer. The FPGAs can do all of those and more, and you can actually custom build your mathematical path to what you need, save power, be more efficient, and lower the latency. So... >> So Andy Jessup talked about this is a builder's conference. The developers, giving the tools to the developers they need to create amazing things. One of the big announcements was the bare metal servers from AWS. >> Scott: Yeah. How do you see something like an FPGA playing in a service like that? >> Well, the FPGAs could use to help provide security for that. They could obviously be used to help do some of the network processing as well. In addition, they could be used in a lot of classical modes that they could be used in, whether it's like an attached solution for pure acceleration. So just because it's bare metal doesn't mean it can't be bare metal with FPGA to do acceleration. >> And then, let's talk about some of the... You guys, FPGAs is pretty big in the networking space. >> Scott: Yeah. >> Let's talk about some of the surrounding Intel technologies around FPGAs. How are you guys enabling your partners, network partners, to take advantage of X86, Xeon, FPGAs, and accelerating networking services inside of a solution like Amazon. >> We have a number of solutions that we're developing, both with partners and ourselves, to attach to our nix, and other folks' nix, to help accelerate those. We've also released what's called the acceleration stack, and what that's about is really just kinda lowering the barrier of entry for FPGAs, and it has actually a driver solution that goes with it as well, it's called OPAE, and what that driver solution does, it actually creates kind of a containerized environment with an open source software driver so that it just really helps remove the barrier of, you know, you have this FPGA next to a CPU. How do I talk to it? How can we connect to it with our software? And so we're trying to make all of this a lot simpler, and then we're making it all open so that everybody can contribute and that the market can grow faster. >> Yeah, and let's talk about ecosystem around data, the telemetry data coming off of systems. A lot of developers want as much telemetry data, even from AWS, as possible. >> Scott: Yeah. >> Are you guys looking to expose any of that to developers? 
>> It's always something under consideration, and one of the things that FPGAs are really good at is that you can kinda put them towards the edge so that they can actually process the data so that you don't have to dump the full stream of data that gets generated down off to some other processing vehicle, right? So you can actually do a ton of the processing and then send limited packets off of that. >> So we looked at the camera today, super small device doing some really amazing things, how does FPGAs playing a role in that, the IOT? >> They do a lot of, FPGAs are great for image processing. They can do that actually much quicker than most other things. When you start listening, or reading a little bit about AI, you'll see that a lot of times when you're processing images, you'll have to take a whole batch of them for GPUs to be efficient. FPGAs can operate down at a batch size of one, so they can respond very quickly. They can work on individual images, and again, they can actually do it not just efficiently in terms of the, kinda the amount of hardware that you implement, but efficiently in the power that's required to go do that. >> So when we look at advanced IOT use cases, what are some of the things that end-user customers will be able to do potentially with FPGAs out to the edge, of course less data, less power needed to go back to the cloud, but practically, what are some of the business outcomes from using FPGAs out at the edge? >> You know, there's a number of different applications, you know, for the edge. If you go back to the Alexa, there's a lot of processing smarts that actually go on there. This is an example where the FPGA could actually be used right next to the Xeons to further accelerate some of the speech, and that's stuff that we're looking at now. >> What's the number one use case you're seeing that people, what's the number one use case that you're seeing that people could relate to? Is it Alexa? Is it the video-- >> For the edge, or? >> Host: For FPGAs, the value of accelerating. >> For FPGAs, I mean, while there's usage well beyond data center, you know. There's a classic what we would call wire line where it's used in everything today. You know, if you're making a cellphone call, it likely goes through an FPGA at some point. In terms of data center, I think where it's really being used today, there's been a couple of very public announcements. Obviously in network processing in some of the top cloud providers, as well as AI. So, you know, and I think a lot of people were surprised by some of those announcements, but as people look into them a little further, I think they'll see that there's a lot of merit to that. >> The devices get smaller and faster and just the deep lens device has got a graphics engine that would've been on a mainframe a few years ago. I mean, it's huge software power. >> Yeah. >> You guys accelerate that, right? I mean I'm looking, is that a direction? What is the future direction for you guys? What's the future look like for FPGAs? >> It's fully programmable, so, you know, it's really limited by what our customers and us really wanna go invest in. 
You know, one of the other things that we're trying to do to make FPGAs more usable is remove the kind of barrier where people traditionally do RTL, if you're familiar with that, they actually do the design, and really make it a lot more friendly for software developers, so that they can write things in C or openCL, and that application will actually end up on the inside of the FPGA using some of these other frameworks that I talked about, the acceleration stack. So they don't have to really go and build all the guts of the FPGA, they just focus on their application, you have the FPGA here whether it's attached to the network, coherently attached to a processor, or next to a processor on a, on PCI Express, all of those can be supported, and there's a nice software model to help you do all that development. >> So you wanna make it easy for developers. >> Scott: We wanna make it very easy. >> What specifically do you have for them right now? >> We have the, they call it the DLA framework, the deep learning framework that we released. As I said before, we have the acceleration stack, we have the OPEA which is the driver stack that goes along with that, as well of all our, what we call our high-level synthesis tools, HLS, and that supports C and openCL. So it basically will take your classic software and convert it into gates, and help you get that on the FPGA. >> Will bots be programming this soon? Soon AI's going to be programming the FPGAs? Software, programming software? >> That might be a little bit of a stretch right now, but you know, in the coming years perhaps. >> Host: Scott, thanks for coming onto theCUBE, really appreciate it. >> Thanks for having me. >> Scott Macepole who is with Intel, he's the director of the CTO's office at Intel PSG, they make FPGAs, really instrumental device in software to help accelerate the chips, make it better for developers, power your phone, Alexa, all the things pretty much in our life. Thanks for coming on the Cube, appreciate it. >> Thank you. >> We'll be back with more live coverage. 45,000 people here in Las Vegas, it's crazy. It's Amazon Web Services re:Invent, we'll be right back. (soft electronic music)
SUMMARY :
Scott Masepohl, director of the CTO's office at Intel PSG, joins theCUBE at AWS re:Invent in Las Vegas to discuss where FPGAs fit in the cloud. He explains that their programmability and parallelism let them accelerate compute, networking, and machine learning at once, with on-chip memory, high internal bandwidth, flexible math, and batch-size-of-one inference as key advantages. He also covers Intel's work with AWS, the acceleration stack and OPAE open source driver that lower the barrier to entry, and the high-level synthesis tools that let software developers target FPGAs from C and OpenCL.
Kevin Reid, Virtustream - Dell EMC World 2017
>> Announcer: Live from Las Vegas, it's theCUBE, covering Dell EMC World 2017. Brought to you by Dell EMC. >> Welcome back inside Dell EMC World 2017 here on theCUBE, we continue our coverage. Day 3 here in Las Vegas from the Sands Expo, sandwiched in between the Palazzo and the Venetian. A great show, a great vibe, and it's been a good show for Virtustream. And we have with us the president and CTO and a co-founder from Virtustream joining us now, Kevin Reid. Kevin, good to see you, how've you been? >> Been great, it's just very energizing being here this week. >> Yeah, what about the week for you? I'm sure you have a couple of announcements we'll get to in just a moment, but just want to get your take on the show here as we wind down. >> You know, the show's just been incredible. You know of course, it's the first year that they're all coming together, if you will, as the brand of Dell EMC as one show for the stage. It's been a great stage for us, great audience, looking at the range of countries and clients represented. We've actually just been blown away at the energy behind what Dell Technologies now represents as the overall set of brands in the portfolio. >> So let's get to the news that you made this week. One in the healthcare space, I know very important space for you, and in the connector space as well with vCloud. Let's go ahead and take them one at a time if you would. >> Absolutely, so healthcare cloud, you know, for us just a fantastic area when you look at just all the regulatory issues associated with healthcare in general, and certainly we don't have enough time on this show to go into what all that means, but the ramifications are. >> No, we'd like to get into HIPAA compliance if you don't mind. >> We actually talked about it yesterday, so if you want to talk about it again. >> Just kidding, just kidding, we don't have time here, right (laughs). >> It's just been fantastic because with all that change becomes all the investments that the healthcare companies are having to make, whether it's in EHR or EMR and as you look at changing out those systems of record that really run the critical patient care for those healthcare providers, it really presents a great opportunity. So what we've done is said let's leverage our core competencies of mission critical and let's gear that towards the healthcare space and let's leverage our compliance in HIPAA and other things like that and be able to bring to the market a capability that's multi-talented, that's utility oriented, but has that mission critical SLA that we're accustomed to providing our clients over the years. So we're very excited about that. We think it's a great market, a great industry overall, and we've seen fantastic feedback even in this show from clients who are very excited to now engage and what that could mean for them. >> So, Kevin, the connector announcement. VMware, we're at De\ll EMC World, VMware a huge part of the Dell Technologies' portfolio. What's the news around connector? >> So with the connector, what it really allows us to do is take what has been our cornerstone differentiation over the years, which is really around the mission critical, high service levels, when you think about guaranteed service levels, almost think of us as more of a managed infrastructure, as a service that has those high SLAs associated with it. 
So having clients be able to take the VMware estate and then be able to provision and manage workloads that are then being provisioned into the Virtustream high-SLA, mission-critical environment is a big step for those enterprise customers. It's a big step for Dell Technologies as a brand. And it doesn't necessarily change the fact that you can also do that with other public cloud providers, you just get a higher level of service by doing it with Virtustream. >> So let's talk a little bit about that value prop of Virtustream. Doing the acquisition, it kind of made sense to me. EMC, traditional enterprise, high availability company, you know you had the VMAX, the five-, six-9s array. Virtustream, you guys did a wonderful job with taking a complex application, SAP, providing some provisioning tools around that, and making that a consumable resource in the cloud. Talk to me a little bit about the conversations you've had on the show floor with traditional Dell EMC customers. Are they starting to really warm up to this expansion of the Virtustream brand beyond SAP into other mission critical apps? >> Absolutely, and that really represents the huge growth opportunity for us. As Virtustream we were very successful, as you mentioned, going into the SAP application space because typically SAP will be the system of record for a lot of these large enterprises. And what happens is, with your system of record, things like data persistence and performance guarantees, high IO, large footprint workloads, they're absolutely germane to those systems of record, but SAP is not the only application that fits into that category. When you look at all the different verticals and you look at the areas like we mentioned with healthcare and some of the key applications like Epic and MEDITECH and Cerner, and then you look at the other verticals, there are always these very key systems of record that require that sort of heavyweight capability around mission critical. And so leveraging all of our learnings in the application space, so that we can bring that level of mission critical infrastructure performance with that application-centric automation that is focused on that kind of capability, it just makes sense. So it opens up the aperture in terms of the number of apps that we can now run on the Virtustream platform. Technically we could do it before, but now with the reach of Dell EMC, it not only allows us the account penetration by getting in there with relationships that are already leverageable with Dell EMC, but it allows us to also reach the partner community on the software side and be able to talk to application vendors that we can actually bring on to the platform as well. So we're very excited about that. >> So this isn't really anything new for you guys in essence. SAP is what I like to call one of those core center-of-gravity applications, it's heavy. You're going to have a lot of applications around SAP, and those applications are going to be just as critical, transaction applications, payment processing, big data apps, and you guys have hosted those applications before. What are some of the lessons learned from hosting the SAP ecosystem of applications that you're now able to transfer to other enterprise applications? >> Well there are a couple of very key lessons that we've learned. So first of all, you're absolutely right in the sense that when you have that mission critical nucleus, all the things that sit in the ecosystem come along with it.
And for us, we've always for years been able to run anything that runs on the x86 platform, so we're certainly not limited to any specific application set. But what we've learned actually over the years in dealing with that concept of the ecosystem, the peripheral systems, is integration. And not only integration in the sense of technically allowing those systems to talk to each other, but what we find is that when a client is looking to set up a new training environment, or a new testing or QA environment, or they're even leveraging the concept of utility and consumption, if you don't need that system active at night, then you should shut it down. And if you don't need it on the weekend, you should shut it down, but yet in a lot of these complicated systems, the way in which the integration comes up and which systems talk first and then second and then third is very critical. So over the years we've picked up on things like that level of application automation, what we call landscape management. So you're not just managing a VM, you're managing an entire landscape, which you have to blueprint and then say, for that blueprint, if you're going to shut it down, what's the way of doing that that's fastest, but also runs the least risk of data corruption or other issues that can occur if you just, for some reason, fall out of sequence. So that's one of the very critical lessons that we've learned. The other piece of it is really around tweaking the environments, where we've found that by analyzing the actual resource consumption of these apps, which we measure in five-minute increments, it allows us to have a much better introspection, if you will, of that entire landscape. And so it allows us to predict, whether it's at night or certain times of the month, you know if you're in financial close as an example, or for some of our very labor-intensive environments that have warehouses or manufacturing, time and attendance systems that kick in at certain shift-change hours, when they need the resources, and allocate those resources accordingly. So these are some of the very critical lessons that we plan to take from our years of running and perfecting the art of running SAP and taking them to some of these other mission critical applications as well. >> Well, Kevin, again, great news that you've launched this week in a couple of respects. Glad to hear the show's going well. And just want to congratulate you personally, I mean, I always like having a co-founder on the show. It's just, you build something from scratch and obviously it's worked extremely well, so congratulations on that. >> Thank you very much. >> John: I admire that, so good for you. >> Thank you. >> Good to have you, Kevin Reid from Virtustream with us here on theCUBE. Back with more from Dell EMC World 2017 in just a bit. You are watching theCUBE here on SiliconANGLE TV. (bright techno music)
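The landscape management and utility-consumption ideas in that last answer can be made concrete with a short sketch. The Python below is a minimal illustration under assumed names (Tier, LandscapeBlueprint, and in_idle_window are invented for this example and are not Virtustream's tooling or API): a blueprint captures the startup order of a landscape, shutdown runs in the reverse order to reduce the risk of falling out of sequence, and a simple idle-window check stands in for the "shut it down at night and on weekends" consumption model.

```python
"""
A minimal, hypothetical sketch of the "landscape management" idea described
above: treat a group of related systems (database, app servers, peripheral
integrations) as one blueprint with an explicit startup order, and shut them
down in reverse order during idle windows so nothing falls out of sequence.
None of these class or function names come from Virtustream's tooling; they
are illustrative assumptions only.
"""
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List


@dataclass
class Tier:
    """One component of the landscape, e.g. the database or an app server."""
    name: str
    start: Callable[[], None]  # placeholder for the real provisioning call
    stop: Callable[[], None]


@dataclass
class LandscapeBlueprint:
    """An ordered landscape: tiers are listed in the order they must start."""
    name: str
    tiers: List[Tier] = field(default_factory=list)

    def start_all(self) -> None:
        # Bring systems up in dependency order: database first, then the
        # apps, then the peripheral integrations that talk to the apps.
        for tier in self.tiers:
            print(f"[{self.name}] starting {tier.name}")
            tier.start()

    def stop_all(self) -> None:
        # Shut down in reverse order so nothing keeps writing to a system
        # that has already gone away mid-transaction.
        for tier in reversed(self.tiers):
            print(f"[{self.name}] stopping {tier.name}")
            tier.stop()


def in_idle_window(now: datetime) -> bool:
    """Illustrative idle policy: weekends, plus nights from 22:00 to 06:00."""
    return now.weekday() >= 5 or now.hour >= 22 or now.hour < 6


if __name__ == "__main__":
    noop = lambda: None  # stand-in for real start/stop automation
    qa = LandscapeBlueprint(
        name="qa-landscape",
        tiers=[
            Tier("database", noop, noop),
            Tier("app-servers", noop, noop),
            Tier("integration-bus", noop, noop),
        ],
    )
    if in_idle_window(datetime.now()):
        qa.stop_all()   # consumption pricing: pay nothing overnight
    else:
        qa.start_all()
```

A real implementation would swap the no-op start/stop callables for the provider's actual provisioning calls and drive the idle policy from measured consumption, for example the five-minute metering mentioned above, rather than a fixed clock.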
SUMMARY :
Kevin Reid, president, CTO, and co-founder of Virtustream, joins theCUBE on Day 3 of Dell EMC World 2017 at the Sands Expo in Las Vegas. He covers the week's two Virtustream announcements: a healthcare cloud that brings HIPAA-compliant, mission-critical hosting to EHR/EMR systems of record, and a vCloud connector that lets VMware customers provision and manage workloads in Virtustream's high-SLA environment. Reid also explains how lessons from running SAP landscapes, including blueprint-driven startup and shutdown sequencing and five-minute resource metering, extend to other mission-critical applications now reachable through Dell EMC's customer and partner relationships.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kevin Reid | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Kevin | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
HIPAA | TITLE | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
third | QUANTITY | 0.99+ |
Sands Expo | EVENT | 0.99+ |
Virtustream | ORGANIZATION | 0.98+ |
Dell Technologies' | ORGANIZATION | 0.98+ |
this week | DATE | 0.98+ |
five minute | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
second | QUANTITY | 0.97+ |
one show | QUANTITY | 0.97+ |
Dell | ORGANIZATION | 0.95+ |
Day 3 | QUANTITY | 0.95+ |
first year | QUANTITY | 0.94+ |
vCloud | TITLE | 0.93+ |
Venetian | LOCATION | 0.92+ |
EMC | ORGANIZATION | 0.92+ |
VMAX | COMMERCIAL_ITEM | 0.91+ |
SAP | TITLE | 0.9+ |
Dell EMC World 2017 | EVENT | 0.89+ |
first | QUANTITY | 0.89+ |
MEDITECH | ORGANIZATION | 0.84+ |
EMC World 2017 | EVENT | 0.83+ |
SiliconANGLE TV | ORGANIZATION | 0.82+ |
Epic | ORGANIZATION | 0.8+ |
Dell EMC World | ORGANIZATION | 0.79+ |
Virtustream | TITLE | 0.77+ |
One | QUANTITY | 0.76+ |
Palazzo | LOCATION | 0.76+ |
SAP | ORGANIZATION | 0.73+ |
x86 | TITLE | 0.72+ |
Cerner | ORGANIZATION | 0.7+ |
VMware | TITLE | 0.7+ |
couple | QUANTITY | 0.69+ |
EMC World | EVENT | 0.62+ |
CTO | PERSON | 0.58+ |
theCUBE | ORGANIZATION | 0.53+ |
2017 | TITLE | 0.47+ |