Pat Conte, Opsani | AWS Startup Showcase
(upbeat music) >> Hello and welcome to this CUBE conversation here presenting the "AWS Startup Showcase: New Breakthroughs in DevOps, Data Analytics and Cloud Management Tools," featuring Opsani for the cloud management and migration track here today. I'm your host John Furrier. Today, we're joined by Patrick Conte, Chief Commercial Officer, Opsani. Thanks for coming on. Appreciate you coming on. Future of AI operations. >> Thanks, John. Great to be here. Appreciate being with you. >> So congratulations on all your success being showcased here as part of the Startup Showcase, future of AI operations. You've got the cloud scale happening. A lot of new transitions in this, quote, "digital transformation" as cloud scale goes next generation. DevOps revolution, as Emily Freeman pointed out in her keynote. What's the problem statement that you guys are focused on? Obviously, AI involves a lot of automation. I can imagine there's a data problem in there somewhere. What's the core problem that you guys are focused on? >> Yeah, it's interesting because there are a lot of companies that focus on trying to help other companies optimize what they're doing in the cloud, whether it's cost or whether it's performance or something else. We felt very strongly that AI was the way to do that. I've got a slide prepared, and maybe we can take a quick look at that, and that'll talk about the three elements or dimensions of the problem. So we think about cloud services and the challenge of delivering cloud services. You've really got three things that customers are trying to solve for. They're trying to solve for performance, the best performance, and, ultimately, scalability. I mean, applications are growing really quickly, especially in this current timeframe with cloud services and whatnot. 
They're trying to keep costs under control because, certainly, it can get way out of control in the cloud since you don't own the infrastructure, and, more important than anything else, which is why it's at the bottom, sort of at the foundation of all this, is they want their applications to be a really good experience for their customers. So our customer's customer is actually who we're trying to solve this problem for. So what we've done is we've built a platform that uses AI and machine learning to optimize, meaning tune, all of the key parameters of a cloud application. So those are things like the CPU usage, the memory usage, the number of replicas in a Kubernetes or container environment, those kinds of things. It seems like it would be simple just to grab some values and plug 'em in, but it's not. It's actually the combination of them that has to be right. Otherwise, you get delays or faults or other problems with the application. >> Andrew, if you can bring that slide back up for a second. I want to just ask one quick question on the problem statement. You've got expenditures, performance, customer experience kind of on the sides there. Do you see this tip a certain way depending upon use cases? I mean, is there one thing that jumps out at you, Patrick, from your customer's customer's standpoint? Obviously, customer experience is the outcome. That's the app, whatever. That's whatever we've got going on there. >> Sure. >> But are there patterns? 'Cause you can have good performance, but then budget overruns. Or all of them could be failing. Talk about this dynamic with this triangle. >> Well, without AI, without machine learning, you can solve for one of these, only one, right? So if you want to solve for performance, like you said, your costs may overrun, and you're probably not going to have control of the customer experience. If you want to solve for one of the others, you're going to have to sacrifice the other two. 
With machine learning though, we can actually balance that, and it isn't a perfect balance, and the question you asked is really a great one. Sometimes, you want to over-correct on something. Sometimes, scalability is more important than cost, but what we're going to do, because of our machine learning capability, is we're going to always make sure that you're never spending more than you should spend, so we're always going to make sure that you have the best cost for whatever performance and reliability factors you want to have. >> Yeah, I can imagine. Some people leave services on. Happened to us one time. An intern left one of the services on, and, like, where did that bill come from? So we kind of looked back, and we had to kind of fix that. There's a ton of action, but I've got to ask you, what are customers looking for with you guys? I mean, as they look at Opsani, what you guys are offering, what's different than what other people might be proposing with optimization solutions? >> Sure. Well, why don't we bring up the second slide, and this'll illustrate some of the differences, and we can talk through some of this stuff as well. So really, the area that we play in is called AIOps, and that's sort of a new area, if you will, over the last few years, and really what it means is applying intelligence to your cloud operations, and those cloud operations could be development operations, or they could be production operations. And what this slide is really representing is, in the upper slide, that's sort of the way customers experience their DevOps model today. Somebody says we need an application or we need a feature, the developers pull down something from Git. They hack an early version of it. They run through some tests. They size it whatever way they know that it won't fail, and then they throw it over to the SREs to try to tune it before they shove it out into production, but nobody really sizes it properly. It's not optimized, and so it's not tuned either. 
When it goes into production, it's just the first combination of settings that works. So what happens is, undoubtedly, there's some type of a problem, a fault or a delay, or you push new code, or there's a change in traffic. Something happens, and then you've got to figure out what the heck. So what happens then is you use your tools. First thing you do is you over-provision everything. That's what everybody does: they over-provision and try to soak up the problem. But that doesn't solve it because now your costs are going crazy. You've got to go back and find out and try as best you can to get root cause. You go back to the tests, and you're trying to find something in the test phase that might be an indicator. Eventually, your developers have to hack a hot fix, and the conveyor belt sort of keeps on going. We've tested this model on every single customer that we've spoken to, and they've all said this is what they experience on a day-to-day basis. Now, if we can go back to the slide, let's talk about the second part, which is what we do and what makes us different. So on the bottom of this slide, you'll see it's really a shift-left model. What we do is we plug in in the production phase, and as I mentioned earlier, what we're doing is we're tuning all those cloud parameters. We're tuning the CPU, the memory, the replicas, all those kinds of things. We're tuning them all in concert, and we're doing it at machine speed, so that's how the customer gets the best performance, the best reliability at the best cost. The way we're able to achieve that is because we're iterating this thing at machine speed, but there's one other place where we plug in and we help the whole concept of AIOps and DevOps, and that is we can plug in in the test phase as well. And so if you think about it, the DevOps guy can actually not have to over-provision before he throws it over to the SREs. 
He can actually optimize and find the right size of the application before he sends it through to the SREs, and what this does is collapse the timeframe, because it means the SREs don't have to hunt for a working set of parameters. They get one from the DevOps guys when they send it over, and this is how the future of AIOps is being really affected by optimization and what we call autonomous optimization, which means that it's happening without humans having to press a button on it. >> John: Andrew, bring that slide back up. I want to just ask another question. This tuning-in-concert thing is very interesting to me. So how does that work? Are you telegraphing information to the developer from the autonomous workload tuning engine piece? I mean, how does the developer know the right knobs, or where does it get that provisioning information? I see the performance lag. I see where you're solving that problem. >> Sure. >> How does that work? >> Yeah, so actually, if we go to the next slide, I'll show you exactly how it works. Okay, so this slide represents the architecture of a typical application environment that we would find ourselves in, and inside the dotted line is the customer's application namespace. That's where the app is. And so, it's got a bunch of pods. It's got something for replication, probably an HPA, a horizontal pod autoscaler. And so, what we do is we install inside that namespace two small instances. One is a tuning pod, which some people call a canary, and that tuning pod joins the rest of the pods, but it's not part of the application. It's actually separate, but it gets the same traffic. We also install something we call Servo, which is basically an action engine. What Servo does is take the metrics from whatever metric system is collecting all those different settings and whatnot from the working application. It could be something like Prometheus. 
It could be an Envoy sidecar, or, more likely, it's something like AppDynamics, or we can even collect metrics off of Nginx, which is at the front of the service. We can plug in anywhere those metrics are. We can pull the metrics forward. Once we see the metrics, we send them to our backend. The Opsani SaaS service is our machine learning backend. That's where all the magic happens, and what happens then is that service sees the settings, sends a recommendation to Servo, Servo sends it to the tuning pod, and we tune until we find optimal. And so, that iteration typically takes about 20 steps. It depends on how big the application is and whatnot, how long those steps take. It could be anywhere from seconds to minutes to 10 to 20 minutes per step, but typically within about 20 steps, we can find optimal, and then we'll come back and we'll say, "Here's optimal. Do you want to promote this to production?" And the customer says, "Yes, I want to promote it to production because I'm saving a lot of money or because I've gotten better performance or better reliability." Then, all he has to do is press a button, and all that stuff gets sent right to the production pods, and all of those settings get put into production, and now he's actually saving the money. So that's basically how it works. >> It's kind of like when I want to go to the beach: I look at weather.com, I check the forecast, and I decide whether I want to go or not. You're getting the data, so you're getting a good look at the information, and then putting that into a policy standpoint. I get that, makes total sense. Can I ask you, if you don't mind, to expand on the performance and reliability and the cost advantage? You mentioned cost. How is that impacting? Give us an example of some performance impact, reliability, and cost impacts. >> Well, let's talk about what those things mean, because a lot of people might have different ideas about what they think those mean. 
So from a cost standpoint, we're talking about cloud spend ultimately, but it's represented by the settings themselves, so I'm not talking about what deal you cut with AWS or Azure or Google. I'm talking about whatever deal you cut, we're going to save you 30, 50, 70% off of that. So it doesn't really matter what cost you negotiated. What we're talking about is right-sizing the settings for CPU and memory, replicas. It could be Java: garbage collection time ratios, or heap sizes, or things like that. Those are all the kinds of things that we can tune. The thing is, most of those settings have an unlimited number of values, and this is why machine learning is important, because, if you think about it, even if they only had eight settings with eight values per setting, now you're talking about literally millions of combinations. So to find optimal, you've got to have machine speed to be able to do it, and you have to iterate very, very quickly to make it happen. So that's basically the thing, and that's really one of the things that makes us different from anybody else, and if you put that last slide back up, the architecture slide, for just a second, there are a couple of key words at the bottom of it that I want to focus on. Continuous. So continuous really means that we're on all the time. We're not plug-us-in-one-time, make a change, and then walk away. We're actually always measuring and adjusting, and the reason why this is important is, in the modern DevOps world, your traffic level is going to change. You're going to push new code. Things are going to happen that are going to change the basic nature of the software, and you have to be able to tune for those changes. So continuous is very important. The second thing is autonomous. This is designed to take pressure off of the SREs. 
It's not designed to replace them, but to take the pressure off of them having to check the pager all the time and run in and make adjustments, or try to divine or find an adjustment that might be very, very difficult for them to do. So we're doing it for them, and the scale means that we can solve this for, let's say, one big monolithic application, or we can solve it for literally hundreds of applications and thousands of microservices that make up those applications and tune them all at the same time. So the same platform can be used for all of those. You originally asked about the parameters and the settings. Did I answer the question there? >> You totally did. I mean, the tuning in concert. You mentioned it early as a key point. I mean, you're basically tuning the engine. It's not so much negotiating a purchase SaaS discount. It's essentially cost overruns by the engine, either over-burning or over-heating or whatever you want to call it. I mean, basically inefficiency. You're tuning the core engine. >> Exactly so. The cost thing, as I mentioned, is due to right-sizing the settings and the number of replicas. The performance is typically measured via latency, and the reliability is typically measured via error rates. And there are some other measures as well. We have a whole list of them that are in the application itself, but those are the kinds of things that we look for as results. When we do our tuning, we look for reducing error rates, or we look for holding error rates at zero, for example, even if we improve the performance or we improve the cost. So we're looking for the best result, the best combination result, and then a customer can decide if they want to actually over-correct on something. 
We have the whole concept of guardrails, so if performance is the most important thing, or, for maybe some customers, cost is the most important thing, they can actually say, "Well, give us the best cost, and give us the best performance and the best reliability, but at this cost," and we can then use that as a service-level objective and tune around it. >> Yeah, it reminds me back in the old days when you had filtering, whitelists and blacklists of addresses that could go through, say, a firewall or a device. You have billions of combinations now with machine learning. It's essentially scaling the same concept to an unbelievable degree. These guardrails are now in place, and that's super cool and, I think, a really relevant call-out point, Patrick, to kind of highlight that. At this kind of scale, you need machine learning, you need the AI, to essentially identify quickly the patterns or combinations that are actually happening so a human doesn't have to waste their time on what can be filled by basically a bot at that point. >> So John, there's just one other thing I want to mention around this, and that is one of the things that makes us different from other companies that do optimization. Basically, every other company in the optimization space creates a static recommendation from their recommendation engines, and what you get out of that is, let's say, a manifest of changes, and you hand that to the SREs, and they put it into effect. Well, the fact of the matter is that the traffic could have changed by then. It could have spiked up, or it could have dropped below normal. You could have introduced a new feature or some other code change, and at that point in time, you've already instituted these changes. They may be completely out of date. That's why the continuous nature of what we do is important and different. >> It's funny, even the language that we're using here: network, garbage collection. I mean, you're talking about tuning an engine, an operating system. 
You're talking about stuff that's moving up the stack to the application layer, hence this new kind of eliminating of these siloed, waterfall steps, as you pointed out in your second slide, into kind of one integrated operating environment. So when you have that, or think about the data coming in, you have to think about the automation, just like self-correcting, error-correcting, tuning, garbage collection. These are words that we've been kind of kicking around, but at the end of the day, it's an operating system. >> Well, in the old days of automobiles, which I remember 'cause I'm an old guy, if you wanted to tune your engine, you would probably rebuild your carburetor and turn some dials to get the air-oxygen-gas mix right. You'd re-gap your spark plugs. You'd probably make sure your points were right. There'd be four or five key things that you would do. You couldn't do them at the same time unless you had a magic wand. So we're the magic wand, or, in the modern world, we're sort of that thing you plug in that tunes everything at once within that engine, which is all now electronically controlled. So that's the big difference as you think about what we used to do manually and what now can be done with automation. It can be done much, much faster without humans having to get their fingernails greasy, let's say. >> And I think the dynamic-versus-static is an interesting point. I want to bring up the SRE, which has become a role that's becoming very prominent in the DevOps-plus world that's happening. You're seeing this new revolution. The role of the SRE is not just to be there to hold down and do the manual configuration. They have to scale. They're developers, too. So I think this notion of offloading the SRE from doing manual tasks is another big, important point. Can you just react to that and share more about why the SRE role is so important and why automating that away with what you guys have is important? 
>> The SRE role is becoming more and more important, just as you said, and the reason is because somebody has to get that application ready for production. The DevOps guys don't do it. That's not their job. Their job is to get the code finished and send it through, and the SREs then have to make sure that that code will work, so they have to find a set of settings that will actually work in production. Once they find that set of settings, the first one they find that works, they'll push it through. It's not optimized at that point in time because they don't have time to try to find optimal, and if you think about it, the difference between a machine learning backend and an army of SREs that work 24-by-seven: we're talking about being able to do the work of many, many SREs that never get tired, that never need to go play video games to unstress or whatever. We're working all the time. We're always measuring, adjusting. A lot of the companies we talk to do a once-a-month adjustment on their software. So they put an application out, and then they send in their SREs once a month to try to tune the application, and maybe they're using some of these other tools, or maybe they're using just their smarts, but they'll do that once a month. Well, gosh, they've probably pushed code four times during the month, and they've probably had a bunch of different spikes and drops in traffic and other things that have happened. So we just want to help them spend their time on making sure that the application is ready for production, making sure that all the other parts of the application are where they should be, and let us worry about tuning CPU, memory, replicas, job instances, and things like that, so that they can work on making sure that application gets out and that it can scale, which is really important for them; for their companies to make money, the apps have to scale. >> Well, that's a great insight, Patrick. 
You mentioned you have a lot of great customers, and certainly your customer base are early adopters, pioneers, and fast-growing companies, because they have DevOps. They know the difference between a DevOps engineer and an SRE. Some of the other enterprises that are transforming think the DevOps engineer is the SRE person 'cause they're having to get transformed. So you guys are at the high end and are now getting the new enterprises as they come on board to cloud scale. You have a huge uptake in Kubernetes, starting to see the standardization of microservices. People are getting it, so I've got to ask you: can you give us some examples of your customers, how they're organized, some case studies, who uses you guys, and why they love you? >> Sure. Well, let's bring up the next slide. We've got some customer examples here, and your viewers, our viewers, can probably figure out who these guys are. I can't tell them, but if they go on our website, they can sort of put two and two together. But the first one there is a major financial application SaaS provider, and in this particular case, they were having problems that they couldn't diagnose within the stack. Ultimately, they had to apply automation to it, and what we were able to do for them was give them a huge jump in reliability, which was actually the biggest problem that they were having. We gave them 5,000 hours back a month in terms of the application. They were having PagerDuty alerts going off all the time. We actually gave them better performance. We gave them a 10% performance boost, and we dropped their cloud spend for that application by 72%. So in fact, it was an 80-plus% price-performance or cost-performance improvement that we gave them, and essentially, we helped them tune the entire stack. This was a hybrid environment, so this included VMs as well as more modern architecture. 
Today, I would say the overwhelming majority of our customers have moved off of the VMs and are in a containerized environment, and even more to the point, Kubernetes, which we find a very, very high percentage of our customers have moved to. So most of the work we're doing today with new customers is around that, and if we look at the second and third examples here, those are examples of that. In the second example, that's a company that develops websites. It's one of the big ones out in the marketplace: let's say, if you were starting a new business and you wanted a website, they would develop that website for you. So their internal infrastructure is all brand-new stuff. It's all Kubernetes, and they were actually getting decent performance. We held their performance at their SLO. We achieved a 100% error-free scenario for them at runtime, and we dropped their cost by 80%. So for them, they needed us to hold serve, if you will, on performance and reliability and get their costs under control, because that's a cloud-native company. Everything there is cloud cost. So the interesting thing is it took us only nine steps, nine of our iterations, to actually get to optimal. So it was very, very quick, and there was no integration required. In the first case, we actually had to do a custom integration for an underlying platform that was used for CI/CD, but with the- >> John: Because of the hybrid, right? >> Patrick: Sorry? >> John: Because it was hybrid, right? >> Patrick: Yes, because it was hybrid, exactly. But with the second one, we just plugged right in, and we were able to tune the Kubernetes environment just as I showed in that architecture slide. And then the third one is one of the leading application performance monitoring companies on the market. They have a bunch of their own internal applications, and those use a lot of cloud spend. 
They're actually running Kubernetes on top of VMs, but we don't have to worry about the VM layer. We just worry about the Kubernetes layer for them, and what we did for them was we gave them a 48% performance improvement in terms of latency and throughput. We dropped their error rates by 90%, which is pretty substantial, to say the least, and we gave them a 50% cost delta from where they had been. So this is the perfect example of actually being able to deliver on all three things, which you can't always do. It has to be said, sort of, that all applications are not created equal. This was one where we were able to actually deliver on all three of the key objectives. We were able to set them up in about 25 minutes from the time we got started, no extra integration, and needless to say, it was a big, happy moment for the developers to be able to go back to their bosses and say, "Hey, we have better performance, better reliability. Oh, by the way, we saved you half." >> So depending on the stack situation, you've got VMs and Kubernetes on the one side; cloud-native, all Kubernetes, that's the dream scenario, obviously. Not many people are like that. All the new stuff's going cloud-native, so that's ideal, and then the mixed ones: Kubernetes, but on VMs, right? >> Yeah, exactly. So Kubernetes with no VMs, no problem. Kubernetes on top of VMs, no problem, but we don't manage the VMs. We don't manage the underlay at all, in fact. And the other thing is, we don't have to go back to the slide, but I think everybody will remember the slide that had the architecture, and on one side was our cloud instance. The only data that's going between the application and our cloud instance is the settings, so there's never any data. There's never any customer data: nothing for PCI, nothing for HIPAA, nothing for GDPR or any of those things. So no personal data, no health data. Nothing is passing back and forth. Just the settings of the containers. 
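The loop Patrick walks through, trying candidate settings in a tuning pod, measuring against guardrail SLOs, and promoting the cheapest configuration that still meets them, can be sketched roughly as follows. Everything here is illustrative: the settings space, the cost and latency model, and all the function names are assumptions for the sketch, not Opsani's actual API, and a brute-force search stands in for the machine learning backend.

```python
from itertools import product

# Hypothetical settings space; the only data crossing the boundary in the
# architecture described above is a dict of settings like this, no customer data.
SETTINGS_SPACE = {
    "cpu": [0.25, 0.5, 1.0, 2.0],        # cores
    "memory_mb": [256, 512, 1024, 2048],
    "replicas": [1, 2, 4, 8],
}

# Guardrails expressed as service-level objectives: tune for cost,
# but never past these reliability and performance limits.
GUARDRAILS = {"max_error_rate": 0.0, "max_p95_latency_ms": 250}

def measure(settings):
    """Toy stand-in for metrics a tuning pod would pull from Prometheus or
    AppDynamics; returns (cost, p95 latency in ms, error rate)."""
    cost = (settings["cpu"] * 30 + settings["memory_mb"] * 0.05) * settings["replicas"]
    latency = 400 / (settings["cpu"] * settings["replicas"])
    return cost, latency, 0.0

def within_guardrails(latency, errors):
    return (errors <= GUARDRAILS["max_error_rate"]
            and latency <= GUARDRAILS["max_p95_latency_ms"])

def find_optimal():
    """Try every combination and keep the cheapest one that satisfies the
    guardrails. With 3 settings x 4 values this is only 64 combinations;
    with 8 settings x 8 values it is 8**8, which is why the real system
    needs machine-speed iteration rather than brute force."""
    best = None
    for values in product(*SETTINGS_SPACE.values()):
        candidate = dict(zip(SETTINGS_SPACE, values))
        cost, latency, errors = measure(candidate)
        if within_guardrails(latency, errors) and (best is None or cost < best[0]):
            best = (cost, candidate)
    return best

best_cost, best_settings = find_optimal()
```

Under these toy assumptions, the cheapest configuration that still meets the latency and error SLOs is the one promoted to production; in the real flow described above, that promotion is the single button press.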
>> Patrick, while I've got you here, 'cause you're such a great, insightful guest, thank you for coming on and showcasing your company. Kubernetes, real quick. How prevalent is this mainstream trend? Because you're seeing such great examples of performance improvements, SLAs being met, SLOs being met. How real is Kubernetes for the mainstream enterprise as they're starting to use containers to lift their legacy apps and get into the cloud-native and certainly hybrid and soon-to-be multi-cloud environment? >> Yeah, I would not say it's dominant yet. Of container environments, I would say it's dominant now, but for all environments, it's not. I think the larger legacy companies are still going through that digital transformation, and so what we do is we catch them at that transformation point, and we can help them develop, because, as we remember from the AIOps slide, we can plug in at that test level and help them sort of pre-optimize as they're coming through. So we can actually help them be more efficient as they're transforming. The other side of it is the cloud-native companies. So you've got the legacy companies, brick and mortar, who are desperately trying to move to digitization. Then, you've got the ones that are born in the cloud. Most of them aren't on VMs at all. Most of them are on containers right from the get-go, but you do have some in the middle who have started to make a transition, and what they've done is they've taken their native VM environment and they've put Kubernetes on top of it so that way, they don't have to scuttle everything underneath it. >> Great. >> So I would say it's mixed at this point. >> Great business model, helping customers today and being a bridge to the future. Real quick: what licensing models, how to buy, promotions do you have for Amazon Web Services customers? How do people get involved? How do you guys charge? >> The product is licensed as a service, and the typical service is an annual. 
We license it by application, so let's just say you have an application, and it has 10 microservices. That would be a standard application. We'd have an annual cost for optimizing that application over the course of the year. We have a large application pack, if you will, for, let's say, applications of 20 services, something like that, and then we also have a platform, what we call the Opsani platform, and that is for environments where the customer might have hundreds of applications and/or thousands of services, and we can plug into their deployment platform, something like Harness or Spinnaker or Jenkins or something like that, or we can plug into their cloud Kubernetes orchestrator, and then we can actually discover the apps and optimize them. So we've got environments for both single apps and for many, many apps, all with the same platform. And yes, thanks for reminding me. We do have a promotion for our AWS viewers. If you reference this presentation, and you look at the URL there, which is opsani.com/awsstartupshowcase, can't forget that, you will, number one, get a free trial of our software. If you optimize one of your own applications, we're going to give you an Oculus set of goggles, the virtual reality goggles. And we have one other promotion for your viewers and for our joint customers here, and that is, if you buy an annual license, you're going to get actually 15 months. So that's what we're putting on the table. It's actually a pretty good deal. The Oculus isn't contingent on the purchase. That's a promotion. It's contingent on you actually optimizing one of your own services. So it's not a synthetic app. It's got to be one of your own apps, but that's what we've got on the table here, and I think it's a pretty good deal, and I hope you guys take us up on it. >> All right, great. Get an Oculus Rift for optimizing one of your apps and 15 months for the price of 12. Patrick, thank you for coming on and sharing the future of AIOps with us. 
Great product, bridge to the future, solving a lot of problems. A lot of use cases there. Congratulations on your success. Thanks for coming on. >> Thank you so much. This has been excellent, and I really appreciate it. >> Hey, thanks for sharing. I'm John Furrier, your host with theCUBE. Thanks for watching. (upbeat music)
HPE Ezmeral Preview | HPE Ezmeral \\ Analytics Unleashed
>> On March 17th at 8 a.m. Pacific, theCUBE is hosting Ezmeral Day with support from Hewlett Packard Enterprise. What I am really excited about is Ezmeral. It's HPE's set of solutions that will allow containerized apps and workloads to run anywhere: on prem, in the public cloud, across clouds, or really anywhere, including the emergent edge you can think of, as well as a data fabric and a platform to allow you to manage work across all these domains. That is Ezmeral Day. We have an exciting lineup of guests, including Kirk Borne, who is a famed astrophysicist and extraordinary data scientist. He's from Booz Allen Hamilton. We'll also be joined by my longtime friend Kumar Sreekanti, who is CTO and head of software at HPE. In addition, you'll hear from Robert Christiansen of HPE, who will discuss data strategies that make sense for you, and we'll hear from customers and partners from around the globe who are using Ezmeral capabilities to create and deploy transformative products and solutions that are impacting lives every single day. We'll also give you a chance to have a few breakout rooms and go deeper on specific topics that are important to you, and we'll give you a demo toward the end, so you'll want to hang around. Most of all, we have a team of experts standing by to answer any questions that you may have, so please do join in on the chat room. It's going to be a great event. So grab your coffee, your tea or your favorite beverage, and grab a notepad. We'll see you there, March 17th at 8 a.m. Pacific, on theCUBE.
MedTec Entrepreneurship Education at Stanford University
>> Thank you very much for this opportunity to talk about the Stanford Biodesign program, which is entrepreneurship education for medical devices. My name is Fumiaki Ikeno. I am Japanese. I have been in the United States since 2001, and more than half of my life after graduating from medical school has been in the United States. I hope I can contribute to building a bridge between Japan and the United States. I have done research in the field of medical devices with patients all over the world. Not only academia but also the industry sectors sometimes try to generate new products which can generate revenue from their own research output. This is explained by three steps. The first one is the Devil River, which is the hurdle from research output to an idea which can eventually become a product. The second hurdle is the Valley of Death, which is the hurdle from idea to commercial product. The other one is the Darwinian Sea, which is the hurdle to make a commercial product become a big, revenue-generating product. For academia, the first hurdle is the critical and essential one to move a research output to an idea. There are two different kinds of processes for developing healthcare innovation: one is biopharma, and the other one is medical devices. Regarding the disciplines, medtech is mainly mechanical engineering, electrical engineering, and the medical and surgical fields; biopharma is mainly chemical engineering, computer science, biology and genetics. However, a very important difference between these two is the innovation process. Medtech is suited to design-driven innovation, and biopharma is suited to a discovery-driven process. In general, the transformation of medical research from academia output to a commercial product in the medical field is called bench to bedside.
It means going from basic research to clinical application, and that is true for biopharma. Translational medical research for medical devices is bedside to bench and back to bedside, which means clinical unmet needs go to the bench and back to clinical application. The difference in the process is the same as the difference in commercialization. Our goal is to innovate new devices for patients all over the world. There are two processes for innovation. One is the technology-push type of innovation; the other one is the needs-pull type of innovation. Technology-push innovation comes from the research laboratory, and it is suitable for pharma and biotech. The needs-pull, or needs-driven, type of innovation is suitable for medical devices. Whether we take the technology-push type of innovation or the needs-pull type, it is important to have needs, and we should think about what a need is. In 2001, Stanford University started the Stanford Biodesign program, which is entrepreneurship education for medical devices. Our mission is educating and empowering needs-based health technology innovators and leading the transition to a value-based innovation ecosystem. Our vision is to be a global leader in advancing health technology innovation to improve lives everywhere. There are three steps in our process of innovation: identify, invent and implement. The most important step is the first one, identify. A well-characterized need is the DNA of a great invention. Most of the value of medical device development comes from how well it addresses an unmet need, so we focus on this phase to find the most specific and appropriate needs. Our fellows and students work in multidisciplinary teams that ideally include individuals with backgrounds in medicine, engineering and business. So, how do we find unmet needs?
A small team will go to the hospital or clinic environment to observe the healthcare providers with naive eyes. The team focuses on picking up the unmet needs, not the technology. This method of observation can be applied in all design thinking. The team will generate at least 200 needs from the clinical observation. The next step in the identify phase is to select the best unmet needs. We use four different aspects to evaluate the nominees: the disease background, current existing solutions, market size, and the stakeholders. Once we pick our unmet need from the 200 nominees, the team can move to the invention phase. Finally, they can define the solution. Many people tend to invent at the beginning phase without carefully evaluating the unmet need, and as a result they tend to fall in love with their own idea, even if the unmet need is not valid. This is why most medical device innovations fail: due to the lack of a real unmet need. To avoid this pitfall, our approach is to identify good needs first. Invention is then the step to generate ideas from the unmet needs. We use the seven rules of brainstorming: defer judgment, encourage wild ideas, build on the ideas of others, go for quantity, one conversation at a time, stay focused on the topic, and be visual. Brainstorming is like an association game: somebody's idea can stimulate the others' ideas. After generating many ideas, the next step is screening of ideas, where we use five different filters to evaluate the ideas: intellectual property, regulatory, reimbursement, business model, and technology. After this selection step, we can have the best solution fitting the unmet need, and finally the team will go to the implementation phase. This phase covers more business-oriented matters: the strategy of business implementation and the business planning. So far, more than 50 startups have spun off from the Biodesign program. Let me show one example. This is a case of chest palpitations.
If a patient has chest pain, most patients go to the family doctor first, and the family doctor refers the patient to a general cardiologist. The general cardiologist refers the patient to an electrophysiologist, and the electrophysiologist will make a reservation for a Holter test. The patient will come to the clinic, where people put the device and wires on his chest, and he wears it for, what, two days, right? Then the patient will visit the clinic again to return the Holter device. After a few days of analysis, the patient comes back to the doctor to hear the result. Each step takes time and money. This is an unmet need. This is a rough sketch of the solution. The product name is the Zio patch, and it can save about $620 per outpatient. >> Life is stressful. We all depend on our heart, the life source of our incredible machine, the body. However, sometimes our heart needs a checkup. Perhaps you have felt dizzy, felt your heart racing, or know someone who has had a serious heart problem. The old-fashioned monitors that you used to get from most doctors are bulky, and you can't wear them exercising or in the shower. If appropriate for you, we will provide you the iRhythm Zio patch, a two-by-five-inch band-aid-like patch that you can apply to your chest in the comfort of your own home or in the gym. It will monitor your heart rate for up to 14 days. You never have to come into a doctor's office, as you mail the patch back to us. Shortly after, you will receive an easy-to-understand report of your heart activity, along with recommendations from a heart specialist to understand the next steps in your heart health: bringing heart monitoring to you.
The product lifecycle Very divisive, recent being it's. But if we can educate the human decision oil because people can build with other people beyond space and yeah, young broader stop on by design education is now runs the media single on Japan. He doesn't 15 PBS probably star visited Stamp of the diversity and Bang. He announced that Japan, by design, will runs with vampires. That problem? Yeah, Japan Barzan program has started a University of Tokyo Osaka University and we've asked corroborating with Japanese government on Japanese medical device Industry s and change it to that. Yeah, this year that it's batch off Japan better than parachute on. So far more than five. Starting up as being that's all. Thank you very much for your application.
HPE Discover 2020 Analysis | HPE Discover 2020
>> From around the globe, it's theCUBE, covering the HPE Discover Virtual Experience, brought to you by HPE. >> Welcome back to theCUBE's coverage of HPE Discover 2020, the virtual experience. TheCUBE has been virtualized. My name is Dave Vellante. I'm here with Stu Miniman, and our good friend Tim Crawford is here. He's a strategic advisor to CIOs with AVOA. Tim, great to see you. Stu, thanks for coming on. >> Great to see you as well, Dave. >> So let's unpack what's going on at Discover, Antonio's keynotes. Maybe talk a little bit about the prospects for HPE coming forward in this decade. You know, the last decade was not a great one for HP and HPE. I mean, there was a lot of turmoil. There were botched acquisitions. There was breaking up the company and spin-merges and a lot of distractions. And so now the company is really, and you hear this from Antonio, kind of positioning for innovation for the next decade. So I think there is probably a lot of excitement inside the company, but I want to touch on a couple of points and then get your reaction. I guess, you know, to start off: obviously, Antonio is talking about Covid and the role that they played in that whole, you know, pandemic and the transition to the isolation economy. But so let me start with you, Tim. I mean, what is the sort of posture amongst CIOs that you talk to? How strategic is HPE to the folks that you talk to in your community? >> Well, I think if you look at how CIOs are thinking, especially as we head into Covid and the coronavirus and kind of mapping through that process, um, it really came down to: can they get their hands on technology? Can they get people back to work, working from home? Can they do it in a secure fashion, keeping people productive? I mean, there was a lot of blocking and tackling, and even to this day, there's still a fair amount of that taking place.
Um, we really haven't seen the fallout from the cybersecurity impact of expanding our footprint quite yet, but we'll see that, probably in the coming months; there are some initial inklings there. When it comes to HPE specifically, I think it comes back to just making sure that they had the product on hand, that they understood that customers are going through dramatic change. And so all bets are off. You have to kind of step back and say, okay, those plans that I had 60, 90, 120 days ago, those strategies that I may have already started down the path with, those are up for grabs. I need to step back from those and figure out: what do I do now? And I think each company, HPE included, needs to think about how they start to mold themselves to be able to address those changing customer needs. And I think that's where this really becomes where the rubber hits the road: is HPE capable of doing that, and are they making the right changes? And quite frankly, that starts with empathy. And I think we've heard pretty clearly from Antonio that he is sympathetic to the plight of their customers and the world on the whole. >> Yeah, and I think culturally, Tim and Stu, I mean, I think, you know, HPE is kind of getting back to some of its roots, and Antonio has been there for a long time. I think, people-wise, he is very well liked. And, you know, I'm sure he's tough, but he's also a very fair individual, and he's got a vision and he's focused. And so, you know, I think, again, as I said, looking forward to this decade, I think it could be one of innovation. Although, look, you look at the stock price: it kind of peaked in November '19, and it's obviously down, like many stocks, so there's a lot of work to do there. And, Stu, we're certainly hearing from HPE this notion of everything as a service that we've talked about, Green Lake, a lot. What's your sense of their prospects going forward in this, you know, new era?
Yeah, I mean, Dave, one of the biggest knocks we've heard about HPE in the last couple of years, you know, the line Michael Dell would use, is you're not going to grow by subtraction. But as a platform company, HPE is much more open, from what I've seen, than the HP that I remember from, you know, five to 10 years ago. So you look at their partner ecosystem: it's robust. You know, years ago, it seemed that if it didn't come out of HP Labs, it wasn't a product, and that was all the services arm wanted to sell. HPE now, in this software-defined world, working in a cloud environment, they're much more open to finding that innovation and enabling it. So, you know, we talk about Green Lake, Dave. Green Lake's got about 1,000 customers right now, and a big piece of that is the partner portfolio, whether it's VMware or Amazon, or HPE's full stack themselves. They have optionality in there, and that's what we hear from users: they want flexibility. You know, you look at the cloud providers: it's not just "here's a solution." You look at Amazon: there are dozens of databases that you can use from Amazon, or that you can use on top of Amazon. So HPE, you know, not a public cloud provider, but looking more like that cloud experience. They've done so many acquisitions over the years. Many of them were troubled; they got rid of some of the pieces that they might have overpaid for. But you look at something like CTP in this multi-cloud world. And in the networking space, they've got a really cool open source company, the company behind SPIFFE and SPIRE.
And, you know, companies that are looking at containers and Kubernetes really respond to say, "Hey, these are projects that are interesting. Oh, who's the company that's driving that? It's HPE." So more open, more of a partner ecosystem; it definitely feels that there's a lot there that I respect and like at HPE. >> Well, I mean, the intent of splitting the company was so that HPE could be more focused, but focused on innovation. The intent was to be the growth company. It hasn't fully played out yet. But, Tim, when you think about the conversations that CIOs are having with HPE today versus what they were having with HP, the conglomerate comprising EDS and PCs, I guess, in a way, more Dell-like. So certainly Michael Dell is having strategic conversations with CIOs, but you've got to believe that the conversations are more focused today. Is that a good thing, or is the jury still out? >> No, it absolutely is a good thing. And I think one of the things you have to look at is we're getting back to brass tacks. We're getting back to that focus around business objectives. So no longer is it, "Hey, who has the coolest tech, and how can we implement that tech," kind of looking at it from a tech-to-business spectrum. You're now, squarely as a CIO, you have to be squarely focused on the business objectives that you are teed up for, and if you're not, you're on a very short leash, and that doesn't end well. And I think the great thing about the HP-HPE split, and I think you almost have to step back for a second here: let's talk about leadership, because leadership plays a very significant role, especially for CIOs that are thinking about long-term decisions and strategic partners. I don't think that HPE necessarily had the right leadership in place to carry them into that strategic world. I think Antonio really makes a change there. I mean, they made some really poor decisions post-split.
Um, that really didn't bode well for HPE. Um, and frankly, I talked a bit about that; I know it wasn't really popular within HPE, but quite frankly, they needed to hear it. And I think that actually has been heard, and I think they are listening to their customers. And one of the big changes is they're getting back into the software business. And when you talk about strategic initiatives, you have to get beyond just the hardware and start moving up the proverbial stack, getting closer to those business initiatives. And that is software. >> Yeah, well, Antonio talked about sort of the insights. I mean, something I've said a lot, borrowed from the Mary Meeker conversations, is that data is plentiful; something I've always said, insights aren't. And so you're right; you've seen a couple of acquisitions. You know, MapR they picked up, I think pretty inexpensively. Kind of interesting, because, remember, HPE had an investment in Hortonworks, which, of course, is now Cloudera, and BlueData, Kumar Sreekanti's company, you know, kind of focusing on maybe automating data. They talked about edge-centric, cloud-enabled, data-driven. Nobody's going to argue with those things. But you're right, Tim. I mean, you're talking more software; HPE kind of jettisoned the software business and now sort of has to rebuild it. And then, of course, to do this cloud. What do you make of HPE's cloud play? >> Yeah, I mean, Dave, you look at the pieces. You were just talking about MapR and BlueData; where HPE connects it together is, you know, AIOps. So where are we going with infrastructure? There needs to be a lot more automation. We heard a great quote I love from Automation Anywhere, Dave: if you talk about digital transformation without automation, it's hallucination. So, you know, HPE is baking that into what they're doing. So I fully agree with Tim: software, software, software is where the innovation is. So it can't just be the infrastructure.
How do you have eyes and hooks into the applications? How are you helping customers build those new pieces? And what's the other software that you build around that? So, you know, absolutely, it's an interesting piece. And HPE has got a lot of interesting pieces. You know, you talk about the edge: Aruba is a great asset for that kind of environment. And from a partnership standpoint, Dave, they have, well, John Chambers was in the keynote. John, of course, a longtime partner; he was with Cisco for many years, and Cisco started competing with HP on the server business, but now he's also the chairman of Pensando. HPE is an investor in Pensando, with general availability this month of that solution, and that's going to really help build out that next-generation edge. So, you know, a chipset that HPE can offer, similar to what we see with how Amazon builds Outposts. So that is a solution both for the enterprise and beyond. >> Yeah, of course, Stu, and it's kind of déjà vu, thinking about 3Com, to add more fuel to that tension. Go ahead, Tim. >> Well, I was going to pick apart some of those pieces, because, you know, an edge is not an edge is not an edge. And I think it's important to highlight some of the advantages that HPE is bringing to the table, where Pensando comes in and where Aruba comes in. I think there are a number of these components that I want to make sure we don't gloss over that are really key for HPE in terms of the future. And that is, when you step back and look at how customers are going to have to consume services, how they're going to have to engage with both the edge and the cloud and everything in between, HPE has a great portfolio of hardware. What they haven't necessarily had was the glue, that connective tissue, to bring all of that together. And I think that's where things like Green Lake and Green Lake Central are really going to play a role.
And even their, um, newer cloud services are going to play a role. And unlike Outposts, and unlike some of the other private cloud services that are on the market today, they're looking to extend a cloud-like experience all the way to the edge, and that continuity, creating that simplicity, is going to be key for enterprises. And I think that's something that shouldn't be understated. It's going to be really important, because when I look at the conversations I'm having, when we're looking at edge to cloud and everything in between, oh my gosh, that's really complicated, and you have to figure out how to simplify that. And the only way you're going to do that is if you take it up a layer and start thinking about management tools. You start thinking about automation, and as companies start to take data from the edge, analyzing it at the edge and at intermediate points on the way to cloud, it's going to be even more important to bring continuity across this entire spectrum. And so that's one of the things that I'm really excited about, that I'm hearing from Antonio's keynote and others here at HPE Discover. >> Yeah, well, let's stay on that, Stu. Let's stay on that for a second. >> Yeah, I wanted to ask Tim about that, because, you know, it's funny, you think back: HP at one point in time was a leader in, you know, management solutions. You know, HP OneView, in the early days, was really well respected. I think what I'm hearing from you, and I think about Outposts, is Amazon hasn't really built management for the edge; all they're doing is extending the cloud piece and putting a piece out at the edge. It feels like we need a management solution that's built from the ground up for this kind of environment. And do I hear you right? You believe HPE has some of those pieces today? >> Well, let's compare and contrast briefly on that. I think Amazon, as well as Google and Microsoft, for that matter.
The way that they are encompassing the edge into their portfolio is interesting, but it's an extension of their core business, their core public cloud services business. Most of the enterprise footprint is not in public cloud; it's at the other end of that spectrum. And so you have to be able to take not just what's happening at the edge, but what about in your corporate data center? In your corporate data center, you still have to manage that, and that doesn't fall under the purview of cloud. And so that's why I'm looking at HPE as a way to create that connective tissue between what companies are doing within the corporate data center today, what they're doing at the edge, as well as what they're doing, maybe, in private cloud, and then the extension into public cloud. But let's also remember something else: most of these enterprises are also in a multi-cloud environment, so they're touching into different public cloud providers for different services. And so now you talk about how do I manage this across the spectrum of edge to cloud, but then across different public cloud providers, and things get really complicated really fast. And I think the hints of what I'm seeing in software and the new software branding give me a moment of pause to say, wait a second, is HPE really going to head down that path? And if so, that's great, because it is in high demand in the enterprise. >> Well, let's talk about that some more, because I think this really is the big opportunity, and where potentially the innovation is. So my question is, how much of Green Lake and Green Lake services are really designed for sort of on-prem, to make that edge-to-on-prem connection? And I want to ask about cloud: how much of that is actually delivering cloud-native services on AWS, on Google, on Azure and Alibaba Cloud, etcetera, versus kind of creating a cloud-like experience for on-prem and eventually the edge? I'm not clear on that. Do you guys have insight on how much effort is going into those cloud-native components in the public cloud?
Well, I would say that the first thing is you have to go back to the applications to truly get that cloud-native experience. I think HPE is putting the components together to be able to capitalize on that cloud-like experience with cloud-native apps, but the vast majority of enterprise apps are not cloud-native. And so, the way that I'm interpreting Green Lake, and I think there are a lot of questions around Green Lake and how it's consumed by enterprises; there were some initial questions around the branding when it first came out. Um, and so, you know, it's not perfect. I think HPE definitely has some work to do to clarify what it is and what it isn't in a way that enterprises can understand. But from what I'm seeing, it looks to be creating a cloud-like experience for enterprises from edge to cloud, but also providing the components so that if you do have applications that are shovel-ready for cloud, or are cloud-native, you can embrace public cloud as well as private cloud and pull them under the Green Lake umbrella. >> Yeah, and ostensibly, Stu, Kubernetes is part of the answer to that, although, as we've talked about, Kubernetes and containers are necessary but not sufficient for that experience. And I guess the point I'm getting to is, you know, we've talked about this with Red Hat, certainly with VMware and others: the opportunity to have that experience across clouds, at the edge, on-prem. That's expensive from an R&D standpoint. And so I want to bring that into the discussion. HPE last year spent about 1.8 billion on R&D. Sounds like a lot of money; it's about 6% of its revenues, but it's spread thin. Now, it does R&D through investments, for instance, like Pensando, or other acquisitions. But in terms of organic R&D, it's not at the top of the heap. I mean, obviously, guys like Amazon and Google have surpassed them.
I've written about this with regard to IBM because they, like HPE, spend a lot on dividends and share buybacks, which they have to do to prop up the stock price and placate Wall Street. But it detracts from their ability to fund R&D. Stu, your take on that sort of innovation roadmap for the next decade? >> Yeah, I mean, one of the things we look at, in the last year or so, there's been what we were talking about earlier, that management across these environments, and Kubernetes is a piece of it. So, you know, Google laid down Anthos, you've got Microsoft with Azure Arc, or VMware with Tanzu. And to Tim's point, you know, it feels like GreenLake fits kind of in that category, but there's pieces that fall outside of it. So, you know, when I first thought of GreenLake, it was, oh, well, I've got a private cloud stack, like an Azure Stack is one of the solutions that they have there. How does that tie into that full solution? So extending that out, moving that brand, I do hear, you know, good things from the field, the partners, and customers. GreenLake is well respected, and it feels like that is a big growth area. So it's HPE shifting from being thought of as, you know, a box seller to more of that solution and subscription model, and GreenLake is a vehicle for that. And as you pointed out, you know, rightfully so, software is so important. And one thing I'd say is HPE feels to have more embracing of software than, say, their closest competitor, which is Dell. You know, Dell's statement is always to be the leading infrastructure provider, and the arm of VMware is their software. So, you know, just Dell alone without VMware, HPE has to be that full solution of what Dell and VMware are together. >> Yeah, and VMware is the crown jewel, and of course, HPE doesn't have a VMware, but it does have over 8,000 software engineers. Now I want to ask you about open source.
I mean, I would hope that they're allocating a large portion of those software engineers to open source development: developing tooling at the edge, developing tooling for multicloud, certainly building hooks in from their hardware. But is HPE, Tim, doing enough in open source? >> Well, I don't want to get on the open source bandwagon, and I don't necessarily want to jump off it. I think the important thing here is that there are places where open source makes sense and places where it doesn't, and you have to look at each particular scenario and really kind of ask yourself, does it make sense to address it here? I mean, it's a way to engage your developers and engage your customers in a different mode. What I see from HPE is more of a focus around trying to determine, where can we provide the greatest value for our customers? Which, frankly, is where their focus should be. Whether that shows up in open source software, whether that shows up in commercial products, we'll see how that plays out. But the one thing that I give HPE props on, one of several things, I would say, is that they are kind of getting back to their roots and saying, look, we're an infrastructure company, that is what we do really well, and we're not trying to be everything to everyone. So let's try and figure out, what are customers asking for, and how do we step through that? I think this is actually one of the challenges that Antonio's predecessors had: they tried to jump into all the different areas, you know, cloud, software, and they were really overextending themselves in ways that they probably shouldn't have. They were doing it in ways that really didn't speak to their core, and they weren't connecting those dots; they weren't building that connective tissue they needed to. So I do think that, you know, whether it's open source or commercial software, we'll see how that plays out.
Um, but I'm glad to see that they are stepping back and saying, okay, let's be mindful about how we ease into this. >> Well, so the reason I bring up open source is because I think it's the mainspring of innovation in the industry, but of course it's very tough to make money on. We've talked a lot about HPE's strengths: its breadth. We haven't talked much about servers, but they're strong in servers. That's fine; we don't need to spend time there. Its culture seems to be getting back to some of its roots. We've touched on some of its weaknesses and maybe gaps. But I want to talk about the opportunities, and there's a huge opportunity at the edge. David Floyer quantified it; he says that TAM is $4 trillion. It's enormous. But here's my question: at the edge right now, what we're seeing from companies like HPE and Dell is they're largely taking Intel-based servers, kind of making a new form factor, and putting them out on the edge. Is that the right approach? Will there be an emergence of alternative processors, whether it's Arm, maybe there's some NVIDIA in there, and just a whole new architecture for the edge? Stu, I'll throw it out to you first, then get Tim's thoughts. >> Yeah, so one thing, Dave, you know, HPE does have a long history of partnering with a lot of those solutions. So you'd see NVIDIA up on stage, and when you think about Moonshot and The Machine and some of the other platforms, they have looked at alternative options. So, you know, I know from a Wikibon standpoint, David Floyer wrote the piece that Arm is a huge opportunity at the edge there, and you would think that HPE would be one of the companies that would be fast to embrace that. >> Well, that's why I liked Moonshot. I think that was probably ahead of its time. But the whole notion of, you know, a very slim form factor that can pop in and pop out, you know, different alternative processor architectures, very efficient, potentially at the edge.
Maybe that's got potential. But do you have any thoughts on this? I mean, I know it's kind of, yeah, any hardware is, but... >> Well, it is a little hardware, but I think you have to come back to the applicability of it. I mean, if you're taking a slimmed-down, ruggedized server, trying to essentially take off all the fancy pieces and just get to the core of it, and calling that your edge, I think you've missed a huge opportunity beyond that. So what happens with the processing that might be in a camera, or in a robot, or in an edge device? These are custom silicon, custom processors, custom demands that you can't pull back to a server for everything; you have to be able to extend it even further. And, you know, if I compare and contrast for a minute, I think some of the vendors that are looking at, hey, our definition of edge is a laptop, or it is this smaller form factor server, I think they're incredibly limiting themselves. I think there is a great opportunity beyond that, and we'll see more of those kinds of devices crop up, because the reality is, in the applicability of how edge gets used, we do data collection and data analysis at the device. So whether it's a camera, whether it's a robot, there's processing that happens within that device. Now, some of that might come back to an intermediate area, and that intermediate area might be one of these smaller form factor devices, like a server, for example. But it might not be. It might be a custom type of device that's needed in a remote location, and then from there you might get back to that smaller form factor. So you have all of these stages, and data and processing are getting done at each of these stages as more and more resources are made available. Because there are things around AI and ML that you can only do in the cloud; you would not be able to do them even in a smaller form factor at the edge.
But there are some that you can do at the edge, and that you need to do at the edge, either for latency reasons or just response time. And so it's important to understand the applicability of this. It's not just as simple as saying, hey, you know, we've got this edge-to-cloud portfolio and it's great and we've got the smaller servers. You have to kind of change the vernacular a little bit and look at the applicability of it and what people are actually doing with it. >> I think those are great points. I think you're 100% right on. You are going to be doing AI inferencing at the edge. A lot of the data is going to stay at the edge, and I personally think, and again, David Floyer has written about this, that it's going to require different architectures. It's not going to be the data center products thrown over to the edge or shrunk down; as you're saying, that's maybe not the right approach, but something that's very efficient, very low cost. When you think about autonomous vehicles, they could have, you know, quote-unquote servers in there. They certainly have compute in there that could be, you know, $2,000, $3,000, $4,000, $5,000 worth of value. And I think that's an opportunity. I'd love to see HPE, Dell, and others really invest R&D in this new architecture and build that out, really infuse AI at the edge. Last question, guys, we're running out of time. I'll start with you, Stu. What things are you gonna watch for from HPE as indicators of success and innovation in the coming decade? As we said, the last decade was kind of painful for HP and HPE. This decade holds a lot of promise. What are the things you're gonna be watching in terms of success indicators? >> So, something we talked about earlier is, how are they helping customers build new things? So AWS always focuses on builders. Microsoft talks a lot, I've heard Satya Nadella last year talk, about building those new applications.
So, you know, infrastructure is only there for the data, and the applications that live on top of it. And as you mentioned, Dave, there's a number of these acquisitions where HPE has moved up the stack some. So those proof points on new ways of doing business, new ways of building new applications, are what I'm looking for from HPE and its robust ecosystem. Tim? >> Yeah, and I would just piggyback right on what Stu was saying. This is, you know, going back to the moonshot goals, I mean, it's about as far away from HPE's and HP's roots, that hardware space, as you can get. But it's really about changing business outcomes, changing business experiences, and experiences for the customers of their customers. And as far forward as HPE can get, I wouldn't expect them to get all the way there, although in conversations I am having with HPE and with others, it seems like they are thinking about that. But they have to start moving in that direction. And that's actually something that, when you start with the builder conversation like Microsoft has had, and Amazon has had, Google's had, and even Dell to some degree has had, I think you miss the bigger picture. So I'm not saying exclude the builder conversation, but you have to put it in the right context, because otherwise you get into this siloed mentality of, right, we have solved one problem, one unique problem, and built this one unique solution, and we've got bigger issues to address as enterprises, and that's going to involve a lot of different moving parts. And you need to know, if you're a builder, or even a hardware manufacturer, you've got to figure out, how does your piece fit into that bigger picture? And you've got to connect those dots very, very quickly. And that's one of the things I'll be looking for from HPE as well: how they take this new software initiative and really carry it forward. I'm really encouraged by what I'm seeing.
But of course, the future could hold something completely different. We thought 2020 would look very different six months ago or a year ago than it does today. >> Well, I want to pick up on that, and I agree with you. I'm really gonna be looking for innovation. Can HPE get back to kind of its roots? Remember, HP's roots are "invent"; it was in the logo. Can it translate its R&D into innovation? To me, it's all about innovation. And I think, you know, CEOs like Antonio Neri, Michael Dell, Arvind Krishna, they have a tough, tough position, because, on the one hand, they're throwing off cash, and they can continue to bump along and, you know, placate Wall Street, give back dividends and share buybacks. And that's fine, and everybody would be kind of happy. But I'll point out that Amazon in 2007 spent less than a billion dollars on R&D. Google spent, back then, about the same amount HPE spends today. So the point is, if the edge is really such a huge opportunity, this $4 trillion TAM as David Floyer points out, there's a way in which some of these infrastructure companies could actually pull a kind of mini-Microsoft and reinvent themselves in a way that could lead to massive shareholder returns. But it will really take bold vision and a brave leader to actually make that happen. So that's one of the things I'm gonna be watching very closely: HPE invent, turn R&D into dollars. Guys, really appreciate you coming on theCUBE and breaking down the segment on the future of HPE. Be well, and thanks very much. All right, and thank you for watching, everybody. This is Dave Vellante for Tim Crawford and Stu Miniman. Our coverage of HPE Discover 2020 Virtual Experience. We'll be right back after this short break.
Day 2 Livestream | Enabling Real AI with Dell
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're doing a special presentation today, really talking about AI, and making AI real, with two companies that are right in the heart of it: Dell EMC as well as Intel. So we're excited to have a couple of CUBE alumni back on the program; haven't seen them in a little while. First off, from Intel, Lisa Spelman. She is the corporate VP and GM for the Xeon and Memory Group. Great to see you, Lisa. >> Good to see you again, too. >> And we've got Ravi Pendekanti. He is the SVP of Server Product Management, also from Dell Technologies. Ravi, great to see you as well. >> Good to see you, Jeff, and Lisa, of course. >> Yes. So let's jump into it. So, yesterday, Ravi, you guys announced a bunch of new kind of AI-based solutions. If you can, take us through that. >> Absolutely. So one of the things we did, Jeff, was we said it's not good enough for us to have a point product, but we talked about the whole portfolio of products: more importantly, everything from our workstation side, to the servers, to the storage elements, and things that we're doing with VMware, for example. Beyond that, we're also obviously pleased with everything we're doing on bringing the right set of validated configurations and reference architectures and Ready Solutions, so that the customer really doesn't have to go ahead and do the due diligence of figuring out how the various integration points come together in making a solution possible. Obviously, all this is based on the great partnership we have with Intel, using not just their, you know, CPUs, but FPGAs as well.
So, Lisa, I wonder, you know, I think a lot of people you know, obviously everybody knows Intel for your CPU is, but I don't think they recognize kind of all the other stuff that can wrap around the core CPU to add value around a particular solution. Set or problems. That's what If you could tell us a little bit more about Z on family and what you guys are doing in the data center with this kind of new interesting thing called AI and machine learning. >>Yeah. Um, so thanks, Jeff and Ravi. It's, um, amazing. The way to see that artificial intelligence applications are just growing in their pervasiveness. And you see it taking it out across all sorts of industries. And it's actually being built into just about every application that is coming down the pipe. And so if you think about meeting toe, have your hardware foundation able to support that. That's where we're seeing a lot of the customer interest come in. And not just a first Xeon, but, like Robbie said on the whole portfolio and how the system and solution configuration come together. So we're approaching it from a total view of being able to move all that data, store all of that data and cross us all of that data and providing options along that entire pipeline that move, um, and within that on Z on. Specifically, we've really set that as our cornerstone foundation for AI. If it's the most deployed solution and data center CPU around the world and every single application is going to have artificial intelligence in it, it makes sense that you would have artificial intelligence acceleration built into the actual hardware so that customers get a better experience right out of the box, regardless of which industry they're in or which specialized function they might be focusing on. >>It's really it's really wild, right? Cause in process, right, you always move through your next point of failure. 
So, you know, having all these kind of accelerants and the ways that you can carve off parts of the workload part of the intelligence that you can optimize betters is so important as you said Lisa and also Rocket and the solution side. Nobody wants General Ai just for ai sake. It's a nice word. Interesting science experiment. But it's really in the applied. A world is. We're starting to see the value in the application of this stuff, and I wonder you have a customer. You want to highlight Absalon, tell us a little bit about their journey and what you guys did with them. >>Great, sure. I mean, if you didn't start looking at Epsilon there in the market in the marketing business, and one of the crucial things for them is to ensure that they're able to provide the right data. Based on that analysis, there run on? What is it that the customer is looking for? And they can't wait for a period of time, but they need to be doing that in the near real time basis, and that's what excellent does. And what really blew my mind was the fact that they actually service are send out close to 100 billion messages. Again, it's 100 billion messages a year. And so you can imagine the amount of data that they're analyzing, which is in petabytes of data, and they need to do real time. And that's all possible because of the kind of analytics we have driven into the power It silver's, you know, using the latest of the Intel Intel Xeon processor couple with some of the technologies from the BGS side, which again I love them to go back in and analyze this data and service to the customers very rapidly. >>You know, it's funny. 
I think Mark Tech is kind of an under appreciated ah world of ai and, you know, in machine to machine execution, right, That's the amount of transactions go through when you load a webpage on your site that actually ideas who you are you know, puts puts a marketplace together, sells time on that or a spot on that ad and then lets people in is a really sophisticated, as you said in massive amounts of data going through the interesting stuff. If it's done right, it's magic. And if it's done, not right, then people get pissed off. You gotta have. You gotta have use our tools. >>You got it. I mean, this is where I talked about, you know, it can be garbage in garbage out if you don't really act on the right data. Right. So that is where I think it becomes important. But also, if you don't do it in a timely fashion, but you don't service up the right content at the right time. You miss the opportunity to go ahead and grab attention, >>right? Right. Lisa kind of back to you. Um, you know, there's all kinds of open source stuff that's happening also in the in the AI and machine learning world. So we hear things about tense or flow and and all these different libraries. How are you guys, you know, kind of embracing that world as you look at ai and kind of the development. We've been at it for a while. You guys are involved in everything from autonomous vehicles to the Mar Tech. Is we discussed? How are you making sure that these things were using all the available resources to optimize the solutions? >>Yeah, I think you and Robbie we're just hitting on some of those examples of how many ways people have figured out how to apply AI now. So maybe at first it was really driven by just image recognition and image tagging. 
But now you see so much work being driven in recommendation engines and object detection, for much more industrial use cases, not just consumer enjoyment, and also those things you mentioned and hit on, where the personalization is a really fine line you walk. How do you make an experience feel good-personalized versus creepy-personalized? It's a real challenge, and an opportunity, across so many industries. And so open source, like you mentioned, is a great place for that foundation, because it gives people the tools to build upon. And I think our strategy is really a stack strategy that starts first with delivering the best hardware for artificial intelligence, and again, Xeon is the foundation for that, but we also have, you know, Movidius-type processing for out at the edge, and then we have, all the way through, very custom, specific accelerators for the data center. Then, on top of that, the optimized software, which is going into each of those frameworks and doing the work so that the framework recognizes the specific acceleration we built into the CPU, whether that's DL Boost, or recognizes the capabilities that sit in that accelerator silicon. And then, once we've done that software layer, and this is where we have the opportunity for a lot of partnership, is the ecosystem and the solutions work that Ravi started off by talking about. So AI isn't, um, it's not easy for everyone. It has a lot of value, but it takes work to extract that value. And so partnerships within the ecosystem, to make sure that ISVs are taking those optimizations, building them in, and fundamentally can deliver a reliable solution to customers, are the last leg of that strategy. But it really is one of the most important, because without it, you get a lot of really good benchmark results, but not a lot of good, happy customers.
You guys at the core, you know, kind of under all the layers running data centers run these workloads. How >>do you see >>kind of the evolution of machine learning and ai from kind of the early days, where with science projects and and really smart people on mahogany row versus now people are talking about trying to get it to, like a citizen developer, but really a citizen data science and, you know, in exposing in the power of AI to business leaders or business executioners. Analysts, if you will, so they can apply it to their day to day world in their day to day life. How do you see that kind of evolving? Because you not only in it early, but you get to see some of the stuff coming down the road in design, find wins and reference architectures. How should people think about this evolution? >>It really is one of those things where if you step back from the fundamentals of AI, they've actually been around for 50 or more years. It's just that the changes in the amount of computing capability that's available, the network capacity that's available and the fundamental efficiency that I t and infrastructure managers and get out of their cloud architectures as allowed for this pervasiveness to evolve. And I think that's been the big tipping point that pushed people over this fear. Of course, I went through the same thing that cloud did where you had maybe every business leader or CEO saying Hey, get me a cloud and I'll figure out what for later give me some AI will get a week and make it work, But we're through those initial use pieces and starting to see a business value derived from from those deployments. And I think some of the most exciting areas are in the medical services field and just the amount, especially if you think of the environment we're in right now. 
The amount of efficiency and, in some cases, reduction in human contact that you can bring to diagnostics, and just patient tracking, and the ability to follow an entire patient history, is really powerful, and it represents the next wave in care and how we scale our limited resource of doctors, nurses, and technicians. And to the point you were making of what's coming next: you start to see even more mass personalization and recommendations, in that way that feels not spooky to people but actually comforting, and they take value from them because it allows them to immediately act. Ravi referenced the speed at which you have to utilize the data. When people can immediately act more efficiently, they're generally happier with the service. So we see so much opportunity, and we're continuing to address it across, you know, again, that hardware, software, and solution stack, so we can stay a step ahead of our customers. >> Right. That's great. Ravi, I want to give you the final word, because you guys have to put the solutions together and actually deliver them to the customer. So not only, you know, the hardware and the software, but any other kind of ecosystem components that you have to bring together. I wonder if you can talk about that approach, because, you know, at the end of the day, it's really the solution: not specs, not speeds and feeds. That's not really what people care about. It's really a good solution. >> Yeah, exactly right, Jeff, because at the end of the day, I mean, it's like this: most of us probably use the ATM to retrieve money, but we really don't know what really sits behind the ATM. My point being that all you really care about at that particular point in time is to be able to put your card into the machine and get your dollar bills out, for example. Likewise, when you start looking at what the customer really needs, what Lisa hit upon is actually right, I mean, what they're looking for. And as you said, this is on the whole solution side of the house.
So our mantra to this is very simple: we want to make sure that we use the right basic building blocks, ensuring that we bring the right solutions using three things. First, the right products, which essentially means that we need to work with the right partners to get the right processors and GPUs in. Then we get to the next level by ensuring that we can provide either Ready Solutions or validated reference architectures, meaning that the sausage-making process is something the customer now doesn't need to go through, right? In a way, we have done the cooking and we provide a recipe book, and you just go through the process of pairing the ingredients, and then off you are to go get your solution done. And finally, the final stage is there might be help that customers still need in terms of services. That's something else Dell Technologies provides. The whole idea is that if customers want help deploying the solutions, we can also do that with our services. So that's broadly the way we approach it: providing the building blocks, using the right technologies from our partners, then making sure that we have the right solutions that our customers can look at, and finally, if they need deployment help, we can do that with our services. >> Well, Ravi, Lisa, thanks for taking a few minutes. That was a great tee-up, Ravi, because I think we're gonna go to a couple of customer interviews, enjoying that nice meal that you prepared with that combination of hardware, software, services, and support. So thank you for your time, and it was great to catch up. All right, let's go and run the tape. >> Hi, Jeff. I wanted to talk about two examples of collaboration that we have with our partners that have yielded, ah, real examples of breakthrough HPC and AI activities. So the first example that I wanted to cover is with the NeuroMod team up in Canada. With that team,
we collaborated with Intel on a tuning of algorithms and code in order to accelerate the mapping of the human brain. We have a cluster down here in Texas called Zenith, based on Xeon and Optane memory, and we were able to help that customer with the three of us, our friends at Intel, the NeuroMod team, and the Dell HPC and AI Innovation Lab engineering team, to go and accelerate the mapping of the human brain. So imagine patients playing video games or doing all sorts of activities that help understand how the brain sends the signals that trigger a response of the nervous system. It's not only a good way to map the human brain; think about what you can do with that type of information in order to help cure Alzheimer's or dementia down the road. So this is really something I'm passionate about: using technology to help all of us, and all of those that are suffering from those really tough diseases. >> I'm a project manager for the project, and the idea is actually to scan six participants really intensively, in both the MRI scanner and the MEG scanner, and see if we can use human brain data to get closer to something called generalized intelligence. What we have in the AI world are systems that are mathematically, computationally built; often they do one task really, really well, but they struggle with other tasks. A really good example of this is video games. Artificial neural nets can often outperform humans in video games, but they don't really play in a natural way. An artificial neural net playing Mario Brothers beats the system by actually kind of gliding its way through as quickly as possible, and it doesn't, like, collect coins. For example, if you played Mario Brothers as a child, you know that collecting those coins is part of your game. And so the idea is to get artificial neural nets to behave more like humans.
Transfer of knowledge is just something that humans do really, really well and very naturally. It doesn't take 50,000 examples for a child to know the difference between a dog and a hot dog — which one you eat and which one you play with. But an artificial neural net can often take massive computational power and many examples before it understands that. >> Video games are awesome, because when you play a video game, you're doing a vision task instantly, you're also doing a lot of planning and strategic thinking, but you're also taking decisions several times a second, and we record that. We try to see: can we, from brain activity, predict what people were doing? We can reach almost 90% accuracy with this type of architecture. >> She was the lead postdoc on this collaboration with Dell and Intel. She was working on a model called a graph convolutional neural net. >> We have been using two computing systems to compare the performance. >> The lab relies on both servers that we have internally, so I have a GPU server, but what we really rely on is Compute Canada, and Compute Canada is just not powerful enough to be able to run the models that she was trying to run, so it would take her days or weeks, it would crash, or we would have to wait in line. Dell was visiting, and I was very kindly invited into the meeting, and they told us that they had started working with a new type of hardware to train neural nets: Dell using traditional CPUs, paired with a new type of memory developed by Intel and their new CPU architectures, really optimized to do deep learning. So all of that sounded great, because we had this problem: we ran out of memory. >> The innovation lab, having access to experts to help answer questions immediately — that's not something to take for granted. >> We were able to train the network from scratch within 20 minutes.
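As a rough aside on what a graph convolutional layer does, here is a minimal sketch in pure Python of a simple mean-aggregation variant. The graph, features, and weights are toy values for illustration only — not the team's actual brain-decoding model.

```python
# Rough, illustrative sketch of one graph-convolution layer (a simple
# mean-aggregation variant), in pure Python. The graph, features, and
# weights below are toy values, not the actual brain-decoding model.

def matmul(a, b):
    """Multiply two matrices represented as lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def gcn_layer(adj, feats, weights):
    """One layer: H' = ReLU(D^-1 (A + I) H W)."""
    n = len(adj)
    # Add self-loops so each node keeps its own signal.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    # Row-normalise: each node averages over itself and its neighbours.
    norm = [[a_hat[i][j] / sum(a_hat[i]) for j in range(n)]
            for i in range(n)]
    h = matmul(matmul(norm, feats), weights)
    return [[max(0.0, v) for v in row] for row in h]

# Toy example: 3 nodes in a line graph, 2 input features, 2 outputs.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights = [[0.5, -0.5], [0.25, 0.75]]
out = gcn_layer(adj, feats, weights)
print(out)
```

In a real model, the adjacency matrix would encode connectivity between brain regions and the features would be recorded activity, with several such layers stacked and trained end to end.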
Before, to do the same thing on the GPU, we needed to wait almost three hours for each run. >> We were able to train the graph convolutional neural net. Dell has been really great, because any time we need more memory, we send an email, and Dell says, "Yeah, sure, no problem, we'll extend it — how much memory do you need?" It's been really simple from our end, and I think it's really great to be at the edge of science and technology. We're not just doing the same old; we're pushing the boundaries. Often we don't know where we're going to be in six months. In the big data world, computing power makes a big difference. >> The second example I'd like to cover is the one we call the Data Accelerator. That's a partnership that we have with the University of Cambridge in England. There we partnered with Intel and Cambridge, and we built what was at the time the number one IO500 storage solution. And it's pretty amazing, because it was built on standard building blocks: PowerEdge servers, Intel Xeon processors, and NVMe drives from our partners at Intel. What we did is pair this system with very, very smart and elaborate software code that gives ultra-fast performance for customers who are looking for a fast front-end scratch for their HPC storage solutions. We're also very mindful that this innovation is great for others to leverage, so the software code will soon be available on GitHub. And, as I said, this was number one on the IO500 when it was initially released. >> Within Cambridge, we have always had a focus on opening up our technologies to UK industry, where we can encourage UK companies to take advantage of advanced research computing technologies. We have many customers in the fields of automotive, oil and gas, and life sciences who find our systems really help them accelerate their product development process. My name is Paul Calleja, and I'm the director of research computing at Cambridge University.
We are a research computing cloud provider, but the emphasis is on the consulting and the processes around how to exploit that technology, rather than the bare resources. Our value is in how we help businesses use advanced computing resources, rather than the provision of those resources. We see increasingly more and more data being produced across a wide range of verticals: life sciences, astronomy, manufacturing. The Data Accelerator was created as a component within our data center compute environment. Data processing is becoming a more and more central element within research computing. We're getting very large data sets, traditional spinning-disk file systems can't keep up, and we find applications being slowed down because they are starved of data. So the Data Accelerator was born to take advantage of new solid-state storage devices. We tried to work out how we could have a staging mechanism: keeping your data on spinning disk when it's not required, and pre-staging it on fast NVMe storage devices so that it can feed the applications at the rate required for maximum performance. So we have the highest AI capability available anywhere in the UK, where we match AI compute performance with very high storage performance, because for AI, high-performance storage is a key element to getting the performance up. Currently, the Data Accelerator is the fastest HPC storage system in the world. We are able to obtain 500 gigabytes a second read/write, with IOPS up in the 20 million range. We provide advanced computing technologies that allow some of the brightest minds in the world to really push scientific and medical research. We enable some of the greatest academics in the world to make tomorrow's discoveries. >> All right, welcome back. Jeff Frick here, and we're excited for this next segment. We're joined by Jeremy Rader. He is the GM of Digital Transformation and Scale Solutions for Intel Corporation. Jeremy, great to see you. >> Hey, thanks for having me.
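The pre-staging mechanism described here can be sketched as a small stage-in step. This is a hedged illustration assuming the simplest possible policy — copy a job's input files wholesale to fast scratch before compute starts; the paths and directory layout are hypothetical placeholders.

```python
# Hedged sketch of the pre-staging idea: copy a job's input data from
# slow "cold" storage to a fast scratch area before compute starts, so
# the application reads at scratch (e.g. NVMe) speed. The paths and the
# wholesale-copy policy here are hypothetical simplifications.

import shutil
import tempfile
from pathlib import Path

def stage_in(cold_dir, scratch_dir):
    """Copy every regular file from cold storage into fast scratch,
    returning the staged paths the job should read from."""
    scratch_dir.mkdir(parents=True, exist_ok=True)
    staged = []
    for src in sorted(cold_dir.iterdir()):
        if src.is_file():
            dst = scratch_dir / src.name
            shutil.copy2(src, dst)  # copy2 also preserves metadata
            staged.append(dst)
    return staged

# Toy demonstration: temporary directories stand in for spinning disk
# ("cold") and NVMe scratch ("fast").
cold = Path(tempfile.mkdtemp(prefix="cold_"))
fast = Path(tempfile.mkdtemp(prefix="nvme_"))
(cold / "sample.dat").write_text("simulation input")
staged = stage_in(cold, fast)
print(staged)
```

A production burst buffer would add scheduling, eviction, and write-back of results to cold storage, but the stage-in/stage-out split is the core of the idea.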
>> I love the flowers in the backyard. I thought maybe you ran over to the Japanese Garden or the Rose Garden, right? Two very beautiful places to visit in Portland. >> Yeah, you know, you only get them for a couple of weeks here, so we got the timing just right. >> Excellent. All right, so let's jump into it. This conversation really is all about making AI real, and you guys are working with Dell — and not only Dell, right? There's the hardware and software, but also a lot of these smaller ISV solution providers. So what are some of the key attributes it takes to make AI real for your customers out there? >> Yeah, so, you know, it's a complex space. So when you can bring the best of the Intel portfolio, which is expanding a lot — it's not just the CPU anymore; you're getting into memory technologies, network technologies, and, a little less known, how many resources we have focused on the software side of things, optimizing frameworks and these key ingredients and libraries that you can stitch into that portfolio to really get more performance and value out of your machine learning and deep learning space. And so what we've really done here with Dell is start to bring a bunch of that portfolio together with Dell's capabilities, and then bring in that ISV partner, that software vendor, where we can really stitch things together and bring the most value out of that broad portfolio, ultimately reducing the complexity of what it takes to deploy an AI capability. So there's a lot going on there: you bring that three-legged stool of the software vendor, the hardware vendor, and Dell into the mix, and you get a really strong outcome. >> Right. So before we get to the solutions piece, let's stick a little bit to the Intel world. I don't know if a lot of people are aware that, obviously, you guys make CPUs, and you've been making great CPUs forever.
But there's a whole lot more stuff that you've added, you know, kind of around the core CPU, in terms of actual libraries and ways to really optimize the Xeon processors to operate in an AI world. I wonder if you can take us a little bit below the surface on how that works. What are some examples of things you can do to get more from your Intel processors for AI-specific applications and workloads? >> Yeah, well, you know, there's a ton of software optimization that goes into this. Having a great CPU is definitely step one, but ultimately you want to get down into the libraries, like TensorFlow. We have data analytics acceleration libraries. That really allows you to get under the covers a little bit and look at how we get the most out of the kinds of capabilities that are ultimately used in machine learning and deep learning, and then bring that forward and enable it with our software vendors so that they can take advantage of those acceleration components. Ultimately, that can mean less training time, or it could be the cost factor. Those are the kinds of capabilities we want to expose to software vendors through these kinds of partnerships. >> Okay, and that's terrific. And I do think that's a big part of the story that a lot of people are probably not as aware of: there are a lot of these optimization opportunities that you guys have been leveraging for a while. So, shifting gears a little bit: AI and machine learning is all about the data. And in doing a little research for this, I found you on stage talking about a company that had 315 petabytes of data, 140,000 sources of that data, and, I think, a not-so-great quote of six months' access time to actually get at it and work with it. And the company you were referencing was Intel.
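One concrete example of the kind of software hook being described: recent TensorFlow builds can toggle Intel's oneDNN-optimized kernels through an environment variable. A minimal sketch, assuming the `TF_ENABLE_ONEDNN_OPTS` flag; behaviour varies by TensorFlow version and build, so treat this as illustrative rather than definitive.

```python
# Hedged sketch: Intel's CPU optimizations for deep learning frameworks
# are exposed through the oneDNN library, and recent TensorFlow builds
# toggle them via the TF_ENABLE_ONEDNN_OPTS environment variable. The
# variable must be set before TensorFlow is imported; whether it has any
# effect depends on the TensorFlow version, so this is a best-effort request.

import os

def request_onednn_kernels():
    """Ask a subsequent TensorFlow import to use oneDNN-accelerated ops."""
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"
    return os.environ["TF_ENABLE_ONEDNN_OPTS"]

flag = request_onednn_kernels()
print("TF_ENABLE_ONEDNN_OPTS =", flag)
```

The point is that the same model code can pick up substantially faster CPU kernels with no algorithmic change, which is the "under the covers" optimization work being described.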
So you guys know a lot about data: managing data, everything from your manufacturing to obviously supporting a global organization for IT, with a lot of complexity. So what have you learned at Intel about the way you work with data and building a good data pipeline that's enabling you to put that into these solutions you're providing to customers? >> Right. Well, you know, it's absolutely a journey, and it doesn't happen overnight. We've seen it at Intel, and we see it with many of our customers that are on the same journey that we've been on. And so this idea of building that pipeline really starts with: what kind of problems are you trying to solve? What are the big issues that are holding you back as a company, where you see that competitive advantage you're trying to get to? And then, ultimately, how do you build the structure to enable the right kind of pipeline for that data? Because that's what machine learning and deep learning is: that data journey. So there's really a lot of focus around how we can understand those business challenges and bring forward those kinds of capabilities along the way, through to where we structure our entire company around those assets — and then, ultimately, some of the partnerships we're going to be talking about: these companies that are out there to help us really squeeze the most out of that data as quickly as possible, because otherwise it goes stale real fast, sits on the shelf, and you're not getting the value out of it. So, yeah, we've been on the journey. It's a long journey, but ultimately we can take a lot of those learnings and apply them to our silicon technology, the software optimizations that we're doing, and, ultimately, how we talk to our enterprise customers about how they can overcome some of the same challenges that we did.
>> Well, let's talk about some of those challenges specifically, because I think part of what kind of knocked big data — and Hadoop, if you will — off the rails a little bit was that there's a whole lot that goes into it besides just doing the analysis. There's a lot of data practice: data collection, data organization, a whole bunch of things that have to happen before you can actually start to do the sexy stuff of AI. So what are some of those challenges, and how are you helping people get over these baby steps before they can really get into the deep end of the pool? >> Yeah, well, one is you have to have the resources. Do you even have the resources? If you can acquire those resources, can you keep them interested in the kind of work that you're doing? So that's a big challenge, and we'll talk about how that fits into some of the partnerships that we've been establishing in the ecosystem. It's also that you get stuck in this POC loop, right? You finally get those resources, and they start to get access to that data we talked about. They start to play out some scenarios, they theorize a little bit, maybe they show you some really interesting value — but it never seems to make its way into full production mode. And I think that is a challenge that has faced so many enterprises that are stuck in that loop. And so that's where we look at who's out there in the ecosystem that can help more readily move through that whole process of the evaluation that proves the ROI — the POC — and ultimately move that capability into production mode as quickly as possible. That, to me, is one of the fundamental aspects: if you're stuck in the POC, nothing's happening. This is not helping your company. We want to move things more quickly. >> Right, right.
And let's just talk about some of these companies you guys are working with, that you've got some reference architectures with: DataRobot, Grid Dynamics, H2O, just down the road — a lot of companies we've worked with on theCUBE. And I think another part that's interesting, that again we can learn from the old days of big data, is generalized AI versus solution-specific AI. I think where there's a real opportunity is not AI for AI's sake; it really has to be applied to a specific solution, a specific problem, so that you have better chatbots, a better customer service experience, better something. So when you were working with these folks and trying to design solutions, what were some of the opportunities you saw to work with them to get an applied application-slash-solution, versus just AI for AI's sake? >> Yeah. I mean, that could be anything from fraud detection in financial services, or even taking a step back and looking more horizontally, like back at that data challenge. If you're stuck where you've built a fantastic data lake but haven't been able to pull anything back out of it, who are some of the companies out there that can help overcome some of those big data challenges and ultimately get you to where you don't have a data scientist spending 60% of their time on data acquisition and pre-processing? That's not where we want them, right? We want them building out the next theory, looking at the next business challenge, selecting the right models — but ultimately they have to do that as quickly as possible, so that they can move that capability forward into the next phase. So really, it's about that connection of looking at those problems and challenges across the whole pipeline. And these companies, like DataRobot and H2O, are all addressing specific challenges in that end-to-end.
That's why they've kind of bubbled up as ones that we want to continue to collaborate with: they can help enterprises overcome those issues more quickly, more readily. >> Great. Well, Jeremy, thanks for taking a few minutes and giving us the Intel side of the story. It's a great company; it's been around forever. I worked there many, many moons ago — that's a story for another time — but I really appreciate it, and we'll leave it there. All right, so, super. Thanks a lot. So that's Jeremy; I'm Jeff Frick. So now it's time to go ahead and jump into the crowd chat. It's crowdchat.net/makeaireal. We'll see you in the chat, and thanks for watching.