
Seamus Jones & Milind Damle


 

>>Welcome to theCUBE's continuing coverage of AMD's fourth-generation EPYC launch. I'm Dave Nicholson, and I'm joining you here in our Palo Alto studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD, and we're also joined by Seamus Jones, who's a director of server engineering at Dell Technologies. Welcome, gentlemen. How are you? >>Very good, thank you. Welcome to theCUBE. So let's start out really quickly. Seamus, give us a thumbnail sketch of what you do at Dell. >>Yeah, so I'm the director of technical marketing engineering here at Dell, and our team really takes a look at the technical server portfolio and solutions, and ensures that we can look at the performance metrics, benchmarks, and performance characteristics, so that we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell. >>Milind, how about you? What's new at AMD? What do you do there? >>Great to be here. Thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth-generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >>Seamus, talk about that relationship a little bit more, the relationship between AMD and Dell. How far back does it go? What does it look like in practical terms? >>Absolutely. You know, ever since AMD re-entered the server space, we've had a very close relationship.
It's one of those things where we offer solutions to our customers no matter what generation of the portfolio they're demanding, whether from a competitor or from AMD. What we're finding is that with each generational improvement, they're just getting better and better. Really exciting things are happening at AMD at the moment, and as we engineer those CPU stacks into our server portfolio, we're really seeing unprecedented performance across the board. So we're excited about the history. My team and Milind's team work very closely together, so much so that we're communicating almost on a daily basis around portfolio platforms and updates around the benchmark testing and validation efforts. >>So Milind, are you happy with these PowerEdge boxes that Seamus is building to house your baby? >>We are delighted. You know, it's hard to find stronger partners than Seamus and Dell. With AMD's second-generation EPYC server CPUs, we already had indisputable industry performance leadership, and then with the third and now the fourth generation CPUs, we've just increased our lead over the competition. We've got so many outstanding features at the platform and CPU level. Everybody focuses on the high core counts, but there's also DDR5 memory, the I/O, and the storage subsystem. So we believe we have a fantastic performance and performance-per-dollar, performance-per-watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership. >>Well, so Seamus... yeah, go ahead. >>Dave, what I'd add is that through the partnership we've had, we've been able to develop subsystems and platform features that historically we couldn't have, really things around thermals, power efficiency, and efficiency within the platform.
That means that customers can get the most out of their compute infrastructure. >>So this is going to be a big question moving forward as next-generation platforms are rolled out: there's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, you want to hit that first? >>Absolutely, yeah. I'll tell you what: at the moment, customers really can't afford not to upgrade, right? We've taken a look at the cost basis of keeping older infrastructure in place, let's say five- or seven-year-old servers that are drawing more power, are poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain, and keep in production. So as customers look to upgrade or refresh their platforms, what we're finding is that they can do a dramatic consolidation, sometimes 5-, 7-, or 8-to-1, depending on which platform they have historically and which one they're looking to upgrade to. Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework, and I'm talking here specifically about CPU-based performance. Even though in a lot of those AI frameworks you would also expect to have GPUs, and all four of the platforms we're offering in the AMD portfolio today offer multiple GPU options, we're seeing a balance between a huge amount of CPU gain in performance as well as more and more GPU offerings within the platform.
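The refresh arithmetic Seamus describes, an N-to-1 consolidation of older servers, can be sketched in a few lines. This is a minimal illustration; the per-server wattages here are invented assumptions, not Dell or AMD figures.

```python
# Hedged sketch of the N-to-1 server-consolidation math discussed above.
# Per-server power draws are illustrative assumptions only.

def consolidation_savings(old_servers, ratio, old_watts=750, new_watts=1100):
    """Estimate fleet size and power saved after an N:1 consolidation.

    old_servers: count of legacy servers being refreshed
    ratio:       consolidation ratio, e.g. 5 for a 5:1 refresh
    old_watts / new_watts: assumed average draw per server (hypothetical)
    """
    new_servers = -(-old_servers // ratio)  # ceiling division
    old_power = old_servers * old_watts
    new_power = new_servers * new_watts
    return new_servers, old_power - new_power

servers, watts_saved = consolidation_savings(100, 5)
print(servers, watts_saved)  # 20 53000
```

Even with each new node drawing more power than a legacy box, a 5:1 consolidation of 100 servers leaves 20 units and, under these assumed wattages, cuts fleet draw by well over half.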
That was a real challenge for us because of the thermals. I mean, GPUs are going up to 300, 400 watts, and these CPUs at 96 cores are quite demanding thermally. But through some unique smart-cooling engineering within the PowerEdge portfolio, we can take a look at those platforms and make the most efficient use case by having things like telemetry within the platform, so that we can dynamically change fan speeds to get customers the best performance without throttling, based on their need. >>Milind, theCUBE was at the Supercomputing conference in Dallas this year, Supercomputing 2022, and a lot of the discussion was around not only advances in microprocessor technology but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly focusing on just the piece that you are bringing to the party? It's kind of a potluck, you know. We mentioned PCIe Gen 5, or 5.0, whatever you want to call it, new DDR, storage cards, NICs, accelerators, all of those things. How do you keep that straight when those aren't things that you actually build? >>Excellent question, Dave. As we are developing the next platform, obviously the ongoing relationship with Dell is there, but we start way before launch, sometimes multiple years before launch. So we are not just focusing on the super-high core counts at the CPU level and the platform configurations, whether single-socket or dual-socket; we are looking at it from the memory subsystem, from the I/O subsystem. PCIe lanes for storage are a big deal, for example, in this generation. So it's really a holistic approach. And look, core counts are more important at the higher end for some customers, the HPC space, some of the AI applications.
But on the lower end you have database applications or other ISV applications that care a lot about those other subsystems. So different things matter to different folks across verticals. >>We partnered with Dell very early in the cycle, and it's really a joint co-engineering effort. Seamus talked about the focus on AI with TPCx-AI; we set five world records in that space just on that one benchmark with AMD and Dell, so a fantastic kick-off across a multitude of scale factors. But TPCx-AI is not the only thing we are focusing on. We are also collaborating with Dell on some of the transformer-based natural language processing models that we worked on, for example. So it's not just a CPU story; it's CPU, platform, subsystem, software, the whole thing delivering goodness across the board to solve end-user problems in AI and other verticals. >>Yeah, the two of you are at the tip of the spear from a performance perspective, so I know it's easy to get excited about world records, and they're fantastic. But Seamus, end-user customers might immediately have the reaction, well, I don't need a Ferrari in my data center; what I need is to be able to do more with less. Aren't we delivering that also? And Milind, you mentioned natural language processing. Seamus, are you thinking in 2023 that a lot more enterprises are going to be able to afford to do things like that? I mean, what are you hearing from customers on this front? >>While the adoption of the top-bin CPU stack is definitely the exception, not the rule, today we are seeing marked performance gains even when we look at the mid-bin CPU offerings from AMD; those are the most commonly sold SKUs.
And when we look at customers' implementations, really what we're seeing is that they're trying to make the most not just of dollar spend, but also of the whole subsystem that Milind was talking about. The fact that balanced memory configs can give you marked performance improvements, not just at the CPU level but actually all the way through to application performance. So it's trying to find the correct balance between the application needs, your budget, power draw, and infrastructure within the data center, right? Because you could be purchasing and looking to deploy the most powerful systems, but if you don't have an infrastructure with the right power (and that's a large challenge happening right now) and the right cooling to deal with the thermal characteristics of the systems, you want to ensure that you can accommodate those, not just for today but in the future. >>So it's planning that balance. >>If I may just add onto that, right? When we launched, not just the fourth generation but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Seamus correctly said, it's not just that one core-count OPN; it's the whole stack. And we believe with our fourth-gen CPU processor stack, we've simplified things so much. We don't have dozens and dozens of offerings; we have a fairly simple SKU stack, but we also have a very efficient SKU stack. So even though at the top end we've got 96 cores, the thermal budget that we require is fairly reasonable. And look, with the energy crisis going on, especially in Europe, this is a big deal. Not only do customers want performance, but they're also super focused on performance per watt.
And so we believe with this generation, we really delivered not just on raw performance, but also on performance per dollar and performance per watt. >>Yeah, and it's not just Europe. We're here in Palo Alto right now, in California, where we all know the cost of an individual kilowatt-hour of electricity, because it's quite high. So thermals, power, cooling, all of that goes together, and that drives cost. So it's a question of how much you can get done per dollar. Seamus, you made the point that you don't just have a one-size-fits-all solution, that it's fit for function. I'm curious to hear from the two of you what your thoughts are from a general AI and ML perspective. We're starting to see right now, if you hang out on any kind of social media, the rise of these experimental AI programs that are being presented to the public. Some will write stories for you based on a prompt; some will create images for you. One of the more popular ones will create sort of a superhero alter ego for you. I can't wait to do it; I just got the app on my phone. So those are all fun and they're trivial, but they sort of get us used to this idea that, wow, these systems can do things, they can think on their own in a certain way. What do you see the future of that looking like over the next year in terms of enterprises and what they're going to do with it? >>Milind? Yeah, I can go first. >>Sure, yeah, good. >>So the couple of examples, Dave, that you mentioned are, I guess, a blend of novelty and curiosity. People using AI to write stories or poems, or even carve out little jokes, check grammar and spelling: very useful, but still kind of in the realm of novelty. In the mainstream, in the enterprise, look, in my opinion, AI is not just going to be a vertical; it's going to be a horizontal capability.
We are seeing AI deployed across the board, once the models have been suitably trained, for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets and in manufacturing, to things like the image classification or object detection that you talked about, in the sort of core AI space itself, right? So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch; we really look at AI emerging as a horizontal capability, and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >>Yeah, absolutely. AI as an outcome is really something companies are adopting, and the frameworks you're now seeing as the novelty pieces Milind was talking about are really indicative of the under-the-covers activity that's been happening within infrastructures and within enterprises for the past, let's say, five, six, seven years, right? The fact that you have object detection within manufacturing, to be able to do defect detection on manufacturing lines. Now that can be done on edge platforms, all the way out at the device. So you're no longer having to do things only in the data center; you can bring it right out to the edge and have that high-performance inferencing, and the trained models. Not necessarily training at the edge, but the inferencing models especially, so that you can have more and better use cases for some of these instances, things like smart cities with video detection. So that way they can see... especially during COVID, we saw a lot of hospitals and a lot of customers that were using image and spatial detection within their video feeds to be able to determine who and which employees were at risk during COVID.
So there's a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting, and I know my kids, my daughters, love that portion of it, but really, what's been happening has been exciting for quite a period of time in the enterprise space. We're just now starting to see those come to light in more of a consumer-relevant kind of use case. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in, because we do have more powerful compute at our fingertips, and we do have the ability to talk more about the framework and infrastructure that's right out at the edge. You know, Dave, in the past you've said things like the data center of 20 years ago is now in my hand as my cell phone. That's right, and that's a fact, and it's exciting to think where it's going to be in the next 10 or 20 years. >>One terabyte, baby. One terabyte. It's mind-boggling. And it makes me feel old. >>Yeah, me too. And Seamus, that all sounded great, but all I want is a picture of me as a superhero, so you guys are already way ahead of the curve. With that, on that note, Seamus, wrap us up with a summary of the highlights of what we just went through in terms of the performance you're seeing out of this latest-gen architecture from AMD. >>Absolutely. So within the TPCx-AI framework that Milind's team and my team have worked together on, we're seeing unprecedented price performance.
So the fact that you can get a 220% uplift gen-on-gen for some of these benchmarks, and you can have a five-to-one consolidation, means that if you're looking to refresh legacy platforms, you can get a huge amount of benefit, both in reduction in the number of units you need to deploy and in the amount of performance you can get per unit. Milind mentioned earlier CPU performance and performance per watt. Specifically, on the two-socket 2U platform using fourth-generation AMD EPYC, we're seeing 55% higher CPU performance per watt. For people who aren't necessarily looking at these statistics every generation of servers, that is a huge leap forward. That, combined with 121% higher SPEC scores as a benchmark, those are huge. Normally we see, let's say, a 40 to 60% performance improvement on the SPEC benchmarks; we're seeing 121%. So while that's really impressive at the top bin, we're actually seeing large percentage improvements across the mid bins as well, things in the range of 70 to 90% performance improvements in those standard bins. So it's a huge performance improvement and power efficiency, which means customers are able to save energy, space, and time based on their deployment size. >>Thanks for that, Seamus. Sadly, gentlemen, our time has expired. With that, I want to thank both of you; it's been a very interesting conversation. Thanks for being with us, and thanks for joining us here on theCUBE for our coverage of AMD's fourth-generation EPYC launch. Additional information, including white papers and benchmarks, plus editorial coverage, can be found on doeshardwarematter.com.
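As a quick check on the percentage claims in this wrap-up: a figure quoted as "X% higher" multiplies the baseline by (1 + X/100), so "121% higher" means roughly 2.2x the baseline, not 1.21x. A one-line helper makes that unambiguous (the baseline of 100 here is arbitrary, chosen only for illustration).

```python
# Interpreting "X% higher" claims from the conversation above.
# The percentages come from the transcript; the baseline value is made up.

def uplift(baseline, pct_higher):
    """Value after an 'X% higher' claim, e.g. 121% higher ~= 2.21x."""
    return baseline * (1 + pct_higher / 100)

print(uplift(100, 121))  # 221.0  -> "121% higher" SPEC score is ~2.2x
print(uplift(100, 55))   # 155.0  -> "55% higher" perf-per-watt is 1.55x
```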

Published Date : Dec 9 2022



Pradeep Sindhu, Fungible | theCUBE on Cloud 2021


 

>>From around the globe, it's theCUBE, presenting theCUBE on Cloud, brought to you by SiliconANGLE. As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology. Rather, it's the combination of data, applying machine intelligence, and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, in the last several years, alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of NVIDIA for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of the last decade, we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you. >>Thank you, Dave. And thank you for having me. >>You're very welcome. So my first question is, don't CPUs and GPUs process data already? Why do we need a DPU? >>You know, that is a natural question to ask. CPUs have been around in one form or another for almost 55, maybe 60 years. This is when general-purpose computing was invented, and essentially all CPUs went to the x86 architecture. Arm, of course, is used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of general-purpose CPUs has been refined heavily by some of the smartest people on the planet.
And for the longest time, improvements, you referred to Moore's law, which is really the improvement of the price-performance of silicon over time, combined with architectural improvements, were the thing pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to squeeze much more blood out of that stone with general-purpose computer architectures. What has also happened over the last decade is that Moore's law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10 to 20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from 2 to 2.5 years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well recognized, and we have to understand that they apply not just to general-purpose CPUs, but also to GPUs. Now, general-purpose CPUs do one kind of computation: they're really general, and they can do lots and lots of different things. It's actually a very, very powerful engine. The problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than CPUs, maybe 20, 30, 40 times better. Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but recently, in the last decade or so, they have been used heavily for AI and analytics computations. So now the question is, why do you need another specialized engine called the DPU?
Well, I started down this journey almost eight years ago, while I was still at Juniper Networks, which is another company that I founded. I recognized that in the data center, as the workload changes to address larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, what happens is that the amount of east-west traffic increases greatly. And you now have a new type of workload, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what data-centric means. >>Well, I wonder if I could interrupt you for a second, because I want those examples, and I want you to tie them into the cloud, because that's the topic we're talking about today and how you see it evolving. It's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure, a little compute, storage, networking, and now we have to get to, to your point, all this data in the cloud. And we're seeing, by the way, the definition of cloud expand into this distributed, or I think the term you use is disaggregated, network of computers. So you're a technology visionary, and I wonder how you see that evolving, and then please work in your examples of that critical data-centric workload. >>Absolutely happy to do that. So if you look at the architecture of cloud data centers, the single most important invention was scale-out: scale-out of identical or near-identical servers, all connected to a standard IP Ethernet network. That's the architecture. Now, the building blocks of this architecture are Ethernet switches, which make up the network, IP Ethernet switches.
And then the servers, all built using general-purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected to the CPU inside. Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how you build very large-scale infrastructure using general-purpose compute. But this architecture, Dave, is a compute-centric architecture, and the reason is that if you open a server node, what you see is a connection to the network, typically with a simple network interface card, and then you have the CPUs, which are in the middle of the action. Not only are the CPUs processing the application workload, they're also processing all of the I/O workload, what we call the data-centric workload. And so when you connect SSDs and hard drives and GPUs and everything to the CPU, as well as to the network, you can now imagine that the CPU is doing two functions: it's running the applications, but it's also playing traffic cop for the I/O. So every I/O has to go through the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many millions of times a second. Now, general-purpose CPUs and their architecture were never designed to play traffic cop, because the traffic-cop function requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of workloads which are data-centric has gone from maybe 1 to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to, say, 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 50 megahertz; the network was running at three megabits per second.
Well, today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2.3 to 3 gigahertz. So you've seen that there's a roughly 600x change in the ratio of I/O to compute, just on raw clock speed. Now you can tell me that, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the ratio of I/O to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. The DPU actually solves two fundamental problems in cloud data centers, and these are fundamental; there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture, the interactions between server nodes are very inefficient. Okay, that's problem number one. Problem number two is that these data-centric computations, and I'll give you four examples: the network stack, the storage stack, the virtualization stack, and the security stack, are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs, you'll run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems, and you don't solve them by just taking older architectures off the shelf and applying them to these problems, because that's what people have been doing for the last 40 years. So what we did was create this new microprocessor that we call the DPU from the ground up. It's a clean-sheet design, and it solves those two fundamental problems. >>So I want to get into that.
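Sindhu's back-of-envelope ratio can be checked directly. The sketch below uses the figures from the conversation (3 Mb/s and 50 MHz in 1987, 100 Gb/s today) plus an assumed per-core clock of 2.3 GHz taken from his remark; with those inputs the arithmetic gives roughly 725x, the same order as the ~600x he cites (the exact multiple depends on the clock assumption).

```python
# Reproducing the I/O-to-compute ratio argument with figures from the talk.
# The 2.3 GHz single-core clock is an assumption drawn from the speaker's words.
net_1987, clk_1987 = 3e6, 50e6    # 3 Mb/s network, 50 MHz CPU (1987)
net_now,  clk_now  = 100e9, 2.3e9 # 100 Gb/s network, ~2.3 GHz core (today)

ratio_1987 = net_1987 / clk_1987  # network bits arriving per clock cycle, 1987
ratio_now  = net_now / clk_now    # network bits arriving per clock cycle, today
print(round(ratio_now / ratio_1987))  # 725
```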
But I just want to stop you for a second and ask a basic question, which is, if I understand it correctly: if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >>That is correct. And you know, the workloads that we have today are very data-heavy. Take AI, for example; take analytics, for example. It's well known that for AI training, the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go, especially when people have figured out a formula that, hey, the more data I collect, the more I can use those insights to make money. >>Yeah, this is why I wanted to talk to you, because for the last 10 years we've been collecting all this data. Now I want to bring in some other data that you actually shared with me beforehand, some market trends that you cited in your research. The first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud, and there was a security angle there as well; that's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPUs with alternative processing technology. So, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture and how you've approached this. You've clearly laid out that x86 is not going to solve this problem, and even GPUs are not going to solve this problem. So help us understand the architecture and how you do solve it. >>I'll be very happy to. Remember, I used this term "traffic cop."
And I use this term very specifically because, first, let me define what I mean by a data centric computation, because that's the essence of the problem we solve. Remember, I said two problems. One is we execute data centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently. And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first, let's look at the data centric piece. For a workload to qualify as being data centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything new. Secondly, this workload is heavily multiplexed, in that there are many, many computations happening concurrently, thousands of them. That's number two, a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order; you have to do them in order, because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of I/O to arithmetic is medium to high. When you put all four of them together, you actually have a data centric workload, right? And this workload is terrible for general purpose CPUs. Not only does the general purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, what we did was build an architecture that consists of very heavily multithreaded general purpose CPUs combined with very heavily threaded specific accelerators.
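The four criteria just listed can be captured in a short predicate. This is an illustrative sketch, not Fungible's actual classification logic; the field names and the thresholds for "thousands of computations" and "medium to high" I/O ratio are assumptions made up for the example.

```python
# Illustrative sketch of the four data-centric criteria described above.
from dataclasses import dataclass

@dataclass
class Workload:
    arrives_as_packets: bool      # 1) work comes over the network as packets
    concurrent_contexts: int      # 2) heavily multiplexed: thousands of computations
    stateful: bool                # 3) packets must be processed in order
    io_to_arithmetic: float       # 4) ratio of I/O to arithmetic (medium to high)

def is_data_centric(w: Workload) -> bool:
    """All four conditions must hold for a workload to be data centric."""
    return (w.arrives_as_packets
            and w.concurrent_contexts >= 1000     # "thousands" (assumed threshold)
            and w.stateful
            and w.io_to_arithmetic >= 0.5)        # "medium to high" (assumed threshold)

# A TCP-terminating storage service qualifies; a pure matrix multiply does not.
storage_service = Workload(True, 4096, True, 2.0)
matrix_multiply = Workload(False, 8, False, 0.01)
```

Under these assumed thresholds, the storage service is classified as data centric and the matrix multiply is not.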
I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, and lookup accelerators; these are just some of them. These are functions that, if you do not specialize, you're not going to execute efficiently. But you cannot just put accelerators in there. These accelerators have to be multithreaded to handle the load. We have something like 1000 different threads inside our DPU to address these many, many computations that are happening concurrently, and to handle them efficiently. Now, the thing that is very important to understand is the paucity of transistors. I know that we have hundreds of billions of transistors on a chip, but the problem is that those transistors are used very inefficiently today by the architecture of the CPU or GPU. What we have done is improve the efficiency of those transistors by 30 times. >>So you can use the real estate, you can use that real estate, more effectively. >>Much more effectively, because we were not trying to solve a general purpose computing problem. If you do that, you're going to end up in the same bucket where general purpose CPUs are today. We were trying to solve the specific problem of data centric computations and of improving the node-to-node efficiency. So let me go to point number two, because that's equally important. In a scale-out architecture, the whole idea is that I have many, many nodes and they're connected over a high performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why. Well, the reason is that if I try to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today.
There's only one solution today, which is to use TCP. Well, TCP is well known; it's part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol, but it was invented 42, 43 years ago now. >>Very reliable, tested, and proven. It's got a good track record, but... >>A very good track record. Unfortunately, it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP, how would you apply it to the data center? That's what we've done with what we call FCP, a fabric control protocol, which we intend to open; we intend to publish standards and make it open. And when you do that, and you embed FCP in hardware on top of a standard IP network, you end up with the ability to run very large scale networks where the utilization of the network is 90 to 95%, not 20 to 25%, and you end up solving the problems of congestion at the same time. Now, why is this important today? That's all geek speak so far. The reason this stuff is important is that such a network allows you to disaggregate, pool, and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side, and increasingly even things like DRAM want to be disaggregated and pooled. Well, if I put everything inside a general purpose server, the problem is that those resources get stranded, because they're stuck behind the CPU. Once you disaggregate those resources, and we're saying hyper disaggregate, where the hyper simply means that you can disaggregate almost all the resources... >>And then you're going to re-aggregate them, right? I mean, that's... >>Exactly, and the network is the key enabler.
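The utilization gap described here is easy to put into numbers. A back-of-the-envelope sketch using the figures quoted in the conversation; the 100 Gbps link speed is an illustrative assumption, not a quoted spec.

```python
# Effective fabric bandwidth at the two utilization levels quoted above.
link_gbps = 100.0        # illustrative link speed (assumption)

tcp_utilization = 0.25   # "no more than 20 to 25%" under TCP today
fcp_utilization = 0.90   # "90 to 95%" claimed with hardware FCP

tcp_effective = link_gbps * tcp_utilization   # 25 Gbps actually usable
fcp_effective = link_gbps * fcp_utilization   # 90 Gbps actually usable
gain = fcp_effective / tcp_effective          # 3.6x more usable bandwidth

print(f"{tcp_effective:.0f} Gbps -> {fcp_effective:.0f} Gbps ({gain:.1f}x)")
```

At the conservative ends of both quoted ranges, the same links carry 3.6 times more useful traffic.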
So the reason the company is called Fungible is because we are able to disaggregate, virtualize, and then pool those resources. And you can see that the scale-out companies, the large ones, AWS, Google, etcetera, have been doing this disaggregation and pooling for some time, but because they've been using a compute centric architecture, that disaggregation is not nearly as efficient as we could make it. They're off by about a factor of three. When you look at enterprise companies, they're off by another factor of four, because the utilization of enterprise infrastructure is typically around 8% of overall infrastructure, while the utilization in the cloud, for AWS and GCP and Microsoft, is closer to 35 to 40%. So there is a factor of almost 4 to 8 which you can gain by disaggregating and pooling. >>Okay, so I want to interrupt again. These hyperscalers are smart; they have a lot of engineers. And you're right, they're using a lot of general purpose compute, but we've seen them make moves toward GPUs and embrace things like Arm. So, I know you can't name names, but you would think that, with all the data that's in the cloud, again our topic today, the hyperscalers are all over this. >>All the hyperscalers recognize that the problems we have articulated are important ones, and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years, and so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >>They have technical debt, you mean. >>I'm not going to say they have technical debt, but they have a certain way of doing things, and they are in love with the compute centric way of doing things.
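The factor-of-4-to-8 claim can be sanity-checked from the utilization figures quoted above. Only the quoted percentages are used; the result is an estimate implied by the conversation, not a measurement.

```python
# Utilization figures as quoted in the interview (not independent data).
enterprise_util = 0.08            # "typically around 8%"
cloud_low, cloud_high = 0.35, 0.40  # AWS / GCP / Microsoft, "closer to 35 to 40%"

# Gain for an enterprise that reaches cloud-level pooling efficiency:
gain_low = cloud_low / enterprise_util     # ~4.4x
gain_high = cloud_high / enterprise_util   # 5.0x
```

That puts the enterprise-to-cloud gap at roughly 4 to 5 times; combined with the further factor-of-three inefficiency claimed even for the clouds, the overall headroom lands in the "almost 4 to 8" range quoted.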
And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term smart NIC, and all your listeners must have heard that term. Well, a smart NIC is not a DPU. What a smart NIC is, is simply taking general purpose Arm cores, putting in a network interface and a PCIe interface, integrating them all in the same chip, and separating them from the CPU. So this does solve a problem. It solves the problem of the data centric workload interfering with the application workload. Good job. But it does not address the architectural problem of how to execute data centric workloads efficiently. >>Yeah, I understand what you're saying; I was going to ask you about smart NICs. It's almost like a bridge or a Band-Aid. It reminds me of throwing flash storage onto a disk system that was designed for spinning disk: it gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >>Yeah, the analogy is close, because, okay, let's take hyperscaler X, so we don't name names. You find that half my CPUs are twiddling their thumbs because they're executing this data centric workload. Well, what are you going to do? All your code is written in C and C++ on x86. The easiest thing to do is to separate out the cores that run this workload and put it on a different processor. Let's say we use Arm, simply because x86 licenses are not available for people to build their own CPUs, so Arm was available. So they put in a bunch of Arm cores, stick a PCI Express and network interface on it, and port that code from x86 to Arm.
Not difficult to do, but it does yield you results. And by the way, if, for example, this hyperscaler X, shall we call them, is able to remove 20% of the workload from general purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation other than porting code from one place to another place. >>But that's what I'm saying. I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together? That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see it when the hyperscalers Microsoft Azure and AWS both announced, I think, that they depreciate servers over five years now instead of four; it dropped like a billion dollars to their bottom lines. But why not just work directly with you guys? I mean, it's the logical play. >>Some of them are working with us. So it's not to say that they're not working with us. All of the hyperscalers recognize that the technology that we're building is fundamental, that we have something really special, and moreover, it's fully programmable. You see, the whole trick is that you can actually build a lump of hardware that is fixed function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, the functionality needs to be programmable. And so the whole trick is how you come up with an architecture where the functionality is programmable, but it is also very high speed for this particular set of applications.
So the analogy with GPUs is nearly perfect, because with GPUs, and particularly NVIDIA, they invented CUDA, which is a programming language for GPUs, and it made them easy to use, made them fully programmable, without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, and we've made it very easy to program. And these workloads, the computations that I talked about, which are security, virtualization, storage, and then networking, those four are quintessential examples of data centric workloads, and they're not going away. In fact, they're becoming more and more important over time. >>I'm very excited for you guys, and I really appreciate it, Pradeep. We're going to have to have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding, crypto accelerators; I want to understand that. I know there's NVMe in here; there's a lot of hardware and software and intellectual property. But we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of, I like the term, a massively disaggregated network. Hyper disaggregated: even better. And I would say this on the way out, I've got to go, but what got us here the last decade is not the same as what's going to take us through the next decade. Thanks so much for coming on the Cube. It's a great company. >>Thank you. It's really a pleasure to speak with you and get the message of Fungible out there. >>And I promise we'll have you back. Keep it right there, everybody; we've got more great content coming your way on the Cube on Cloud. This is Dave. Stay right there.

Published Date : Jan 22 2021


Photonic Accelerators for Machine Intelligence


 

>>Hi, I'm Dirk Englund, and I am an associate professor of electrical engineering and computer science at MIT. It's been fantastic to be part of this team that Professor Yamamoto put together for the NTT PHI program, and it's a great pleasure to report to you an update from the first year. I will talk to you today about our recent work in photonic accelerators for machine intelligence. You can already get a flavor of the kind of work that I'll be presenting from the photonic integrated circuit that serves as a photonic matrix processor, which we are developing to try to break some of the bottlenecks that we encounter in inference machine learning tasks, in particular tasks like vision, games, control, or language processing. This work is jointly led with Dr. Ryan Hamerly, a scientist at NTT Research, and he will have a poster that you should check out in this conference. I should also say that there are postdoc positions available; just take a look at the announcements at QP Lab at MIT dot edu. So if you look at these machine learning applications and look under the hood, you see that a common feature is that they use these artificial neural networks, or ANNs, where you have an input layer of, let's say, n neurons and values that is connected to the first layer of, let's say, also n neurons, and connecting the first to the second layer would, if you represented it by a matrix, require an n-by-n matrix that has of order n-squared free parameters. >>Okay, now, in traditional machine learning inference, you would have to grab these n-squared values from memory, and every time you do that it costs quite a lot of energy. Maybe you can cache, but it's still quite costly in energy. Moreover, each of the input values >>has to be multiplied by that matrix, and if you multiply an n-by-one vector by an n-by-n matrix, you have to do of order n-squared multiplications.
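The counting argument above can be made concrete with a tiny dense layer. The sketch below just tallies the order n-squared work of a matrix-vector product; it is illustrative, not a model of the photonic hardware.

```python
# One dense layer, as described above: an n-vector times an n-by-n weight
# matrix has n^2 free parameters, and a forward pass does order n^2
# multiply-accumulates (and, on a digital machine, n^2 weight fetches).
import numpy as np

n = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))   # n^2 free parameters to fetch from memory
x = rng.standard_normal(n)

y = W @ x                          # order n^2 multiply-accumulates

num_parameters = W.size            # n * n
num_macs = n * n                   # one MAC per weight per pass
```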
Okay, now, on a digital computer you therefore have to do of order n-squared operations and memory accesses, which can be quite costly. But the proposition is that on a photonic integrated circuit, perhaps we could do that matrix-vector multiplication directly on the PIC itself, by encoding optical fields and sending them through a programmed interferometer mesh, so that the outputs would be the product of the matrix multiplied by the input vector. And that is actually the experiment we did, demonstrating that this is, in principle, possible, back in 2017, in a collaboration with Professor Marin Soljacic. Now, if we look a little bit more closely at the device shown here, it consists of a silicon layer that is patterned into waveguides. We do this with a foundry; this was fabricated through the OpSIS foundry, and many thanks to our collaborators who helped make that possible. This layer guides light, and we use pairs of these waveguides to make these two-by-two transformations, Mach-Zehnder interferometers, as they're called: >>two input waveguides coming in, two output waveguides going out. And by having two phase settings here, theta and phi, we can control any arbitrary SU(2) rotation. Now, if I want n modes coming in and n modes coming out, that can be represented by an SU(n) unitary transformation, and that's what this kind of chip allows you to do. That's the key ingredient that really launched us in my group. I should at this point acknowledge the people who have made this possible, and in particular point out Liane Bernstein and Alex Sludds, as well as Ryan Hamerly once more, also our other collaborators, Professor Marin Soljacic, and of course our funding, in particular now the NTT Research funding. So why optics? Optics has failed many times before in building computers. But why is this different?
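A single Mach-Zehnder interferometer of the kind described, two inputs, two outputs, two phase settings, can be sketched numerically. The parameterization below (one internal and one external phase shifter around 50:50 beamsplitters) is one common convention; the actual device convention may differ, but any setting of the two phases yields a valid 2-by-2 unitary.

```python
# One programmable 2x2 building block of the mesh described above.
import numpy as np

def mzi(theta: float, phi: float) -> np.ndarray:
    """2x2 unitary of a Mach-Zehnder interferometer (one common convention)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beamsplitter
    inner = np.diag([np.exp(1j * theta), 1.0])        # internal phase shifter
    outer = np.diag([np.exp(1j * phi), 1.0])          # external phase shifter
    return bs @ inner @ bs @ outer                    # applied right-to-left

U = mzi(0.7, 1.3)
# Unitarity means optical power is conserved through the device.
assert np.allclose(U.conj().T @ U, np.eye(2))
```

Cascading order n-squared of these blocks in a triangular or rectangular mesh builds up an arbitrary n-mode unitary, which is how the chip realizes a full n-by-n matrix.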
And I think the difference is that now we're not trying to build an entirely new computer out of optics; we're selective in how we apply optics. We should use optics for what it's good at, and that's probably not nonlinearity, and not necessarily memory. Communication and fan-out are great in optics, and, as we just said, linear algebra you can do in optics fantastically. So you should make use of these things and then combine them judiciously with electronic processing, to see if you can get an advantage in the entire system out of it. And before I move on: based on the 2017 paper, two startups were created, Lightelligence and Lightmatter, and two students from my group, Nick Harris among them, co-founded Lightmatter. And after about two years, they've been able to create their first device, >>the first large-scale matrix processor. This device, called Mars, has 64 input modes, 64 output modes, and full programmability under the hood. Because they're integrating waveguides directly with CMOS electronics, they were able to get all the wiring complexity dealt with, all the feedback and so forth. And this device is now able to process a 64-by-64 unitary matrix on the fly. The parameters: about three watts total power consumption, and a latency, how long it takes for a matrix to be multiplied by a vector, of less than a nanosecond. And because this device works well over a pretty large bandwidth, 20 gigahertz, you can put in many channels that are individually at one gigahertz, so you can have tens of these SU(64) rotations running simultaneously. The sort of back-of-the-envelope physics gives you that, per multiply-accumulate, you have just tens of femtojoules at the moment. So that's very, very competitive. That's awesome.
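The "tens of femtojoules" figure follows from the numbers quoted for Mars. The channel count below is an assumption chosen to illustrate "tens of channels"; the power, mode count, and per-channel rate are the figures quoted in the talk, rounded.

```python
# Back-of-the-envelope energy per MAC for a 64x64 photonic processor.
n = 64                      # 64 input and 64 output modes, as quoted
channels = 50               # "tens of channels" at ~1 GHz each (assumption)
rate_per_channel_hz = 1e9   # one gigahertz per channel, as quoted
power_w = 3.0               # roughly three watts total, as quoted

macs_per_second = n * n * channels * rate_per_channel_hz
energy_per_mac_j = power_w / macs_per_second   # ~1.5e-14 J: tens of femtojoules
```

With these assumptions the result lands around 15 femtojoules per multiply-accumulate, consistent with the "tens of femtojoules" quoted.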
Okay, so you can see, then, potentially the breakthroughs that are enabled by photonics here. And actually, more recently, one thing that made this possible is very cool: these phase shifters actually have no hold power, whereas our earlier phase shifters used a different kind of modulation. These use nanoscale mechanical modulators that have no hold power, so once you program a unitary, you can just hold it there, with no energy consumption added over >>time. So photonics really is on the rise in computing. But once again, you have to be careful in how you compare against electronics to find where the gain is to be had. What I've talked about so far is weight-stationary photonic processing. Now, electronics has that also, but it doesn't have the benefits of the coherence of the optical fields transitioning through this matrix, nor the bandwidth. So that is, I think, a really exciting direction, and these companies are off building these chips, and we'll see over the next couple of months how well this works. A different direction is to have an output-stationary matrix-vector multiplication, and for this I want to point to the paper we wrote with Ryan Hamerly and the other team members, which projects the activation functions together with the weight terms onto a detector array, multiplying the activation function and the weight term by homodyne >>detection. If you think about homodyne detection, it actually automatically produces the multiplication: the interference term between two optical fields gives you the product of the two. And so that's what this is making use of. I want to talk a little bit more about that approach.
So we actually did a careful analysis in the PRX paper that was cited on the last >>page, and that analysis of the energy consumption shows that this device, in principle, can compute at an energy per multiply-accumulate that is below what you could theoretically do at room temperature using an irreversible computer, like the digital computers that we use in everyday life. I want to illustrate that, and you can see it from this plot here. What it is showing: on the horizontal axis is the number of neurons that you have per layer, and on the vertical axis is the energy per multiply-accumulate, in joules. When we make use of the massive fan-out together with this photoelectric multiplication by coherent detection, we estimate that >>we're on this curve here. So, since our energy consumption scales as n, whereas for a digital computer it scales as n-squared, we gain more as you go to larger matrices. So for the largest matrices, matrices of >>scale 1,000 to 5,000, even with present-day technology, we estimate that we would hit an energy per multiply-accumulate of about a femtojoule. But if we imagine a photonic device that >>uses a photonic system built from devices that have already been demonstrated individually in research papers, but not yet packaged into a large system, we would be on this curve here, where you would very quickly dip underneath the Landauer limit, which corresponds to the thermodynamic limit for doing the many bit operations that you would have to do to run the same depth of neural network as we do here. And I should say that all of these numbers were computed for this simulated >>optical neural network having the equivalent error rate that a fully digital computer would have; equivalent in the error rate, so it's limited in error by the model itself rather than by the imperfections of the devices.
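The scaling argument, optical energy growing as n while the work grows as n-squared, can be sketched in a few lines. Both energy constants below are placeholders, not measured values; only the relative scaling matters for the point being made.

```python
# Why the optical advantage grows with layer width n: total optical energy
# scales as n (roughly, a photon budget per mode), while one pass performs
# n^2 MACs, so optical energy per MAC falls as 1/n. A digital chip pays a
# roughly fixed energy for each of its n^2 MACs.
E0 = 1e-12          # optical energy scale per mode (placeholder)
E_DIGITAL = 1e-12   # digital energy per MAC (placeholder)

def optical_energy_per_mac(n: int) -> float:
    return (E0 * n) / (n * n)     # total ~ n, work ~ n^2  ->  ~ E0 / n

def digital_energy_per_mac(n: int) -> float:
    return E_DIGITAL               # independent of layer size

# With equal constants, the per-MAC advantage is simply n.
advantage = {n: digital_energy_per_mac(n) / optical_energy_per_mac(n)
             for n in (100, 1000, 5000)}
```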
And we benchmarked that on the MNIST data set. So that was a theoretical work that looked at the scaling limits and showed that there is great hope to gain tremendously in the energy per bit, but also in the overall latency and throughput. But you shouldn't celebrate too early: you have to do a careful system-level study comparing electronic approaches, which are oftentimes analog approaches, against the optical approaches. And we did that as the first major step in this digital optical neural network study here, which was done together with an electronics designer who works on CMOS made specifically for machine learning acceleration, and Professor Joel Emer of MIT, who is also a fellow at NVIDIA. What we studied there in particular is: what if we just replaced only the communication part with optics? We looked at getting the same equivalent error rates that you would have with an electronic computer, and that showed that this approach should have a benefit for large neural networks, because large neural networks require lots of communication and eventually do not fit on a single electronic chip anymore. At that point, you have to go longer distances, and that's where the optical connections start to win out. So for details, I would like to point to that system-level study. We're now applying more sophisticated studies like that, full system simulation, to our other optical networks, to really see where the benefits are that we might have and where we can exploit these. Lastly, I want to ask: what if we had nonlinearities that >>were actually reversible, that were quantum coherent, in fact? And we looked at that. So suppose we have the same architectural layout.
But rather than having, say, saturable absorption or photodetection and an electronic nonlinearity, which is what we've done so far, you have an all-optical nonlinearity, based, for example, on a Kerr medium. So suppose that we had a strong enough Kerr medium so that the output from one of these transformations can pass through it, get an intensity-dependent phase shift, and then pass into the next layer. What we did in this case is we said, okay, suppose that you have multiple layers of these interferometer meshes, just like the ones that we had before. >>And you want to train this to do something. So suppose that the training task is, for example, quantum optical state compression: you have a quantum optical state, and you'd like to see how much you can compress it while keeping the same quantum information in it. And we trained it to discover an efficient algorithm for that. We also trained it for reinforcement learning, for black-box quantum simulation, and, what is perhaps particularly interesting, a new scheme for one-way quantum repeaters. We said: if we have a communication network with these quantum optical neural networks stationed some distance apart, you come in with an optically encoded pulse that encodes an optical qubit into many individual photons. How do I repair that multi-photon state to send the corrected optical state out the other side? This is a one-way error correcting scheme. We didn't know how to build it, but we put it as a challenge to the neural network, and in simulation we trained the neural network how to apply the >>weights in the matrix transformations to perform that, answering an actual open challenge in the field of optical quantum networks. So that gives us motivation to try to build these kinds of nonlinearities. And we've done a fair amount of work.
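The all-optical nonlinearity described here, a Kerr medium imparting an intensity-dependent phase shift, has a simple mathematical form. This is a toy model of the activation, not a simulation of a real device; gamma is an arbitrary nonlinear coefficient chosen for illustration.

```python
# Toy Kerr activation: each mode's field picks up a phase proportional to
# its own intensity. The transformation is phase-only, so (unlike saturable
# absorption or photodetection) it conserves optical power and is reversible.
import numpy as np

def kerr_activation(field: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    intensity = np.abs(field) ** 2
    return field * np.exp(1j * gamma * intensity)

x = np.array([0.5 + 0.0j, 1.0 + 0.0j])
y = kerr_activation(x)

# Power preserved, but the state is changed: a nonlinear, reversible map.
assert np.allclose(np.abs(y), np.abs(x))
```

Layers of interferometer meshes interleaved with this kind of activation are the architecture being trained in the tasks that follow.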
On this you can see references five through seven. I've talked about these programmable photonics already, for the benchmark analysis and some of the other related work; please see Ryan's poster, where, as I mentioned, we have ongoing work in benchmarking >>optical computing as part of the NTT program with our collaborators. And I think the main thing that I want to say here at the end is that the exciting thing, really, is that the physics tells us there are many orders of magnitude of efficiency gains to be had, if we can develop the technology to realize them. I was being conservative here with three orders of magnitude; this could be six >>orders of magnitude for the larger neural networks that we may have to use, and that we may want to use, in the future. So the physics tells us there is a tremendous gap between where we are and where we could be, and that, I think, makes this tremendously exciting >>and makes the NTT PHI program so very timely. So with that, thank you for your attention, and I'll be happy to talk about any of these topics.

Published Date : Sep 21 2020

SUMMARY :

It's a great pleasure to report to you are update from the first year I And every time you do that, it costs quite a lot of energy. And that is actually the experiment. And as we just said, linear algebra, you can do in optics. rotations simultaneously that you could do the sort of back in the envelope. You have to be careful in how you compare So we actually did a careful analysis in the P R X paper that was cited in the last It's the number of neurons that you have per layer. So the more right. Okay, But if we look at if we many bit operations that you would have to do to do the same depth of neural network And we looked at, you know, getting the same equivalent Suppose that you have this. And we trained in, you know, in simulation we trained the neural network. Uh, in this you can see references five through seven. Uh, if we you know, if we can develop the technology to realize it. So the physics tells us there are there is, you know, thank you for your attention and I'll be happy.


Rob Lee, Pure Storage | Pure Accelerate 2019


 

>> From Austin, Texas, it's theCUBE, covering Pure Storage Accelerate 2019. Brought to you by Pure Storage. >> Hi, Lisa Martin with theCUBE, with Dave Vellante, my co-host. We're at Pure Accelerate 2019 in Austin, Texas. One of our Cube alumni is back with us: we have Rob Lee, the VP and chief architect at Pure Storage. Rob, welcome back. >> Thanks for having me. >> We're glad you have a voice. We know how challenging these events are, with about 3,000 partners, customers, and press all wanting to talk to one of the men who was on the keynote stage yesterday, where four announcements came out. I really enjoyed yesterday's keynote, but let's talk about one of those announcements in particular: Pure's bridge to the hybrid cloud. >> Absolutely, absolutely. Yeah, no, I mean, I think it's been a really exciting conference for us so far. Like you said, a lot of payload coming out. You know, as far as building the bridge to the hybrid cloud, this has been, I would say, a long time coming. We've been working down this path for a couple of years. We started by bringing some of the cloud-like capabilities that customers really wanted, and were able to achieve in the cloud, back into the data center. So you saw us do this in terms of making our on-prem products easier to manage, easier to use, easier to automate. But working with customers over the last couple of years, what we realized is that, as the cloud hype subsided and people took a more measured view of where the cloud fits into their strategies and what tools it brings, we could add value in the public cloud environment: the same types of enterprise capabilities, the same feature-rich data services and feature sets that we deliver on premises, delivered in the cloud. And so what we're looking to achieve is actually quite simple.
We want to give customers the choice of whether to run on premises or in the cloud. We want that to be purely an environmental choice. We don't want to put customers in a position where they have to make that choice and feel trapped in one location or another because of a lack of features, a lack of capabilities, or the economics. And so the way that we do that is by building the same types of capabilities in the cloud that we do on-prem, giving customers the freedom and flexibility to be agile. >> But, you know, you mentioned economics, and you were talking from a customer standpoint. I want to flip it to a technology-supplier standpoint. The economics of a vendor who traditionally sells on-prem, you would think, would be better than in the cloud, because you pay Amazon for all their services, or I guess the customer is paying for it. But you kind of saw your way through that; a lot of companies would be defensive on that. I wonder if you could comment. >> Yeah, no, so look, I think the hardware is only one piece of it, right? At the end of the day, even our products on-prem are really priced for value. We're delivering value to customers in our capabilities, our ease of use, our simplicity, the types of applications and workloads we're able to enable. And basically everything I just said is pretty much driven by software features, so in bringing those same capabilities into the cloud, naturally, most of that work is really in software. And then, as far as comparing the economics directly of on-prem versus cloud, it's really no secret, as the industry has gotten more understanding, that the cloud isn't the low-cost option in a lot of use cases, right?
And so, rather than comparing apples to apples, on-premises versus cloud, on either performance or economics, our goal is really to build the best products in either environment. So if a customer wants to run on-prem, we want to build the best darn product in that environment; if the customer wants to run in the public cloud, we want to build the best darn product for them in that environment. And increasingly, as customers want to use both environments hand in hand, we want to build the right capabilities to allow them to seamlessly do that. >> Well, I think it makes sense, because, as you know, we're talking to some customers. Last night we were asking what they have in their data center, and they've got a lot of stuff in the data center. To the extent that a company like Pure can say, OK, you've got simple, fast, et cetera on-prem, and we've now extended that to the cloud, your choice, they're going to spend more with you than they are with the guys that fight that. >> Yeah, absolutely. And, you know, I think if you look at our approach and how we've built the products and how we're taking them to market, we've taken a very different approach than some of the competitive set. In some ways, we've really just extended the same way that we think about innovation and product engineering from our existing on-prem portfolio into the cloud, which is: we look for hard problems to solve, we take the hard road, we build differentiated products, even if it takes us a little bit longer. You can see that in the product offerings. We've really focused on enabling tier-one, mission-critical applications. If you look at the competitive set, they haven't.
The reason why we did that is we knew, we had customers telling us: if you're a customer and you want to use the cloud, and you want to think about the cloud as a DR site, well, when something goes wrong and you fail over to your DR site, you need to be sure that it works exactly the same way there as it did on-prem. That's everything from data services and data-path features to all of the workflows and orchestration that go around them, because when your primary site goes down is not the time when you want to be discovering that, oh, there's a footnote on that feature, and it's not supported in the cloud version, that sort of thing. And so, like I said, the focus that we've put on the product development we've done toward Cloud Block Store has really been around creating the same level of enterprise-grade features and enabling those applications in the cloud as we do on-prem. >> You know, "we don't make the Amazon storage, we make the Amazon storage better," what's that commercial? That's essentially what you've done. >> That's essentially what we've done. You know, the great thing about that is that we've done it in close partnership with Amazon. We had Amazon on stage yesterday, and we were talking a little bit about that partnership process. And ultimately, I think why that partnership has been so successful is that we're both ultimately driven by the same thing, which is customer success. In the early days of working with Amazon, as we started coming up with the concept of Cloud Block Store and consulting with them, we were thinking about building it this way: what do you think? What services should we leverage in AWS to make this happen? It became pretty clear to them that we were setting out to build a differentiated product and not just tick off checkboxes, and that's when their eyes really lit up: okay, we really would like you to do a differentiated product here.
>> Hey, if this takes off, we're gonna sell a lot of EC2 and S3.
And, you know, I think a I was one of them when we when we first set out, we had really targeted Flash played at addressing a segment of the commercial HPC Chip Design Hardware Design software development market. Andi is actually a set of customers, very large Web property customer that came to us with an A I use case. They said, Hey, you know, we've got a ton of data video images, uh, text postings. And we want to do a lot of analysis of this. All right, I want to do a facial recognition. We want to do content and sentiment analysis. We've got the Jeep use. We think you guys have the right storage product for that, and that's really that's really taken off. And that was very much a customer driven area. We >> talked a little bit about that within video yesterday. About some of the customer catalyzed innovation where a is concerned. >> Absolutely. What do you see is the critical technical skills that pure needs in the next decade. I mean, you're five. Correct? Remember, you can't have a networking background. Internal networking, I guess of you got guys from Veritas, right? Obviously strong software file system. What do you What do you see is the critical skill. Yeah, that's >> a good question. You know, we have a very diverse team, all right? We we in engineering typically higher and look for people with strong systems, backgrounds that are willing to learn and want to solve her problems. We, you know, typically haven't hired very specific domain areas myself, my doctor, and is in language run times and compilers, Oh, distributed systems so a bit all over the map, You know, What I'd say is that the first phase of pure the first kind of decade was really about reinventing the storage experience on for me. I look at it as taking lessons from the consumer experience, bringing him into the storage on Enterprise World. Three iPhones, example. That's used a lot. There's a couple of examples you can think of. 
I think the next phase of what we're trying to do, and you heard Charlie talk about this on stage with the modern data experience, is to take some lessons from the cloud experience and bring them into the enterprise. So the first phase was about consumer simplicity for a human; I think the next phase is really about bringing in some more of the cloud experience, enabling automation and DevOps and management orchestration. >> So what kind of work, and it's a lot of work, do you have to do to get there? We envision this massively scalable distributed system where you have that cloud experience no matter where your data lives. That's not there today. And you don't want to ship your data around; it'd be too much data. So you're going to ship metadata and have the intelligence to bring the compute to that data. What do you got to do? What's the work that you have to do to actually make that seamless, there's that overused word again? It's not seamless today. >> Yeah, so look, I mean, I think there are a lot of angles to it, and we're going to work our way there. To your point, it's not there today, but you're starting to see us lay the groundwork with all the announcements that came out today, under the umbrella of, hey, we want to end up creating a more portable, more seamless, more agile experience for customers. You can see where, as we bring more storage media into play, different classes of service, different balances of performance and cost, bringing those together in a way so that an application can consume them in the right combinations; you know, bringing AI into play to help customers do that seamlessly and transparently is a big part of it. You can see the multiple-location kind of agility that we're bringing into play with Cloud Block Store enabled, like CloudSnap and snapshot mobility, things like that.
Then, you know, I think, as we move beyond the block world and we look at what we can enable with applications that sit on top of file and object protocols, there's a lot of greenfield there. So we think object storage is very attractive, and we're starting to see that as the application vendors, the applications that sit on top of the storage layer, really embrace object storage as the cloud-native storage interface, if you will, that's creating a lot of ways to share data. We're starting to see it even within the data center, where multiple applications are now able to share data because object storage is being used. And so, like I said, there are a lot of angles to this. There's bringing multiple discrete arrays together under the same management plane. There's bringing multiple different types of storage media a little bit closer together, from a seamless application-mobility perspective. There's bringing multiple locations, data centers, and clouds together, from a migration and DR perspective. And then there's bringing a global-namespace type of capability to the table. So it's a long journey, but we think it's the right one. And what we ultimately want to do is have customers be able to think about, provision, and manage not just an array, but really more of, like, an AZ: I want a pool, I want it to be about this fast, I'm willing to pay about yea much for it, and I need these types of data-protection policies for it. Please make it happen.
That's what I really kind of meant by Seamus that you see that as technically feasible in the next called 5 to 10 years, I'll give you I think >> I think it'll take a long wait a long time we'll get there. And I think, you know, I think it'll depend on the application. All right. I think there are gonna be some combinations that look. I mean, if if you have a high, high frequency, low latent see trading database, there's physical limitations, you're not going to run the application here and put the storage in the cloud. But if we if we step back from it, right, the concept, Yeah. I mean, I think that a lot of a lot of things are becoming possible to make this happen, right? Fastener networking is everywhere. It's getting faster application architectures and making it more feasible. You know, the media costs and what we're able to drive out of the media are bringing a lot a lot more than work leads to flash A eyes is coming into play. So, like I said, it's gonna be different on the on the application. But, you know, I think we're entering a phase where, you know, the modern software developer doesn't wanna have to think too hard about where is you know where physically what six sides of sheet metal is. My dad is sitting on. They want to think about what I need from it. What do we need from in terms of capacity, what we need from it in terms of performance, what we need from it in terms of data service capabilities. All right, ends, you know, And I need to be able to control that elastic Lee. I need to be able to control that through my application through software, and that's kind of what we're building towards. >> Last question, Rob, as we wrap up here, feedback that you've heard the last day and 1/2 on some of the news that came out yesterday from customers, analysts, partners. >> Yeah, you know, I'd say if I were to net it out. I think the one piece of you, Doc, we've gotten this. Wow, you guys have a lot of stuff on. 
It's really nice to see you guys talking about stuff that's available today, right? That was a lot up on that screen. And, you know, I had an analyst say to me, you know, it's really refreshing to kind of see you guys take both the viewpoint of the customer, what you're delivering to the customer, what you're enabling, and then, you know, I go to a lot of tech conferences and I hear a lot about, like, way off in the future, envisioned. And the feedback we got was, you guys had a really good balance of the reality of today, what you're helping customers with today, what's available today to do that, and enough of the, hey, here's where we're headed. >> We actually heard the same thing. So, good stuff. Well, congrats on the 10th anniversary, and we appreciate you joining us on theCUBE. We look forward to next year, in whatever city you're going to take us to. >> Thanks a lot. >> All right, for Dave Vellante, I'm Lisa Martin. You're watching theCUBE. Thanks for watching.

Published Date : Sep 18 2019


Karen Quintos, Dell Technologies | Dell Technologies World 2019


 

>> Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2019. Brought to you by Dell Technologies and its ecosystem partners. >> Hi, welcome to theCUBE. Lisa Martin with Stu Miniman, and we are live at Dell Technologies World 2019 in Las Vegas with about 15,000 or so other people. There are about 4,000 of the Dell Technologies community of partners here as well. It's day one, as I mentioned, and we're very pleased to welcome back one of our Cube alumni, Karen Quintos, EVP and Chief Customer Officer at Dell Technologies. Karen, welcome back to theCUBE. >> Thank you, thank you. Always great to be with you all. >> So, one of the things: you walked out on stage this morning with Michael Dell and the whole gang, and you started to share a story that I'd love for you to share with our audience, about this darling little girl, Phoebe, from Manchester, England, that has to do with the Dell Technologies partnership with Deloitte Digital and 3D prosthetics. Can you share this story and what it says about this partnership?
>> What kind of prosthetics did you guys build for her? >> It's an arm, so the very first arm that we built for her when she was about five years old had the frozen Disney theme painted on it. I asked her father Keith what is the one that she's wearing now because she's now this like really super cool seven-year-old that goes to school and all of her classmates and friends around her see her as this rock star and the one that she has today is printed with unicorns and rainbows. So if you know anything about seven-year-old girls, it's all about unicorns and rainbows and she's done an amazing thing and she's inspired so many other people around the world, individuals, customers, partners like Deloitte and others that we're working with to really take this to a whole new level. >> Karen, I think back to Dell you know, if you think back a couple of decades ago you know, drove a lot of the some of the waves of technology change you know, think back to the PC, but in the early days it was you know supply chain and simple ordering in all these environments and when I've watched Dell move into the enterprise, a lot of that is, I need to be listening to my customer, I need to be much closer to them because it's not just ordering your SKU and having it faster and at a reasonable price but there's a lot more customization. Can you talk about how you're kind of putting that center, that customer in the center of the discussion and that feedback loops that you have with them, how that's changed in Dell. >> Yeah sure, so all of the basic fundamentals around you got to order, deliver, make the supply chain work to deliver for our customers still matters but it's gone beyond that to your point and probably the best way to talk about it is these six customer award winners that we recognized last night. 
I've gotten to know all six of those over the last year, and while they are doing amazing things in digital transformation, using technology in the travel business, the automotive business, banking, financial services, insurance, kind of across the board, the thing that they say consistently is: look, we didn't always have the answer in terms of what we needed, but you came in, you listened, you rolled up your sleeves to figure out how you could design a solution that would meet the needs that we have. And they said, that's why you're one of the most strategic partners that we have. Now, you can do all those other things, right? You can do supply chain and build and produce and all that, but it's the design of a solution that helps us do the things that will allow us to be differentiated. And you look at that list of six customers and the brands that they represent, Carnival Cruise Lines, USAA, Bradesco, McLaren, I mean, the list goes on; they are the differentiators out there, and we're really honored to be working with them. >> So we're only at day one, and it's only just after lunchtime, but one of the things I think thematically that I heard this morning in the keynote with Michael and Pat and Jeff and Satya and yourself is: it's all about people. A couple of interviews I did earlier today, same sort of thing; it's like, we had the city of Las Vegas on, and this is all driven by the people, for the people, so that sense of community is really strong. I also noticed this year's theme of "real transformation" plays off last year's theme of "make it real," it being digital transformation, IT, security, workforce transformation. What are some of the things, like Dell Technologies
Cloud this morning, for example, VMware Cloud on Dell EMC, that you guys specifically heard, say, from last year's attendees, that are manifesting in some of the announcements today, and some of the great things the 15,000 or so people here are going to get to see and feel and touch at this year's event? >> Well, Lisa, you nailed it. What you heard on stage today is what customers have been telling us over the last year. We unveiled our cloud strategy, our portfolio, the things that we're going to be able to do, about a month ago with a very small group of CIOs in EMEA, and one customer in particular immediately chimed in and said, we need you in the cloud, and we need you in there now, because you offer choice, you offer open, you offer simplicity, you offer integration; and they're like, there are just too many choices, and a lot of them are expensive. So what you heard on stage is absolutely a manifestation of what they told us. The other piece is, look, I think the industry and CIOs are very quickly realizing their workforce matters: making them happy and productive matters, and having them enabled so that they can work flexibly wherever they want really, really matters. And our Unified Workspace ONE solution is all about how we help them simplify, automate, and streamline that experience for their workforce so their employees stick around. I mean, there's a war on talent, and everybody's dealing with it, and that experience is really, really important, in particular to the Gen Zs and the millennials. >> Karen, I love that point. Actually, I was really impressed this morning: in the press and analyst session, there was a discussion of diversity and inclusion, and the thing that I heard is, it's a business imperative. It's not, okay, it's nice to do, or we should do it; no, this is actually critical to the business. Can you talk about what that means and what you hear from your customers and partners?
>> Yes, yes. Well, we're seeing it in spades in all of these technology jobs that are open, right? So look, all the research has shown that if you build a diverse team, you'll get to a more innovative solution, and people generally get that. But what they really get today is that here in the U.S. alone, there will be 1.1 million open technology jobs by the year 2024, and half of them, half of them, are going to be filled by the existing workforce. So there is this war on talent that is going to get bigger and bigger and bigger, and I think that's what has really given a wake-up call to corporations around why this matters. The other piece that we're starting to see, not just around diversity but in our other social impact priorities, around the environment as well as how we use our technology for good: look, customers want to do business with a corporation that has a soul, that stands for something and is doing something, not just a bunch of talking heads, but where it's really turning into action and they're being transparent about the journey and where they're at with it. So it matters now to the current generation and the next generation; it matters to business leaders; and it matters to the financial services community, where you start to see some of the momentum around, you know, the Blackstones and the State Streets. So it's really exciting that we're part of it and we're leading the way in a number of areas. >> And it's something, too, that we talk about a lot on theCUBE: diversity and inclusion on many different levels, one of them being the business imperative that you talked about, the workforce needing to compete for this talent, but also how much different products and technologies and apps and APIs can be with just thought diversity in and of itself. And I think it's refreshing, to what Stu was saying: hey, we're hearing this is a business imperative, but you're also seeing proof in the pudding.
This isn't just, we've got an imperative and we're going to do things nominally; you're seeing the efforts manifest. One of the customer award winners was Draper Labs. That video that was shown this morning struck probably everyone's heart, with the Camp Fire in Paradise, California. >> Tragic. >> I grew up close to there, and that was something that, only maybe, I get goosebumps, six months ago, was so massively devastating. And you think, you know, that was 2018, but seeing how Dell Technologies is enabling this laboratory to investigate the potential toxins coming from all of this charred debris, and how they're working to understand the social impact to all of us as they rebuild, I just thought it was a really nice manifestation of the social impact, but also the technology breadth and differentiation that Dell is enabling. >> That was also why this story today about Phoebe was so great, right, because it's where you can connect the human spirit with technology and scale and have an even bigger impact, and there's so much that technology can help with today. You know, that story about Phoebe: from the time that her aunt from Deloitte identified, you know, what we could do, all the way to the time that Phoebe got her first arm, was less than seven months. Seven months. And you think about, you know, some of the other prototypes that were out there; it would take years to be able to do it. So I love that, you know, connection of human need with the human spirit, and connecting and inspiring and motivating so many children and adults around the world.
>> And speaking of Phoebe and the Deloitte Digital 3D prosthetics partnership, what are some of the other areas where we're going to see this technology that this little five-year-old from Manchester spurred? >> Well, I'll give you another example. There was an individual in India, actually an employee of ours, who designed an application to help figure out how to deploy healthcare monitoring in some of the remote villages in India, where they don't have access to basic things that we take for granted: monitoring your blood pressure, right, checking your cholesterol level. He created this application, and a year later now, we have given it kind of the full range of the Dell portfolio technology suite. So it is, you know, our application plus Pivotal plus VMware plus Dell EMC, combined with the partnering that we've done with Tata Trust and the State of India, and we've now deployed this healthcare solution, called the Life Care Solution, to nearly 37 million rural residents, citizens in India. >> Wow, 37 million. >> 37 million. So a small idea you take from a really passionate individual, a person, a human being, and figure out how you can really leverage that across the full gamut of what Dell can do; I think the results are incredible. >> Awesome. You guys also have a Women in Technology Executive Summit that you're hosting later this week. Let's talk about that in conjunction with what we talked about a minute ago: it's a business imperative, as Stu pointed out, and there are tangible, measurable results. Tell us about this. >> Well, I'm kind of done, honestly, with a lot of the negativity around, oh, we're not making any progress, oh, we need to be moving faster. If you look at the amount of effort, energy, and focus that is going into this space by so many companies and the public sector, it's remarkable, and I've met a number of these CIOs over the last year or two. So we basically said, let's invite 20 of them, let's share our passion; they've made progress and care about solving this across their organizations. A lot of us are working on the same things, so if we simply got in a room and figured out, is there power in numbers, and if we worked collectively together, could we accelerate progress? So that's what it's all about.
So we have about 15 or 20 CIOs, both men and women, and we'll be spending, you know, six or seven hours together, and we want to walk away with one or two recommendations on some things that we could collaborate on to have a faster, bigger impact. >> And I love that. You mentioned collaboration; that's one of the vibes I also got from the keynote this morning when you saw Michael up there with Pat and Jeff and Satya, the collaboration within Dell Technologies. I think, even talking with Stu and some of the things that have come out and that I've read, there seems to be more symbiosis with VMware, and like I said, we're not even halfway through day one, and that is the spirit around here. We talk about people and influence, but this spirit of collaboration is very authentic here. You are the first chief customer officer for Dell; if you look back at your tenure in this role, could you have envisioned where you are now? >> No, because it was the first-ever chief customer officer role at Dell, and you know, it really gave me a unique opportunity to build something from scratch. And you know, there have been a number of other competitors, as well as other companies, that have announced in the last year or so the need to have a chief customer officer, the need to figure out, which is a big remit of mine across Dell Technologies, how do we eliminate the silos and connect the seams, because that's where the value is going to be unlocked for our customers. That's what you saw on stage today. You saw the value of that with Jeff, with Pat, with Satya, you know, some of our most important partners out there.
Our customers don't want point solutions; they want them to be integrated, they want it to be streamlined, they want it to be automated, they want us to speed time to value, and they want us to streamline a lot of the back-office, kind of mundane things where they're like, I don't want my people spending their time doing that anymore. And that's where we see Dell Technologies being so much more differentiated from other choices in the market. >> Yep, I agree with you. Well, Karen, thank you so much for joining Stu and me on theCUBE this afternoon and sharing some of the stories. We look forward to hearing next year what comes out of this year's Women in Tech Exec Summit. Thank you so much for your time. >> Thank you very much, thank you. >> For Stu Miniman, I'm Lisa Martin. You're watching theCUBE, live, day one of Dell Technologies World from Las Vegas. Thanks for watching. (light electronic music)

Published Date : Apr 29 2019



Louis Verzi, Cardinal Health & Anthony Lye, NetApp | Google Cloud Next 2019


 

>> Live from San Francisco, it's the Cube, covering Google Cloud Next nineteen. Brought to you by Google Cloud and its ecosystem partners. >> Welcome back to San Francisco, everybody. This is the Cube, the leader in live tech coverage, and we're here at Moscone Center for Google Cloud Next twenty nineteen, hashtag GoogleNext19. I'm Dave Vellante, along with my co-host Stu Miniman, and it's Day two for us. Anthony Lye is here, senior vice president and general manager of the Cloud Data Services Business Unit at NetApp and a Cube alum, and Louis Verzi, who's a senior cloud engineer at Cardinal Health. Gentlemen, welcome to the Cube. Thank you much for coming on. Louis, let's start with you. Give us a little bit about Cardinal Health, what you guys are all about. Tell us about the business. >> Sure. Cardinal Health is a global supply chain medical products and services company. We service hospitals and pharmacies throughout the world, and we're driven to deliver cost-effective solutions to our patients throughout the world. >> Awesome. We're gonna get into that. Anthony, you've been on the Cube a couple times; it's been almost a year since we were last at this show, and it's grown quite a bit. Good thing Moscone is new and improved; it's got all these new customers here. Give us the update. Look back a year, what's transpired? What were the highlights for you? >> You know, we've achieved a tremendous amount. I mean, you know, we were a Google partner of the year, which was quite nice, a nice reward for the hard work. You know, we have a very special relationship with Google. We actually engineer directly into the Google console; our services are products that are sold by Google, which gives us a very unique value proposition. We just keep adding, you know; we have more services and we add more regions.
We continue to sort of differentiate the basic services that customers are now using for secondary workloads and, increasingly, very large primary workloads. >> All right, we're going to get into it and learn more about the partnership. But thinking about what's going on, a Cardinal Health question for you, Louis: what are the drivers in your business that are affecting your technology strategy, and how are you dealing with those? >> Sure, there's a few things, and I'm sure this is the same in many industries, right? We're facing cost pressures. We need to deliver solutions at a lower cost than we have in the past. We need to move faster. We need to have agility to be able to respond to changes in the marketplace. So on-prem didn't give us a lot of flexibility to turn those levers in any of those three areas, and those three things have really driven our push into the cloud. >> All right, Louis, let's dig into that a little bit. Do you still have on-prem as part of your solution? >> We still have some. We've been working over the past two years to migrate workloads out of our data center into the cloud. We're about eighty percent of the way there. There's going to be some workloads; iSeries doesn't run in the cloud very well, you know, we've got some of those. >> We were just joking about that earlier today. Yes, yes, yes. Lots of things. But in the back corner somewhere, I've got that iSeries running, and we're working on that, Anthony. >> We're blessed. You know, this is a customer of ours, and we enabled them to run some, you know, pretty heavy on-prem workloads that required NFS; they can now run, you know, production on Google Cloud. >> Yeah, and you're basically trying to make that experience seamless as much as you can. Talk about that, that partnership with Google. What are the challenges that you guys are trying to tackle? >> Let me reframe your question.
I mean, you know, what we see is that there's a sort of pivot with the cloud. Traditional IT people thought horizontally: you had a storage team, and you had a security team, and you had a networking team. In the cloud it sort of pivots ninety degrees, and you have people who work on the workload; those people aren't experts in every single thing, and so they go to the cloud assuming that the cloud itself will take care of a lot of that problem for them. So we worked with Google and we built a service. We didn't build it for a storage guy to configure and, you know, undo the bolts and nuts. We built it like dial tone: the NFS is always on in Google Cloud, and you come and provision an endpoint, and you just tell us how much capacity you want and how much performance. And that's it. It takes about eight seconds to establish a volume in Google Cloud that may take, through, you know, trouble tickets and IT capital purchases, about six months to do. >> Yeah, Anthony, actually, one of my favorite interviews last year: I talked to Dave Hitz at your event, and he talked about how, when we first started building it, we built something that storage people would love, and you shot him down and said, no, no, no, this needs to be a cloud-first solution. Louis, I want to poke at you. You actually said price is a main driver for cloud, agility absolutely, but bring this inside a little bit. I know you're speaking at the show. You know, people always say, hey, you know, cloud isn't easy, but is it cheap? Well, the devil's in the details there. So I would love to hear your experience there, and, you know, how "less expensive" translates in your world. >> Sure.
We're moving to the cloud and we just can't do it right There's way have a lot of cots, applications, a lot of processes that you just have to have known as right and we're looking for something Is Anthony described that with a click of a button are developers Khun spin up their own storage. The price point was lower than then. Frankly, you could get just provisioning the type of disk that you need in the cloud fur, and that was acceptable for most of our workloads. The the the ability to tear right. There's through three classes of storage and in the cloud volume services. Most of our workloads are running on the standard tear, but we've got some workloads where they've got higher performance and we provisioned them right on the standard. And when that you're doing, they're testing like, hey, we need a little bit more with a click of a button there at a higher tier of storage. No downtime, no restarting, no moving storage. It's I just worked. So the cost, the agility were getting all of that out of the solution to >> manage those laces, that sort of, ah, sort of automated way or you sort of monitoring things. And what's the process for for managing, which slays the slaves on the different tiers of storage. If >> we provide him, Yeah, we're not. We're not money for s. >> So it's all automated. >> Run it. And we stand by guarantees throughput guarantees on we take the pain away. You know, I always like to say, you know, what people want to do in the public cloud is innovate, not administrator. And generally, you know. So when when people say clouds cheaper, it's because I think they've decided that they're better use of the dollar is in application development, data science, and then they can retire people and put application developers into the business. So what ghoul does, I think incredibly well as it has infrastructure to remove the sort of the legacy barrier and the traditional stuff. 
And then it has this wonderful new innovation that, you know, maybe a few companies in the world could decide could use it. But most people couldn't afford to put TP use or GP use in their data center, so they know he was really two very strong Valley proposition. >> And maybe what they're saying is when they say the cloud is cheaper, maybe is better are why I'm spending money elsewhere. That's give me a better return. >> I do things that make you different. Not the same, right, >> right, right. So storage strategy. I mean, I'm sure there should be such a thing anymore. Work illustrated back in the day when used to work A DMC was II by AMC for Block Net out for file Things have changed in terms of how you run a strategy. Think about your business. So what is your strategy when you think about infrastructure and storage and workloads? >> So we really don't want to have to focus on an infrastructure strategy, right? Right now we're mostly running traditional workloads in the cloud running on PM's. We're working towards getting a lot of work loads into geeky, using that service and in Google Cloud platform, >> so can you just step back for a second? How do you end up on Google? Why'd you choose them versus some of the alternative out there. >> So we started our cloud journey a couple of years ago. Started out with really the main cloud player in town, like most people have. Um, and about a year in, not all of our needs were being met. You know, they that company entered decided to enter our business segment. S O, you know, starts asking some questions. People start asking some questions there. So that prompted us to do an r f p to try to see technologically really, were we on the right cloud cloud platform? And we compared the top three cloud providers and ended up on GP from a technological decision, not just a business decision. It gave us the ability to have a top level organization where we could provisioned projects to application teams. 
They could work autonomously within those projects, but we still had a shared VPC, a shared network, where we could put enterprise guardrails in place to protect the company. >> Dominic Price was on earlier with Google, and he was saying some nice things about NetApp. I'd like to hear your perspective: why NetApp? What's unique about NetApp? What's so special about NetApp in the cloud? >> Sure, a few of the
Give us the you know where what you think about what you've heard this weekend. Google. You know, I think how they differentiate themselves in the market. >> You know, I think it's great, you know, that Google, I think open source community. So I think that was a ninja stry changing event. And, you know, I think community's really starts to redefine application development. I think portability is obviously a big thing with it, But But for an application, developer of the V. M. Was something that somebody added afterwards, and it was sort of like, Oh, no way overboard infrastructure. So now we'Ll virtual eyes it But the cost of virtual izing things was so expensive, you know, you put a no s in a V m and communities was, was built and was sort of attracted to the developer. And so the developers are coding and re factoring, and I just You just look around now and you just see the ground swell on Cuban cnc f is here, and the contributions that were being made to communities are astonishing. It's it's reached a scale way bigger than Lennox. The amount of innovation that's going into cos I think is unstoppable. Now it's it's going to be the standard if it isn't already >> Well, Louis, I'd love you to expand. You said it sounded like you moved to the cloud first, but now you're going down that application modernization, you know, how does Cooper Netease fit into that? And what what other pieces? Because it's changing the applications and get me the long pole in the tent and modernization. So >> cardinal took the approach of we need to get everything into the cloud. And then we can begin modernizing our applications because if we tried to modernize everything up front, would take us ten to fifteen years to get to the cloud, and we couldn't afford to do that. So lifting and shifting machines was about seventy eighty percent of our migration to the cloud. 
What we're looking at now is modern, modernizing some of her applications R E commerce solution will be will be running on Cooper. Nettie is very shortly on DH will be taking other workloads there in the future. That's definitely the next step. The next evolution >> Okuda Cloud or multi Cloud? That is the question way >> are multi cloud. There are, you know, certain needs that can only be met in certain clouds, right? So Google Cloud is our primary cloud provider. But we're also also using Amazon for specific >> workloads and used net up across those clouds erect. Okay, so is that What's that like? Is that nap experience across clouds so still coming together? Is it sort of highly similar? What's experience like? >> So it's it's using that app in both solutions is the same. I think there's some stuff that we're looking forward to, that where where things will be tied together a little bit more and >> that brings me to the road map Question. That's Please get your best people working on that. >> Oh, yeah. No, no. I mean, I So, look, I think storages that sort of wonderful business because, you know, data is heavy, it's hard, it doesn't like to be moved, and it needs to be managed. It's It's the primary asset of your business these days. So So we have we have, you know, we released continuously new features onto the service. So, you know, we've got full S and B nfs support routing an FSB four support routing a backup service. We're integrating NFS into communities, which is a very frequently asked response. A lot of companies developers want to build ST collapse and Block has a real problem when the container failed. 
NFS doesn't So we're almost seeing a renaissance with communities and NFS So So you know, we just we subscribe to that constant innovation and we'll just continue to build out mohr and more services that that allow I think cloud customers to, as I said, to sort of spend their time innovating while we take care of the administration for them >> two thousand six to floor. And I wrote a manifesto on storage is a service. Yeah, I didn't know it. Take this long, but I'm glad you got there. Last question, Lewis. Cool stuff. You working on fun projects? What's floating your boat these days? >> My time these days is, uh, the cloud. As I said, we went to the cloud for cost for cost savings. You can spend more money than you anticipate in the cloud. I know it's a shocker. So that's one of the things that I'm focusing our efforts on right now is making sure that way. Keep those costs under control. Still deliver the speed and agility. But keep an eye on those things >> that they put a bow on. Google next twenty nineteen. Partner of the year. That's awesome. Congratulations. Thank >> you. Uh, you know, I would say, you know, to put in a bone it's great to see Thomas again. You know, I went to Thomas that Oracle for about six and a half years. He's an incredibly bright man on DH. I think he's going to do a lot of really good things for Google. As you know, I work for his twin brother, George on DH. They are insanely bright people and really fun to work with. So for me, it was great to come up here and see Thomas and I shook hands when we won the award, and it was kind of too really was like, you know, we're both in a Google event. >> Yeah, it was fun. I'm gonna make an observation. I was saying the studio in the Kino today. They were both Patriots fans. So Bill Bala check. He has progeny. Coaches leave. They try to be him. It just doesn't work. Thomas Curie is not trying to be Larry. I'm sure they, you know, share a lot of the same technical philosophies and cellphone. 
But he's got his own way of doing things in his own style. So I really it's >> a great Haifa. Google great >> really is. Hey, guys, Thanks so much for coming to the cure. Thank you. Keep right, everybody Day Volante with student meant John Furry is also in the house. We're here. Google Next twenty nineteen, Google Cloud next week Right back. Right after this short break

Published Date : Apr 10 2019

