Next Gen Servers Ready to Hit the Market


 

(upbeat music) >> The market for enterprise servers is large, generating well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digit range. Right now, like many segments, the market for servers is slingshotting: organizations have been replenishing their install bases and upgrading, especially at HQs, coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs are tapping the brakes, sometimes quite a bit, and being cautious with both capital expenditures and discretionary opex, particularly in the cloud, where they're dialing things down. The market for enterprise servers is dominated, as you know, by x86-based systems, with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful competing with Intel because of its focus, its outsourced manufacturing model, its innovation and very solid execution. Intel's frequent delays with its next generation Sapphire Rapids CPUs, now slated for January 2023, have created an opportunity for AMD. Specifically, AMD's next generation EPYC CPUs, codenamed Genoa, will offer as many as 96 Zen 4 cores per CPU when they launch later this month. Observers can expect three classes of Genoa: a standard Zen 4 compute platform for general purpose workloads, a compute density optimized Zen 4 package, and a cache optimized version for data intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity in server platforms to handle different workloads, especially the high performance, data-oriented workloads driven by AI, machine learning and high performance computing (HPC). 
OEMs like Dell are going to be tapping these innovations and trying to get to market early. Dell, in particular, will be using these systems as the basis for its next generation Gen 16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone. You've got other OEMs like HPE and Lenovo, you've got ODMs, you've got the cloud players, and they're all going to be looking to keep pace with the market. Now, the other big trend that we've seen in the market is the way customers are thinking about, or should be thinking about, performance. No longer is the clock speed of the CPU the sole and most indicative performance metric. There's much more emphasis on innovation around all the supporting components in a system, specifically the parts of the system that take advantage of faster bus speeds. We're talking about things like network interface cards, RAID controllers, memories and other peripheral devices that, in combination with microprocessors, determine how well systems perform across compute operations, IO and other critical tasks. Those combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell building flexibility into their offerings and putting out products in their portfolios that can meet the changing needs of their customers. Welcome to our ongoing series where we investigate the critical question: does hardware matter? My name is Dave Vellante, and with me today to discuss these trends and the things you should know about the next generation of server architectures is former Oracle and EMC CTO and adjunct faculty at the Wharton CTO Academy, David Nicholson. Dave, always great to have you on "theCUBE." Thanks for making some time with me. >> Yeah, of course, Dave, great to be here. 
>> All right, so you heard my little spiel in the intro, that summary, >> Yeah. >> Was it accurate? What would you add? What do people need to know? >> Yeah, no, no, no, 100% accurate, but you know, I'm a resident nerd, so just, you know, some kind of clarification. If we think of things like microprocessor release cycles, it's always going to be characterized as rolling thunder. I think 2023 in particular is going to be this constant release cycle that we're going to see. You mentioned the, (clears throat) excuse me, general processors with 96 cores; shortly after the 96 core release, we'll see that 128 core release that you referenced in terms of compute density. And then, we can talk about what it means in terms of, you know, nanometers and performance per core and everything else. But yeah, that's the main thing I would say: people shouldn't look at this like a new car's being released on Saturday. This is going to happen over the next 18 months, really. >> All right, so to that point, you think about Dell's next generation systems, they're going to be featuring these new AMD processors, but to your point, when you think about performance claims, in this industry, it's a moving target. It's, as you called it, rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months? >> So out of the gate, you know, slated as of right now for a November 10th release, AMD's going to be first to market with, you know, everyone will argue, but first to market with five nanometer technology in production systems, 96 cores. What's important though is, those microprocessors are going to be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster. 
Again, going back to this idea of rolling thunder, we expect the Gen 16 PowerEdge servers from Dell to similarly be rolled out in stages, with initial releases that will address certain specific kinds of workloads and follow-on releases with a variety of systems configured in a variety of ways. >> So I appreciate you painting a picture. Let's stay under the hood, if we can, >> Sure. >> And share with us what we should know about these next generation CPUs. How are companies like Dell going to be configuring them? How important are clock speeds and core counts in these new systems? And what about, you mentioned motherboards, what about next gen motherboards? You mentioned PCIe Gen 5, where does that fit in? So take us deeper into the system, please. >> Yeah, so if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture that interconnects them, and how quickly that interconnect performs is critical. Now, I'm going to give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen 4 to Gen 5, which is going to be featured in all of these systems, we double the performance. So you can write that down: two, 2X. The performance is doubled, and the numbers are pretty staggering in terms of gigatransfers per second, 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from 4th Gen to 5th Gen. But the reality is, most users of these systems are still on PCIe Gen 3 based systems. So for them, just from a bus architecture perspective, you're doing a 4X or 8X leap in performance, and then all of the peripherals that plug into that faster bus are faster, whether they're RAID controllers, storage controllers or network interface cards. Companies like Broadcom come to mind. 
All of their components are leapfrogging their prior generation to fit into this ecosystem. >> So I wonder if we could stay with PCIe for a moment and, you know, just understand what Gen 5 brings. You said, you know, 2X, I think we're talking bandwidth here. Is there a latency impact? You know, why does this matter? And on this premise that these other components increasingly matter more, which components of the system are we talking about that can actually take advantage of PCIe Gen 5? >> Pretty much all of them, Dave. So whether it's memory plugged in or network interface cards, so communication to the outside world, which computer servers tend to want to do in 2022, or controllers that are attached to internal and external storage devices, all of them benefit from this enhancement in performance. And, you know, PCI Express performance is measured essentially in bandwidth and throughput, in the sense of the number of transactions per second that you can do. It's mind numbing, I want to say it's 32 gigatransfers per second. And then in terms of bandwidth, again, across the lanes that are available, 128 gigabytes per second. I'm going to have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe 4 was. So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical because everything in that server is consuming power. So when you look at next generation hardware that's driven by things like AMD Genoa, the EPYC processors with the Zen 4 microprocessors, for every dollar that you're spending on power and equipment and everything else, you're getting far greater return on your investment. 
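To put rough numbers on the doubling described above, here's a minimal back-of-the-envelope sketch. This isn't from the conversation itself; the per-lane transfer rates and the 128b/130b line encoding are the figures published in the PCIe specifications.

```python
# Rough per-direction bandwidth of a x16 PCIe link by generation.
# Per-lane rates are the published spec values; "efficiency" is the
# 128b/130b line-encoding overhead used by Gen 3 and later.

GENERATIONS = {
    # gen: (gigatransfers per second per lane, encoding efficiency)
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

def x16_bandwidth_gbytes(gen: int, lanes: int = 16) -> float:
    """Approximate usable bandwidth in GB/s, one direction."""
    gt_per_s, efficiency = GENERATIONS[gen]
    # Each transfer carries 1 bit per lane; divide by 8 for bytes.
    return gt_per_s * efficiency * lanes / 8

for gen in GENERATIONS:
    print(f"PCIe Gen {gen} x16: ~{x16_bandwidth_gbytes(gen):.0f} GB/s per direction")
# Gen 5 x16 lands near 63 GB/s per direction, roughly 126 GB/s counting
# both directions, which lines up with the ~128 GB/s aggregate figure
# quoted above. Gen 3 to Gen 5 is a clean 4x jump per lane.
```

The 4X claim for Gen 3 shops falls straight out of the per-lane rates (8 GT/s to 32 GT/s), since the encoding efficiency is unchanged across those generations.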
Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's going to be a little bit of sticker shock at first until you run the numbers. People will look at an individual server and they'll say, wow, this is expensive and the peripherals, the things that are going into those slots are more expensive, but you're getting more bang for your buck. You're getting much more consolidation, lower power usage and for every dollar, you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and, you know, out to the end user's desire to get work done. >> So I want to come back to that, but let me stay on performance for a minute. You know, we all used to be, when you'd go buy a new PC, you'd be like, what's the clock speed of that? And so, when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next gen systems? And where does that, again, where does that supporting ecosystem play? >> So if you are really into the speeds and feeds and what's under the covers, from an academic perspective, you can go in and you can look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are, but really, the answer is look at the benchmarks that are created through testing, especially from third party organizations that test these things for workloads that you intend to use these servers for. So if you are looking to support something like a high performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered by the entire system. So it's not just about the core. So yeah, it's interesting to look at clock speeds to kind of compare where we are with regards to Moore's Law. 
Have we been able to continue to track along that path? We know there are physical limitations to Moore's Law from an individual microprocessor perspective, but none of that really matters. What really matters is: what can this system that I'm buying deliver in terms of application performance and user requirement performance? So that's what I'd say you want to look for. >> So I presume we're going to see these benchmarks at some point, and I'm hoping we can have you back on to talk about them. Is that something we can expect in the future? >> Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next gen architecture. If I had to guess, I would say that we are going to see quite a few world records set because of the combination of things, like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And you mentioned this is, you know, AMD sort of starting off this season of rolling thunder, and in a few months, we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have, you know, a portfolio of products that highlight the advantages of each processor set. >> Yeah, I talked in my open, Dave, about the diversity of workloads. What are some of those emerging workloads and how will companies like Dell address them in your view? >> So a lot of the applications that are going to be supported are what we think of as legacy application environments. A lot of Oracle databases, workloads associated with ERP, all of those things are just going to get better bang for their buck from a compute perspective. 
But what we're going to be hearing a lot about, and what the future really holds for us that's exciting, is this arena of artificial intelligence and machine learning. These next gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do before cost effectively. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it: AI/ML are going to be the big buzzwords moving forward. >> So Dave, you talked earlier about this, some people might have sticker shock. So some of the infrastructure pros watching this might be thinking, okay, I'm going to have to pitch this, especially in this tough macro environment. I'm going to have to sell this to my CIO, my CFO. So what does this all mean? If they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management? >> As long as you stay away from per unit cost, you're fine. And again, I don't necessarily have insider access to street pricing on next gen servers yet, but what I do know from what the component suppliers tell us is that these systems are going to be significantly more expensive on a per unit basis. But what does that mean? If the server that you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal, and for anyone who looks at it and says, 10 bucks? It used to only be five bucks, well, the ROI and the TCO, that's where all of this really needs to be measured, and a huge part of that is going to be power consumption. And along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption. 
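The per-unit versus per-work argument above reduces to simple arithmetic. A quick sketch, using the speaker's own illustrative $5 / $10 / 5x-the-work figures rather than any real street pricing:

```python
# Sketch of the "stay away from per unit cost" argument. The dollar
# figures are the speaker's hypothetical numbers, not real pricing.

def cost_per_unit_of_work(server_price: float, work_units: float) -> float:
    """Price divided by how much work the box actually does."""
    return server_price / work_units

old = cost_per_unit_of_work(server_price=5.0, work_units=1.0)   # "five bucks"
new = cost_per_unit_of_work(server_price=10.0, work_units=5.0)  # "10 bucks, 5x work"

print(f"old: ${old:.2f} per unit of work")  # $5.00
print(f"new: ${new:.2f} per unit of work")  # $2.00
# Sticker price doubled, but cost per unit of work fell 60% -- and that's
# before counting power, cooling and floor-space consolidation.
```

The same division works with real benchmark scores and quotes once those are published, which is exactly why the benchmarks discussed earlier matter for the CFO conversation.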
So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year. >> So it's consolidation, which means you can do more with less, or more with the same. It's going to be lower power, less cooling, less floor space and lower management overhead, which gets you into staffing, so you're going to have to identify how the staff can be productive in other areas. You're probably not going to fire people, hopefully. But yeah, it sounds like it's going to be a real consolidation play. I talked at the open about Intel and AMD, and Intel coming out with Sapphire Rapids, you know, of course it's been well documented, it's late, but they're now scheduled for January. Pat Gelsinger's talked about this, and of course they're going to try to leapfrog AMD, and then AMD is going to respond, you talked about this earlier, so that game is going to continue. How long do you think this cycle will last? >> Forever. (laughs) It's just that there will be periods of excitement like we're going to experience over at least the next year, and then there will be a lull, and then there will be a period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. You know, specifically, think about ARM, where the original design point was: okay, you're powered by a battery, you have to fit in someone's pocket, you can't catch on fire and burn their leg. That's the requirement, as opposed to the x86 model, which is: okay, you have a data center with a raised floor and you have a nuclear power plant down the street, so don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. 
And so, you would think that over time, ARM is going to creep up, as all disruptive technologies do, and we've definitely seen that. But I would argue that we haven't seen it happen as quickly as maybe some of us expected. And then you've got NVIDIA kind of off to the side, starting out, you know, heavy in the GPU space, saying, hey, you know what, you can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, certainly gnawing at the traditional x86 vendors. >> Yes, so I'm glad- >> That's going to be forever. >> I'm glad you brought up ARM and NVIDIA. Maybe it hasn't happened as quickly as many thought, although there are clearly pockets and examples where it is taking shape. But this to me, Dave, talks to the supporting cast. It's not just about the microprocessor unit anymore, specifically, you know, generally, but specifically the x86. It's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds. 
So if you're looking at 25 gig or 100 gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet and what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision: okay, we're going to buy these systems with these microprocessors, with this number of cores and this memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like? So yeah, the definition of what constitutes a server and what's critically important has definitely changed. >> Dave, let's wrap. What can our audience expect in the future? You talked earlier about the benchmarks we're going to be able to get, so that we can quantify these innovations that we've been talking about. Bring us home. >> Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records, and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way, because our understanding is these things on a per unit basis are going to be more expensive and you're going to have to justify them. So really, it's the details that are going to come the day of the launch and in subsequent weeks. So I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So, you know, hang on, it's going to be a fun ride. >> All right, Dave, we're going to leave it there. Thank you so much, my friend. Appreciate you coming on. >> Thanks, Dave. >> Okay, and don't forget to check out the special website that we've set up for this ongoing series. 
Go to doeshardwarematter.com and you'll see commentary from industry leaders, we got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time. (upbeat music)

Published Date : Nov 10 2022


theCUBE Previews Supercomputing 22


 

(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Control Data Corporation, CDC, designed by an engineering team led by Seymour Cray, the father of supercomputing. He left CDC in the '70s to start his own company, of course, carrying his own name. Now that company, Cray, became the market leader in the '70s and the '80s, and then the decade of the '80s saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, there was Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants all failed, for the most part because the market at the time just wasn't large enough and the economics of these systems really weren't that attractive. Now, the late '80s and the '90s saw big Japanese companies like NEC and Fujitsu entering the fray, and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're entering the exascale era, with systems that can complete a billion billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits. 
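For a sense of scale on that 10 to the 18th figure, here's a quick hedged comparison. The ~100 GFLOPS laptop number below is an assumed, round illustrative figure, not something from this discussion:

```python
# Scale comparison: exascale vs. petascale vs. an ordinary laptop.
# LAPTOP is an assumed round figure (~100 GFLOPS) for illustration only.

EXA = 10**18          # exascale: a billion billion operations per second
PETA = 10**15         # petascale, the milestone of the late 2000s
LAPTOP = 100 * 10**9  # ~100 GFLOPS, assumed

seconds = EXA / LAPTOP  # laptop-seconds per one exascale-second of work
print(f"Exascale is {EXA // PETA}x petascale")
print(f"One second of exascale work ≈ {seconds / 86400:.0f} days on the laptop")
```

In other words, a single second on an exascale machine represents on the order of months of continuous computing on a typical laptop, which is why these systems command the budgets described below.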
Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, and it will probably continue to do so. IBM's Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player, with AI coming in, we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities, with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend: HPC in the cloud reached critical mass at the end of the last decade, and all of the major hyperscalers are providing HPC as a service capability. 
Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front Dave. You got a technical background, CTO chops, what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything, I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? 
>> Well, so if you're designing an island that is, you know, the tip of the spear, that doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective. You know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leveraged by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore, because with things like RDMA over converged ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box and looking at the NICs, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now. 
And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. 
You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. 
Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. 
Thanks for watching. And we'll see you in Dallas. (inquisitive music)

Published Date : Oct 25 2022


David Flynn Supercloud Audio


 

>> From every ISV to solve the problems. You want there to be tools in place that you can use, either open source tools or whatever it is that help you build it. And slowly over time, that building will become easier and easier. So my question to you was, where do you see yourself playing? Do you see yourself playing to ISVs as a set of tools, which will make their life a lot easier and provide that work? >> Absolutely. >> If they don't have, so they don't have to do it. Or are you providing this for the end users? Or both? >> So it's a progression. If you go to the ISVs first, you're doomed to starve before you have time for that other option. >> Yeah. >> Right? So it's a question of phase, the phasing of it. And also if you go directly to end users, you can demonstrate the power of it and get the attention of the ISVs. I believe that the ISVs, especially those with the biggest footprints and the most, you know, coveted estates, they have already made massive investments in trying to solve decentralization of their software stack. And I believe that they have used it as a hook to try to move to a software as a service model and rope people into leasing their infrastructure. So if you look at the clouds that have been propped up by Autodesk or by Adobe, or you name the company, they are building proprietary makeshift solutions for decentralizing or hybrid clouding. Or maybe they're not even doing that at all, and all they're saying is, hey, if you want to get location agnosticness, then what you should do is just move into our cloud. 
But those who are more advanced have already made larger investments and will be more averse to, you know, throwing that stuff away, all of their makeshift machinery away, and using a platform that gives them high performance, parallel, low level file system access, while at the same time having metadata-driven, you know, policy-based, intent-based orchestration to manage the diffusion of data across a decentralized infrastructure. They are not going to be as open because they've made such an investment and they're going to look at how do they monetize it. So what we have found with, like, the movie studios who are using us already, many of the apps they're using, many of those software offerings, the ISVs have their own cloud that offers that software for the cloud. But what we got when I asked about this, 'cause I dug specifically into this question because I'm very interested to know how we're going to make that leap from end user upstream into the ISVs where I believe we need to, and they said, look, we cannot use these ISV-specific SaaS clouds for two reasons. Number one is we lose control of the data. We're giving it to them. That's security and other issues. And here you're talking about, we're doing work for Disney, we're doing work for Netflix, and they're not going to let us put our data on those software clouds, on those SaaS clouds. Secondly, in any reasonable pipeline, the data is shared by many different applications. We need to be agnostic as to the application. 'Cause the inputs to one application, you know, the output of one application provides the input to the next, and it's not necessarily from the same vendor. So they need to have a data platform that lets them, you know, go from one software stack, and you know, to run it on another. Because they might do the rendering with this and yet, they do the editing with that, and you know, et cetera, et cetera. 
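The vendor-agnostic pipeline pattern described there, where one tool's output file is the next tool's input and the only shared contract is the file system, can be sketched roughly like this. All stage names, paths, and transforms are hypothetical, not any studio's actual toolchain.

```python
# Hypothetical sketch of the pipeline pattern described above: independent
# tools (render, edit, grade) from different vendors that share data only
# through a common file system namespace, never through one vendor's SaaS.
from pathlib import Path

def run_stage(name, src, dst, transform):
    """Each stage reads its input file and writes output for the next tool."""
    dst.write_text(transform(src.read_text()))
    print(f"{name}: {src.name} -> {dst.name}")
    return dst

shared = Path("shared_fs")            # stand-in for the shared namespace
shared.mkdir(exist_ok=True)
(shared / "scene.raw").write_text("raw frames")

# Three stages from different (imaginary) vendors, chained only by files.
out = run_stage("render", shared / "scene.raw", shared / "scene.rendered",
                lambda t: t + " +rendered")
out = run_stage("edit", out, shared / "scene.edited",
                lambda t: t + " +edited")
out = run_stage("grade", out, shared / "scene.final",
                lambda t: t + " +graded")
```

The design point is that no stage knows or cares which vendor wrote the previous one; swapping a tool means changing one `run_stage` call, not re-platforming the data.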
So I think the further you go up the stack into structured data and dedicated applications for specific functions in specific verticals, the further up the stack you go, the harder it is to justify a SaaS offering where you're basically telling the end users, you need to park all your data with us and then you can run your application in our cloud and get this. That ultimately is a dead end path versus having the data be open and available to many applications across this supercloud layer. >> Okay, so-- >> Is that making any sense? >> Yes, so if I could just ask a clarifying question. So, if I had to take Snowflake as an example, I think they're doing exactly what you're saying is a dead end, put everything into our proprietary system and then we'll figure out how to distribute it. >> Yeah. >> And I think if you're familiar with Zhamak Dehghani's data mesh concept. Are you? >> A little bit, yeah. >> But in her model, Snowflake, a Snowflake warehouse is just a node on the mesh and that mesh is-- >> That's right. >> Ultimately the supercloud and you're an enabler of that is what I'm hearing. >> That's right. What they're doing up at the structured level and what they're talking about at the structured level, we're doing at the underlying, unstructured level, which by the way has implications for how you implement those distributed database things. In other words, implementing a Snowflake on top of Hammerspace would have made building stuff like that in the first place easier. It would allow you to easily shift and run the database engine anywhere. You still have to solve how to shard and distribute at the transaction layer above, so I'm not saying we're a substitute for what you need to do at the app layer. By the way, there is another example of that and that's Microsoft Office, right? It's one thing to share that, to have a file share where you can share all the docs. 
It's something else to have Word and PowerPoint, Excel know how to allow people to be simultaneously editing the same doc. That's always going to happen in the app layer. But not all applications need that level of, you know, in-app decentralization. You know, many of them, many workflows are pipelined, especially the ones that are very data intensive, where you're doing drug discovery or you're doing rendering, or you're doing machine learning training. These things are human in the loop with large stages of processing across tens of thousands of cores. And I think that kind of data processing pipeline is what we're focusing on first. Not so much the Microsoft Office or the Snowflake, you know, parking a relational database, because that takes a lot of application layer stuff and that's what they're good at. >> Right. >> But I think... >> Go ahead, sorry. >> Later entrants in these markets will find Hammerspace as a way to accelerate their work so they can focus more narrowly on just the stuff that's app-specific, higher level sharing in the app. >> Yes, Snowflake founders-- >> I think it might be worth mentioning also, just keep this confidential guys, but one of our customers is Blue Origin. And one of the things that we have found is kind of the point of what you're talking about with our customers. They're needing to build this, and since it's not commercially available, or they don't know where to look for it to be commercially available, they're all building it themselves. So this layer is needed. And Blue is just one of the examples of quite a few we're now talking to. And like manufacturing, HPC, research, where they're out trying to solve this problem with their own scripting tools and things like that. And I just, I don't know if there's anything you want to add, David, but you know, there's definitely a demand here and customers are trying to figure out how to solve it beyond what Hammerspace is doing. 
Like the need is so great that they're just putting developers on trying to do it themselves. >> Well, and you know, Snowflake founders, they didn't have a Hammerspace to lean on. But, one of the things that's interesting about supercloud is we feel as though industry clouds will emerge, that as part of companies' digital transformations, they will, you know, every company's a software company, they'll begin to build their own clouds and they will be able to use a Hammerspace to do that. >> A super PaaS layer. >> Yes. It's really, I don't know if David's speaking, I don't want to speak over him, but we can't hear you. May be going through a bad... >> Well, regional clouds make that possible. And so they're doing these render farms and editing farms, and it's a cloud specific to the types of workflows in the media and entertainment world. Or clouds specific to workflows in the chip design world or in the drug and bio and life sciences exploration world. There are large organizations that are kind of a blend of end users, like the Broad, which has their own kind of cloud where they're asking collaborators to come in and work with them. So it starts to even blur who's an end user versus an ISV. >> Yes. >> Right? When you start talking about massive data, the main gravity comes from having lots of people participate. >> Yep, and that's where the value is. And that's where the value is. And this is a megatrend that we see. And so it's really important for us to get to the point of what is and what is not a supercloud and, you know, that's where we're trying to evolve. >> Let's talk about this for a second 'cause I want to, I want to challenge you on something and it's something that I got challenged on and it has led me to thinking differently than I did at first, which Molly can attest to. Okay? 
So, we have been looking for a way to talk about the concept of cloud as utility computing, run anything anywhere, that isn't addressed in today's realization of cloud. 'Cause today's cloud is not run anything anywhere, it's quite the opposite. You park your data in AWS and that's where you run stuff. And you pretty much have to. Same with Azure. They're using data gravity to keep you captive there, just like the old infrastructure guys did. But now it's even worse because it's coupled back with the software to some degree, as well. And you have to use their storage, networking, and compute. I mean, it's fallen back to the mainframe era. Anyhow, so I love the concept of supercloud. By the way, I was going to suggest that a better term might be hypercloud, since hyper speaks to the multidimensionality of it and the ability to be in a, you know, be in a different dimension, a different plane of existence kind of thing, like hyperspace. But super and hyper are somewhat synonyms. I mean, you have hypercars and you have supercars and blah, blah, blah. I happen to like hyper maybe also because it ties into the whole Hammerspace notion of a hyper-dimensional, you know, reality, having your data centers connected by a wormhole that is Hammerspace. But regardless, what I got challenged on is calling it something different at all versus simply saying, this is what cloud has always meant to be. This is the true cloud, this is real cloud, this is cloud. And I think back to what happened, you'll remember, at Fusion-io we talked about IO memory, and we did that because people had a conceptualization of what an SSD was. And an SSD back then was low capacity, low endurance, made to go into military, aerospace applications where things needed to be rugged, but was completely useless in the data center. And we needed people to imagine this thing as being able to displace entire SANs, with the kind of capacity density, performance density, endurance. 
And so we talked IO memory, we could have said enterprise SSD, and that's what the industry now refers to for that concept. What will people be saying five and 10 years from now? Will they simply say, well, this is cloud as it was always meant to be, where you are truly able to run anything anywhere and have not only the same APIs, but your same data available with high performance access, all forms of access, block, file, and object, everywhere. So yeah. And I wonder, and this is just me throwing it out there, I wonder if, well, there's trade-offs, right? Giving it a new moniker, supercloud, versus simply talking about how cloud was always intended to be and what it was meant to be, you know, the real cloud or true cloud, there are trade-offs. By putting a name on it and branding it, that lets people talk about it and understand they're talking about something different. But also, is that an affront to people who thought that that's what they already had? >> What's different, what's new? Yes, and so we've given a lot of thought to this. >> Right, it's like you. >> And it's because we've been asked that, why does the industry need a new term, and we've tried to address some of that. But some of the inside baseball that we haven't shared is, you remember Web 2.0, back then? >> Yep. >> Web 2.0 was the same thing. And I remember Tim Berners-Lee saying, "Why do we need Web 2.0? "This is what the Web was always supposed to be." But the truth is-- >> I know, that was another perfect-- >> But the truth is it wasn't, number one. Number two, everybody hated the Web 2.0 term. John Furrier was actually in the middle of it all. And then it created this groundswell. So one of the things we wrote about is that supercloud is an evocative term that catalyzes debate and conversation, which is what we like, of course. And maybe that's self-serving. But yeah, hypercloud, metacloud, super, meaning, it's funny because super came from the Latin supra, above, it was never the superlative. 
But the superlative was a convenient byproduct that caused a lot of friction and flak, which again, in the media business is like a perfect storm brewing. >> That's not a bad thing to have, and I think you do need to shake people out of the complacency of the limitations that they're used to. And I'll tell you what, the fact that you even have the terms hybrid cloud, multi-cloud, private cloud, edge computing, those are all just referring to the different boundaries that isolate the silo that is the current limited cloud. >> Right. >> So if I heard correctly, in terms of us defining what is and what isn't in supercloud, you would say traditional applications which have to run in a certain place, in a certain cloud, can't run anywhere else, would be the stuff that you would not put in as being addressed by supercloud. And over time, you would want to be able to run the data where you want to and in any of those concepts. >> Or even modern apps, right? Or even modern apps that are siloed in SaaS within an individual cloud, right? >> So yeah, I guess it's twofold. Number one, if you're going at the high application layers, there's lots of ways that you can give the appearance of anything running anywhere. The ISV, the SaaS vendor, can engineer stuff to have the ability to serve with low enough latency to different geographies, right? So if you go too high up the stack, it kind of loses its meaning because there's lots of different ways to make do and give the appearance of omnipresence of the service. Okay? As you come down more towards the platform layer, it gets harder and harder to mask the fact that supercloud is something entirely different than just a good regionally-distributed SaaS service. So I don't think you, I don't think you can distinguish supercloud if you go too high up the stack, because it's just SaaS, it's just a good SaaS service where the SaaS vendor has done the hard work to give you low latency access from different geographic regions. 
>> Yeah, so this is one of the hardest things, David. >> Common among them. >> Yeah, this is really an important point. This is one of the things I've had the most trouble with, is why is this not just SaaS? >> So you dilute your message when you go up to the SaaS layer. If you were to focus most of this around the super PaaS layer, the question becomes how you can host applications and run them anywhere, not how you host this or that service, not how you have a service available everywhere. So how can you take any application, even applications that are written, you know, in a traditional legacy data center fashion, and be able to run them anywhere, and have them have their binaries and their datasets and the runtime environment and the infrastructure to start them and stop them? You know, the jobs, the, what the Kubernetes, the job scheduler? What we're really talking about here, what I think we're really talking about here, is building the operating system for a decentralized cloud. What is the operating system, the operating environment, for a decentralized cloud? Where you can, and that the main two functions of an operating system or an operating environment are the process scheduler, the thing that's scheduling what is running where and when and so forth, and the file system, right? The thing that's supplying a common view of and access to data. So when we talk about this, I think that the strongest argument for supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications. >> Would you exclude--? >> Not a specific application that's been engineered as a SaaS. (audio distortion) >> He'll come back. >> Are you there? >> Yeah, yeah, you just cut out for a minute. >> I lost your last statement when you broke up. >> We heard you, you said that not the specific application. So would you exclude Snowflake from supercloud? >> Frankly, I would. I would. 
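Stepping back to the operating-environment framing above, the "main two functions", a process scheduler and a common file system, lifted to a decentralized cloud might look like this toy model. It is purely illustrative: the site names, the load metric, and the placement rule are assumptions of this sketch, not Hammerspace's actual design.

```python
# Toy model of an operating environment for a decentralized cloud:
# one logical namespace visible from every site, plus a scheduler that
# decides where a job runs. Illustrative only.

namespace = {}   # one logical file system: path -> (data, site holding it)

def write(path, data, site):
    """Record that this path's data currently lives at this site."""
    namespace[path] = (data, site)

def schedule(job_input, sites):
    """Place the job on the site that already holds its input (data
    gravity), falling back to the least-loaded site otherwise."""
    if job_input in namespace:
        return namespace[job_input][1]
    return min(sites, key=lambda s: sites[s])

sites = {"on-prem": 3, "aws-east": 1, "azure-west": 2}  # current load per site
write("/data/train.bin", b"...", "on-prem")

print(schedule("/data/train.bin", sites))   # follows the data
print(schedule("/data/other.bin", sites))   # no copy anywhere: least loaded
```

The design point is that placement follows data gravity first and capacity second, which is the scheduler-plus-file-system argument in miniature.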
Because, well, and this is kind of hard to do because Snowflake doesn't like to, Frank doesn't like to talk about Snowflake as a SaaS service. It has a negative connotation. >> But it is. >> I know, we all know it is. We all know it is, and because it is, yes, I would exclude them. >> I think I actually have him on camera. >> There's nothing in common. >> I think I have him on camera, or maybe Benoit, saying, "Well, we are a SaaS." I think it's Slootman. I think I said to Slootman, "I know you don't like to say you're a SaaS." And I think he said, "Well, we are a SaaS." >> Because again, if you go to the top of the application stack, there's any number of ways you can give it location-agnostic function or, you know, regional, local stuff. It's like, let's solve the location problem by having me be your one location. How can it be decentralized if you're centralizing on (audio distortion)? >> Well, it's more decentralized than if it's all in one cloud. So let me actually, so the spectrum. So again, in the spirit of what is and what isn't, I think it's safe to say Hammerspace is supercloud. I think there's no debate there, right? Certainly among this crowd. And I think we can all agree that Dell, Dell Storage, is not supercloud. Where it gets fuzzy is this Snowflake example, or even, how about a, how about a Cohesity that instantiates its stack in different cloud regions in different clouds, and synchronizes, with whatever magic sauce it does that. Is that a supercloud? I mean, so I'm cautious about having too strict of a definition 'cause then only-- >> Fair enough, fair enough. >> But I could use your help and thoughts on that. >> So I think we're talking about two different spectrums here. One is the spectrum of platform to application-specific. As you go up the application stack, it becomes this specific thing. Or you go up to the more and more structured, where it's serving a specific application function, where it's more of a SaaS thing. 
I think it's harder to call a SaaS service a supercloud. And I would argue that the reason there, and what you're lacking in the definition, is to talk about it as general purpose. Okay? Now, that said, a data warehouse is general purpose at the structured data level. So you could make the argument for why Snowflake is a supercloud by saying that it is a general purpose platform for doing lots of different things. It's just one at a higher level, up at the structured data level. So one spectrum is the high level going from platform to, you know, unstructured data to structured data to very application-specific, right? Like a specific, you know, CAD/CAM mechanical design cloud, like an Autodesk would want to give you their cloud for running, you know, and sharing CAD/CAM designs, doing your CAD/CAM anywhere stuff. Well, the other spectrum is how well does the purported supercloud technology actually live up to allowing you to run anything anywhere, with not just the same APIs, but with the local presence of data, with the exact same runtime environment everywhere, and to be able to correctly manage how to get that runtime environment anywhere. So a Cohesity has some means of running things in different places and some means of coordinating what's where, and of serving diff, you know, things in different places. I would argue that it is a very poor approximation of what Hammerspace does in providing the exact same file system with local high performance access everywhere, with the metadata ability to control where the data is actually instantiated so that you don't have to wait for it to get orchestrated. But even then, when you do have to wait for it, it happens automatically, and so it's still only a matter of, well, how quick is it? And on the other end of the spectrum, you could look at NetApp with FlexCache and say, "Is that supercloud?" And I would argue, well, kind of, because it allows you to run things in different places, because it's a cache. 
But you know, it really isn't, because it presumes some central silo from which you're caching stuff. So, you know, is it or isn't it? Well, it's on a spectrum of exactly how fully it decouples a runtime environment from specific locality. And I think a cache doesn't, it stretches a specific silo and makes it have some semblance of similar access in other places. But there's still a very big difference to the central silo, right? You can't turn off that central silo, for example. >> So it comes down to how specific you make the definition. And this is where it gets kind of really interesting. It's like cloud. Does IBM have a cloud? >> Exactly. >> I would say yes. Does it have the kind of quality that you would expect from a hyper-scale cloud? No. Or see if you could say the same thing about-- >> But that's a problem with choosing a name. That's the problem with choosing a name, supercloud, versus talking about the concept of cloud and how true you are to that concept. >> For sure. >> Right? Because without getting a name, you don't have to draw, yeah. >> I'd like to explore one particular, or bring them together. You made a very interesting observation that from an enterprise point of view, they want to safeguard their store, their data, and they want to make sure that they can have that data running in their own workflows, as well as other service providers providing services to them for that data. So, and in particular, if you go back to, you go back to Snowflake. If Snowflake could provide the ability for you to have your data where you wanted, you were in charge of that, would that make Snowflake a supercloud? >> I'll tell you, in my mind, they would be closer to my conceptualization of supercloud if you can instantiate Snowflake as software on your own infrastructure, and pump your own data to Snowflake that's instantiated on your own infrastructure. 
The fact that it has to be on their infrastructure, or that it's on their, that it's on their account in the cloud, that you're giving them the data and they're, that fundamentally goes against it to me. If they, you know, they would be a pure, a pure play if they were a software-defined thing where you could instantiate Snowflake machinery on the infrastructure of your choice and then put your data into that machinery and get all the benefits of Snowflake. >> So did you see--? >> In other words, if they were not a SaaS service, but offered all of the similar benefits of being, you know, if it were a service that you could run on your own infrastructure. >> So did you see what they announced, that--? >> I hope that's making sense. >> It does, did you see what they announced at Dell? They basically announced the ability to take non-native Snowflake data, read it in from an object store on-prem, like a Dell object store. They do the same thing with Pure, read it in, running it in the cloud, and then push it back out. And I was saying to Dell, look, that's fine. Okay, that's interesting. You're taking a materialized view or an extended table, whatever you're doing, wouldn't it be more interesting if you could actually run the query locally with your compute? That would be an extension that would actually get my attention and extend that. >> That is what I'm talking about. That's what I'm talking about. And that's why I'm saying I think Hammerspace is more progressive on that front, because with our technology, anybody who can instantiate a service can make a service. And so, MSPs can use Hammerspace as a way to build a super PaaS layer and host their clients on their infrastructure in a cloud-like fashion. 
And their clients can have their own private data centers, and the MSP or the public clouds, and Hammerspace can be instantiated, get this, by different parties in these different pieces of infrastructure and yet linked together to make a common file system across all of it. >> But this is data mesh. If I were HPE or Dell, it's exactly what I'd be doing. I'd be working with Hammerspace to create my own data. I'd work with Databricks, Snowflake, and any other-- >> Data mesh is a good way to put it. Data mesh is a good way to put it. And this is at the lowest level of, you know, the underlying file system that's mountable by the operating system, consumed as a real file system. You can't get lower level than that. That's why this is the foundation for all of the other apps and structured data systems, because you need to have a data mesh that can at least mesh the binary blob. >> Okay. >> That holds the binaries and that holds the datasets that those applications are running. >> So David, in the third week of January, we're doing Supercloud 2 and I'm trying to convince John Furrier to make it a data slash data mesh edition. I'm slowly getting him to the knothole. I would very much, I mean, you're in the Bay Area, I'd very much like you to be one of the headliners. Zhamak Dehghani is going to speak, she's the creator of Data Mesh, >> Sure. >> I'd love to have you come into our studio as well, for the live session. If you can't make it, we can pre-record. But you're right there, so I'll get you the dates. 
>> Yeah, and Data Mesh, of course, is one of those evocative names, but she has come up with some very well defined principles around decentralized data, data as products, self-serve infrastructure, automated governance, and so forth, which I think your vision plugs right into. And she's brilliant. You'll love meeting her. >> Well, you know, and I think... Oh, go ahead. Go ahead, Peter. >> I'd just like to raise one other interface which I think is important. How do you see yourself and the open source? You talked about having an operating system. Obviously, Linux is the operating system at one level. How are you imagining that you would interface with the open source community as part of this development? >> Well, it's funny you ask 'cause my CTO is the kernel maintainer of the storage networking stack. So how the Linux operating system perceives and consumes networked data at the file system level, the network file system stack, is his purview. He owns that, he wrote most of it over the last decade that he's been the maintainer, but he's the gatekeeper of what goes in. And we have leveraged his abilities to enhance Linux to be able to use this decentralized data, in particular with decoupling the control plane driven by metadata from the data access path and the many storage systems on which the data gets accessed. So this factoring, this splitting of control plane from data path, metadata from data, was absolutely necessary to create a data mesh like we're talking about. And to be able to build this supercloud concept. And the highways on which the data runs and the client which knows how to talk to it is all open source. And we have, we've driven the NFS 4.2 spec. The newest NFS spec came from my team. And it was specifically the enhancements needed to be able to build a spanning file system, a data mesh at a file system level. 
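The decoupling David describes, a metadata-driven control plane separated from the data access path, can be sketched in a few lines. This is an illustrative toy model only; the class and method names are hypothetical and are not Hammerspace or NFS 4.2 APIs. The client asks the metadata service where a file lives, then reads the bytes directly from that location, so the control plane never sits in the byte-transfer path.

```python
# Illustrative sketch of splitting the control plane (metadata) from the
# data path. All names here are made up; this is not a real file-system API.

class MetadataService:
    """Control plane: knows where each file's bytes live, never moves them."""

    def __init__(self):
        self._placement = {}  # path -> storage location name

    def place(self, path, location):
        self._placement[path] = location

    def locate(self, path):
        return self._placement[path]


class DataStore:
    """Data path: one of many storage systems holding the actual bytes."""

    def __init__(self, name):
        self.name = name
        self._blobs = {}

    def write(self, path, data):
        self._blobs[path] = data

    def read(self, path):
        return self._blobs[path]


def read_file(meta, stores, path):
    # Control-plane lookup first, then a direct data-path read: the
    # metadata service is not involved in transferring the bytes.
    location = meta.locate(path)
    return stores[location].read(path)


meta = MetadataService()
stores = {"on-prem": DataStore("on-prem"), "cloud": DataStore("cloud")}

stores["on-prem"].write("/data/run.log", b"results")
meta.place("/data/run.log", "on-prem")
print(read_file(meta, stores, "/data/run.log"))  # b'results'

# Moving the data is a metadata update plus a copy; clients keep the same path.
stores["cloud"].write("/data/run.log", b"results")
meta.place("/data/run.log", "cloud")
print(read_file(meta, stores, "/data/run.log"))  # b'results'
```

The point of the factoring is visible in `read_file`: the same path resolves to different physical locations over time, which is what lets many storage systems participate in one spanning namespace.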
Now that said, our file system itself and our server, our file server, our data orchestration, our data management stuff, that's all closed source, proprietary Hammerspace tech. But the highways on which the mesh connects are actually all open source, and the client that knows how to consume it. So honestly, I would welcome competitors using those same highways. They would be at a major disadvantage because we kind of built them, but it would still be very validating and I think only increase the potential adoption rate by more than whatever they might take of the market. So it'd actually be good to split the market with somebody else to come in and share those now super highways for how to mesh data at the file system level. So yeah, hopefully that answered your question. Does that answer the question about how we embrace the open source? >> Right, and there was one other, just that my last one is how do you enable something to run in every environment? If we take the edge, for example, as an environment which is very compute heavy but has a lot less capability, how do you do it? >> Perfect question. Perfect question. What we do today is a software appliance. We are using Linux, RHEL 8 or a RHEL 8 equivalent such as CentOS 8; they're all roughly equivalent. But we have it bundled in a software appliance which can be instantiated on bare metal hardware, on any type of VM system from VMware to all of the different hypervisors in the Linux world, to even Nutanix and such. So it can run in any virtualized environment and it can run on any cloud instance, server instance in the cloud. And we have it packaged and deployable from the marketplaces within the different clouds. So you can literally spin it up at the click of an API in the cloud on instances in the cloud. So with all of these together, you can basically instantiate a Hammerspace set of machinery that can offer up this file system mesh. 
like we've been using the terminology we've been using now, anywhere. So it's like being able to take and spin up Snowflake and then just be able to install and run some VMs anywhere you want and boom, now you have a Snowflake service. And by the way, it is so complete that some of our customers, I would argue many, aren't even using public clouds at all; they're using this just to run their own data centers in a cloud-like fashion, you know, where they have a data service that can span it all. >> Yeah, and to Molly's first point, we would consider that, you know, cloud. Let me put you on the spot. If you had to describe conceptually, without a chalkboard, what an architectural diagram would look like for supercloud, what would you say? >> I would say it's to have the same runtime environment within every data center, and defining that runtime environment as what it takes to schedule the execution of applications, so job scheduling, runtime stuff, and here we're talking Kubernetes, Slurm, other things that do job scheduling. We're talking about having a common way to, you know, instantiate compute resources. So a global compute environment, having a common compute environment where you can instantiate things that need computing. Okay? So that's the first part. And then the second is the data platform, where you can have file, block and object volumes, and have them available with the same APIs in each of these distributed data centers, and have the exact same data omnipresent with the ability to control where the data is from one moment to the next, local, wherever the data is instantiated. So my definition would be a common runtime environment that's bifurcate-- >> Oh. (attendees chuckling) We just lost them at the money slide. >> That's part of the magic that makes people listen. We keep everyone on pins and needles waiting. (attendees chuckling) >> That's good. >> Are you back, David? >> I'm on the edge of my seat. Common runtime environment. It was like... 
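David's working definition above, the same runtime environment plus the same data platform replicated in every data center, with the same data omnipresent, can be sketched as a toy model. Every name here is illustrative (this is not Kubernetes, Slurm, or any real data API); the point is only that an identical job sees identical data wherever it is scheduled.

```python
# Toy model of the supercloud definition: every data center exposes the same
# runtime (job scheduling) interface and the same data interface, so a
# workload runs unchanged in any of them against the same data.

class DataCenter:
    def __init__(self, name):
        self.name = name
        self.volumes = {}   # same data API everywhere
        self.jobs = []      # same runtime API everywhere

    def mount(self, volume, data):
        self.volumes[volume] = data

    def schedule(self, job):
        # Run the job against the data that is locally present.
        self.jobs.append(job)
        return job(self.volumes)


def replicate(volume, data, centers):
    # "Omnipresent" data: the same volume visible in each data center.
    for dc in centers:
        dc.mount(volume, data)


centers = [DataCenter("on-prem"), DataCenter("cloud-a"), DataCenter("cloud-b")]
replicate("dataset", [3, 1, 2], centers)

job = lambda vols: sorted(vols["dataset"])

# The identical job runs in any data center and produces identical results.
results = [dc.schedule(job) for dc in centers]
print(results)  # [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
```

The two halves of the definition map onto the two attributes of `DataCenter`: `schedule` stands in for the common compute environment, `volumes` for the common data platform.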
>> And just wait, there's more. >> But see, I'm maybe hyper-focused on the lower level of what it takes to host and run applications. And that's the stuff to schedule what resources they need to run and to get them going and to get them connected through to their persistence, you know, and their data. And to have that data available in all forms and have it be the same data everywhere. On top of that, you could then instantiate applications of different types, including relational databases, and data warehouses and such. And then you could say, now I've got, you know, now I've got these more application-level or structured data-level things. I tend to focus less on that structured data level and the application level and am more focused on what it takes to host any of them generically on that super pass layer. And I'll admit, I'm maybe hyper-focused on the pass layer and I think it's valid to include, you know, higher levels up the stack like the structured data level. But as soon as you go all the way up to like, you know, a very specific SAS service, I don't know that you would call that supercloud. >> Well, and that's the question, is there value? And Marianna Tessel from Intuit said, you know, we looked at it, we did it, and it just, it was actually negative value for us because connecting to all these separate clouds was a real pain in the neck. Didn't bring us any additional-- >> Well that's 'cause they don't have this pass layer underneath it so they can't even shop around, which actually makes it hard to stand up your own SAS service. And ultimately they end up having to build their own infrastructure. Like, you know, I think there's been examples like Netflix moving away from the cloud to their own infrastructure. Basically, if you're going to rent it for more than a few months, it makes sense to build it yourself, if it's at any kind of scale. >> Yeah, for certain components of that cloud. 
But if the Goldman Sachs came to you, David, and said, "Hey, we want to collaborate and we want to build out a cloud and essentially build our SAS system, and we want to do that with Hammerspace, and we want to tap the physical infrastructure of not only our data centers but all the clouds," then that essentially would be a SAS, would it not? And wouldn't that be a Super SAS or a supercloud? >> Well, you know, what they may be using to build their service is a supercloud, but their service at the end of the day is just a SAS service with global reach. Right? >> Yeah. >> You know, look at, oh shoot. What's the name of the company that does, it has a cloud for doing bookkeeping and accounting? I forget their name, net something. NetSuite. >> NetSuite. NetSuite, yeah, Oracle. >> Yeah. >> Yep. >> Oracle acquired them, right? Is NetSuite a supercloud or is it just a SAS service? You know? I think under the covers you might ask, are they using supercloud under the covers so that they can run their SAS service anywhere and be able to shop the venue, get elasticity, get all the benefits of cloud to the benefit of the service that they're offering? But you know, folks who consume the service, they don't care, because to them they're just connecting to some endpoint somewhere and they don't have to care. So the further up the stack you go, the more location-agnostic it is inherently anyway. >> And I think pass is really the critical layer. We thought about IAS Plus and we thought about SAS Minus, you know, Heroku, and hence that's why we kind of got caught up and included it. But SAS, I admit, is the hardest one to crack. And so maybe we exclude that as a deployment model. >> That's right, and maybe coming down a level to saying, but you can have a structured data supercloud, so you could still include, say, Snowflake. Because what Snowflake is doing is more general purpose. So it's about how general purpose it is. 
Is it hosting lots of other applications or is it the end application? Right? >> Yeah. >> So I would argue general purpose nature forces you to go further towards platform down-stack. And you really need that general purpose or else there is no real distinguishing. So if you want defensible turf to say supercloud is something different, I think it's important to not try to wrap your arms around SAS in the general sense. >> Yeah, and we've kind of not really gone, leaned hard into SAS, we've just included it as a deployment model, which, given the constraints that you just described for structured data would apply if it's general purpose. So David, super helpful. >> Had it sign. Define the SAS as including the hybrid model hold SAS. >> Yep. >> Okay, so with your permission, I'm going to add you to the list of contributors to the definition. I'm going to add-- >> Absolutely. >> I'm going to add this in. I'll share with Molly. >> Absolutely. >> We'll get on the calendar for the date. >> If Molly can share some specific language that we've been putting in that kind of goes to stuff we've been talking about, so. >> Oh, great. >> I think we can, we can share some written kind of concrete recommendations around this stuff, around the general purpose, nature, the common data thing and yeah. >> Okay. >> Really look forward to it and would be glad to be part of this thing. You said it's in February? >> It's in January, I'll let Molly know. >> Oh, January. >> What the date is. >> Excellent. >> Yeah, third week of January. Third week of January on a Tuesday, whatever that is. So yeah, we would welcome you in. But like I said, if it doesn't work for your schedule, we can prerecord something. But it would be awesome to have you in studio. >> I'm sure with this much notice we'll be able to get something. Let's make sure we have the dates communicated to Molly and she'll get my admin to set it up outside so that we have it. >> I'll get those today to you, Molly. Thank you. 
>> By the way, I am so, so pleased with being able to work with you guys on this. I think the industry needs it very badly. They need something to break them out of the box of their own mental constraints of what the cloud is versus what it's supposed to be. And obviously, the more we get people to question their reality, what is real, and what we're really capable of today, the more business we're going to get. So we're excited to lend a hand behind this notion of supercloud and a super pass layer in whatever way we can. >> Awesome. >> Can I ask you whether your platforms include ARM as well as x86? >> So we have not done an ARM port yet. It has been entertained and won't be much of a stretch. >> Yeah, it's just a matter of time. >> We've actually entertained doing it on behalf of NVIDIA, but it will absolutely happen, because ARM in the data center I think is a foregone conclusion. Well, it's already there in some cases, but not quite at volume. So definitely will be the case. And I'll tell you where this gets really interesting, a discussion for another time, is back to my old friend, the SSD, and having SSDs that have enough brains on them to be part of that fabric. Directly. >> Interesting. Interesting. >> Very interesting. >> Directly attached to ethernet, and able to create a data mesh global file system, that's going to be really fascinating. Got to run now. >> All right, hey, thanks you guys. Thanks David, thanks Molly. Great to catch up. Bye-bye. >> Bye. >> Talk to you soon.

Published Date : Oct 5 2022


Oracle & AMD Partner to Power Exadata X9M


 

[Music] >> The history of Exadata as a platform is really unique, and from my vantage point it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve, and I remember the Oracle HP Database Machine, which was announced at Oracle OpenWorld almost 15 years ago. Exadata kept evolving; after the Sun acquisition it became a platform with tightly integrated hardware and software, and today it keeps evolving, almost like a chameleon, to address more workloads and reach new performance levels. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure, and introduced the ability to run the Autonomous Database service or the Exadata Database Service. Oracle often talks about what it calls stock exchange performance levels, kind of no description needed, and related capabilities. The company, as we know, is fond of putting out benchmarks and comparisons with previous generations of product, and sometimes competitive products, that underscore the progress being made with Exadata, such as 87 percent more IOPS, with latency measured in microseconds instead of milliseconds, and many other numbers that are industry-leading and compelling, especially for mission-critical workloads. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not the Eastern Pacific Yacht Club, for all you sailing buffs; rather, it stands for extreme performance yield computing, the enterprise-grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's Executive Vice President of Mission-Critical 
Technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of Technology and Engineering at AMD. Juan, welcome back to the show. Mark, great to have you on theCUBE in your first appearance. Thanks for coming on. >> Yep, happy to be here. Thank you. >> All right, Juan, let's start with you. You've been on theCUBE a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle Database; we've covered that extensively. What's different and unique, from your point of view, about Exadata Cloud Infrastructure X9M on OCI? >> Yeah, so as you know, Exadata is designed top-down to be the best possible platform for database. It has a lot of unique capabilities: we make extensive use of RDMA and smart storage, and we take advantage of everything we can in the leading hardware platforms. X9M is our next-generation platform, and it does exactly that. We always want to get the best that we can from the available hardware that partners like AMD produce, and that's what X9M is: it's faster, more capacity, lower latency, more IOPS, pushing the limits of the hardware technology. We don't want to be the limit; the database software should not be the limit, it should be the actual physical limits of the hardware. That's what X9M is all about. >> Why AMD chips in X9M? >> Yeah, so we're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads, and it's really that simple: we just think the performance is outstanding in the product. >> Yeah. Mark, your career is quite amazing. I've been around long enough to remember the transition from emitter-coupled logic to CMOS in the mainframe era, back when you were at IBM; that was an epic technology call at the time. I was, of course, steeped as an analyst at IDC in the PC era, and like many witnessed the tectonic shift that Apple's iPod and iPhone caused. And the timing of you joining AMD is quite important, in my view, because 
it coincided with the year that PC volumes peaked and marked the beginning of what I call a stagflation period for x86. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of, I think, the great partnership that we have with Oracle on Exadata X9M: the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to x86, with a very strong roadmap that we've executed on schedule to our commitments, and this third generation does all of that. It uses a seven-nanometer CPU core that was designed to bring throughput and really high efficiency to computing, and just deliver raw capabilities. Exadata X9M leverages all of that. It's implemented in up to 64 cores per socket; it's got anywhere from 128 to 168 lanes of PCIe Gen 4 I/O connectivity, so you can attach all of the infrastructure and storage that's needed for Exadata performance. And also memory: you have to feed the beast for those analytics and for the OLTP that Juan was talking about, so it has eight channels of high-performance DDR4 memory. It's really a balanced processor, implemented in a way to really optimize high performance. That is the whole focus of AMD; it's where we reset the company's focus years ago. And again, it's been great to see the super-smart database team at Oracle really partner with us and understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah, it's been a pretty 
amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes into play. Think about a processor: when Juan's team first looked at it, there were general benchmarks, and the benchmarks were impressive, but they're general benchmarks; they show, I'll say, the base processing capability. The partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. That's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, and where there is tuning that could in fact really boost the performance above that baseline you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency to RDMA; you look at optimizing throughput on OLTP and database processing. When you go through the workloads, take the traces, break them down, and find the areas that are bottlenecking, then you can adjust. We have thousands of parameters that can be adjusted for a given workload, and again, that's the beauty of the partnership. We have the expertise in CPU engineering; Oracle's Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20 percent to 50 percent gains on specific workloads. It's really exciting to see. >> Okay, so I want to follow up on that. Is that different from the competition? How are you driving customer value? You mentioned some percentage improvements; are you measuring primarily with 
latency? How do you look at that? >> Well, we are differentiated in a number of factors. We bring a higher core density, certainly the highest in x86, and moreover we've led the industry in how to scale those cores. We have a very high-performance fabric that connects those cores together, so as a customer needs more cores we scale anywhere from 8 to 64 cores. The trick is, as you add more cores, you want the scaling to be as close to linear as possible, and that's a differentiation we have. We enable that, again, with that balanced compute of CPU, I/O, and memory that we design. But the key is, we pride ourselves at AMD on being able to partner in a very deep fashion with our customers; we listen very well. I think that's what we've had the opportunity to do with Juan and his team, and we appreciate that. That is how we got the kind of performance benefits that I described earlier: working together almost like one team, and bringing the best possible capability to the end customers. >> Great, thank you for that. Juan, I want to come back to you. Can both the Exadata Database Service and the Autonomous Database service take advantage of the Exadata Cloud X9M capabilities that are in that platform? >> Yeah, absolutely. Autonomous is basically our self-driving version of the Oracle Database, but fundamentally it is the same database core, so both of them will take advantage of the tremendous performance that we're getting. You know, when Mark talks about 64 cores, that's per chip; we have two chips, it's a two-socket server, so it's a 128-way processor. And then from our point of view there are two threads, so from the database's point of view it's a 256-way processor. So there's a lot of raw performance there, and we've done a lot of work with the AMD team to make sure that we deliver that to our customers for all the different kinds of workloads, 
including OLTP and analytics, but also for our Autonomous Database. So yes, absolutely, it all takes advantage of it. >> Now Juan, I can't let you go without asking about the competition. I've written extensively about the big four hyperscale clouds, specifically AWS, Azure, Google, and Alibaba, and I know, don't hate me, it sometimes angers some of my friends at Oracle, IBM too, that I don't include you in that list. But I see Oracle specifically as different: really the cloud for the most demanding applications and top-performance databases, not the commodity cloud, which of course angers all my friends at those four companies, so I'm ticking everybody off. How does Exadata Cloud Infrastructure X9M compare to the likes of AWS, Azure, Google, and other database cloud services in terms of OLTP and analytics value, performance, cost, however you want to frame it? >> Yeah, so our architecture is fundamentally different. We've architected our database for the scale-out environment: for example, we've moved intelligence into the storage, we've put in remote direct memory access, we've put persistent memory into our product. We've done a lot of architectural changes that they haven't, and you're starting to see a little bit of that. If you look at some of the things Amazon and Google are doing, they're starting to realize that, hey, if you're going to achieve good results you really need to push some database processing into the storage, so they're taking baby steps toward that, roughly 15 years after we've had a product. And at some point they're going to realize you really need RDMA, you really need more direct access to those capabilities, so they're slowly getting there, but we're well ahead. The way this is delivered is better availability, better performance, lower latency, higher IOPS, and this is why our customers love our product. If you look at the global 
Fortune 100, over 90 percent of them are running Exadata today, and even in our cloud, over 60 of the global 100 are running Exadata in the Oracle cloud because of all the differentiated benefits they get from the product. So yeah, we're well ahead in the database space. >> Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. Well, first off, the deep partnership that we've had on Exadata X9M has really allowed us to inform our future design. Our current third-generation EPYC, which is what we call our EPYC server offerings, is the 7003 series, the third gen in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway and ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, and it's going to have expanded memory capabilities, because there's CXL, Compute Express Link, that will open up even more memory opportunities, and I could go on. That's the beauty of a deep partnership: it enables us to really take that learning going forward, it pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have obviously been very forthcoming; you have to be, with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah, I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10 or 15 years ago when multi-core processors came out, and then we were on that for a while and things started stagnating, but in the last two or three years, and AMD has been leading this, there's been a dramatic acceleration in innovation 
in this space. So it's very exciting to be part of this, and customers are getting a big benefit from it. >> All right, gents, hey, thanks for coming back on theCUBE today. Really appreciate your time. >> Thanks, glad to be here. >> All right, thank you for watching this exclusive Cube conversation. This is Dave Vellante from theCUBE, and we'll see you next time. [Music]
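As a quick sanity check on Juan's processor arithmetic above: 64 cores per chip, two chips in the two-socket server, and two hardware threads per core, giving the 128-way and 256-way figures he quotes.

```python
# Checking the core math from the conversation above.
cores_per_socket = 64
sockets_per_server = 2
threads_per_core = 2

physical_cores = cores_per_socket * sockets_per_server
logical_cpus = physical_cores * threads_per_core

print(physical_cores)  # 128  ("a 128-way processor")
print(logical_cpus)    # 256  ("a 256-way processor" from the database's view)
```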

Published Date : Sep 13 2022


Tushar Katarki & Justin Boitano | Red Hat Summit 2022


 

(upbeat music) >> We're back. You're watching theCUBE's coverage of Red Hat Summit 2022 here in the Seaport in Boston. I'm Dave Vellante with my co-host, Paul Gillin. Justin Boitano is here. He's the Vice President of Enterprise and Edge Computing at NVIDIA. Maybe you've heard of him. And Tushar Katarki, who's the Director of Product Management at Red Hat. Gentlemen, welcome to theCUBE, good to see you. >> Thank you. >> Great to be here, thanks. >> Justin, you did a keynote this morning. You got interviewed and shared your thoughts on AI. You encouraged people to think bigger on AI. I know it's kind of self-serving, but why? Why should we think bigger? >> When you think of AI, I mean, it's a monumental change. It's going to affect every industry. And so when we think of AI, you step back, you're challenging companies to build intelligence and AI factories, factories that can produce intelligence. And so it, you know, forces you to rethink how you build data centers, how you build applications. It's a very data centric process where you're bringing in, you know, an exponential amount of data. You have to label that data. You got to train a model. You got to test the model to make sure that it's accurate and delivers business value. Then you push it into production, it's going to generate more data, and you kind of work through that cycle over and over and over. So, you know, just as Red Hat talks about, you know, CI/CD of applications, we're talking about CI/CD of the AI model itself, right? So it becomes a continuous improvement of AI models in production, which is a big, big business transformation. >> Yeah, Chris Wright was talking about basically taking your typical application development, you know, pipeline and life cycle, and applying that type of thinking to AI. I was saying those two worlds have to come together. Actually, you know, the application stack and the data stack, including AI, need to come together. What's the role of Red Hat?
What's your sort of posture on AI? Where do you fit with OpenShift? >> Yeah, so we're really excited about AI. I mean, a lot of our customers obviously are looking to take that data and make meaning out of it, and using AI is definitely a big, important tool. And OpenShift, and our approach to Open Hybrid Cloud, really forms a successful platform to base all your AI journey on, with partners such as NVIDIA, with whom we are working very closely. And so the idea really is, as Justin was saying, you know, the end to end, when you think about the life of a model: you've got data, you mine that data, you create models, you deploy it into production. That whole thing, what we call CI/CD, as he was saying, DevOps, DevSecOps, and the hybrid cloud that Red Hat has been talking about, with OpenShift at the center, forms a good basis for that. >> So somebody said the other day, I'm going to ask you, is NVIDIA a hardware company or a software company? >> We are a company that people know for our hardware but, you know, predominantly now we're a software company. And that's what we were on stage talking about. I mean, ultimately, a lot of these customers know that they've got to embark on this journey to apply AI, to transform their business with it. It's such a big competitive advantage going into, you know, the next decade. And so the faster they get ahead of it, the more they're going to win, right? But some of them, they're just not really sure how to get going. And so a lot of this is we want to lower the barrier to entry. We built this program, we call it Launchpad, to basically make it so they get instant access to the servers, the AI servers, with OpenShift, with the MLOps tooling, with example applications. And then we walk them through examples like how do you build a chatbot? How do you build a vision system for quality control? How do you build a price recommendation model?
And they can do hands-on labs and walk out of, you know, Launchpad with all the software they need, I'll say the blueprint for building their application. They've got a way to have the software and containers supported in production, and they know the blueprint for the infrastructure and operating that at scale with OpenShift. So more and more, you know, to come back to your question, we're focused on the software layers and making that easy to help, you know, either enterprises build their apps or work with our ecosystem and developers to buy, you know, solutions off the shelf. >> On the hardware side though, I mean, clearly NVIDIA has prospered on the backs of GPUs, as the engines of AI development. Is that how it's going to be for the foreseeable future? Will GPUs continue to be core to building and training AI models or do you see something more specific to AI workloads? >> Yeah, I mean, it's a good question. So I think for the next decade, well, plus, I mean not forever, we're going to always monetize hardware. It's a big, you know, market opportunity. I mean, Jensen talks about a $100 billion, you know, market opportunity for NVIDIA just on hardware. It's probably another $100 billion opportunity on the software. So the reality is we're getting going on the software side, so it's still kind of early days, but that's, you know, a big area of growth for us in the future and we're making big investments in that area. On the hardware side, and in the data center, you know, the reality is since Moore's law has ended, acceleration is really the thing that's going to advance all data centers. So I think in the future, every server will have GPUs, every server will have DPUs, and we can talk a bit about what DPUs are. And so there's really kind of three primary processors that have to be there to form the foundation of the enterprise data center in the future.
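Earlier in the conversation, Boitano frames AI development as CI/CD of the model itself: label data, train, test for accuracy and business value, push to production, repeat. That loop reduces to a promotion gate; here is a minimal sketch in Python, where the toy models, data, and threshold are invented for illustration and are not NVIDIA's or Red Hat's tooling:

```python
# Minimal sketch of a CI/CD-style promotion gate for an AI model.
# All names and thresholds here are illustrative, not any vendor's API.

def evaluate(model, labeled_batch):
    """Fraction of labeled examples the model gets right (the 'test' stage)."""
    correct = sum(1 for x, y in labeled_batch if model(x) == y)
    return correct / len(labeled_batch)

def promote_if_better(candidate, incumbent, labeled_batch, min_gain=0.01):
    """Deploy the candidate only if it beats the production model by at
    least `min_gain` accuracy; otherwise keep the incumbent running."""
    cand_acc = evaluate(candidate, labeled_batch)
    prod_acc = evaluate(incumbent, labeled_batch)
    return candidate if cand_acc >= prod_acc + min_gain else incumbent

# Toy models: classify an integer's parity.
incumbent = lambda x: x % 2      # always right on this data
candidate = lambda x: 0          # always predicts "even"
batch = [(n, n % 2) for n in range(10)]

chosen = promote_if_better(candidate, incumbent, batch)
print(chosen is incumbent)  # the weaker candidate is not promoted
```

In production the gate would run on every retraining cycle, which is exactly the "continuous improvement of AI models in production" described above.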
>> You did bring up an interesting point about DPUs and MPUs, and sort of the variations of GPUs that are coming about. Do you see those different PU types continuing to proliferate? >> Oh, absolutely. I mean, we've done a bunch of work with Red Hat, and we've got, I'll say, a beta of OpenShift 4.10 that now supports DPUs as, I'll call it, the control plane, like software defined networking offload in the data center. So it takes all the software defined networking off of CPUs. When everybody talks about, I'll call it, software defined, you know, networking in core data centers, you can think of that as just a CPU tax up to this point. So what's nice is it's all moving over to the DPU to, you know, offload and isolate it from the x86 cores. It increases the security of the data center. It improves the throughput of your data center. And so, yeah, DPUs, we see everybody copying that model. And, you know, to give credit where credit is due, I think, you know, companies like AWS, you know, they bought Annapurna, they turned it into Nitro, which is the foundation of their data centers. And everybody wants, I'll call it, the democratized version of that to run their data centers. And so every financial institution and bank around the world sees the value of this technology, but running in their data centers. >> Hey, everybody needs a Nitro. I've written about it. The Annapurna acquisition, 350 million. I mean, peanuts in the grand scheme of things. It's interesting, you said Moore's law is dead. You know, we have that conversation all the time. Pat Gelsinger promised that Moore's law is alive and well. But the interesting thing is when you look at the numbers, that's, you know, Moore's law, we all know it, doubling of the transistor densities every 18 to 24 months. Let's say that, that promise that he made is true.
What I think the industry maybe doesn't appreciate, I'm sure you do, being in NVIDIA, when you combine what you were just saying, the CPU, the GPU, Paul, the MPU, accelerators, all the XPUs you're talking about, I mean, look at Apple with the M1, I mean 6X in 15 months versus doubling every 18 to 24. The A15 is probably averaging over the last five years a 110% performance improvement each year versus the historical Moore's law, which is 40%. It's probably down to the low 30s now. So it's a completely different world that we're entering now. And the new applications are going to be developed on these capabilities. It's just not your general purpose market anymore. From an application development standpoint, what does that mean to the world? >> Yeah, I mean, yeah, it is a great point. I mean, from an application, I mean, first of all, I mean, just talk about AI. I mean, they are all very compute intensive. They're data intensive. And I mean, there's so much focus on moving data into compute and crunching those numbers. I mean, I'd say you need all the PUs that you mentioned in the world. And also there are other concerns that will augment that, right? Like we want to, you know, security is so important, so we want to secure everything. Cryptography is going to take off to new levels, you know. For example, in the case of DPUs, we are talking about, you know, can that be used to offload your encryption and firewalling, and so on and so forth. So I think there are a lot of opportunities, even from an application point of view, to take advantage of this capacity. So I'd say we've never run out of the need for PUs, if you will. >> So is OpenShift the layer that's going to simplify all that for the developer? >> That's right. You know, one of the things that we worked on with NVIDIA, in fact, was we developed this concept of an operator for GPUs, but you can use that pattern for any of the PUs.
And so the idea really is that, how do you, yeah-- (all giggle) >> That's a new term. >> Yeah, it's a new term. (all giggle) >> XPUs. >> XPUs, yeah. And so that pattern becomes very easy for GPUs or any other such accelerators to be easily added as a capacity. And for the Kubernetes scheduler to understand that there is that capacity, so that an application which says that I want to run on a GPU, then it becomes very easy for it to run on that GPU. And so that's the abstraction, to your point about how we are making that happen. >> And to add to this. So the operator model, it's this, you know, open source model that does the orchestration. So Kubernetes will say, oh, there's a GPU in that node, let me run the operator, and it installs our entire run time. And our run time now, you know, it's got a MIG configuration utility. It's got the driver. It's got, you know, telemetry and metering of the actual GPU and the workload, you know, along with a bunch of other components, right? They get installed in that Kubernetes cluster. So instead of somebody trying to chase down all the little pieces and parts, it just happens automatically in seconds. We've extended the operator model to DPUs and networking cards as well, and we have all of those in the operator hub. So for somebody that's running OpenShift in their data centers, it's really simple to, you know, turn on Node Feature Discovery, you point to the operators, and when you see new accelerated nodes, the entire run time is automatically installed for you. So it really makes, you know, GPUs and our networking, our advanced networking capabilities, really first class citizens in the data center. >> So you can kind of connect the dots and see how NVIDIA and the Red Hat partnership are sort of aiming at the enterprise. I mean, NVIDIA, obviously, they got the AI piece. I always thought maybe 25% of the compute cycles in the data center were wasted doing storage offloads or networking offloads, security.
I think Jensen says it's 30%, probably a better number than I have. But so now you're seeing a lot of new innovation in new hardware devices that are attacking that with alternative processors. And then my question is, what about the edge? Is that a BlueField out at the edge? What does that look like to NVIDIA and where does OpenShift play? >> Yeah, so when we talk about the edge, we always have to start by talking about which edge we are talking about, 'cause it's everything outside the core data center. I mean, some of the trends that we see with regard to the edge is, you know, when you get to the far edge, it's single nodes. You don't have the guards, gates, and guns protection of the data center. So you start having to worry about physical security of the hardware. So you can imagine there's really stringent requirements on protecting the intellectual property of the AI model itself. You spend millions of dollars to build it. If I push that out to an edge data center, how do I make sure that that's fully protected? And that's the area where we just announced a new processor that we call Hopper H100. It supports confidential computing so that you can basically ensure that model is always encrypted: in system memory, across the PCI bus to the GPU, and it's run in a confidential way on the GPU. So you're protecting your data, which is your model, plus the data flowing through it, you know, in transit, while stored, and then in use. So that really adds to that edge security model. >> I wanted to ask you about the cloud, correct me if I'm wrong. But it seems to me that AI workloads have been slower than most to make their way to the cloud. There are a lot of concerns about data transfer capacity and even cost. Do you see that? First of all, do you agree with that? And secondly, is that going to change in the short-term? >> Yeah, so I think there's different classes of problems.
So we'll take, there's some companies where their data's generated in the cloud and we see a ton of, I'll say, adoption of AI by cloud service providers, right? Recommendation engines, translation engines, conversational AI services, that all the clouds are building. That's all, you know, our processors. There's also problems that enterprises have where now I'm trying to take some of these automation capabilities but I'm trying to create an intelligent factory where I want to, you know, merge kind of AI with the physical world. And that really has to run at the edge 'cause there's too much data being generated by cameras to bring that all the way back into the cloud. So, you know, I think we're seeing mass adoption in the cloud today. I think at the edge a lot of businesses are trying to understand how do I deploy that reliably and securely and scale it. So I do think, you know, there's different problems that are going to run in different places, and ultimately we want to help anybody apply AI where the business is generating the data. >> So obviously very memory intensive applications as well. We've seen you, NVIDIA, architecturally kind of move away from the traditional, you know, x86 approach, take better advantage of memories where obviously you have relationships with Arm. So you've got a very diverse set of capabilities. And then all these other components that come into use, to just be a kind of x86 centric world. And now it's all these other supporting components to support these new applications and it's... How should we think about the future? >> Yeah, I mean, it's very exciting for sure, right? Like, you know, the future, the data is out there at the edge, the data can be in the data center. And so we are trying to weave a hybrid cloud footprint that spans that. I mean, you heard Paul come here, talk about it. But, you know, we've talked about it for some time now. 
And so the paradigm really is that, be it an application, and when I say application, it could be even an AI model as a service, you can think about that as an application. How does an application span that entire paradigm from the core to the edge and beyond? That is where the future is. And, of course, there's a lot of technical challenges, you know, for us to get there. And I think partnerships like this are going to help us and our customers to get there. So the world is very exciting. You know, I'm very bullish on how this will play out, right? >> Justin, we'll give you the last word, closing thoughts. >> Well, you know, I think a lot of this is, like I said, it's how do we reduce the complexity for enterprises to get started, which is why Launchpad is so fundamental. It gives, you know, access to the entire stack instantly, with like hands-on curated labs for both IT and data scientists. So they can, again, walk out with the blueprints they need to set this up and, you know, start on a successful AI journey. >> Just to position it: is Launchpad more of a Sandbox, more of a school, or more of an actual development environment? >> Yeah, think of it as, again, it's really for trial, like hands-on labs to help people learn all the foundational skills they need to, like, build an AI practice and get it into production. And again, it's like, you don't need to go champion to your executive team that you need access to expensive infrastructure and, you know, and bring in Red Hat to set up OpenShift. Everything's there for you so you can instantly get started, do kind of a pilot project, and then use that to explain to your executive team everything that you need to then go do to get this into production and drive business value for the company. >> All right, great stuff, guys. Thanks so much for coming to theCUBE. >> Yeah, thanks. >> Thank you for having us. >> All right, thank you for watching. Keep it right there. Dave Vellante and Paul Gillin.
We'll be back right after this short break at the Red Hat Summit 2022. (upbeat music)
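The GPU operator pattern discussed in this conversation ultimately presents the accelerator to applications as a schedulable resource: the pod asks for a GPU, the Kubernetes scheduler finds a node advertising one, and the operator has already installed the driver and runtime there. A minimal sketch of that contract in Python, emitting a pod spec; the `nvidia.com/gpu` extended-resource name is the one NVIDIA's device plugin registers with Kubernetes, while the pod and image names below are invented for illustration:

```python
import json

# Sketch: what "an application that says I want to run on a GPU" looks
# like to the Kubernetes scheduler. The extended resource name
# "nvidia.com/gpu" is registered by NVIDIA's device plugin; the pod
# and image names are made up for this example.
def gpu_pod(name, image, gpus=1):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                # The scheduler only places this pod on a node that
                # advertises enough free GPUs; the operator has already
                # installed the driver and runtime on that node.
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }]
        },
    }

manifest = gpu_pod("train-job", "example.com/trainer:latest")
print(json.dumps(manifest["spec"]["containers"][0]["resources"]))
```

The application never names a driver version or a device file; declaring the resource is the whole interface, which is what makes accelerators "first class citizens" to the platform.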

Published Date : May 11 2022


Dave Brown, AWS | AWS re:Invent 2021


 

(bright music) >> Welcome back everyone to theCUBE's coverage of AWS re:Invent 2021 in person. So a live event, physical in-person, also virtual hybrid. So a lot of great action online, check out the website. All the videos are there on theCUBE, as well as all of the action on site, and theCUBE's here. I'm John Furrier, your host with Dave Vellante, my cohost. Finally, we've got David Brown, VP of Elastic Compute Cloud. EC2, the bread and butter. Our favorite part of Amazon. David, great to have you back on theCUBE in person. >> John, it's great to be back. It's the first time I've been on theCUBE in person as well. A lot of virtual events with you guys, but it's amazing to be back at re:Invent. >> We're so excited for you. I know, Matt Garman and I've talked in the past. We've talked in the past. EC2 is just an amazing product. It's always been the core block of AWS. More and more action happening, and developers are now getting more action, and there's, well, we wrote a big piece about it. What's going on? The silicon's really paying off. You've got the general purpose Intel and AMD, and you've got the custom silicon, all working together. What's the new update? Give us a scoop. >> Well, John, it's actually 15 years of EC2 this year, and I've been lucky to be on that team for 14 years, and so incredible to see the growth. It's been an amazing journey. The thing that's really driven us is two things. One is supporting new workloads. And so what are the workloads that customers have out there trying to do on the cloud that we don't support, and launch new instance types. And that's the first thing. The second one is price performance. How do we give customers more performance at a continuously decreasing price year-over-year? And that's just driven innovation across EC2 over the years with things like Graviton and all of our Inferentia chips, our custom silicon, but also instance types with the latest Intel Ice Lake CPUs, latest Milan.
We just announced the AMD Milan instance. It's just constant innovation across the ever-increasing list of instances. So super exciting. >> So instances become the new thing. Provision an instance, spin up an instance. Instance becomes, and you can get instances, flavors, almost like flavors, right? >> David: Yeah. >> Take us through the difference between an instance and then EC2 itself. >> That's correct, yeah. So we actually have, by the end of the year, right now we have over 475 different instances available to you, whether it's GPU accelerators, high-performance computing instances, memory optimized, just an enormous number. We'll actually hit 500 by the end of the year, but that is it. I mean, customers are looking for different types of machines, and those are the instances. >> So the custom silicon, it's one of the most interesting developments. We've written about it. AWS's secret weapon is one of them. I wonder if you could take us back to the decision points and the journey. The Annapurna acquisition, you started working with them as a partner, then you said, all right, let's just buy the company. >> David: Yeah. >> And then now, you're seeing the acceleration, your time to tapeout is way, way compressed. Maybe what was the catalyst, and maybe we can get into where it's going. >> Yeah, absolutely. Super interesting story 'cause it actually starts all the way back in 2008. In 2008, EC2 had actually been around for just a little under two years. And if you remember back then, everybody was like, virtualization and hypervisors would never really get you the same performance as what they were calling bare metal back then. Everybody's looking at the cloud. And so we took a look at that. And I mean, network latencies, in some cases with hypervisors, were as high as 200 or 300 milliseconds. And there were a number of real challenges. And so we knew that we would have to change the way that virtualization works and get into hardware.
And so in 2010, 2011, we started to look at how we could offload our network processing, our IO processing, to additional hardware. And that's when we delivered our first Nitro card in 2012 and 2013. We actually offloaded all of the processing of the network to a Nitro card. And that Nitro card actually had an Annapurna Arm chip on it, our Nitro 1 chip. >> For the offload? >> The offload card, yeah. And so that's when my team started to code for Arm. We started to get our Linux working for Arm. We actually had to write our own operating system initially 'cause there weren't any operating systems available we could use. And so that's where we started this journey. And over the years, when we saw how well it worked for networking, we said, let's do it for storage as well. And then we said, hey, we could actually improve security significantly. And by 2017, we'd actually offloaded 100% of everything we did on that server to our offload cards, leaving 100% of the server available for customers. And we're still actually the only cloud provider that does that today. >> Just to interject, in the data center today, probably 30% of the general purpose cores are used for offloads. You're saying 0% in the cloud. >> On our Nitro instances, so every instance we've launched since 2017, since our C5, we use 0% of those central cores. And you can actually see that in our instance types. If you look at our largest instance type, you can see that we're giving you 96 cores, and we're giving you, in our largest instance, 24 terabytes of memory. We're not giving you 23.6 terabytes 'cause we need some. It's all given to you as the customer. >> So much more efficient. >> Much, much more efficient, much better, better price performance as well. But then ultimately, with those Nitro chips, we went through Nitro 1, Nitro 2, Nitro 3, Nitro 4. We said, hey, could we build a general purpose server chip? Could we actually bring Arm into the cloud?
And in 2018, we launched the A1 instance, which was our Graviton1 instance. And what we didn't tell people at the time is that it was actually the same chip we were using on our network card. So essentially, it was a network card that we were giving to you as a server. But what it did is it sparked the ecosystem. That's why we put it out there. And I remember before launch, some were saying, is this just going to be a university project? Are we going to see people from big universities using Arm in the cloud? Was it really going to take off? And the response was amazing. The ecosystem just grew. We had customers move to it and immediately begin to see improvements. And we knew that a year later, Graviton2 was going to come out. And Graviton2 was just an amazing chip. It continues to see incredible adoption, 40% price performance improvement over other instances. >> So this is worth calling out because I think that example of the network card, I mean, innovation can come from anywhere. This is what Jassy always would say: do the experiments. Think about the impact of what's going on here. You're focused on a mission. Let's get that processing at the lowest cost, pick up some workloads. So you're constantly tinkering with tuning the engine. A new discovery comes in. Nitro is born. The chip comes in. But I think the fundamental thing, and I want to get your reaction to this 'cause we've put this out there on our post on Sunday. And I said, in every inflection point, I'm old enough, my birthday was yesterday. I'm old enough to know that. >> David: I saw that. >> I'm old enough to know that in the eighties, the client server shift. Every inflection point where development changed, the methodology, the mindset or platforms changed, all the apps went to the better platform. Who wants to run their application on a slower platform? And so, those inflection points matter. So now that's happening, I believe.
So you got better performance and I'm imagining that the app developers are coding for it. Take us through how you see that because okay, you're offering up great performance for workloads. Now it's cloud workloads. That's almost all apps. Can you comment on that? >> Well, it has been really interesting to see. I mean, as I said, we were unsure who was going to use it when we initially launched and the adoption has been amazing. Initially, obviously it's always, a lot of the startups, a lot of the more agile companies that can move a lot faster, typically a little bit smaller. They started experimenting, but the data got out there. That 40% price performance was a reality. And not only for specific workloads, it was broadly successful across a number of workloads. And so we actually just had SAP who obviously is an enormous enterprise, supporting enterprises all over the world, announced that they are going to be moving the S/4 HANA Cloud to run on Graviton2. It's just phenomenal. And we've seen enterprises of that scale and game developers, every single vertical looking to move to Graviton2 and get that 40% price performance. >> Now we have to, as analysts, we have to say, okay, how did you get to that 40%? And you have to make some assumptions obviously. And it feels like you still have some dry powder when you looked at Graviton2. I think you were running, I don't know, it's speculated anyway. I don't know if you guys, it's your data, two and a half, 2.5 gigahertz. >> David: Yeah. >> I don't know if we can share what's going on with Graviton3, but my point is you had some dry powder and now with Graviton3, quite a range of performance, 'cause it really depends on the workload. >> David: That's right. >> Maybe you could give some insight as to that. What can you share about how you tuned Graviton3? 
When we look at benchmarking, we don't want to be trying to find that benchmark that's highly tuned and then put out something that is, hey, this is the absolute best we can get it to, and that's 40%. So that 40% is actually just on average. So we just went and ran real world workloads. And we saw some that were 55%. We saw some that were 25. It depends on what it was, but on average, it was around the 35, 45%, and we said 40%. And the great thing about that is customers come back and say, hey, we saw 40% in this workload. It wasn't that I had to tune it. And so with Graviton3, launching this week, available in our C7g instance, we said 25%. And that is just a very standard benchmark in what we're seeing. And as we start to see more customer workloads, I think it's going to be incredible to see what that range looks like. Graviton2, for single-threaded applications, didn't give you that much of a performance gain. That's what we meant about cloud applications: they're generally multi-threaded. In Graviton3, that's no longer the case. So we've had some customers report up to 80% performance improvements from Graviton2 to Graviton3 when the application was more of a single-threaded application. So we started to see. (group chattering) >> You have to keep going, the time to market is compressing. So you have that, go ahead, sorry. >> No, no, I always want to add one thing on the difference between single and multi-threaded applications. A lot of legacy, you're single threaded. So this is kind of an interesting thing. So the mainframe migration stuff, you start to see that. Is that where that comes in? >> Well, a lot of the legacy apps, but also even some of the new apps. Single threading, like video transcoding, for example, is all done on a single core. It's very difficult, I mean, almost impossible, to do that in a multi-threaded way. A lot of the crypto algorithms as well; encryption and cryptography are often single core.
>> So with Graviton3, we've seen a significant performance boost for video encoding, cryptographic algorithms, that sort of thing, which really impacts even the most modern applications. >> So that's an interesting point, because now single-threaded is where the vertical use cases come in. It's not like more general purpose OS kind of things. >> Yeah, and Graviton has already been very broad. I think we're just knocking down the last few verticals where maybe it didn't support it, and now it absolutely does. >> And if an ISV then ports, like SAP ports to Graviton, then the customer doesn't see any, I mean, they're going to see the performance difference, but they don't have to think about it. >> David: Yeah. >> They just say, I choose that instance, and I'm going to get better price performance. >> Exactly, so we've seen that from our ISVs. We've also been doing that with our AWS services. So services like EMR, RDS and ElastiCache will be moving to and making Graviton2 available for customers, which means the customer doesn't have to do the migration at all. It's all done for them. They just pick the instance and get the price performance benefits. >> I think, oh, no, that was serverless. Sorry. >> Well, Lambda actually just did launch on Graviton2, and I think they were talking about a 35% price performance improvement. >> Who was that? >> Lambda, a couple of months ago. >> So what does an ISV have to do to port to Graviton? >> It's relatively straightforward, and this is actually one of the things that has slowed customers down: the, wow, that must be a big migration. And that ecosystem that I spoke about is the important part. And today, with all the Linux operating systems being available for Arm running on Graviton2, with all of the container runtimes being available, and now increasingly open source applications and ISVs being available, it's actually really, really easy. And we just ran the Graviton2 four-day challenge. 
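The porting story above (Linux, container runtimes and open source all available on Arm) can be sketched as a first-step readiness check. This is an illustrative sketch, not an AWS tool; it just reports the interpreter's architecture so an engineer knows whether compiled dependencies will need arm64 builds:

```python
# Minimal pre-migration check in the spirit of "just try it":
# pure-Python (and pure-JVM, pure-Go-source, etc.) code moves as-is;
# anything with compiled extensions needs an arm64 build or a rebuild.
import platform
import sysconfig

def describe_host():
    machine = platform.machine()    # e.g. "aarch64" on Graviton, "x86_64" on Intel/AMD
    tag = sysconfig.get_platform()  # e.g. "linux-aarch64"
    return machine, tag

machine, tag = describe_host()
print(f"machine={machine} platform_tag={tag}")
if machine in ("aarch64", "arm64"):
    print("already on Arm: nothing to port for pure-Python code")
else:
    print("on x86: pure-Python code moves as-is; compiled wheels need arm64 builds")
```

Running the same script on an x86 instance and a Graviton instance makes the difference (or lack of one) immediately visible.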
And we did that because we actually had an enterprise migrate one of the largest production applications in just four days. Now, I probably wouldn't recommend that to most enterprises, we see that as a little too fast, but they could actually do that. >> But just from a numbers standpoint, that's insanely amazing. I mean, when you think about four days. >> Yeah. >> And when we talked virtually last year, or this year, I can't remember now, you said, we'll just try it. >> David: That's right. >> And see what happens, so I presume a lot of people have tried it. >> Well, that's my advice. It's the unknown, it's the what will it take? So take a single engineer, tell them, and give them a time. Say you have one week, get this running on Graviton2, and I think the results are pretty amazing, people are very surprised. >> We were one of the first, if not the first, to say that Arm is going to be dominant in the enterprise. We know it's dominant in the Edge. And when you look at the performance curves and the time to tape out, it's just astounding. And I don't know if people appreciate that relative to the traditional Moore's Law curve. And then when you combine the power of the CPU, the GPU, the NPU, kind of what Apple does in the iPhone, it blows away the historical performance curves. And you're on that curve. >> That's right. >> I wonder if you could sort of explain that. >> So with Graviton, we're optimizing across every single part of AWS. One of the nice things is we actually own that end to end. So it starts with the early design of Graviton2 and Graviton3, and we're obviously working on other chips right now. We're actually using the cloud to do all of the electronic design automation, so we're able to test with AWS how that Graviton3 chip is going to work long before we've even started taping it out. And those workloads are running on high-frequency CPUs on Graviton. We're actually using Graviton to build Graviton now, in the cloud. 
The other thing we're doing is making sure that the Annapurna team that's building those CPUs is deeply engaged with my team, and we're going to ultimately go and build those instances, so that when that chip arrives from tape-out, I'm not waiting nine months or two years, like would normally be the case; I actually have an instance up and running within a week or two on somebody's desk, starting to do the integration. And that's something we've optimized significantly to get done. And so it allows us to get that iteration time. It also allows us to be very, very accurate with our tape-outs. We're not having to go back; with Graviton, they're all A1 chips. We're not having to go back and do multiple runs of these things, because we can do so much validation and performance testing in the cloud ahead of time. >> This is the epitome of the Arm model. >> It really is. >> It's a standard. When you send it to the fab, they know what's going to work. You hit volume and it's just, no fab. >> Well, this is a great thread, we'll stay on this, 'cause Adam told us when we met with them for re:Invent that they're seeing a lot more visibility into use cases at that scale. So the scale gives you an advantage on what instances might work. >> And makes the economics work. >> Makes the economics work, hence the timing, the shrinking time to market, and not just there, but also for the apps. Talk about the scale advantage you guys have. >> Absolutely. The scale advantage of AWS plays out in a number of ways for our customers. The first thing is being able to deliver highly optimized hardware. So we don't just look at the Graviton3 CPU; you were speaking about the core count and the frequency, and Peter spoke about a lot of that in his keynote yesterday. But we look at how the Graviton3 CPU works with the rest of the instance. What is the right balance between the CPU and memory? The CPU and the Nitro card? What's the performance of the drive? 
We just launched the Nitro SSD, where we're now actually building our own custom SSDs for Nitro: getting better performance, being able to do updates, better security, making it more cloudy, rather than just taking what we've been given with the SSDs in the parts market. The other place that scale is really helping is in capacity, being able to make sure that we can absorb things like the COVID spike, or the stuff you see in the financial industry with just enormous demand for compute. We can do that because of our scale; we are able to scale. And the final area is actually in quality, because I have such an enormous fleet, I'm actually able to drive down AFR, the annual failure rate, to well below what the mathematical, theoretical entitlement is. So if you look at what's put on that actual sticker on the box that says what AFR you should expect, at scale, and with focus, we're actually able to get that down to significantly below what the mathematical entitlement would actually be. >> Yeah, it's incredible. And this is the advantage, and that's why I believe anyone who's writing applications that include a database, data transfer, any kind of execution of code, will use the stack. Why wouldn't they? Really, why? We've seen this, like you said before, whether it was the PC, then the fastest Pentium or whatever. >> Why would you want your app to run slower? >> The Unix box, right? ISVs want it to run as fast and as cheaply as possible. Now power plays into it as well. >> Yeah, well, I agree with what you're saying. We do have a number of customers that are still looking to run on x86, obviously customers that want Windows; Windows isn't available for Arm, and so that's a challenge. They'll continue to do that. And you know, the way we do look at it is, Moore's Law kind of died out on us in 2002, 2003. 
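David's failure-rate point can be put in rough numbers. This is a back-of-envelope sketch with hypothetical figures (fleet size and both AFR values are assumed for illustration, not AWS's actual data); the idea is that at fleet scale, even fractions of a percent of AFR translate into thousands of hardware events a year, so pushing AFR below the vendor's quoted entitlement pays off visibly:

```python
# All inputs are illustrative assumptions.
fleet_size = 1_000_000   # hypothetical host count
quoted_afr = 0.008       # 0.8%/yr, a stand-in for a vendor "sticker" AFR
achieved_afr = 0.003     # 0.3%/yr, illustrating "below the mathematical entitlement"

expected_failures_quoted = fleet_size * quoted_afr
expected_failures_achieved = fleet_size * achieved_afr
avoided = expected_failures_quoted - expected_failures_achieved

print(f"failures/yr at quoted AFR:   {expected_failures_quoted:,.0f}")
print(f"failures/yr at achieved AFR: {expected_failures_achieved:,.0f}")
print(f"repairs avoided per year:    {avoided:,.0f}")
```

With these assumed inputs, the difference is thousands of avoided repair events per year, which is why AFR only becomes a tractable engineering target at large fleet sizes.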
And what I'm hoping is, not necessarily bringing Moore's Law back, but that we say, let's not accept the 10%, 15% improvement year over year. There's absolutely more we can all be doing. And so I'm excited to see where the x86 world's going, and they're doing a lot of great stuff. Intel's Ice Lake is looking amazing. Milan is really great to have in AWS as well. >> Well, I think that's a fair point, 'cause we certainly look at what Pat's doing at Intel, and he's remaking the company. I've said he's going to follow the Arm playbook in my mind a little bit, which is the right thing to do. So competition is a good thing. >> David: Absolutely. >> We're excited for you, and it's great to see Graviton, and you guys have this kind of inflection point. We've been tracking it for a while, but now the world's starting to see it. So congratulations to your team. >> David: Thank you. >> Just a couple of things. You guys have some news on instances. Talk about the deprecation issue and how you guys are keeping instances alive, real quick. >> Yeah, we're super customer obsessed at Amazon, and so that really drives us. And one of the worst things for us to do is to have to tell a customer that we're no longer supporting a service. We recently actually just deprecated the EC2-Classic network, I'm not sure if you saw that, and that's after 10 years of continuing to support it. And the only reason we did it is we have a tiny percentage of customers still using that from back in 2012. But one of the challenges is, obviously, instance hardware eventually will time out and fail and have hardware issues as it gets older and older. And so we didn't want to be in a place, in EC2, where we would have to constantly go to customers and say, that M1 small, that C3, whatever you were running, it's no longer supported, please move. That's just a tax that customers shouldn't have to pay. And if they're still getting value out of an older instance, let them keep using it. 
So we actually just announced at re:Invent, in my keynote on Tuesday, longevity support for EC2 instances, which means we will never come back to you again and ask you to please get off an instance, because we can actually emulate all those instances on our Nitro system. And so all of these instances are starting to migrate to Nitro. You're getting all the benefits of Nitro for some of our older Xen-based instances, but you also don't have to worry about that work. Getting off an older instance is just not something you need to do. >> That's great. That's a great service. Stay on as long as you want. When you're ready to move, move. Okay, final question for you, I know we've got time, I want to get this in: the global network. You guys are known for it. AWS Cloud WAN, give us updates on what's going on with that. >> So Werner just announced that in his keynote, and over the last two to three years or so, we've seen a lot of customers starting to use the AWS backbone, which is extensive. I mean, you've seen the slides in Werner's keynote; it really does span the world. I think it's probably one of the largest networks out there. Customers are starting to use that for their branch office communication. So instead of going and provisioning their own international MPLS networks and that sort of thing, they say, let me onboard to AWS with VPN or Direct Connect, and I can actually run on the AWS backbone around the world. Now, doing that actually has some complexity. You've got to think about transit gateways. You've got to think about inter-region peering. And AWS Cloud WAN takes all of that complexity away. You essentially create a cloud WAN, connect to it via VPN or Direct Connect, and you can even go and set up network segments, so essentially VLANs for different parts of the organization. So super excited to get that out there. >> So the ease of use is the key there. >> Massively easy to use. And we have 26 SD-WAN partners. 
We're even partnering with folks like Verizon, and Swisscom in Switzerland, the telco, to actually allow them to use it for their customers as well. >> We'll probably use your service someday when we have a global rollout. >> Let's do that, CUBE Global. >> And then the other was the M1 EC2 instance, which got a lot of applause. >> David: Absolutely. >> M1, I think it was based on A15. >> Yeah, that's for Mac. We've got to be careful, 'cause M1 is our first instance as well. >> Yeah right, there's a little confusion there. >> So it's a Mac. The EC2 Mac is with M1 silicon from Apple, which we're super excited to put out there. >> Awesome. >> David Brown, great to see you in person. Congratulations to you and the team and all the work you guys have done over the years. And now people are starting to realize, with the cloud platform, the compute just gets better and better. It's a key part of the system. >> Thanks John, it's great to be here. >> Thanks for sharing. >> The SiliconANGLE is here. We're talking about custom silicon here on AWS. I'm John Furrier with Dave Vellante. You're watching theCUBE, the global leader in tech coverage. We'll be right back with more coverage from re:Invent after this break. (bright music)

Published Date : Dec 2 2021



Breaking Analysis: How Nvidia Wins the Enterprise With AI


 

from the cube studios in palo alto in boston bringing you data-driven insights from the cube and etr this is breaking analysis with dave vellante nvidia wants to completely transform enterprise computing by making data centers run 10x faster at one tenth the cost and video's ceo jensen wang is crafting a strategy to re-architect today's on-prem data centers public clouds and edge computing installations with a vision that leverages the company's strong position in ai architectures the keys to this end-to-end strategy include a clarity of vision massive chip design skills a new arm-based architecture approach that integrates memory processors i o and networking and a compelling software consumption model even if nvidia is unsuccessful at acquiring arm we believe it will still be able to execute on this strategy by actively participating in the arm ecosystem however if its attempts to acquire arm are successful we believe it will transform nvidia from the world's most valuable chip company into the world's most valuable supplier of integrated computing architectures hello everyone and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we'll explain why we believe nvidia is in the right position to power the world's computing centers and how it plans to disrupt the grip that x86 architectures have had on the data center for decades the data center market is in transition like the universe the cloud is expanding at an accelerated pace no longer is the cloud an opaque set of remote services i always say somewhere out there sitting in a mega data center no rather the cloud is extending to on-premises data centers data centers are moving into the cloud and they're connecting through adjacent locations that create hybrid interactions clouds are being meshed together across regions and eventually will stretch to the far edge this new definition or view of cloud will be hyper distributed and run by software kubernetes is changing the world 
of software development and enabling workloads to run anywhere open apis external applications expanding the digital supply chains and this expanding cloud they all increase the threat surface and vulnerability to the most sensitive information that resides within the data center and around the world zero trust has become a mandate we're also seeing ai being injected into every application and it's the technology area that we see with the most momentum coming out of the pandemic this new world will not be powered by general purpose x86 processors rather it will be supported by an ecosystem of arm-based providers in our opinion that are affecting an unprecedented increase in processor performance as we have been reporting and nvidia in our view is sitting in the poll position and is currently the favorite to dominate the next era of computing architecture for global data centers public clouds as well as the near and far edge let's talk about jensen wang's clarity of vision for this new world here's a chart that underscores some of the fundamental assumptions that he's leveraging to expand his market the first is that there's a lot of waste in the data center he claims that only half of the cpu cores deployed in the data center today actually support applications the other half are processing the infrastructure all around the applications that run the software defined data center and they're terribly under utilized nvidia's blue field three dpu the data processing unit was described in a blog post on siliconangle by analyst zias caravala as a complete mini server on a card i like that with software defined networking storage and security acceleration built in this product has the bandwidth and according to nvidia can replace 300 general purpose x86 cores jensen believes that every network chip will be intelligent programmable and capable of this type of acceleration to offload conventional cpus he believes that every server node will have this capability and enable 
every packed of every packet and every application to be monitored in real time all the time for intrusion and as servers move to the edge bluefield will be included as a core component in his view and this last statement by jensen is critical in our opinion he says ai is the most powerful force of our time whether you agree with that or not it's relevant because ai is everywhere an invidious position in ai and the architectures the company is building are the fundamental linchpin of its data center enterprise strategy so let's take a look at some etr spending data to see where ai fits on the priority list here's a set of data in a view that we often like to share the horizontal axis is market share or pervasiveness in the etr data but we want to call your attention to the vertical axis that's really really what really we want to pay attention today that's net score or spending momentum exiting the pandemic we've seen ai capture the number one position in the last two surveys and we think this dynamic will continue for quite some time as ai becomes the staple of digital transformations and automations an ai will be infused in every single dot you see on this chart nvidia's architectures it just so happens are tailor made for ai workloads and that is how it will enter these markets let's quantify what that means and lay out our view of how nvidia with the help of arm will go after the enterprise market here's some data from wikibon research that depicts the percent of worldwide spending on server infrastructure by workload type here are the key points first the market last year was around 78 billion dollars worldwide and is expected to approach 115 billion by the end of the decade this might even be a conservative figure and we've split the market into three broad workload categories the blue is ai and other related applications what david floyer calls matrix workloads the orange is general purpose think things like erp supply chain hcm collaboration basically 
oracle saps and microsoft work that's being supported today and of course many other software providers and the gray that's the area that jensen was referring to is about being wasted the offload work for networking and storage and all the software defined management in the data centers around the world okay you can see the squeeze that we think compute infrastructure is gonna gonna occur around that orange area that general-purpose workloads that we think is going to really get squeezed in the next several years on a percentage basis and on an absolute basis it's really not growing nearly as fast as the other two and video with arm in our view is well positioned to attack that blue area and the gray area those those workload offsets and the new emerging ai applications but even the orange as we've reported is under pressure as for example companies like aws and oracle they use arm-based designs to service general purpose workloads why are they doing that cost is the reason because x86 generally and intel specifically are not delivering the price performance and efficiency required to keep up with the demands to reduce data center costs and if intel doesn't respond which we believe it will but if it doesn't act arm we think will get 50 percent of the general purpose workloads by the end of the decade and with nvidia it will dominate the blue the ai and the gray the offload work when we say dominate we're talking like capture 90 percent of the available market if intel doesn't respond now intel they're not just going to sit back and let that happen pat gelsinger is well aware of this in moving intel to a new strategy but nvidia and arm are way ahead in the game in our view and as we've reported this is going to be a real challenge for intel to catch up now let's take a quick look at what nvidia is doing with relevant parts of its pretty massive portfolio here's a slide that shows nvidia's three chip strategy the company is shifting to arm-based architectures which 
we'll describe in more detail in a moment the slide shows at the top line nvidia's ampere architecture not to be confused with the company ampere computing nvidia is taking a gpu centric approach no surprise obvious reasons there that's their sort of stronghold but we think over time it may rethink this a little bit and lean more into npus the neural processing unit we look at what apple's doing what tesla are doing we see opportunities for companies like nvidia to really sort of go after that but we'll save that for another day nvidia has announced its grace cpu a nod to the famous computer scientist grace hopper grace is a new architecture that doesn't rely on x86 and much more efficiently uses memory resources we'll again describe this in more detail later and the bottom line there that roadmap line shows the bluefield dpu which we described is essentially a complete server on a card in this approach using arm will reduce the elapsed time to go from chip design to production by 50 we're talking about shaving years down to 18 months or less we don't have time to do a deep dive into nvidia's portfolio it's large but we want to share some things that we think are important and this next graphic is one of them this shows some of the details of nvidia's jetson architecture which is designed to accelerate those ai plus workloads that we showed earlier and the reason is that this is important in our view is because the same software supports from small to very large including edge systems and we think this type of architecture is very well suited for ai inference at the edge as well as core data center applications that use ai and as we've said before a lot of the action in ai is going to happen at the edge so this is a good example of leveraging an architecture across a wide spectrum of performance and cost now we want to take a moment to explain why the moved arm-based architectures is so critical to nvidia one of the biggest cost challenges for nvidia today is 
keeping the gpu utilized typical utilization of gpu is well below 20 percent here's why the left hand side of this chart shows essentially racks if you will of traditional compute and the bottlenecks that nvidia faces the processor and dram they're tied together in separate blocks imagine there are thousands thousands of cores in a rack and every time you need data that lives in another processor you have to send a request and go retrieve it it's very overhead intensive now technologies like rocky are designed to help but it doesn't solve the fundamental architectural bottleneck every gpu shown here also has its own dram and it has to communicate with the processors to get the data i.e they can't communicate with each other efficiently now the right hand side side shows where nvidia is headed start in the middle with system on chip socs cpus are packaged in with npus ipu's that's the image processing unit you know x dot dot dot x pu's the the alternative processors they're all connected with sram which is think of that as a high speed layer like an layer one cache the os for the system on a chip lives inside of this and that's where nvidia has this killer software model what they're doing is they're licensing the consumption of the operating system that's running this system on chip in this entire system and they're affecting a new and really compelling subscription model you know maybe they should just give away the chips and charge for the software like a razer blade model talk about disruptive now the outer layer is the the dpu and the shared dram and other resources like the ampere computing the company this time cpus ssds and other resources these are the processors that will manage the socs together this design is based on nvidia's three chip approach using bluefield dpu leveraging melanox that's the networking component the network enables shared dram across the cpus which will eventually be all arm based grace lives inside the system on a chip and also on 
the outside layers and of course the gpu lives inside the soc in a scaled-down version like for instance a rendering gpu and we show some gpus on the outer layer as well for ai workloads at least in the near term you know eventually we think they may reside solely in the system on chip but only time will tell okay so you as you can see nvidia is making some serious moves and by teaming up with arm and leaning into the arm ecosystem it plans to take the company to its next level so let's talk about how we think competition for the next era of compute stacks up here's that same xy graph that we love to show market share or pervasiveness on the horizontal tracking against next net score on the vertical net score again is spending velocity and we've cut the etr data to capture players that are that are big in compute and storage and networking we've plugged in a couple of the cloud players these are the guys that we feel are vying for data center leadership around compute aws is a very strong position we believe that more than half of its revenues comes from compute you know ec2 we're talking about more than 25 billion on a run rate basis that's huge the company designs its own silicon graviton 2 etc and is working with isvs to run general purpose workloads on arm-based graviton chips microsoft and google they're going to follow suit they're big consumers of compute they sell a lot but microsoft in particular you know they're likely to continue to work with oem partners to attack that on-prem data center opportunity but it's really intel that's the provider of compute to the likes of hpe and dell and cisco and the odms which are the odms are not shown here now hpe let's talk about them for a second they have architectures and i hate to bring it up but remember the machine i know it's the butt of many jokes especially from competitors it had been you know frankly hpe and hp they deserve some of that heat for all the fanfare and then that they they put out there and then 
quietly, you know, pulled The Machine or put it out to pasture. But HPE has a strong position in high performance computing, and the work that it did on new computing architectures with The Machine and shared memories might still be kicking around somewhere inside of HPE and could come in handy some day in the future. So HPE has some chops there. Plus, HP historically has been known to design its own custom silicon, so I would not count them out as an innovator in this race. Cisco is interesting because it not only has custom silicon designs, but its entry into the compute business with UCS a decade ago was notable, and they created a new way to think about integrating resources, particularly compute and networking, with partnerships to add in the storage piece. Initially it was with EMC, prior to the Dell acquisition, but it continues with NetApp, Pure and others. Cisco invests, they spend money investing in architectures, and we expect the next generation of UCS, UCS 2.0, will mark another notable milestone in the company's data center business. Dell just had an amazing quarterly earnings report. The company grew top line revenue by around 12 percent, and it wasn't because of an easy compare to last year. Dell is simply executing, despite continued softness in the legacy EMC storage business. Laptop demand continued to soar, and Dell's server business is growing again. But we don't see Dell as an architectural innovator per se in compute. Rather, we think the company will be content to partner with suppliers, whether it's Intel, NVIDIA, Arm-based partners or all of the above. Dell, we think, will rely on its massive portfolio, its excellent supply chain and its execution ethos to compete. Now, IBM is notable for historical reasons with its mainframe. IBM created the first great compute monopoly before it unwittingly handed it to Intel, along with Microsoft. We don't see IBM necessarily aspiring to retake the compute platform mantle that it once held with mainframes. Rather, Red Hat and the march to hybrid cloud is, in our view, IBM's approach. Now let's get down to the elephants in the room: Intel, NVIDIA and China Inc. China is of course relevant because of companies like Alibaba and Huawei, and the Chinese government's desire to be self-sufficient in semiconductor technology and in technology generally. But our premise here is that the trends are favoring NVIDIA over Intel in this picture, because NVIDIA is making moves to further position itself for new workloads in the data center and to compete for Intel's stronghold. Intel is going to attempt to remake itself, but it should have been doing seven years ago what Pat Gelsinger is doing today. Intel is simply far behind, and it's going to take at least a couple of years for them to really start to make inroads in this new model. Let's stay on the NVIDIA versus Intel comparison for a moment and take a snapshot of the two companies. Here's a quick chart that we put together with some basic KPIs. Some of these figures are approximations or they're rounded, so don't stress over it too much, but you can see Intel is an $80 billion company, 4X the size of NVIDIA, yet NVIDIA's market cap far exceeds that of Intel. Why is that? Growth, of course, and in our view it's justified due to that growth and NVIDIA's strategic positioning. Intel used to be the gross margin king, but NVIDIA has much higher gross margins. Interesting. Now, when it comes down to free cash flow, Intel is still dominant. As it pertains to the balance sheet, Intel is way more capital intensive than NVIDIA, and as it starts to build out its foundries, that's going to eat into Intel's cash position. Now, what we did is put together a little pro forma in the third column of NVIDIA plus Arm, circa, let's say, the end of 2022.
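That pro forma arithmetic can be sketched in a few lines of Python. The revenue figures below are rough assumptions pulled from this discussion (Intel at roughly $80 billion, NVIDIA at about a quarter of that, Arm sub-$2 billion), and the growth rate is an illustrative placeholder, not reported financials or a forecast:

```python
# Rough pro forma for a combined NVIDIA + Arm, circa end of 2022.
# All figures are illustrative assumptions, not reported financials.

intel_revenue = 80.0                  # $B, approximate annual revenue
nvidia_revenue = intel_revenue / 4    # "4X the size of NVIDIA"
arm_revenue = 2.0                     # $B, "sub $2 billion"

# Assume continued high growth for NVIDIA over two years (hypothetical rate).
nvidia_growth = 0.40
years = 2
nvidia_run_rate = nvidia_revenue * (1 + nvidia_growth) ** years

combined = nvidia_run_rate + arm_revenue
print(f"Combined run rate: ${combined:.1f}B, "
      f"{combined / intel_revenue:.1%} of Intel")
```

With that hypothetical 40% compound growth rate, NVIDIA plus Arm lands at a run rate a bit over half of Intel's revenue, which is the ballpark of the scenario described above.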
We think they could get to a run rate that is about half the size of Intel, and that could propel the company's market cap to well over half a trillion dollars if they get any credit for Arm. They're paying $40 billion for Arm, a company that's, you know, sub $2 billion. The risk is that because the Arm deal is based on cash plus tons of stock, it could put pressure on the market capitalization for some time. Arm has 90 percent gross margins because it pretty much has a pure license model, so it helps the gross margin line a little bit in this pro forma. And the balance sheet is a swag. They've said they're not going to take on debt to do the transaction, but we haven't had time to really dig into that and figure out how they're going to structure it, so we took a swag at what we would do in this low interest rate environment. But take that with a grain of salt; we'll do more research there. The point is, given the momentum and growth of NVIDIA, its strategic position in AI, its deep engineering aimed at all the right places, and its potential to unlock huge value with Arm, on paper it looks like the horse to beat, if it can execute. All right, let's wrap up. Here's a summary. Look, the architectures on which NVIDIA is building its dominant AI business are evolving, and NVIDIA is well positioned to drive a truck right into the enterprise, in our view. The power has shifted from Intel to the Arm ecosystem, and NVIDIA is leaning in big time, whereas Intel has to preserve its current business while recreating itself at the same time. This is going to take a couple of years, but Intel potentially has the powerful backing of the US government: too strategic to fail. The wild card is, will NVIDIA be successful in acquiring Arm? Certain factions in the UK and EU are fighting the deal because they don't want the US dictating to whom Arm can sell its technology, for example the restrictions placed on Huawei for many suppliers of Arm-based chips based on US sanctions. NVIDIA's competitors like Broadcom, Qualcomm, et al. are nervous that if NVIDIA gets Arm, they, NVIDIA's competitors, will be at a competitive disadvantage. And for sure China doesn't want NVIDIA controlling Arm, for obvious reasons, and it will do what it can to block the deal and/or put handcuffs on how business can be done in China. We can see a scenario where the US government pressures the UK and EU regulators to let this deal go through. Look, AI and semiconductors, you can't get much more strategic than that for the US military and US long-term competitiveness. In exchange for maybe facilitating the deal, the government pressures NVIDIA to guarantee some feed to the Intel foundry business, while at the same time imposing conditions that secure access to Arm-based technology for NVIDIA's competitors, and maybe, as we've talked about before, having them funnel business to Intel's foundry. Actually, we've talked about the US government enticing Apple to do so, but it could also entice NVIDIA's competitors to do so, propping up Intel's foundry business, which is clearly starting from ground zero and is going to need help beyond Intel's own internal semiconductor manufacturing. Look, we don't have any inside information as to what's happening behind the scenes with the US government and so forth, but on its earnings call NVIDIA said they're working with regulators and that they're on track to complete the deal in early 2022. We'll see. Okay, that's it for today. Thank you to David Floyer, who co-created this episode with me. And remember, I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts; all you've got to do is search Breaking Analysis podcast. And you can always connect with me on Twitter @dvellante or email me at dave.vellante@siliconangle.com. I always appreciate the comments on LinkedIn, and in Clubhouse, please follow me so you can be notified when we start a room and riff on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time. (music)

Published Date : May 30 2021

Breaking Analysis: Moore's Law is Accelerating and AI is Ready to Explode


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is breaking analysis with Dave Vellante. >> Moore's Law is dead, right? Think again. Massive improvements in processing power combined with data and AI will completely change the way we think about designing hardware, writing software and applying technology to businesses. Every industry will be disrupted. You hear that all the time. Well, it's absolutely true, and we're going to explain why and what it all means. Hello everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this breaking analysis, we're going to unveil some new data that suggests we're entering a new era of innovation that will be powered by cheap processing capabilities that AI will exploit. We'll also tell you where the new bottlenecks will emerge and what this means for system architectures and industry transformations in the coming decade. Moore's Law is dead, you say? We must have heard that hundreds, if not thousands, of times in the past decade. EE Times has written about it, MIT Technology Review, CNET, and even industry associations that have lived by Moore's Law. But our friend Patrick Moorhead got it right when he said, "Moore's Law, by the strictest definition of doubling chip densities every two years, isn't happening anymore." And you know what, that's true. He's absolutely correct. And he couched that statement by saying "by the strictest definition," and he did that for a reason, because he's smart enough to know that the chip industry is full of masters at doing workarounds. Here's proof that the death of Moore's Law, by its strictest definition, is largely irrelevant. My colleague David Floyer and I were hard at work this week, and here's the result. The fact is that the historical outcome of Moore's Law is actually accelerating, and quite dramatically.
This graphic digs into the progression of Apple's SoC, system on chip, developments from the A9 culminating with the A14, a 5-nanometer Bionic system on a chip. The vertical axis shows operations per second and the horizontal axis shows time for three processor types: the CPU, which we measure here in terahertz, that's the blue line which you can hardly even see; the GPU, which is the orange line, measured in trillions of floating point operations per second; and then the NPU, the neural processing unit, measured in trillions of operations per second, which is that exploding gray area. Now, historically, we always rushed out to buy the latest and greatest PC because the newer models had faster cycles or more gigahertz. Moore's Law would double that performance every 24 months. Now, that equates to about 40% annually. CPU performance has now moderated; that growth is now down to roughly 30% annual improvements. So technically speaking, Moore's Law as we know it is dead. But combined, if you look at the improvements in Apple's SoC since 2015, they've been on a pace that's higher than 118% annually. And it's actually even higher than that, because for these three processor types we're not even counting the impact of the DSPs and accelerator components of Apple's system on a chip, which would push this figure even higher. Apple's A14, which is shown on the right hand side here, is quite amazing. It's got a 64-bit architecture, it's got many, many cores, and it's got a number of alternative processor types. But the important thing is what you can do with all this processing power. In an iPhone, the types of AI that we show here continue to evolve: facial recognition, speech, natural language processing, rendering videos, helping the hearing impaired and eventually bringing augmented reality to the palm of your hand. It's quite incredible. So what does this mean for other parts of the IT stack?
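The compounding arithmetic behind those annualized figures is worth making explicit. A minimal sketch; the 49x figure in the last line is an illustrative total multiple chosen to match the ~118% annual rate quoted above, not a measured value:

```python
# Back-of-envelope compounding math behind the rates quoted above.

def annual_rate(doubling_years: float) -> float:
    """Annual growth rate implied by doubling every `doubling_years` years."""
    return 2 ** (1 / doubling_years) - 1

def cagr(total_multiple: float, years: float) -> float:
    """Compound annual growth rate implied by a total multiple over a span."""
    return total_multiple ** (1 / years) - 1

# Doubling every 24 months is ~41% per year -- the "about 40%" cited above.
print(f"{annual_rate(2):.0%}")      # prints 41%

# Conversely, a ~49x total improvement over five years works out to the
# ~118% annual pace cited for Apple's SoCs (illustrative: 2.18^5 ~= 49).
print(f"{cagr(49, 5):.0%}")         # prints 118%
```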
Well, we recently reported Satya Nadella's epic quote that "We've now reached peak centralization." So this graphic paints a picture that was quite telling. We just shared that processing power is exploding; the costs consequently are dropping like a rock. Apple's A14 costs the company approximately 50 bucks per chip. Arm, at its v9 announcement, said that it will have chips that can go into refrigerators; these chips are going to optimize energy usage and save 10% annually on your power consumption. They said this chip will cost a buck, a dollar, to shave 10% off your refrigerator's electricity bill. It's just astounding. But look at where the expensive bottlenecks are: it's networks and it's storage. So what does this mean? Well, it means the processing is going to get pushed to the edge, i.e., wherever the data is born. Storage and networking are going to become increasingly distributed and decentralized. Now, with custom silicon and all that processing power placed throughout the system, AI is going to be embedded into software and into hardware, and it's going to optimize workloads for latency, performance, bandwidth, and security. And remember, most of that data, 99%, is going to stay at the edge. And we love to use Tesla as an example. The vast majority of data that a Tesla car creates is never going to go back to the cloud. Most of it doesn't even get persisted; I think Tesla saves like five minutes of data. But some data will connect occasionally back to the cloud to train AI models, and we're going to come back to that. But this picture says if you're a hardware company, you'd better start thinking about how to take advantage of that blue line that's exploding. Cisco is already designing its own chips. But Dell, HPE, which used to do a lot of its own custom silicon, Pure Storage, NetApp, I mean, the list goes on and on and on: either you're going to start designing custom silicon or you're going to get disrupted, in our view.
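The Tesla pattern described above, keeping only a short rolling window at the edge and persisting just the interesting slices, maps naturally onto a bounded buffer. Here's a minimal sketch; the five-minute window and the event-snapshot trigger are illustrative assumptions, not Tesla's actual design:

```python
from collections import deque

class EdgeBuffer:
    """Rolling sensor buffer: keeps only the last `window_s` seconds of
    samples, and snapshots the window when an interesting event occurs."""

    def __init__(self, window_s: float = 300.0):   # ~5 minutes, as cited above
        self.window_s = window_s
        self.samples = deque()      # (timestamp, reading) pairs
        self.to_upload = []         # snapshots queued for the cloud

    def add(self, reading, now):
        self.samples.append((now, reading))
        # Evict anything older than the window: most data is never persisted.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def flag_event(self):
        # e.g. an animal in the road: keep this window for model retraining.
        self.to_upload.append(list(self.samples))

buf = EdgeBuffer(window_s=300)
for t in range(0, 900, 10):         # 15 minutes of readings, one per 10 s
    buf.add({"speed": 30}, now=t)
assert len(buf.samples) == 31       # only the most recent 5 minutes survive
```

The key property is that retention is bounded by time, not by total data volume, so the buffer's footprint stays constant no matter how long the car drives; only `flag_event` snapshots ever leave the vehicle.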
AWS, Google and Microsoft are all doing it for a reason, as is IBM, and as Sarbjeet Johal said recently, this is not your grandfather's semiconductor business. And if you're a software engineer, you're going to be writing applications that take advantage of all the data being collected and that bring to bear this processing power we're talking about to create new capabilities like we've never seen before. So let's get into that a little bit and dig into AI. You can think of AI as the superset. Just as an aside, interestingly, in his book "Seeing Digital", author David Moschella says there's nothing artificial about this. He uses the term machine intelligence instead of artificial intelligence and says that there's nothing artificial about machine intelligence, just like there's nothing artificial about the strength of a tractor. It's a nuance, but it's kind of interesting nonetheless; words matter. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get "smarter", to make better models, for example, that can lead to augmented intelligence and help humans make better decisions. These models improve as they get more data and are iterated over time. Now, deep learning is a more advanced type of machine learning; it uses more complex math. But the point that we want to make here is that today much of the activity in AI is around building and training models, and this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years. Inference is the deployment of that model we were just talking about: taking real time data from sensors, processing that data locally and then applying the training that has been developed in the cloud and making micro adjustments in real time. So let's take an example. Again, we love Tesla examples.
Think about an algorithm that optimizes the performance and safety of a car on a turn. The model takes data on friction, road conditions, the angle of the tires, tire wear, tire pressure, all this data, and it keeps testing and iterating, testing and iterating, testing and iterating that model until it's ready to be deployed. And then all this intelligence goes into an inference engine, which is a chip that goes into a car, gets data from sensors and makes these micro adjustments in real time on steering and braking and the like. Now, as we said before, Tesla persists the data for a very short time, because there's so much of it; it just can't push it all back to the cloud. But it can, however, selectively store certain data if it needs to, and then send that data back to the cloud to further train the model. Let's say, for instance, an animal runs into the road during slick conditions. Tesla wants to grab that data, because maybe they notice that there's a lot of accidents in New England in certain months. And maybe Tesla takes that snapshot and sends it back to the cloud and combines it with other data, maybe from other parts of the country or other regions of New England, and it perfects that model further to improve safety. This is just one example of thousands and thousands that are going to further develop in the coming decade. I want to talk about how we see this evolving over time. Inference is where we think the value is; that's where the rubber meets the road, so to speak, based on the previous example. Now, this conceptual chart shows the percent of spend over time on modeling versus inference. And you can see some of the applications that get attention today and how these applications will mature over time. As inference becomes more and more mainstream, the opportunities for AI inference at the edge and in IoT are enormous. And we think that over time, 95% of that spending is going to go to inference, where it's probably only 5% today.
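The train-centrally, infer-locally split in that example can be illustrated with a toy model: a one-variable least-squares fit standing in for the cloud-trained braking model, and a loop over simulated sensor readings standing in for the in-car inference engine. All names and numbers here are pedagogical stand-ins, not an actual autonomous-driving algorithm:

```python
# "Cloud" phase: fit a tiny model offline from historical data.
def train(data):
    """Least-squares fit of brake_force ~ slope * friction + intercept."""
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in data)
    var = sum((x - mean_x) ** 2 for x, _ in data)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Synthetic historical (friction, ideal brake force) pairs.
history = [(0.2, 8.0), (0.4, 6.0), (0.6, 4.0), (0.8, 2.0)]
model = train(history)   # frozen model, "shipped" to the inference engine

# "Edge" phase: apply the frozen model to live sensor readings in real time.
def infer(model, friction_reading):
    slope, intercept = model
    return slope * friction_reading + intercept

for friction in (0.25, 0.55, 0.75):            # simulated sensor stream
    print(f"friction={friction:.2f} -> brake {infer(model, friction):.1f}")
```

The expensive part (fitting) happens once, centrally; the per-reading work at the edge is a couple of multiply-adds, which is exactly why inference can live on a cheap embedded chip while training stays in the cloud.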
Now, today's modeling workloads are pretty prevalent in things like fraud, adtech, weather, pricing, recommendation engines, and those kinds of things, and those will keep getting better and better over time. Now, in the middle here, we show the industries, which are all going to be transformed by these trends. Now, one of the points Moschella makes in his book is he kind of explains why historically vertical industries have been pretty stovepiped. They have their own stack, sales and marketing and engineering and supply chains, et cetera, and experts within those industries tend to stay within those industries, and they're largely insulated from disruption from other industries, maybe unless they were part of a supply chain. But today, you see all kinds of cross industry activity: Amazon entering grocery, entering media; Apple in finance and potentially getting into EVs; Tesla eyeing insurance. There are many, many examples of tech giants who are crossing traditional industry boundaries, and the reason is because of data. They have the data, and they're applying machine intelligence to that data and improving. Auto manufacturers, for example, over time are going to have better data than insurance companies. DeFi, decentralized finance platforms, are going to use the blockchain, and they're continuing to improve. Blockchain today doesn't have great performance; it's very overhead intensive with all that encryption. But as those platforms take advantage of this new processing power and better software and AI, they could very well disrupt traditional payment systems. And again, there are so many examples here. But what I want to do now is dig into enterprise AI a bit. And just a quick reminder, we showed this last week in our Armv9 post. This is data from ETR. The vertical axis is net score; that's a measure of spending momentum. The horizontal axis is market share, or pervasiveness in the dataset. The red line at 40% is like a subjective anchor that we use.
Anything above 40% we think is really good. Machine learning and AI is the number one area of spending velocity and has been for a while. RPA is right there; very frankly, it's an adjacency to AI, you could even argue. Now, it's the cloud where all the ML action is taking place today, but that will change, we think, as we just described, because data's going to get pushed to the edge. And this chart will show you some of the vendors in that space. These are the companies that CIOs and IT buyers associate with their AI and machine learning spend. So it's the same XY graph, spending velocity by market share on the horizontal axis. Microsoft, AWS, Google, of course, the big cloud guys, they dominate AI and machine learning. Facebook's not on here; Facebook's got great AI as well, but it's not enterprise tech spending. These cloud companies have the tooling, they have the data, they have the scale, and as we said, lots of modeling is going on today, but this is going to increasingly be pushed into remote AI inference engines that will have massive processing capabilities collectively. So we're moving away from that peak centralization, as Satya Nadella described. You see Databricks on here; they're seen as an AI leader. SparkCognition, they're off the charts, literally, in the upper left. They have an extremely high net score, albeit with a small sample. They apply machine learning to massive data sets. DataRobot does automated AI; they're super high on the y-axis. Dataiku, they help create machine learning based apps. C3.ai, you're hearing a lot more about them; Tom Siebel's involved in that company. It's an enterprise AI firm, and you hear a lot of their ads now about doing AI in a responsible way, really kind of enterprise AI, which has sort of always been IBM Watson's calling card. There's SAP with Leonardo, Salesforce with Einstein. Again, IBM Watson is right there, just at the 40% line. You see Oracle is there as well.
They're embedding machine intelligence with their self-driving database, as they call it, that sort of machine intelligence in the database. You see Adobe there. So a lot of typical enterprise company names. And the point is that these software companies are all embedding AI into their offerings. So if you're an incumbent company and you're trying not to get disrupted, the good news is you can buy AI from these software companies. You don't have to build it; you don't have to be an expert at AI. The hard part is going to be how and where to apply AI, and the simplest answer there is: follow the data. There's so much more to the story, but we just have to leave it there for now, and I want to summarize. We have been pounding the table that the post x86 era is here. It's a function of volume: Arm wafer volumes are 10X those of x86. Pat Gelsinger understands this; that's why he made that big announcement. He's trying to transform the company. The importance of volume in terms of lowering the cost of semiconductors can't be overstated. And today, we've quantified something that we haven't really seen much of and really haven't seen before, and that's that the actual performance improvements we're seeing in processing today are far outstripping anything we've seen before. Forget Moore's Law being dead, that's irrelevant; the original finding is being blown away this decade, and who knows with quantum computing what the future holds. This is a fundamental enabler of AI applications. And as is most often the case, the innovation is coming from the consumer use cases first. Apple continues to lead the way, and Apple's integrated hardware and software model, we think, is increasingly going to move into the enterprise mindset. Clearly the cloud vendors are moving in this direction, building their own custom silicon and doing really that deep integration.
You see this with Oracle, which is really kind of a good example of the iPhone for the enterprise, if you will. It just makes sense that optimizing hardware and software together is going to gain momentum, because there's so much opportunity for customization in chips, as we discussed last week with Arm's announcement, especially with the diversity of edge use cases. And it's the direction that Pat Gelsinger is taking Intel, trying to provide more flexibility. One aside: Pat Gelsinger may face massive challenges, which we laid out a couple of posts ago in our Intel breaking analysis, but he is right on, in our view, that semiconductor demand is increasing and there's no end in sight. We don't think we're going to see the ebbs and flows we've seen in the past, those boom and bust cycles for semiconductors. We just think that prices are coming down, the market's elastic, and the market is absolutely exploding with huge demand for fab capacity. Now, if you're an enterprise, you should not stress about trying to invent AI; rather, you should put your focus on understanding what data gives you competitive advantage and how to apply machine intelligence and AI to win. You're going to be buying, not building, AI, and you're going to be applying it. Now, data, as John Furrier has said in the past, is becoming the new development kit. He said that 10 years ago, and it seems he was right. Finally, if you're an enterprise hardware player, you're going to be designing your own chips and writing more software to exploit AI. You'll be embedding custom silicon and AI throughout your product portfolio, in storage and networking, and you'll be increasingly bringing compute to the data. And that data will mostly stay where it's created. Again, systems and storage and networking stacks are all being completely re-imagined. If you're a software developer, you now have processing capabilities in the palm of your hand that are incredible.
And you're going to be writing new applications to take advantage of this and use AI to change the world, literally. You'll have to figure out how to get access to the most relevant data, you'll have to figure out how to secure your platforms, and innovate. And if you're a services company, your opportunities to help customers that are trying not to get disrupted are many. You have the deep industry expertise and horizontal technology chops to help customers survive and thrive. Privacy? AI for good? Yeah, well, that's a whole other topic. I think for now, we have to get a better understanding of how far AI can go before we determine how far it should go. Look, protecting our personal data and privacy should definitely be something that we're concerned about and should protect. But generally, I'd rather not stifle innovation at this point. I'd be interested in what you think about that. Okay, that's it for today. Thanks to David Floyer, who helped me with this segment again and did a lot of the charts and the data behind this; he's done some great work there. Remember, these episodes are all available as podcasts wherever you listen; just search Breaking Analysis podcast and please subscribe to the series. We'd appreciate that. Check out ETR's website at etr.plus. We also publish a full report with more detail every week on wikibon.com and siliconangle.com, so check that out. You can get in touch with me; I'm dave.vellante@siliconangle.com. You can DM me on Twitter @dvellante or comment on our LinkedIn posts. I always appreciate that. This is Dave Vellante for theCUBE Insights powered by ETR. Stay safe, be well, and we'll see you next time. (bright music)

Published Date : Apr 10 2021

Pradeep Sindhu


 

>> As I've said many times on theCUBE, for years, decades even, we've marched to the cadence of Moore's Law, relying on the doubling of performance every 18 months or so. But no longer is this the mainspring of innovation for technology; rather, it's the combination of data, applying machine intelligence and the cloud, supported by the relentless reduction of the cost of compute and storage and the build-out of a massively distributed computer network. Very importantly, over the last several years alternative processors have emerged to support offloading work and performing specific tasks. GPUs are the most widely known example of this trend, with the ascendancy of Nvidia for certain applications like gaming and crypto mining and, more recently, machine learning. But in the middle of the last decade we saw early development focused on the DPU, the data processing unit, which is projected to make a huge impact on data centers in the coming years as we move into the next era of cloud. And with me is Pradeep Sindhu, who's the co-founder and CEO of Fungible, a company specializing in the design and development of DPUs. Pradeep, welcome to theCUBE. Great to see you. >> Thank you, Dave, and thank you for having me. >> You're very welcome. So okay, my first question is, don't CPUs and GPUs process data already? Why do we need a DPU? >> That is a natural question to ask. CPUs have been around in one form or another for almost 55, maybe 60 years. This is when general purpose computing was invented, and essentially all CPUs went to the x86 architecture by and large; Arm, of course, is used very heavily in mobile computing, but x86 is primarily used in the data center, which is our focus. Now, you can understand that the architecture of general purpose CPUs has been refined heavily by some of the smartest people on the planet.
And for the longest time, the improvements you referred to, Moore's Law, which is really the improvement of the price performance of silicon over time, that combined with architectural improvements was the thing that was pushing us forward. Well, what has happened is that the architectural refinements are more or less done. You're not going to get very much more; you're not going to squeeze more blood out of that stone from the general purpose computer architecture. What has also happened over the last decade is that Moore's Law, which is essentially the doubling of the number of transistors on a chip, has slowed down considerably, to the point where you're only getting maybe 10, 20% improvements every generation in the speed of the transistor, if that. And what's happening also is that the spacing between successive generations of technology is actually increasing, from two, two and a half years to now three, maybe even four years. And this is because we are reaching some physical limits in CMOS. These limits are well-recognized, and we have to understand that they apply not just to general purpose CPUs but also to GPUs. Now, general purpose CPUs can do any kind of computation; they're really general and they can do lots and lots of different things. It is actually a very, very powerful engine. The problem is it's not powerful enough to handle all computations. So this is why you ended up having a different kind of processor called the GPU, which specializes in executing vector floating-point arithmetic operations much, much better than a CPU, maybe 20, 30, 40 times better. Well, GPUs have now been around for probably 15, 20 years, mostly addressing graphics computations, but recently, in the last decade or so, they have been used heavily for AI and analytics computations. So now the question is, well, why do you need another specialized engine called the DPU?
Well, I started down this journey almost eight years ago, while I was still at Juniper Networks, which is another company that I founded. I recognized that in the data center, as the workload changes to addressing more and more, larger and larger corpuses of data, number one, and as people use scale-out as the standard technique for building applications, the amount of east-west traffic increases greatly. What happens is that you now have a new type of workload which is coming, and today probably 30% of the workload in a data center is what we call data-centric. I want to give you some examples of what a data-centric workload is. >> Well, I wonder if I could interrupt you for a second. >> Of course. >> Because I want those examples, and I want you to tie it into the cloud, 'cause that's kind of the topic that we're talking about today and how you see that evolving. I mean, it's a key question that we're trying to answer in this program. Of course, early cloud was about infrastructure: a little compute, a little storage, a little networking. And now, to your point, we have to get to all this data in the cloud. And we're seeing, by the way, the definition of cloud expand into this distributed or, I think a term you use, disaggregated network of computers. So you're a technology visionary, and I wonder how you see that evolving, and then please work in your examples of that critical workload, that data-centric workload. >> Absolutely, happy to do that. So if you look at the architecture of our cloud data centers, the single most important invention was scale-out of identical or near-identical servers, all connected to a standard IP ethernet network. That's the architecture. Now, the building blocks of this architecture are the IP ethernet switches, which make up the network. And then the servers are all built using general purpose x86 CPUs, with DRAM, with SSDs, with hard drives, all connected inside to the CPU. 
Now, the fact that you scale these server nodes, as they're called, out was very, very important in addressing the problem of how you build very large scale infrastructure using general purpose compute. But this architecture is a compute-centric architecture, and the reason it's compute-centric is that if you open this server node, what you see is a connection to the network, typically with a simple network interface card, and then you have CPUs which are in the middle of the action. Not only are the CPUs processing the application workload, but they're processing all of the IO workload, what we call the data-centric workload. And so when you connect SSDs, and hard drives, and GPUs, and everything to the CPU, as well as to the network, you can now imagine the CPU is doing two functions: it's running the applications, but it's also playing traffic cop for the IO. So every IO has to go through the CPU, and you're executing instructions, typically in the operating system, and you're interrupting the CPU many, many millions of times a second. Now, general purpose CPUs and the architecture of CPUs were never designed to play traffic cop, because the traffic cop function is a function that requires you to be interrupted very, very frequently. So it's critical that in this new architecture, where there's a lot of data, a lot of east-west traffic, the percentage of workload which is data-centric has gone from maybe one to 2% to 30 to 40%. I'll give you some numbers which are absolutely stunning. If you go back to say 1987, which is the year in which I bought my first personal computer, the network was some 30 times slower than the CPU. The CPU was running at 15 megahertz, the network was running at three megabits per second. Today the network runs at 100 gigabits per second, and the CPU clock speed of a single core is about 2.3 to 3 gigahertz. So you've seen that there's a 600X change in the ratio of IO to compute, just in raw clock speed. 
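A back-of-envelope sketch of that ratio shift, using only the raw figures quoted above. Dividing link bit rate by clock frequency is a simplification; taken literally it gives a shift of a couple hundred times, the same order of magnitude as the 600X quoted, with the exact multiplier depending on how you count bits moved per clock cycle.

```python
# IO-vs-compute shift, from the raw figures quoted in the conversation.

cpu_1987_hz = 15e6       # 15 MHz CPU in 1987
net_1987_bps = 3e6       # 3 Mbit/s network in 1987
cpu_now_hz = 2.3e9       # ~2.3 GHz single core today
net_now_bps = 100e9      # 100 Gbit/s network today

ratio_1987 = net_1987_bps / cpu_1987_hz  # 0.2: network slower than CPU
ratio_now = net_now_bps / cpu_now_hz     # ~43: network faster than CPU

shift = ratio_now / ratio_1987
print(f"IO-to-compute ratio shifted by roughly {shift:.0f}x")
```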
Now, you can tell me that, hey, typical CPUs have lots and lots of cores, but even when you factor that in, there's been close to two orders of magnitude change in the ratio of IO to compute. There is no way to address that without changing the architecture, and this is where the DPU comes in. The DPU actually solves two fundamental problems in cloud data centers. And these are fundamental; there's no escaping it. No amount of clever marketing is going to get around these problems. Problem number one is that in a compute-centric cloud architecture, the interactions between server nodes are very inefficient. That's problem number one. Problem number two is that these data-centric computations, and I'll give you those four examples: the network stack, the storage stack, the virtualization stack, and the security stack. Those four examples are executed very inefficiently by CPUs. Needless to say, if you try to execute these on GPUs you will run into the same problem, probably even worse, because GPUs are not good at executing these data-centric computations. So what we were looking to do at Fungible is to solve these two basic problems. And you don't solve them by just taking older architectures off the shelf and applying them to these problems, because this is what people have been doing for the last 40 years. So what we did was create this new microprocessor that we call the DPU from the ground up. It's a clean sheet design, and it solves those two problems fundamentally. >> So I want to get into that. And I just want to stop you for a second and ask you a basic question, which is, if I understand it correctly: if I just took the traditional scale-out, if I scale out compute and storage, you're saying I'm going to hit diminishing returns. Not only is it not going to scale linearly, I'm going to get inefficiencies. And that's really the problem that you're solving. Is that correct? >> That is correct. 
And the workloads that we have today are very data-heavy. Take AI, for example; take analytics, for example. It's well known that for AI training, the larger the corpus of relevant data that you're training on, the better the result. So you can imagine where this is going to go. >> Right. >> Especially when people have figured out a formula that, hey, the more data I collect, I can use those insights to make money- >> Yeah, this is why I wanted to talk to you, because for the last 10 years we've been collecting all this data. Now, I want to bring in some other data that you actually shared with me beforehand, some market trends that you guys cited in your research. The first thing people said is they want to improve their infrastructure, and they want to do that by moving to the cloud. And there was a security angle there as well; that's a whole other topic we could discuss. The other stat that jumped out at me: 80% of the customers that you surveyed said they'll be augmenting their x86 CPU with alternative processing technology. So that's sort of, I know it's self-serving, but it's right on the conversation we're having. So I want to understand the architecture. >> Sure. >> And how you've approached this. You've clearly laid out that x86 is not going to solve this problem, and even GPUs are not going to solve the problem. >> They're not going to solve the problem. >> So help us understand the architecture and how you do solve this problem. >> I'll be very happy to. Remember I used this term traffic cop. I use this term very specifically because, first, let me define what I mean by a data-centric computation, because that's the essence of the problem we're solving. Remember I said two problems. One is we execute data-centric workloads at least an order of magnitude more efficiently than CPUs or GPUs, probably 30 times more efficiently. 
And the second thing is that we allow nodes to interact with each other over the network much, much more efficiently. Okay, so let's keep those two things in mind. So first let's look at the data-centric piece. For a workload to qualify as being data-centric, four things have to be true. First of all, it needs to come over the network in the form of packets. Well, this is all workloads, so I'm not saying anything new. Secondly, this workload is heavily multiplexed, in that there are many, many, many computations that are happening concurrently, thousands of them, okay? That's number two: a lot of multiplexing. Number three is that this workload is stateful. In other words, you can't process packets out of order; you have to do them in order because you're terminating network sessions. And the last one is that when you look at the actual computation, the ratio of IO to arithmetic is medium to high. When you put all four of them together, you actually have a data-centric workload, right? And this workload is terrible for general purpose CPUs. Not only does the general purpose CPU not execute it properly, the application that is running on the CPU also suffers, because data-centric workloads are interfering workloads. So unless you design specifically for them, you're going to be in trouble. So what did we do? Well, our architecture consists of very, very heavily multi-threaded general purpose CPUs combined with very heavily threaded specific accelerators. I'll give you examples of some of those accelerators: DMA accelerators, erasure coding accelerators, compression accelerators, crypto accelerators, and lookup accelerators. These are just some of them. These are functions that, if you do not specialize, you're not going to execute efficiently. But you cannot just put accelerators in there; these accelerators have to be multi-threaded. 
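The four qualifying conditions above can be sketched as a simple predicate. This is purely an illustrative encoding, not Fungible's actual classification logic, and the numeric thresholds are assumptions chosen to match the "thousands of concurrent computations" and "medium-to-high IO ratio" language.

```python
# Illustrative sketch of the four "data-centric workload" conditions.
from dataclasses import dataclass

@dataclass
class Workload:
    arrives_as_packets: bool   # comes over the network as packets
    concurrent_contexts: int   # degree of multiplexing
    stateful: bool             # packets must be processed in order
    io_to_arithmetic: float    # ratio of IO ops to arithmetic ops

def is_data_centric(w: Workload) -> bool:
    return (w.arrives_as_packets
            and w.concurrent_contexts >= 1000  # "thousands" concurrently
            and w.stateful
            and w.io_to_arithmetic >= 0.5)     # medium-to-high (assumed cutoff)

# A storage stack fits all four; a dense matrix multiply does not.
storage_stack = Workload(True, 5000, True, 2.0)
matrix_multiply = Workload(True, 8, False, 0.05)
print(is_data_centric(storage_stack), is_data_centric(matrix_multiply))
```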
We have something like 1,000 different threads inside our DPU to address these many, many, many computations that are happening concurrently, and handle them efficiently. Now, the thing that is very important to understand is that, given the abundance of transistors, I know that we have hundreds of billions of transistors on a chip, but the problem is that those transistors are used very inefficiently today in the architecture of a CPU or a GPU. What we have done is improve the efficiency of those transistors by 30 times, okay? >> So you can use the real estate much more effectively? >> Much more effectively, because we were not trying to solve a general purpose computing problem. Because if you do that, we're going to end up in the same bucket where general purpose CPUs are today. We were trying to solve a specific problem of data-centric computations and of improving the node-to-node efficiency. So let me go to point number two, because that's equally important. Because in a scale-out architecture, the whole idea is that I have many, many nodes and they're connected over a high performance network. It might be shocking for your listeners to hear that these networks today run at a utilization of no more than 20 to 25%. The question is why? Well, the reason is that if I tried to run them faster than that, you start to get packet drops, because there are some fundamental problems caused by congestion on the network which are unsolved as we speak today. There is only one solution, which is to use TCP. Well, TCP is well known; it's part of the TCP/IP suite. TCP was never designed to handle the latencies and speeds inside a data center. It's a wonderful protocol, but it was invented 43 years ago now. >> Yeah, very reliable and tested and proven. It's got a good track record, but you're right. >> Very good track record; unfortunately it eats a lot of CPU cycles. So if you take the idea behind TCP and you say, okay, what's the essence of TCP? How would you apply it to the data center? 
That's what we've done with what we call FCP, which is a fabric control protocol, which we intend to open. We intend to publish the standards and make it open. And when you do that, and you embed FCP in hardware on top of this standard IP ethernet network, you end up with the ability to run very large-scale networks where the utilization of the network is 90 to 95%, not 20 to 25%. >> Wow, okay. >> And you end up solving problems of congestion at the same time. Now, why is this important today? That's all geek speak so far. The reason this stuff is important is that such a network allows you to disaggregate, pool and then virtualize the most important and expensive resources in the data center. What are those? It's compute on one side, storage on the other side. And increasingly even things like DRAM want to be disaggregated. Well, if I put everything inside a general purpose server, the problem is that those resources get stranded because they're stuck behind a CPU. Once you disaggregate those resources, and we're saying hyper-disaggregate, which simply means that you can disaggregate almost all the resources. >> And then you're going to re-aggregate them, right? I mean, that's obviously- >> Exactly, and the network is the key in helping. >> Okay. >> So the reason the company is called Fungible is because we are able to disaggregate, virtualize and then pool those resources. The scale-out companies, the large AWS, Google, et cetera, have been doing this disaggregation and pooling for some time, but because they've been using a compute-centric architecture, their disaggregation is not nearly as efficient as we can make it. And they're off by about a factor of three. When you look at enterprise companies, they are off by another factor of four, because the utilization of enterprise infrastructure is typically around 8%. 
The utilization in the cloud, for AWS, and GCP, and Microsoft, is closer to 35 to 40%. So there is a factor of almost four to eight which you can gain by disaggregating and pooling. >> Okay, so I want to interrupt you again. So these hyperscalers are smart. They have a lot of engineers, and we've seen them. Yeah, you're right, they're using a lot of general purpose, but we've seen them make moves toward GPUs and embrace things like Arm. So I know you can't name names, but you would think that, with all the data that's in the cloud, again, our topic today, the hyperscalers are all over this. >> Well, the hyperscalers recognize that the problems we have articulated are important ones, and they're trying to solve them with the resources that they have and all the clever people that they have. So these are recognized problems. However, please note that each of these hyperscalers has their own legacy now. They've been around for 10, 15 years, and so they're not in a position to all of a sudden turn on a dime. This is what happens to all companies at some point. >> They have technical debt, you mean? (laughs) >> I'm not going to say they have technical debt, but they have a certain way of doing things and they are in love with the compute-centric way of doing things. And eventually it will be understood that you need a third element called the DPU to address these problems. Now, of course, you've heard the term SmartNIC. >> Yeah, right. >> Or your listeners must've heard that term. Well, a SmartNIC is not a DPU. What a SmartNIC is, is simply taking general purpose Arm cores, putting a network interface and a PCI interface on them, integrating them all on the same chip, and separating them from the CPU. So this does solve a problem. It solves the problem of the data-centric workload interfering with the application workload. Good job, but it does not address the architectural problem of how to execute data-centric workloads efficiently. 
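One way to read the utilization numbers quoted above as arithmetic: the gap from roughly 8% enterprise utilization to 35-40% hyperscale utilization accounts for the low end of the "four to eight" range, and the high end presumably assumes disaggregation pushing utilization further; the 65% target below is an assumed illustrative figure, not one stated in the conversation.

```python
# Utilization-gap arithmetic from the figures quoted above.
enterprise_util = 0.08               # ~8% enterprise utilization
cloud_low, cloud_high = 0.35, 0.40   # ~35-40% hyperscale utilization
target_util = 0.65                   # assumed post-disaggregation target

gain_low = cloud_low / enterprise_util     # ~4.4x: reach today's cloud level
gain_high = cloud_high / enterprise_util   # ~5.0x
gain_target = target_util / enterprise_util  # ~8.1x: with further headroom

print(f"{gain_low:.1f}x to {gain_high:.1f}x, up to ~{gain_target:.1f}x")
```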
>> Yeah, so I understand what you're saying; I was going to ask you about SmartNICs. It's almost like a bridge or a band-aid. >> Band-aid? >> It almost reminds me of throwing high-flash storage on a disk system that was designed for spinning disks. It gave you something, but it doesn't solve the fundamental problem. I don't know if it's a valid analogy, but we've seen this in computing for a long time. >> Yeah, this analogy is close. Because, okay, so let's take a hyperscaler X, okay? We won't name names. You find that half my CPUs are twiddling their thumbs because they're executing this data-centric workload. Well, what are you going to do? All your code is written in C++ on x86. Well, the easiest thing to do is to separate the cores that run this workload and put it on a different processor. Let's say we use Arm, simply because x86 licenses are not available for people to build their own CPUs, so Arm was available. So they put a bunch of Arm cores in, they stick a PCI Express and a network interface on, and you port that code from x86 to Arm. Not difficult to do, and it does get you results. And by the way, if for example this hyperscaler X, shall we call them, is able to remove 20% of the workload from general purpose CPUs, that's worth billions of dollars. So of course you're going to do that. It requires relatively little innovation other than to port code from one place to another place. >> Pradeep, that's what I'm saying. I mean, I would think, again, the hyperscalers, why can't they just do some work and do some engineering and then give you a call and say, okay, we're going to attack these workloads together? That's similar to how they brought in GPUs. And you're right, it's worth billions of dollars. You could see when the hyperscalers Microsoft Azure and AWS both announced, I think, that they now depreciate servers over five years instead of four, and it dropped like a billion dollars to their bottom line. But why not just work directly with you guys? 
I mean, it seems the logical play. >> Some of them are working with us. So that's not to say that they're not working with us. All of the hyperscalers recognize that the technology we're building is fundamental, that we have something really special, and moreover it's fully programmable. So the whole trick is, you can actually build a lump of hardware that is fixed-function. But the difficulty is that in the place where the DPU would sit, which is on the boundary of a server and the network, literally on that boundary, the functionality needs to be programmable. And so the whole trick is how you come up with an architecture where the functionality is programmable, but is also very high speed for this particular set of applications. So the analogy with GPUs is nearly perfect, because with GPUs, Nvidia in particular invented CUDA, which is the programming language for GPUs. It made them easy to use, made them fully programmable without compromising performance. Well, this is what we're doing with DPUs. We've invented a new architecture, and we've made them very easy to program. And these workloads, not workloads, computations that I talked about, which are security, virtualization, storage and networking, those four are quintessential examples of data center workloads, and they're not going away. In fact, they're becoming more, and more, and more important over time. >> I'm very excited for you guys, I think, and really appreciate it, Pradeep. We'll have you back, because I really want to get into some of the secret sauce. You talked about these accelerators, erasure coding and crypto accelerators, but I want to understand that. I know there's NVMe in here; there's a lot of hardware and software and intellectual property. But we're seeing this notion of programmable infrastructure extending now into this domain, this build-out of this, I like this term, disaggregated, massive disaggregated network. >> Hyper disaggregated. 
>> It's so hyper disaggregated, even better. And I would say this, and then I've got to go: what got us here the last decade is not the same as what's going to take us through the next decade. >> That's correct. >> Pradeep, thanks so much for coming on theCUBE. It's been a great conversation. >> Thank-you for having me; it's really a pleasure to speak with you and get the message of Fungible out there. >> Yeah, I promise we'll have you back. And keep it right there, everybody; we've got more great content coming your way on theCUBE on Cloud. This is Dave Vellante. Stay right there. >> Thank-you, Dave.

Published Date : Jan 4 2021

VMworld Day 1 General Session | VMworld 2018


 

From Las Vegas, it's theCUBE, covering VMworld 2018, brought to you by VMware and its ecosystem partners. Ladies and gentlemen, VMware would like to thank its global diamond sponsors and its platinum sponsors for VMworld 2018. With over 125,000 members globally, the VMware User Group connects VMware customers, partners and employees to VMware information resources, knowledge sharing, and networking. To learn more, visit the [inaudible] booth in the Solutions Exchange or the [inaudible] VM Village and become a part of the community today. This presentation includes forward-looking statements that are subject to risks and uncertainties. Actual results may differ materially as a result of various risk factors, including those described in the 10-Ks and 10-Qs VMware files with the SEC. Ladies and gentlemen, please welcome Pat Gelsinger. Welcome to VMworld. Good morning. Let's try that again. Good morning, and I'll just say it is great to be here with you today. I'm excited about the sixth year of being CEO. It was on this stage six years ago where Paul Maritz handed me the clicker, and that's the last he was seen. We have 20,000 plus here on site in Vegas, and, you know, on behalf of everyone at VMware, we're just thrilled that you would be with us, and it's a joy and a thrill to be able to lead such a community. We have a lot to share with you today, and we really think about it as a community. You know, it's my 23,000 plus employees, the souls that I'm responsible for, but it's our partners, the thousands, and we kicked off our partner day yesterday. But most importantly, the VMware community is centered on you. You know, we're very aware that this event would be nothing without you and our community, and the role that we play at VMware is to build these cool breakthrough innovations that enable you to do incredible things. You're the ones who take our stuff and do amazing things. You, all together. 
We have truly changed the world over the last two decades, and it is two decades. You know, it's our anniversary: in 1998, five people started VMware, right? You know, it was exactly 20 years ago, and we're just thrilled. And I was thinking about this over the weekend and it struck me, you know, anniversary, that's like old people, you know? We're here, we're having our birthday and it's a party, right? We can't have a drink yet, but next year. Yeah, we're 20 years old, right? We can do that now. And I'll just say the culture of this community is something that truly is amazing, and in my 38 years, 38 years in tech, that sort of sounds like I'm getting old or something, but the passion, the loyalty, the almost cult-like behavior that we see in this team of people is simply thrilling to us. And you know, we put together a little video to sort of summarize the 20 years and some of that history and some of the unique and quirky aspects of our culture. Let's watch that now. We knew we had something unique, and then we demonstrated that what was unique was also some of the reasons that we love VMware, you know, like the community out there, so great. The technology, I love it; VMware is solid and much needed. Literally, I do love VMware. It's awesome. Super awesome. Pardon? There's always someone that wants to listen and learn from us, and we've learned so much from them as well. And we reached out to VMware to help us start building what that future world looks like. Since we're doing really cutting-edge stuff, there's really no better people to call, and VMware has been known for continuous innovation. There's no better way to learn how to do new things in IT than being with a company that's at the forefront of technology. What do you think? Don't you love that commitment? Hey Ashley, you know, in the prep sessions for this, I thought, boy, what can I do to take my commitment to the next level? 
And so, you know, coming in a couple days early, I went down the street to Bad Ass Tattoo. So it's time for all of us to take our commitment up a level, and sometimes what happens in Vegas, you take home. Thank you. VMware has had this unique role in the industry over these 20 years, you know, and through that we've seen just incredible things happen over this period of time, and it's truly extraordinary what we've accomplished together. And you know, as we think back, what VMware has uniquely been able to do is, I'll say, bridge across. And we've seen time and again that these areas of innovation emerge and rapidly move forward, but then as they become utilized by our customers, they create this natural tension, because the business wants the flexibility to work across these silos of innovation. And from the start of our history, we have collectively had this uncanny ability to bridge across these cycles of innovation. You know, act one was clearly the server generation. It may seem a little bit of an ancient memory now, but you remember, you used to walk into your data center and it looked like the Louvre, the museum of IT past, right? You know, you had your old P Series and your Z Series and your SPARCs and your PAs and your x86 clusters, and you had to decide, well, which architecture am I going to deploy and run this on? And we bridged across, and that was the magic of ESX. You know, it just changed the industry when that occurred. And I sort of called the early days of ESX and vSphere the intelligence test: if you weren't using it, you failed, because, yup, 10 servers become one, months become minutes. I still have people today who come up to me and reflect on their first experience of vSphere or vMotion, and it was like a holy moment in their life and in their careers. 
Amazing. And act two, BYOD: can we bridge across these devices? Users wanted to be able to come in and say, I have my device and I'm productive on it; I don't want to be forced to use the corporate standard. And maybe more than anything it was the power of the iPhone, introduced in '07, and suddenly every employee said, this is exciting and compelling, I want to use it so I can be more productive when I'm here. BYOD was the rage, and again it was a tough challenge, and once again VMware helped to bridge across the insurmountable challenge. And clearly our Workspace ONE community today is bridging across these silos, not just managing devices but truly enabling employee engagement and productivity. Maybe act three was the network. And you know, when we think about the network, for 30 years we were bound to this physical view of what the network would be, and in that network we were bound to specific protocols. We had to wait months for network upgrades and firewall rules; once every two weeks we'd upgrade them. If you had a new application that needed a firewall rule, sorry, you know, come back next month, we'll put it in then. Deep frustration among developers and CIOs. Everyone was ready to break the chains, and that's exactly what we did with NSX and Nicira. The day we acquired it, Cisco stock drops and the industry realizes that networking has changed in a fundamental way. It will never be the same again. Maybe act four was this idea of cloud migration. If we were here three years ago, it was student body right to the public cloud: everything is going there. And I remember I was meeting with a federal CIO, and he comes up to me and he says, I tried for the last two years to replatform my 200 applications; I got two done. You know, and all of a sudden there was this question: how do I do cloud migration in an effective and powerful way? 
Once again, we bridged across; we brought these two worlds together and eliminated this gap between private and public cloud. And we'll talk a lot more about that today. You know, maybe our next act is what we'll call the multi-cloud era. Because today, a recent survey by Deloitte said that the average business is using eight public clouds and expects to grow to 10-plus public clouds. And you know, as you're managing different tools, different teams, different architectures, how do you again bridge across? This is what we will do in the multi-cloud era: we will help our community to bridge across and take advantage of these powerful cycles of innovation that are going on, but be able to use them across a consistent infrastructure and operational environment. And we'll have a lot more to talk about on this topic today. You know, and maybe the last item to bridge across, maybe the most important: people or profit. Too often we think about this as an either-or question. As a business leader, am I worried about the people or the planet, right? And Milton Friedman probably set us up for this issue decades ago when he said the sole purpose of a business is to make profits. You want to create a multi-decade dilemma, right? For business leaders: could I have both people and profits? Could I do well and do good? And particularly for technology, I think we don't have a choice but to think about these together. We are permeating every aspect of business and society; we have the responsibility to do both. And of all the things that VMware has accomplished, I think this might be the one that I'm most proud of. You know, we have demonstrated with vSphere and the hypervisor alone that we have saved over 540 million tons of CO2 emissions. That is what you have done. Can you believe that? Five hundred forty million tons is enough to power 68 percent of all households for a year. Wow. 
Thank you for what you have done. Thank you. Another translation of that: it's enough to drive a trillion miles in the average car, or you could go to and from Jupiter, just in case that was on your itinerary, a thousand times. It's just incredible what we have done, and as a result, we were thrilled to accept this recognition on behalf of you and what you have done: VMware was recognized as number 17 on the Fortune Change the World list last week. We really view it as accepting this honor on behalf of what you have done with our products and technology. Tech as a force for good: we believe that fundamentally that is our opportunity, if not our obligation. Fundamentally, tech is neutral; we together must shape it for good. The printing press, Gutenberg in 1440, was used to create mass education and learning materials; it can also be used for extremist propaganda. The technology itself is neutral. Our ecosystem has a critical role to play in shaping technology as a force for good. And as we think about tomorrow, we'll have the opportunity to host a very special guest, and I really encourage you to be here, on time, tomorrow morning: on the stage in Sanjay's session we'll have Malala, the Nobel Peace Prize winner, and as a result there will be a bit of extra security as you come in, so you understand that, and I just encourage you not to be late. We see tech being a force for good in everything that we do at VMware, and I'm quite looking forward to the session tomorrow. Now, as we think about the future, I like to put it in this context: the superpowers of tech. You know, after 38 years in the industry, I am so excited, because I think everything that we've done over the last four decades is creating a foundation that allows us to do more and go faster together.
We're unlocking game-changing opportunities that have not been available to any people in the history of humanity. And I think about these four. Cloud: you have unimaginable scale. Literally, with your Amex card, you can go rent 10,000 cores for $100 per hour. Or if you have Michael's Amex card, we can rent a million cores for $10,000 an hour. Thanks, Michael. But we also know that in many ways we're just getting started, and we have tremendous issues to bridge across incompatible clouds. Mobile: unprecedented scale. Literally, your application can reach half the humans on the planet today. But we also know that the other half of humanity is still in the lower income brackets, less than five percent penetrated. And we have customer examples that are using mobile phones to raise impoverished farmers in Africa out of poverty, just by having a smartphone with proper crop, field, and weather guidance; that one tool alone is lifting them out of poverty. AI: you know, I really love the topic of AI. In 1986, I was the chief architect of the 80486. Some of you remember what that was. Yeah, you're my folk, right? And for those of you who don't, it was a really important chip at the time. My marketing manager comes running into my office and says, Pat, Pat, we must make the 486 a great AI chip. This is 1986. What happened? Nothing. AI is today a 30-year overnight success, because the algorithms and the data have gotten so much bigger that we can produce results, that we can bring intelligence to everything. And we're seeing dramatic breakthroughs in areas like healthcare: radiology, new drugs, diagnosis tools, and designer treatments. We're just scratching the surface, but AI has so many gaps yet; in many cases we don't even know why it works. Right?
And we'll call that explainable AI. And edge and IoT: we're connecting the physical and the digital worlds as never before possible. We're bridging technology into every dimension of human progress. Today we're largely just hooking up things; we have so much to do yet to make them intelligent, network-secured, automated, patched, bringing world-class IT to IoT. But it's not just that these are superpowers individually. We really see that each one of them is a superpower in its own right, and they're making each other more powerful as well. Cloud enables mobile connectivity. Mobile creates more data. More data makes the AI better. AI enables more edge use cases, and more edge requires more cloud to store the data and do the computing. They're reinforcing each other. And with that, we know that we are speeding up, and these superpowers are reshaping every aspect of society, from healthcare to education to transportation to financial institutions. This is how it all comes together. Now, just a simple example. How many of you have ever worn a hard hat? Yeah. Pretty boring thing, and it has one purpose: keep things from smacking me in the head. Here's the modern hard hat. It's a complete heads-up display with AR and VR capabilities that give the worker, the factory workers, the supply people, the ability to see through walls, to understand what's going on inside of the equipment. I always wondered when I was a kid what it would be like to have X-ray vision; you know, some of my thoughts weren't good about why I wanted it, but I wanted it. Well, now you can have it. But imagine, in this environment, the complex application that sits behind it. You're accessing maybe 50-year-old building plans, you're accessing HVAC systems, with modern AR and VR capabilities and new containerized displays. Think about that application.
You know, John Gage famously said the network is the computer. Pat today says the application is now a network, and typically a pretty complicated one. And this is the VMware vision: to make that kind of environment realizable in every aspect of our business and community. We simply have been on this journey: any device, any application, any cloud, with intrinsic security. This vision has been consistent, and for those of you who have been joining us for a number of years, you've seen this picture, but it's been slowly evolving as we've worked piece by piece to refine and extend it. We're going to use it as the compass for our discussion today as we walk through our conversation. And we're going to start with a focus on any cloud. As we think about this cloud topic, we see it as a multicloud world: hybrid cloud, public cloud, but increasingly we're seeing edge and telco becoming clouds in their own right. We're not going to spend much time on it today, but this area of telco is an enormous opportunity for us and our community. Data centers and cloud today are over 80 percent virtualized; the telco network is less than 10 percent virtualized. Wow. An industry that's almost as big as our industry, entirely unvirtualized, although the technologies we've created here can be applied over there in telco, and we have an enormous buildout coming with 5G and the environments emerging. What an opportunity for us: a virgin market right next to us, and we're getting some early mega wins in this area using the technologies that you have helped us curate in the market. So we're quite excited about this topic area as well. So let's look at this full view of the multicloud, any-cloud journey. We see that businesses are on a multicloud journey, and today we see this fundamentally in these two paths: a hybrid cloud and a public cloud.
And these paths are complementary and coexisting, but today each is being driven by unique requirements and unique teams. Largely, the hybrid cloud is being driven by IT and operations, the public cloud more by developers and line-of-business requirements, and together they form the multicloud environment. So how do we deliver upon that? For that, let's start by digging in on the hybrid cloud aspect. We've been talking about this subject for a number of years, and I want to give a very specific and crisp definition: the hybrid cloud is the public cloud and the private cloud cooperating with consistent infrastructure and consistent operations. Simply put, a seamless path to and from the cloud, so that my workloads don't care whether they're here or there. I'm able to run them in an agile, scalable, flexible, efficient manner across those two environments, whether it's my data center or someone else's. What brings them together to make that work is the magic of the VMware Cloud Foundation. VMware Cloud Foundation brings together compute, vSphere, the core of why we are here, and combines with it networking and storage, delivered through a layer of management and automation. The rule of the cloud is: ruthlessly automate everything. We laid out this vision of the software-defined data center seven years ago, and we've been steadfastly working on it. VMware Cloud Foundation provides this consistent infrastructure and operations with integrated lifecycle management, automation, and patching. VMware Cloud Foundation is the simplest path to the hybrid cloud, and the fastest way to get VMware Cloud Foundation is hyperconverged infrastructure, where we've combined, integrated, and validated hardware as a building block. Inside of this we have validated hardware, the vSAN Ready Node environments.
We have integrated appliances and cloud-delivered infrastructure: three ways that we deliver that integrated hyperconverged infrastructure solution. And we have by far the broadest ecosystem of partners to do it: a broad set of vSAN Ready Nodes from essentially everybody in the industry. Secondly, we have integrated appliances, the VxRail, which we have co-engineered with our partners at Dell Technologies, and in fact today Dell is releasing the PowerEdge servers, a major step in blade servers, that again are going to be powering VxRail and VxRack systems. And we deliver hyperconverged infrastructure through a broader set of VMware cloud partners as well. At the heart of the hyperconverged infrastructure is vSAN, and simply put, vSAN has been the engine that's been moving rapidly to take over the entire integration of compute and storage and expand to more and more areas. We have incredible momentum: over 15,000 customers for vSAN today, and for those of you who have joined us, we say thank you for what you have done with this product. Really amazing, with 50 percent of the Global 2000 using it. VMware vSAN and VxRail are clearly becoming the standard for how hyperconvergence is done in the industry. In our cloud partner programs, over 500 cloud partners are using vSAN in their solutions. And finally, we have the largest share of HCI software revenue. Simply put, vSAN is the software-defined storage technology of choice for the industry, and we're seeing customers put it to work in amazing ways. VMware and Dell Technologies believe in tech as a force for good, and that it can have a major impact on the quality of life for every human on the planet, particularly for the most underdeveloped parts of the world, those that live on less than $2 per day. In fact, at this moment, 5 billion people worldwide do not have access to modern, affordable surgery.
Mercy Ships is working hard to change the global surgery crisis. With greater than 400 volunteers, Mercy Ships operates the largest NGO hospital ship, delivering free medical care to the poorest of the poor in Africa. Let's hear from them now. When the ship shows up at a port, people literally line up for days to receive state-of-the-art, life-changing, life-saving surgeries: tumors, cleft lips, disease, blindness, birth defects. But not only that, the personnel are educating and training the local healthcare providers with new skills and infrastructure, so they can care for their own after the ship has left. Mercy Ships runs on VMware and Dell technology, with VxRail, Dell EMC Isilon, and data protection. We are the IT platform for Mercy Ships. Mercy Ships is now building their next-generation ship, called the Global Mercy, which will more than double its lifesaving capacity. It's the largest charity hospital ship ever. It will go live in 2020, serving Africa, and I personally plan on being there for its launch. It is truly amazing what they are doing with our technology. Thanks. So we've seen this picture of the hybrid cloud, and we've talked about how we do that for the private cloud. So let's look over at the public cloud and dig into this a little more deeply. We're taking this incredible power of the VMware Cloud Foundation and making it available through the leading cloud providers in the world, and with that, the partnership that we announced almost two years ago with Amazon. On this stage last year, we announced the first generation of products; there's no better example of the hybrid cloud. And for that, it's my pleasure to bring to the stage my friend, my partner, the CEO of AWS. Please welcome Andy Jassy. Thank you, Andy. You honor us with your presence, and it really is a pleasure to come in front of this audience and talk about what our teams have accomplished together over the last year.
>> Andy, can you give us some perspective on that, and what customers are doing with it? >> Well, first of all, thanks for having me; I really appreciate it. It's great to be here with all of you. You know, the offering that we have together, VMware Cloud on AWS, is very appealing to customers because it allows them to use the same software they've been using to manage their infrastructure for years and deploy it in AWS, and we see a lot of customer momentum and a lot of customers using it. You see it in every imaginable vertical business segment: in transportation you see it with Stagecoach, in media and entertainment with Discovery Communications, in education MIT and Caltech, in consulting Accenture and Cognizant and DXC. You see it in every imaginable vertical business segment, and the number of customers using the offering is doubling every quarter. So people are really excited about it, and I think that probably the number one use case we see so far, although there are a lot of them, is customers who are looking to migrate on-premises applications to the cloud. A good example of that is MIT. They're right now in the process of migrating; in fact, they just migrated 3,000 VMs from their data centers to VMware Cloud on AWS. This would have taken years to do in the past, but they did it in just three months. It was really spectacular, and they're just a fun company to work with, and the team there. But we're also seeing other use cases as well, and probably the second most common example is what we'll call on-demand capabilities, for things like disaster recovery. We have great examples of customers there, and one in particular is Brink's, right, the Brink's security trucks, the armored trucks coming by. They had a critical need to retire a secondary data center that they were using for DR, so we quickly built a DR protection environment for 600 VMs.
They migrated their mission-critical workloads, and voilà: stable and consistent DR. Now they're eliminating that site and looking at other migrations as well, at a rate of 10 to 15 percent. It was just a great deal. >> One of the things I believe, Andy, is that customers should never have to spend capital on DR ever again; with this kind of capability in place, that is just game-changing. And obviously we've been working on expanding our reach. We promised a year ago to make the service available across the global footprint of Amazon, and now we've delivered on that promise. In fact today, or yesterday if you're an Aussie down under, we announced Sydney as well, and now we're in the US, Europe, and APJ. >> Yeah, it's very exciting. Of course, Australia is one of the most virtualized places in the world, and it's pretty remarkable how fast European customers have started using the offering in just the quarter it's been out there. And of the many requests customers have had, probably the number one request has been that we make the offering available in all the regions that AWS has, and I can tell you that by the end of 2019 we'll largely be there, including with GovCloud. >> GovCloud, that's been huge for you guys. >> Yeah, it's a government-only region that we have, that a lot of federal government workloads live in, and we're pretty close to the offering having a FedRAMP authority to operate, which is a big deal and a game changer for governments, because then they'll be able to use the familiar tools they use in VMware, not just to run their workloads on premises but also in the cloud, with the data privacy and security requirements they need. So it's a real game changer for government too. Yeah.
>> And as you can see by the picture here, basically before the end of next year, everywhere that you are and have an availability zone, we're going to be there, right, Andy? >> Yup. >> Yeah, let's get with it, okay? We're a team; go faster. Okay. And it's not just making it available; it's this pace of innovation, and you guys have really taught us a few things in this respect. Since we went live in the Oregon region, we've been on a quarterly cadence of major releases. M2 was really about mission critical at scale, and we added our second region. We added our Hybrid Cloud Extension with M3, we moved to the global rollout, and we launched in Europe with M4, where we really added a lot of these mission-critical governance aspects and started to attack all of the industry certifications. And today we're announcing M5, right? >> And with that, I think we have this little cool thing we're doing with EBS and storage. >> Yeah. Two of the most important priorities for customers are cost and performance, and so we have a couple of things to talk about today that we're bringing to you that I think hit both of those. On the storage side, we've combined the elasticity of Amazon Elastic Block Store, or EBS, with VMware's vSAN, and we've provided a storage option that you'll be able to use that is very high capacity and much more cost effective. You'll see this initially on the VMware Cloud on AWS R5 instances, which are compute instances that are memory optimized, and so this will change the cost equation: you'll be able to use EBS by default, and it'll be much more cost effective for storage- or memory-intensive workloads. It's something that you guys have asked for; it's been very frequently requested, and it hits preview today.
And then the other thing is that we've worked really hard together to integrate VMware's NSX along with AWS Direct Connect, to have private, even higher-performance connectivity between on premises and the cloud. So, very exciting new capabilities showing deep integration between the companies. >> Yeah, and in that aspect of the deep integration, it's really been the thing that we committed to. We have large engineering teams that are working literally every day on bringing these platforms together, fusing them in a deep and intimate way so that we can deliver new services, just like Elastic DRS and this EBS capability: really powerful capabilities. And that pace of innovation continues. So next, maybe M6? I don't know, we'll see. All right. But we're continuing this torrid pace of innovation: completing all of the capabilities of NSX, full integration of all of the Direct Connect capabilities, really expanding that, improving licensing capabilities on the platform, and we'll be adding PKS on top of it for expanded developer capabilities. >> I think that was formerly known as... right. So anyway, we're continuing this pace of innovation going forward, but I think we also have a few other things to talk about today, Andy. >> Yeah, I think we have some news that hopefully people here will be pretty excited about. We have a pretty big database business at AWS, both on the relational and the non-relational side, and the business is billions of dollars in revenue for us.
On the relational side, we have a service called Amazon Relational Database Service, or Amazon RDS, that we have hundreds of thousands of customers using, because it makes it much easier for them to set up, operate, and scale their databases. So many companies now are operating in hybrid mode and will be for a while, and a lot of those customers have asked us: can you give us the ease of manageability of those databases, but on premises? And so we talked about it, we thought about it, and we worked with our partners at VMware, and I'm excited to announce, today, right now: Amazon RDS on VMware. That will bring all the capabilities of Amazon RDS to VMware's customers for their on-premises environments. What you'll be able to do is provision databases; scale the compute or the memory or the storage for those database instances; patch the operating system or database engines; and create read replicas to scale your database reads, and you can deploy those replicas either on premises or in AWS. You'll be able to deploy in a highly available configuration by replicating the data to different VMware clusters. You'll be able to create online backups that live either on premises or in AWS. And then you'll be able to take all those databases, and if you eventually want to move them to AWS, you'll be able to do so rather easily; you have a pretty smooth path. This is going to be available in a few months, and it will be available for Oracle, SQL Server, MySQL, PostgreSQL, and MariaDB. I think it's very exciting for our customers, and I think it's also a good example of where we're continuing to deepen the partnership, listen to what customers want, and innovate on their behalf. >> Absolutely. Thank you, Andy.
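The provisioning steps Andy lists (create an instance, size its compute and storage, enable high availability, keep online backups, add read replicas) can be sketched in the shape of the standard Amazon RDS API. This is only an illustrative sketch, not the RDS on VMware control plane itself: the identifier, instance class, and sizes below are hypothetical, and the field names mirror the parameters of boto3's `rds.create_db_instance` call.

```python
# Hypothetical sketch of the RDS provisioning request described above.
# Field names mirror boto3's rds.create_db_instance parameters; the
# values are illustrative, not a tested production configuration.

def provision_params(identifier: str, engine: str = "mysql") -> dict:
    """Build a create-database request with HA and backups enabled."""
    return {
        "DBInstanceIdentifier": identifier,
        "DBInstanceClass": "db.m5.large",  # compute/memory sizing, scalable later
        "Engine": engine,                  # mysql, postgres, mariadb, ...
        "AllocatedStorage": 100,           # storage in GiB, also scalable
        "MultiAZ": True,                   # replicate for high availability
        "BackupRetentionPeriod": 7,        # days of automated online backups
    }

# With boto3 installed and credentials configured, the calls would
# look roughly like this (not executed here):
#   import boto3
#   rds = boto3.client("rds")
#   rds.create_db_instance(**provision_params("inventory-db"))
#   rds.create_db_instance_read_replica(   # scale database reads
#       DBInstanceIdentifier="inventory-db-replica",
#       SourceDBInstanceIdentifier="inventory-db",
#   )

print(provision_params("inventory-db")["MultiAZ"])  # True
```

The same request shape covers both deployment targets Andy mentions: the on-premises case swaps in the RDS on VMware endpoint and placement details while keeping the management interface familiar.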
It is thrilling to see this, and as we said when we began the partnership, it was a deep integration of our offerings and our go-to-market, but also building this bidirectional hybrid highway to give customers the capabilities where they wanted them: cloud to on premise, on premise to the cloud. It really is a unique partnership that we've built, in the momentum we're feeling in our customer base and the cool innovations that we're doing. Andy, thank you so much for joining us today. Thank you guys, appreciate it. We really have just seen incredible momentum, and as you might have heard on the earnings call we just finished for the last quarter, we really saw customer momentum here accelerating. It's really exciting to see how customers are starting to do the hybrid cloud at scale, and with this we're seeing that VMware Cloud Foundation, available on Amazon and available on premise, is very powerful. But it's not just the partnership with Amazon. We are thrilled to see the momentum of our VMware Cloud Provider Program, and this idea of the VMware cloud providers has continued to gain momentum in the industry. Over five years, this program has now accumulated more than 4,200 cloud partners in over 120 countries around the globe. It gives you choice: your local provider, specialty offerings, some of your local trusted partners, giving you the greatest flexibility to choose from cloud providers that meet your unique business requirements. And last year we launched a program called VMware Cloud Verified, the most complete embodiment of the VMware Cloud Foundation offering by our cloud partners. This logo lets you know that a provider has achieved the highest standard for cloud infrastructure, and that you can scale and deliver your hybrid cloud in partnership with them. In particular,
we've been thrilled to see the momentum that we've had with IBM as a huge partner. Our business with them has grown extraordinarily rapidly, in triple digits, and not just in customer count, which is now over 1,700, but also in the depth of customers moving large portions of their workloads. As you see by the picture, we're very proud of the scope of our partnerships on a global basis: the highest standard of hybrid cloud for you, the VMware Cloud Verified partners. Now, when we come back to this picture, we're growing in our definition of what the hybrid cloud means, and through VMware Cloud Foundation we've been able to unify the private and the public cloud together as never before. But we're also seeing that many of you are interested in how to extend that infrastructure further and farther, and we'll simply call that the edge. How do we move data center resources and capacity closer to where the data is being generated and the operations need to be performed? Simply, the edge. We'll dig into that a little bit more, but as we do, one of the things we offer today with what we just talked about with Amazon and our VCPP partners is that they can consume the full VMware Cloud Foundation as a service; but today we're only offering that in the public cloud. Until Project Dimension. Project Dimension allows us to extend VMware Cloud Foundation, delivered as a service, to private, public, and the edge. Today we're announcing the tech preview of Project Dimension: VMware Cloud Foundation in a hyperconverged appliance. We've partnered deeply with Dell EMC and Lenovo as the first partners to bring this to the marketplace, built on that same proven infrastructure with a hybrid cloud control plane, so literally, just like we're managing VMware Cloud today, we're able to do that for your on-premise,
your small or remote office, or your edge infrastructure, through that exact same as-a-service management and control plane: a complete VMware-operated, end-to-end environment. This is Project Dimension: taking the VCF stack, the full VMware Cloud Foundation stack, and making it available in the cloud, at the edge, and on premise as well, a powerful solution operated by VMware. Project Dimension gives us a fundamental building block in our approach to making customers even more agile, flexible, and scalable, and it's a key component of our strategy as well. So let's click into that edge a little bit more. We think about the edge in the following layers. The compute edge: how do we get the data, operations, and applications closer to where they need to be? If you remember, last year I talked about this pendulum swinging between centralization and decentralization; edge is a decentralization force. We're also excited that we're moving to the edge of the devices as well, and we're doing that in two ways: one with Workspace ONE for human-optimized devices, and the second is Project Pulse, or VMware Pulse. Today we're announcing Pulse 2.0, where you can consume it as a service as well, with integrated security, and we've now scaled Pulse to support 500 million devices. Isn't that incredible? I mean, this is getting to scale: billions and billions. And finally, networking is a key component. We're stretching the networking platform and evolving how that edge operates in a more cloud- and as-a-service-like way, and this is where NSX SD-WAN with VeloCloud is such a key component of delivering edge network services as well. Taken together, the device side, the compute edge, and rethinking and evolving the networking layer: that is the VMware edge strategy in summary. We see businesses are on this multicloud journey, right?
How do we bring their private and public clouds together, the hybrid cloud? But they're also on a journey for how they work and operate across the public cloud, and in the public cloud we have this torrid innovation. Andy's here; he's announcing 1,500 new services a year, extraordinary innovation, and it's the same for Azure or Google or IBM Cloud. But it also creates complexity. As we said, businesses are using multiple public clouds: how do I operate them? How do I make them work? How do I keep track of my accounts and users? That creates a set of cloud operations problems as well, in the complexity of doing that. How do you make it work? For that, we see these common themes of cloud cost, compliance, and analytics that keep coming up, and we're seeing in our customers that a new role is emerging: the cloud operations role, the person who's figuring out how to make these multicloud environments work and keep track of who's using what and which data is landing where. Today I'm thrilled to tell you that VMware is acquiring the leader in this space: CloudHealth Technologies. Thank you. CloudHealth Technologies supports Amazon, Azure, and Google today. They have some 3,500 customers, some of the largest and most respected brands in the as-a-service industry, and a SaaS business with rapidly expanding feature sets. We will take CloudHealth and make it a fundamental platform and branded offering from VMware. We will add many of the other VMware components into this platform, such as our Wavefront analytics and our CloudCoreo compliance, and many of the other VMware products will become part of the CloudHealth suite of services. We will be enabling that through our enterprise channels as well as through our MSP and VCPP partners.
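At its core, the cloud operations problem described above, keeping track of who is using what across several public clouds, reduces to rolling up billing line items by provider and by owner. Here is a minimal, vendor-neutral sketch of that roll-up; the record fields and figures are made up for illustration and are not CloudHealth's data model or API.

```python
# Minimal sketch of multicloud cost tracking: aggregate billing line
# items by (cloud, team) so an operations role can see who uses what.
# Records and dollar figures are hypothetical.
from collections import defaultdict

def rollup(line_items):
    """Aggregate (cloud, team) -> total monthly spend in dollars."""
    totals = defaultdict(float)
    for item in line_items:
        totals[(item["cloud"], item["team"])] += item["cost"]
    return dict(totals)

items = [
    {"cloud": "aws",   "team": "web",  "cost": 1200.0},
    {"cloud": "aws",   "team": "data", "cost": 800.0},
    {"cloud": "azure", "team": "web",  "cost": 300.0},
]
print(rollup(items))
# {('aws', 'web'): 1200.0, ('aws', 'data'): 800.0, ('azure', 'web'): 300.0}
```

A real platform layers normalization (each provider's billing export has a different schema), tagging policy, and compliance rules on top of this basic aggregation.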
Simply put, we will make CloudHealth the cloud operations platform of choice for the industry. I'm thrilled today to have Joe Kinsella, the CTO and founder. Joe, please stand up. Thank you, Joe, and to your team of a couple hundred, mostly in Boston: welcome to the VMware family, the VMware community. It is a thrill to have you as part of our team. Thank you, Joe. We're also announcing today, and you can think of this much like we had vRealize Operations and vRealize Automation, the complement to the CloudHealth operations: VMware Cloud Automation. Some of you might have heard of this in the past as Project Tango. Today we're announcing the initial availability of VMware Cloud Automation services: assemble and manage complex applications, automate their provisioning and cloud services, and manage them through a brokerage service. With the initial availability of Cloud Automation services today and the acquisition of CloudHealth as a platform, VMware has the most complete set of multicloud management tools in the industry, and we're going to do so much more. So we've seen this picture of the multicloud journey that our customers are on, and we're working hard to bridge across these worlds of innovation, the multicloud world. We're doing many other things, and you're going to hear a lot at the show today. This year we're also giving the tech preview of the VMware cloud marketplace for our partners and customers, and today Dell Technologies is announcing their cloud marketplace to provide a self-service portfolio of Dell EMC technologies. We're fundamentally in a unique position to accelerate your multicloud journey. So we've built out this any-cloud piece, but right in the middle of any cloud is the network. And when we think about the network, we're just so excited about what we have done and what we're seeing in the industry. So let's click into this a little further.
We've gotten a lot done over the last five years in networking. Look at these numbers: 80 million switch ports have been shipped. We are now 10x larger than number two in software-defined networking. We have over 7,500 customers running on NSX, and maybe the stat that I'm most proud of is that 82 percent of the Fortune 100 has now adopted NSX. You have made NSX the standard in software-defined networking. Thank you very much. Thank you. When we think about this journey that we're on: we started out saying, hey, we've got to break the chains inside of the data center, as we said, and NSX became the software-defined networking platform. We started to deliver it through our cloud provider partners; IBM made a huge commitment to partner with us and deliver this to their customers. We then said, boy, we're going to make it fundamental to all of our cloud services, including AWS, and we built this bridge called the Hybrid Cloud Extension. We said we're going to build it natively into what we're doing with telcos, with Azure and Amazon as a service. We acquired the SD-WAN leader VeloCloud, the hottest product in VMware's portfolio today, with the opportunity to fundamentally transform branch and wide-area networking, and we're extending it to the edge. Literally, the world has become this complex network. We have seen the world go from the old model defined by rigid boundaries to a distributed world, and simply put, in a distributed world, hardware cannot possibly work. We're empowering customers to secure their applications and data regardless of where they sit, and when we think of the virtual cloud network, we say it's these three fundamental things: a cloud-centric networking fabric, with intrinsic security, and all of it delivered in software. The world is moving from data centers to centers of data, and they need to be connected, and NSX is the way that we will do that. Now, VMware is well known for not just talking but also showing.
No VMworld keynote is complete without great demonstrations, because you shouldn't believe me, only what we can actually show, and to do that I'm going to have our CTO come onstage. I used to be a CTO, and the CTO is the certified smart guy; he's also known as the chief talking officer, and today he's my demo partner. Please welcome VMware CTO Ray O'Farrell to the stage. Good morning, Pat. How are you doing? It's great, Ray, and thanks so much for joining us. Now, I promised that we're going to show off some pretty cool stuff here. We've covered a lot already, but are you up to the task? We're going to try to run through a lot of demos. We're going to do it fast, and you're going to have to keep me on time. If I ask an awkward question, slow me down. OK, that's my fault if you run long. OK, I got it. Let's jump right in. So as a CTO, I get to meet lots of customers. A few weeks ago I met the CIO of a large distribution company, and she described her IT infrastructure as consisting of a number of data centers central to her, plus a large number of warehouses globally, each of which had local hyperconverged compute and storage, primarily running surveillance and warehouse-management applications, and she posed me four questions. The first question she asked: how do I migrate one of these data centers to VMware Cloud on AWS? I want to get out of one of these data centers. OK, sounds like something Andy and I were just talking about. Exactly — exactly what you spoke to a few moments ago. She also wanted to simplify the management of the infrastructure in the warehouses themselves. OK, so edge and smaller data centers, like you've talked about. Her applications at the warehouses needed to run locally, but her developers wanted to develop using cloud infrastructure and cloud APIs, a little bit like the RDS announcement we spoke about earlier.
Her final question, looking to the future: make all this complicated management go away. I want to be able to focus on my applications, because that's what my business is about. So give me some new ways to automate all of this infrastructure from the edge to the cloud. Sounds pretty clear. Can we do it? Yes, we can. So we're going to dive right into the first of these demos, and the first demo we're going to look at is VMware Cloud on AWS. This is the best solution for accelerating this public cloud journey. Can we start the demo, please? What you're looking at here is one of those data centers, and you should be familiar with this product: it's the familiar vSphere client. You see it's got a bunch of virtual machines running in there. These are the virtual machines that we now want to migrate and move to VMC on AWS. So we're going to go through that migration right now, and to do that we use a product that you've seen already: HCX. However, HCX has some new cool features since the last time we demoed it, probably on this stage last year. One of those in particular is bulk migration, and there's a new cool thing here, because we want to move the data center en masse, and the concept is cloud motion with vSphere replication. What this does is replicate the underlying storage of the virtual machines using vSphere replication, so if and when you want to do the final migration, it actually becomes a vMotion. That's what you see going on right here: the replication is in place, and when you want to actually move those virtual machines, what you'll do is a vMotion. The key thing to think about here is that this is an actual vMotion: the VMs remain live as they're migrating, just as they would in a vMotion across one particular infrastructure. You can do a complete application or data center migration with no downtime.
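The bulk-migration flow Ray describes — replicate the storage continuously in the background, then do a live cutover once it's in sync — can be pictured as a small state machine. The sketch below is purely illustrative, not HCX code; every class and name in it is hypothetical.

```python
# Illustrative sketch of a replicate-then-cutover bulk migration, loosely
# modeled on the "cloud motion with vSphere replication" flow described
# above. Not real HCX code; all names here are hypothetical.

class VMMigration:
    def __init__(self, name, disk_gb):
        self.name = name
        self.disk_gb = disk_gb
        self.state = "pending"        # pending -> replicating -> synced -> migrated
        self.replicated_gb = 0.0

    def replicate(self, gb):
        """Copy storage in the background while the VM keeps running."""
        self.state = "replicating"
        self.replicated_gb = min(self.disk_gb, self.replicated_gb + gb)
        if self.replicated_gb >= self.disk_gb:
            self.state = "synced"     # storage in sync; cutover is now cheap

    def cutover(self):
        """Final switchover: only live state moves, so the VM stays up."""
        if self.state != "synced":
            raise RuntimeError(f"{self.name}: storage not yet in sync")
        self.state = "migrated"

vms = [VMMigration("web-01", 40), VMMigration("db-01", 120)]
for vm in vms:
    while vm.state != "synced":
        vm.replicate(gb=50)          # background replication passes
    vm.cutover()                     # zero-downtime vMotion-style cutover

print([vm.state for vm in vms])      # ['migrated', 'migrated']
```

The point of the split is visible in `cutover`: because the bulk of the data already moved while the VMs ran, the final step behaves like a vMotion rather than a long outage.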
It's a standard vMotion kind of experience. Wow, that is really impressive. That's correct. So one of the other things we want to talk about here: as we are moving these virtual machines from the on-prem infrastructure to the VMC on AWS infrastructure — unfortunately, when we set up the cloud on VMC on AWS, we only set up four hosts, and that might not be enough, because she is going to move the whole infrastructure of that data center. Now, this is something you and Andy referred to briefly earlier: this concept of elastic DRS. What elastic DRS does is allow VMC on AWS to react to the workloads as they're being created and pulled onto that infrastructure, and automatically pull new hosts into the VMC infrastructure along the way. So what you're seeing here is essentially VMC growing the infrastructure to meet the needs of the workloads themselves. Very cool. So we're seeing that elastic DRS, and we also see the EBS capabilities as well. Again, you guys spoke about this too: this is the ability to take the huge amount of storage that Amazon has in EBS and front that with vSAN, so you get the same experience of vSAN, but you get this enormous amount of storage capability behind it. Wow, that's incredible. I'm excited about this. This is going to enable customers to migrate faster and at larger scale than ever before. Correct. Now, she had a series of other questions. The second question was: what about all those data centers and those edge applications that I did not move? And this is where we introduce the project which you've heard of already today, called Project Dimension. What this does is give you the simplicity of VMware Cloud, but bring that out to the edge. What's basically going on here is that VMC on AWS is a service which manages your infrastructure in AWS.
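The elastic DRS behavior described above — pulling in hosts as migrated workloads arrive, releasing them when demand ebbs — amounts to a threshold policy. Here is a minimal sketch of that idea; the thresholds, host limits and sizing are invented for illustration and are not VMware's actual algorithm.

```python
# Minimal sketch of an elastic-DRS-style policy: add hosts when utilization
# runs hot, remove them when it runs cold. Thresholds and limits are
# hypothetical; the real service's policies differ.

HIGH, LOW = 0.80, 0.30          # hypothetical utilization thresholds
MIN_HOSTS, MAX_HOSTS = 4, 16    # the demo cluster started with four hosts

def rebalance(hosts, demand):
    """Return the new host count for a given aggregate demand (in host-units)."""
    while hosts < MAX_HOSTS and demand / hosts > HIGH:
        hosts += 1              # scale out: pull a new host into the cluster
    while hosts > MIN_HOSTS and demand / (hosts - 1) < LOW:
        hosts -= 1              # scale in: release an underused host
    return hosts

hosts = 4
for demand in [2.0, 5.0, 9.0, 3.0]:   # workloads migrating in, then ebbing
    hosts = rebalance(hosts, demand)
    print(demand, "->", hosts, "hosts")
```

The design point is that the cluster owner never files a ticket to add capacity: the policy reacts to the incoming vMotions and grows or shrinks the host count on its own.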
We now stretch that service out into your infrastructure, in your data center and at the edge, allowing us to manage that infrastructure in the same way. Once again, let's dive down into a demo and take a look at what this looks like. What you've got here is a familiar series of services available to you, one of which is Project Dimension. When you enter Project Dimension, you first get a view of all of the different infrastructure that you have available to you: your data centers, your edge locations. You can then dive deeply into one of these to get a closer look at what's going on. Here we're diving into one of these warehouses, and we see there's a problem: a networking problem going on in this warehouse. How do we know? We know because VMware is running this as a managed service. We are directly monitoring your infrastructure, we discover there's something going wrong here, and we automatically create the SR, so somebody is dealing with this. You have visibility into what's going on, but the VMware managed service is already chasing the problem for you. Very good. So now we're seeing this dispersed infrastructure with Project Dimension, but what's running on it? Well, before we get to what's running on it, you've got another problem, and the problem is of course that if you're managing a lot of infrastructure like this, you need to keep it up to date. And so once again, this is where the VMware managed service kicks in: we manage that infrastructure in terms of patching it and updating it for you. As an example, when we release a security patch — here's one for the recent L1 Terminal Fault — the VMware managed service is already on that, making sure that your on-prem and edge infrastructure is up to date. Very good. Now, what's running?
So, what's running? We mentioned this case of software running at the edge infrastructure itself, and these are workloads which are running locally in those edge locations. This is a surveillance application; you can see it here at the bottom, it says warehouse safety monitor. So this is an application which gathers images and puts them in a database; you can see the MySQL database on top there. Now, this is where we leverage the technology you just learned about when Andy and Pat spoke: the ability to take RDS and run that on your on-prem infrastructure. The block of virtual machines at the moment are the RDS components from Amazon running in your infrastructure or in your edge location, and this gives your developers the ability to leverage and operate against those APIs, but now the actual database infrastructure is running on-prem. You might be doing that for performance reasons, because of latency, or you might be doing it simply because this data center is not always connected to the cloud. When you take a look under the hood and see what's going on here, what you actually see is vSphere — a modified version of vSphere. You see this new concept of my custom availability zone: that is the availability zone running on your infrastructure which supports RDS. What's more interesting is when you flip back to the Amazon portal, which is typically what your developers are going to do. Once again, you see an availability zone in your Amazon portal, and this is the availability zone running on your equipment in your data center. So we've truly taken that RDS infrastructure and moved it to the edge, so the developer sees what they're comfortable with and the infrastructure team sees what they're comfortable with, bridging those two worlds. Fabulous. Right. So the final question, of course, was: what's next?
How do I begin to look to the future and say I want all of my infrastructure handled in an automated fashion? And when you think about that, one of the questions is: how do we leverage new technologies such as AI and ML to do that? Sorry, we're running a little bit late. What you've got here is: how do I blend AI and ML with the power of what's in the data center itself? And we can do that. We're bringing you AI and ML and fusing them together as never before to truly change how the data center operates. Correct. And it is this merging of these things together which is extremely powerful in my mind. This is a little bit like a self-driving vehicle. Think about a car driving down the street: a self-driving vehicle is consuming information from all of the environment around it — other vehicles, what's happening, everything from the weather — but it also has a lot of built-in knowledge, built up through self-learning and training along the way. And we've been collecting lots of that data for decades. Exactly, and we've got all that from all the infrastructure that we have; we can now bring that to bear. So what we're focusing on here is a project called Project Magna. Project Magna leverages all of this infrastructure. What it does is help connect the dots across huge datasets and gain deep insight across the stack, all the way from the application to the infrastructure to the public cloud and even the edge, and it leverages hundreds of control points to optimize your infrastructure on KPIs of cost and performance, even user-specified policies. This is the use of machine language — I'm sorry, machine learning; I'm going back some very early years there — this is the use of machine learning and AI, which will automatically transform how we operate.
How do you actually automate these data centers? The goal is true automation of your infrastructure, so you get to focus on the applications which really served needs of your business. Yeah, and you know, maybe you could think about that as in the past we would have described the software defined data center, but in the future we're calling it the self driving data center. Here we are taking that same acronym and redefining it, right? Because the self driving data center, the steep infusion of ai and machine learning into the management and automation into the storage, into the networking, into vsphere, redefining the self driving data center and with that we believe fundamentally is to be an enormous advance and how they can take advantage of new capabilities from bm ware. Correct. And you're already seeing some of this in pieces of projects such as some of the stuff we do in wavefront and so already this is how do we take this to a new level and that's what project magnet will do. So let's summarize what we've seen in a few demos here as we work in true each of these very quickly going through these demos. First of all, you saw the n word cloud on aws. How do I migrate an entire data center to the cloud with no downtime? Check, we saw project dementia, get the simplicity of Vm ware cloud in the data center and manage it at the age as a managed service check. Amazon rds and Vm ware. Cool Demo, seamlessly deploy a cloud service to an on premises environment. In this case already. Yes, we got that one coming in are in m five. And then finally project magna. What happens when you're looking to the future? How do we leverage ai and ml to self optimize to virtual infrastructure? Well, how did ray do as our demo guy? Thank you. Thanks. Thanks. Right. Thank you. 
So coming back to this picture, our GPS for the day: we've covered any cloud, so let's click into any application. And as we think about any application, we really view it as this breadth of the traditional, the cloud-native and SaaS. Kubernetes is quickly — maybe spectacularly — becoming seen as the consensus way that containers will be managed and automated, as the framework for how modern app teams are looking at their next-generation environment, quickly emerging as key to how enterprises build and deploy their applications today. And containers are efficient, lightweight, portable; they have lots of value for developers, but they also need to be run and operated, and they have many infrastructure challenges as well: management, automation, patch and lifecycle updates. The efficient rollout of new application services can be accelerated with containers, but we also have these infrastructure problems, and one thing we want to make clear is that the best way to run a container environment is on a virtual machine. In fact, every leader in public cloud runs their containers in virtual machines. Google, the creator and arguably the world leader in containers, runs them all in virtual machines — both their internal IT and GKE for external users — and they just announced GKE On-Prem on VMware for their container environments. Google and all major clouds run their containers in VMs, and simply put, it's the best way to run containers. Through what we have done collectively, we have solved the infrastructure problems, and as we saw earlier, cool new container apps are also typically some ugly combination of cool new and legacy and existing environments as well. How do we bridge those two worlds? As people rapidly move forward with containers and Kubernetes, we're seeing a certain set of problems emerge.
And Dan Kohn, the director of the CNCF — the Cloud Native Computing Foundation, the body for Kubernetes collaboration and the group that stewards the standardization of this capability — points out these four challenges: how do you secure them, how do you network them, how do you monitor them, and what do you do for the storage underneath them? Simply put, VMware is out to be, is working to be, is on our way to be the dial tone for Kubernetes. Now, some of you who are in your twenties might not know what that means, so go over to a gray hair or come and see me afterward and we'll explain what dial tone means. Or, maybe stated differently: the enterprise-grade standard for Kubernetes. And for that we are working together with our partners at Google as well as Pivotal to deliver VMware PKS: Kubernetes as an enterprise capability. It builds on BOSH, the lifecycle engine that's foundational to the Pivotal offerings today. It builds on, and is committed to stay current with, the latest Kubernetes releases. It builds on NSX, the SDN for container networking, and additional contributions that we're making, like Harbor, the VMware open-source contribution for the container registry. It packages those together and makes them available in hybrid cloud as well as public cloud environments. With PKS, operators can efficiently deploy, run and upgrade their Kubernetes environments on SDDC or on all public clouds, while developers have the freedom to embrace and run their applications rapidly and efficiently. Simply put: PKS, the standard for Kubernetes in the enterprise. And underneath that, NSX is emerging as the standard for software-defined networking. But when we think about that quote on the challenges of Kubernetes today, we see that networking is one of the huge challenges underneath, and in a containerized world things are changing even more rapidly.
My network environment is moving more quickly. NSX provides the environment to easily automate networking and security for rapid deployment of containerized environments. It fully supports VMware PKS, fully supports Pivotal's Application Service, and we're also committed to fully support all of the major Kubernetes distributions such as Red Hat, Heptio and Docker as well. NSX: the only platform on the planet that can address the complexity and scale of container deployments. Taken together, VMware PKS: the production-grade Kubernetes for the enterprise, available on hybrid cloud, available on major public clouds. Now, let's not just talk about it again; let's see it in action. Please welcome to the stage Wendy Cartee, senior director of cloud-native marketing for VMware, joining Ray. Thank you. Hi, everybody. So we're going to talk about PKS, because more and more new applications are built using Kubernetes and using containers. With VMware PKS we get to simplify the deployment and operation of Kubernetes at scale. Wendy, you're the expert on all of this, right? So can you take us through the scenario of how VMware PKS can really help a developer operating in the Kubernetes environment develop great applications, but also, from an administrator point of view, really handle things like networking, security and those configurations? Sounds great. I'd love to dive into the demo here. OK. Our demo is VMware PKS running Kubernetes on vSphere. Now, PKS has a lot of cool functions built in, one of which is NSX, and today what I'm going to show you is how NSX will automatically bring up network objects as Kubernetes namespaces are spun up. So we're going to start with the vSphere client, which has been extended to run PKS-deployed Kubernetes clusters. We're going to go into PKS instance one, and we see that there are five clusters running.
We're going to select one of the clusters, called application production, and we see that it is running NSX. Now, a cluster typically has multiple users, and users are assigned namespaces, and these namespaces are essentially a way to provide isolation and dedicated resources to the users in that cluster. So we're going to check how many namespaces are running in this cluster. We've brought up the Kubernetes UI, we're going to click on Namespaces, and we see that this cluster currently has four namespaces running. What we're going to do next is bring up a new namespace and show that NSX will automatically bring up the network objects required for that namespace. To do that, we're going to upload a YAML file, and your developer may actually use a kubectl command to do this as well. We're going to check the namespaces, and there it is: we have a new namespace called pks-rocks. Yeah. OK, it's great — we have a new namespace, and now we want to make sure it has the network elements assigned to it, so we're going to go to the NSX manager and hit refresh, and there it is: pks-rocks has a logical router and a logical switch automatically assigned to it, and it's up and running. So I want to interrupt here, because you made this look so easy, and I'm not sure people realize the power of what happened here. The developer, using the Kubernetes API infrastructure they're familiar with, added a new namespace, and behind the scenes PKS and NSX took care of the networking — a combination of NSX and what we do in PKS to truly automate this function. Absolutely. So this means that if you are on the infrastructure operations side, you don't need to worry about your developers spinning up namespaces, because NSX will take care of bringing the networking up and then bringing it back down when the namespace is no longer used. Right, but that's not it.
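What the demo shows — NSX reacting to a new namespace by creating a logical router and logical switch, and tearing them down when the namespace goes away — is at heart a controller pattern: watch namespace events, reconcile network objects to match. A toy sketch of that pattern follows; it is purely illustrative, not the real NSX/PKS integration, and every name in it is made up.

```python
# Toy sketch of the controller pattern behind the demo: when a Kubernetes
# namespace appears, allocate a logical router and logical switch for it;
# when it disappears, clean them up. Illustrative only — not the real
# NSX/PKS integration; all names are hypothetical.

class FakeNSX:
    def __init__(self):
        self.objects = {}                      # namespace -> network objects

    def reconcile(self, namespaces):
        """Make the network objects match the set of live namespaces."""
        for ns in namespaces:                  # namespace added -> wire it up
            if ns not in self.objects:
                self.objects[ns] = {"router": f"lr-{ns}", "switch": f"ls-{ns}"}
        for ns in list(self.objects):          # namespace gone -> tear down
            if ns not in namespaces:
                del self.objects[ns]

nsx = FakeNSX()
nsx.reconcile({"default", "kube-system", "pks-rocks"})
print(sorted(nsx.objects))                     # pks-rocks now has lr-/ls- objects
nsx.reconcile({"default", "kube-system"})      # namespace deleted
print("pks-rocks" in nsx.objects)              # False: networking torn down
```

The developer only ever touches the Kubernetes side (a YAML manifest or a kubectl command); the reconcile loop is what keeps the network layer in step without any operator involvement.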
Now, I was in operations before, and I know how hard it is for enterprises to roll out a new product without visibility. So PKS takes care of those day-two operational needs as well: while it's running your clusters, it's also exporting metadata so that your developers and operators can use Wavefront to gain deep visibility into the health of the cluster as well as the resources consumed by the cluster. So here you see the Wavefront UI, and it's showing you the number of nodes running, active pods, inactive pods, et cetera. You can also dive deeper into the analytics and take a look at information sliced by namespace, so you see pks-rocks there, and you see the number of active pods running as well as the CPU utilization and memory consumption of that namespace. So now pks-rocks is ready to run containerized applications and microservices. So you've just given us a very quick highlight of a demo here to see a little bit of what PKS does. Where can we learn more? We'd love to show you more; please come by the booth, where we have more cool functions running on PKS, and we'd love to have you come by. Excellent. Thank you, Wendy. Thank you. Yeah, so when we look at these types of workloads now running on vSphere — containers, Kubernetes — we also see a new type of workload beginning to appear, and these are workloads which are basically machine learning and AI, and in many cases they leverage a new type of infrastructure: hardware accelerators, typically GPUs. What we're going to talk about here is how NVIDIA and VMware have worked together to give you flexibility to run sophisticated VDI workloads, but also to leverage those same GPUs for deep-learning inference workloads, also on vSphere. So let's dive right into a demo here.
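The per-namespace view described above — pod counts, CPU and memory sliced by namespace — is an aggregation over pod-level samples. A hedged sketch of that rollup, with invented sample data (real metrics would come from the cluster's exporter, not a hard-coded list):

```python
# Sketch of the kind of per-namespace rollup shown in the Wavefront UI:
# group pod-level samples by namespace and sum CPU/memory. The sample
# data is invented for illustration.

from collections import defaultdict

pods = [  # (namespace, pod, cpu_millicores, memory_mib)
    ("pks-rocks", "web-0", 250, 512),
    ("pks-rocks", "web-1", 300, 640),
    ("default",   "tool-0", 50, 128),
]

def rollup(samples):
    """Aggregate pod samples into per-namespace totals."""
    per_ns = defaultdict(lambda: {"pods": 0, "cpu_m": 0, "mem_mib": 0})
    for ns, _pod, cpu, mem in samples:
        per_ns[ns]["pods"] += 1
        per_ns[ns]["cpu_m"] += cpu
        per_ns[ns]["mem_mib"] += mem
    return dict(per_ns)

stats = rollup(pods)
print(stats["pks-rocks"])   # {'pods': 2, 'cpu_m': 550, 'mem_mib': 1152}
```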
What you're seeing here is your standard vRealize Operations product, and you see we've got two sets of applications here: a VDI desktop workload and machine learning. The graph is showing what's happening with the VDI desktops. These are office workers leveraging these desktops every day, so of course the infrastructure is super busy during the daytime when they're in the office, but the green area shows it's not being used very heavily outside of those times. So let's take a look at what happens to the machine learning application. In this case, this organization leverages those available GPUs to run the machine learning operations outside the normal working hours. Let's take a little bit of a deeper dive into what the application is before we see what we can do from an infrastructure and configuration point of view. So this machine learning application processes a vast number of images and categorizes them, and as it's doing so, it is putting each of these in a database, and you can see it's operating here relatively fast, leveraging some GPUs to do that. A typical image-processing type of machine learning problem. Now let's dive in and look at the infrastructure which is making this happen. First of all, we're going to look only at the VDI infrastructure here. I've got a bunch of these VDI applications running. What I want to do is move these so that I can make this image-processing application run a lot faster. Now, normally you wouldn't do this, but Pat insisted that we do this demo at 10:30 in the morning when the office workers are in there, so we're going to move all the VDI workloads over to the other cluster, and that's what you see going on right now. So as they move over to this other cluster, what we are now doing is freeing up all of the infrastructure.
We see the GPUs that the VDI workload was using here; we see the workloads moving across, and now you've freed up that infrastructure. So now we want to take a look at the machine learning application itself and see how we can make use of that newly freed-up infrastructure. What we've got here is the application running using one GPU in a vSphere cluster, but I've got three more GPUs available now because I've moved the VDI workloads. We simply modify the application, let it know that these are available, and you suddenly see an increase in the processing capability because of what we've done here in terms of the flexibility of accessing those GPUs. So what you see here is that the same GPUs that you use for VDI, which you probably have in your infrastructure today, can also be used to run sophisticated machine learning and AI types of applications on your vSphere infrastructure. So let's summarize what we've seen in the various demos in this section. First of all, we saw how VMware PKS simplifies the deployment and operation of Kubernetes at scale. What we've also seen is that, leveraging NVIDIA GPUs, we can now run the most demanding workloads on vSphere. When we think about all of these applications and these new types of workloads that people are running, I want to take one second to speak to another workload that we're seeing beginning to appear in the data center, and this is of course blockchain. We're seeing an increasing number of organizations evaluating blockchains for smart contract and digital consensus solutions. So this technology is really becoming, or potentially becoming, critical to how businesses will interact with each other, how they will work together. With Project Concord, an open-source project that we're releasing today, you get the choice, performance and scale of verifiable trust, which you can then bring to bear and run in the enterprise. But this is not just another blockchain implementation.
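The scheduling idea in the GPU demo — VDI owns most of the GPUs during office hours, the ML job picks them up off-hours (or once the VDI workloads are migrated away), and inference throughput scales with the GPUs granted — can be captured in a few lines. This is a simplified sketch; the hour boundaries, GPU counts and per-GPU rate are invented for illustration.

```python
# Simplified sketch of the time-based GPU sharing shown in the demo:
# VDI holds most GPUs during office hours; off-hours, or after the VDI
# workloads are moved to another cluster, the ML job gets all of them
# and throughput scales. Numbers are hypothetical.

TOTAL_GPUS = 4
IMAGES_PER_GPU_PER_MIN = 1200   # hypothetical per-GPU inference rate

def ml_gpus(hour, vdi_migrated=False):
    """GPUs available to the ML job at a given hour of day."""
    office_hours = 9 <= hour < 17
    if office_hours and not vdi_migrated:
        return 1                # VDI keeps three of the four GPUs busy
    return TOTAL_GPUS           # nights, or after moving VDI off the cluster

def throughput(gpus):
    return gpus * IMAGES_PER_GPU_PER_MIN

print(throughput(ml_gpus(hour=10)))                     # one GPU at 10am
print(throughput(ml_gpus(hour=10, vdi_migrated=True)))  # after the demo's move
print(throughput(ml_gpus(hour=22)))                     # overnight window
```

The same hardware serves both tenants; what changes is only which workload holds the GPUs at a given time, which is exactly the flexibility the demo is making a point of.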
We have focused very squarely on making sure that this is good for enterprises. It focuses on performance, it focuses on scalability. We have seen examples where running consensus algorithms has taken over 80 days on some of the most common and widely used infrastructure in blockchain, and with Project Concord you can do that in two and a half hours. So I encourage you to check out this project on GitHub today; you'll also see lots of activity around the whole conference speaking about this. Now we're going to dive into another section, which is the any-device section, and for that I need to welcome Pat back up here. Thank you, Pat. Thanks, Ray. So diving into the any-device piece of the puzzle: as we think about the superpowers that we have, maybe there is no area where they are more visible than in the any-device aspect of our picture. As we think about these superpowers, think about mobility and how it's enabling new things like desktop-as-a-service; in the mobile area, this breadth of smartphones and devices; AI and machine learning allowing us to manage them and secure them; and this expanding envelope of devices at the edge that need to be connected — wearables, 3D printers and so on. We've also seen increasing research that says engaged employees are at the center of business success. Engaged employees are the critical ingredient for digital transformation, and frankly, this is how I run VMware: I have my device and my work, all my applications — every one of my 23,000 employees is running on our transformed Workspace ONE environment. Research shows that companies that give employees ready, anytime access are nearly three times more likely to be leaders in digital transformation, and that employees spend 20 percent of their time today on manual processes that can be automated.
Team collaboration and the speed of decisions increase by 16 percent with engaged employees on modern devices. Simply put, this is a critical aspect of enabling your business. But you remember this picture from the silos that we started with: each of these environments has its own tribal communities of management, security and automation associated with it, and the complexity associated with these is mind-boggling. And as we start to think about these, remember the 'I'm a PC' and 'I'm a Mac' ads? Well, now you have 'I'm an iOS', 'I'm a Droid', 'I'm a VDI', and now 'I'm a connected printer' and 'I'm a connected watch'. You remember Citrix management, and Good is now bad, and SCCM, a failed model, and VPNs. The chaos is now over. At the center of that is VMware Workspace ONE: get out of the business of managing devices, automate them from the cloud, but still have enterprise-grade security, and cloud-based analytics that bring new capabilities to this critical topic. You'll focus your energy on creating employee and customer experiences — new capabilities like our AirLift, the new capability to help customers migrate from their SCCM environment to modern management, expanding the use of Workspace ONE Intelligence. Last year we announced the Chromebook and a partnership with HP, and today I'm happy to announce the next step in our partnership with Dell. Today we're announcing Dell Provisioning for VMware Workspace ONE as part of Dell's Ready to Work solutions. Dell is taking the next leap and bringing Workspace ONE into the core of their client offerings. The way you can think about this: literally, a Dell drop-shipped laptop shows up for a new employee — day-one productivity. You give them their credential, and everything else is delivered by Workspace ONE: your image, your software, everything patched and upgraded. Transforming your business, beginning at that device experience that you give to your customer.
And again, we don't want to just talk about it, we want to show you how this works. Please welcome to the stage Renu, the head of our desktop products marketing. >> Thank you. So we just heard from Pat about how Workspace ONE, integrated with Dell laptops, is really set up to manage Windows devices. What we're broadly focused on here is how we get a truly modern management system for these devices, but one that has intelligence behind it, to make sure that we keep a good understanding of how to keep these devices always up to date and secure. Can we start the demo, please? So what we're seeing here is the front screen of Workspace ONE, and you see you've got multiple devices, a little bit like that demo that Pat showed. I've got iOS, Android, and of course I've got Windows. Renu, can you please take us through how Workspace ONE really changes the ability of an IT administrator to update and manage Windows in our environment? >> Absolutely. With Windows 10, Microsoft has finally joined the modern management party, and we are really excited about that. Now, the good news about modern management is the frequency of OS updates and how quickly they come out, because you can address all those security issues that are hitting our radar on a daily basis. But the bad news about modern management is also the frequency of those updates, because all of us IT admins have to test each and every one of our applications with that latest version, because we don't want to roll out an update that causes any problems. With Workspace ONE, we simply automate and provide you with the app compatibility information right out of the box, so you can now automate that update process. Let's take a quick look. Let's drill down further into the Windows devices. What we'll see is that only a small percentage of those devices are on the latest version of the operating system.
Now, that's not a good thing, because it might have an important security fix. Let's scroll down further and see what the issue is. We find that it's related to app compatibility. In fact, 38 percent of our devices are blocked from being upgraded, and the issue is app compatibility. Now, we were able to find that not by asking the admins to test each and every one of those apps, but by combining Windows analytics data with app intelligence out of the box, and we provided that information right here inside of the console. Let's dig down further and see what those devices and apps look like. >> So, Renu, this is the part that I find most interesting. If I am a system administrator, at this point Workspace ONE is giving me a key piece of information. It says if you proceed with this update, it's going to fail 84, 85 percent of the time. So that's an important piece of information. But not only is it telling me that, it is telling me, roughly speaking, why it thinks it's going to fail. We've got a number of apps which are not ready to work with this new version, particularly the Mondo Card Sales Lead Tracker app. So what we need to do is get engineering to tackle the problems with this app and make sure that it's updated. So let's get fixing it. >> In order to fix it, what we'll do is create an automation, and we can do this right out of the box. In this automation, we'll open up a Jira ticket right from within the console to inform the engineers about the problem. Not just that, we can also flag and send a notification to that engineering manager, so that it's top of mind and they can get working on this fix right away. Let's go ahead and save that automation right here. >> And there's the automation that we just saved. So what's happening here is essentially that this update is now scheduled.
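The automation described in the demo boils down to a simple rule: if an app's compatibility failures block enough devices from an OS update, open an engineering ticket and flag the manager. A minimal sketch of that decision logic is below; the function, field names, and threshold are illustrative assumptions, not Workspace ONE's actual API.

```python
# Hypothetical sketch of the demoed automation: given per-device compatibility
# data, decide which tickets to open for apps that block the OS update.
# All names (plan_update_actions, "blocking_app", etc.) are invented for
# illustration; a real system would call the console's automation engine.

def plan_update_actions(devices, compat_threshold=0.2):
    """Return one ticket per blocking app when too many devices are blocked."""
    blocked = [d for d in devices if d["blocking_app"] is not None]
    tickets = []
    if devices and len(blocked) / len(devices) >= compat_threshold:
        # Group the blocked devices by the app failing compatibility checks.
        by_app = {}
        for d in blocked:
            by_app.setdefault(d["blocking_app"], []).append(d["id"])
        for app, ids in by_app.items():
            tickets.append({
                "summary": f"{app} blocks OS update on {len(ids)} devices",
                "notify": "engineering-manager",  # keep the fix top of mind
                "devices": ids,
            })
    return tickets
```

In the demo's terms, the 38 percent of blocked devices would clear the threshold, a ticket for the Mondo Card Sales Lead Tracker app would be created, and the update would stay held until the app is fixed.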
We can go and update all of these Windows devices, but Workspace ONE is holding the process of proceeding with that update, waiting for the engineers to fix the app which is going to cause the problem. >> That's going to take them some time, right? So, the engineers have been working on this, they have a fix, and let's go back and see what's happened to our devices. >> Going back into the OS updates, what we'll find is that we've now unblocked those devices from being upgraded. The 38 percent has drastically dropped, and IT can rest easy knowing that all of the devices are compliant and on the latest version of the operating system. And again, this is just a snapshot of the power of Workspace ONE. To learn more and see more, I invite you all to join our EUC showcase keynote later this evening. >> Okay. So, we've spoken about the presence of these new devices that IT needs to be able to manage and operate across everything that they do. But what we're also seeing is the emergence of a whole new class of computing device. These are devices which we commonly speak of as being at the edge: embedded devices, or IoT. In many cases these will be in factories, they'll be in your automobiles, they'll be in the building, controlling the building itself, air conditioning, etc. They're quite often in some form of industrial environment, something like this, where you've got a wind farm with compute embedded in each of these turbines. This is a new class of computing which needs to be managed and secured, and we think virtualization can do a pretty good job of that. It's a new virtualization frontier, right at the edge, for IoT and IoT gateways, and that's going to open up a whole new realm of innovation in that space. Let's dive down and take a look at a demo of this space. >> Well, let's do that.
What we're seeing here is a wind turbine farm, very different from the data center we're used to, and all the compute infrastructure is being managed by vCenter. We see two edge gateway hosts, and they're running a very mission-critical safety watchdog VM right on there. Now, the safety watchdog VM is in FT mode, because it's collecting a lot of the important sensor data and running the mission-critical operations for the turbine. >> So FT mode, or fault tolerance mode, is a pretty sophisticated virtualization feature, allowing two VMs to essentially run in lockstep, so if there's a failure, the second one takes over immediately. So this sophisticated virtualization feature can be brought all the way out to the edge? >> Exactly. So, just like in the data center, we want to perform an update. As we perform that update, the first thing we'll do is suspend FT on that safety watchdog. Next, we'll put two-oh-five into maintenance mode. Once that's done, we'll see the power of vMotion that we're all familiar with: we'll start to see all the virtual machines vMotion over to the second, backup host. Again, all the maintenance, all the updates, without skipping a heartbeat, without taking down any daily operations. >> So what we're seeing here is the basic power of virtualization being brought out to the edge: vMotion, maintenance mode, et cetera. Great. What's the big deal? We've been doing that for years. Come on, what's the big deal? >> So, you're on the edge. When you get to the edge, Pat, you're dealing with a whole new class of infrastructure. You're dealing with embedded systems and new types of CPU architectures and processors. This whole demo has been done on Arm64. Virtualization brought to Arm64 for embedded devices. >> So we're doing this on Arm, on the edge? >> Correct. Specifically focused on embedded, for edge OEMs. >> Okay, now that's good. Thank you, Renu. Actually, we've got a summary here.
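The maintenance sequence narrated above (suspend FT on the watchdog, enter maintenance mode, vMotion everything to the backup host) can be sketched as a toy simulation. This is purely illustrative: the host names, dictionary layout, and `evacuate_host` function are assumptions for the sketch, not vSphere or vCenter APIs.

```python
# Toy simulation of the demoed edge-update flow. A real environment would drive
# these steps through vCenter; here each gateway host is just a dict.

def evacuate_host(hosts, source, target):
    """Suspend FT, enter maintenance mode, and migrate all VMs off `source`."""
    src, dst = hosts[source], hosts[target]
    # Step 1: suspend fault tolerance on protected VMs before migrating them.
    for vm in src["vms"]:
        if vm.get("ft"):
            vm["ft"] = "suspended"
    # Step 2: maintenance mode blocks any new placements on the source host.
    src["maintenance"] = True
    # Step 3: live-migrate (vMotion) every VM to the backup host.
    dst["vms"].extend(src["vms"])
    src["vms"] = []
    return hosts
```

Once the source host is patched, the inverse steps would apply: exit maintenance mode, migrate the watchdog back, and resume FT so the lockstep secondary is protected again.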
Pat, just a second before you disappear. A lot to rattle off from what we've just seen, right? We've seen Workspace ONE cross-platform management, and we've also seen, of course, ESXi on Arm, bringing the power of ESXi to the edge on Arm64 platforms. >> Okay. Thank you. >> Thanks. Now we'll see a look at a customer who is taking advantage of everything that we just saw, and again, a story of a customer that is changing lives in a fundamental way. Let's see Make-A-Wish. >> So, when a family gets the news that a child is sick, and it's a critical illness, it could be a life-threatening illness, the whole family is turned upside down. Imagine somebody comes to you and they say, "What's the one thing you want that's in your heart? You tell us, and then we make that happen." "So, I was just calling to give you the good news that we're going to be able to grant Jackson a wish." Make-A-Wish is the largest wish-granting organization in the United States. Make-A-Wish was featured in a CBS 60 Minutes episode. Interestingly, it got a lot of hits, but unfortunately for the IT team, the whole website crashed. Make-A-Wish is going through a program right now where we're centralizing technology and putting certain security standards in place at our chapters. So, what you're seeing here is that we're configuring certain cloud services to make sure that they are always able to deliver on the mission, whether they have a local problem or not. As we continue to grow the partnership and work with VMware, it's enabling us to become more efficient in our processes, and it allows us to grant more wishes. There was a little girl. She had a two-year-old brother. She just wanted a puppy, and she was forthright: "I want to name the puppy my name, so my brother would always have me." That's from a five-year-old. We can't change their medical outcome, but we can change their spiritual outcome, and we can transform their lives. >> Thank you.
Working together with you, we are truly making wishes come true. The last topic I want to touch on today, and maybe the most important to me personally, is security. Fundamentally, when we think about this topic of security, I'll say it's broken today, and I would just say that the industry got it wrong. We're trying to bolt on, or chasing bad, and when we think about our security spend, we're spending more and we're losing more. Every day we're investing more in this aspect of our infrastructure, and we're falling further behind. We believe that we have to have far fewer security products and much more security. Fundamentally, if you think about the problem: we build infrastructure, generic infrastructure, we then deploy applications, all kinds of applications, and we're seeing all sorts of threats launched at us daily, tens of millions of them. Your simple virus scanner has tens of millions of rules running, changing many times a day. We simply believe the security model needs to change. We need to move from bolted on, and chasing bad, to an environment that has intrinsic security and is built to ensure good. This is the idea of built-in security. We are taking every one of the core VMware products and we are building security directly into it. We believe with this we can eliminate much of the complexity: many of the sensors and agents and boxes. Instead, they'll directly leverage the mechanisms in the infrastructure, and we're using that infrastructure to lock it down, to behave as we intended it to, to ensure good. On the user side with Workspace ONE, on the network side with NSX and microsegmentation, on storage with native encryption, and on the compute with App Defense, we are building in security. We're not chasing threats or adding on, but radically reducing the attack surface. When we look at our applications in the data center, you see this collection of machines running inside of it, right?
They're typically running on vSphere, and those machines are increasingly connected through NSX. Last year we introduced the breakthrough security solution called App Defense. App Defense leverages the unique insight we get into the application, so that we can understand the application and map it into the infrastructure. You can then take that understanding, that manifest of its behavior, and lock those VMs down to their intended behavior, and we do that without the operational and performance burden of agents and other rear-view approaches to attack detection. We're shrinking the attack surface, not chasing the latest attack vector. And this idea of bolt-on versus chasing bad, you see it in the network. Machines have lots of connectivity, lots of applications running, and when something bad happens, it basically has unfettered access to move horizontally through the data center. Most of our security is north-south; most of the attacks are east-west. We introduced this idea of microsegmentation five years ago, and with it we're enabling organizations to secure networks and separate sensitive applications and services as never before. This idea isn't new, it just was never practical before NSX. But we're not standing still. Our teams are innovating to leap beyond. What's next, beyond microsegmentation? We see it in three simple words: learn, lock, adapt. Imagine a system that can look into the applications and understand their behavior and how they should operate. We're using machine learning and AI, instead of chasing bad, to be able to ensure good. That system can then lock down the application's behavior, so the system consistently operates that way. But finally, we know we have a world of increasingly dynamic applications, and as we move to more containerized microservices, we know this world is changing, so we need to adapt. We need to have more automation to adapt to the current behavior.
Today I'm very excited to have two major announcements that are delivering on this vision. The first of those is vSphere Platinum: our flagship VMware vSphere product now has App Defense built right in. Platinum will enable virtualization teams... yeah, go ahead, let's hear it for that. Platinum will enable virtualization teams to make an enormous contribution to the security profile of your enterprise. It can see what every VM is for, its purpose, its behavior, and tell the system that's what it's allowed to do, dramatically reducing the attack surface without impact on operations or performance. The capability is so powerful, so profound, that we want you to be able to leverage it everywhere, and that's why we're building it directly into vSphere. vSphere Platinum: I call it the burger and fries. Nobody leaves the restaurant without the fries. Who would possibly run a VM in the future without turning security on? That's how we want this to work going forward: vSphere Platinum. And as powerful as microsegmentation has been as an idea, we're taking the next step with what we call adaptive microsegmentation. We are fusing together App Defense and vSphere with NSX, to allow us to align the policies of the application through vSphere and the network. We can then lock down the network and the compute, and enable this automation of the microsegment formation. Taken together: adaptive microsegmentation. But again, we don't want to just tell you about it, we want to show you. Please welcome to the stage Vijay, who heads our machine learning team for App Defense. >> Vijay, very good to see you. Thanks for joining us. So, I talked about this idea of being able to learn, lock, and adapt. Can you show it to us? >> Yeah, thank you. With vSphere Platinum, what we have done is we have put everything you need to learn, lock, and adapt right into the infrastructure. The next time you bring up your client, you'll actually see a difference right in there.
Let's go to the demo. There you go. When you look at App Defense there, what you see is all your guest virtual machines and all your hosts, hundreds of them, and thousands of virtual machines, enabled for App Defense right in there. What that does is immediately get you visibility into the processes running on those virtual machines, and the risk. For the first time, think about it, for the first time you're looking at the infrastructure through the lens of an application. Here, for example, is the e-commerce application. You can see the components that make up that application and how they interact with each other: a specific process, at a specific IP address, on a specific port. That's what you get. >> So we're learning the behavior? >> Yes. >> That's very good. But how do you make sure you only learn good behavior? >> Exactly. How do we make sure that it's not bad? We actually verify and ensure it's all good. We ensure that every binary's reputation is verified, and that its behavior is verified. Let's go to svchost, for example. This process can exhibit hundreds of behaviors across numerous VMs. What we do here is verify those behaviors with machine learning models that have been trained on millions of instances of good and bad behavior, and then automatically verify them for you. >> Okay, so we learn. Simple enough: learn. Now, lock. How does that work? >> Well, once you've learned the application, locking it is as simple as clicking on that Verify and Protect button, and then you can lock both the compute and the network, and it's done. So, we've pushed those policies into NSX, microsegmentation has been established, and we've actually locked down the compute. >> What about the operating system? >> Exactly. Let's first look at compute: we've protected the processes, and the behaviors are locked down to exactly what is allowed for that application. And we have baked-in policies and programmed your firewall.
This is NSX being configured automatically for you, all with one single click. >> Very good. So, we said learn, lock. Now, how does this adapt thing work? >> Well, change is the only constant with modern applications; applications change on a continuous basis. What we do is actually pretty simple. We look at every change as it comes in and determine whether it's good or bad. If it's good, we allow it and update the policies. If it's bad, we deny it. Let's look at an example. This process is exhibiting a behavior that we've not seen during the learning period. >> Okay, so this machine has never behaved this way before. >> It hasn't behaved that way. But our machine learning models have seen thousands of instances of this process. They know this is normal: it talks on 389 all the time. So it's done a few things. It's lowered the criticality of the alarm. >> Okay, so no false positive. >> Exactly. The bane of security operations: false positives. And it has gone and updated those locks on compute and network to allow for that behavior. The application continues to work. >> Okay, so we can learn, adapt, and act, right through the compute and the network. What about the client? >> Well, that we do with Workspace ONE Intelligence, to protect and manage the end-user endpoint. Workspace ONE Intelligence and NSX actually work together to protect your entire data center infrastructure. But don't take my word for it; you can watch it for yourself tomorrow at Tom Corn's keynote, at 1:00 PM. Be there or be nowhere. >> I love it. Thank you, Vijay. Great job. >> Thank you so much. >> So, this idea of intrinsic security and ensuring good, we believe, will fundamentally change how security is delivered in the enterprise, and change the entire security industry. We've covered a lot today. I'm thrilled, as I stand on this stage, to stand before this community that truly has been at the center of changing the world of technology over the last couple of decades in IT.
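The learn, lock, adapt loop walked through in the demo can be sketched in a few lines: learn a manifest of observed behaviors, lock to it, then score each new behavior and either allow it (updating the manifest) or raise an alarm. This is a toy sketch of the idea only; the functions are invented for illustration, and the `looks_normal` callback stands in for the trained models mentioned on stage.

```python
# Toy sketch of learn / lock / adapt. A behavior is modeled as a
# (process, port) tuple; the policy is simply the set of allowed behaviors.

def make_policy(observed):
    """Learn: the allowed set is what was seen during the learning period."""
    return set(observed)

def adapt(policy, behavior, looks_normal):
    """Adapt: decide what to do with a behavior against the locked-down policy."""
    if behavior in policy:
        return "allow"                # within the locked-down manifest
    if looks_normal(behavior):        # e.g. models know svchost talks on 389
        policy.add(behavior)          # update the lock instead of alarming
        return "allow"
    return "alarm"                    # deny and raise a high-criticality alarm
```

In the demo's example, a process talking on 389 outside the learning period would be scored as normal, the compute and network locks would be updated, and the alarm criticality lowered rather than generating a false positive.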
We've talked about this idea of the superpowers of technology, and how, as they accelerate, so does the huge demand for what you do. In the same way we together created the idea of the virtual infrastructure admin, think about all the jobs that we are spawning from the discussion we had today, the new skills, the new opportunities for each one of us in this room: quantum programmer, machine learning engineer, IoT and edge expert. We're on the cusp of so many new capabilities, and we need you and your skills to do that: the skills that you possess, the ability that you have to work across these silos of technology and enable tomorrow. I'll tell you, I am now 38 years in the industry, and I've never been more excited, because together we have the opportunity to build on the things that collectively we have done over the last four decades, and truly have a positive global impact. These are hard problems, but I believe together we can successfully extend the lifespan of every human being. I believe together we can eradicate chronic diseases that have plagued mankind for centuries. I believe we can lift the remaining 10 percent of humanity out of extreme poverty. I believe that we can reskill every worker in the age of the superpowers. I believe that we can give a modern education to every child on the planet, even in the poorest of slums. I believe that together we can reverse the impact of climate change. I believe that together we have the opportunity to make these a reality. I believe this is only possible together with you. I ask you: please have a wonderful VMworld. Thanks for listening. Happy 20th birthday. Have a great week.

Published Date : Aug 28 2018

Arturo Suarez, Canonical & Eric Sarault, Kontron | OpenStack Summit 2018


 

>> Narrator: Live from Vancouver, Canada, it's theCUBE, covering OpenStack Summit North America 2018. Brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back to theCUBE. I'm Stu Miniman, here with my cohost, John Troyer, and we're at the OpenStack Summit 2018, here in Vancouver. One of the key topics we've been discussing, actually for a few years, but under new branding, and it's really matured a bit, is edge computing. So we're really happy to welcome to the program two first-time guests. We have Arturo Suarez, who's a program director with Canonical, and we also have our first Kontron employee on, Eric Sarault, who's a product manager of software and services. Montreal, I believe, is the headquarters. >> That's correct. >> Stu: So, thank you for allowing all of us to come up to Canada and have some fun. >> It's a pleasure. >> But we were all working during Victoria Day, right? >> Yeah. >> All right. Arturo, we know Canonical, so we're going to talk about where you fit in. But Eric, let's start with Kontron. I've got a little bit of background with them; I worked in the telco space back in the 90s. But for people that don't know Kontron, maybe give us some background. >> So, basically, the entity here today is representing the communications business unit. What we do on that front is mostly telcos and service providers. We also have a strong customer base in the media vertical. But right now with OpenStack, what we're focusing on is really the edge. There are mixed messages out there, too: everybody has their own version of edge, everybody has their own little precisions about it. But down the road, it's about making sure that we align everyone towards the same messaging, so that we deliver a unified solution and everybody understands what it is.
>> Yeah. So, my filter on this has been that edge depends on who you are. If you're a telecommunications vendor, when we've talked about the CO, the edge is where they sit. If I'm an enterprise, the edge is more like the IoT devices, and sometimes there's an aggregation box in between. So, there's somewhere between two and four edges out there. It's like cloud: we spent a bunch of years discussing it, and then we just put the term to the side and did things. When you're talking edge at Kontron, what does that mean? You actually have devices. >> We do. >> So, who's your customer? What does the edge look like? >> So, we do have customers on that front. Right now we're working with some big names out there, basically delivering solutions for 12-inch-deep racks at the bottom of radio towers or near cell sites, and ultimately working our way up closer to what I like to call a "closet" data center, if you will, where we also have a platform with multiple systems that's able to be hosted in the environment. So, it's really about not only having one piece of the equation, but being able to get closer to the data center. >> All right. And Arturo, help bring us in, because we know Canonical is a software company. What does the edge mean to your customers, and where does Canonical fit? >> So, Canonical, we take pride in being a ubiquitous platform, right? It doesn't matter where the edge is, or what the edge is. There is an Ubuntu platform, an Ubuntu operating system, for every single domain of compute, going from the very end of the edge, that device that sits in your house, or that drone that is flying around that you need to run applications on or host applications with, all the way to the core rack. Our OpenStack story starts at the core. But it's interesting, as you go farther from that core, how density becomes an important factor in how you do things.
We are able, with Kontron, to provide an operating system and tooling to tackle several of those compute domains that are part of the cloud, where real estate is really expensive. >> Eric, so you all are a systems developer? Is that a fair two-word phrase? It's hardware and software? >> Basically, we do our original design and manufacturing. >> Okay, I know where I am. So, I'm two steps away from hardware, so I think of those as all systems. But you build things? >> Eric: Correct. >> And you work with software. I think folks that are a little more abstracted tend to think, "Well, in those towers, there must be some bespoke chips and some other stuff, but nothing very sophisticated." At this point your customers are running full OpenStack installations on your system hardware. >> Eric: Correct. >> That's in there, and it's rugged and it's upgradable. Can you talk a little bit about the business impact of that sort of thing as you go out and work with your customers? >> Certainly. So, one of the challenges that we saw there was that, from a hardware perspective, people didn't really think about how, once the box is shipped, you get the software onto it. Typically, it's a push-and-forget approach. And this is where we saw a big gap: it doesn't make any sense for folks to figure that out on their own. A lot of those people out there are actually application developers. They don't have a networking background. They don't have a hardware engineering background. And the last thing they want to be doing is spending weeks, if not months, figuring out how to deploy OpenStack, or Kubernetes, or other solutions out there. So, that's where we leverage Canonical's tools, including MAAS and Juju, to deploy that easily, at scale, and automated.
Along with that, we package some documentation, some proper steps on how to deploy the environment quickly in a few hours, instead of just sitting there scratching your head and trying to figure it out. Because that's the last thing they want: the minute they have the box in their hands, they already want to consume the resources and get up and running. That's really the mission we want to tackle, and it's one you're not going to see from most hardware vendors out there. >> Yeah, it's interesting. We often talk about scale, and it's a very different kind of scale when you talk about how fast it's deployed. We're not talking about tens or hundreds of thousands of cores for one environment. It's way more distributed. >> Yeah, it's a different type of scale. It's still scale, but the building block is different. There are orders of magnitude more points of presence than there are data centers, and at that scale, the farther you go from the core, the larger the scale is, but the building block is different. And the price of the compute is different; it goes much higher, right? So, going back again, that ability to condense an OpenStack, the ability to deliver a Kubernetes within that little space, is pretty unique. And while we're still figuring out what technology goes on the edge, we still need to account for, as Eric said, the economics of that edge, which play a big, big part of that game. So, there is a scale; it's in the thousands of points of presence, in the hundreds of thousands of points of presence, or the different buildings where you can put an edge cloud. The use-cases are still being defined, but it's scale on a different building block. >> Well, Arturo, just to clarify for myself: sometimes when you're looking at an OpenStack component diagram, there are a lot of components, and I don't know how many nodes I'm going to have to run, and they're all talking to each other.
But at the edge, even though there's powerful hardware there, there's an overhead consideration, right? >> Yes, absolutely, and that's going to be there. OpenStack might evolve, or it might not, but this is something we are tackling today. That's why I love the fact that Kontron also has a Kubernetes cluster. That multi-technology, the real multi-cloud, is a multi-technology approach to the edge. There are all the things that we can put in the edge, and the footprint is set; it's not open-ended. We need to know exactly how much room you have, and how you make the most out of each of your cores, or each of your gigs of RAM out there. So, OpenStack obviously is heavy for some parts of the edge. Kontron, with our help, has pushed that to the minimum viable OpenStack, which allows you not to roll a truck when you need to do something at that location. That is as efficient as it can get today. >> Eric, can you help put this in a framework of cloud in general? When I think of edge, a lot of the data is going to need to go back to data centers or a public cloud, multiple public cloud providers. How do your customers deal with that? Are you using Kubernetes to help them span between public cloud and the edge? >> So, it's a mix of both. Right now we're doing some work to see how you can utilize idle processing time, along with Kubernetes' scheduling and orchestration capabilities. But OpenStack also really caters to the more traditional SDN and NFV use-cases out there, to run your traditional applications. So, those are two things that we get out of the platform. But it's also about understanding how much data you want to send back to the data center, and making sure that most of the processing stays as close as possible. That goes along with 5G, of course. You literally don't have the time to go back to the data centers.
So, it's really about putting those capabilities, whether it's FPGAs or GPUs, on those platforms, and really enabling that as close as possible to the Edge, or the end user, should I say. >> Eric, I know you're in the carrier space. Can you talk a little, maybe Kontron in general? And maybe how you, in your career, as you go through the next decades looking at embeddable technology everywhere, what do you all see as the vision of where we're headed? >> Oh, wow. That's a hell of a question. >> That's a big question to throw on you. >> I think it's very interesting to see where things are going. There's a lot of consolidation. And you have all these open source projects that need to work together. The fact that OpenStack is embracing the reality that Kubernetes is going to be there to drive workloads. And they're not stepping on each other's throats, not even near. So, this is where the collaboration between what we're seeing from the OpenStack Foundation along with the projects from the Linux Foundation, this is really, really interesting to see moving forward. Other upcoming projects, like ONAP and Akraino, it's going to be very interesting for the next 24 months, to see what they're going to shape into. >> One of the near-term things, you mentioned 5G, and we've been watching what's available, how that roll-out's going to go into the various pieces. Is this ecosystem ready for that? Going to take advantage of it? And how soon until it is real for customers? >> The hardware is ready. That's for sure. It's really going to be about making sure, if you have a split environment that's based on x86, or a split with ARM, it's going to be about making sure that these environments can interact with each other. The service chaining is probably the most complicated aspect there is to what people want to be doing there. And there's still a bit of a tug-of-war from one side to another, but it's finally starting to come into play.
So, I think that the fact that Akraino, which is going to bring a version of OpenStack within the Linux Foundation, this is going to be really unlocking the capabilities that are out there to deploy the solution. And tying along with that, with hardware that has a single purpose, that's able to cater to all the use-cases, and not just think about one vertical. "And then this box does this and this other box does another use-case." I think that's the pitfall that a lot of vendors have fallen into. Instead of just, "Okay, for a second think outside the box. How many applications could you fit in this footprint?" And there are probably going to be big data and multiple use-cases that are nowhere near each other. So, don't try to do this very specific platform; just make sure that you're able to cater to pretty much everyone. It's probably going to do the job, right, so. >> There's over 40 sessions on Edge Computing here. Why don't we just give both of you the opportunity to give us closing remarks on the importance of Edge, what you're seeing here at the show, and final takeaways. >> From our side, from the Canonical side again, the Edge is whatever is not core. That really has different domains of compute. There is an Ubuntu for each one of those domains. As Eric mentioned, this is important because you have a common platform, not only from the hardware perspective but also the orchestration technologies and their needs, which are evolving fast. And we have the ability, because of how we are built, to accommodate or to build on all of those technologies. And be able to allow developers to choose what they want to do or how they want to do it. Try and try again, in different types of technologies, and finally get to that interesting thing, right. There is that application layer that still needs to be developed to make the best use out of the existing technologies. So, it's going to be interesting to see how applications and the technologies evolve together.
And we are in a great position as a common platform to all of those compute domains, on all of those technologies, from the economical perspective. >> On our side, what we see, it's really about making sure it's a density play. At the Edge, and the closer you go to these more wild environments, it's not data centers with 30 kilowatts per rack. You don't have the luxury of putting in what I like to call white boxes, 36-inch servers or open-compute systems. So, we really want to make sure that we're able to cater to that. We do have the products for it, along with the technologies that Canonical are bringing in on that front. We're able to easily roll out multiple types of application for those different use-cases. And, ultimately, it's all going to be about density, power efficiency, and making sure that your time to production with the environment is as short as possible. Because the minute they want access to that platform, you need to be ready to roll it out. Otherwise, you're going to be lagging behind. >> Eric and Arturo, thanks so much for coming on the program and giving us all the updates on Edge Computing here. For John Troyer, I'm Stu Miniman. Back with lots more coverage here from OpenStack Summit 2018 in Vancouver. Thanks for watching theCUBE. (exciting music)

Published Date : May 22 2018

SUMMARY :

Brought to you by Red Hat and the OpenStack Foundation. Eric Sarault of Kontron and Arturo Suarez of Canonical join Stu Miniman and John Troyer at OpenStack Summit 2018 in Vancouver to discuss Edge computing: a different kind of scale across thousands of points of presence, running a minimum viable OpenStack alongside Kubernetes on constrained hardware, deciding how much data should go back to the data center, 5G readiness, and upcoming projects such as ONAP and Akraino.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Eric Sarault | PERSON | 0.99+
Eric | PERSON | 0.99+
Arturo Suarez | PERSON | 0.99+
TelCo | ORGANIZATION | 0.99+
John Troyer | PERSON | 0.99+
Canonical | ORGANIZATION | 0.99+
Arturo | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
Vancouver | LOCATION | 0.99+
Red Hat | ORGANIZATION | 0.99+
OpenStack Foundation | ORGANIZATION | 0.99+
Canada | LOCATION | 0.99+
Kontron | ORGANIZATION | 0.99+
12 inch | QUANTITY | 0.99+
30 kilowatts | QUANTITY | 0.99+
Montreal | LOCATION | 0.99+
Linux Foundation | ORGANIZATION | 0.99+
two-word | QUANTITY | 0.99+
both | QUANTITY | 0.99+
ONAP | ORGANIZATION | 0.99+
OpenStack Summit 2018 | EVENT | 0.99+
thousands | QUANTITY | 0.99+
Vancouver, Canada | LOCATION | 0.99+
two | QUANTITY | 0.99+
Stu | PERSON | 0.99+
each | QUANTITY | 0.99+
two steps | QUANTITY | 0.98+
36 inch | QUANTITY | 0.98+
four | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Edge | TITLE | 0.98+
Akraino | ORGANIZATION | 0.98+
90s | DATE | 0.98+
One | QUANTITY | 0.98+
OpenStack | ORGANIZATION | 0.98+
OpenStack | TITLE | 0.97+
Ubuntu | TITLE | 0.97+
two things | QUANTITY | 0.97+
one piece | QUANTITY | 0.97+
Victoria Day | EVENT | 0.97+
today | DATE | 0.96+
over 40 sessions | QUANTITY | 0.96+
Kontron | PERSON | 0.96+
ARM | ORGANIZATION | 0.96+
OpenStack Summit North America 2018 | EVENT | 0.95+
first time | QUANTITY | 0.94+
one environment | QUANTITY | 0.94+
theCUBE | ORGANIZATION | 0.94+
next decades | DATE | 0.92+

Dave Tang, Western Digital & Martin Fink, Western Digital l | CUBEConversation Feb 2018


 

(inspirational music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We are in our Palo Alto studio. The conference season hasn't really kicked off into full swing yet, so we can do a lot more kind of intimate stuff here in the studio, for a CUBE Conversation. And we're really excited to have a many-time CUBE alum on, and a new guest, both from Western Digital. So Dave Tang, Senior Vice President at Western Digital. Great to see you again, Dave. >> Great to be here, Jeff. >> Absolutely, and Martin Fink, he is the Chief Technology Officer at Western Digital, a longtime HP alum. I'm sure people recognize you from that and the great Machine keynotes we were talking about. So great to finally meet you, Martin. >> Thank you, nice to be here. >> Absolutely, so you guys are here talking about, and we've got an ongoing program actually with Western Digital, Data Makes Possible, right. With all the things that are going on in tech, at the end of the day, right, there's data, it's got to be stored somewhere, and then of course there's processes and things going on. We've been exploring media and entertainment, sports, healthcare, autonomous vehicles, you know. All the places that this continues to reach out, and it's such a fun project because you guys are a rising-tide-lifts-all-boats kind of company, and we really enjoy watching this whole ecosystem grow. So I really want to thank you for that. But now there's some new things that we want to talk about that you guys are doing, to continue really in that same theme, and that's the support of this RISC-V. So first off, for people who have no idea, what is RISC-V? Let's jump into that, and then kind of what is the announcement and why it's important. >> Sure, so RISC-V is, you know, the tagline is, it's an open source instruction set architecture. So what does that mean, just so people can kind of understand. So today the world is dominated by two instruction set architectures.
For the most part, what we'll call the desktop and enterprise world is dominated by the Intel instruction set architecture, and that's what's in most PCs, what people talk about as x86. And then the embedded and mobile space tends to be dominated by Arm, by Arm Holdings. And so both of those are great architectures, but they're also proprietary; they're owned by their respective companies. So RISC-V is essentially a third entrant, we'll say, into this world, but the distinction is that it's completely open source. So everything about the instruction set is available to all, and anybody can implement it. We can all share the implementations. We can share the code that makes up that instruction set architecture, and very importantly for us, and part of our motivation, is the freedom to innovate. So we now have the ability to modify the instruction set or change the implementation of the instruction set, to optimize it for our devices and our storage and our drives, etc. >> So is this the first kind of open source play in microprocessor architecture? >> No, there have been other attempts at this. OpenSPARC kind of comes to mind, and things like that, but the ability to get a community of individuals to rally around this in a meaningful way has really been a challenge. And so I'd say that right now, RISC-V presents probably the best sort of clean slate, let's-take-something-new-to-the-market play out there. >> So open source, obviously, we've seen take over the software world, first in the operating system which everybody is familiar with, Linux, but then we see it time and time again in different applications, Hadoop. I mean, there's just a proliferation of open source projects. The benefits are tremendous. Pretty easy to ascertain in a typical software case; how is that going to be applied, do you think, within the microprocessor world? >> So it's a little bit different.
When we're talking about open source hardware or open source chips and microprocessors, you're dealing with a physical device. So even though you can open source all of the designs and the code associated with that device, you still have to fabricate it. You still have to create a physical design and you still have to call up a fab and say, will you make this for me at these particular volumes? And so that's the difference. So there are some differences between open source software, where you create the bits and then you distribute those bits through the Internet and all is good. Whereas here, you still have a physical need to fabricate something. >> Now, how much more flexibility can you get then for the output, when you can actually impact the architecture as opposed to just creating a custom chip design on top of somebody else's architecture? >> Well, let me give you probably a really simple, concrete example that people can internalize of some of our motivation behind this, because that might sort of help get people through this. If you think of a very typical surveillance application, you have a camera pointed into a room or a hallway. The reality is we're basically grabbing a ton of video frames but very few of them change, right? So the typical surveillance application is, it never changes and you really only want to know when stuff changes. Well, today, in very simple terms, all of those frames get routed up to some big server somewhere, and that server spends a lot of time trying to figure out, okay, have I got a frame that changed? Have I got a frame that changed, and so on. And then eventually it'll find maybe two or three or five frames that have got something interesting. So what we're trying to do is to say, okay, well why don't we take that "find the changes" work and push that right down to the device?
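The frame-filtering idea Martin describes can be sketched in a few lines of Python. This is purely an illustrative toy, not Western Digital's device firmware: frames are modeled as flat lists of pixel values, and a frame is only passed upstream if it differs enough from its predecessor. The threshold is an assumed tuning knob.

```python
# Toy sketch of "find the changes" pushed down to the storage device:
# keep only frames that differ meaningfully from the previous frame,
# so only interesting frames get shipped to the server.

def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two equal-length frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def changed_frames(frames, threshold=5.0):
    """Yield (index, frame) pairs for frames that differ from the previous one."""
    previous = None
    for i, frame in enumerate(frames):
        if previous is None or mean_abs_diff(frame, previous) > threshold:
            yield i, frame
        previous = frame

# A static hallway: three identical frames, then someone walks through.
static = [10] * 16
motion = [10] * 8 + [200] * 8
kept = list(changed_frames([static, static, static, motion]))
# Only the first frame and the frame with motion survive the filter.
```

The server then only ever sees the handful of frames worth analyzing; the identical frames are discarded where they were stored.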
So we basically store all those frames, why don't we go figure out all the frames that mean nothing, and only ship up to that big bad server the frames that have something interesting and something you want to go analyze and do some work on? So that's a very typical application that's quite meaningful because we can do all of that work at the device. We can eliminate shipping a whole bunch of data to where it's just going to get discarded anyways, and we can allow the end customer to really focus on the data that matters, and get some intelligence. >> And that's critical as we get more and more immersed in a data-centric world, where we have realtime applications like Martin described as well as large data-centric applications like of course, big data analytics, but also training for AI systems or machine learning. These workloads are going to become more and more diverse and they're going to need more specialized architectures and more specialized processing. So big data is getting bigger and faster and these realtime fast data applications are getting faster and bigger. So we need ways to contend with that, that really go beyond what's available with general purpose architectures. >> So that's a great point because if we take this example of video frames, now if I can build a processor that is customized to only do that, that's the only thing it does. It can be very low power, very efficient, and do that one thing very very well, and the cost adder, if you want to call it that, to the device where we put it, is a tiny fraction, but the cost savings of the overall solution is significant. So this ability to customize the instruction set to only do what you need it to do for that very special purpose, that's gold. >> So I just wanted to, Dave, we've talked about a lot of interesting innovations that you guys have come up with over the years, with the helium launch. 
Which, I don't know, was a couple, two, three years ago. You were just at the MAMR event, the energy-assisted recording. So this is really kind of foundational, within the storage and the media itself, and how you guys do better and take advantage of an evolving landscape. This is kind of a different play for Western Digital. This isn't a direct kind of improvement in the way that storage media and architecture works, but this is really more of, I'm going to ask you: what is the Western Digital play here? Why is this an important space for you guys in your core storage business? >> Well, we're really broadening our focus to develop and innovate around technologies that help the world extract more value from data as a whole, right. So it's way beyond storage these days, right. We're looking for better ways to capture, preserve, access, and transform the data. And unless you transform it, you can't really extract the value out of it, so as we see all these new applications for data and the vast possibilities for data, we really want to pave the path and help the industry innovate to bring all those applications to reality. >> It's interesting too, because one of the great topics always in computing is, you know, you've got compute and store, and which has to go to which, right. Nobody wants to move a lot of data; that's hard, and it may or may not be easy to bring the compute to it. Especially these IoT applications: remote devices, tough conditions, and power, which we mentioned a little bit before we went on air. So the landscape for the need for compute, store, and networking is changing radically compared to either the desktop or the consolidation we're seeing in clouds. So what's interesting here is, where does the scale come from, right? At the end of the day, scale always wins.
And that's where, historically, the general-purpose microprocessor architectures have dominated; there used to be a slew of special-purpose architectures, but now there's an opportunity to bring scale to this. So how does that scale game continue to evolve? >> So it's a great point that scale does matter, and we've seen that repeatedly, and so it's a significant part of the reason why we decided to go early with a significant commitment: to tell the world that we were bringing scale to the equation. And so what we communicated to the marketplace is, we ship on the order of a billion processor cores a year. Most people don't realize that all of our devices, from USB sticks to hard drives, all have processors on them. And so we said, hey, we're going to basically go all-in and go big, and that translates into a billion cores that we ship every year, and we're going to go on a program to essentially migrate all of those cores to RISC-V. It'll take a few years to get there, but we'll migrate all of those cores, and so we were basically signaling to the market, hey, scale is now here. Scale is here, you can make the investments, you can go forward, you can make that commitment to RISC-V, because essentially we've got your back. >> So just to make sure we get that clear: you guys have announced that you're going to slowly migrate, over time, your microprocessors that power your devices, to the tune of approximately a billion, with a B, cores per year, to this new architecture. >> That is correct. >> And has that started? >> So the design has started. We have started to design and develop our first two cores, but the actual manifestation in devices is probably in the early stage of 2020. >> Okay, okay. But that's a pretty significant commitment, and again, the idea, as you explicitly said, is it's a signal to the ecosystem: this is worth your investment because there is some scale here. >> Martin: That's right. >> Yeah, pretty exciting.
And how do you think it's going to open up the ability for you to do new things with your devices, things that before either you couldn't do or were too expensive in dollars or power? >> Martin: So we're going to step and iterate through this, and one key point here is a lot of people tend to want to start in this processor world at the very high end, right. I'm going to go take on a Xeon processor or something like that. It's not what we're doing. We're basically saying we're going to go at the small end, the tiny end, where power matters. Power matters a lot in our devices, and where can we achieve the optimum combination of power and performance. So even in our small devices, like a USB stick or a client SSD or something like that, if we can reduce power consumption and even just maintain performance, that's a huge win for our customers, you know. If you think about your laptop, if I reduce the power consumption of that SSD in there so that you have longer battery life and you can get through the day better, that's a huge win, right. And I don't impact performance in the process; that's a huge win. So what we're doing right now is we're developing the cores based on the RISC-V architecture, and then once we've got that sort of design complete, we want to take all of the typical client workloads and profile them on that. Then we want to find out, okay, where are the hot spots? What are the two or three things that are really consuming all the power, and how do we go optimize, by either creating two or three instructions or by optimizing the microarchitecture for an existing instruction. And then we iterate through that a few times so that we really get a big win, even at the very low end of the spectrum, and then we just iterate through that with time.
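The profile-then-optimize loop Martin outlines can be sketched as follows. This is a hypothetical illustration, not a real RISC-V profiler: the trace format and the per-operation energy costs are invented for the example, but the shape of the analysis is the same, tally where the energy goes, then target the top two or three operations for new instructions or microarchitectural tuning.

```python
# Illustrative sketch of profiling a workload trace to find the operations
# that dominate energy cost -- the candidates for custom instructions.
from collections import Counter

# Assumed per-operation energy costs, in arbitrary units (invented for the demo).
ENERGY_COST = {"load": 4, "store": 4, "mul": 3, "add": 1, "branch": 2}

def hot_spots(trace, top_k=3):
    """Return the top_k operations by total energy across the trace."""
    totals = Counter()
    for op in trace:
        totals[op] += ENERGY_COST.get(op, 1)
    return totals.most_common(top_k)

trace = ["load", "mul", "add", "load", "store", "load", "mul", "branch"]
print(hot_spots(trace))  # -> [('load', 12), ('mul', 6), ('store', 4)]
# Loads dominate, so optimizing the memory path pays off before anything else.
```

In the real flow, each optimization round changes the profile, which is why Martin describes iterating through this a few times.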
We're in a unique position, I think, in that the technologies that we develop span everything from the actual media where the bits are stored, whether it's solid-state flash or rotating magnetic disk and the recording heads. We take those technologies and build them all the way up into devices and platforms and full-fledged data center systems. And if we can optimize and tune all the way from that core media level up through the system level, we can deliver significantly higher value, we believe, to the marketplace. So this is the start of that. It enables us to customize command sets and optimize the flow of data so that we can allow users to access it when and where they need it. >> So I think there's another actually really cool point, which goes back to the open source nature of this, and we try to be very clear about this. We're not going to develop our cores for all applications. We want the world to develop all sorts of different cores. And so for many applications somebody else might come in and say, hey, we've got a really cool core. So one of the companies we've partnered with and invested in, for example, is Esperanto. They've actually decided to go at the high end and do a machine learning accelerator. Hey, maybe we'll use that for some machine learning applications in our system-level performance. So we don't have to do it all, but we've got a common architecture across the portfolio, and that speaks to that sort of open source nature of the RISC-V architecture: we want the world to get going. We want our competitors to get on board, we want partners, we want software providers, we want everybody on board.
It's this really interesting kind of bifurcation of the market really, you don't really want to be in the big general-purpose middle anymore. That's not a great place to be, there's all kinds of specialty places where you can build the competence and with software and you know with, thank goodness for Moore's law decreasing prices of the power of the compute and now the cloud, which is basically always available. Really a exciting time to develop a myriad of different applications. >> Right and you talked before about scale in terms of points of implementation that will drive adoption and drive this to critical mass but there's another aspect of scale relative to the architecture within a single system that's also important that I think RISC-V helps to break down some barriers. Because with general purpose computer architectures, they assume a certain ratio of memory and storage and processing and bandwidth for interconnect and if you exceed those ratios, you have to add a whole new processor. Even though you don't need to need the processing capability, you need it for scale. So that's another great benefit of these new architectures is that the diversity of data needs where some are going to be large data sets, some are going to be small data sets that need need high bandwidth. You can customize and blend that recipe as you need to, you're not at the mercy of these fixed ratios. >> Yeah and I think you know it's so much of kind of what is cloud computing. And the atomic nature of it, that you can apply the ratios, the amount that you need as you need, you can change it on the fly, you can tone it up, tone it down. And I think the other interesting thing that you touched on is some of these new, which are now relatively special-purpose but are going to be general-purpose very soon in terms of machine learning and AI and applying those to different places and applying them closer to the problem. 
It's a very very interesting evolution of the landscape, but what I want to do is kind of close on you, Martin, especially because, again, kind of back to The Machine. Not The Machine specifically, but you have been in the business of looking way down the road for a long time. So you came out, I'd looked at your LinkedIn, you retired for three months, congratulations. (laughs) Hope you got some golf in, but you came back to Western Digital, so why did you come back? And as you look down the road a ways, what do you see that excites you, that got you off that three-month little tour around the golf course? And I'm sorry I had to tease about that. But what do you see? What are you excited about, that you came back and got involved in an open source microprocessor project? >> So the short answer was that I saw the opportunity at Western Digital to be where data lives. I had spent my entire career, we'll call it, at the compute or the server side of things, and the interesting thing is I had a very close relationship with SanDisk, which was acquired by Western Digital. And so I had, we'll call it, an insider view of what was possible there. And what triggered this was essentially what we're talking about here: given that about half the world's data lands on Western Digital devices, taking that real position of strength in the marketplace and saying, what could we go do to make data more intelligent, rather than starting at that server end? I saw that potential there, and it was just incredible, so that's what made me want to join. >> Exciting times. Dave, good get. (laughs) >> We're delighted to have Martin with us. >> All right, well, we look forward to watching it evolve. We've got another whole set of events we're going to do together with Western Digital that we're excited about.
Again, covering Data Makes Possible, but, you know, kind of uplifting into the application space and a lot of the cool things that people are doing in innovation. So Martin, great to finally meet you, and thanks for stopping by. >> Thanks for the time. >> Dave, as always, and I think we'll see you in a month or so. >> Right, always a pleasure, Jeff, thanks. >> All right, Martin Fink, Dave Tang. I'm Jeff Frick, you're watching theCUBE. Thanks for watching, we'll catch you next time. (inspirational music)

Published Date : Feb 1 2018

SUMMARY :

Jeff Frick sits down with Dave Tang and Martin Fink of Western Digital for a CUBE Conversation on the company's support of RISC-V, the open source instruction set architecture. They discuss why an open ISA matters alongside x86 and Arm, the freedom to customize instructions for storage devices, the surveillance-frame example of pushing processing down to the device, and Western Digital's commitment to migrate the roughly one billion processor cores it ships per year to RISC-V, with the first devices expected around 2020.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Nutanix | ORGANIZATION | 0.99+
Western Digital | ORGANIZATION | 0.99+
John | PERSON | 0.99+
David | PERSON | 0.99+
Krista | PERSON | 0.99+
Bernie Hannon | PERSON | 0.99+
Jeff Frick | PERSON | 0.99+
Bernie | PERSON | 0.99+
H3C | ORGANIZATION | 0.99+
Citrix | ORGANIZATION | 0.99+
September of 2015 | DATE | 0.99+
Dave Tang | PERSON | 0.99+
Krista Satterthwaite | PERSON | 0.99+
SanDisk | ORGANIZATION | 0.99+
Martin | PERSON | 0.99+
James White | PERSON | 0.99+
Sue | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Carol Dweck | PERSON | 0.99+
Martin Fink | PERSON | 0.99+
Jeff | PERSON | 0.99+
HPE | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Stu Miniman | PERSON | 0.99+
Dave | PERSON | 0.99+
Dave allante | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Raghu | PERSON | 0.99+
Raghu Nandan | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
three | QUANTITY | 0.99+
Lee Caswell | PERSON | 0.99+
HP | ORGANIZATION | 0.99+
Antonio Neri | PERSON | 0.99+
five years | QUANTITY | 0.99+
three-month | QUANTITY | 0.99+
four-year | QUANTITY | 0.99+
one minute | QUANTITY | 0.99+
Gary | PERSON | 0.99+
Antonio | PERSON | 0.99+
Feb 2018 | DATE | 0.99+
2023 | DATE | 0.99+
seven dollars | QUANTITY | 0.99+
three months | QUANTITY | 0.99+
Arm Holdings | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+

Lenovo Transform 2017 Keynote


 

(upbeat techno music) >> Announcer: Good morning, ladies and gentlemen. This is Lenovo Transform. Please welcome to the stage Lenovo's Rod Lappin. (upbeat instrumental) >> Alright, ladies and gentlemen. Here we go. I was out the back having a chat. That was a bit faster than I expected. How are you all doing this morning? (crowd cheers) Good? How fantastic is it to be in New York City? (crowd applauds) Excellent. So my name's Rod Lappin. I'm with the Data Center Group, obviously. I do basically anything that touches customers, from our sales people, our pre-sales engineers, our architects, et cetera, all the way through to our channel partner sales engagement globally. So that's my job, but enough of that, okay? So the weather this morning, absolutely fantastic. Not a cloud in the sky, perfect. A little bit different to how it was yesterday, right? I want to thank all of you, because I know a lot of you had a lot of commuting issues getting into New York yesterday with all the storms. We have a lot of people from international and domestic travel caught up in, obviously, the network, which blows my mind, actually, but we have a lot of people here from Europe, obviously, a lot of analysts and media people here, as well as customers, who were caught up in circling around the airport apparently for hours. So a big round of applause for our team from Europe. (audience applauds) Thank you for coming. We have some people who commuted a very short distance. For example, our own server general manager, Cameron (mumbles), he's out the back there. Cameron, how long did it take you to get from Raleigh to New York? An hour-and-a-half flight? >> Cameron: 17 hours. >> 17 hours, ladies and gentlemen. That's a fantastic distance. I think that's amazing. But I know a lot of us, obviously, in the United States have come a long way with the storms, obviously very tough, but I'm going to call out one individual. Shaneil from Spotless. Where are you, Shaneil? You're here somewhere?
There he is, from Australia. Shaneil, how long did it take you to come in from Australia? 25 hours, ladies and gentlemen. A big round of applause. That's a pretty big effort. Shaneil, actually, I want you to stand up, if you don't mind. I've got a seat here right next to my CEO. You've gone the longest distance. How about a big round of applause for Shaneil. We'll put him in my seat, next to YY. Honestly, Shaneil, you're doing me a favor. Okay, ladies and gentlemen, we've got a big day today. Obviously, my seat now taken there, fantastic. Obviously New York City, the absolute pinnacle of globalization. I first came to New York in 1996, which was before a lot of people in the room were born, unfortunately for me these days, and I was completely in awe. I obviously went to a Yankees game, had no clue what was going on, didn't understand anything to do with baseball. Then I went and saw Patrick Ewing. Some of you would remember Patrick Ewing. Saw the Knicks play basketball. Had no idea what was going on. Obviously, from Australia, and somewhat slightly height challenged, basketball was not my thing, but I loved it. I really left that game... That was the first game of basketball I'd ever seen. I left that game realizing that effectively the guy throws the ball up at the beginning, someone taps it, that team gets it, they run it, they put it in the basket, then the other team gets it, they put it in the basket, the other team gets it, and that's basically the entire game. So I haven't really progressed from that sort of learning or understanding of basketball since then, but for me, personally, being here in New York, and obviously presenting with all of you guys today, it's really humbling. Obviously, some of you would have picked my accent; I'm also from Australia, from the north shore of Sydney. To be here is just a fantastic, fantastic event.
So welcome ladies and gentlemen to Transform, part of our tech world series globally in our event series and our event season here at Lenovo. So once again, big round of applause. Thank you for coming (audience applauds). Today, basically, is the culmination of what I would classify as a very large journey. Many of you have been with us on that. Customers, partners, media, analysts obviously. We've got quite a lot of our industry analysts in the room. I know Matt Eastwood yesterday was on a train because he sent a Tweet out saying there's 170 people on the WIFI network. He was obviously a bit concerned he was going to get-- Pat Moorhead, he got in at 3:30 this morning, obviously from traveling here as well with some of the challenges with the transportation, so we've got a lot of people in the room that have been giving us advice over the last two years. I think all of our employees are joining us live. All of our partners and customers through the stream. As well as everybody in this packed-out room. We're very very excited about what we're going to be talking to you all today. I want to have a special thanks obviously to our R&D team in Raleigh and around the world. They've also been very very focused on what they've delivered for us today, and it's really important for them to also see the culmination of this great event. And like I mentioned, this is really the feedback. It's not just a Lenovo launch. This is a launch based on the feedback from our partners, our customers, our employees, the analysts. We've been talking to all of you about what we want to be when we grow up from a Data Center Group, and I think you're going to hear some really exciting stuff from some of the speakers today and in the demo and breakout sessions that we have after the event. These last two years, we've really transformed the organization, and that's one of the reasons why that theme is part of our Tech World Series today. 
We're very very confident in our future, obviously, and where the company's going. It's really important for all of you to understand today, and take in every single snippet that YY, Kirk, and Christian talk about today in the main session, and then our presenters in the demo sections, on what Lenovo's actually doing for its future and how we're positioning the company, obviously, for that future and how the transformation, the digital transformation, is going ahead globally. So, all right, we are now going to step into our Transform event. And I've got a quick agenda statement for you. The very first thing is we're going to hear from YY, our chairman and CEO. He's going to discuss artificial intelligence, the evolution of our society, and how Lenovo is clearly positioning itself in the industry. Then, obviously, you're going to hear from Kirk Skaugen, our president of the Data Center Group, our new boss. He's going to talk about how long he's been with the company and the transformation, once again, we're making, very specifically to the Data Center Group, and how much of a difference we're making to society, and some of our investments. Christian Teismann, our SVP and general manager of our client business, is going to talk about the 25 years of ThinkPad. This year is the 25-year anniversary of our ThinkPad product. Easily the most successful brand in our client business globally of any vendor, and the most successful brand we've launched. And this afternoon, breakout sessions, obviously, with our keynotes, fantastic sessions. Make sure you actually attend all of those after this main arena here. Now, once again, listen, ask questions, and make sure you're giving us feedback. One of the things about Lenovo that we say all the time... There is no room for arrogance in our company. Every single person in this room is a customer, partner, analyst, or an employee. We love your feedback. It's only through your feedback that we continue to improve.
And it's really important that through all of the sessions where the Q&As happen, and the breakouts afterwards, you're giving us feedback on what you want to see from us as an organization as we go forward. All right, so what were you doing 25 years ago? I spoke about ThinkPad being 25 years old, but let me ask you this. I bet you any money that no one here knew that our x86 business is also 25 years old. So, this year, we have both our ThinkPad and our x86 anniversaries for 25 years. Let me tell you. What were you guys doing 25 years ago? There's me, 25 years ago. It's a bit scary, isn't it? Very svelte and athletic, and a lot lighter than I am today. It makes me feel a little bit self-conscious. And you can see the black and white shot. It shows you that even if you're really really short and you come from the wrong side of the tracks, to make some extra cash you can still do some modeling, as long as no one else is in the photo to give anyone any perspective, so very important. I think I might have got one photo shoot out of that, I don't know. I had to do it, I needed the money. Let me show you another couple of photos. Very interesting, how about this guy? How cool does he look? Very svelte and athletic. I think there's no doubt he looks much much cooler than I do. Okay, so ladies and gentlemen, without further ado, it gives me great honor to obviously introduce our very very first guest to the stage. Ladies and gentlemen, our chairman and CEO, Yuanqing Yang, or as we like to call him, YY. A big round of applause, thank you. (upbeat techno instrumental) >> Good morning everyone. Thank you, Rod, for your introduction. Actually, I didn't think I was younger than you (mumbles). I can't think of another city more fitting to host the Transform event than New York. A city that has transformed from a humble trading post 400 years ago to one of the most vibrant cities in the world today. It is a perfect symbol of the transformation of our world.
The rapid and the deep transformations that have propelled us from the steam engine to the Internet era in just 200 years. Looking back 200 years, there were only a few companies that operated on a global scale. The total value of the world's economy was around $188 billion U.S. dollars. That is only about $180 for each person on earth at the time. Today, there are thousands of independent global companies that compete to sell everything, from corn and crude oil to servers and software. They drive a robust global economy worth over $75 trillion, or $10,000 per person. Think about it. The global economy has multiplied almost 400 times in just two centuries. What is even more remarkable is that the economy has almost doubled every 15 years since 1950. These are significant transformations for businesses, for the world, and for our tiny slice of the pie. This transformation is the result of the greatest advancement in technology in human history. Not one but three industrial revolutions have happened over the last 200 years. Even though those revolutions created remarkable change, they were just the beginning. Today, we are standing at the beginning of the fourth revolution. This revolution will transform how we work (mumbles) in ways that no one could imagine in the 18th century or even just 18 months ago. You are the people who will lead this revolution. Along with Lenovo, we will redefine IT. IT is no longer just information technology. It's intelligent technology, intelligent transformation. A transformation that is driven by big data, cloud computing, and artificial intelligence. Even the transition from the PC Internet to the mobile Internet was a big leap. Today, we are facing yet another big leap, from the mobile Internet to the Smart Internet, or intelligent Internet. In this Smart Internet era, the Cloud enables devices, such as PCs, Smart phones, Smart speakers, and Smart TVs, (mumbles) to provide the content and the services. But the evolution does not stop there.
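The per-capita figures in the speech can be sanity-checked with some quick arithmetic. A minimal sketch; the world-population numbers (roughly 1 billion two centuries ago, roughly 7.5 billion today) are my assumptions for illustration, not figures from the talk:

```python
# Sanity check of the world-economy figures quoted in the keynote.
# Population values are rough assumptions, not from the speech itself.
early_1800s_gdp = 188e9   # ~$188 billion world economy, two centuries ago
early_1800s_pop = 1.0e9   # assumed ~1 billion people at the time

modern_gdp = 75e12        # ~$75 trillion world economy today
modern_pop = 7.5e9        # assumed ~7.5 billion people today

per_capita_then = early_1800s_gdp / early_1800s_pop  # dollars per person, then
per_capita_now = modern_gdp / modern_pop             # dollars per person, now
growth_multiple = modern_gdp / early_1800s_gdp       # overall growth factor

print(per_capita_then, per_capita_now, growth_multiple)
```

Under these assumptions the early-1800s economy works out to about $180-$190 per person, today's to about $10,000 per person, and the overall growth to roughly a 400-fold multiple over two centuries.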
Ultimately, almost everything around us will become Smart, with built-in computing, storage, and networking capabilities. That's what we call the device plus Cloud transformation. These Smart devices, incorporated with various sensors, will continuously sense our environment and send data about our world to the Cloud. To (mumbles) process this ever-increasing big data and to support the delivery of Cloud content and services, the data center infrastructure is also transforming to be more agile, flexible, and intelligent. That's what we call the infrastructure plus Cloud transformation. But most importantly, it is human wisdom, the deep learning algorithms vigorously improved by engineers, that enables artificial intelligence to learn from big data and make everything around us smarter. With big data collected from Smart devices, the computing power of the new infrastructure, and the trained artificial intelligence, we can understand the world around us more accurately and make smarter decisions. We can make life better, work easier, and society safer and healthier. Think about what is already possible as we start this transformation. Smart Assistants can help you place orders online with a voice command. Driverless cars can run on the same road as traditional cars. (mumbles) can help troubleshoot customers' problems, and virtual doctors can already diagnose basic symptoms. The list goes on and on. Like every revolution before it, the intelligent transformation will fundamentally change the nature of business. Understanding and preparing for that will be the key to the growth and the success of your business. The first industrial revolution made it possible to maximize production. Water and steam power let us go from making things by hand to making them by machine. This transformed how fast things could be produced. It drove the quantity of merchandise made and led to a massive increase in trade.
With this revolution, business scale expanded, and the number of customers exploded. Fifty years later, the second industrial revolution made it necessary to organize a business like the modern enterprise. Electric power and telegraph communication made business faster and more complex, challenging businesses to become more efficient and meet entirely new customer demands. In our own lifetimes, we have witnessed the third industrial revolution, which made it possible to digitize the enterprise. The development of computers and the Internet accelerated business beyond human speed. Now, global businesses have to deal with customers at the end of a cable, not always a handshake. While we are still dealing with the effects of digitizing business, the fourth revolution is already here. In just the past two or three years, the growth of data and the advancement in artificial intelligence have been astonishing. Computing power can now process the massive amount of data about your customers, suppliers, partners, and competitors, and give you insights you simply could not imagine before. Artificial intelligence can not only tell you what your customers want today but also anticipate what they will need tomorrow. This is not just about making better business decisions or creating better customer relationships. It's about making the world a better place. Ultimately, can we build a new world without diseases, war, and poverty? The power of big data and artificial intelligence may be the revolutionary technology to make that possible. Revolutions don't happen on their own. Every industrial revolution has its leaders, its visionaries, and its heroes. The master transformers of their age. The first industrial revolution was led by the mechanics who designed and built power systems, machines, and factories. The heroes of the second industrial revolution were the business managers who designed and built modern organizations.
The heroes of the third revolution were the engineers who designed and built the circuits and the source code that digitized our world. The master transformers of the next revolution are actually you. You are the designers and the builders of the networks and the systems. You will bring the benefits of intelligence to every corner of your enterprise and make intelligence the central asset of your business. At Lenovo, data intelligence is embedded into everything we do. How we understand our customer's true needs and develop more desirable products. How we profile our customers and market to them precisely. How we use internal and external data to balance our supply and the demand. And how we train virtual agents to provide more effective sales services. So the decisions you make today about your IT investment will determine the quality of the decisions your enterprise will make tomorrow. So I challenge each of you to seize this opportunity to become a master transformer, to join Lenovo as we work together at the forefront of the fourth industrial revolution, as leaders of the intelligent transformation. (triumphant instrumental) Today, we are launching the largest portfolio in our data center history at Lenovo. We are fully committed to the (mumbles) transformation. Thank you. (audience applauds) >> Thanks YY. All right, ladies and gentlemen. Fantastic, so how about a big round of applause for YY. (audience applauds) Obviously a great speech on the transformation that we at Lenovo are taking as well as obviously wanting to journey with our partners and customers obviously on that same journey. What I heard from him was obviously artificial intelligence, how we're leveraging that integrally as well as externally and for our customers, and the investments we're making in the transformation around IoT machine learning, obviously big data, et cetera, and obviously the Data Center Group, which is one of the key things we've got to be talking about today. 
So we're on the cusp of that fourth revolution, as YY just mentioned, and Lenovo is definitely leading the way and investing in those parts of the industry and our portfolio to ensure we're complementing all of our customers and partners on what they want to be, obviously, as part of this new transformation we're seeing globally. Obviously now, ladies and gentlemen, without further ado once again, to tell us more about what's going on today, our announcements, obviously, that all of you will be reading about and seeing in the breakout and the demo sessions with our segment general managers this afternoon, is our president of the Data Center Group, Mr. Kirk Skaugen. (upbeat instrumental) >> Good morning, and let me add my welcome to Transform. I just crossed my six months here at Lenovo after over 24 years at Intel Corporation, and I can tell you, we've been really busy over the last six months, and I'm more excited and enthusiastic than ever, and hope to share some of that with you today. Today's event is called "Transform", and today we're announcing major new transformations in Lenovo, in the data center, but more importantly, we're celebrating the business results that these platforms are going to have on society and, with International Supercomputing going on in parallel in Frankfurt, some of the amazing scientific discoveries that are going to happen on some of these platforms. Lenovo has gone through some significant transformations in the last two years, since we acquired the IBM x86 business, and that's really positioning us for this next phase of growth, and we'll talk more about that later. Today, we're announcing the largest end-to-end data center portfolio in Lenovo's history, as you heard from YY, and we're really taking the best of the x86 heritage from our IBM acquisition of the x86 server business and combining that with the cost economics that we've delivered from kind of our China heritage.
As we've talked to some of the analysts in the room, it's really the best of the East and the best of the West combining together in this announcement today. We're going to be announcing two new brands, building on our position as the number one x86 server vendor in both customer satisfaction and in reliability, and we're also celebrating, next month in July, a very significant milestone, in which we'll be shipping our 20 millionth x86 server into the industry. For us, it's an amazing time, and it's an inflection point to kind of look back, pause, but also share the next phase of Lenovo and the exciting vision for the future. We're also making some declarations on our vision for the future today. Again, International Supercomputing's going on, and, as it turns out, we're the fastest-growing supercomputer company on earth. We'll talk about that. Our goal today that we're announcing is that we plan in the next several years to become number one in supercomputing, and we're going to put the investments behind that. We're also committing to our customers that we're going to disrupt the status quo and accelerate the pace of innovation, not just in our legacy server solutions, but also in Software-Defined, because what we've heard from you is about that lack of legacy: we don't have a huge router business or a huge SAN business to protect. It's that lack of legacy that's enabling us to invest and get ahead of the curve on this next transition to Software-Defined. So you're going to see us doing that through building our internal IP, through some significant joint ventures, and also through some mergers and acquisitions over the next several quarters. Altogether, we're driving to be the most trusted data center provider in the industry between us and our customers and our suppliers. So a quick summary of what we're going to dive into today, both in my keynote as well as in the breakout sessions. We're in this transformation to the next phase of Lenovo's data center growth.
We're closing out our previous transformation. We actually, believe it or not, in the last six months or so, have renegotiated 18,000 contracts in 160 countries. We built out an entire end-to-end organization from development and architecture all the way through sales and support. This next transformation, I think, is really going to excite Lenovo shareholders. We're building the largest data center portfolio in our history. When IBM was up here a couple of years ago, we might have had two or three servers to announce in time to market with the next Intel platform. Today, we're announcing 14 new servers, seven new storage systems, and an expanded networking portfolio based on our legacy with Blade Network Technologies and other companies we've acquired. Two new brands that we'll talk about for both data center infrastructure and Software-Defined, a new set of premium premier services, as well as a set of engineered solutions that are going to help our customers get to market faster. We're going to be celebrating our 20 millionth x86 server and, as Rod said, 25 years in x86 server compute, and Christian will be up here talking about 25 years of ThinkPad as well. And then a new end-to-end segmentation model, because all of these strategies without execution are kind of meaningless. I hope to give you some confidence in the transformation that Lenovo has gone through as well. So, having observed Lenovo from one of its largest partners, Intel, for more than a couple of decades, I thought I'd just start with why we have confidence in the foundation that we're building off of as we move from a PC company into a data center provider in a much more significant way. So Lenovo today is a company of $43 billion in sales. Absolutely astonishing; it puts us at about Fortune 202 as a company, with 52,000 employees around the world.
We have a little over 10,000 service personnel who service our servers and data center technologies in over 160 countries and provide onsite service and support. We have seven data center research centers. One of the reasons I came from Intel to Lenovo was that I saw that Lenovo became number one in PCs not through cost cutting but through innovation. It was Lenovo that was partnering on the next-generation Ultrabooks and two-in-ones and tablets and the Moto Mods that you saw, but fundamentally, our path to number one in data center is going to be built on innovation. Lastly, we're one of the last companies that's actually building our own motherboards at our own motherboard factories, with five global data center manufacturing facilities. Today, we build about four devices a second, but we also build over 100 servers per hour, and the cost economics we get, and I just visited our Shenzhen factory, of having everything from screws to microprocessors come up through the elevator on the first floor, go left to build PCs and ThinkPads and go right to build server technology, mean we have some of the world's most cost-effective solutions, so we can compete in things like hyperscale computing. So it's with that that I think we're excited about the foundation we can build off of in the Data Center Group. Today, as we stated, this event is about transformation, and today I want to talk about three things we're going to transform. Number one is the customer experience. Number two is the data center and our customer base with Software-Defined infrastructure, and the third is to talk about how we plan to execute flawlessly with a new transformation that we've had internally at Lenovo. So let's dive into it. On customer experience, really, what does it mean to transform customer experience? Industry pundits say that if you're not constantly innovating, you can fall behind.
Certainly the technology industry that we're in is transforming at record speed. 42% of business leaders and CIOs say that digital first is their top priority, but fewer than 50% actually admit that they have a strategy to get there. So people are looking for a partner to keep pace with that innovation and change, and that's really what we're driving to at Lenovo. So today we're announcing a set of plans to take another step function in customer experience, building off of our number one position. Just recently, Gartner showed Lenovo as the number 24 supply chain among companies over $12 billion. We're up there with Amazon and Coca-Cola, and we've now completely re-architected our supply chain in the Data Center Group from end to end. Today, we can deliver 90% of our SKUs, order to ship, in less than seven days. The artificial intelligence that YY mentioned is optimizing our performance even further. In services, as we talked about, we're now in 160 countries, with on-site support and 50 different call centers around the world for local language support, and we're today announcing a whole set of new premier support services that I'll get into in a second. But we're building on what's already better than 90% customer satisfaction in this space. And then in development, for all the engineers out there, we started foundationally for this new set of products by talking about being number one in reliability, with the lowest downtime of any x86 server vendor on the planet, and these systems today are architected to basically extend that leadership position. So let me tell you the realities of reliability. This is ITIC; it's a reliability report. 750 CIOs and IT managers from more than 20 countries: North America, Europe, Asia, Australia, South America, Africa. This isn't anything that's paid for with sponsorship dollars. Lenovo has been number one for four years running in x86 reliability.
This is the amount of downtime, four hours or more, in mission-critical environments from the leading x86 providers. You can see, relative to our top two competitors that are ahead of us, HP and Dell, you can see from ITIC why we are building foundationally off of this, and why it's foundational to how we're developing these new platforms. In customer satisfaction, we are also rated number one in x86 server customer satisfaction. This year, we're now incentivizing every single Lenovo employee on customer satisfaction and customer experience. It's been a huge mandate from myself and, most importantly, YY as our CEO. So you may say, well, what is the basis of this number one in customer satisfaction? It's not just being number one in one category, it's actually being number one in 21 of the 22 categories that TBR tracks. So whether it's performance, support systems, online product information, or parts availability and replacement, Lenovo is number one in 21 of the 22 categories, and number one for six consecutive studies going back to Q1 of 2015. So this, again, as we talk about the new product introductions, is something that we absolutely want to build on, and we're humbled by it, and we want to continue to do better. So let's start now on the new products and talk about how we're going to transform the data center. So today, we are announcing two new product offerings: ThinkAgile and ThinkSystem. If you think about the 25 years of ThinkPad that Christian's going to talk about, Lenovo has a continuous learning culture. We're fearless innovators, we're risk takers, we continuously learn, but, most importantly, I think we're humble and we have some humility. When we fail, we can fail fast, we learn, and we improve. That's really what drove ThinkPad to number one. It took about eight years from the acquisition of IBM's PC business before Lenovo became number one, but it was that innovation, that listening and learning, and then improving.
As you look at the 25 years of ThinkPad, there were some amazing successes, but there were also some amazing failures along the way, but each and every time we learned and made things better. So this year, as Rod said, we're not just celebrating 25 years of ThinkPad, but we're celebrating 25 years of x86 server development since the original IBM PC servers in 1992. It's a significant day for Lenovo. Today, we're excited to announce two new brands. ThinkSystem and ThinkAgile. It's an important new announcement that we started almost three years ago when we acquired the x86 server business. Why don't we run a video, and we'll show you a little bit about ThinkSystem and ThinkAgile. >> Narrator: The status quo is comfortable. It gets you by, but if you think that's good enough for your data center, think again. If adoption is becoming more complicated when it should be simpler, think again. If others are selling you technology that's best for them, not for you, think again. It's time for answers that win today and tomorrow. Agile, innovative, different. Because different is better. Different embraces change and makes adoption simple. Different designs itself around you. Using 25 years of innovation and design and R&D. Different transforms, it gives you ThinkSystem. World-record performance, most reliable, easy to integrate, scales faster. Different empowers you with ThinkAgile. It redefines the experience, giving you the speed of Cloud and the control of on-premise IT. Responding faster to what your business really needs. Different defines the future. Introducing Lenovo ThinkSystem and ThinkAgile. (exciting and slightly aggressive digital instrumental) >> All right, good stuff, huh? 
(audience applauds) So it's built off of this 25-year history of us being in the x86 server business, and the commitment we established three years ago, after acquiring the x86 server business, to have the most reliable, the most agile, and the highest-performing data center solutions on the planet. So today we're announcing two brands. ThinkSystem is for the traditional data center infrastructure, and ThinkAgile is our brand for Software-Defined infrastructure. Again, the teams challenged themselves from the start: how do we build off this rich heritage, expanding our position as number one in customer satisfaction, reliability, and one of the world's best supply chains? So let's start and look at the next set of solutions. We have always prided ourselves that little things don't mean a lot. Little things mean everything. So today, as we said on the legacy solutions, we have over 30 world-record performance benchmarks on Intel architecture, and actually more than 150 since we started tracking this back in 2001. So it's the little pieces of innovation. It's the fine-tuning that we do with our partners, like an Intel or a Microsoft, an SAP, VMware, and Nutanix, that's enabling us to get these world-record performance benchmarks, and with this next generation of solutions we think we'll certainly continue to do that. So today we're announcing the most comprehensive portfolio ever in our data center history. There are 14 servers, seven storage devices, and five network switches. We're also announcing, which is super important to our customer base, a set of new premier service options. That's giving you fast access directly to a level-two support person. No automated response system involved. You get to pick up the phone and directly talk to a level-two support person who's going to have end-to-end ownership of the customer experience for ThinkSystem. With ThinkAgile, that's going to be completely bundled with every ThinkAgile you purchase.
In addition, we're offering white-glove service on site that will actually unbox the product for you and get it up and running. It's an entirely new set of solutions for hybrid Cloud, for big data analytics, and for database applications around these engineered solutions. These are like 40- to 50-page guides where we fine-tuned the most important applications, around virtual desktop infrastructure and those kinds of applications, working side by side with all of our ISV partners. So we're significantly expanding not just the hardware but the software solutions that, obviously, you, as our customers, are running. So if you look at ThinkSystem innovation, again, it was designed for the ultimate in flexibility, performance, and reliability. It's now a single unified brand that combines what used to be the Lenovo ThinkServer and the IBM System x products into a single brand that spans server, storage, and networking. We're basically future-proofing it for the next-generation data center. It's a significantly simplified portfolio. One of the big pieces of feedback we've heard is that the complexity of our competitors has really been overwhelming to customers. We're building a more flexible, more agile solution set that requires less work, less qualification, and more future-proofing. There are a bunch of things in this that you'll see in the demos. Faster time-to-service in terms of the modularity of the systems: 12% faster service, equating to almost $50 thousand per hour of reduced downtime. Some new high-density options where we have four nodes in a 2U, twice the density, to reduce outages in mission-critical workloads. And then in high-performance computing and supercomputing, we're going to spend some time on that here shortly. We're announcing new water-cooled solutions. We have some of the most premier water-cooled solutions in the world, with more than 25 patents pending now just in the water-cooled solutions for supercomputing.
The performance that we think we're going to see out of these systems is significant. We're building off of the legacy that we have today on the existing Intel solutions. Today, we believe we have more than 50% of the SAP HANA installations in the world. In fact, SAP just went public that they're running their internal SAP HANA on Lenovo hardware now. We're seeing a 59% increase in performance on SAP HANA generation on generation. We're seeing 31% lower total cost of ownership. We believe this will continue our position of having the highest level of five-nines availability in the x86 server industry. And all of these servers will start being available later this summer when the Intel announcements come out. We're also announcing the largest storage portfolio in our history, significantly larger than anything we've done in the past. These are all available today, including some new value-class storage offerings. Our network portfolio is now expanding significantly. It was a big surprise when I came to Lenovo, seeing the hundreds of engineers we had from the acquisition of Blade Network Technologies and others, with our teams in Romania and Santa Clara, really building out both the embedded portfolio but also the top-of-rack switches, which are around 10 gig, 25 gig, and 100 gig. Significantly better economics, but all the performance you'd expect from the largest networking companies in the world. Those are also available today. On ThinkAgile and Software-Defined, I think the one thing that has kind of overwhelmed me since coming in to Lenovo is that we are being embraced by our customers because of our lack of legacy. We're not trying to sell you one more legacy SAN at 65% margins. ThinkAgile really was founded, kind of born free, from the shackles of legacy thinking and legacy infrastructure. This is just the beginning of what's going to be an amazing new brand in the transformation to Software-Defined. So, for Lenovo, we're going to invest in our own internal organic IP.
I'll foreshadow: there are some significant joint ventures and some mergers and acquisitions that are going to be coming in this space. And so this will be the foundation for our Software-Defined networking and storage, for IoT, and ultimately for the 5G build-out as well. This is all built for data centers of tomorrow that require fluid resources, tightly integrated software and hardware in kind of an appliance, selling at the rack level, and so we'll show you how that is going to take place here in a second. ThinkAgile, we have a few different offerings. One is around hyperconverged storage, Hybrid Cloud, and also Software-Defined storage. So we're really trying to redefine the customer experience. There are two different solutions we have today: a Microsoft Azure solution and a Nutanix solution. These are going to be available both in the appliance space as well as in a full rack solution. We're really simplifying and trying to transform the entire customer experience from how you order it. We've got new capacity planning tools; what used to take literally days to get the capacity planning done is now going down to literally minutes. We've got new order, delivery, deployment, and administration services, something we're calling ThinkAgile Advantage, which is the white glove unboxing of the actual solutions on prem. So you'll hear the whole story in the breakout sessions about transforming the entire customer experience, with both an HX solution and an SX solution. So again, available at the rack level for both Nutanix and Microsoft solutions, in just a few months. Many of you in the audience since the Microsoft Airlift event in Seattle have started using these things, and the feedback to date has been fantastic. We appreciate the early customer adoption that we've seen from people in the audience here. 
So next I want to bring up one of our most important partners, and certainly if you look at all of these solutions, they're based on the next-generation Intel Xeon scalable processor that's going to be announced very, very soon. I want to bring on stage Rupal Shah, who's the corporate vice president and general manager of Global Data Center Sales with Intel, so Rupal, please join me. (upbeat instrumental) So certainly I have long roots at Intel, but why don't you talk about, from Intel's perspective, why Lenovo is an important partner for Intel. >> Great, well first of all, thank you very much. I've had the distinct pleasure of not only working with Kirk for many, many years, but also working with Lenovo for many years, so it's great to be here. Lenovo is not only a fantastic supplier and leader in the industry for Intel-based servers but also a very active partner in the Intel ecosystem, specifically in our partner programs and in our builder programs around Cloud, around the network, and around storage. I personally have had a long history in working with Lenovo, and I've seen personally that PC transformation that you talked about, Kirk, and I believe, and I know that Intel believes, in Lenovo's ability to not only succeed in the data center but to actually lead in the data center. And so today, the ThinkSystem and ThinkAgile announcement is just so incredibly important. It's such a great testament to our two companies working together, and the innovation that we're able to bring to the market, and all of it based on the Intel Xeon scalable processor. >> Excellent, so tell me a little bit about why we've been collaborating, tell me a little bit about why you're excited about ThinkSystem and ThinkAgile, specifically. >> Well, there are a lot of reasons that I'm excited about the innovation, but let me talk about a few. First, both of our companies really stand behind the fact that it's increasingly a hybrid world. 
Our two companies offer a range of solutions now to customers to be able to address their different workload needs. ThinkSystem really brings the best, right? It brings incredible performance, flexibility in data center deployment, and the industry-leading reliability that you've talked about. And, as always, Xeon has a history of being built for the data center specifically. The Intel Xeon scalable processor is really re-architected from the ground up in order to enhance compute, network, and storage data flows so that we can deliver workload-optimized performance for a wide range of traditional workloads and traditional needs, but also some emerging new needs in areas like artificial intelligence. Second is when it comes to the next generation of Cloud infrastructure, the new Lenovo ThinkAgile line offers a truly integrated offering to address data center pain points, and so not only are you able to get these pretested solutions, but these pretested solutions are going to get deployed in your infrastructure faster, and they're going to be deployed in a way that's going to meet your specific needs. This is something that is new for both of us, and it's an incredible innovation in the marketplace. I think that it's a great addition to what is already a fantastic portfolio for Lenovo. >> Excellent. >> Finally, there's high-performance computing. First of all, congratulations. It's a big week, I think, for both of us. Fantastic work that we've been doing together in high-performance computing, actually bringing the best of the best to our customers, and you're going to hear a whole lot more about that. We obviously have a number of joint innovation centers together between Intel and Lenovo. Tell us about some of the key innovations that you guys are excited about. >> Well, Intel and Lenovo, we do have joint innovation labs around the world, and we have a long and strong history of very tight collaboration. 
This has brought a big wave of innovation to the marketplace in areas like software-defined infrastructure. Yet another area is working closely on a joint vision that I think our two companies have in artificial intelligence. Intel is very committed to the world of AI, and we're committed to making the investments required in technology development, in training, and also in R&D to be able to deliver end-to-end solutions. So with Intel's comprehensive technology portfolio and Lenovo's development and innovation expertise, it's a great combination in this space. I've already talked a little bit about HPC and so has Kirk, and we're going to hear a little bit more to come, but we're really building the fastest compute solutions for customers that are solving big problems. Finally, we often talk about processors from Intel, but it's not just about the processors. It's way beyond that. It's about engaging at the solution level for our customers, and I'm so excited about the work that we've done together with Lenovo to bring to market products like Intel Omni-Path Architecture, which is really the fabric for high-performance data centers. We've got a great showing this week with Intel Omni-Path Architecture, and I'm so grateful for all the work that we've done to be able to bring true solutions to the marketplace. I'm really looking forward to continuing our collaboration with Lenovo as we have in the past. I want to thank you again for inviting me here today, and congratulations on a fantastic launch. >> Thank you, Rupal, very much, for the long partnership. >> Thank you. (audience applauds) >> Okay, well now let's transition and talk a little bit about how Lenovo is transforming. The first thing we did when I came on board about six months ago was transform to a truly end-to-end organization. 
We're looking at the market segments, I think, as our customers define them, and we've organized into having vice presidents and senior vice presidents in charge of each of these major groups, thinking really end to end, from architecture all the way to end of life and customer support. So the first is hyperscale infrastructure. It's about 20% of the market by 2020. We've hired a new vice president there to run that business. Given we can make money in high-volume desktop PCs, it's really the manufacturing prowess and deep engineering collaboration that's enabling us to sell into Baidu, Alibaba, and Tencent, as well as the largest Cloud vendors on the West Coast here in the United States. We believe we can make money here by having a deep, deep engineering engagement with our key customers and building on the PC volume economics that we have within Lenovo. On software-defined infrastructure, again, it's that lack of legacy that I think is propelling us into this space. We're not encumbered by trying to sell one more legacy SAN or router, and that's really what's exciting us here, as we transform from a hardware to a software-based company. On HPC and AI, as we said, we'll talk about this in a second. We're the fastest-growing supercomputing company on earth. We have aspirations to be the largest supercomputing company on earth, and with China and the U.S. vying for the number one position, it puts us in a good position there. We're going to bridge that into artificial intelligence at our upcoming Shanghai Tech World, where the entire day is around AI. In fact, YY has committed $1.2 billion of R&D to artificial intelligence over the next few years to help us bridge that. 
And then data center infrastructure is really about moving to a solutions-based infrastructure, like our position with SAP HANA, where we've gone deep with engineers on site at SAP, SAP running their own infrastructure on Lenovo, and building that out beyond just SAP to other solutions in the marketplace. Overall, significantly expanding our services portfolio to maintain our number one customer satisfaction rating. So given ISC, or International Supercomputing, is this week in Frankfurt, and a lot of my team are actually over there, I wanted to just show you the transformation we've had at Lenovo for delivering some of the technology to solve some of the most challenging humanitarian problems on earth. Today, we are the fastest-growing supercomputer company on the planet in terms of number of systems on the Top500 list. We've gone from zero to 92 positions in just a few short years, but IDC also positions Lenovo as the fastest-growing supercomputer and HPC company overall, at about 17% year-on-year growth, including all of the broad channel, the regional universities, and that kind of thing, so this is an exciting place for us. I'm excited that Sergi has come all the way from Spain to be with us today. It's an exciting time because this week we announced the fastest next-generation Intel supercomputer on the planet at the Barcelona Supercomputing Center. Before I bring Sergi on stage, let's run a video and I'll show you why we're excited about the capabilities of these next-generation supercomputers. Run the video please. >> Narrator: Different creates one of the most powerful supercomputers for the Barcelona Supercomputing Center. A high-performance, high-capacity design to help shape tomorrow's world. Different designs what's best for you, with 25 years of end-to-end expertise delivering large-scale solutions. 
It integrates easily with technology from industry partners, through deep collaboration with the client to manufacture, test, configure, and install at global scale. Different achieves the impossible. The first of a new series. A more energy-efficient supercomputer, yet 10 times more powerful than its predecessor. With over 3,400 Lenovo ThinkSystem servers, each performing over two trillion calculations per second, giving us 11.1 petaflops of capacity. Different powers MareNostrum, a supercomputer that will help us better understand cancer, help discover disease-fighting therapies, and predict the impact of climate change. MareNostrum 4.0 promises to uncover answers that will help solve humanity's greatest challenges. (audience applauds) >> So please help me in welcoming the operations director of the Barcelona Supercomputing Center, Sergi Girona. So welcome, and again, congratulations. It's been a big week for both of us. For a long time, if you haven't been to Barcelona, this has been called the world's most beautiful computer, because it's in one of the most gorgeous chapels in the world, as you can see here. Congratulations, we are now number 13 on the Top500 list and the fastest next-generation Intel computer. >> Thank you very much, and congratulations to you as well. >> So maybe we can just talk a little bit about what you've done over the last few months with us. >> Sure, thank you very much. It is a pleasure for me to be invited here to present to you what we've been doing with Lenovo so far and what we are planning to do in the near future. I'm representing the Barcelona Supercomputing Center. We provide high-performance computing services to science and industry. The way we see it, these services have changed the paradigm of science. We are coming from observation, from the telescopes and the microscopes and the building of infrastructures, but this is not affordable anymore. 
This is very expensive, so it's not possible, so we need to move to simulations. We need to understand what's happening in our environment. We need to predict behaviors only by going through simulation. So, at BSC, we are devoted to providing services to industry and to science, but we are also doing our own research, because we want to understand. At the same time, we are helping to develop the new engineers of the future in IT and HPC. So we have four departments based on different topics. The main and biggest one is working to understand how we build the next supercomputers, from the programming level to performance to AI, all these things. But we also have interests in climate change: what's the air quality in our cities, what is the precision medicine we need to have, how we can see that different drugs are better for different individuals, for different humans. And of course we have an energy department, taking care of understanding what's the best optimization for a code, how we can save energy running simulations on different topics. But, of course, the topic of today is not my research, but the systems we are building in Barcelona. So this is what we have been building in Barcelona so far. From left to right, you have the preparation of the facility, because this is 160 square meters with 1.4 megawatts, so that means we need new piping, we need new electricity. At the same time, in the center, we have to install the core services of the system, so the management services, and then on the right-hand side you have the installation of the networking, the Omni-Path by Intel. Because all of the new racks have to be fully integrated, and they need to come into operation rapidly. So we started deployment of the system May 15, and we've now finished, coming into production July 1st. All the systems, all the (mumbles) systems from Lenovo, came in before being open and available. 
What we've been installing here in Barcelona is a general purpose system for our general workload, with 3,456 nodes. Every one of those has 48 cores and 96 gigabytes of main memory, for a total capacity of about 400 terabytes of memory. The objective is that we want all the system, all the processors, to work together on a single execution, running all together, so this is an example of the Platinum processors from Intel, having 24 cores each. Of course, for doing this together with all the cores in the same application, we need a high-speed network, so this is Omni-Path, and of course all these cables are connecting all the nodes. Non-contention, working together, cooperating. Of course, this is a bunch of cables. They need to be properly aligned in switches. So here you have the complete installation. Of course, this is general purpose, but we wanted to invest with our partners. We want to understand what supercomputers we want to install in 2020, (mumbles) Exascale. To find out, we are installing as well systems with different capacities: with KNH, with POWER, with ARM processors. We want to leverage our applications for the future. We want to make sure that in 2020 we are ready to move our users rapidly to the new technologies. Of course, in total, this gives us a total capacity of 13.7 petaflops, which is 12 times the capacity of the former MareNostrum from four years ago. We need to provide the services to our scientists, because they are helping to solve problems for humanity. That's the place we are going to go. Last, I invite you to come to Barcelona to see our place and our chapel. Thank you very much. (audience applauds) >> Thank you. So now you can all go home to your spouses and significant others and say you have a formal invitation to Barcelona, Spain. So last, I want to talk about what we've done to transform Lenovo. 
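The headline numbers above are easy to sanity-check. A small sketch of the arithmetic, with two assumptions not stated in the talk: that the predecessor (MareNostrum 3) peaked at roughly 1.1 petaflops, and that the ~400 TB memory figure includes higher-memory node configurations beyond the base 96 GB:

```python
# Sanity-checking the MareNostrum 4 figures quoted in the talk.
nodes = 3_456
cores_per_node = 48   # 2 sockets x 24-core Xeon Platinum per node
base_mem_gb = 96      # per node, base configuration

total_cores = nodes * cores_per_node
print(f"total cores: {total_cores:,}")  # 165,888

# 11.1 PF across 3,456 nodes -> per-node peak, in teraflops:
# consistent with "over two trillion calculations per second" per server
per_node_tflops = 11.1e3 / nodes
print(f"per-node peak: {per_node_tflops:.2f} TF")

# Base memory math gives ~332 TB; the quoted ~400 TB total presumably
# includes higher-memory nodes (our assumption, not from the talk).
print(f"base memory: {nodes * base_mem_gb / 1000:.0f} TB")

# "12 times the former MareNostrum", assuming ~1.1 PF for MareNostrum 3.
print(f"speedup vs. predecessor: {13.7 / 1.1:.1f}x")
```

The figures hang together: ~3.2 TF per node is indeed "over two trillion calculations per second," and 13.7 PF over ~1.1 PF rounds to the 12x the speaker cites.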
I think we all know the history is nice, but without execution, none of this is going to be possible going forward, so we have been very, very busy over the last six months to a year transforming Lenovo's data center organization. First, we moved to a dedicated end-to-end sales and marketing organization. In the past, we had people that were shared between PC and data center; now thousands of salespeople around the world are 100% dedicated end to end to our data center clients. We've moved to a fully integrated and dedicated supply chain and procurement organization. A fully dedicated quality organization, 100% dedicated to expanding our data center success. We've moved to customer-centric segments, again, bringing in significant new leaders from outside the company to look end to end at each of these segments, supercomputing being very, very different from small business, being very, very different from taking care of, for example, a large retailer or bank. So around hyperscale, software-defined infrastructure, HPC, AI and supercomputing, and data center solutions-led infrastructure. We've built out a whole new set of global channel programs. A year ago, we had five different channel programs around the world. We've now got one simplified channel program with deal registration. I think our channel is very, very energized to go out to market with Lenovo technology across the board, and a whole new set of system integrator relationships. You're going to hear from one of them in Christian's discussion, but a whole new set of partnerships to build solutions together with our system integrator partners. And, again, as I mentioned, a brand new leadership team. So I look forward to talking about the details of this. There's been a significant amount of transformation internal to Lenovo that's led to the success of this new product introduction today. So in conclusion, I want to talk about the news of the day. 
We are transforming Lenovo for the next phase of our data center growth. Again, in over 160 countries, closing on that first phase of transformation and moving forward with some unique declarations. We're launching the largest portfolio in our history, not just in servers but in storage and networking, as everything becomes kind of a software personality on top of x86 compute. We think we're very well positioned with our scale in PCs as well as data center. Two new brands for both data center infrastructure and Software-Defined, without the legacy shackles of our competitors, enabling us to move very, very quickly into Software-Defined, and, again, foreshadowing some joint ventures and M&A that are coming up that will further accelerate us there. New premiere support offerings, enabling you to get direct access to level-two engineers, and white glove unboxing services, which are going to be bundled along with ThinkAgile. And then celebrating the milestone of 25 years in x86 server compute, not just the ThinkPads that you'll hear about shortly, but also our 20 millionth server shipping next month. So we're celebrating that legacy and looking forward to the next phase. And then making sure we have the execution engine to maintain our position and grow it, being number one in customer satisfaction and number one in quality. So, with that, thank you very much. I look forward to seeing you in the breakouts today and talking with many of you, and I'll bring Rod back up to transition us to the next section. Thank you. (audience applauds) >> All right, Kirk, thank you, sir. All right, ladies and gentlemen, what did you think of that? How about a big round of applause for the ThinkAgile and ThinkSystem new brands? (audience applauds) And, obviously, with that comes a big round of applause for Kirk Skaugen, my boss, so we've got to give him a big round of applause, please. I need to stay employed, it's very important. 
All right, now you just heard from Kirk about some of the new systems, the brands. How about we have a quick look at the video, which shows us the brand new DCG images. >> Narrator: Legacy thinking is dead, stuck in the past, selling the same old stuff, over and over. So then why does it seem like the data center, you know, that thing powering all our little devices and more or less every interaction today, is still stuck in legacy thinking? It's rigid, inflexible, slow. But that's not us. We don't do legacy. We do different. Because different is fearless. Different reduces Cloud deployment from days to hours. Different creates agile technology that others follow. Different is fluid. It uses water-cooling technology to save energy. It co-innovates with some of the best minds in the industry today. Different is better, smarter. Maybe that's why different already holds so many world-record benchmarks in everything from virtualization to database and application performance, or why it's number one in reliability and customer satisfaction. Legacy sells you what they want. Different builds the data center you need without locking you in. Introducing the Data Center Group at Lenovo. Different... Is better. >> All right, ladies and gentlemen, a big round of applause, once again, (mumbles) DCG, fantastic. And I'm sure all of you would agree, and Kirk mentioned it a couple of times there, no legacy means a real consultative approach to our customers, and that's something that we really feel is differentiated for ourselves. We are effectively now one of the largest startups in the DCG space, and we are very much ready to disrupt. Now, here in New York City, obviously, the heart of the fashion industry, and much like fashion, as I mentioned earlier, we're different, we're disruptive, we're agile, smarter, and faster. I'd like to say that about myself, but, unfortunately, I can't. But those of you who have observed may have noticed that I, too, have transformed. 
I don't know if anyone saw that. I've transformed from the pinstripe blue, white shirt, red tie look of, shall we say, our predecessors who owned the x86 business, to now a very Lenovo look. No tie, and consequently a little bit more chic, New York sort of fashion look, shall I say. Nothing more than that. So anyway, a bit of a transformation. It takes a lot to get to this look, by the way. It's a lot of effort. Our next speaker, Christian Teismann, is going to talk a lot about the core business of Lenovo, which really has been, as we've mentioned today, our ThinkPad, with its 25-year anniversary this year. It's going to be a great celebration inside Lenovo, and as we get through the year and closer to the day, you'll see a lot more social and digital work that engages our customers, partners, analysts, et cetera, when we get close to that birthday. Customers just generally are a lot tougher on computers. We know they are. Whether you hang onto it between meetings by the corner of the notebook, and that's why we have a magnesium chassis inside the box, or whether you're just dropping it or hypothetically doing anything else like that. We do a lot of robust testing on these products, and that's why it's the number one branded notebook in the world. Christian talks a lot about this, but I thought instead of having him talk, I might just do a little impromptu jump back stage, and I'll show you exactly what I'm talking about. So follow me for a second. I'm going to jaunt this way. I know a lot of you would have seen, obviously, the front of house here, what we call the front of house, lots of videos, et cetera, but I don't think many of you would have seen the back of house, so I'm going to jump through the back here. Hang on one second. You'll see us when we get here. Okay, let's see what's going on back stage right now. You can see one of the team here in the back stage is obviously working on their keyboard. 
Fantastic, let me tell you, this is one of the key value props of this product: obviously still working, lots of coffee all over it, a spill-proof keyboard, one of the key value propositions and why this is the number one laptop brand in the world. Congratulations there, well done for that. Obviously, we test these things. Heights, distances, MIL-SPEC approved, once again, a fantastic product, pick that up, lovely. Absolutely resistant to drops from any height, once again, in line with our MIL-SPEC testing. This is Charles, our producer and director back stage for the whole event. You can see, once again, sand, coincidentally, in Manhattan, who would have thought a sand storm was occurring here, but you can throw sand. We test these things for all of the elements. I've obviously been pretty keen on our development solutions, having lived in Japan for 12 years. We had this originally designed in 1992 by (mumbles), and he's still our chief development officer today, fantastic, congratulations, a sand-enhanced notebook, he'd love that. All right, let's get back out front and on with the show. Watch the coffee. All right, how was that? Not too bad (laughs). It wasn't very impromptu at all, was it? Not at all a set up (giggles). How many people have events and have a bag of sand sitting on the floor right next to a notebook? I don't know. All right, now it's time, obviously, to introduce our next speaker, ladies and gentlemen, and I hope I didn't steal his thunder, obviously, in my conversations just now that you saw back stage. 
He's one of my best friends in Lenovo and easily a great representative of our legendary PC products and solutions that we're putting together for all of our customers right now. Having been an expat with Lenovo in New York, he really calls this his second home and is continually fighting with me over the fact that he believes New York has better sushi than Tokyo. Let's welcome, please, Christian Teismann, our SVP, Commercial Business Segment, and PC Smart Office. Christian Teismann, come on up, mate. (audience applauds) >> So Rod, thank you very much for this wonderful introduction. I'm not sure how much there is to add to what you have seen already back stage, but there is 25 years of history I will touch a little bit on, and also a very big transformation. But first of all, welcome to New York. As Rod said, it's my second home, but it's also a very important place for the ThinkPad, and I will come back to this later. The ThinkPad is the industry standard of business computing. It's an industry icon. We are celebrating 25 years this year like no other PC brand has done before. But the story today is not only looking back. It's a story looking forward, about the future of the PC, and we see a transformation from PCs to personalized computing. I am privileged to lead the commercial PC and Smart device business for Lenovo, but much more important, beyond product, I am also responsible for customer experience. And this is what really matters on an ongoing basis. But allow me to stay a little bit longer with our iconic ThinkPad and the history of the last 25 years. ThinkPad has always stood for two things, and it always will: the highest quality in the industry and technology innovation leadership that matters. That matters for you, and that matters for your end users. So, now let me step back a little bit in time. As Rod was showing you, as only Rod can do, reliability is a very important part of the ThinkPad story. 
ThinkPads have been used everywhere and done everything. They have survived fires and extreme weather, and they keep surviving your end users. For 25 years, they have been built for real business. ThinkPad also has a legacy of first innovations. There are so many firsts over the last 25 years, we could spend an hour talking about them. But I just want to cover a couple of the most important milestones. First of all, the ThinkPad was developed and invented in Japan in 1992, based on the design of a bento box. It was designed by the famous industrial designer Richard Sapper. Did you also know that the ThinkPad was the first commercial notebook flying into space? In '93, we traveled with the space shuttle for the first time. For two decades, ThinkPads were on every single mission. Did you know that the ThinkPad Butterfly, the iconic ThinkPad that unfolds its keyboard to full size, is the first and only computer showcased in the permanent collection of the Museum of Modern Art, right here in New York City? Ten years later, in 2005, IBM passed the torch to Lenovo, and the story got even better. Over the last 12 years, we sold over 100 million ThinkPads, four times the amount IBM sold in the same time. Many customers were concerned at that time, but since then, the ThinkPad has remained the best business notebook in the industry, with even better quality, but most important, we kept innovating. In 2012, we unveiled the X1 Carbon. It was the thinnest, lightest, and still most robust business PC in the world. Using advanced composite materials like a Formula One car, for super strength, the X1 Carbon has become our ThinkPad flagship since then. We've added an X1 Yoga, a 360-degree convertible, an X1 Tablet, a detachable, and many new products to come in the future. Over the last few years, many new firsts have been focused on providing the best end-user experience. The first dual-screen mobile workstation. 
The first Windows business tablet, and the first business PC with OLED screen technology. History is important, but a massive transformation is on the way. Future success requires us to think beyond the box. Think beyond hardware, think beyond notebooks and desktops, and think about the future of personalized computing. Now, why is this happening? Well, because the business world is rapidly changing. Looking back on the history that YY gave, the acceleration of innovation and how it changes our everyday life, in business and in our personal lives, is driving a massive change in our industry too. Most important, because you are changing faster than ever before. Human capital is your most important asset. Today's generation wants freedom of choice. They want a product that is tailored to their specific needs, every single day, every single minute they use it. But IT is also changing. The Cloud, constant connectivity, 5G will change everything. Artificial intelligence is adding things to the capability of an infrastructure that we are just starting to imagine. Let me talk about the workforce first, because it's the most important part of what drives this. Millennials will comprise more than half of the world's workforce in 2020, three years from now. Already, one out of three millennials is prioritizing a mobile work environment over salary, and for nearly 60% of all new hires in the United States, technology is a very important factor in their job search, in terms of the way they work and the way they are empowered. This generation of new employees has grown up with PCs, with Smart phones, with tablets, with touch, for their personal use and for their professional use. They want freedom. Second, the workplace is transforming. The video you see here in the background is our North America headquarters in Raleigh, where we have a brand new Smart workspace. We have transformed this to attract the new generation of workers. 
It has fewer traditional workspaces, many more meeting and collaborative spaces, and Lenovo, like many companies, is seeing workspaces getting smaller. The average workspace per employee has decreased by 30% over the last five years. Employees are increasingly mobile, but, if they come to the office, they want to collaborate with their colleagues. The way we collaborate and communicate is changing. Investment in new collaboration technology is exploding. The market for collaboration technology exceeds the market for personal computing today, and it will keep growing in the future. Conference rooms are being re-imagined: from a ratio of 50 employees to one large conference room, we are today moving into scenarios of four employees to one conference room, and these are huddle rooms, pioneer spaces. Technology is everywhere. Video, mega-screens, audio, electronic whiteboards. Adaptive technologies are popping up and changing the way we work. As YY said earlier, the pace of the revolution is astonishing. So personalized computing will transform the PC we all know. There are a couple of key factors that we are integrating into our next generations of PCs as we go forward. The most important trends that we see. First of all, choose your own device. We talked about this new generation of workforce. Employees are used to choosing their own device. We have to respond and offer devices that are tailored to each end user's needs without adding complexity to how we operate them. PC as a service. Corporations are increasingly looking for on-demand computing in the data center as well as in personal computing. Customers want flexibility, a tailored management solution, and a services portfolio that completes the lifecycle of the device. Agile IT, even more important: corporations want to run an infrastructure that is agile, responds instantly to their end customers' needs, and is self-provisioning, self-diagnosing, and capable of remote software repair. Artificial intelligence. 
Think about artificial intelligence for you personally, as your personal assistant. A personal assistant which understands you, your schedule, your travel, your next task, an extension of yourself. We believe the PC will be the center of this mobile device universe. Mobile device synergy. Each of you has two or more devices with you. They need to work together across different operating systems, across different platforms. We believe Lenovo is uniquely positioned as the only company that has a Smart phone business, a PC business, and an infrastructure business to really seamlessly integrate all of these devices for simplicity and for efficiency. Augmented reality. We believe augmented reality will drive significant productivity improvements in commercial business. The core will be understanding industry-specific solutions: new processes, new business challenges, improving things like customer service and sales. Security will remain the foundation for personalized computing. Without security, without trust in the device's integrity, this will not happen. One of the most important trends, I believe, is that the PC will transform to be always connected and always on, like a Smart phone. Regardless of whether it's open or closed, whether you carry it or work with it, it is always capable of responding to you and working with you. 5G is becoming a reality, and the data capacity that will be out there will far exceed anything we can imagine from today's traffic. Finally, Smart Office, delivering flexible and collaborative work environments regardless of where the worker sits, fully integrated and leveraging all the technologies we just talked about. These are the main challenges you and all of your CIO and CTO colleagues have to face today. A changing workforce and a new set of technologies that are transforming the PC into personalized computing. Let me give you a real example of a challenge. 
DXC was just formed by merging CSC and HPE's Enterprise Services business to create the largest independent services company in the world. DXC is now a 25 billion dollar IT services leader with more than 170,000 employees, its most important capital, 6,000 clients, and eight million managed devices. I'd like to welcome their CIO, who has one of the most challenging workforce transformations in front of him. Erich Windmuller, please give him a round of applause. (audience applauds) >> Thank you Christian. >> Thank you. >> It's my pleasure to be here, thank you. >> So first of all, let me congratulate you on this very special time, on forming a new multi-billion-dollar enterprise, this new venture. I think it has so far been fantastically received by analysts, by the press, by customers, and we are delighted to be one of your strategic partners, and clearly we are collaborating around workforce transformation between our two companies. But let me ask you a couple of more personal questions. In bringing these two companies together, with nearly 200,000 employees, what are the first actions you are taking to make this a success, and what are your biggest challenges? >> Well, first, again, let me thank you for inviting me and DXC Technology to be a part of this very, very special event with Lenovo, so thank you. As many of you might expect, it's been a bit of a challenge over the past several months. My goal was really very simple. It was to make sure that we brought two companies together, and they could operate as one. We needed to make sure that we could continue to support our clients. We certainly needed to make sure we could continue to sell, that our sellers could sell, that we could pay our employees, that we could hire people, that we could do all the basic foundational things that you might expect a company would want to do, but we really focused on three simple areas. I called it the three Cs: connect, communicate, and collaborate. 
So we wanted to make sure that we connected our legacy data centers so we could transfer information and communicate back and forth. We certainly wanted to be sure that our employees could communicate via WiFi, whatever locations they might go to. And when we talk about communication, we needed to be sure that every one of our employees could send and receive email as a DXC employee, that we had a single enterprise directory, and that people could communicate and gain access to calendars across each of the two legacy companies. And then collaboration was also key. So we wanted to be sure, again, that people could communicate with each other, that our legacy employees on either side could get access to many of their legacy systems, and, again, that we could collaborate together as a single corporation. So it was challenging, but a very, very great opportunity for all of us. And, certainly, as you might expect, cyber security was a very, very important topic. My chairman challenged me that we had to be at least as good as we were before from a cyber perspective, and when you bring two large companies together like that, there's clearly an opportunity in this disruptive world, so we wanted to be sure that we had a very, very strong cyber security posture, and Lenovo has been very, very helpful in our achieving that. >> Thank you, Erich. So what does DXC consider its critical solutions and technologies for workplace transformation, both internally as well as out in the market? >> So workplace transformation, and, again, I've heard a lot of the same kinds of words that I would espouse... It's all about making our employees productive. It's about giving them the right tools to do their jobs. 
I, personally, have been focused, and you know this because Lenovo has been a very, very big part of this, on working with what we call our My Style Workplace offering team, developing a solution and driving as much functionality as possible down to the workstation. We want to be able, for me, to avoid and eliminate other ancillary costs: audio-video costs, telecommunication costs. The platform that we have, the digitized workstation that Lenovo has provided us, has just got a tremendous amount of capability. We want to streamline those solutions, as well, on top of the modern server. The modern platform, as we call it internally. I'd like to congratulate Kirk and your team that you guys have successfully... Your hardware has been certified on our modern platform, which is a significant accomplishment between our two companies and our partnership. It was really, really foundational. Lenovo is a big part of our digital workstation transformation, and you'll continue to be, so it's very, very important, and I want you to know that your tools and your products have done a significant job in helping us bring two large corporations together as one. >> Thank you, Erich. Last question, what is your view on device as a service and the hardware utility model? >> This is the easy question, right? So who in the room doesn't like PC or device as a service? This is a tremendous opportunity, I think, for all of us. Our corporation, like many of you in the room, is driven by the concept of buying devices in an Opex versus a Capex type of world and being able to pay as you go. I think this is something all of us would like: to procure services and products, personal products if you will, in this type of mode, so I am very eager to work with Lenovo to be sure that we bring forth a very dynamic and constructive device-as-a-service approach. So very eager to do that with Lenovo and bring that forward for DXC Technology. >> Erich, thank you very much. 
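Erich's Opex-versus-Capex point can be made concrete with a toy calculation. Everything below is hypothetical — the fleet size, prices, and subscription fee are invented for illustration and are not DXC's or Lenovo's actual numbers:

```python
# Toy comparison of Capex (buy devices up front, pay support separately)
# versus Opex (device as a service, pay-as-you-go). All figures are
# hypothetical and purely illustrative.

def capex_total(devices, unit_price, support_per_device_per_year, years):
    """Up-front purchase plus ongoing support, summed over the period."""
    return devices * unit_price + devices * support_per_device_per_year * years

def opex_total(devices, monthly_fee, years):
    """Monthly subscription covering hardware and support."""
    return devices * monthly_fee * 12 * years

buy = capex_total(devices=1000, unit_price=1200, support_per_device_per_year=150, years=3)
rent = opex_total(devices=1000, monthly_fee=45, years=3)
print(buy, rent)  # 1650000 1620000
```

With these made-up numbers the three-year totals come out similar; the practical difference Erich is pointing at is cash-flow shape — the Opex model spreads the spend across the period instead of concentrating it up front.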
It's a great pleasure to work with you, today and going forward, on all sides. I think with your new company and our lineup, we have great things to come. Thank you very much. >> My pleasure, great pleasure, thank you very much. >> So, what's next for Lenovo PC? We already have the most comprehensive commercial portfolio in the industry. We have put the end user at the core of our portfolio, today and going forward. Ultra mobile users, like consultants, analysts, sales and service. Heavy compute users, like engineers and designers. Industry users, where we increasingly understand industry-specific use cases like education, healthcare, or banking. So, there are a few exciting things we have to announce today. Obviously, we don't have as broad an announcement as our colleagues from the data center side, but there is one thing that I have that actually... Thank you Rod... Looks like a Bento box, but it's not a ThinkPad. It's a first of its kind. It's the world's smallest professional workstation. It has the power of a tower in a Bento box. It has the newest Intel core architecture, and it's designed for a wide range of heavy-duty workloads. Innovation continues, not only in the ThinkPad but also in desktops and workstations. Second, you've heard much about Smart Office and workspace transformation today. I'm excited to announce that we have made a strategic decision to expand our Think portfolio into Smart Office, and we will soon have solutions on the table in conference rooms, working with strategic partners like Intel and Microsoft. We are focused on a set of devices and a software architecture that, as an IoT architecture, unifies the management of Smart Office. We want to move fast, so our target is to have our first product out later this year. More to come. And finally, what gets me most excited is the upcoming 25th anniversary in October. Actually, if you go to Japan, there are many ThinkPad lovers. 
Actually, beyond lovers, enthusiasts who are collectors. We've been consistently asked in blogs and forums about a special anniversary edition, so let me offer you a first glimpse of what we will announce in October, something we are bringing to market later this year. For the anniversary, we will introduce a limited edition product. This will include throwback features from ThinkPad's history as well as the best and most powerful features of the ThinkPad today. But we are not just making incremental adjustments to the Think product line. We are rethinking the ThinkPad of the future. Well, here is what I would call a concept car. Maybe a ThinkPad without a hinge. Maybe one you can fold. What do you think? (audience applauds) But this is more than just design or look and feel. It's a new set of advanced materials and new screen technologies. It's how you can speak to it or write on it, or how it speaks to you. Always connected, always on, and able to communicate over multiple inputs and outputs. It will anticipate your next meeting, your next travel, your next task. And when you put it all together, it's just another part of the story, which we call personalized computing. Thank you very much. (audience applauds) Thank you, sir. >> Good on ya, mate. All right, ladies and gentlemen. We are now at the conclusion of the day, for this session anyway. I'm going to talk a little bit more about our breakouts and our demo rooms next door. But how about the power with no tower, from Christian, huh? Big round of applause. (audience applauds) And what about the concept car, the ThinkPad? Pretty good, huh? I love that as well. I tell you, it was almost like Leonardo DiCaprio was up on stage at one stage. He put that big ThinkPad concept up, and everyone's phones went straight up and took a photo, the whole audience, so let's be very selective on how we distribute that. I'm sure it's already on Twitter. I'll check it out in a second. 
So once again, the ThinkPad brand is a core part of the organization, and together both DCG and what we call PCSD, which is our client side of the business and Smart device side of the business, are obviously very closely linked in transforming Lenovo for the future. We want to also transform the industry, obviously, and transform the way that all of us do business. Lenovo, if you look at a summary of the day, is highly committed to being a top three data center provider. That is really important for us. We are the largest and fastest growing supercomputing company in the world, and as Kirk actually mentioned earlier on, committed to being number one by 2020. So Madhu, who is in Frankfurt at the International Supercomputing Conference, if you're watching, congratulations, your targets have gone up. There's no doubt he's going to have a lot of work to do. We're obviously very committed to disrupting the data center. That's obviously really important for us. As we mentioned, with both brands, ThinkSystem and ThinkAgile, we are now highly focused on disrupting and ensuring that we do things differently, because different is better. Thank you to our customers, our partners, media, analysts, and of course, once again, all of our employees who have been on this journey with us over the last two years, which is really culminating today in the launch of all of our new products and our profile and our portfolio. It's really thanks to all of you, and to your feedback, that we've been able to get to this day. And now our journey truly begins in ensuring we are disrupting and bringing more value to our customers. Being without the legacy that Kirk mentioned earlier on is really an advantage for us, as we really are that large startup from a company perspective. It's an exciting time to be part of Lenovo. It's an exciting time to be associated with Lenovo, and I hope very much all of you feel that way. 
So a big round of applause for today, thank you very much. (audience applauds) I need to remind all of you. I don't think I'm going to have too much trouble getting you out there, because I was just looking at Christian on the streaming solutions out in the room out the back there, and there's quite a nice bit of lunch out there as well for those of you who are hungry, so at least there's some good food out there. But I think in reality all of you should be getting up into the demo sessions with our segment general managers, because that's really where the rubber hits the road. You've heard from YY, you've heard from Kirk, and you've heard from Christian. All of our general managers and our specialists in our product sets are going to be out there to demonstrate our technology. As we said at the very beginning of this session, this is Transform, obviously the fashion change, hopefully you remember that. Transform, we've all gone through the transformation. It's part of our season of events globally, and our next event is going to be Tech World in Shanghai on the 20th of July. For those of you who are going to attend, I hope you have safe travels over there. We look forward to seeing you. Hope you've had a good morning, and get into the sessions next door so you get to understand the technology. Thank you very much, ladies and gentlemen. (upbeat innovative instrumental)

Published Date : Jun 20 2017



Sujal Das, Netronome - OpenStack Summit 2017 - #OpenStackSummit - #theCUBE


 

>> Announcer: Live from Boston, Massachusetts, it's theCUBE covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> And we're back. I'm Stu Miniman with my cohost, John Troyer, getting to the end of day two of three days of coverage here at the OpenStack Summit in Boston. Happy to welcome to the program Sujal Das, who is the chief marketing and strategy officer at Netronome. Thanks so much for joining us. >> Thank you. >> Alright, so we're getting through it, you know, really John and I have been digging into, you know, really where OpenStack is, talking to real people, deploying real clouds, where it fits into the multi cloud world. You know, networking is one of those things that took a little while to kind of bake out. Seems like every year we talk about Neutron and all the pieces that are there. But talk to us, Netronome, we know you guys make SmartNICs. You've got obviously some hardware involved when I hear a NIC, and you've got software. What's your involvement in OpenStack and what sort of things are you doing here at the show? >> Absolutely, thanks, Stu. So, we do SmartNIC platforms, so that includes both hardware and software that can be used in commercial off-the-shelf servers. So with respect to OpenStack, I think the whole idea of SDN with OpenStack is centered around the data plane that runs on the server, things such as the Open vSwitch, or Virtual Router, and there are evolving new data planes coming into the market. So we offload and accelerate the data plane in our SmartNICs, and because the SmartNICs are programmable, we can evolve the feature set very quickly. So in fact, we have software releases that come out every six months that keep up to speed with OpenStack and Open vSwitch releases. So that's what we do in terms of providing a higher performance OpenStack environment, so to say. 
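To make the offload idea concrete: virtual-switch data planes like Open vSwitch typically classify the first packet of a flow on a slow path and cache the resulting action, so later packets of the same flow take a fast path — and it is that cached fast path that a SmartNIC can execute in hardware. Below is a minimal sketch of the idea; the policy, names, and actions are invented for illustration and are not Netronome's implementation:

```python
# Illustrative flow-cache model behind virtual-switch offload (hypothetical
# policy): first packet of a flow -> slow-path classification on the host
# CPU; the result is installed in an exact-match table that a SmartNIC
# could apply in hardware for every subsequent packet.

def classify_slow_path(five_tuple):
    """Full rule lookup on the host CPU (expensive). Hypothetical policy."""
    src_ip, dst_ip, proto, src_port, dst_port = five_tuple
    if dst_port == 22:
        return "drop"          # e.g. block inbound SSH
    return "forward:vm1"       # default action

class FlowCache:
    """Fast path: exact-match flow table, the part offloadable to hardware."""
    def __init__(self):
        self.table = {}
        self.slow_path_hits = 0

    def process(self, five_tuple):
        action = self.table.get(five_tuple)
        if action is None:                      # miss -> slow path
            self.slow_path_hits += 1
            action = classify_slow_path(five_tuple)
            self.table[five_tuple] = action     # install in fast path
        return action

cache = FlowCache()
flow = ("10.0.0.1", "10.0.0.2", "tcp", 40000, 80)
actions = [cache.process(flow) for _ in range(1000)]
print(actions[0], cache.slow_path_hits)  # forward:vm1 1
```

Only one of the thousand packets touches the slow path; the other 999 hit the cached entry, which is why moving that cache into NIC hardware frees host CPU cycles.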
>> Yeah, so I spent a good part of my career working on that part of the stack, if you will, and the balance is always, right, what do you build into the hardware? Do I have accelerators? Or is it the software that does it? You know, usually in the short term hardware can take care of it, but in the long term, if you follow the development cycles, software tends to win out. So, you know, where are we with where functionality is? What differentiates what you offer compared to others in the market? >> Absolutely. So we see a significant trend in terms of the role of a coprocessor to the x86 or evolving ARM-based servers, right, and the workloads are shifting rapidly. You know, with the need for higher performance and more efficiency in the server, you need coprocessors. So we make, essentially, coprocessors that accelerate networking. And that sits next to an x86 on a SmartNIC. The important differentiation we have is that we are able to pack a lot of cores on a very small form factor hardware device. As many as 120 cores that are optimized for networking. And by being able to do that, we're able to deliver very high performance at the lowest cost and power. >> Can you speak to us, just, you know, what's the use case for that? You know, we talk about scale and performance. Who are your primary customers for this? Is this kind of broad spectrum, or, you know, are there certain industries or use cases that pop out? >> Sure, so we have three core market segments that we go after, right? One is the innovene construction market, where we see a lot of OpenStack use, for example. We also have the traditional cloud data center providers who are looking at acceleration with SmartNICs. And lastly the security market, that's kind of been our legacy market that we have grown up with. With security kind of moving away from appliances to more distributed security, those are the key three market segments that we go after. 
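One reason many small networking-optimized cores pay off, as Sujal describes, is that packet workloads parallelize naturally: flows can be hashed across cores while all packets of a given flow stay on one core, and therefore stay in order. A rough sketch of that hashing idea (the core count and tuple fields are illustrative, not a description of Netronome's silicon):

```python
# Sketch of hash-based flow distribution across many packet-processing
# cores (the RSS-style idea behind many-core networking hardware).
# NUM_CORES and the tuple fields are illustrative assumptions.

import zlib

NUM_CORES = 120  # e.g. many small flow-processing cores on one device

def core_for_flow(five_tuple):
    """Hash the flow identifier so every packet of a flow lands on the
    same core, preserving per-flow packet order."""
    key = "|".join(map(str, five_tuple)).encode()
    return zlib.crc32(key) % NUM_CORES

flow_a = ("10.0.0.1", "10.0.0.2", "tcp", 40000, 80)
flow_b = ("10.0.0.3", "10.0.0.2", "udp", 53000, 53)

# Every packet of flow_a maps to the same core; flow_b may land elsewhere.
cores_a = {core_for_flow(flow_a) for _ in range(100)}
print(len(cores_a))  # 1
```

Because the mapping is deterministic per flow, no cross-core locking is needed for per-flow state, which is what lets throughput scale roughly with core count.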
>> The irony is, in this world of cloud, hardware still matters, right? Not only does hardware matter, like, you're packing a huge number of cores into a NIC, so that hardware matters. But one of the reasons that it matters now is because of the rise of this latest generation of solid-state storage, right? People are driving more and more IO. What are the trends that you're seeing in terms of storage IO, and IO in general, in the data center? >> Absolutely. So I think the large data centers of the world, they showed the way in terms of how to do storage, especially with SSDs, what they call disaggregated storage, essentially being able to use the storage on each server and being able to aggregate those together into a pool of storage resources, and it's being called hyperconverged. I think companies like Nutanix have found a lot of success in that market. What I believe is going to happen in the next phase is hyperconvergence 2.0, where we're going to go beyond storage, which essentially addressed TCO and being able to do more with less. The next level would be hyperconvergence around security, where you'd have distributed security in all servers, and also telemetry. So basically your storage appliance is going away with hyperconvergence 1.0, but with the next generation of hyperconvergence we'd see the security appliances and the monitoring appliances sort of going away and becoming all integrated in the server infrastructure, to allow for better service levels and scalability. >> So what's the relationship between distributed security and the need for more bandwidth at the back plane? >> Absolutely. So when you move security into the server, the processing requirements in the server go up. And typically with all security processing, it's a lot of what's called flow processing or match-action processing. 
And those are typically not suitable for a general-purpose server like the ARM or the x86; that's where you need specialized coprocessors, kind of like the world of GPUs doing well in artificial intelligence applications. I think it's the same example here. When you have security, telemetry, et cetera being done in each server, you need special-purpose processing to do that at the lowest cost and power. >> Sujal, you mentioned that you've got solutions in the public cloud. Are those the big hyperscale guys? Is it service providers? I'm curious if you could give a little color there. >> Yes, so these are both tier one and tier two service providers in the cloud market, as well as the telco service providers, more on the NFV side. But we see a common theme here in terms of wanting to do security and things like telemetry. Telemetry is becoming a hot topic. Something called in-band telemetry is what we are actually demonstrating at our booth and also speaking about with some of our partners at the show, such as Mirantis, Red Hat, and Juniper. Doing all of these on each server is becoming a requirement. >> When I hear you talk, I think about here at OpenStack, we're talking about the hybrid or multi cloud world, and especially for something like security and telemetry, I need to handle my data center, I need to handle the public cloud, and even when I start to get into that IoT edge environment, we know that the surface area for attack just gets orders of magnitude larger, therefore we need security that can span across those. Are you touching all of those pieces? Maybe give us a little bit of a dive into it. >> Absolutely, I think a great example is DDoS, right, distributed denial of service attacks. And today, you know, you have these kinds of attacks happening from computers, right. Look at the environment where you have IoT, right, you have tons and tons of small devices that can be hacked and could flood attacks into the data center. 
Look at the autonomous car or self-driving car phenomenon, where each car is equivalent to about 2,500 Internet users. So the number of users is going to scale so rapidly, and the amount of attacks that could be proliferated from these kinds of devices is going to be so high, that people are looking at moving DDoS protection from the perimeter of the network to each server. And that's a great example that we're working on with a large service provider. >> I'm kind of curious how systems take advantage of your technology. I can see some of it being transparent: if you just want to jam more bits through the system, then that should be pretty transparent to the app, and maybe even to the data plane and the virtual switches. But I'm guessing there are probably also APIs or other software-driven ways of saying, hey, not only do I want you to jam more bits through there, but I want to do some packet inspection, or I want to do some massaging or some QoS, or I'm not sure what all these SmartNICs do. So is my model correct? Are those kind of the different ways of interacting with your technology? >> You're hitting a great point. A great question, by the way, thank you. So the world has evolved from very custom, proprietary ways of doing things to more standard ways of doing things. And one thing that has, so to say, standardized the data plane that does all of these functions that you mention, things like security or ACL rules or virtualization, is Open vSwitch. It's a great example of a data plane that has standardized how you do things. And there are a lot of new open source projects happening in the Linux Foundation, such as VPP, for example. So each of these standardizes the way you do it, and then it becomes easier for vendors like us to implement a standard data plane and then work with the Linux kernel community on getting all of those things upstream, which we are working on. 
And then having the Red Hats of the world actually incorporate those into their distributions, so that the deployment model becomes much easier, right. And one of the topics of discussion with Red Hat that we presented today was exactly that: how do you make this kind of scalability for security and telemetry more easily accessible to users through a Red Hat distribution, for example. >> Sujal, can you give us just an overview of the sessions that Netronome has here at the show, and what are the challenges that people are coming with that they're excited to meet with your company about? >> Absolutely, so we presented one session with Mirantis. Mirantis, as you know, is a huge OpenStack player. With Mirantis, we presented exactly the problem statement that I was talking about. So when you try to do security with OpenStack, whether it's stateless or stateful, your performance kind of tanks when you apply a lot of security policies, for example, on a per-server basis, which you can do with OpenStack. So when you use a SmartNIC, you essentially return a lot of the CPU cores to the revenue-generating applications, right, so operators are able to make more money per server. That's a sense of what the value is, so that was the topic with Mirantis, who actually uses the Open Contrail virtual router data plane in their solution. We also have presented with Juniper, which is also-- >> Stu: Speaking of Open Contrail. >> Yeah, so Juniper has another version of Contrail. So we're presenting a very similar product, but that's with the commercial product from Juniper. And then yesterday we presented with Red Hat. And Red Hat is based on Red Hat's OpenStack and their Open vSwitch based products, where of course we are upstreaming a lot of these code bits that I talked about. 
But the value proposition is uniform across all of these vendors, which is when you do storage, sorry, security and telemetry and virtualization, et cetera, in a distributed way across all of your servers and get rid of all of your appliances, you get better scale. But to achieve the efficiencies in the server, you need a SmartNIC such as ours. >> I'm curious, is the technology usually applied at the per-server level, or is there a rack-scale component too that needs to be there? >> It's on a per-server basis, so the use case is like any other traditional NIC that you would use. So it looks and feels like any other NIC, except that there are more processing cores in the hardware and there's more software involved. But again, all of the software gets tightly integrated into the OS vendor's operating system and then the OpenStack environment. >> Got you. Well, I guess you can never be too rich, too thin, or have too much bandwidth. >> That's right, yeah. >> Sujal, share with our audience any interesting conversation you had, or other takeaways you want people to have from the OpenStack Summit. >> Absolutely, so without naming specific customer names, we had one large data center service provider in Europe come in, and their big pain point was latency. Latency going from the VM on one server to another server. And that's a huge pain point, and their request was to be able to reduce that by 10x at least. And we're able to do that, so that's one use case that we have seen. The other again relates to telemetry, you know, how... This is a telco service provider, so as they go into 5G, they have to service many different applications with what they call network slices. One slice servicing the autonomous car applications. Another slice managing the video distribution, let's say, with something like Netflix, video streaming. Another one servicing the cellphone, something like a phone like this, where the data requirements are not as high as some TV sitting in your home. 
So they need different kinds of SLA for each of these services. How do they slice and dice the network, and how are they able to actually assess the rogue VM, so to say, that might cause performance to go down and affect SLAs? Telemetry, or what is called in-band telemetry, is a huge requirement for those applications. So I'm giving you two examples: one is a data center operator, you know, infrastructure as a service, that just wants lower latency. And the other one is interest in telemetry. >> So, Sujal, final question I have for you. Look forward a little bit for us. You've got your strategy hat on. Netronome, OpenStack in general, what do you expect to see as we look throughout the year, maybe if we're, you know, sitting down with you in Vancouver a year from now, what would you hope that we as an industry and as a company have accomplished? >> Absolutely, I think you know you'd see a lot of these products, so to say, that enable seamless integration of SmartNICs become available on a broad basis. I think that's one thing I would see happening in the next one year. The other big event is the whole notion of hyperconvergence that I talked about, right. I would see the notion of hyperconvergence move away from one of just storage focus to security and telemetry, with OpenStack kind of addressing that from a cloud orchestration perspective. And also, with each of those requirements, software defined networking, which is being able to evolve your networking data plane rapidly on the run. These are all going to become mainstream. >> Sujal Das, pleasure catching up with you. John and I will be back to do the wrap-up for day two. Thanks so much for watching theCUBE. (techno beat)

Published Date : May 9 2017



Lisa Spelman, Intel - Google Next 2017 - #GoogleNext17 - #theCUBE


 

(bright music) >> Narrator: Live from Silicon Valley. It's theCUBE, covering Google Cloud Next 17. >> Okay, welcome back, everyone. We're live in Palo Alto for theCUBE special two day coverage here in Palo Alto. We have reporters, we have analysts on the ground in San Francisco, analyzing what's going on with Google Next, we have all the great action. Of course, we also have reporters at Open Compute Summit, which is also happening in San Jose, and Intel's at both places, and we have an Intel senior manager on the line here, on the phone, Lisa Spelman, vice president and general manager of the Xeon product line, with product management responsibility as well as marketing across the data center. Lisa, welcome to theCUBE, and thanks for calling in and dissecting Google Next, as well as teasing out maybe a little bit of OCP around the Xeon processor, thanks for calling. >> Lisa: Well, thank you for having me, and it's hard to be in many places at once, so it's a busy week and we're all over, so that's that. You know, we'll do this on the phone, and next time we'll do it in person. >> I'd love to. Well, more big news is obviously Intel has a big presence with the Google Next, and tomorrow there's going to be some activity with some of the big name executives at Google. Talking about your relationship with Google, aka Alphabet, what are some of the key things that you guys are doing with Google that people should know about, because this is a very turbulent time in the ecosystem of the tech business. You saw Mobile World Congress last week, we've seen the evolution of 5G, we have network transformation going on. Data centers are moving to a hybrid cloud, in some cases, cloud native's exploding. So a whole new kind of computing environment is taking shape. What is Intel doing here at Google Next that's a proof point to the trajectory of the business?
>> Lisa: Yeah, you know, I'd like to think it's not too much of a surprise that we're there, arm in arm with Google, given all of the work that we've done together over the last several years in that tight engineering and technical partnership that we have. One of the big things that we've been working with Google on is, as they move from delivering cloud services for their own usage and for their own applications that they provide out to others, but now as they transition into being a cloud service provider for enterprises and other IT shops as well, so they've recently launched their Google Cloud platform, just in the last week or so. Did a nice announcement about the partnership that we have together, and how the Google Cloud platform is now available and running and open for business on our latest next generation Intel Xeon product, and that's codenamed Skylake, but that's something that we've been working on with them since the inception of the design of the product, so it's really nice to have it out there and in the market, and available for customers, and we very much value partnerships, like the one we have with Google, where we have that deep technical engagement to really get to the heart of the workload that they need to provide, and then can design product and solution around that. So you don't just look at it as a one off project or a one time investment, it's an ongoing continuation and evolution of new product, new features, new capabilities to continue to improve their total cost of ownership and their customer experience. >> Well, Lisa, this is your baby, the Xeon, codename Skylake, which I love that name. Intel always has great codenames, by the way, we love that, but it's real technology. 
Can you share some specific features of what's different around these new workloads because, you know, we've been teasing out over the past day and we're going to be talking tomorrow as well about these new use cases, because you're looking at a plethora of use cases, from IoT edge all the way down into cloud native applications. What specific things is Xeon doing that's next generation that you could highlight, that points to this new cloud operating system, the cloud service providers, whether it's managed services to full blown down and dirty cloud? >> Lisa: So it is my baby, I appreciate you saying that, and it's so exciting to see it out there and starting to get used and picked up and be unleashing it on the world. With this next generation of Xeon, it's always about the processor, but what we've done has gone so much beyond that, so we have a ton of what we call platform level innovation that is coming in, we really see this as one of our biggest kind of step function improvements in the last 10 years that we've offered. Some of the features that we've already talked about are things like AVX-512 instructions, which I know just sounds fun and rolls off the tongue, but really it's very specific workload acceleration for things like high performance computing workloads. And high performance computing is something that we see more and more getting used and accessed in cloud-style infrastructure. So it's this perfect marrying of that workload specifically deriving benefit from the new platforms, and seeing really strong performance improvements. It also speaks to the way with Intel and Xeon families, 'cause remember, with Xeon, we have Xeon Phi, you've got standard Xeon, you've got Xeon D. You can use these instructions across the families and have workloads that can move to the most optimized hardware for whatever you're trying to drive.
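As a rough aside on why 512-bit vector instructions like AVX-512 matter for data-parallel HPC kernels: the lane counts follow directly from the register width. The arithmetic below is illustrative only; actual per-cycle throughput depends on the microarchitecture and is not claimed here.

```python
# Illustrative only: how many elements one 512-bit AVX-512 register
# holds per data type. The 512-bit register width comes from the
# AVX-512 name; throughput beyond lane count is deliberately not claimed.
REGISTER_BITS = 512

def lanes(element_bits: int) -> int:
    """Number of elements processed per vector instruction."""
    return REGISTER_BITS // element_bits

print(lanes(64))  # 8 double-precision floats per instruction
print(lanes(32))  # 16 single-precision floats per instruction
```

So a single AVX-512 instruction can operate on eight doubles at once, which is the kind of workload-specific acceleration being described for HPC-style codes.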
Some of the other things that we've talked about and announced is we'll have our next generation of Intel Resource Director technology, which really helps you manage and provide quality of service within your application, which is very important to cloud service providers, giving them control over hardware and software assets so that they can deliver the best customer experience to their customers based on the service level agreement they've signed up for. And then the other one is Intel Omni-Path architecture, so again, fairly high performance computing focused product, Omni-Path is a fabric, and we're going to offer that in an integrated fashion with Skylake so that you can get even higher level of performance and capability. So we're looking forward to a lot more that we have to come, the whole of the product line will continue to roll out in the middle of this year, but we're excited to be able to offer an early version to the cloud service providers, get them started, get it out in the market and then do that full scale enterprise validation over the next several months. >> So I got to ask you the question, because this is something that's coming up, we're seeing a transition, also the digital transformation's been talked about for a while. Network transformation, IoT's all around the corner, we've got autonomous vehicles, smart cities, on and on. But I got to ask you though, the cloud service providers seem to be coming out of this show as a key storyline in Google Next as the multi cloud architectures become very clear. So it's become clear, not just this show but it's been building up to this, it's pretty clear that it's going to be a multi cloud world. As well as you're starting to see the providers talk about their SaaS offerings, Google talking about G Suite, Microsoft talks about Office 365, Oracle has their apps, IBM's got Watson, so you have this SaaSification. So this now creates a whole other category of what cloud is.
If you include SaaS, you're really talking about Salesforce, Adobe, you know, on and on the list, everyone is potentially going to become a SaaS provider whether they're unique cloud or partnering with some other cloud. What does that mean for a cloud service provider, what do they need to support application requirements to be successful? >> So when we look at the cloud service provider market inside of Intel, we are talking about infrastructure as a service, platform as a service and software as a service. So cutting across the three major categories. Up until now, infrastructure as a service has gotten a lot of the airtime or focus, but SaaS is actually the bigger business, and that's why you see, I think, people moving towards it, especially as enterprise IT becomes more comfortable with using SaaS applications. You know, maybe first they started with offloading their expense report tool, but over time, they've moved into more sophisticated offerings that free up resources for them to do their most critical or business critical applications that they require to stay in more of a private cloud. I think that evolution to a multi cloud, a hybrid cloud, has happened across the entire industry, whether you are an enterprise or whether you are a cloud service provider. And then the move to SaaS is logical, because people are demanding just more and more services. One of the things through all our years of partnering with the biggest to the smallest cloud service providers and working so closely on those technical requirements that we've continued to find is that total cost of ownership really is king, it's that performance per dollar, TCO, that they can provide and derive from their infrastructure, and we focused a lot of our engineering and our investment in our silicon design around providing that.
We have multiple generations that we've provided even just in the last five years to continue to drive those step function improvements and really optimize our hardware and the code that runs on top of it to make sure that it does continue to deliver on those demanding workloads. The other thing that we see the providers focusing on is what's their differentiation. So you'll see cloud service providers that will look through the various silicon features that we offer and choose, they'll pick and choose based on whatever their key workload is or whatever their key market is, and really kind of hone in and optimize for those silicon features so that they can have a differentiated offering into the market about what capabilities and services they'll provide. So it's an area where we continue to really focus our efforts, understand the workload, drive the TCO down, and then focus in on the design point of what's going to give that differentiation and acceleration. >> It's interesting, on the definition I'd agree with you, the cloud service provider is a huge market when you even look at the SaaS. 'Cause whether you're talking about Uber or Netflix, for instance, examples people know about in real life, you can't ignore these new diverse use cases coming out. For instance, I was just talking with Stu Miniman, one of our analysts here, Wikibon, and Riot Games could be considered a cloud, right, I mean, 'cause it's a SaaS platform, it's gaming. You're starting to see these new apps coming out of the woodwork. There seems to be a requirement for being agile as a cloud provider. How do you enable that, what specifically can you share, if I'm a cloud service provider, to be ready to support anything that's coming down the pike?
>> Lisa: You know, we do do a lot of workload and market analysis inside of Intel and the data center group, and then if you have even seen over the past five years, again, I'll just stick with the new term, how much we've expanded and broadened our product portfolio. So again, it will still be built upon that foundation of Xeon and what we have there, but we've gone to offer a lot of varieties. So again, I mentioned Xeon Phi. Xeon Phi at the 72 cores, bootable Xeon but specific workload acceleration targeted at high performance computing and other analytics workloads. And then you have things at the other end. You've got Xeon D, which is really focused at more frontend web services and storage and network workloads, or Atom, which is even lower power and more focused on cold and warm storage workloads, and again, that network function. So you could then say we're not just sticking with one product line and saying this is the answer for everything, we're saying here's the core of what we offer, and the features people need, and finding options, whether they range from low power to high power high performance, and kind of mixed across that whole kind of workload spectrum, and then we've broadened around the CPU into a lot of other silicon innovation. So I don't know if you guys have had a chance to talk about some of the work that we're doing with FPGAs, with our FPGA group and driving and delivering cloud and network acceleration through FPGAs. We've also introduced new products in the last year like Silicon Photonics, so dealing with network traffic crossing through-- >> Well, is FPGA, that's the Altera stuff, we did talk with them, they're doing the programmable chips. 
>> Lisa: Exactly, so it requires a level of sophistication and understanding what you need the workload to accelerate, but once you have it, it is a very impressive and powerful performance gain for you, so the cloud service providers are a perfect market for that, because they have very sophisticated IT and very technically astute engineering teams that are able to really, again, go back to the workload, understand what they need and figure out the right software solution to pair with it. So that's been a big focus of our targeting. And then, like I said, we've added all these different things, different new products to the platform that start to, over time, just work better and better together, so when you have things like Intel SSD there together with Intel CPUs and Intel Ethernet and Intel FPGA and Intel Silicon Photonics, you can start to see how the whole package, when it's designed together under one house, can offer a tremendous amount of workload acceleration. >> I got to ask you a question, Lisa, 'cause this comes up, while you're talking, I'm just in my mind visualizing a new kind of virtual computer server, the cloud is one big server, so it's a design challenge. And what was teased out at Mobile World Congress that was very clear was this new end to end architecture, you know, re-imagined, but if you have these processors that have unique capabilities, that have use case specific capabilities, in a way, you guys are now providing a portfolio of solutions so that it almost can be customized for a variety of cloud service providers. Am I getting that right, is that how you guys see this happening where you guys can just say, "Hey, just mix and match what you want and you're good."
>> Lisa: Well, and we try to provide a little bit more guidance than as you wish, I mean, of course, people have their options to choose, so like, with the cloud service providers, that's what we have, really tight engineering engagement, so that we can, you know, again, understand what they need, what their design point is, what they're honing in on. You might work with one cloud service provider that is very facilities limited, and you might work with another one that is space limited, the other one's power limited, and another one where performance is king, so we can cut some SKUs to help meet each of those needs. Another good example is in the artificial intelligence space where we did another acquisition last year, a company called Nervana that's working on optimized silicon for a neural network. And so now we have put together this AI portfolio, so instead of saying, "Oh, here's one answer for artificial intelligence," it's, "Here's a multitude of answers." You've got Xeon, so if you have underutilized capacity and are starting down your artificial intelligence journey, just use your Xeon capacity with an optimized framework and you'll get great results and you can start your journey. If you are monetizing and running your business based on what AI can do for you and you are leading the pack out there, you've got the best data scientists and algorithm writers and peak running experts in the world, then you're going to want to use something like the silicon that we acquired from the Nervana team, and that codename is Lake Crest, speaking of some lakes there. And you'll want to use something like Xeon with Lake Crest to get that ultimate workload acceleration. So we have the whole portfolio that goes from Xeon to Xeon Phi to Xeon with FPGAs or Xeon with Lake Crest. Depending on what you're doing, and again, what your design point is, we have a solution for you.
And of course, when we say solution, we don't just mean hardware, we mean the optimized software frameworks and the libraries and all of that, that actually give you something that can perform. >> On the competitive side, we've seen the processor landscape heat up on the server and the cloud space. Obviously, whether it's from a competitor or homegrown foundry, whatever fabs are out there, I mean, so Intel's always had a great partnership with cloud service providers. Vis-a-vis the competition and context to that, what are you guys doing specifically and how you'd approach the marketplace in light of competition? >> Lisa: So we do operate in a highly competitive market, and we always take all competitors seriously. So far we've seen the press heat up, which is different than seeing all of the deployments, so what we look for is to continue to offer the highest performance and lowest total cost of ownership for all our customers, and in this case, the cloud service providers, of course. And what do we do is we kind of stick with our game plan of putting the best silicon in the world into the market on a regular beat rate and cadence, and so there's always news, there's always an interesting story, but when you look at having had eight new products and new generations in market since the last major competitive x86 product, that's kind of what we do, just keep delivering so that our customers know that they can bet on us to always be there and not have these massive gaps. And then I also talked to you about portfolio expansion, we don't bet on just one horse, we give our customers the choice to optimize for their workloads, so you can go up to 72 cores with Xeon Phi if that's important, you can go as low as two cores with Atom, if that's what works for you. Just an example of how we try to kind of address all of our customer segments with the right product at the right time. 
>> And IoT certainly brings a challenge too, when you hear about network edge, that's a huge, huge growth area, I mean, you can't deny that that's going to be amazing, you look at the cars are data centers these days, right? >> Lisa: A data center on wheels. >> Data center on wheels. >> Lisa: That's one of the fun things about my role, even in the last year, is that growing partnership, even inside of Intel with our IoT team, and just really going through all of the products that we have in development, and how many of them can be reused and driven towards IoT solution. The other thing is, if you look into the data center space, I genuinely believe we have the world's best ecosystem, you can't find an ISV that we haven't worked with to optimize their solution to run best on Intel architecture and get that workload acceleration. And now we have the chance to put that same playbook into play in the IoT space, so it's a growing, somewhat nascent but growing market with a ton of opportunity and a ton of standards to still be built, and a lot of full solution kits to be put together. And that's kind of what Intel does, you know, we don't just throw something out to the market and say, "Good luck," we actually put the ecosystem together around it so that it performs. But I think that's kind of what you see with, I don't know if you guys saw our Intel GO announcement, but it's really like the software development kit and the whole product offering for what you need for truly delivering automated vehicles. 
>> Well, Lisa, I got to say, so you guys have a great formula, why fix what's not broken, stay with Moore's law, keep that cadence going, but what's interesting is you are listening and adapting to the architectural shifts, which is smart, so congratulations and I think, as the cloud service provider world changes, and certainly in the data center, it's going to be a turbulent time, but a lot of opportunity, and so good to have that reliability and, if you can make the software go faster then they can write more software faster, so-- >> Lisa: Yup, and that's what we've seen every time we deliver a step function improvement in performance, we see a step function improvement in demand, and so the world is still hungry for more and more compute, and we see this across all of our customer bases. And every time you make that compute more affordable, they come up with new, innovative, different ways to do things, to get things done and new services to offer, and that fundamentally is what drives us, is that desire to continue to be the backbone of that industry innovation. >> If you could sum up in a bumper sticker what that step function is, what is that new step function? >> Lisa: Oh, when we say step functions of improvements, I mean, we're always looking at targeting over 20% performance improvement per generation, and then on top of that, we've added a bunch of other capabilities beyond it. So it might show up as, say, a security feature as well, so you're getting the massive performance improvement gen to gen, and then you're also getting new capabilities like security features added on top. So you'll see more and more of those types of announcements from us as well where we kind of highlight the, not just the performance but that and what else comes with it, so that you can continue to address, you know, again, the growing needs that are out there, so all we're trying to say is, stay a step ahead.
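The "over 20% performance improvement per generation" figure compounds quickly across the eight generations mentioned earlier in the conversation. A back-of-the-envelope sketch, assuming a flat 20% gen-over-gen gain (an illustrative simplification, not measured benchmark data):

```python
# Back-of-the-envelope: compounding "over 20% per generation".
# The flat 20% rate and the eight-generation span are assumptions
# taken from the conversation for illustration, not benchmarks.
def compounded_speedup(per_gen_gain: float, generations: int) -> float:
    """Cumulative speedup after repeated generational gains."""
    return (1 + per_gen_gain) ** generations

print(f"{compounded_speedup(0.20, 8):.2f}x")  # 4.30x over eight generations
```

Even a steady 20% cadence, held generation after generation, more than quadruples performance over the span, which is the "step function in performance, step function in demand" dynamic being described.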
>> All right, Lisa Spelman, VP and GM of the Xeon product family, as well as marketing and data center. Thank you for spending the time and sharing your insights on Google Next, and giving us a peek at the portfolio of the Xeon next generation, really appreciate it, and again, keep on bringing that power, Moore's law, more flexibility. Thank you so much for sharing. We're going to have more live coverage here in Palo Alto after this short break. (bright music)

Published Date : Mar 9 2017



Eric Starkloff, National Instruments & Dr. Tom Bradicich, HPE - #HPEDiscover #theCUBE


 

>> Voiceover: Live from Las Vegas, it's theCUBE, covering Discover 2016, Las Vegas. Brought to you by Hewlett Packard Enterprise. Now, here are your hosts, John Furrier and Dave Vellante. >> Okay, welcome back everyone. We are here live in Las Vegas for SiliconANGLE Media's theCUBE. It's our flagship program, we go out to the events to extract the signal from the noise, we're your exclusive coverage of HP Enterprise, Discover 2016, I'm John Furrier with my co-host, Dave Vellante, extracting the signals from the noise with two great guests, Dr. Tom Bradicich, VP and General Manager of the servers and IoT systems, and Eric Starkloff, the EVP of Global Sales and Marketing at National Instruments, welcome back to theCUBE. >> Thank you. >> John: Welcome for the first time Cube alumni, welcome to theCUBE. >> Thank you. >> So we are seeing a real interesting historic announcement from HP, because not only is there an IoT announcement this morning that you are the architect of, but the twist that you're taking with IoT, is very cutting edge, kind of like I just had Google IO, and at these big conferences they always have some sort of sexy demo, that's to kind of show the customers the future, like AI, or you know, Oculus Rift goggles as the future of their application, but you actually don't have something that's futuristic, it's reality, you have a new product, around IoT, at the Edge, Edgeline, the announcements are all online. Tom, but you guys did something different. And Eric's here for a reason, we'll get to that in a second, but the announcement represents a significant bet. That you're making, and HP's making, on the future of IoT. Please share the vision, and the importance of this event. >> Well thank you, and it's great to be back here with you guys. We've looked around and we could not find anything that existed today, if you will, to satisfy the needs of this industry and our customers. So we had to create not only a new product, but a new product category. 
A category of products that didn't exist before, and the new Edgeline 1000 and the Edgeline 4000 are the first entrants into this new product category. Now, what's a new product category? Well, whoever invented the first automobile, there was not a category of automobiles. When the first automobile was invented, it created a new product category called automobiles, and today everybody has a new entry into that as well. So we're creating a new product category, called converged IoT systems. Converged IoT systems are needed to deliver the real-time insights, real-time response, and advance the business outcomes, or the engineering outcomes, or the scientific outcomes, depending on the situation of our customers. They're needed to do that. Now when you have a name, converged, that means somewhat, a synonym is integration, what did we integrate? Now, I want to tell you the three major things we integrated, one of which comes from Eric, and the fine National Instruments company, that makes this technology that we actually put in, to the single box. And I can't wait to tell you more about it, but that's what we did, a new product category, not just two new products. >> So, you guys are bringing two industries together, again, that's not only just point technologies or platforms, in tooling, you're bringing disparate kind of players together. >> Yes. >> But it's not just a partnership, it's not like shaking hands and doing a strategic partnership, so there's real meat on the bone here. Eric, talk about one, the importance of this integration of two industries, basically, coming together, converged category if you will, or industry, and what specifically is in the box or in the technology. >> Yeah, I think you hit it exactly right. I mean, everyone talks about the convergence of OT, or operational technology, and IT. And we're actually doing it together. I represent the OT side, National Instruments is a global leader. >> John: OT, it means, just for the audience?
>> Operational Technology, it's basically industrial equipment, measurement equipment, the thing that is connected to the real world. Taking data and controlling the thing that is in the internet of things, or the industrial internet of things as we play. And we've been doing internet of... >> And IT is Information Technologies, we know what that is, OT is... >> I figured that one you knew, OT is Operational Technology. We've been doing IoT before it was a buzzword. Doing measurement and control systems on industrial equipment. So when we say we're making it real, this Edgeline system actually incorporates in National Instruments technology, on an industry standard called PXI. And it is a measurement and control standard that's ubiquitous in the industry, and it's used to connect to the real world, to connect to sensors, actuators, to take in image data, and temperature data and all of those things, to instrument the world, and take in huge amounts of analog data, and then apply the compute power of an Edgeline system onto that application. >> We don't talk a lot about analog data in the IT world. >> Yeah. >> Why is analog data so important, I mean it's prevalent obviously in your world. Talk a little bit more about that. >> It's the largest source of data in the world, as Tom says it's the oldest as well. Analog, of course if you think about it, the analog world is literally infinite. And it's only limited by how many things we want to measure, and how fast we measure them. And the trend in technology is more measurement points and faster. Let me give you a couple of examples of the world we live in. Our customers have acquired over the years, approximately 22 exabytes of data. We don't deal with exabytes that often, I'll give an analogy. It's streaming high definition video, continuously, for a million years, produces 22 exabytes of data. Customers like CERN, that do the Large Hadron Collider, they're a customer of ours, they take huge amounts of analog data. 
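Eric's streaming-video analogy is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes an HD stream of about 5 Mbps, a figure not stated in the interview, and confirms that 22 exabytes does work out to roughly a million years of continuous video:

```python
# Sanity check of the "22 exabytes = a million years of HD video" analogy.
# Assumption (not from the interview): HD streaming at ~5 Mbps.
HD_BITRATE_BPS = 5e6                       # bits per second
TOTAL_BYTES = 22e18                        # 22 exabytes

bytes_per_second = HD_BITRATE_BPS / 8      # ~625 kB/s
seconds = TOTAL_BYTES / bytes_per_second
years = seconds / (365 * 24 * 3600)
print(f"{years:.2e} years")                # on the order of a million years
```

At that assumed bitrate the figure lands just above 1.1 million years, consistent with Eric's "a million years" claim.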
Every time they do an experiment, it's the equivalent of 14 million images, photographs, that they take per second. They create 25 petabytes of data each year. The importance of this and the importance of Edgeline, and we'll get into this some, is that when you have that quantity of data, you need to push processing, and compute technology, towards the edge. For two main reasons. One, is the quantity of data, doesn't lend itself, or takes up too much bandwidth, to be streaming all of it back to central, to cloud, or centralized storage locations. The other one that's very, very important is latency. In the applications that we serve, you often need to make a decision in microseconds. And that means that the processing needs to be done, literally the speed of light is a limiting factor, the processing must be done on the edge, at the thing itself. >> So basically you need a data center at the edge. >> A great way to say it. >> A great way to say it. And this data, or big analog data as we love to call it, is things like particulates, motion, acceleration, voltage, light, sound, location, such as GPS, as well as many other things like vibration and moisture. That is the data that is pent up in things. In the internet of things. And Eric's company National Instruments, can extract that data, digitize it, make it ones and zeroes, and put it into the IT world where we can compute it and gain these insights and actions. So we really have a seminal moment here. We really have the OT industry represented by Eric, connecting with the IT industry, in the same box, literally in the same product in the box, not just a partnership as you pointed out. In fact it's quite a moment, I think we should have a photo op here, shaking hands, two industries coming together. >> So you talk about this new product category. What are the parameters of a new product category? 
You gave an example of an automobile, okay, but nobody had ever seen one before, but now you're bringing together sort of two worlds. What defines the parameters of a product category, such that it warrants a new category? >> Well, in general, never been done before, and accomplishes something that's not been done before, so that would be more general. But very specifically, this new product, EL1000 and EL4000, creates a new product category because this is an industry first. Never before have we taken data acquisition and capture technology from National Instruments, and data control technology from National Instruments, put that in the same box as deep compute. Deep x86 compute. What do I mean by deep? 64 Xeon cores. As you said, a piece of the data center. But that's not all we converged. We took enterprise-class systems management, something that HP has done very well for many, many years. We've taken the Hewlett Packard Enterprise iLO lights-out technology, converged that as well. In addition we put storage in there. Tens of terabytes of storage can be at the edge. So by this combination of things that did exist before, the elements of course, by that combination of things, we've created this new product category.
>> And in terms of the value chain, National Instruments is a supplier to this new product category? Is that the right way to think about it? >> An ingredient, a solution ingredient but just like we are, number one, but we are both reselling the product together. >> Dave: Okay. >> So we've jointly, collaboratively, developed this together. >> So it's engineers and engineers getting together, building the product. >> Exactly. His engineers, mine, we worked extremely close, and produced this beauty. >> We had a conversation yesterday, argument about the iPhone, I was saying hey, this was a game-changing category, if you will, because it was a computer that had software that could make phone calls. Versus the other guys, who had a phone, that could do text messages and do email. With a browser. >> Tom: With that converged product. >> So this would be similar, if I may, and you can correct me if I'm wrong, I want you to correct me and clarify, what you're saying is, you guys essentially looked at the edge differently, saying let's build the data center, at the edge, in theory or in concept here, in a little concept, but in theory, the power of a data center, that happens to do edge stuff. >> Tom: That's right. >> Is that accurate? >> I think it's very accurate. Let me make a point and let you respond. >> Okay. >> Neapolitan ice cream has three flavors. Chocolate, vanilla, strawberry, all in one box. That's what we did with this Edgeline. What's the value of that? Well, you can carry it, you can store it, you can serve it more conveniently, with everything together. You could have separate boxes, of chocolate, vanilla, and strawberry, that existed, right, but coming together, that convergence is key. We did that with deep compute, with data capture and control, and then systems management and Enterprise class device and systems management. And I'd like to explain why this is a product. Why would you use this product, you know, as well. 
Before I continue though, I want to get to the seven reasons why you would use this. And we'll go fast. But seven reasons why. But would you like to add anything about the definition of the convergence? >> Yeah, I was going to just give a little perspective, from an OT and an industrial OT kind of perspective. This world has generally lived in a silo away from IT. >> Mm-hmm. >> It's been proprietary networking standards, not been connected to the rest of the enterprise. That's the huge opportunity when we talk about the IoT, or the industrial IoT, is connecting that to the rest of the enterprise. Let me give you an example. One of our customers is Duke Energy. They've implemented an online monitoring system for all of their power generation plants. They have 2,000 of our devices called CompactRIO, that connect to 30,000 sensors across all of their generation plants, getting real-time monitoring, predictive analytics, predictive failure, and it needs to have processing close to the edge, that latency issue I mentioned? They need to basically be able to do deep processing and potentially shut down a machine immediately if it's in a condition that warrants so. The importance here is that as those things are brought online, into IT infrastructure, the importance of deep compute, and the importance of the security and the capability that HPE has, becomes critical to our customers in the industrial internet of things.
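The Duke Energy pattern Eric describes, deciding locally whether to shut a machine down rather than waiting on a cloud round-trip, can be sketched as a minimal edge control loop. The threshold and readings here are invented for illustration; the real CompactRIO deployment uses predictive analytics, not a fixed cutoff:

```python
# Minimal sketch of an edge monitoring/control loop. The vibration
# threshold is hypothetical, chosen only to show the decision being
# made at the edge rather than in the cloud.
VIBRATION_LIMIT_MM_S = 7.1   # hypothetical trip threshold

def evaluate(reading_mm_s: float) -> str:
    """Decide locally, with no network round-trip, whether to keep running."""
    if reading_mm_s > VIBRATION_LIMIT_MM_S:
        return "SHUTDOWN"    # act at the edge; don't wait on the cloud
    return "OK"

# Only summaries need to travel upstream, not every raw sample.
readings = [2.3, 3.1, 9.4, 2.8]
decisions = [evaluate(r) for r in readings]
print(decisions)  # ['OK', 'OK', 'SHUTDOWN', 'OK']
```

The point of the sketch is only that the shutdown decision completes in local compute time, which is what the latency argument above requires.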
So how do you deal with the corner case of I might have my own different devices, it's connected through IT, is that just a requirement on your end, or is that... How do you deal with the multi-vendor thing? >> It has to be an open standard. And there's two elements of open standard in this product, I'll let Tom come in on one, but one of them is, the actual IO standard, that connects to the physical world, we said it's something called PXI. National Instruments is a major vendor within this PXI market, but it is an open standard, there are 70 different vendors, thousands of products, so that part of it in connecting to the physical world, is built on an open standard, and the rest of the platform is as well. >> Indeed. Can I go back to your metaphor of the smartphone that you held up? There are times even today, but it's getting less and less, that people still carry around a camera. Or a second phone. Or a music player. Or the Beats headphones, et cetera, right? There's still time for that. So to answer your question, it's not a replacement for everything. But very frankly, the vision is over time, just like the smartphone, and the app store, more and more will get converged into this platform. So it's an introduction of a platform, we've done the inaugural convergence of the aforementioned data capture, high compute, management, storage, and we'll continue to add more and more, again, just like the smartphone analogy. And there will still be peripheral solutions around, to address your point. >> But your multi-vendor strategy if I get this right, doesn't prevent you, doesn't foreclose the customer's benefits in any way, so they connect through IT, they're connected into the box and benefits. You changed, they're just not converged inside the box. >> At this point. But I'm getting calls regularly, and you may too, Eric, of other vendors saying, I want in. I would like to relate that conceptually to the app store. 
Third party apps are being produced all the time that go onto this platform. And it's pretty exciting. >> And before you get to your seven killer attributes, what's the business model? So you guys have jointly engineered this product, you're jointly selling it through your channels, >> Eric: Yes. >> If you have a large customer like GE for example, who just sort of made the public commitment to HPE infrastructure, how will you guys "split the booty," so to speak? (laughter) >> Well we are actually, as Tom said, we are doing reselling, we'll be reselling this through our channel, but I think one of the key things is bringing together our mutual expertise. Because when we talk about convergence of OT and IT, it's also bringing together the engineering expertise of our two companies. We really understand acquiring data from the real world, controlling industrial systems. HPE is the world leader in IT technology. And so, we'll be working together and mutually with customers to bring those two perspectives together, and we see huge opportunity in that. >> Yeah, okay so it's engineering. You guys are primarily a channel company anyway, so. >> Actually, I can make it frankly real simple: if we go back to the Neapolitan ice cream, and we reference National Instruments as chocolate, they have all the contact with the chocolate vendors, the chocolate customers if you will. We have all the vanilla. So we can go in and then pull each other that way, and then go in and pull this way, right? So that's one way as this market develops. And that's going to be very powerful because indeed, when these markets used to be separate, before today, that also meant separate customers, customers that the other guy does not know. And that's the key here in this relationship. >> So talk about the trend we're hearing here at the show, I mean it's been around in IT for a long time. But more now with the agility, the DevOps and cloud and everything.
End to end management. Because that seems to be the table stakes. Do you address any of that in the announcement, is it part, does it fit right in? >> Absolutely, because, when we take, and we shift left, this is one of our monikers, we shift left. The data center and the cloud is on the right, and we're shifting left the data center class capabilities, out to the edge. That's why we call it shift left. And we meet, our partner National Instruments is already there, and an expert and a leader. As we shift left, we're also shifting with it, the manageability capabilities and the software that runs the management. Whether it be infrastructure, I mean I can do virtualization at the edge now, with a very popular virtualization package, I can do remote desktops like the Citrix company, the VMware company, these technologies and databases that come from our own Vertica database, that come from PTC, a great partner, with again, operations technology. Things that were running already in the data center now, get to run there. >> So you bring the benefit to the IT guy, out to the edge, to management, and Eric, you get the benefit of connecting into IT, to bring that data benefits into the business processes. >> Exactly. And as the industrial internet of things scales to billions of machines that have monitoring, and online monitoring capability, that's critical. Right, it has to be manageable. You have to be able to have these IT capabilities in order to manage such a diverse set of assets. >> Well, the big data group can basically validate that, and the whole big data thesis is, moving data where it needs to be, and having data about physical analog stuff, assets, can come in and surface more insight. >> Exactly. The biggest data of all. >> And vice versa. >> Yup. >> All right, we've got to get to the significant seven, we only have a few minutes left. >> All right. Oh yeah. >> Hit us. >> Yeah, yeah. And we're cliffhanging here on that one. 
But let me go through them real quick. So the question is, why wouldn't I just, you know, rudimentarily collect the data, do some rudimentary analytics, send it all up to the cloud. In fact you hear that today a lot: sensor to cloud, sensor to cloud. Who doesn't have a cloud today? Every time you turn around, somebody's got a cloud, please send me all your data. We do that, and we do that well. We have Helion, we have the Microsoft Azure IoT cloud, we do that well. But my point is, there's a world out there. And it can be as high as 40 to 50 percent of the market, IDC is quoted as suggesting 40 percent of the data collected at the edge, by for example National Instruments, will be processed at the edge. Not sent, necessarily, back to the data center or cloud, okay. With that background, there are seven reasons to not send all the data back to the cloud. That doesn't mean you can't or you shouldn't, it just means you don't have to. There are seven reasons to compute at the edge. With an Edgeline system. Ready? >> Dave: Ready. >> We're going to go fast. And there'll be a test on this, so. >> I'm writing it down. >> Number one is latency, Eric already talked about that. How fast do you want your turnaround time? How fast would you like to know your asset's going to catch on fire? How fast would you like the future autonomous car to know that there's a little girl playing in the road, as opposed to a plastic bag being blown across the road? And are you going to rely on the latency of going all the way to the cloud and back, which by the way may be dropped? It's not only slow, but you ever try to make a phone call recently, and it not work, right? So you get that point. So that's latency, one. You need time to insight, time to response. Number one of seven, I'll go real quick. Number two of seven is bandwidth. If you're going to send all this big analog data, the oldest, the fastest, and the biggest of all big data, all back, you need tremendous bandwidth.
And sometimes it doesn't exist, or, as some of our mutual customers tell us, it exists but I don't want to use it all for edge data coming back. That's two of seven. Three of seven is cost. If you're going to use the bandwidth, you've got to pay for it. Even if you have money to pay for it, you might not want to, so again that's three, let's go to four. (coughs) Excuse me. Number four of seven is threats. If you're going to send all the data across sites, you have threats. It doesn't mean we can't handle the threats, in fact we have the best security in the industry, with our Aruba security, ClearPass, we have ArcSight, we have Voltage. We have several things. But the point is, again, it just exposes it to more threats. I've had customers say, we don't want it exposed. Anyway, that's four. Let's move on to five, duplication. If you're going to collect all the data, and then send it all back, you're going to duplicate at the edge, you're going to duplicate not all things, but some things, both. All right, so duplication. And here we're coming up to number six. Number six is corruption. Not hostile corruption, but just packets dropped. Data gets corrupt. The longer you have it in motion, e.g. back to the cloud, right, the longer it's exposed as well. So that's corruption you can avoid. And number three, I'm sorry, number seven, here we go with number seven. Not to send all the data back, is what we call policies and compliance, geo-fencing. I've had a customer say, I am not allowed to send all the data to these data centers or to my data scientists, because I can't leave country borders. I can't go over the ocean, as well. Now again, all these seven create a market for us, so we can solve these seven, or at least significantly ameliorate the issues, by computing at the edge with the Edgeline systems. >> Great. Eric, I want to get your final thoughts here, as we wind down the segment.
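Tom's seven reasons amount to a placement checklist: if any of them bites, the workload argues for the edge. A toy decision function makes that concrete; all numeric cutoffs below are hypothetical, nothing in the announcement prescribes them:

```python
# Toy edge-vs-cloud placement check based on the seven concerns Tom lists.
# All thresholds are invented for illustration.
def place_workload(latency_budget_ms, data_rate_mbps, leaves_country):
    reasons = []
    if latency_budget_ms < 50:
        reasons.append("latency")          # reason 1
    if data_rate_mbps > 100:
        reasons.append("bandwidth/cost")   # reasons 2 and 3
    if leaves_country:
        reasons.append("compliance")       # reason 7
    # Threats, duplication, and corruption (reasons 4-6) all grow with
    # data in motion, so any hit above already argues for the edge.
    return ("edge", reasons) if reasons else ("cloud", reasons)

print(place_workload(latency_budget_ms=5, data_rate_mbps=800, leaves_country=False))
# ('edge', ['latency', 'bandwidth/cost'])
```

A microsecond-scale control loop with a heavy sensor feed trips the first two checks, matching the IDC figure Tom cites that a large share of edge-collected data never travels to the cloud at all.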
You're from the ops side, ops technologies, this is your world, it's not new to you, this edge stuff, it's been there, done that, it is IoT for you, right? So you've seen the evolution of your industry. For the folks that are in IT, that HP is going to be approaching with this new category, and this new shift left, what does it mean? Share your color, your reasoning and reality check, on the viability. >> Sure. >> And relevance. >> Yeah, I think that there are some significant things that are driving this change. The rise of software capability, connecting these previously siloed, unconnected assets to the rest of the world, is a fundamental shift. And the cost point of acquisition technology has come down to the point where we literally have a better, more compelling economic case to be made, for the online monitoring of more and more machine-type data. That example I gave of Duke Energy? Ten years ago they evaluated online monitoring, and it wasn't economical, to implement that type of a system. Today it is, and it's actually very, very compelling to their business, in terms of scheduled downtime, maintenance cost, it's a compelling value proposition. And the final one is as we deliver more analytics capability to the edge, I believe that's going to create opportunity that we don't even really, completely envision yet. And this deep computing, that the Edgeline systems have, is going to enable us to do analysis at the edge, that we've previously never done. And I think that's going to create whole new opportunities. >> So based on your expert opinion, talk to the IT guys watching, viability, and ability to do this, what's the... Because some people are a little nervous, will the parachute open? I mean, it's a huge endeavor for an IT company to instrument the edge of their business, it's the cutting, bleeding edge, literally. What's the viability, the outcome, is it possible? >> It's here now.
It is here now, I mean this announcement kind of codifies it in a new product category, but it's here now, and it's inevitable. >> Final word, your thoughts. >> Tom: I agree. >> Proud papa, you're like a proud papa now, you got your baby out there. >> It's great. But the more I tell you how wonderful the EL1000, EL4000 is, it's like my mother calling me handsome. Therefore I want to point the audience to Flowserve. F-L-O-W, S-E-R-V-E. They're one of our customers using Edgeline, and National Instruments equipment, so you can find that video online as well. They'll tell us about really the value here, and it's really powerful to hear from a customer. >> John: And availability is... >> Right now we have EL1000s and EL4000s in the hands of our customers, doing evaluations, at the end of the summer... >> John: Pre-announcement, not general availability. >> Right, general availability is not yet, but we'll have that at the end of the summer, and we can do limited availability as we call it, depending on the demand, and how we roll it out, so. >> How big the customer base is, in relevance to the... Now, is this the old Moonshot box, just a quick final question. >> Tom: It is not, no. >> Really? >> We are leveraging some high-performance, low-power technology, that Intel has just announced, I'd like to shout out to that partner. They just announced and launched... Diane Bryant did her keynote to launch the new Xeon, E3, low-power high-performance Xeon, and it was streamed, her keynote, on the Edgeline compute engine. That's actually going into the Edgeline, that compute blade is going into the Edgeline. She streamed with it, we're pretty excited about that as well.
>> Let's see how this plays out, we'll be watching, got to get the draft picks in for this new sports league, we're calling it, like IoT, the edge, of course we're theCUBE, we're living at the edge, all the time, we're at the edge of HPE Discovery. Have one more day tomorrow, but again, three days of coverage. You're watching theCUBE, I'm John Furrier with Dave Vellante, we'll be right back. (electronic music)

Published Date : Jun 9 2016


Kathryn Guarini, Ph.D - IBMz Next 2015 - theCUBE


 

>> Live from the Frederick P. Rose Hall, home of jazz at Lincoln Center in New York, New York, it's theCUBE at IBM z Next, redefining digital business. Brought to you by headline sponsor IBM. >> Hey everyone, we are here live in New York City for the IBM z Systems special presentation of theCUBE. I'm John Furrier, co-founder of SiliconANGLE, with my co-host Dave Vellante, co-founder of Wikibon.org. Dave, we are here with Kathryn Guarini, vice president of z Systems technology. Welcome to theCUBE, great to have you. >> Thank you, I'm really glad to be here. It's an exciting day for us. >> We had a great conversation last night. I wanted to just get you introduced to the crowd; you're overseeing a lot of the technology side of it. You're involved in the announcement, but you're super technical, and the speeds and feeds of this thing are out there. It's in the news, it's in the press, but it's not really getting the justice it deserves. And we were talking earlier on our intro about how the mainframe is back and modernized, but it's not your grandfather's mainframe. Tell us what's different, what's the performance tech involved, why is it different, and what should people be aware of? >> Sure. So this machine really is unmatched. We have tremendous scale and performance in multiple dimensions that we can talk through. The I/O subsystem provides tremendous value, security that's unmatched. So many of the features and attributes of the system just cannot be compared to other platforms. And the z13, what we're announcing today, evolves and improves so many of those attributes. We really designed the system to support transaction growth from mobility, to do analytics in the system, integrated with the data and the transactions, so that we can drive insights when they really matter, and to support cloud delivery. >> So there's two threads that are out there in the news that we wanted to pivot on.
One is the digital business model, and that's out in the press release, all the IBM marketing in action, digital business. We believe, as transformers, that's pretty much something that's going to be transformative. But performance with the cloud has been touted: hey, basically unlimited performance with cloud, think of compute as not a scarce resource anymore. How do you guys see that? Cause you guys are now pushing performance to a whole other level. Why can't I just get scale-out infrastructure, build data centers? How does this fit with that mindset, or does it? >> Yeah, so there's performance in so many different dimensions, and I can talk you through a few of them. So at the heart of the technology in this system, we have tremendous value from the processor up. So starting at the base technology, we build the microprocessor in 22 nanometer technology, eight cores per chip. We've got four layers of cache integrated on this, more cache that can be accessed from these processor cores than on any comparable platform. Tremendous value. You don't have to go out through IO to memory as frequently as you would have to in other environments. We also have an I/O subsystem that has hundreds of additional processing cores that allows you to drive workload fast through that. So I think it's the scale of this system that can allow you to do things in a single footprint that you'd have to do with a variety of distributed environments separately, coupled with unique security features: embedded encryption capability on the processor, PCIe-attached tamper-resistant cryptography, compression engines, so many of these technologies that come together to build a system.
Cause you guys have done some pretty amazing things in the, what they call proprietary days, been mainframe back in the sixties seventies eighties and client server a lot of innovation. So you guys, is that true? Would that be an accurate statement? You guys kind of cobbled together and engineered this system with the best >>engineered from, from from soup to nuts, from the casters up. We live, we literally have made innovations at almost every level here in the system. Now it's evolved from previous generations and we have tremendous capabilities in the prior ones as well. But you see across almost every dimension we have improved performance scape scalability capability. Um, and we've done that while opening up the platform. So some of the new capabilities that we're discussing today include enterprise Linux. So Linux on the platform run Linux on many platforms. Linux is Linux, but it's even better on the Z 13 because now you have the scalability, the security, the availability behind it and new open support, we're announcing KVM will be supported on this platform later this year we have OpenStack supported, we're developing an ecosystem around this. We have renouncing Postgres, Docker, no JS support on the mainframe. And that's tremendously exciting because now we're really broadening a user base and allowing users to do a lot more with Linux on the main. >>So one of the big themes that we're hearing today is bringing marrying analytics and transaction systems together. You guys are very excited about that. Uh, one of the, even even the New York times article referenced this, people are somewhat confused about this because other people talk about doing it. We go to the Hadoop world, you know, we talked big data, spark in memory databases, SAP doing their stuff with Hannah. What's different about what Z systems are doing? >>That's a great question. So today many users are moving data off of platforms, including the mainframe to do their analytics. 
Moving it back, this ETL process, extract, transform, load, is incredibly expensive: cumbersome copies of the data, redundancy, security risk, tremendous complexity to manage. And it's totally unnecessary today, because you can do that analytics now on the System z platform, driving tremendous insights that can be generated within the transaction, integrated where the transactions and the data live. So much more value to do it that way. And we've built up a portfolio of capabilities, some of them new and announced as part of today's event, that allow us to do transformation of the data and analytics on that data. And it's at every level, right? We have embedded analytics accelerators in the processor, a new engine we call SIMD, single instruction multiple data, which allows you to do mathematical vector processing. >> Let's drill down on that, I want to get your take on this. The in-processor stuff is compelling to me; I want to drill down on that, get technical. Right now all the rage is in-memory; Spark has got traction for analytics. The ETL thing is a huge problem, I think that's 100% accurate across the board, we hear that all the time. But what's going on in the processor? Because you guys have advanced not just in-memory, it's in-processor. What is that architecture, what are some of the tech features, and why is that different than just saying, hey, I'm doing a lot of in-memory? >> So the processor has a deeper and richer cache hierarchy than we see in other environments. We have four layers of cache. Two of those cache layers are embedded within the processor core itself; they're private to the core. The next layer is on the processor chip, shared amongst all those cores. And the fourth layer is on a separate chip. It's huge, it's embedded DRAM technology.
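The SIMD engine mentioned above applies one instruction across multiple data elements at once. A minimal sketch of the idea, with an illustrative four-element lane width rather than real z13 vector specifics:

```python
# Sketch of the SIMD idea: one instruction operates on several data
# elements ("lanes") at once, so a vector loop retires far fewer
# instructions than a scalar one. The lane width is illustrative.

def scalar_sum(a, b):
    out, ops = [], 0
    for x, y in zip(a, b):
        out.append(x + y)
        ops += 1                   # one add instruction per element
    return out, ops

def simd_sum(a, b, lanes=4):
    out, ops = [], 0
    for i in range(0, len(a), lanes):
        # one "vector add" covers up to `lanes` elements
        out.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
        ops += 1
    return out, ops

a, b = list(range(1000)), list(range(1000))
res_s, ops_s = scalar_sum(a, b)
res_v, ops_v = simd_sum(a, b)
assert res_s == res_v              # same answer either way
print(f"scalar: {ops_s} ops, simd: {ops_v} ops")  # 1000 vs 250
```

The result is identical; only the instruction count changes, which is where the vector-processing speedup comes from.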
It's a tremendously large cache, and we've expanded it, which means you don't have to go out to memory nearly as frequently. >> So you stay close to the data. In-memory is state of the art today, and you guys have taken it further, inside the core. What kind of performance does that deliver? What's the advantage? >> There are huge performance advantages to that. We see analytics numbers that are something like 17 times faster than comparable solutions. Being able to bring those analytics into the system for insights when you need them, to do faster scoring of transactions, faster fraud detection. So many applications, so many industries are looking to bring these insights faster, more co-located with the data, and not have to pay the latency associated with moving data off and doing some sort of analysis on data that's stale. That's not interesting. We really want to integrate that where the data and the transactions live, and we can now do that on the platform. >> So in-memory obviously is awesome, right? You can go much faster; the best I/O is no I/O, as Gene Amdahl would say. But if something goes wrong and you have to flush the memory and reload everything, it's problematic. How does IBM address that? We hear complaints that in other architectures that's problematic. How do you solve that problem, or have you solved it? >> Well, I think it's a combination of the cache, the memory, the analytics capabilities, and the resiliency of the system. You worry about machines going down, failures, and we've built in security, reliability, and redundancy at every level to prevent failures. We have diagnostic capabilities, things like the IBM zAware solution, right?
This is a solution that monitors system behavior so you can identify anomalous behaviors before you have a problem. That's been available with z/OS; now we're extending it to Linux for the first time. We have disaster recovery and continuous availability solutions like GDPS, now extended to be a virtual appliance for Linux. There are so many features and functions in this system that allow you to have a much more robust, capable platform. >> How popular is Linux? Can you quantify that? You guys talk a lot about Linux, can you give us some percentage? >> Linux has been around for 15 years on the mainframe, and we have very good user adoption. We're seeing a large fraction of our clients running Linux, either all by itself or in concert with z/OS. >> So double-digit workloads? >> Yeah, it's a very significant fraction of the MIPS in the field today. >> Now I want to get a personal perspective from you on some things. One, you have an applied physics degree from Yale, a master's in applied physics from Stanford, and a PhD in applied physics from Stanford. Congratulations, by the way; it means you're super smart and could get into those schools. But the rage is software-defined, right? So I want you to tell us, from your perspective in applied physics, about the advances in silicon that are really being engineered now. Is it the combination of that and software-defined? What's your perspective? What should people know about the physics side of it? Cause you can't change the laws of physics, but silicon is doing some good stuff. So talk about that convergence between the physics, the silicon, and the software. >> Yeah, that's a great question. I think what sets us apart here with the mainframe is our ability to integrate across that stack.
So you're right, silicon is silicon; with a piece of 22-nanometer silicon we can all do similar things. But when you co-optimize what you do with that silicon with high-performance system design, with innovations at every level, firmware, operating systems, software, you can build an end-to-end solution that's unmatched. And within IBM we really have an opportunity to collaborate across the stack. Can we put things in the operating system that take advantage of something that's in the hardware? Being able to do that gives us a unique opportunity, and we've done that here, right? Whether it's the SIMD accelerator and the software capabilities around it, or Sysplex-optimized Java able to take advantage of what's in that microprocessor, we see that with new instructions we offer here that can be exploited by compilers optimizing for what's in the technology. So I think it's that co-optimization across the stack. You're right, as a user you see the software, you see the solution, you see the capability of the machine, but to get that you need the infrastructure underneath it, you need the capabilities that can be exploited by the software. >> And we're seeing that in DevOps right now with the DevOps movement. You're seeing, I want to abstract away the complexities of infrastructure and have software be more optimized. And here you guys are changing the state of the art with the in-memory, in-processor architecture, but also enabling developers and software to work effectively. >> Right, and I think about cloud service delivery, right? We would love to be able to offer end users IT as a service, so they can access the mainframe, all of those qualities of service that we know and love about the mainframe, without the complexity. And we can do that.
Technologies like z/OS Connect and Bluemix with the System z MobileFirst platform allow you to connect from systems of engagement to systems of record and deploy z services. We're trying to help our clients not be cost centers for their firms but provide value-added services, and that can be done with the capabilities on the mainframe. >> So now, Docker, OpenStack, KVM, and obviously we talked about Linux. What does that mean from a business standpoint, from the perspective of running applications? Can you walk us through what you expect clients to do? >> It's all about standardization and really expanding the ecosystem for users on the platform. We want anybody running Linux anywhere to be able to run and develop their applications on the mainframe, to take advantage of the consolidation opportunities driven by the scale of the platform, and to drive unmatched end-to-end security solutions. It's a combination of enabling an ecosystem to do what users expect to be able to do. And that ecosystem continues to evolve; it's very rapidly changing, and we know we have to respond. But we want to make sure we're providing the capabilities that developers and users expect on the platform, and I think we've taken a tremendous leap with the z13 to be able to do that. >> So obviously Linux opened up; that was the starting point. What do you expect with the sort of new open innovations? Will you pull in more workloads, more applications? >> I certainly believe we will. New workloads on the platform, this is an evolution for us, and we continue to see the opportunity to bring new workloads to the platform. The support of Linux and the expanding ecosystem there help us to do that effectively.
We see that, whether it's the transaction growth from mobile, and being able to say, what does that mean for the mainframe? How can we not just respond to that but take advantage of it and enable new opportunities there? So I think absolutely, Linux will help us grow workloads, get into new spaces, and really continue to modernize the mainframe. >> John and I were talking at the open about Paul Maritz, at the time CEO of VMware, saying in 2009, we are going to build a software mainframe. Interesting, very bold statement. Now he's working at Pivotal. Do you have a software mainframe? Have you already built it? >> I don't think you can have software without it running on something. And the mainframe is not just a piece of hardware; the mainframe's a solution. It's a platform that includes technology, infrastructure, hardware, and the software capabilities that run on it. As I said, I think it's the integration, the co-optimization across all of that, that really provides value to clients. I don't know how you can have a software solution without some fundamental infrastructure that gives you the qualities of service, so much of the inherent security and availability. All of that is-- >> That was marketing. It didn't pan out. The vision was beautiful and made a great PowerPoint; he went to Pivotal now. But I think what you're talking about is distributed mainframe capability. The scale-out open source movement has driven the wannabe-mainframe market to explode. Now you look at Amazon and Google, look at these power data centers: they are mainframes, in essence. They are centralized places, and they want to say the cloud is a software mainframe, software running on these data centers. So instead of racking and stacking x86 processors, you just drop in a mainframe, or God box as I call it.
And you have this monster box that's highly optimized, and then you could have clusters of other stuff around it. Your argument is that the integration is what makes the difference. And Amazon makes their own gear, right? We know that now; they don't do Open Compute, they're making their own gear. So people who want to be Amazon would probably go to some kind of hybrid mainframe, since they're not making their own. Does that make sense? Cause Amazon purpose-built their own boxes. >> The way I see it is, for mission-critical applications where you cannot support any downtime, you want a system that's built from the ground up for pure availability and security, and we have that. We have a system where you can prevent failures; we have redundancy at so many levels. It's a different model: when you take money out of your account, or more importantly when you transfer money into your account, you need to make sure it's there, right? You want to know that with a hundred percent confidence, and to do that I would expect you'd feel more confident running it here. >> Credit card transactions, same game all over again. Mission-critical versus non-mission-critical, I mean, internet of things. What's not mission critical is my follow-up question: some sensor data is passive. If it's running my airplane versus reading your temperature, oh, you're down for 10 minutes, I mean, yeah. >> There are some cases where we would accept ups and downs. >> Lumpy; it's really about lumpy SLA performance. Amazon gets away with that because the economics are fantastic, right? But you can't be lumpy in a bank transaction. What about cost? Everybody says the mainframe is so expensive. You guys put out some TCO data that suggests it's less expensive. Help us get through that.
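The redundancy argument made above can be put into rough numbers. This sketch assumes idealized, instant failover between independent components, which real systems only approximate:

```python
# Idealized redundancy math: a system with n redundant components is
# down only when all n fail at once (assuming instant failover and
# independent failures, both idealizations).

def availability(component_avail, redundancy):
    return 1 - (1 - component_avail) ** redundancy

for n in (1, 2, 3):
    a = availability(0.999, n)
    downtime_min = (1 - a) * 365 * 24 * 60   # expected minutes of downtime per year
    print(f"{n}x redundancy: {a:.9f} available, ~{downtime_min:.2f} min/year down")
```

One 99.9%-available component implies roughly 526 minutes of downtime a year; duplicating it drops the expectation to about half a minute, which is why the answer stresses redundancy at every level.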
>> Yeah, so I think when we look at total cost of ownership, we're often looking at the savings in administration and in managing the complexity of sprawl. With the mainframe, because you have such scale and what you can include in a single footprint, you can consolidate so much into this literally very small environment, and the cost savings, because of the integration capabilities and the performance you can contain within this box, become end-to-end cost savings for our clients. The break-even point is not so large. And you talked about mission critical: if you're doing your mission-critical work on your mainframe, and you have other things to do that you don't consider as mission critical, you have an opportunity to consolidate. You can do it all on the same platform. We can run with tremendous utilization; you want to use these machines for all they're worth. >> So a follow-up on that. The stickiness, aka lock-in, used to be: I've got a bunch of COBOL code that won't run anywhere else, you've got me, I've got to keep buying mainframes. Now the stickiness is that for the types of workloads your clients are running, it's cheaper. That's your argument? >> It's cheaper, and I think it has unmatched capability, availability, and security features that you can't find in other solutions. >> And in theory you could replicate it, but it would just be so expensive with people. >> In theory, okay, but think of the fundamental technologies and solutions across that stack: who else can do that? Who else can integrate solutions in the hardware and all the way up the stack? I don't know anyone else. >> Tell me, in your opinion, what gets you most excited about this technology platform? Is there a couple of things, or one thing
I'm super excited by this. Um, I can't sleep at night. I'm intoxicated technically. I mean, what gets you jazzed up on this? >>Well, I, I'll tell you, it's, today's a really proud day. I have to say being here and being a part of this launch, you know, personally having been a part of the development, been an IBM for 15 years. I spent the last eight years doing hardware development, including building components and key parts of the system. And now to see us bring that to market and with the value that I know we're bringing to clients, it's, it get, I, I get a little choked up. I truly, honestly, I truly, honestly feel really, really proud about what we've done. Um, so in terms of what is most exciting, um, I think the analytics story is incredibly powerful and I think being able to take a bunch of the technologies that we've built up over time, including some of the new capabilities like in database transformation and advanced analytics that we'll be continuing to roll out over the course of this year. I think this can be really transformative and I think we can help our clients to take advantage of that. I think they will see tremendous value to their business. We'll be able to do things that we simply couldn't do with the old model of moving data off and, and having the latency that comes with that. So I'm really excited about that >>nice platform, not just a repackaging of mainframe. Okay, great. So second, final question from me I want to ask you is two perspectives on, um, the environment, the society we live in. So first let's talk it CIO, CEO, what mindset should they be in as this new transformation? The digital businesses upon them and they have the ability to rearchitect now with mainframe and cloud and data centers. What should they be thinking about as someone who has a PhD in applied physics, been working on this killer system? What is the, what's the moonshot for that CIO and, and how should they be thinking about their architecture right now? 
>> So I think CIOs need to be thinking about what is a good solution for the variety of problems they have in their shops, and not segment those as we've often seen: you have the x86 distributed world, and maybe you have a mainframe for this and that. I'd begin to think about it more holistically, about the set of challenges you need to address as a business and what capabilities you want to bring to bear to solve those problems. When you think about it that way, you get away from good-enough solutions. You get away from the mindset that this only plays over there, and that only plays over there. And I think you open yourself up to new possibilities that can drive tremendous value to the business, and you can think differently about how to use technology to drive efficiency, performance, and real value. >> Last night at dinner, we all have families and kids, and there's a lot of talk about software driving the world these days. And it is; software's amazing. It's the best time to be a software developer; I've been programming since I was in college, and it's so awesome with open source. However, there's a real hacker culture now with hardware too. So what's your advice to young people out there, middle schoolers, or parents that have kids in middle school, young girls and young boys? Now you've got drones, you've got hackers, Raspberry Pi, these kinds of things going on. You've got kind of this Homebrew Computer mindset. These young kids, they don't even know what Apple-- >> I would say it is so exciting, the engineering world, the technology challenges, hardware or software, and I wouldn't even differentiate. We have a tremendous opportunity to do new and exciting things here. I would say to young girls and boys: don't opt out too soon.
That means take your classes, study math and science in school, and keep it as an option, because you might find when you're in high school or college or beyond that you really want to do this cool stuff. And if you haven't taken the basics, you find yourself not in a position to team up, build great things, deliver new products, and provide a lot of value. So I think it's a really exciting area. >> It's a renaissance I'm seeing. I went to the 30th anniversary of Apple's Macintosh in Cupertino last year, and that whole Homebrew Computer Club was a hacker culture. You know, the misfits, if you will. >> I think there are people who grow up always knowing they want to be the engineer, the software developer, and that's great. And then there are others of us, and I'll put myself in that space, who have a lot of different interests. What has drawn me to engineering, and to the work we do here, is the ability to solve tough problems, to do something no one has ever done before, to team with fantastically smart people and build new technology. I think it's an incredibly exciting space, and I encourage people to think about that opportunity. >> From a person who has a PhD in applied physics, that's awesome. Thanks, Kevin, VP of Systems, for joining us here inside theCUBE. Again, great time to be a software developer, great time to be making hardware and solutions. This is theCUBE, and we're excited to be live in New York City. I'm John Furrier with Dave Vellante. We'll be right back after this short break.

Published Date : Jan 16 2015



James Hamilton, AWS | AWS Re:Invent 2013


 

(mellow electronic music) >> Welcome back, we're here live in Las Vegas. This is SiliconANGLE and Wikibon's theCUBE, our flagship program. We go out to the events and extract the signal from the noise. We are live in Las Vegas at the Amazon Web Services re:Invent conference, about developers, large-scale cloud, big data, the future. I'm John Furrier, the founder of SiliconANGLE, joined by co-host Dave Vellante, co-founder of Wikibon.org, and our guest is James Hamilton, VP and Distinguished Engineer at Amazon Web Services. Welcome to theCUBE. >> Well, thank you very much. >> You're a tech athlete, certainly in our book, a term we coined because we love to use sports analogies. You're kind of the cutting edge. You've been innovating in business and technology for many years, going back to the database days at IBM, Microsoft, and now Amazon. You gave a great presentation at the analyst briefing, very impressive. So I got to ask you the first question: when did you first get addicted to the notion of what Amazon could be? When did you first taste the Kool-Aid? >> Super good question. A couple different instances. One is, I was general manager of Exchange Hosted Services, and we were doing a decent job, but what I noticed was customers were loving it, we were expanding like mad, and I saw opportunity to improve by at least a factor of two, I'm sorry, 10. It's just amazing. So that was a first hint that this is really important for customers. The second one was when S3 was announced, and the storage price pretty much froze the whole industry. I've worked in storage all my life, I think I know what's possible in storage, and S3 was not possible. It was just like, what is this? And so I started writing apps against it, and I was just blown away. Super reliable, unbelievably priced. I wrote a fairly substantial app and got a bill for $7. Wow. So that's really the beginnings of where I knew this was going to change the world, and I've been, as you said, addicted to it since.
So you also mentioned some stats there. We'll break it down, cause we love to talk about the software-defined data center, which is basically not even at the hype stage yet; it's still undefined. But software virtualization and network virtualization really are pushing that movement of the software focus, and that's essentially what you guys are doing. Basically it's a large-scale systems problem: you guys are building a global operating system, as Andy Jassy would say. Well, he didn't say that directly, he said internet operating system, but if you believe that APIs are critical services. So I got to ask you the question around this notion of a data center. I mean, come on, nobody's really going to give up their data center. It might change significantly, but you pointed out the data center costs in order: servers, then power and cooling systems, and then actual power itself. Did I get that right? >> Pretty close, pretty close. Servers dominate, and then after servers, if you look at data centers together, that's power, cooling, and the building and facility itself; that is the number two cost. And the actual power itself is number three. >> So that's a huge issue. When we talk to CIOs, it's like, can you please take the facilities budget off my back? For many reasons: one, it's going to be written off soon, maybe; all kinds of financial issues around-- >> A lot of them don't see it, though, which is a problem. >> That is a problem. It sits with real estate, and then, yes. >> And then they go, "Ah, it's not my problem," so money just flies out the window. >> So it's obviously a cost improvement for you.
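The cost ranking described here, servers first, then the facility, then power, can be sketched with a toy amortization model. Every figure below is an assumption for illustration, not an AWS number:

```python
# Toy monthly-cost breakdown echoing the ranking above: servers first,
# facility (power/cooling infrastructure and building) second, power
# itself third. All capex, lifetime, and energy figures are assumptions.

def monthly_costs(server_capex, server_life_yr,
                  facility_capex, facility_life_yr,
                  power_mw, usd_per_mwh, pue):
    hours_per_month = 24 * 365 / 12
    return {
        "servers": server_capex / (server_life_yr * 12),
        "facility": facility_capex / (facility_life_yr * 12),
        # PUE: total facility power drawn per watt of IT load.
        "power": power_mw * pue * hours_per_month * usd_per_mwh,
    }

costs = monthly_costs(server_capex=120e6, server_life_yr=3,
                      facility_capex=200e6, facility_life_yr=12,
                      power_mw=8, usd_per_mwh=70, pue=1.2)
ranked = sorted(costs, key=costs.get, reverse=True)
print({k: f"${v/1e6:.2f}M/mo" for k, v in costs.items()})
print("ranking:", ranked)  # servers > facility > power
```

The short server lifetime is what pushes servers to the top: a smaller capex amortized over three years outweighs a larger facility capex amortized over a decade or more.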
So what are you guys doing in that area, and what's your big ah-ha for the customers when you walk in the door and say, look, we have this cloud, we have this system, and all those headaches can be relieved? A big aspirin for them. What's the communication like? What do you talk to them about? >> Really, it depends an awful lot on who it is; different people care about different things. What gets me excited is that I know the dominant cost of offering a service is all of this muck. It's all of this complexity, all of this high capital cost up front. A facility will run $200 million before there are servers in it. This is big money, and so from my perspective, taking that away from most companies is one contribution. A second contribution is, if you build a lot of data centers, you get good at it, and as a consequence I think we're building very good facilities. They're very reliable, and the costs are plummeting fast. A third contribution is that because we're making capacity available to customers, they don't have to predict two years in advance what they're going to need, and that means there's less wastage, which is just good for the industry as a whole. >> So we're getting some questions on our CrowdChat application. If you want to ask a question, ask him anything; it's kind of like Reddit. Go to crowdchat.net/reinvent. The first question that came in was, "James, when do you think ARM will be in the data center?" >> Ah ha, that's a great question. Many people know that I'm super excited about ARM. It's early days. The reason I'm excited is partly because I love seeing lots of players, I love seeing lots of innovation. I think that's what's making our industry so exciting right now. So that's one contribution that ARM brings.
Another is, if you look at the history of server-side computing, most of the innovation comes from the volume-driven side, usually on clients first. The reason x86 ended up in such a strong position is that so many desktops were running x86 processors, and as a consequence it became a great server processor, with high R&D flowing into it. ARM is in just about every device that everyone's carrying around; it's in almost every disk drive. It's just super broadly deployed. And whenever you see a broadly deployed processor, it means there's an opportunity to do something special for customers. I think it's good for the industry. But as a precise answer to your question, I really don't have one right now. It's something that we're deeply interested in and investigating deeply, but at this point it hasn't happened yet. I'm excited by it, though. >> Two lines of questioning here. One is things that are applicable to AWS; the other is just your knowledge of the industry and what you think. We talked about that yesterday with OCP, right? >> Yep. >> Not the right fit for us, but you applaud the effort. We should talk about that too. But does splitting workloads up into little itty-bitty processors change the utilization factor and change the need for things like virtualization, you know? What do you think? >> Yeah, it's a good question. I first got excited about the price performance of micro-servers back in 2007, and at that time it was pretty easy to produce a win by going to a lower-powered processor. At that point memory bandwidth wasn't as good as it could be; it was actually hard on some workloads to fully use a processor. Intel's a very smart company; they've done great work on improving the memory bandwidth, and so today it's actually harder to produce a win, and so you kind of have workloads in classes. At the very, very high end we've got database workloads.
They really love single-threaded performance, and performance really is king, but there are lots of highly parallel workloads where there's an opportunity for a big gain. I still think virtualization is probably something where the industry's going to want to be there, just because it brings so many operational advantages. >> So I've got to ask the question. Yesterday we had Jason Stowe on, CEO of Cycle Computing, and he had an amazing thing that he did, sorry, trumpeting it as the kids say, but it's not new to you, but it's new to us. He basically created a supercomputer and spun up hundreds of thousands of cores in 30 minutes, which is like insane, but he did it for like 30 grand. Which would've cost, if you tried to provision it through the TCO calculator or whatever your model, it'd be months and years, maybe, and years. But the thing that he said I want to get your point on, and I'm going to ask you questions specifically on, is Spot instances were critical for him to do that, and the creativity of his solutions. So I've got to ask you, did you see Spot instance pricing being a big deal, and what impact has that had on AWS' vision of large scale? >> I'm super excited by Spot. In fact, it's one of the reasons I joined Amazon. I went through a day of interviews, I met a bunch of really smart people doing interesting work. Someone probably shouldn't have talked to me about Spot because it hadn't been announced yet, and I just went, "This is brilliant! "This is absolutely brilliant!" It's taking the ideas from financial markets, where you've got high-value assets, and saying why don't we actually make a market on the basis of that and sell it off? So two things happen that make Spot interesting. The first is an observation up front that poor utilization is basically the elephant in the room. Most folks can't use more than 12% to 15% of their overall server capacity, and so all the rest ends up being wasted. >> You said yesterday 30% is outstanding. 
It's like have a party. >> 30% probably means you're not measuring it well. >> Yeah, you're lying. >> It's real good, yeah, basically. So that means 70% or more is wasted; it's a crime. And so the first thing that says is that one of the most powerful advertisements for cloud computing is bringing a large number of non-correlated workloads together, because when you're supporting a workload you've got to have enough capacity to support the peak, but you only get to monetize the average. And so as the peak to average gets further apart, you're wasting more. So when you bring a large number of non-correlated workloads together, what happens is it flattens out just by itself. Without doing anything it flattens out, but there's still some ups and downs. And the Spot market is a way of filling in those ups and downs so we get as close to 100% as we can. >> Are there certain workloads that fit Spot? Obviously certain workloads might fit it, but what workloads don't fit the Spot price? Because, I mean, it makes total sense, and it's an arbitrage opportunity for excess capacity laying around, and it's priced based on usage. So is there a workload, 'cause it'll be turned up, turned down, I mean, what are the use cases there? >> Workloads that don't operate well in an interrupted environment, that are very time-critical, those workloads shouldn't be run in Spot. It's just not what the resource is designed for. But workloads like the one that we were talking about with Cycle Computing are awesome, where you need large numbers of resources. If the workload needs to restart, that's absolutely fine, and price is really the focus. >> Okay, and a question from crowd chat. "Ask James what are his thoughts "on commodity networking and merchant silicon." >> I think an awful lot about that. >> This guy knows you. (both laughing) >> Who's that from? >> It's your family. >> Yeah, exactly! >> They're watching. 
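James's peak-versus-average point can be sketched numerically: a single workload must be provisioned for its peak but only monetizes its average, while pooling many non-correlated workloads flattens the combined curve. A minimal simulation, with made-up demand numbers purely for illustration (not AWS data):

```python
import random

random.seed(42)

def demand_curve(hours=168):
    """One workload over a week: a modest baseline plus a few random demand spikes."""
    base = random.uniform(5, 15)
    curve = [base] * hours
    for _ in range(3):  # a few uncorrelated six-hour peaks per workload
        start = random.randrange(hours)
        for h in range(start, min(start + 6, hours)):
            curve[h] += random.uniform(30, 60)
    return curve

def peak_to_average(curve):
    # Capacity must cover max(curve); revenue tracks the mean.
    return max(curve) / (sum(curve) / len(curve))

workloads = [demand_curve() for _ in range(200)]

# Each workload run alone: a high peak-to-average ratio means wasted capacity.
solo = sum(peak_to_average(w) for w in workloads) / len(workloads)

# Aggregate of non-correlated workloads: peaks rarely coincide, so the curve flattens.
combined = [sum(hour) for hour in zip(*workloads)]
pooled = peak_to_average(combined)

print(f"average solo peak/average ratio: {solo:.2f}")
print(f"pooled  peak/average ratio:      {pooled:.2f}")
assert pooled < solo  # pooling flattens demand, raising achievable utilization
```

The pooled ratio lands far closer to 1 than any single workload's, which is the structural gap the Spot market then fills in.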
>> No, network commoditization is a phenomenal thing; the whole industry's needed that for 15 years. We've got a vertically-integrated ecosystem that's kind of frozen in time. Costs everywhere are falling except in networking. We've just got to do something, and so it's happening. I'm real excited by that. It's really changing the Amazon business and what we can do for customers. >> Let's talk a little bit about server design, because I was fascinated yesterday listening to you talk about how you've come full circle. Over the last decade, right, you started with what's got to be stripped-down, basic commodity, and now you're of a different mindset. So describe that, and then I have some follow-up questions for you. >> Yeah, I know what you're alluding to. Years ago I used to argue you don't want hardware specialization, it's crazy. The magic's in software. You want to specialize software running on general-purpose processors, and that's because there was a very small number of servers out there, and I felt like it was the most nimble way to run. However, today in AWS, when we're running tens of thousands of copies of a single type of server, hardware optimizations are absolutely vital. You end up getting a power-performance advantage of 10X. You can get a price-performance advantage that's substantial, and so I've kind of gone full circle where now we're pulling more and more down into the hardware, and starting to do hardware optimizations for our customers. >> So heat density is a huge problem in data centers and server design. You showed a picture of a Quanta package yesterday. You didn't show us your server, you said, "I can't show you ours," but you said, "but we blow this away, "and this is really good." But you described that you're able to get around a lot of those problems because of the way you design data centers. >> Yep. >> Could you talk about that a little bit? >> Sure, sure, sure. 
One of the problems when you're building a server is it could end up anywhere. It could end up in a beautiful data center that's super well engineered. It could end up at the end of a row in a very badly run data center. >> Or in a closet. >> Or in a closet. The air is recirculating, and so the servers have to be designed with huge headroom on cooling requirements, and they have to be able to operate in any of those environments without driving warranty costs for the vendors. We take a different approach. We say we're not going to build terrible data centers. We're going to build really good data centers, and we're going to build servers that exploit the fact that those data centers are good, and what happens is more value. We don't have to waste as much, because we know that we don't have to operate in the closet. >> We've got some more questions coming in here, by the way. This is awesome. This ask-me-anything crowd chat thing is going great. We got someone, he's from Nutanix, so he's a geek. He's been following your career for many years. I've got to ask you about kind of the future of large scale. So Spot, in his comment, David's comment, Spot instances prove that solutions like VMware's distributed power management are not valuable. Don't power off the most expensive asset. So, okay, that brings up an interesting point. I don't want to slam on VMware right now, but I just want to bring it to the next logical question, which is: this is a paradigm shift. That's a buzzword, but really a lot's happening that's new and innovative. And you guys are doing it and leading. What's next in the large-scale paradigm of computing and computer science? On the science side you mentioned merchant silicon. Obviously the genie's out of the bottle there, but what's around the corner? Is it the notifications, the scheduling? Is it virtualization, is it compiler design? What are some of the things that you see out on the horizon that you've got your eyes on? >> That's interesting, I mean. 
If you name your area, I'll tell you some interesting things happening in that area, and it's one of the cool things of being in the industry right now. Ten years ago we had a relatively static, kind of slow pace. You really didn't have to look that far ahead, because if anything was coming you'd see it coming for five years. Now if you ask me about power distribution, we've got tons of work going on in power distribution. We're researching different power distribution topologies. We're researching higher-voltage distribution, direct current distribution. We haven't taken any of those steps yet, but we're working in that area. We've got a ton going on in networking. You'll see an announcement tomorrow of a new instance type that has some interesting characteristics from a networking perspective. There's a lot going on. >> Let's pre-announce, no. >> Gary's over there like-- >> How 'bout database, how 'bout database? I mean, 10 years ago, John always says, database was kind of boring. You'd go to a party and say, oh, welcome to the database business, oh yeah, see ya. 25 years ago it was really interesting. >> Now you go to a party and it's like, hey, ah! Have a drink! >> It's a whole new ballgame, and you guys are participating. Google Spanner is this crazy thing, right? So what are your thoughts on the state of the database business today, in-memory, I mean? >> No, it's beautiful. I did a keynote at SIGMOD a few years ago, and what I said is that 10 years ago Bruce Lindsay, I used to work with him in the database world, Bruce Lindsay called it polishing the round ball. It's just, we're making everything a little, tiny bit better, and now it's fundamentally different. I mean, what's happening right now in the database world, every year, if you stepped out for a year, you wouldn't recognize it. It's just, yeah, it's amazing. >> And DynamoDB has had rapid success. You know, we're big users of that. 
We actually built this app, the crowd chat app that people are using, on Hadoop and HBase, and we immediately moved that to DynamoDB, and your stack was just so much faster and scalable. So I've got to ask you the-- >> And less labor. >> Yeah, yeah. So it's just been very reliable, and all the other goodness of Elastic Beanstalk and SQS, all that other good stuff we're working with, node, et cetera. So I've got to ask you, the area that I want your opinion on around the corner is version control. So at large scale, one of the challenges that we have is, as we're pushing new code, making sure that the integrated stack is completely updated and synchronized with open-source projects. So where does that fit into the scaling up? 'Cause version control used to be easy to manage by downloading software and putting in patches, but now you guys handle all that at scale. So I'm assuming there's some automation involved, some real tech involved, but how are you guys handling the future of making sure the code is all updated in the stack? >> It's a great question. It's super important from a security perspective that the code be up to date and current. It's super important from a customer perspective, and you need to make sure that these upgrades are just non-disruptive. The best answer I heard was yesterday from a customer who was on a panel; they were asked how they deal with Amazon's upgrades, and what she said is, "I didn't even know when they were happening. "I can't tell when they're happening." Exactly the right answer. That's exactly our goal. We monitor the heck out of all of our systems, and our goal, and boy do we take it seriously, is we need to know any issue before a customer knows it. And if you fail on that promise, you'll meet Andy really quick. >> So some other paradigm questions coming in. Floyd asks, "Ask James his opinion of cloud brokerage "companies such as Jamcracker or Gravitant. "Do they have a place, or is it wrong thinking?" 
(James laughs) >> From my perspective, the bigger and richer the ecosystem, the happier our customers all are. It's all goodness. >> It's Darwinism, that's the answer. You know, the fit shall survive. No, but I think that brings up this new marketplace that Spot pricing came out of the woodwork from. It's a paradigm that exists in other industries; apply it to cloud. So brokering of cloud might be something, especially with regional and geographical focuses. You can imagine a world of brokering. I mean, I don't know, I'm not qualified to answer that. >> Our goal, honestly, is to provide enough diversity of services that we completely satisfy customers' requirements, and that's what we intend to do. >> How do you guys think about make versus buy? Are you at a point now where you say, you know what, we can make this stuff for our specific requirements better than we can get it off the shelf, or is that not the case? >> It changes every few minutes. It really does. >> So what are the parameters? >> Years ago when I joined the company we were buying servers from OEM suppliers, and they were doing some tailoring for our uses. It's gotten to the point now where that's not the right model, and we have our own custom designs that are being built. We've now gotten to the point where some of the components in servers are being customized for us, partly because we're driving sufficient volume that it's justified, and partly because the component suppliers are happy to work with us directly and they want input from us. And so every year it's a little bit more specialized, and that line's moving, so it's shifting towards specialization pretty quickly. >> So now I'm going to be replaced by the crowd, gettin' great questions, I'm going to be obsolete! No earbud, I got it right here. 
So the question's more of a fun one, probably for you to answer, or just kind of lean back and kind of pull your hair out, but how the heck does AWS add so much infrastructure per day? How do you do it? >> It's a really interesting question. I know abstractly how much infrastructure we put out every day, but when you actually think about this number in context, it's mind-boggling. So here's the number. Here's the number. Every day, we deploy enough servers to support Amazon when it was a seven-billion-dollar company. Think of how many servers a seven-billion-dollar e-commerce company would actually require. Every day we deploy that many servers, and it's just shocking to me to think that the servers are in the logistics chain, they're being built, they're delivered to the appropriate data centers, there's rack positions there, there's networking there, there's power there. Every day I'm amazed, to be quite honest with you. >> It's mind-boggling. And then for a while I was there, okay, wait a minute. Would that be Moore's Law? Uh, no, not even in particular. 'Cause you said every day. Not every year, every day. >> Yeah, it really is. It's a shocking number, and my definition of scale changes almost every day. If you look at the number of customers that are trusting us with their workloads today, that's what's driving that growth; it's phenomenal! >> We've got to get wrapped up, but I've got to ask the Hadoop World SQL-over-Hadoop question. Obviously Hadoop is great, great for storing stuff, but now you're seeing hybrids come out. Again this comes back down to, you can't recognize the database world anymore if you were asleep for a year. So what's your take on that ecosystem? You guys have Elastic MapReduce and a bunch of other things. There's some big data stuff going on. How do you, from a database perspective, how do you look at Hadoop and SQL over Hadoop? 
>> I personally love 'em both, and I love the diversity that's happening in the database world. There are some people that kind of have a religion and think it's crazy to do anything else. I think it's a good thing. MapReduce in particular, I think, is a good thing, because it takes... The first time I saw MapReduce being used was actually by a Google advertising engineer. And what I loved about it, I was actually talking to him about it, and what I loved is he had no idea how many servers he was using. If you ask me or anyone in technology how many servers they're using, they know. And the beautiful thing is he's running multi-thousand-node applications and he doesn't know. He doesn't care; he's solving advertising problems. And so I think it's good. I think there's a place for everything. >> Well, my final question is one I ask the guests on this show. Put the bumper sticker on the car leaving re:Invent this year. What's it say? What does the bumper sticker say on the car? Summarize for the folks, what is the tagline this year? The vibe, and the focus? >> Yeah, for me this was the year. I mean, the business has been growing, but this is the year where suddenly I'm seeing huge companies 100% dependent upon AWS or on track to be 100% dependent upon AWS. This is no longer an experiment, something people want to learn about. This is real, and this is happening. This is running real businesses. So it's real, baby! >> It's real, baby, I like, that's the best bumper... James, distinguished guest, now a CUBE alum for us, thanks for coming on; you're a tech athlete. Great to have you, great success. Sounds like you've got a lot of exciting things you're working on, and that's always fun. And obviously Amazon is killing it, as we say in Silicon Valley. You guys are doing great, we love the product. We've been using it for crowd chats. Great stuff, thanks for coming on theCUBE. >> Thank you. >> We'll be right back with our next guest after this short break. 
This is live, exclusive coverage with siliconANGLE theCUBE. We'll be right back.

Published Date : Nov 14 2013

