Coherent Nonlinear Dynamics and Combinatorial Optimization
Hi, I'm Hideo Mabuchi from Stanford University. This is my presentation on coherent nonlinear dynamics and combinatorial optimization. This talk introduces an approach we are taking to the analysis of the performance of coherent Ising machines. Let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments, or spins, with total energy given by the expression shown at the bottom left of the slide. Here the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of the total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, the ground-state problem corresponds to what is called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which the runtime required by any computational algorithm to find exact solutions is expected to scale asymptotically exponentially with the number of spins n, for worst-case instances at each n. Of course, there is no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances, and it is also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
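As a concrete illustration of the energy function just described, here is a minimal sketch in Python. The two-spin instance and the sign convention E = -1/2 Σ_ij J_ij σ_i σ_j - Σ_i h_i σ_i are illustrative assumptions (one common convention), not details taken from the slide:

```python
import numpy as np

def ising_energy(sigma, J, h):
    """Ising energy E = -1/2 * sum_{ij} J_ij s_i s_j - sum_i h_i s_i
    for spins s_i in {-1, +1} (one common sign convention)."""
    return -0.5 * sigma @ J @ sigma - h @ sigma

# Toy two-spin ferromagnetic instance: J12 > 0, no local fields.
J = np.array([[0.0, 1.0],
              [1.0, 0.0]])
h = np.zeros(2)

print(ising_energy(np.array([1, 1]), J, h))   # -1.0  (aligned spins: ground state)
print(ising_energy(np.array([1, -1]), J, h))  #  1.0  (anti-aligned spins)
```

Under this convention the ground state of a positive coupling is the aligned configuration, consistent with the ferromagnetic intuition used later in the talk.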
Usually we are more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally find very good but not guaranteed-optimal solutions, and run much faster than algorithms designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem (TSP), for which extensive compilations of benchmarking data may be found online. A recent study found that the best known TSP solver required median runtimes, across a library of problem instances, that scaled as a very steep root-exponential for n up to approximately 4,500. This gives some indication of the change in runtime scaling for generic, as opposed to worst-case, problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with n ranging from 131 to 744,710. Instances from this library with n between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of runtime on a 48-core two-gigahertz cluster; all instances with n greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.014% of a known lower bound having been discovered for an instance with n = 19,289, requiring approximately two days of runtime on a single core at 2.4 gigahertz.
Now, if we simple-mindedly extrapolate the root-exponential scaling from the study out to n = 4,500, we might expect that an exact solver would require something more like a year of runtime on the 48-core cluster used for the n = 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances at much lower cost. At the extreme end, the largest TSP ever solved exactly has n = 85,900. This is an instance derived from 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 gigahertz. But the roughly twenty-fold larger so-called World TSP benchmark instance, with n = 1,904,711, has been solved approximately, with an optimality gap bounded below 0.0474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for MaxCut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for MaxCut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms in the practice of solving hard optimization problems. There thus arises the critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur a high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
Unfortunately, we still have very little conceptual insight into what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So, adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance at lower cost on classes of problem instances that are underserved by existing approaches, and fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it is natural to talk not only about novel algorithms for conventional CPUs but also about highly customized, special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So, against that backdrop, I would like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms. Ising machines in general are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems. In contrast to both more traditional engineering approaches that build Ising machines using conventional electronics, and more radical proposals that would require large-scale quantum entanglement, the emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that harness coherent dynamics for Ising-type information processing.
The general structure of current CIM systems is shown in the figure on the right. The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as a PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables whose real parts, which can be positive or negative, play the role of soft, or mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the synchronously pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string giving a proposed solution of the Ising ground-state problem. This method of solving Ising problems seems quite different from a conventional algorithm that runs entirely on a digital computer.
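The pump-ramp procedure just described can be caricatured with a simple classical mean-field model of the kind often used in the CIM literature; the sketch below is illustrative only, with arbitrary parameter values, real-valued amplitudes, and a pump that, for simplicity, ramps from the uncoupled-oscillator threshold rather than from zero:

```python
import numpy as np

def run_cim(J, eps=0.1, p_max=2.0, steps=4000, dt=0.01, seed=0):
    """Crude classical caricature of the CIM pump ramp: integrate
    dx_i/dt = (p - 1 - x_i^2) x_i + eps * sum_j J_ij x_j
    with the pump p ramped from the uncoupled-OPO threshold (p = 1)
    up to p_max, then read out the spins as sign(x_i)."""
    rng = np.random.default_rng(seed)
    x = 1e-4 * rng.standard_normal(J.shape[0])  # tiny random initial field
    for k in range(steps):
        p = 1.0 + (p_max - 1.0) * k / steps     # gradual pump ramp
        x += dt * ((p - 1.0 - x**2) * x + eps * (J @ x))
    return np.sign(x).astype(int)

# Ferromagnetic pair (J12 > 0): the in-phase mode lases first,
# so the two "spins" should come out aligned.
J = np.array([[0.0, 1.0],
              [1.0, 0.0]])
spins = run_cim(J)
print(spins[0] == spins[1])  # True
```

Running the same sketch with -J (antiferromagnetic coupling) yields opposite final signs, mirroring the two-OPO discussion later in the talk.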
A crucial aspect of the computation is performed physically by the analog, continuous, coherent nonlinear dynamics of the optical degrees of freedom, and in our efforts to analyze CIM performance we have therefore turned to dynamical systems theory: namely, the study of bifurcations, the evolution of critical points, and the topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and we hope that our approach can lead both to improvements of the core CIM algorithm and to pre-processing rubrics for rapidly assessing the CIM-suitability of problem instances. To provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators (OPOs) in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. A single OPO degree of freedom can be modeled as a single resonant optical mode that experiences linear dissipation, due to coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of the slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome the linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, gain equals dissipation, and the OPO undergoes a sort of lasing transition; the steady states of the OPO above this threshold are essentially coherent states. There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it essentially chooses one of the two possible phases randomly, resulting in the generation of a single bit of information.
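The pitchfork structure of this lasing transition can be sketched with a normalized single-OPO amplitude equation dx/dt = (p - 1) x - x^3; the normalization and threshold at p = 1 are standard simplifications assumed here, not values quoted in the talk:

```python
import math

def opo_steady_states(p):
    """Real steady states of the normalized single-OPO equation
    dx/dt = (p - 1) x - x^3: only the near-vacuum state x = 0 exists
    at or below threshold (p <= 1); above threshold a degenerate pair
    x = +/- sqrt(p - 1) appears, equal in magnitude and opposite in
    sign, i.e. opposite in phase."""
    if p <= 1:
        return [0.0]
    a = math.sqrt(p - 1)
    return [-a, 0.0, a]

print(opo_steady_states(0.5))  # [0.0]
print(opo_steady_states(2.0))  # [-1.0, 0.0, 1.0]
```

Above threshold the x = 0 state becomes unstable, so the field falls randomly into one of the two symmetric branches, which is the single random bit described above.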
If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated; for any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing it will inject a perturbation into the other that may interfere either constructively or destructively with the field that the other is trying to generate via its own lasing process. As a result, one can easily show that for alpha positive there is an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of the ferromagnetic or antiferromagnetic two-spin Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase. Clearly we can imagine generalizing this story to larger n; however, the story does not stay as clean and simple for all larger problem instances. To find a more complicated example, we only need to go to n = 4. For some choices of J_ij at n = 4, the story remains simple, like the n = 2 case.
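The threshold shifts for the coupled pair can be checked by linearizing around the origin; this sketch assumes an idealized symmetric mutual-injection coupling matrix, with gain-dissipation balance normalized so that the uncoupled threshold sits at p = 1:

```python
import numpy as np

def collective_thresholds(alpha):
    """Linearized model of two mutually injected OPOs:
    dx/dt = (p - 1) x + alpha * C x, with coupling matrix
    C = [[0, 1], [1, 0]]. Each eigenmode of C (eigenvalue mu = +/-1,
    in-phase or out-of-phase) starts to oscillate when
    p - 1 + alpha * mu > 0, i.e. at threshold p = 1 - alpha * mu."""
    C = np.array([[0.0, 1.0], [1.0, 0.0]])
    mus = np.linalg.eigvalsh(C)
    return {int(round(mu)): 1.0 - alpha * mu for mu in mus}

# Ferromagnetic coupling (alpha > 0): the in-phase mode (mu = +1)
# has the lower threshold, so it lases first.
th = collective_thresholds(0.2)
print(th[+1] < th[-1])  # True
```

Flipping the sign of alpha swaps the two thresholds, which is exactly the ferromagnetic/antiferromagnetic dichotomy described in the paragraph above.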
The figure on the upper left of this slide shows the energies of various critical points for a non-frustrated n = 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but suboptimal minimum at large pump power. The global minimum is actually reached by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin; the basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behavior seems to become more common at larger n. For the n = 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It is curious to note that in both of these small-n examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We are currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, seeking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp-up. Of course, n = 20 is still too small to be of interest for practical optimization applications.
But the advantage of beginning with the study of small instances is that we are able to reliably determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-n limit we can also analyze fully quantum-mechanical models of CIM dynamics, but that is a topic for future talks. Existing large-scale prototypes are pushing into the range of n = 10^4 to 10^6, so our ultimate objective in theoretical analysis really has to be to say something about CIM dynamics in the regime of much larger n. Our initial approach to characterizing CIM behavior in the large-n regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, the TAP equations, and so on. At present we are focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We are investigating, for example, whether there could be some way to explain differences in the relative stability of the global minimum versus other local minima. We are also working to understand the deleterious, or potentially beneficial, effects of non-idealities such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. In closing, I should acknowledge the people who did the hard work on the results I have shown. My group, including graduate students Edwin Ng, Daniel Wennberg, Ryotatsu Yanagimoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini.
All of us are in the Department of Applied Physics at Stanford University, and we also collaborate with Yoshihisa Yamamoto over at NTT-PHI Research Labs. I should acknowledge funding support from the NSF through the Coherent Ising Machines Expedition in Computing, and also from NTT-PHI Research Labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much.
Melissa Massa, Lenovo | Lenovo Transform 2018
>> Live from New York City, it's theCUBE, covering Lenovo Transform 2.0, brought to you by Lenovo. >> Welcome back to theCUBE's live coverage of Lenovo Transform here in New York City. I'm your host Rebecca Knight along with my co-host Stu Miniman. We're joined by Melissa Massa. She is the Executive Director of Hyperscale Sales. Thanks so much for coming on theCUBE. >> Thank you, thank you for having me. It's quite exciting. >> It is, it is very exciting. You're a cube newbie. >> I'm a cube newbie, yes. >> So this is very exciting. I'm sure it's the first of many visits. So Melissa, we're at this real inflection point in technology and in AI, as AI is ushering in this new wave with increasing use of big data and analytics and machine learning. All this means hyperscale is increasingly important. Can you just set the stage for our viewers a little bit about where we are in this-- >> Absolutely, yeah, the transformation is really taking place in this industry that we know and love, and it's really amazing how fast the change is coming. So if you look at the past, traditional 1U or 2U type compute was the standard requirement, right, and today it's much more complex. It's becoming much faster paced, and you look at some of the big guys out there from the top-ten space: they're really helping to evolve AI and machine learning much faster, as it's part of the cloud now and centered on the cloud space. So it's making things, whether it's for personal use, for play, for business, or for good-of-humanity type areas, it's really helping evolve and change the space altogether. >> One of the themes we've talked about in our kickoff there is Lenovo has a global presence, but it's also through a lot of partnerships. So Intel, Nvidia of course has to be very important in the AI space, you know, people like Microsoft and VMware. That's very much, you know, some of those last ones especially, like Microsoft and VMware, very much on the enterprise side.
The cloud, the hyperscale, you mentioned the top 10 providers. What are the pieces, what are they looking for? What's the expertise that Lenovo brings that helps you fight in this very competitive, tight-margin, and very demanding, ever-changing marketplace? >> You know this marketplace well; you sum it up very well. But in this marketplace, when you look at what the big guys are doing, right, and then you talk about partnerships, in our space we don't come in with a predisposition in terms of what we're going to do. It's really through understanding what they're trying to do with technology and the direction they're going, and it's interesting, because at Lenovo we have several hundred engineers now dedicated just to our hyperscale organization, but we have 2000 engineers across the globe. So this really allows us to tap into the expertise in our organization, everything from HPC aspects to multi-socket boxes to different types of platforms; you can look at ARM, you can look at AMD, look at Intel. So we don't really try to be one provider. We try to be the provider for our customers, for their needs and where their requirements are going. >> So where have you seen the most success, and looking forward, where do you see the growth coming from? >> Yeah, we've started out a little bit different in this space. I think a lot of companies take a while getting their name out and getting traction, trying to grow up in what I'll call more of that tier-two, tier-three space. Lenovo really has come into the tier-one space. We're very fortunate in that aspect, in that we're doing more of a top-down trajectory, so we've been very successful. I think you've heard Kirk talk about, and you'll hear us continue to talk about, the partnerships we have today with ten of the largest; truth be known, I've got pilots going on with the others.
I think in a very short period of time we'll be talking about what we're doing across all of the top ten, which is really unique to Lenovo. But again, I think one of the reasons there's been success there is the availability of an engineer-to-engineer relationship we bring to the table, which is really unique and helps our customers as they're going through this evolution with this change in the cloud space: they're realizing that there's not always the expertise they need in house. They've got to go outside, external, and look for help in certain areas. One of the areas is we have an eight-socket box, and it's a great box with an incredibly high memory footprint, and there's not a reference architecture on that box in the marketplace. Lenovo really helped develop it. So that's been a great platform for us to be able to have conversations with clients around, for SAP hosting, HANA hosting, and whatnot. >> Can you talk a little bit about the kind of scale and investment Lenovo needs to have to be successful in this space? For those of us that track the hyperscalers, there's tens of billions of dollars a year that they're investing in people, plant, and infrastructure. Kirk mentioned in the keynote, what was it, a 42-soccer-field-size manufacturing facility. Is that only for hyperscale? Is it used for some of the other businesses? Help us unpack that a little, yeah. >> So that's a great, great question. To be in this business, you have to be incredibly committed to this business, right, and I can say, from YY on down through our entire leadership organization, there is a passion around this space, from a hyperscale compute perspective, for ensuring our success. In order to do that, it really comes with making the right investments, so we can take care of these customers both near-term and long-term. This is not a short-term thing.
This is an incredibly long-term plan for us, and I will tell you, with the growth numbers they've given me over the course of the next years, we have to make these types of investments, right. So not only do we leverage our own manufacturing plants, but fortunately for Lenovo, we own them, so it really helps minimize margin stacking, and I've got great manufacturing facilities around the world. Also now, as you heard today with the 42 football fields, we have started our own motherboard lines in our Hefei, China factory. So we'll be producing over 40,000 boards there a year with the two lines we have, and then we're going to continue to grow well beyond that. >> So you are a tech veteran. This is not your first rodeo here at Lenovo. How would you describe, I mean, talking about YY's vision and the commitment he has made to hyperscale, what do you think it is that differentiates Lenovo in this very crowded and competitive tech world? >> I came from a couple of different places before Lenovo, so I had seen the OEM side, I had seen the ODM aspect. And I was nervous when we launched this out of Lenovo as to how well the market was going to receive it. It's a crowded place, and you've started to see some of the other players that have been there fade off, right. So what's really interesting about Lenovo, when people ask us about what is your strategy, is what we call our ODM-plus model. And what does that mean? Well, it means I'm taking the best parts of an OEM: from a size perspective, the global reach of the markets I can get into for my clients is incredible, and as an exporter of record we're able to get them into markets that are very challenging for others. I have a global services organization, so if you do need me to come into your data center and help with other things, we have that capability too. And then also, because I own my own manufacturing and I don't outsource anything, I keep relatively low costs to do business with.
I can compete with more of that traditional ODM size, and now you take the full vertical integration we have, and you bring that to the table with being able to manufacture all of our own motherboards, all the way up through our systems; it's a pretty powerful story, and I think from what we've seen, the clients have really resonated with this story. They like what they're seeing from the benefits. >> Yeah, there's so much we can learn. You talk so much about scale; I think first of all the customer base that you talk about, 5000 servers or more, is kind of the entry level for that, and just the speed that they're changing. A question we get all the time is how do people keep up with this? Give us a little bit of insight as to what you're hearing from your customers in the hyperscale market. How are they continuing to innovate and grow, and how can everybody deal with kind of the pace of change today? >> It's unbelievable, I mean, you look around, it's immersive data. It's the network: you've got all this data now and you've got to get it through a pipe, right, and so there's all these different aspects coming. I've always told our customers, look, if there are areas that I can't help you with, I'm going to tell you. I'm going to be more what's right up the middle for you guys, so we really focus on where are you going, where are you evolving, where do you need help, how can we help to get you there? I don't know if Kirk or anybody on the team has talked about it, but really breaking news for you guys, because I was going to announce it in my pitch today, is that we are actually going to build our own white-box networking products, and we're going to leave them open source from an OS perspective for our customers too, because we feel this is going to be a very key area for them. We've got the in-house talent. We've actually moved a number of engineers on our networking team directly into our hyperscale organization to get this started.
>> Okay, and this announcement, which, congratulations by the way, is this, are you hearing that demand from the hyperscalers? Some of the hyperscalers have-- >> Absolutely. >> Kind of dipped their toe in there. I know you've been at the OCP events where we see some of the big players like Microsoft and Google. How do they fit, how does that compete against Cisco, so yeah, how much of that is kind of a requirement to the customers? >> It is a requirement, I think, if you're going to be all-in with these customers, because we happen to have a great investment in the networking space already. Also, you see, Lenovo, I think, is a company that doesn't come with 50 years of habits, right? We come as a fresh company. I never hear inside the company, oh, we tried that 10 years ago and we don't want to do it again. We come with a fresh perspective and approach to building our business. We've got the networking organization inside of our company. Why not proliferate it in the next generation, and why does that matter? Open matters, right? Look at everything that's coming today: OpenBMC, open OS. I have major customers coming into Raleigh and sitting down and talking to us about where we're going from a security perspective, and how we're going to bring open security standards into this market. >> The other thing, when I think about it, you know, YY mentioned it: cloud, network, and device, kind of things like IOT and the global device, because everybody, AI and IOT, everybody's going there. How does that play in your space? >> It just continues, the data just continues to double in massive size and scale, and there are new technologies out. People are learning to use things like FPGAs a lot smarter, and you look at what they're able to do today with that technology and deliver one server that can take the compute power of four now. So all of that is helping to evolve this rapid pace and where we're going. >> Finally, what will we be talking about next year?
I mean, perhaps inked deals with the remaining four players that you are in pilot programs with. What other things are most exciting to you? >> Yeah, so I think what you're going to find is I'm launching a team that's going to go after the tier II and tier III market, and we're going to really start to invest in this space. We're going to really start to proliferate. Paul and I, you saw up on the screen, we have 33 custom boards in design today. We have a factory that we need to fill, right, so we're going to continue to really push the envelope on everything we're going to be developing from a custom perspective. I think you're going to see it evolve with quite a number of products, maybe even more so beyond just your traditional server approach. We're there to help clients in other areas where they also need to manufacture maybe a part, or what could be a commodity for them, and they need special attention in that particular space. We're going to continue to work with them. But I would say the biggest thing, when I'm sitting here next year, is going to be the sheer size of where this hyperscale team is going and the revenue and the growth that it's bringing in to Lenovo overall. >> Great, well thank you so much for coming into theCUBE, Melissa. >> It was nice talking to you. >> I appreciate it. Thank you. >> I'm Rebecca Knight for Stu Miniman. We will have more from theCUBE live at Lenovo Transform in just a little bit. (upbeat music)
Lenovo Transform 2.0 Keynote | Lenovo Transform 2018
(electronic dance music) (Intel Jingle) (ethereal electronic dance music) (upbeat techno dance music) >> As a courtesy to the presenters and those around you, please silence all mobile devices, thank you. (electronic dance music) (upbeat salsa music) >> Ladies and gentlemen, please take your seats. Our program will begin momentarily. (electronic dance music) >> Ladies and gentlemen, there are available seats. Towards house left, house left there are available seats. If you are standing, we ask that you please take an available seat. We will begin momentarily, thank you. 
(electronic dance music) (upbeat electronic dance music) (bouncy techno music) >> Ladies and gentlemen, once again we ask that you please take the available seats to your left, house left, there are many available seats. If you are standing, please make your way there. The program will begin momentarily, thank you. Good morning! This is Lenovo Transform 2.0! (keyboard clicks) >> Progress. Why do we always talk about it in the future? When will it finally get here? Progress doesn't come when it's ready for us. We need it when we're ready, and we're ready now. Our hospitals and their patients need it now, our businesses and their customers need it now, our cities and their citizens need it now. To deliver intelligent transformation, we need to build it into the products and solutions we make every day. At Lenovo, we're designing the systems to fight disease, power businesses, and help you reach more customers, end-to-end security solutions to protect your data and your company's reputation. We're making IT departments more agile and cost efficient. We're revolutionizing how kids learn with VR. We're designing smart devices and software that transform the way you collaborate, because technology shouldn't just power industries, it should power people. While everybody else is talking about tomorrow, we'll keep building today, because the progress we need can't wait for the future. >> Please welcome to the stage Lenovo's Rod Lappen! (electronic dance music) (audience applauding) >> Alright. Good morning everyone! >> Good morning. >> Ooh, that was pretty good actually, I'll give it one more shot. Good morning everyone! >> Good morning! 
>> Oh, that's much better! Hope everyone's had a great morning. Welcome very much to the second Lenovo Transform event here in New York. I think when I got up just now on the steps I realized there's probably one thing in common all of us have in this room, including myself, which is: absolutely no one has a clue what I'm going to say today. So, I'm hoping very much that we get through this thing very quickly and crisply. I love this town, love New York, and you're going to hear us talk a little bit about New York as we get through here, but just before we get started I'm going to ask anyone who's standing up the back, there are plenty of seats down here, and down here on the right hand side, I think he called it house left, that's the professional way of calling it, but these steps to my right, your left, get up here, let's get you all seated down so that you can actually sit down during the keynote session for us. Last year we had our very first Lenovo Transform. We had about 400 people. It was here in New York, fantastic event, today, over 1,000 people. We have over 62 different technology demonstrations and about 15 breakout sessions, which I'll talk you through a little bit later on as well, so it's a much bigger event. Next year we're definitely going to be shooting for over 2,000 people as Lenovo really transforms and starts to address a lot of the technology that our commercial customers are really looking for. We were however hampered last year by a storm, I don't know if those of you who were with us last year will remember, we had a storm on the evening before Transform last year in New York, and obviously the day that it actually occurred, and we had lots of logistics. Our media people from EMEA were coming in. 
They took the, the plane was circling around New York for a long time, and Kamran Amini, our General Manager of our Data Center Infrastructure Group, probably one of our largest groups in the Lenovo DCG business, took 17 hours to get from Raleigh, North Carolina to New York, 17 hours, I think it takes seven or eight hours to drive. Took him 17 hours by plane to get here. And then of course this year, we have Florence. And so, obviously with Hurricane Florence down there in the Carolinas right now, we tried to help, but still Kamran has made it today. Unfortunately, very tragically, we were hoping he wouldn't, but he's here today to do a big presentation a little bit later on as well. However, I do want to say, obviously, Florence is a very serious tragedy and we have to take it very seriously. Our headquarters is in Raleigh, North Carolina. While it looks like the hurricane is just missing it, heading a little bit southeast, all of our thoughts and prayers and well wishes are obviously with everyone in the Carolinas. On behalf of Lenovo, everyone at our headquarters, everyone throughout the Carolinas, we want to make sure everyone stays safe and out of harm's way. We have a great mixture today in the crowd of customers, partners, industry analysts, media, as well as our financial analysts from all around the world. There's over 30 countries represented here, and people who are here to listen to YY, Kirk, and Christian Teismann speak today. And so, it's going to be a really, really exciting day, and I really appreciate everyone coming in from all around the world. So, a big round of applause for everyone who's come in. (audience applauding) We have a great agenda for you today, and it starts obviously with a very consistent format which worked very successfully for us last year, and that's obviously our keynote. 
You'll hear YY, our CEO, talk a little bit about the vision he has for the industry and how he sees Lenovo's turned the corner and really driving some great strategy to address our customers' needs. Kirk Skaugen, our Executive Vice President of DCG, will be up talking about how we've transformed the DCG business and once again are hitting record growth ratios for our DCG business. And then you'll hear from Christian Teismann, our SVP and General Manager for our commercial business, get up and talk about everything that's going on in our IDG business. There's really exciting stuff going on there, and obviously with ThinkPad being the cornerstone of that, I'm sure he's going to talk to us about a couple surprises in that space as well. Then we've got some great breakout sessions, I mentioned before, 15 breakout sessions, so while this keynote section goes until about 11:30, once we get through that, please go over and explore, and have a look at all of the breakout sessions. We have all of our subject matter experts from both our PC and MBG, and our DCG businesses out to showcase what we're doing as an organization to better address your needs. And then obviously we have the technology pieces that I've also spoken about, 62 different technology displays there, ranging across everything IoT, 5G, NFV, everything that's really cool and hot in the industry right now is going to be on display up there, and I really encourage all of you to get up there. So, I'm going to have a quick video to show you from some of the setup yesterday on a couple of the 62 technology displays we've got up on stage. Okay let's go, so we've got demonstrations to show you today, and one of the great ones here is the one we've done with NC State, a high-performance computing artificial intelligence demonstration on fresh produce. It's about modeling the population growth of the planet, and how we're going to supply water and food as we go forward. Whoo. Oh, that is not an apple. Okay. 
(woman laughs) Second one over here is really, hey Jonas, how are you? It's really around virtual reality, and how we look at one of the most amazing sites we've got, as an install in our high-performance computing practice here globally. And you can see, obviously, that this is the Barcelona supercomputer, and, where else in New York can you get access to being able to see something like that so easily? Only here at Lenovo Transform. Whoo, okay. (audience applauding) So there's two examples of some of the technology. We're really encouraging everyone in the room after the keynote to flow into that space and really get engaged, and interact with a lot of the technology we've got up there. It seems I need to also do something about my fashion, I've just realized I've worn a vest two days in a row, so I've got to work on that as well. Alright so listen, the last thing on the agenda, after we've gone through the breakout sessions and the demos, tonight at four o'clock, there's about 400 of you registered to be on the cruise boat with us. The doors will open behind me; the boat is literally at the pier right behind us. You need to make sure you're on the boat for 4:00 p.m. this evening. Outside of that, I want everyone to have a great time today, really enjoy the experience, make it as experiential as you possibly can, get out there and really get in and touch the technology. There's some really cool AI displays up there for us all to get involved in as well. So ladies and gentlemen, without further ado, it gives me great pleasure to introduce to you a lover of tennis, as some of you would've heard last year at Lenovo Transform, as well as a lover of technology, Lenovo, and of course, New York City. I am obviously very pleased to introduce to you Yang Yuanqing, our CEO, or as we like to call him, YY. (audience applauding) (upbeat funky music) >> Good morning, everyone. >> Good morning. >> Thank you Rod for that introduction. Welcome to New York City. 
So, this is the second year in a row we've hosted our Transform event here, because New York is indeed one of the most transformative cities in the world. Last year on this stage, I spoke about the Fourth Industrial Revolution, and our vision around the intelligent transformation, how it would fundamentally change the nature of business and customer relationships, and why preparing for this transformation is the key to the future of our company. And in the last year, I can assure you, we have been very busy doing just that, from searching out and bringing in global talents from around the world, to the way we think about every product and every investment we make. I was here in New York just a month ago to announce our fiscal year Q1 earnings, which was a good day for us. I think now the world believes it when we say Lenovo has truly turned the corner to a new phase of growth and a new phase of acceleration in executing the transformation strategy. What's clear to me is that the last few years of purposeful disruption at Lenovo have led us to a point where we can now claim leadership of the coming intelligent transformation. People often ask me, what is the intelligent transformation? I would say it this way. This is the unlimited potential of the Fourth Industrial Revolution, driven by artificial intelligence, being realized: ordering a pizza through your speaker, locking the door with a look, letting your car drive itself back to your home. This indeed reflects the power of AI, but it's just the surface of it. The true impact of AI will not only make our homes smarter and offices more efficient, it will also completely transform every value chain in every industry. However, to realize these amazing possibilities, we will need a structure built around the key components, one that touches every part of all our lives. First of all, explosions in new technology always lead to new structures. This has happened many times before. 
In the early 20th century, thousands of companies provided telephone service. City streets across the US looked like this, and now bundles of microscopic fiber running from city to city bring the world closer together. Here's what driving was like in the US, up until the 1950s. Good luck finding your way. (audience laughs) And today, millions of vehicles are organized and routed daily, making the world more efficient. Structure is vital, from fiber cables and the interstate highways, to our cells bonded together to create humans. Thankfully the structure for intelligent transformation has emerged, and it is just as revolutionary. What does this new structure look like? We believe there are three key building blocks: data, computing power, and algorithms. Ever wondered what is behind intelligent transformation? What is fueling this miracle of human possibility? Data. As the Internet becomes ubiquitous, not only PCs and mobile phones have come online and been generating data. Today it is the cameras in this room, the climate controls in our offices, or the smart displays in our kitchens at home. The number of smart devices worldwide will reach over 20 billion in 2020, more than double the number in 2017. These devices and sensors are connected and generating massive amounts of data. By 2020, the amount of data generated will be 57 times more than all the grains of sand on Earth. This data will not only make devices smarter, but will also fuel the intelligence of our homes, offices, and entire industries. Then we need engines to turn the fuel into power, and the engine is actually the computing power. Last but not least, advanced algorithms combined with Big Data technology and industry know-how will form vertical industrial intelligence and produce valuable insights for every value chain in every industry. When these three building blocks all come together, it will change the world. 
At Lenovo, we have each of these elements of intelligent transformation in a single place. We have built our business around the new structure of intelligent transformation, especially with mobile and the data center now firmly part of our business. I'm often asked, why did you acquire these businesses? Why has Lenovo gone into so many fields? People asked the same questions of the companies that became the leaders of the information technology revolution, or the third industrial transformation. They were the companies that saw the future and what the future required, and I believe Lenovo is that company today. From the largest portfolio of devices in the world, leadership in the data center field, to the algorithm-powered intelligent vertical solutions, not to mention the strong partnerships Lenovo has built over decades, we are the only company that can unify all these essential assets and deliver end-to-end solutions. Let's look at each part. We now understand the importance data plays as fuel in intelligent transformation. Hundreds of billions of devices and smart IoTs in the world are generating data and powering the intelligence. Who makes these devices in large volume and variety? Who puts these devices into people's homes, offices, manufacturing lines, and in their hands? Lenovo definitely has the front row seat here. We are number one in PCs and tablets. We also produce smartphones, smart speakers, smart displays, AR/VR headsets, as well as commercial IoTs. All of these smart devices, or smart IoTs, are linked to each other and to the cloud. In fact, we have more than 20 manufacturing facilities in China, US, Brazil, Japan, India, Mexico, Germany, and more, producing various devices around the clock. We actually make four devices every second, and 37 motherboards every minute. So, this factory located in my hometown, Hefei, China, is actually the largest laptop factory in the world, with more than three million square feet. 
So, this is as big as 42 soccer fields. Our scale and the large portfolio of devices give us access to massive amounts of data, which very few companies can say. So, why is the ability to scale so critical? Let's look again at our example from before. In the early days of the telephone, there were dozens of service providers, but only a few companies could survive consolidation and become the leader. The same was true for the third Industrial Revolution. Only a few companies could scale, only a few could survive to lead. Now the building blocks of the next revolution are locking into place. The (mumbles) will go to those who can operate at scale. So, who could foresee the total integration of cloud, network, and device needed to deliver intelligent transformation? Lenovo is that company. We are ready to scale. Next, our computing power. Computing power is provided in two ways. On one hand, the modern supercomputers are providing the brute force to quickly analyze the massive data like never before. On the other hand, cloud computing data centers with server, storage, and networking capabilities, and edge computing IoTs, gateways, and mini servers are making computing available everywhere. Did you know, Lenovo is the number one provider of supercomputers worldwide? 170 of the top 500 supercomputers run on Lenovo. We hold 89 world records in key workloads. We are number one in x86 server reliability for five years running, according to ITIC, a respected provider of industry research. We are also the fastest growing provider of hyperscale public cloud, hyper-converged, and aggressively growing in edge computing. Kirk will expand on this point soon. And finally, to turn these individual nodes into a symphony, we must transform the data and utilize the computing power with advanced algorithms. 
Manufacturing, industry maintenance, healthcare, education, retail, and more, so many industries are on the edge of intelligent transformation to improve efficiency and provide better products and services. We are creating advanced algorithms and big data tools, combined with industry know-how, to provide intelligent vertical solutions for several industries. In fact, we started at Lenovo first. Our IT and research teams partnered with our global supply chain to develop an AI that improved our demand forecasting accuracy. Beyond managing our own supply chain, we have offered our deep learning supply chain solution to other manufacturing companies to improve their efficiency. In the best case, we have improved the demand forecast accuracy by 30 points, to nearly 90 percent, for Baosteel, the largest steel manufacturer in China, serving the world as well. Led by Lenovo research, we launched the industry-leading commercial-ready AR headset, DaystAR, partnering with companies like the ones in this room. This technology is being used to revolutionize the way companies service utilities, and even jet engines. Using our workstations, servers, and award-winning image processing algorithms, we have partnered with hospitals to process complex CT scan data in minutes. This enables doctors to more successfully detect tumors, and it increases the success rate of cancer diagnosis all around the world. We are also piloting our smart IoT driven warehouse solution with one of the world's largest retail companies to greatly improve efficiency. So, the opportunities are endless. This is where Lenovo will truly shine. When we combine the industry know-how of our customers with our end-to-end technology offerings, our intelligent vertical solutions like this are growing, which Kirk and Christian will share more about. Now, what will drive this transformation even faster? The speed at which our networks operate, specifically 5G. 
You may know that Lenovo just launched the first-ever 5G smartphone, our Moto Z3, with the new 5G Moto Mod. We are partnering with multiple major network providers like Verizon and China Mobile. With the 5G Mod scheduled to ship early next year, we will be the first company to provide a 5G mobile experience to any user. This is amazing innovation. You don't have to buy a new phone, just add the 5G clip-on. What can I say, except wow. (audience laughs) 5G is 10 times faster than 4G. Its download speed will transform how people engage with the world: driverless cars, new types of smart wearables, gaming, home security, industrial intelligence, all will be transformed. Finally, accelerating with partners. As ready as we are at Lenovo, we need partners to unlock our full potential, partners here to create with us the age of intelligent transformation. The opportunities of intelligent transformation are too profound, the scale is too vast. No company can fully drive it alone. We are eager to collaborate with all partners that can help bring our vision to life. We are dedicated to open partnerships, dedicated to cross-border collaboration, unified standards, shared advantages, and market synergies. We partner with the biggest names in the industry: Intel, Microsoft, AMD, Qualcomm, Google, Amazon, and Disney. We also find and partner with smaller innovators as well. We're building the ultimate partner experience: open, shared, collaborative, diverse. So, everything is in place for intelligent transformation on a global scale. Smart devices are everywhere, the infrastructure is in place, networks are accelerating, and industries demand to be more intelligent, and Lenovo is at the center of it all. We are helping to drive change with hundreds of companies, companies just like yours, every day. We are your partner for intelligent transformation. Transformation never stops. 
This is what you will hear from Kirk, including details about the Lenovo NetApp global partnership we just announced this morning. We've made the investments in every single aspect of the technology. We have the end-to-end resources to meet your end-to-end needs. As you attend the breakout sessions this afternoon, I hope you see for yourself how much Lenovo has transformed as a company this past year, and how we truly are delivering a future of intelligent transformation. Now, let me invite to the stage Kirk Skaugen, the president of our Data Center Group, to tell you about the exciting transformation happening in the global data center market. Thank you. (audience applauding) (upbeat music) >> Well, good morning. >> Good morning. >> Good morning! >> Good morning! >> Excellent, well, I'm pleased to be here this morning to talk about how we're transforming the data center and taking you as our customers through your own intelligent transformation journey. Last year I stood up here at Transform 1.0, and we were proud to announce the largest data center portfolio in Lenovo's history, so I thought I'd start today by talking about the portfolio and the progress that we've made over the last year, and the strategies that we have going forward in phase 2.0 of Lenovo's transformation to be one of the largest data center companies in the world. We had an audacious vision that we talked about last year, and that is to be the most trusted data center provider in the world, empowering customers through the new IT, intelligent transformation. And now as the world's largest supercomputer provider, giving something back to humanity is very important this week with the hurricanes now hitting North Carolina's coast, but we take this most trusted aspect very seriously, whether it's delivering the highest quality products on time to you as customers with the highest levels of security, or whether it's how we partner with our channel partners and our suppliers each and every day. 
You know we're in a unique world where we're going from hundreds of millions of PCs, and then over the next 25 years to hundreds of billions of connected devices, so each and every one of you is going through this intelligent transformation journey, and in many aspects we're very early in that cycle. And we're going to talk today about our role as the largest supercomputer provider, and how we're solving humanity's greatest challenges. Last year we talked about two special milestones, the 25th anniversary of ThinkPad, but also the 25th anniversary of Lenovo with our IBM heritage in x86 computing. I joined the workforce in 1992 out of college, and IBM's first personal server was launching at the same time with an OS/2 operating system and a free mouse when you bought the server as a marketing campaign. (audience laughing) But what I want to be very clear about today is that the innovation engine is alive and well at Lenovo, and it's really built on the culture that we're building as a company. All of these awards at the bottom are things that we earned over the last year at Lenovo, as a Fortune, now, 240 company, larger than companies like Nike, or AMEX, or Coca-Cola. The one I'm probably most proud of is Forbes' first list of the top 2,000 globally regarded companies. This was something where 15,000 respondents in 60 countries voted based on ethics, trustworthiness, social conduct, company as an employer, and overall company performance, and Lenovo was ranked number 27 of 2,000 companies by our peer group, but we also now one of-- (audience applauding) But we also got a perfect score in the LGBTQ Equality Index, exemplifying the diversity internally. We're number 82 in the top working companies for mothers, top working companies for fathers, top 100 companies for sustainability. If you saw that factory, it's filled with solar panels on top of it. And now again, one of the top global brands in the world. So, innovation is built on a foundation of customer trust. 
We also said last year that we'd be crossing an amazing milestone, and so we did: over the last 12 months we shipped our 20 millionth x86 server. So, thank you very much to our customers for this milestone. (audience applauding) So, let me recap some of the transformation elements that have happened over the last year. Last year I talked about a lot of brand confusion, because we had the ThinkServer brand from the legacy Lenovo, System x from IBM, and we had acquired a number of networking companies, like BLADE Network Technologies, et cetera, et cetera. Over the last year we've been ramping based on two brand structures: ThinkAgile for next generation IT and all of our software-defined infrastructure products, and ThinkSystem as the world's highest-performance, most reliable x86 brand, for servers, for storage, and for networking. We have transformed every single aspect of the customer experience. A year and a half ago, we had four different global channel programs around the world. Typically we're about twice the mix to our channel partners of any of our competitors, so this was really important to fix. We now have a single global channel program, and have technically certified over 11,000 partners to be technical experts on our product line to deliver better solutions to our customer base. Gartner recently recognized Lenovo as the 26th ranked supply chain in the world. And that's a pretty big honor when you're up there with Amazon and Walmart and others, but in tech, we are now in the top five supply chains. You saw the factory network from YY, and today we'll be talking about products shipping in more than 160 countries, and I know there's people here that I've met already this morning from India, from South Africa, from Brazil and China. We announced new Premier Support services, enabling you to go directly to local language support in nine languages in 49 countries in the world, going directly to a native-speaker level three support engineer. 
And today we have more than 10,000 support specialists supporting our products in over 160 countries. We've delivered three times the number of engineered solutions to drive a solutions orientation, whether it's on HANA, or SQL Server, or Oracle, et cetera, and we've completely reengaged our system integrator channel. Last year we had the CIO of DXC on stage, and here we're talking about more than 175 percent growth through our system integrator channel in the last year alone as we've brought that back and really built strong relationships there. So, thank you very much for amazing work here on the customer experience. (audience applauding) We also transformed our leadership. We thought it was extremely important, with a focus on diversity, to have diverse talent from the legacy IBM, the legacy Lenovo, but also from outside the industry. We made about 19 executive changes in the DCG group. This is the most senior leadership team within DCG, all of whom came newly on board, mainly from outside competitors, over the last year. About 50 percent of our executives were hired internally and 50 percent externally, and 31 percent of those new executives are diverse, representing the diversity of our global customer base and in gender. So welcome, and you're going to be able to meet most of them over here in the breakout sessions later today. (audience applauding) But some things haven't changed; they just keep getting better within Lenovo. So, last year I got up and said we were committed, with the new ThinkSystem brand, to be a world performance leader. You're going to see that we're sponsoring Ducati for MotoGP. You saw the Ferrari out there with Formula One. That's not a surprise. We want the Lenovo ThinkSystem and ThinkAgile brands to be synonymous with world-record performance.
So in the last year we've gone from 39 to 89 world records, and partners like Intel will tell you we now have four times the number of world-record workloads on Lenovo hardware than any other server company on the planet today, with more than 89 world records across HPC, Java, database, transaction processing, et cetera. And we're proud to have just brought on Doug Fisher from Intel Corporation, who had about 10,000 to 17,000 people in any given year working for him on workload optimizations across all of our software. It's just another testament to the leadership team we're bringing in to keep focusing on world-class performance software and solutions. Per ITIC, we are also now number one in x86 server reliability, five years running. This is a survey where CIOs, in a blind survey, are asked to report the uptime of their x86 server equipment over the last 365 days. And you can see from 2016 to 2017, downtime of over four hours, as noted by the 750 CXOs in more than 20 countries, is about one percent for the Lenovo products, improving from generation to generation as we went from Broadwell to Purley. So we're taking our reliability, which was really paramount in the IBM System x heritage, and ensuring that we're recognized not just for high performance but for the highest level of reliability for mission-critical workloads. And what that translates into is that we have once again been ranked number one in customer satisfaction by you, our customers, in 19 of 22 attributes worldwide, and in North America in 18 of 22. This is a survey by TBR across hundreds of customers of us and our top competitors. This is the ninth consecutive study in which we've been ranked number one in customer satisfaction, so we're taking this extremely seriously, and in fact YY has now changed the compensation of every single Lenovo employee.
Up to 40 percent of their compensation bonus this year is going to be based on customer metrics like quality, order-to-ship, and things of this nature. So, we're really putting every employee focused on customer centricity this year. So, the summary on Transform 1.0 is that every aspect of what you knew about Lenovo's Data Center Group has transformed: from the culture, to the branding, to dedicated sales and marketing, supply chain, and quality groups, to a worldwide channel program and certifications, to new system integrator relationships, and to the new leadership team. So, rather than me just talk about it, I thought I'd share a quick video about what we've done over the last year, if you could run the video please. Turn around for a second. (epic music) (audience applauds) Okay. So, thank you to all our customers that allowed us to publicly display their logos in that video. So, what that means for you as investors, and for the investor community out there, is that our customers have responded: this year Gartner just published that we are the fastest growing server company in the top 10, with 39 percent growth quarter-on-quarter and 49 percent growth year-on-year. If you look at the progress we've made since the transformation over the last three quarters publicly, we've grown 17 percent, then 44 percent, then 68 percent year-on-year in revenue, and I can tell you this quarter I'm as confident as ever in the financials around the DCG group, and it hasn't been in one area. You're going to see breakout sessions on hyperscale, software-defined, and flash, which are all growing more than 100 percent year-on-year; supercomputing, which we'll talk about shortly, now number one; and then ultimately profitability, delivering five consecutive quarters of pre-tax profit increase. So, thank you very much to the customer base who's been working with us through this transformation journey.
So, you're here to really hear what's next in 2.0, and that's what I'm excited to talk about today. Last year I came up with an audacious goal that we would become the largest supercomputer company on the planet by 2020, and this graph represents, since the acquisition of the IBM System x business, how far we were behind being the number one supercomputer company. When we started we were 182 positions behind, and even with, for example, the acquisition of SGI by HPE, we've now accomplished our goal, actually two years ahead of time. We're now the largest supercomputer company in the world. About one in every four supercomputers, 117 on the list, are now Lenovo computers, and you saw in the video what the universities said, but I think what I'm most proud of is when your customers rank you as the best. So the awards at the bottom here are actually Readers' Choice awards from the last International Supercomputing Conference, where the scientific researchers on these computers ranked their vendors, and we were actually rated the number one server technology in supercomputing with our ThinkSystem SD530, and the number one storage technology with our ThinkSystem DSS-G, but more important is what we're doing with the technology. You're going to see we won best in life sciences, best in data analytics, and best in collaboration as well, so you're going to see all of that in our breakout sessions. As you saw in the video, 17 of the top 25 research institutions in the world are now running Lenovo supercomputers. And again, coming from Raleigh and watching that hurricane come across the Atlantic, there are eight supercomputers crunching all of those models you see, from Germany to Malaysia to Canada, and we're happy to have SciNet from the University of Toronto here with us in our breakout session to talk about what they're doing on climate modeling as well. But we're not stopping there.
We just announced our new Neptune warm water cooling technology, which won the International Supercomputing Vendor Showdown, the first time we've won that best of show in 25 years, and we've now installed it. We're building out LRZ in Germany, the first-ever warm water cooling at Peking University, at the India Space Propulsion Laboratory, at the Malaysian Weather and Meteorological Society, at Uninett at the largest supercomputer in Norway, at T-Systems, and at the University of Birmingham. This is truly amazing technology where we're actually using water to cool the machine to deliver a significantly more energy-efficient computer. Super important when we're looking at global warming, as some of the electric bills can be millions of dollars for just one computer, which draws enough power to run a small city. We've built AI centers now in Morrisville, Stuttgart, Taipei, and Beijing, where customers can bring their AI workloads in and work with experts from Intel, from Nvidia, and from our FPGA partners on their workloads and how they can best implement artificial intelligence. And we also this year launched LICO, which is Lenovo Intelligent Computing Orchestration software; it's a software solution that simplifies the management and use of distributed clusters in both HPC and AI model development. What it enables you to do is take a single cluster and run both HPC and AI workloads on it simultaneously, delivering better TCO for your environment, so check out LICO as well. A lot of the customers here and on Wall Street are very excited and already using it. And we talked about solving humanity's greatest challenges. In the breakout session, you're going to have a virtual reality experience where you're going to be able to walk through what was just ranked the world's most beautiful data center, the Barcelona Supercomputer. So, you can actually walk through one of the largest supercomputers in the world from Barcelona.
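The shared-cluster idea behind LICO described above, one pool of nodes serving both HPC and AI job queues instead of two siloed clusters, can be sketched in a few lines. This is a toy illustration only, not LICO's actual interface (which is a web console and scheduler integration); the class and job names here are invented for the example.

```python
# Toy sketch of one node pool admitting jobs from both an HPC queue
# and an AI queue, instead of maintaining two separate clusters.
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    kind: str      # "hpc" or "ai"
    nodes: int     # nodes requested

@dataclass
class SharedCluster:
    total_nodes: int
    free_nodes: int = field(init=False)
    running: list = field(default_factory=list)

    def __post_init__(self):
        self.free_nodes = self.total_nodes

    def submit(self, job: Job) -> bool:
        """Admit a job from either queue if capacity allows."""
        if job.nodes <= self.free_nodes:
            self.free_nodes -= job.nodes
            self.running.append(job)
            return True
        return False  # job waits in its queue

    def finish(self, job: Job) -> None:
        """Return a finished job's nodes to the shared pool."""
        self.running.remove(job)
        self.free_nodes += job.nodes

cluster = SharedCluster(total_nodes=100)
cluster.submit(Job("cfd-sim", "hpc", nodes=60))
cluster.submit(Job("resnet-train", "ai", nodes=30))
# Both workload types share the same 100 nodes: 90 busy, 10 free.
print(cluster.free_nodes)  # 10
```

Because both workload types draw from the same free-node pool, idle HPC capacity can absorb AI jobs and vice versa, which is where the better TCO for a single cluster comes from.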
You can see the work we're doing with NC State, where we're going to have to grow the food supply of the world by 50 percent, and there's not enough fresh water in the world in the right places to actually make all those crops grow between now and 2055, so you're going to see the progression of how they're mapping the entire globe and the water around the world, and how to build out the crop population over time using AI. You're going to see our work with Vestas, which has the largest supercomputer in the wind turbine industry, and how they're working on wind energy, and then with University College London, how they're working on some of the toughest particle physics calculations in the world. So again, lots of opportunity here. Take advantage of it in the breakout sessions. Okay, let me transition to hyperscale. So in hyperscale now, we have completely transformed our business model. We are now powering six of the top 10 hyperscalers in the world, which is a significant difference from where we were two years ago. And the reason we're doing that is we've coined a term called ODM+. We believe that hyperscalers want more procurement power than an ODM, and Lenovo is doing about $18 billion of procurement a year. They want a broader global supply chain than they can get from a local system integrator; we're in more than 160 countries around the world. But they want the same world-class quality and reliability like they get from an MNC. So, what we're doing now is, instead of just taking off-the-shelf motherboards from somewhere, we're starting with a blank sheet of paper, we're working with the customer base on customized SKUs, and you can see we are already developing 33 custom solutions for the largest hyperscalers in the world.
And then, we're not just running notebooks through this factory where, as YY said, we're running 37 notebook boards a minute; we're now putting in tens and tens and tens of thousands of server boards of capacity per month into this same factory, so absolutely we can compete with the most aggressive ODMs in the world. But it's not just putting these things in on the motherboard side; we're also building out these systems all around the world: India, Brazil, Hungary, Mexico, China. This is an example of a new hyperscale customer we've had this last year: 34,000 servers we delivered in the first six months. The next 34,000 servers we delivered in 68 days. The next 34,000 servers we delivered in 35 days, with more than 99 percent on-time delivery to 35 data centers in 14 countries as diverse as South Africa, India, China, Brazil, et cetera. And I'm really ashamed to say it was 99.3, because we did have a forklift driver who rammed their forklift right through the middle of one of the server racks (audience laughing) at JFK Airport, which we had to respond to, but I think this gives you a perspective of what it is to be a top five global supply chain in technology. So last year, I said we would invest significantly in IP, in joint ventures, and in M&A to compete in software-defined, in networking, and in storage, so I wanted to give you an update on that as well. Our newest software-defined partnership is with Cloudistics, enabling a fully composable cloud infrastructure. It's an exclusive agreement, you can see them here. I think Nag, their founder, is going to be here today, with a significant Lenovo investment in the company. So, this new ThinkAgile CP series delivers the simplicity of the public cloud on-premise, with exceptional support and a marketplace of essential enterprise applications, all with single-click deployment. So simply put, we're delivering a private cloud with a premium experience. It's simple in that you need no specialists to deploy it.
An IT generalist can set it up and manage it. It's agile in that you can provision dozens of workloads in minutes, and it's transformative in that you get all of the goodness of public cloud on-prem in a private cloud to unlock opportunity for you. So, we're extremely excited about the ThinkAgile CP series that's now shipping into the marketplace. Beyond that we're aggressively ramping, and we're either doubling, tripling, or quadrupling our market share as customers move from traditional server technology to software-defined technology. With Nutanix we've been public about growing more than 150 percent year-on-year as their fastest growing partner, but today I want to set another audacious goal. I believe we can be not just Nutanix's fastest growing partner but their largest partner within two years. On Microsoft, our Azure Stack business is already at four times the market share of our traditional business. We were the first to launch our ThinkAgile on Broadwell and on Skylake with the Azure Stack infrastructure. And on VMware we're at about twice our market segment share. We were the first to deliver an Intel-optimized, Optane-certified vSAN node. And with Optane technology, we're delivering 50 percent more VM density than any competitive SSD system in the marketplace, about 10 times lower latency, and four times the performance of any SSD system out there, and Lenovo is first to market on that. And at VMworld you saw CEO Pat Gelsinger of VMware talk about Project Dimension, which is edge as a service, and we're the only OEM beyond the Dell family that is participating today in Project Dimension. Beyond that you're going to see a number of other partnerships we have. I'm excited that we have the city of Bogotá, Colombia here, an eight-million-person city, where we announced a 3,000-camera video surveillance solution last month. With Pivot3, you're going to see the city of Bogotá in our breakout sessions.
You're going to see a new partnership with Veeam around backup that's launching today. You're going to see partnerships with Scale Computing in IoT and hyper-converged infrastructure, working with some of the largest retailers in the world. So again, everything is out in the breakout sessions. Transitioning to storage and data management, it's been a great year for Lenovo: more than 100 percent growth year-on-year, 2X market growth in flash arrays. IDC just reported 30 percent growth in storage, number one in price performance in the world, and the best HPC storage product in the Top500 with our ThinkSystem DSS-G. So strong coverage, but I'm excited today to announce for Transform 2.0 that Lenovo is launching the largest data management and storage portfolio in our 25-year data center history. (audience applauding) So a year ago, the largest server portfolio, becoming the largest, fastest growing server OEM; today the largest storage portfolio. But as you saw this morning, we're not doing it alone. Today Lenovo and NetApp, two global powerhouses, are joining forces to deliver a multi-billion dollar global alliance in data management and storage to help customers through their intelligent transformation. As the fastest growing worldwide server leader and one of the fastest growing flash array and data management companies in the world, we're going to deliver more choice to customers than ever before, global scale that's never been seen, supply chain efficiencies, and rapidly accelerating innovation and solutions. So, let me unwrap this a little bit for you and talk about what we're announcing today. First, it's the largest portfolio in our history. You're going to see not just storage solutions launching today but a set of solution recipes from NetApp that are going to make Lenovo servers and NetApp or Lenovo storage work better together.
The announcement enables Lenovo to go from covering 15 percent of the global storage market to more than 90 percent of the global storage market, and to distribute these products in more than 160 countries around the world. So we're launching today 10 new storage platforms, the ThinkSystem DE and ThinkSystem DM platforms. They're going to be centrally managed, so the same XClarity management that you've been using for servers you can now use across all of your storage platforms as well, and they'll be supported by the same 10,000-plus service personnel that are giving outstanding customer support to you today on the server side. And we didn't come up with this in the last month or the last quarter. We're announcing availability for ordering today and shipments tomorrow of the first products in this portfolio, so we're excited that it's not just a future announcement but something you as customers can take advantage of immediately. (audience applauding) The second part of the announcement is that we are announcing a joint venture in China. Not only will this be a multi-billion dollar global partnership, but Lenovo will be a 51 percent owner and NetApp a 49 percent owner of a new joint venture in China, with the goal of becoming one of the top three storage companies in the largest data and storage market in the world. We will deliver our R&D in China for China, pooling our IP and resources together, and delivering a single route to market through a complementary channel, not just in China but worldwide. And for the future, I just want to tell everyone this is phase one. There is so much exciting stuff. We're going to be on the stage over the next year talking to you about integrated solutions, next-generation technologies, and further synergies and collaborations. So, rather than just have me talk about it, I'd like to welcome to the stage our new partner NetApp, and Brad Anderson, who's the senior vice president and general manager of NetApp Cloud Infrastructure.
(upbeat music) (audience applauding) >> Thank you, Kirk. >> So Brad, we've known each other a long time. It's an exciting day. I'm going to give you the stage and allow you to share NetApp's perspective on this announcement. >> Very good, thank you very much, Kirk. Kirk and I go back to, I think, 1994, so hey, good morning and welcome. My name is Brad Anderson. I manage the Cloud Infrastructure Group at NetApp, and I am honored and privileged to be here at Lenovo Transform, particularly today on today's announcement. Now, you've heard a lot about digital transformation, about how companies have to transform their IT to compete in today's global environment. And today's announcement of the partnership between NetApp and Lenovo is what that's all about. This is the joining of two global leaders bringing innovative technology in a simplified solution to help customers modernize their IT and accelerate their global digital transformations. It draws on the strengths of both companies: Lenovo's high-performance compute and world-class supply chain, and NetApp's hybrid cloud data management, hybrid flash, and all-flash storage solutions and products. And both companies provide our customers with the global scale for them to be able to meet their transformation goals. At NetApp, we're very excited. This is a quote from George Kurian, our CEO. George spent all day yesterday with YY and Kirk, and would have been here today if it hadn't also been our shareholders meeting in California, but I want to convey how excited we all are across NetApp about this partnership. This is a partnership between two companies with tremendous market momentum. Kirk took you through all the amazing results that Lenovo has accomplished: number one in supercomputing, number one in performance, number one in x86 reliability, number one in x86 customer sat, number five in supply chain. Really impressive, and congratulations.
Like Lenovo, NetApp is also on a transformation journey, from a storage company to the data authority in hybrid cloud, and we've seen some pretty impressive momentum as well. Just last week we became number one in all-flash arrays worldwide, catching EMC and Dell, and we plan to keep on going by them as we help customers modernize their data centers with cloud-connected flash. We have strategic partnerships with the largest hyperscalers to provide cloud-native data services around the globe, and we are having success helping our customers build their own private clouds with a new disruptive hyper-converged technology that allows them to operate just like hyperscalers. These three initiatives have fueled NetApp's transformation and have enabled our customers to change the world with data. And oh, by the way, they have also fueled us to meet or beat Wall Street's expectations for nine quarters in a row. These are two companies with tremendous market momentum. We are also building this partnership for long-term success. We think about this as phase one, and there are two important components to phase one. Kirk took you through them, but let me just review them. Part one, the establishment of a multi-year commitment and a collaboration agreement to offer Lenovo-branded flash products globally, and, as Kirk said, in 160 countries. Part two, the formation of a joint venture in the PRC, the People's Republic of China, that will provide long-term commitment, joint product development, and increased go-to-market investment to meet the unique needs of China. Both companies will put in storage technologies and storage expertise to form an independent JV that establishes a data management company in China for China.
And while we can dream about what phase two looks like, our entire focus is on making phase one incredibly successful, and I'm pleased to repeat what Kirk said: the first products are orderable and shippable this week in 160 different countries, and you will see our two companies focusing on the here and now. In our joint go-to-market strategy, you'll see us working together to drive strategic alignment, focused execution, strong governance, and realistic expectations and milestones. And the success of our customers and our channel partners is job one: enabling customers to modernize their legacy IT with complete data center solutions, ensuring that our customers get the best from both companies, new offerings that fuel business success, efficiencies to reinvest in game-changing initiatives, and new solutions for new mission-critical applications like data analytics, IoT, artificial intelligence, and machine learning. Channel partners are also top of mind for both our companies. We are committed to the success of our existing and our future channel partners. For NetApp channel partners, it is new pathways to new segments and to new customers. For Lenovo's channel partners, it is the competitive weapon that now allows you to compete and, more importantly, win against Dell, EMC, and HP. And the good news for both companies is that our channel partner ecosystems are highly complementary with minimal overlap. Today is the first day of a very exciting partnership, a partnership that will better serve our customers today and will provide new opportunities to both our companies and to our partners, and new products to our customers globally and in China. I am personally very excited. I will be on the board of the JV. And so, I look forward to working with you, partnering with you, and serving you as we go forward, and with that, I'd like to invite Kirk back up. (audience applauding) >> Thank you. >> Thank you. >> Well, thank you, Brad.
I think it's an exciting overview, and these products will be manufactured in China, in Mexico, in Hungary, and around the world, enabling this amazing supply chain we talked about to deliver in over 160 countries. So thank you Brad, and thank you George, for the amazing partnership. So again, that's not all. In Transform 2.0, last year, we talked about the joint ventures that were coming. I want to give you a sneak peek at what you should expect at future Lenovo events around the world. We have this Transform in Beijing in a couple weeks. We'll then be repeating this in roughly 20 different locations around the world over the next year, and I'm excited probably more than ever about what else is coming. Let's talk about telco, 5G, and network function virtualization. Today, Motorola phones are certified on 46 global networks. We launched the world's first 5G-upgradable phone here in the United States with Verizon. Lenovo DCG sells to 58 telecommunication providers around the world. At Mobile World Congress in Barcelona and Shanghai, you saw China Telecom and China Mobile in the Lenovo booth: China Telecom showing a virtual broadband remote access server, a vBRAS, with video streaming demonstrations with 2x less jitter than they had seen before, and China Mobile with a virtual radio access network, a vRAN, with greater than 10 times the throughput and 10x lower latency running on Lenovo. And this year, we'll be launching a new NFV company, a software company in China for China, to drive the entire NFV stack, delivering not just hardware solutions but software solutions, and we've recently hired a new CEO. You're going to hear more about that over the next several quarters. Very exciting as we try to drive new economics into the networks to deliver these 20 billion devices; we're going to need new economics that I think Lenovo can uniquely deliver. The second area is IoT and edge, where we've integrated on the device side into our Intelligent Devices Group.
With everything that consumes electricity going to compute and communicate, Lenovo is in a unique position on the device side to take advantage of the communications heritage from Motorola and being one of the largest device companies in the world. But this year, we're also going to roll out a comprehensive set of edge gateways and ruggedized industrial servers and edge servers and ISP appliances for the edge and for IoT. So look for that as well. And then lastly, as a service: you're going to see Lenovo delivering hardware as a service, device as a service, infrastructure as a service, and software as a service, not just as a glorified leasing contract, but with IP. We've developed true flexible metering capability that enables you to scale up and scale down freely and pay strictly based on usage, and we'll be having those announcements within this fiscal year. So Transform 2.0, lots to talk about, NetApp the big news of the day, but a lot more to come over the next year from the Data Center Group. So in summary, I'm excited that we have a lot of customers that are going to be on stage with us that you saw in the video. Lots of testimonials, so that you can talk to colleagues of yours: Alamos Gold from Canada, a Canadian gold producer; Caligo for data optimization and privacy; SciNet, the largest supercomputer we've ever put into North America, and the largest in Canada, at the University of Toronto, will be here talking about climate change; the city of Bogotá again, with our hyper-converged solutions around smart city, putting in 3,000 cameras for criminal detection, license plate detection, et cetera; and then, more from a channel mid-market perspective, Jerry's Foods, which is from my home states of Wisconsin and Minnesota and has about 57 stores in the specialty foods market, and how they're leveraging our IoT solutions as well. So again, about five times the number of demos that we had last year.
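The flexible metering mentioned above (scale up and down freely, pay strictly based on usage) amounts to integrating consumption over time rather than billing for peak installed capacity. The meter readings and rate below are invented for illustration and are not Lenovo's actual pricing model.

```python
# Hypothetical usage-based metering: the customer is billed for
# node-hours actually consumed, not for peak installed capacity.
def monthly_bill(samples, rate_per_node_hour):
    """samples: list of (hours, active_nodes) meter readings."""
    node_hours = sum(hours * nodes for hours, nodes in samples)
    return node_hours * rate_per_node_hour

# Run 10 nodes normally, burst to 40 nodes for ~2 days, scale back down.
usage = [(200, 10), (50, 40), (470, 10)]  # 8,700 node-hours total
print(monthly_bill(usage, rate_per_node_hour=0.125))  # 1087.5
```

Compare this with a fixed lease of 40 nodes for the same 720 hours (28,800 node-hours): the metered bill charges for less than a third of that capacity, which is the economic argument for usage-based pricing.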
So in summary, first and foremost to the customers, thank you for your business. It's been a great journey and I think we're on a tremendous roll. You saw from last year, we're trying to build credibility with you. After the largest server portfolio, we're now the fastest growing server OEM per Gartner, number one in performance, number one in reliability, number one in customer satisfaction, number one in supercomputing. Today, the largest storage portfolio in our history, with the goal of becoming the fastest growing storage company in the world, top three in China, and a multibillion-dollar collaboration with NetApp. And the transformation is going to continue with new edge gateways, edge servers, NFV solutions, telecommunications infrastructure, and hardware as a service with dynamic metering. So thank you for your time. I look forward to meeting many of you over the next day. We appreciate your business, and with that, I'd like to bring up Rod Lappen to introduce our next speaker. Rod? (audience applauding) >> Thanks, boss, well done. Alright ladies and gentlemen. No real secret there. I think we've heard why I might talk about the fourth Industrial Revolution and data and exactly what's going on with that. You've heard Kirk with some amazing announcements, obviously now with our NetApp partnership, talk about 5G, NFV, cloud, artificial intelligence. I think we've hit just about all the key hot topics. It's with great pleasure that I now bring up on stage Mr. Christian Teismann, our senior vice president and general manager of commercial business for both our PCs and our IoT business. So, Christian Teismann. (techno music) Here, take that. >> Thank you. I think I'll need that. >> Okay, Christian, so obviously just before we get down to it, you and I last year had a bit of a chat about being in New York. >> Expat. >> You were an expat in New York for a long time. >> That's true. >> And now, you've moved from New York. You're in Munich? >> Yep.
>> How does that feel? >> Well, Munich is a wonderful city, and it's a great place to live and raise kids, but you know there's no place in the world like New York. >> Right. >> And I miss it a lot, quite frankly. >> So what exactly do you miss in New York? >> Well, there are a lot of things in New York that are unique, but I know you spent some time in Japan, and I still believe the best sushi in the world is in New York City. (all laughing) >> I will beg to differ. I will beg to differ. I think Mr. Guchi-san from Softbank is here somewhere. He will get up and argue very quickly that Japan definitely has better sushi than New York. But obviously you know, it's a very, very special place, and I have had sushi here, it's been fantastic. What about Munich? Anything else that you like in Munich? >> Well, I mean, in Munich we have pork knuckles. >> Pork knuckles. (Christian laughing) Very similar to sushi. >> What is also fantastic is that we have the real, the real Oktoberfest in Munich, and it starts next week, mid-September, and I think it's unique in the world. So it's very special as well. >> Oktoberfest. >> Yes. >> Unfortunately, I'm not going this year, 'cause you didn't invite me, but-- (audience chuckling) How about, I think you've got a bit of a secret in relation to Oktoberfest, probably not in Munich, however. >> It's a secret, yes, but-- >> Are you going to share? >> Well, I mean-- >> See how I'm putting you on the spot? >> In the 10 years while living here in New York, I was a regular visitor of the Oktoberfest on the Lower East Side on Avenue C at Zum Schneider, where I actually met my wife, and she's German. >> Very good. So, how about a big round of applause? (audience applauding) Not so much for Christian, but more, I think, for his wife, who obviously had been drinking and consequently ended up with you. (all laughing) See you later, mate. >> That's the beauty about Oktoberfest, but yes.
So first of all, good morning to everybody, and great to be back here in New York for a second Transform event. New York clearly is the melting pot of the world in terms of culture and nations, but also business professionals from all kinds of different industries, and having this event here in New York City, I believe, manifests what we are trying to do here at Lenovo: transform every aspect of our business and help our customers on the journey of intelligent transformation. Last year, in our transformation of the device business, I talked about how the PC is transforming to personalized computing, and we've made a lot of progress in that journey over the last 12 months. One major change that we have made is that we combined all our device businesses under one roof. So basically PCs, smart devices, and smartphones are now under one roof, under the Intelligent Device Group. From my perspective this makes a lot of sense, because at the end of the day, all devices in the modern world connect into the cloud and operate in a seamless way. But we are also moving from a device business, which historically has mainly been a hardware focus, more and more into a solutions business, and I will give you during my speech a little bit of a sense of what we are trying to do as we bring all these components closer together, and specifically, with our strengths on the data center side, really build end-to-end customer solutions. Ultimately, what we want to do is make our business and our customers' businesses faster, safer, and ultimately smarter as well. So I want to look back a little bit, because I really believe it's important to understand what's going on today on the device side. Many of us have grown up with phones, with terminals, ultimately getting our first desktop, our first laptop, our first mobile phone, and ultimately a smartphone.
Email and the internet improved the speed at which we could operate together, but we were still defined by linear technology advances. Today, the world has changed completely. Technology itself is not a limiting factor anymore. It is how we use technology going forward. The internet is pervasive, and while we are not yet at the point where we are always connected, we are nearly always connected, and we are moving to the stage where everything is getting connected all the time. Sharing experiences is the biggest driving force in our behavior. In our private lives, we share pictures and videos constantly, in real time, around the world, with our friends and with our family, and you see the same behavior actually happening in business life as well. Collaboration is the number-one topic when it comes to the workplace, and video and instant messaging, things that are coming from the consumer side, are dominating the way we operate in the commercial business as well. Most important, beside technology, is that a new generation of workforce has completely changed the way we are working. The first generation of Millennials has now fully entered the global workforce, and the next generation, called Generation Z, is already starting to enter the global workforce. By 2025, 75 percent of the world's workforce will be composed of these two generations. Why is this so important? These two generations have grown up using state-of-the-art IT technology in their private lives, during their education, school, and study, and are taking these learnings and these behaviors into the commercial workspace. And this is the number one force of change that we are seeing at the moment. Diverse workforces are driving this change in the IT spectrum, and for years, many of our customers' focus was on their own customers.
Customer experience at Lenovo is also the most important thing, but we've realized that our own human capital is equally valuable in our customer relationships, and employee experience is becoming a very important thing for many of our customers, and equally for Lenovo as well. As we heard from YY, Lenovo is focused on intelligent transformation. What that means for us in the intelligent device business is ultimately starting with putting intelligence in all of our devices, smartifying every single one of our devices, adding value to our customers, traditionally IT departments, but also focusing on their end users and building products that make their end users more productive. And as a world leader in commercial devices with more than 33 percent market share, we can solve problems even better than any other company in the world. So, let's talk about transformation of productivity first. We are in a device-led world. Everything we do is connected. There's more interaction with devices than ever, but also with spaces, which are increasingly becoming smart and intelligent. YY said it: by 2020 we will have more than 20 billion connected devices in the world, and it will grow exponentially from there. And users have unique personal choices for technology, and that's very important to recognize. We call this concept a digital wardrobe. It means that every single end user in the commercial business is composing his personal wardrobe on an ongoing basis and reconfiguring it based on the work he's doing, where he's going, and what task he is doing. I would ask all of you to take out all the devices you're carrying in your pockets and in your bags. You will see a lot of you are using phones, tablets, laptops, but also cameras and even smartwatches. They're all different, but they have one underlying technology that brings it all together.
Recognizing digital wardrobe dynamics is a core factor for us in putting all the devices under one roof in IDG, one business group that is dedicated to end-user solutions across mobile, PC, but also software, services, and imaging, to emerging technologies like AR, VR, IoT, and ultimately AI as well. A couple of years back there was a big debate around bring-your-own-device, what was called consumerization. Today consumerization does not exist anymore, because consumerization has made its way into every single device we build in our commercial business. End users and commercial customers today do expect superior display performance, superior audio, microphone, voice, and touch quality, and have it all connected and working seamlessly together with ease of use. We are already deep in the journey of personalized computing today. But the center point of it for the last 25 years has been the mobile PC, which we have perfected over those 25 years, and which has been the undisputed leader in mobile computing. We believe that in the commercial business, the ThinkPad is still the core device of a digital wardrobe, and we continue to drive the success of the ThinkPad in the marketplace. We've sold more than 140 million over the last 26 years, and last year alone we shipped nearly 11 million units. That is about 21 ThinkPads per minute, or one ThinkPad every three seconds, that we are shipping out into the market. It's the number one commercial PC in the world. It has gotten countless awards, but we felt last year after Transform that we needed to go a step further in really tailoring the ThinkPad toward the needs of the future. So, we announced a new line of X1 Carbon and Yoga at CES, the Consumer Electronics Show. And the reason is not that we want to sell to consumers, but that we recognize that a lot of CIOs and IT decision makers need to understand what consumers are really doing in terms of technology to make them successful. So, let's take a look at the video.
(suspenseful music) >> When you're the number one business laptop of all time, your only competition is yourself. (wall shattering) And, that's different. Different, like resisting heat, ice, dust, and spills. Different, like a sharper, brighter OLED display. The TrackPoint that reinvented controls, and a carbon fiber roll cage to protect what's inside, built by an engineering and design team doing the impossible for the last 25 years. This is the number one business laptop of all time, but it's not a laptop. It's a ThinkPad. (audience applauding) >> Thank you very much. And we are very proud that the Lenovo ThinkPad has been selected as the best laptop in the world for the second year in a row. I think it's a wonderful tribute to what our engineers have done on this one. And users do want awesome displays. They want the best possible audio, voice, and touch control, but some users want more. What they want is superpower, and I'm really proud to announce the newest member of the X1 family, and that's the X1 Extreme. It's exceptionally featured. It has a six-core Intel Core i9 chipset, the highest performance you get in the commercial space. It has Nvidia GTX graphics, a 4K UHD display with HDR, with Dolby Vision and Dolby Atmos audio, and two terabytes of SSD, so it is really the absolute Ferrari in terms of building a high-performance commercial computer. Of course it has touch and voice, but there is one thing: it has so much performance that it also serves a purpose that is not typical for commercial, and I know there are a lot of secret gamers also here in this room. So you see, by really bringing technology together in the commercial space, you're creating productivity solutions that are one of a kind. But there's another category of products from a productivity perspective that is incredibly important in our commercial business, and that is the workstation business.
Clearly workstations are very specifically designed computers for very advanced, high-performance workloads, serving designers, architects, researchers, developers, or data analysts. And power and performance is not just about the performance itself. It has to be tailored toward the specific use case, and traditionally these products have had a similar size to a server. They run on Intel Xeon technology, and they are equally complex to manufacture. We have now created a new category, the ultra-mobile workstation, and I'm very proud that we can announce here the lightest mobile workstation in the industry. It is so powerful that it really can run AI and big data analysis. And with this performance you can go really close to where you need this power: to the sensors, into the cars, or into the manufacturing places where you not only want to read the sensors but get real-time analytics out of them. To build a machine like this one you need customers who really challenge you to the limit, and we're very happy that we had a customer who went on this journey with us and ultimately, jointly with us, created this product. So, let's take a look at the video. (suspenseful music) >> My world involves pathfinding both the hardware needs of the various work sites throughout the company, and then finding an appropriate model of desktop, laptop, or workstation to match those needs. My first impression when I first saw the ThinkPad P1 was that I didn't actually believe we could get everything I was asked for inside something as small and light in comparison to other mobile workstations. That was one of the "I can't believe this is real" sort of moments for me. (engine roars) >> Well, it's better in general when you're going around in the wind tunnel, which isn't always easy, and going on a track is not necessarily the best bet, so having a lightweight, very powerful laptop is extremely useful.
It can take a Xeon processor, which can support ECC, for when we try to load a full car and when we're analyzing live simulation results through our CFD post-processor, for example. It needs a pretty powerful machine. >> It's come a long way to be able to deliver this. I hate to use the word game changer, but it is that for us. >> Aston Martin has got a lot of different projects going. There are some pretty exciting projects and a pretty versatile range coming out. Having Lenovo as a partner is certainly going to ensure that future. (engine roars) (audience applauds) >> So, don't you think the Aston Martin design and the ThinkPad design fit very well together? (audience laughs) So if Q would get a new laptop, I think he would get a ThinkPad P1. So, I want to switch gears a little bit and go into something in terms of productivity that is not necessarily top of mind for every end user, but I believe it's top of mind for every C-level executive and every CEO. Security is the number one threat in terms of potential risk in your business, and the cost of cybercrime is estimated to reach around six trillion dollars by 2020. That's more than the GDP of Japan, and we've seen a significant number of data breach incidents already this year. They're threatening to take companies out of business, and they're threatening companies with the loss of huge amounts of sensitive customer data or internal data. At Lenovo, we take security very, very seriously, and we have run a very deep analysis around our own security capabilities in the products that we are building. And we are announcing today a new brand under the Think umbrella that is called ThinkShield. Our goal is to build the world's most secure PC, and ultimately the most secure devices in the industry. And when we looked at this end-to-end, there is no silver bullet around security. You have to go through every aspect where security breaches can potentially happen.
That is why we have changed the whole organization, how we look at security in our device business, and really have it grouped under one complete ecosystem of solutions. Security is always something where you are constantly challenged by the next potential breach, the next potential technology flaw, as we keep innovating and as we keep integrating a lot of our partners' software and hardware components into our products. So for us, it's really very important that we partner with companies like Intel, Microsoft, Coronet, Absolute, and many others to, as an example, drive full encryption on all data seamlessly, to have multi-factor authentication to protect your users' identity, to protect you in unsecured Wi-Fi locations, or even simple things like innovation on the device itself, to, as an example, protect the camera against unwanted usage with a little thing like the ThinkShutter that lets you shut off the camera. So what I want to show you here is the full portfolio of ThinkShield that we are announcing today. This is clearly not something I can even read to you today, but I believe it shows you the breadth of security management that we are announcing today. There are four key pillars in managing security end-to-end. The first one is your data, and this has a lot of aspects around the hardware and the software itself. The second is identity. The third is online security, and ultimately the fourth is the device itself. So, there is a breakout on security and ThinkShield today, available in the afternoon, and I encourage you to really take a deeper look at this one. The first pillar of productivity was the device and what's around the device. The second major pillar that we are seeing in terms of intelligent transformation is the workspace itself. Employees of a new generation have very different habits in how they work.
They split their time between travel and working remotely, but if they do come into the office, they expect a very different office environment than what they've seen in the past in cubicles or small offices. They come into the office to collaborate, they want to create ideas, they really work in cross-functional teams, and they want to do it instantly. And what we've seen is there is a huge amount of investment that companies are making today in reconfiguring real estate and reconfiguring offices. And most of these kinds of things are moving to a digital platform. What we want to do is build an entire set of solutions that are focused on making the workspace more productive for a remote workforce, and to create technology that allows people to work anywhere and connect instantly. And the core of this is that we need to keep the productivity of the employee as high as possible, and make it as easy as possible for them to use these kinds of technologies. Last year at Transform, I announced that we would enter the smart office space. By the end of last year, we brought the first product into the market. It's called the Hub 500. It's already deployed in thousands of our customers, it's uniquely focused on Microsoft Skype for Business, and it makes meetings happen instantly. And the product is very successful in the market. What we are announcing today is the next generation of this product, the Hub 700, which has fantastic audio quality. It has far-field microphones, and it is usable in small office environments as well as in major conference rooms. But the most important part of this new announcement is that we are also announcing a software platform, and this software platform allows you to run multiple video conferencing software solutions on the same platform.
Many of you may have standardized on one software solution or another, but as you move into a world of collaborating instantly with partners, customers, and suppliers, you will always face multiple software standards in your company, and Lenovo is uniquely positioned by providing a middleware platform for the device to really enable multiple of these UX interfaces. And there's more to come: we will add additional UX interfaces on an ongoing basis, based on our customer requirements. But this software does not only help to create a better experience and higher productivity in the conference room or the huddle room itself. It really will allow you ultimately to manage all the conference rooms in your company in one instance. And you can run AI technologies on how to increase the productivity and utilization of your entire conference room ecosystem in your company. You will see a lot more devices coming from Lenovo in this space, around intelligent screens, cameras, and so on. The idea is really that Lenovo will become a core provider in the whole movement into the smart office space. But it's great if you have hardware and software that really supports the approach of modern IT, and one component that Kirk also mentioned is absolutely critical: that we provide this to you in an as-a-service approach. Get what you want, when you need it, and pay for the amount that you're really using. And within IT there is also, I think, a new philosophy around IT management, where you're much more focused on the value that you are consuming instead of investing in technology. We launched PC as a service two years back and we already have a significant number of customers running PC as a service, but we believe as a service will stretch far beyond just the PC device. It will go into categories like smart office.
It might even go into categories like phones, and it will definitely also go into categories like storage and servers in terms of capacity management. I want to highlight three offerings that we are also displaying today that are sort of building blocks in terms of how we really run as a service. The first one is that we collaborated intensively over the last year with Microsoft to be the launch pilot for their Autopilot offering, basically deploying images easily, in the same way you would deploy a new phone on the network. The purpose really is to make imaging and enabling a new PC as seamless as it is in the phone industry, and we have a complete set of offerings, and already a significant number of customers have deployed Autopilot with Lenovo. The second major offering is Premier Support. Like in the server business, where Premier Support is absolutely critical to run critical infrastructure, we see a lot of our customers wanting Premier Support for their end users, so they can be back at work basically instantly, and so you have the fastest possible repair on every single device. And then finally, we have invested a significant amount of time into understanding how software as a service can really be brought under one philosophy. Many of you are already consuming software as a service in many different contracts from many different vendors, but what we've created is one platform that really can manage this all together. All these things are the foundation for a device-as-a-service offering that really can manage this end-to-end. So, implementing an intelligent workplace can be a really daunting prospect depending on where you're starting from and how big your company ultimately is. But how do you manage the transformation of the technology workspace if you're present in 50 or more countries and you run an infrastructure for more than 100,000 people?
Michelin, famous for their tires, infamous for their Michelin star restaurant rating, especially in New York, and instantly recognizable by the Michelin Man, is doing just that. Please welcome with me Damon McIntyre from Michelin to talk to us about the challenges in transforming collaboration and productivity. (audience applauding) (electronic dance music) Thank you, David. >> Thank you, thank you very much. >> We on? >> So, how do you feel here? >> Well good, I want to thank you first of all for your partnership and the devices you create that help us design, manufacture, and distribute the best tire in the world, okay? I just had to say it and put it out there, alright. And I was wondering, were those Michelin tires on that Aston Martin? >> I'm pretty sure there is no other tire that would fit that. >> Yeah, no, thank you, thank you again, and thank you for the introduction. >> So, when we talk about the transformation happening in the workplace, the most tangible transformation that you actually see is the drastic change that companies are making physically. They're breaking down walls. They're removing cubes, and they're moving to flexible layouts, new desks, new huddle rooms, open spaces, but the underlying technology for that is clearly not so visible very often. So, tell us about Michelin's strategy and the technology you are deploying to really enable this collaboration. >> So let me give a little bit of history about the company to understand the daunting task that we had before us. So we have over 114,000 people in the company and 170 nationalities, okay? If you go to the corporate office in France, it's in Clermont. It's about 3,000 executives and directors, and what have you in marketing, sales, all the way up the chain to the global CIO, right? Inside of the Americas, we merged the Americas about three years ago. Now we have the Americas zone.
There are about 28,000 employees across the Americas, so it's really hard in a lot of cases. You start looking at the different areas where you lose time, and you lose, you know, your productivity and what have you. So that's when we looked at different aspects of how we were going to manage the meeting rooms, right? Because we have opened up our areas of workspace, our CIOs and CEOs in our zones will no longer have an office. They'll sit out in front of everybody else and mingle with the crowd. So, how do you take those spaces that were originally used by an individual but now turn them into meeting rooms? So, we went through a large process and looked at the Hub 500, and that really met our needs, because at the end of the day what we noticed was, it just worked, okay? We've just added it to the catalog, so we're going to be deploying it very soon, and I just want to point out again, I know everybody struggles with this: if you look at all the minutes that you lose in starting up a meeting, and you know what I'm talking about when I say this, it equates to many, many dollars, okay? And so at the end of the day, this product helps us to be more efficient in starting up the meeting, and more productive during the meeting. >> Okay, very good to hear. Another major trend we are seeing in IT departments is taking a more hands-off approach to hardware. We're seeing new technologies enable IT to create a more efficient model for how IT gets hardware into the hands of end users, and how end users are ultimately supporting themselves. So what's your strategy around the lifecycle management of the devices? >> So yeah, as you mentioned, again, we'll go back to the 114,000 employees in the company, right? Imagine looking at all the devices we use.
I'm not going to get into the number of devices we have, but we have a set number that we use, and we have to go through a process of deploying these devices, for which we right now service our own image. We build our images, we service them through our help desk and all that process, and we go through it. Imagine deploying 25,000 PCs in a year, okay? The time and the daunting task that's behind all that, you can probably add up to 20 or 30 people just doing that full-time, okay? So, by partnering with Lenovo and their excellent technology and their technical teams, and putting together the whole process of how we do imaging, it now lifts that burden off of our folks, and it shifts it into a more automated process through the cloud, okay? And with Autopilot, at the end of the project we'll have Autopilot fully engaged, but what I really appreciate is how Lenovo really, really kind of got with us and partnered with us for the whole process. I mean it wasn't just a partnership between Michelin and Lenovo. Microsoft was also a partner during that whole process, and it really was a good project that we put together, and we hope to have something in full production mode next year for sure. >> So, David, thank you very, very much for being here with us on stage. What I really want to say is, customers like you, who are always challenging us on every single aspect of our capabilities, really do make the big difference for us in getting better every single day, and we really appreciate the partnership. >> Yeah, and I would like to say that I'm doing exactly what he just said. I am challenging Lenovo to show us how we can innovate in our workspace with your devices, right? That's a challenge, and it's going to be starting up next year for sure. We've done some in the past, but I'm really going to challenge you, and my whole aspect of how to do that is to bring you into our workspace.
Show you how we go through the process of making tires and how we distribute those tires, so you can brainstorm, come back to the table and say, here's a device that can do exactly what you're doing right now, better and more efficiently, and save money, so thank you. >> Thank you very much, David. (audience applauding) Well, it's sometimes really refreshing to get a very challenging customer's feedback. And you know, we will continue to grow this business together, and I'm very confident that your challenge will ultimately help to make our products even more seamless together. So, as we have now covered productivity, how we are really improving the devices themselves, and the transformation around the workplace, there is one pillar left I want to talk about, and that's really, how do we make businesses smarter than ever? What that really means is that we are on a journey of trying to understand our customers' businesses deeper than ever, understanding our customers' processes even better than ever, and trying to understand how we can help our customers become more competitive by injecting state-of-the-art technology into core processes in this intelligent transformation. But this cannot be done without talking about a fundamental, and that is the journey towards 5G. I really believe that 5G is changing everything about the way we operate devices today, because they will be connected in a way they never have been before. YY talked about, you know, 10 or 20 times the amount of performance. There are other studies that talk about even 200 times the performance in how you can use these devices. What it will lead to ultimately is that we will build devices that will be always connected to the cloud.
And we are preparing for this. Kirk already talked about how many operators in the world we are already present with through our Moto phones, and how many telcos we are already working with on the backend, and on the device side we are working on integrating 5G into basically every single one of our products in the future. One of the areas that will benefit hugely from always connected is the world of virtual reality and augmented reality. And I'm going to pick one example here, and that is that we have created a commercial VR solution for classrooms and education, basically using a consumer type of product, like our Mirage Solo with Daydream, and putting a solution around it that enables teachers and schools to use these products in the classroom experience. So, students now can have immersive learning. They can study sciences. They can look at environmental issues. They can explore their careers, or they can even take a tour of the next college they're going to go to after this one. And no matter what grade level, this is how people will continue to learn in the future. It's quite a departure from the old world of textbooks. Another area that we are looking at is IoT. As YY already elaborated, we are clearly learning from our own processes around how we improve our supply chain and manufacturing, and how we also improve the retail experience and warehousing, and we are working with some of the largest companies in the world on pilots, on deploying IoT solutions to make their processes and their businesses more competitive, and some of them you can see in the demo environment. Lenovo itself is already managing 55 million devices in an IoT fashion, connecting to our own cloud and constantly improving the experience by learning from the behavior of these devices in an IoT way, and we are collecting a significant amount of data to really improve the performance of these systems and our future generations of products on an ongoing basis.
We have a very strong partnership with a company called ADLINK from Taiwan, one of the leading manufacturers of manufacturing PCs and hardened devices, to create solutions on the IoT platform. The next area that we are very actively investing in is commercial augmented reality. I believe augmented reality has by far more opportunity in commercial than virtual reality, because it has the potential to ultimately improve every single business process of commercial customers. Imagine in the future how complex surgeries can be simplified by basically having real-time augmented reality information about the surgery, by having people connect into a virtual surgery and support the surgery from around the world. Visit a furniture store in the future and see how the furniture looks in your home instantly. Do some maintenance on devices yourself by just calling the company and getting an online manual on an augmented reality device. Lenovo is exploring all kinds of possibilities, and you will see a solution very soon from Lenovo. Earlier, when we talked about smart office, I talked about the importance of creating a software platform that really runs all these use cases for a smart office. We are creating a similar platform for augmented reality where companies can develop and run all their augmented reality use cases. So you will see that early in 2019 we will announce an augmented reality device, as well as an augmented reality platform. So, I know you're very interested in what exactly we are rolling out, so we will have a first prototype view available there. It's still a codename project on the horizon, and we will announce it ultimately in 2019, but I think it's good for you to take a look at what we are doing here. So, I just wanted to give you a peek at what we are working on beyond smart office and device productivity in terms of really how we make businesses smarter.
It's really about increasing productivity, providing you the most secure solutions, increasing workplace collaboration, and increasing IT efficiency, using new computing devices, software, and services to make business smarter in the future. There's no other company that will be able to offer what we do in commercial. No company has the breadth of commercial devices, software solutions, and the same data center capabilities, and no other company can do more for your intelligent transformation than Lenovo. Thank you very much. (audience applauding) >> Thanks mate, give me that. I need that. Alright, ladies and gentlemen, we are done. So firstly, I've got a couple of little housekeeping pieces at the end of this, and then we can go straight into experiencing some of the technology we've got on the left-hand side of the room here. So, I want to thank Christian, obviously. Christian, awesome as always, some great announcements there. I love the P1. I actually like the Aston Martin a little bit better, but I'll take either if you want to give me one for free. I'll take it. We heard from YY, obviously, about the industry and how the fourth Industrial Revolution is impacting us all from a digital transformation perspective, and obviously Kirk on DCG and the great NetApp announcement, which is going to be really exciting. Actually, Twitter and some of the social media panels are absolutely going crazy, so it's good to see that the industry is really seeing some impact. Some of the publications are really great, so thank you to the media who are obviously in the room publishing right now. But now, I really want to say it's all of your turn. So, all of you up the back there who are having coffee, it's your turn now. I want everyone who's sitting down here to move in there after this event, and really take advantage of the 15 breakouts that we've got set up there. There are four breakout sessions from a time perspective.
I want to try and get you all out there to use at least three of them, and use your fourth one to get out and actually experience some of the technology. So, you've got four breakout sessions. A lot of the breakout sessions are actually done twice. If you have not downloaded the app, please download the app so you can actually see what time things are going on and make sure you're registering correctly. There's a lot of great experiential stuff out there for you to go do. I've got one quick video to show you on some of the technology we've got, and then we're about to close. Alright, here we are acting crazy. Now, you can see, obviously, artificial intelligence and machine learning in the browser. God, I hate that dance, I'm not a Millennial at all. It's effectively going to be implemented in healthcare. I want you to come around and test that out. Look at these two guys. This looks like a Lenovo management meeting, to be honest with you. These two guys are actually concentrating, using their brain power to race each other in cars. You've got to come past and give that a try. Fantastic event here, lots of technology for you to experience, and great partners that have been involved as well. And so, from a Lenovo perspective, we've had some great alliance partners contribute, including obviously our number one partner, Intel, who's been a really big, loyal contributor to us, and been a real part of our success here at Transform. Excellent, so please, you've just seen a little bit of tech out there that you can go and play with. I really want you to go put on those black things, like Scott Hawkins, our chief marketing officer from Lenovo's DCG business, was doing, racing around this little car with his concentration, not using his hands. He said it's really good, actually, but as soon as someone comes up to speak to him, his car stops, so you've got to try and do better. You've got to try and prove whether you can multitask or not.
Get up there and concentrate and talk at the same time. 62 different breakouts up there. I'm not going to go into too much detail, but you can see we've got a very, very unusual numbering system, 18 to 18.8. I think over here we've got a 4849. There's a 4114. And then up here we've got a 46.1 and a 46.2. So, you need the decoder ring to be able to understand it. Get over there and have a lot of fun. Remember, the boat leaves today at 4:00, at the pier right behind us here. There are 400 of us registered. Go onto the app and let us know if there are more people coming. It's going to be a great event out there on the Hudson River. Ladies and gentlemen, that is the end of your keynote. I want to thank you all for being patient, and thank all of our speakers today. Have a great day, thank you very much. (audience applauding) (upbeat music) ♪ Ba da bop bop bop ♪
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Kirk | PERSON | 0.99+ |
Lenovo | ORGANIZATION | 0.99+ |
Brad | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
George Kurian | PERSON | 0.99+ |
Michelin | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Nike | ORGANIZATION | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Qualcomm | ORGANIZATION | 0.99+ |
Disney | ORGANIZATION | 0.99+ |
California | LOCATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
France | LOCATION | 0.99+ |
Japan | LOCATION | 0.99+ |
Canada | LOCATION | 0.99+ |
China | LOCATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Americas | LOCATION | 0.99+ |
Christian Teismann | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
Kirk Skaugen | PERSON | 0.99+ |
Malaysia | LOCATION | 0.99+ |
AMEX | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Rod Lappen | PERSON | 0.99+ |
University College London | ORGANIZATION | 0.99+ |
Brazil | LOCATION | 0.99+ |
Kurt | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Germany | LOCATION | 0.99+ |
17 | QUANTITY | 0.99+ |
2019 | DATE | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
India | LOCATION | 0.99+ |
seven | QUANTITY | 0.99+ |
Hudson River | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
10x | QUANTITY | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
Motorola | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
South Africa | LOCATION | 0.99+ |
Melvin Hillsman, OpenLab | OpenStack Summit 2018
>> (Narrator) Live from Vancouver, Canada, it's The Cube, covering OpenStack Summit North America 2018, brought to you by Red Hat, the OpenStack Foundation, and its ecosystem partners. >> Welcome back, I'm Stu Miniman with my co-host John Troyer, and you're watching The Cube, worldwide leader in tech coverage, and this is OpenStack Summit 2018 in Vancouver. Happy to welcome to the program first-time guest Melvin Hillsman, who's a governance board member of OpenLab, which we got to hear about in the keynote on Monday. Thanks so much for joining us. >> Thanks for having me. >> Alright, so Melvin, start us off with a little bit about your background and what brought you to the OpenStack community, and we'll go from there. >> Sure, yeah, so my background is in Linux system administration, and my getting involved in OpenStack was more or less seeing the writing on the wall as it relates to virtualization and wanting to get an early start in understanding how things would pan out over the course of some years. So I probably started with OpenStack maybe three or four years ago. I was probably later to the party than I wanted to be, but through that process I started working at Rackspace first, and that's how I really got more involved in OpenStack in particular. >> Yeah, you made a comment, though. The writing on the wall for virtualization. Explain that for a sec. >> So for me, I was at a shared hosting company and we weren't virtualizin' anything. We were using traditional servers, dedicated servers, installing hundreds of customers on those servers.
And so, at one point, what we started doing was we would take a dedicated server, we would create a virtual machine on it, but we would use most of the resources of that dedicated server, and what that allowed the shared hosting to do was tear stuff down and recreate it. But it was a very manual process, and since, of course, for infrastructure as a service and the orchestration around that, OpenStack was becoming the de facto standard way of doing it, I didn't want to try to learn it manually or fix something up internally. I wanted to go where OpenStack was being heavily developed and where people were working on it in their day to day jobs, which is why I went to Rackspace. >> Okay, one of the things we look at, this is a community here, so it takes people from lots of different backgrounds, and some of them do it in their spare time, some of them are paid by larger companies to participate, so tell us about OpenLab itself, and how your company participates there. >> Sure, so, well, I'm at Huawei now, but I was at Rackspace, and that's kind of how I got more involved in the community, and there I started working on testing things above the OpenStack ecosystem, so things that people want to build on top of OpenStack, and during that process Huawei reached out to me and was like, hey, you know, you're doing a great job here, and I was like, yeah, I would love to come and explore more of how we can increase this activity in the community at large. And so OpenLab was essentially born out of that. The OpenStack community, they deliver the OpenStack API's, and they kind of stop there, you know.
Everything above that, you do on your own, more or less. And so also, as chair of the user committee, again, just being more concerned about the people who are using stuff, OpenLab was available to facilitate me having access to hardware and access to people who are using things outside OpenStack in use cases, et cetera, where we want to test out more integrated tools working with OpenStack and different versions of OpenStack. And so that's essentially what OpenLab is-- >> So in OpenLab, projects come together and, basically, it's an interop; boy, in the networking world, they've had the Interop plugfest for a long time, but, in essence, projects come together, you invite them in, and you integrate and start to test them. Starting with, I mean, I see, for this release, Terraform and Kubernetes. >> Yeah, so a lot of people want to use Kubernetes, right? And as an OpenStack operator you essentially don't really want to go and learn all the bits of Kubernetes, necessarily, but you want to use Kubernetes, you want it to work seamlessly with OpenStack, and you want to use the API's that you're used to using with OpenStack. And so we worked very heavily on the external cloud provider for OpenStack, enabling Cinder v3 for containers that you're spinning up in Kubernetes, so that they have seamless integration: you don't have to try to attach your volumes, they are automatically attached. You don't have to figure out what your load balancing is going to look like. You use Octavia, which is the load balancing service for OpenStack, very tightly integrated, and things, you know, as you spin things up, work as you would expect, and so then all the other legacy applications and all the things you're used to doing with OpenStack, you bring onto Kubernetes, and you essentially do things the way you've been doing them before, with just an additional layer.
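The integration pattern described here, Cinder volumes attached automatically and Octavia sitting behind the standard Kubernetes Service API, can be sketched with ordinary Kubernetes manifests. This is an illustrative sketch, not something shown in the interview; the storage class name and labels are assumptions that depend on how cloud-provider-openstack and the Cinder CSI driver are configured in a given cluster:

```yaml
# PersistentVolumeClaim: the Cinder CSI driver provisions and attaches
# the volume; no manual volume create/attach step is needed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-cinder-sc   # assumed name; cluster-specific
  resources:
    requests:
      storage: 10Gi
---
# Service of type LoadBalancer: with the OpenStack external cloud
# provider in place, this is backed by an Octavia load balancer
# rather than one built by hand.
apiVersion: v1
kind: Service
metadata:
  name: app-lb
spec:
  type: LoadBalancer
  selector:
    app: demo            # assumed label on the application pods
  ports:
    - port: 80
      targetPort: 8080
```

With the provider installed, applying these manifests is the whole workflow: the volume attaches itself and the load balancer materializes, which is the "additional layer" working the way you already work.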
>> Yeah, now I wonder if you can talk a little bit about the providers and the users, you know, how do they get engaged, and give us a little flavor around those. >> Yeah, so to get engaged you go to OpenLabtesting.org, and there's two options. One is you can test out your applications and tools by clicking get started; you fill that out. And what's great about OpenLab is that we actually reach out and we talk with you, we consult with you, per se, because we have a lot of variation in hardware that's available to us, and so we want to figure out the right hardware that you need in order to do the tests that you want, so that we can get the output as it relates to that integration, which will, of course, educate and inform the community at large on whether or not it's working and has been validated. And, again, as a person who wants to support OpenLab, or for a provider, for example, who wants to support OpenLab, you click on the support OpenLab link, you fill out a form, and you tell us, you know: do you want to provide more infrastructure, do you want to talk with us about how clouds are being architected, how integrations are being architected, things that you're seeing in the open source use cases that may not be getting the testing that they need, and are you willing to work with engineers from other companies around that? So, individual testers, and then companies who may bring a number of testers together around a particular use case. >> Now, you're starting to publish some of the results of interop testing and things like that. How does OpenLab produce its results? Is it eventually going to be producing white papers and things like that, or dashboards, or what's your vision there? >> Yeah, so we produce a very archaic dashboard right now, but we're working with the CNCF; if you go to CNCF.CI, they have a very nice dashboard that kind of shows you a number of projects and whether or not they work together.
And so it's open source, so what we want to do is work with that team to figure out how we change the logos and the git repos to drive those red and green, success or failure, icons that are there, so they're relevant to the tests that we're doing in OpenLab. So yeah, we definitely want to have a dashboard where it's very easy to decipher which tests are failing or passing. >> Looking forward, what kinds of projects are you most interested in getting involved with? >> Right now, very much Kubernetes, of course. We're really focusing on multi architecture, again, as a result of our work with Kubernetes, and driving full conformance and multi architecture. That's kind of the wheelhouse at this time. We're open for folks to give us a lot of different use cases; like, we were starting to look at some edge stuff, how can we participate there, and we're starting to look at FPGA's and GPU's, so a lot of different areas where we don't have a full integration just yet, but we are having those conversations. >> So, actually, I spent a bunch of years, when I worked on the vendor side, living in an interop lab, and the most valuable things were not figuring out what worked, but what broke. So, as you're working through this, what learnings do you share back with the community, both the providers and users? Big stumbling blocks that you can help people with, give a red flag for, or say, you know, avoid these types of things? >> Yeah, exactly what you just said. You know, what's good is some of our stuff is geographically dispersed, so we can start to talk about, what does the latency look like?
You may, within those few square miles that you're operating and doing things in, find that it works great, but when I'm sending something across the water, is your product still moving quickly, or is the latency so bad that I can't create a container over here because it takes too long? So one example of looking at something fail as it relates to that is we're talking with the Octavia folks to see: if I spin up a lot of containers, am I going to therefore create a lot of load balancers, and if I create a lot of load balancers, am I creating a lot of VM's, or am I creating a lot of containers, or are things breaking apart? So we need to dig a little bit further to understand what is and is not working with the integrations we're currently working on. And then, again, we're exploring GPUs; GPUs just landed, more or less, that was a part of the keynote as well, and so now we're talking about, well, let's do some of that testing. The software, the code, is there, but is it usable? And so that's one area we want to start playing around with. >> Okay, one of the other things that got mentioned in the keynote was Zuul, the CI/CD tool. How's that fitting into OpenLab?
>> Yeah, we use Zuul as our gating, and what's great about Zuul is that you can interact with projects from different SCM's, so we have some projects that live in github, some that utilize Gerrit, some that utilize gitlab, and Zuul has applicability where it can talk across these different SCM's. And if you have a patch that depends on a patch in another project, so a patch on one project in one SCM can depend on a patch in another project in a different SCM, what's great about Zuul is that you can say, hey, I'm depending on that, so before this patch lands, check to make sure this stuff works over there. So if it succeeds there, and it's a dependency, then you basically confirm that it succeeds there, and then now I can run the test here, and it passes here as well, so you know that you can use both of those projects together, again, in an integration. Does it make sense? Hopefully I'm making it very clear, the power there with the cross-SCM integration. >> Yeah, Melvin, you've had a busy week here at the show. Any, you know, interesting things you learned this week, or something that you heard from a customer that you thought, oh boy, we've got to, you know, get this into our lab or a road map, or, you know-- >> The ARM story, the multi architecture, I feel like that's really taking off. We've had discussions with quite a few folks around that, so yeah, that for me, that's the next thing that I think we're really going to concentrate a little bit harder on, again, figuring out if there are some problems, because mostly it's been just x86, but we need to start exploring what's breaking as we add more to multi architecture. >> Melvin, no shortage of new things to test and play with, and every customer always brings some unique spins on things, so appreciate you giving us the update on OpenLab, thanks so much for joining us. >> You're welcome. Thanks for having me.
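The cross-SCM dependency mechanism described in the interview is Zuul's Depends-On footer: a commit message references a change in another project, possibly in a different code review system, and Zuul tests the two together before either lands. A hypothetical example follows; the project names, review host, and change number are made up for illustration:

```
Add Cinder v3 volume checks to the OpenLab Kubernetes job

Exercises automatic volume attach through the external
cloud provider.

Depends-On: https://review.example.org/c/cloud-provider-openstack/+/12345
```

When Zuul sees this footer, it speculatively includes the referenced change in the test environment for this patch, so the "check that it succeeds over there, then run the tests here" flow happens in one gate rather than two uncoordinated merges.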
>> From John Troyer, I'm Stu Miniman, thanks so much for watching The Cube. (electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Troyer | PERSON | 0.99+ |
Melvin Hillsman | PERSON | 0.99+ |
Melvin | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
Monday | DATE | 0.99+ |
Vancouver | LOCATION | 0.99+ |
Vancouver, Canada | LOCATION | 0.99+ |
OpenStack | TITLE | 0.99+ |
OpenStack Foundation | ORGANIZATION | 0.99+ |
Linux | TITLE | 0.99+ |
OverStack | TITLE | 0.99+ |
Zul | TITLE | 0.99+ |
two options | QUANTITY | 0.99+ |
gitlab | TITLE | 0.99+ |
OpenStack | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.98+ |
The Cube | TITLE | 0.98+ |
OpenStack Summit 2018 | EVENT | 0.98+ |
Garrote | TITLE | 0.98+ |
three | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
github | TITLE | 0.98+ |
OpenLabtesting.org | OTHER | 0.98+ |
SCM | TITLE | 0.97+ |
OpenLab | TITLE | 0.97+ |
hundreds of customers | QUANTITY | 0.97+ |
Kubernetes | TITLE | 0.97+ |
Rackspace | ORGANIZATION | 0.96+ |
OpenStack Summit North America 2018 | EVENT | 0.96+ |
CNCF | ORGANIZATION | 0.96+ |
this week | DATE | 0.95+ |
one project | QUANTITY | 0.95+ |
first-time | QUANTITY | 0.95+ |
Oakland Lab | ORGANIZATION | 0.94+ |
OpenLab | ORGANIZATION | 0.93+ |
one | QUANTITY | 0.92+ |
one example | QUANTITY | 0.9+ |
one point | QUANTITY | 0.89+ |
four so years ago | DATE | 0.85+ |
lot of containers | QUANTITY | 0.85+ |
open lab | TITLE | 0.84+ |
CIDT | TITLE | 0.81+ |
every customer | QUANTITY | 0.78+ |
miles | QUANTITY | 0.77+ |
CNCF.CI | ORGANIZATION | 0.73+ |
first | QUANTITY | 0.7+ |
FPGA | ORGANIZATION | 0.69+ |
Kubernetes | ORGANIZATION | 0.65+ |
Interop | TITLE | 0.62+ |
Cinder V3 | COMMERCIAL_ITEM | 0.62+ |
OpenLab | EVENT | 0.61+ |
Day One Afternoon Keynote | Red Hat Summit 2018
[Music] Ladies and gentlemen, please welcome Red Hat senior vice president of engineering, Matt Hicks. [Music] Welcome back. I hope you're enjoying your first day of Summit. You know, for us it is a lot of work throughout the year to get ready to get here, but I love the energy walking into Summit on that first opening day. Now, this morning we kicked off with Paul's keynote, and you saw this morning just how evolved every aspect of open hybrid cloud has become, based on an open source innovation model. That power and potential of open source is really what brought me to Red Hat. But at the end of the day, the real value comes when we're able to make customers like yourself successful with open source, and as much passion and pride as we put into the open source community, that requires more than just Red Hat. Given the complexity of your various businesses and the solution set you're building, that requires an entire technology ecosystem: from system integrators that can provide the skills and domain expertise, to software vendors that are going to provide the capabilities for your solutions, even to the public cloud providers, whether it's on the hosting side or consuming their services. You need an entire technological ecosystem to be able to support you and your goals, and that is exactly what we are gonna talk about this afternoon: the technology ecosystem we work with that's ready to help you on your journey. Now, you know, this year's Summit, as we talked about earlier, is about ideas worth exploring, and we want to make sure you have all of the expertise you need to make those ideas a reality. So with that, let's talk about the first partner we have here today, and that first partner is IBM. When I talk about IBM, I have a little bit of nostalgia, and that's because 16 years ago I was at IBM. It was during my tenure at IBM where I deployed my first copy of Red Hat Enterprise Linux for a customer. It's actually where I did my first professional Linux development
as well. That work on Linux really was the spark that showed me the potential that open source could have for enterprise customers. Now, IBM has always been a steadfast supporter of Linux and a great Red Hat partner; in fact, this year we are celebrating 20 years of partnership with IBM. But even after 20 years, two decades, I think we're working on some of the most innovative work that we ever have before. So please give a warm welcome to Arvind Krishna from IBM, to talk with us about what we are working on. Arvind. [Applause] Hey, my pleasure to be here, thank you. So, two decades, huh? You know, I think anything in this industry going for two decades is special. What would you say that link is that's made Red Hat and IBM so successful? Look, I've got to begin by first saying something that I've been waiting to say for years: what a long strange trip it's been, and for the San Francisco folks, they'll get the connection. You know, I was just thinking when you said 16, it is strange, because I probably met Red Hat 20 years ago, so that's a little bit longer than you, but that was out in Raleigh; it was a much smaller company. And when I think about the connection, I think, look, IBM's had a long, long investment in, and been a long fan of, open source, and when I think of Linux, Linux really lights up our hardware. I think of the Power box that you were showing this morning, as well as the mainframe, as well as all our other hardware; Linux really brings that to life, and I think that's been at the root of our relationship. Yeah, absolutely. Now, as I alluded to a little bit earlier, we're working on some new stuff, and this time it's a little bit higher in the software stack than we have before, so what would you say spearheaded that? Right, so think of software: many people know about it, but some people don't realize how many of the world's critical systems, you know, like reservation systems, ATM systems, retail banking, how many of those systems run on IBM software
and when I say IBM software, names such as WebSphere and MQ and Db2 all sort of come to mind as being some of that software stack. And really, when I combine that with some of what you were talking about this morning around hybrid, and this thing called containers, which you guys know a little about, combining the two, we think, is going to make magic. Yeah, and I certainly know containers, and I think, for myself, seeing the rise of containers from just the introduction of the technology to customers consuming it at mission-critical capacities, it's been probably one of the fastest technology cycles I've ever seen. Look, we completely agree with that. When you think back to what Paul talked about this morning on hybrid, and we think about it, we have made a firm commitment to containers: all of our software will run on containers, and all of our software runs on RHEL. You put those two together with this belief in hybrid, and containers give you that hybrid motion, so that you can pick where you want to run all the software, and that is really, I think, what has brought us together now even more than before. Yeah, and the best part, I think, is that we haven't just done the product and downstream alignment; we've been so tied in our technology approach that we've been aligned all the way to the upstream communities. Absolutely. Look, participating upstream, participating in these projects, really bringing all the innovation to bear. You know, when I hear all of you talk about it, you can't just be in a single company; you've got to tap into the world of innovation, and everybody should contribute. We firmly believe that, and helping to do that is kind of why we're here. Yeah, absolutely. Now, the best part: we're not just going to tell you about what we're doing together, we're actually going to show you. So, Arvind, once you tell the audience a little bit more about what we're doing, I will go get the demo team ready in the back. You good? Okay. So look, we're doing a lot here together. We're taking our software and we
are beginning to put it on top of Red Hat and OpenShift, and really that's what I'm here to talk about for a few minutes, and then we'll go show it to you live, and the demo gods should be with us, so it'll hopefully go well. So, when we look at extending our partnership, it's really based on three fundamental principles, and those principles are the following. One, it's a hybrid world: every enterprise wants the ability to span across public, private, and their own on-premise world, and we've got to go there. Number two, containers are strategic to both of us: the enterprise needs the agility, you need a way to easily port things from place to place to place, and containers are more than just wrapping something up; containers give you all of the security, the automation, the deployability, and we really firmly believe that. And innovation is the path forward: I mean, you've got to bring all the innovation to bear, whether it's around security, whether it's around all of the things we heard this morning around going across multiple infrastructures, right, the public or private. Those are three firm beliefs that both of us have together. So then, explicitly, what we'll be doing here: number one, all the IBM middleware is going to be certified on top of OpenShift and RHEL, and through Cloud Private from IBM. So that's number one: all the middleware is going to run in RHEL containers on OpenShift on RHEL, with all the Cloud Private automation and deployability in there. Number two, we are going to make it so that this is the complete stack: when you think about it, from hardware to hypervisor to OS to the container platform to all of the middleware, it's going to be certified up and down, all the way, so that you can get comfort that this is certified against all the cybersecurity attacks that come your way. Three, because we do the certification, that means the complete stack can be deployed wherever OpenShift runs, so that way you get complete flexibility, and you no longer have to worry about it. The development lifecycle
is extended all the way from inception to production, and the management plane then gives you all of the delivery and operation support needed to lower that cost. And lastly, professional services, through the IBM Garages as well as the Red Hat Innovation Labs. And I think that this combination really speaks to the power of both companies coming together, and both of us working together to give all of you that flexibility and those deployment capabilities. I can't help it: one architecture chart, and that's the only architecture chart, I promise you. So if you look at it, right from the bottom, this speaks to what I'm talking about. You begin at the bottom, and you have a choice of infrastructure: the IBM cloud as well as other infrastructure as a service, virtual machines, as well as IBM Power and IBM mainframe as the infrastructure choices underneath. So you choose what is best suited for the workload, with the container service, with the OpenShift platform, managing all of that environment, as well as giving you the orchestration that Kubernetes gives you, up to the platform services from IBM Cloud Private. So it contains the catalog of all middleware, both IBM's as well as open source; it contains all the deployment capability to go deploy that; and it contains all the operational management, so things like coming back up if things go down, or worrying about auto scaling: all those features that you want come to you from there, and that is why that combination is so, so powerful. But rather than just hear me talk about it, I'm also going to now bring up a couple of people to talk about it. And what are they going to show you? They're going to show you how you can deploy an application in this environment. You can think of that as either a cloud native application, but you can also think about it as how do you modernize an application using microservices. But you don't want to just keep your application always within its walls; you also many times want to access different cloud
services from this and how do you do that and I'm not going to tell you which ones they're going to come and tell you and how do you tackle the complexity of both hybrid data data that crosses both from the private world to the public world and as well as target the extra workloads that you want so that's kind of the sense of what you're going to see through through the demonstrations but with that I'm going to invite Chris and Michael to come up I'm not going to tell you which one's from IBM which runs from Red Hat hopefully you'll be able to make the right guess so with that Chris and Michael [Music] so so thank you Arvind hopefully people can guess which ones from Red Hat based on the shoes I you know it's some really exciting stuff that we just heard there what I believe that I'm I'm most excited about when I look out upon the audience and the opportunity for customers is with this announcement there are quite literally millions of applications now that can be modernized and made available on any cloud anywhere with the combination of IBM cloud private and OpenShift and I'm most thrilled to have mr. 
Michael Elder, a distinguished engineer from IBM, here with us today. And you know, Michael, would you maybe describe for the folks what we're actually going to go over today? Absolutely. So when you think about how do I carry forward existing applications, and how do I build new applications as well, you're creating microservices that always need a mixture of data and messaging and caching. So this example application shows Java-based microservices running on WebSphere Liberty, each of which is then leveraging things like IBM MQ for messaging, IBM Db2 for data, and Operational Decision Manager, all of which is fully containerized and running on top of the Red Hat OpenShift Container Platform. And in fact, we're even going to enhance Stock Trader to help it understand how you feel. Okay, hang on, I'm a little slow to the draw sometimes: you said we're going to have an application tell me how I feel? Exactly, exactly. Think about your enterprise apps: you want to improve customer service, and understanding how your clients feel can help you do that. Okay, well, I'd like to see that in action. All right, let's do it. Okay, so the first thing we'll do is actually take a look at the catalog. Here in the IBM Cloud Private catalog, this is all of the content that's available to deploy into this hybrid solution. We see workloads from IBM, we'll see workloads from other open source packages, etc. Each of these is packaged up as a Helm chart that deploys a set of images that will be certified for Red Hat Linux. And in this case we're going to go through and start with a simple example with Node.js. We'll click a few actions here, we'll give it a name. Now, do you have your console up over there? I certainly do. All right, perfect. So we'll deploy this into the namespace, and we'll deploy Node.js. Okay. All right, anything happening? Of course, it's come right up. And you know, what I really like about this is that regardless of whether I'm used to using IBM Cloud Private or I'm used to working with OpenShift, the experience fits the tool I'm used to dealing with on a daily basis. But I've got to tell you, we deploy Node.js ourselves all the time. What about... when was the last time you deployed MQ on OpenShift? Me? Maybe never. All right, let's fix that. So MQ obviously is a critical component for messaging for lots of highly transactional systems. Here we'll deploy it as a container on the platform. Now, I'm going to deploy this one again into the same namespace, I'm going to disable persistence, and for my application I'm going to need a queue manager, so I'm going to have it automatically set up my queue manager as well. Now, this will deploy a couple of things. What do you see? I see IBM MQ. All right, so there's your StatefulSet running MQ, and of course there are a couple of other components that get stood up as needed here, including things like credentials and secrets and the service, etc., but all of this is there out of the box. Okay, so impressive, right? But what I'm really looking at is maybe: how well is this running? What else does this partnership bring when I look at IBM Cloud Private? Well, that's a key reason why it's not just about IBM middleware running on OpenShift, but also IBM Cloud Private, because ultimately you need that common management plane. When you deploy a container, the next thing you have to worry about is: how do I get its logs, how do I manage its health, how do I manage license consumption, how do I have a common security plane? So Cloud Private is that enveloping wrapper around IBM middleware to provide those capabilities in a common way. And so here we'll switch over to our dashboard. This is our Grafana and Prometheus stack, deployed also now on Cloud Private running on OpenShift, and we're looking at a different namespace: we're looking at the stock trader namespace. We'll go back to this app here momentarily, and we can see all the different pieces. What if you switch over to the stock trader namespace in OpenShift? Yeah, I think we might be able to do that here. Hey, there it is. All right, and so what you're going to see here are all the different pieces of this app: there's Db2 over here, I see the portfolio Java microservice running on WebSphere Liberty, I see my Redis cache, I see MQ. All of these are the components we saw in the architecture picture a minute ago. Yeah, so this is really great. So maybe let's take a look at the actual application. I see we have a fine Stock Trader app here. Now, we mentioned understanding how I feel. Exactly. Well, I feel good that this is a brand-new Stock Trader app, versus the one from ten years ago that felt like we used it forever. So the key thing is, this app is actually all of those microservices, in addition to things like business rules, etc., to help it understand the loyalty program. So one of the things we can do here is actually enhance it with an AI service from Watson. This is Tone Analyzer; it helps me understand how that user actually feels, and we'll be able to go through and submit some feedback to understand that user. Okay, well, let's see if we can take a look at that. So I tried to click on it... clearly you're not very happy right now. Here, I'll do one quick thing over here. Go for it. We'll clear a cache for our sample app. So look, you guys don't actually know this: Michael and I just wrote this Node.js front end backstage while Arvind was actually talking with Matt, and we deployed it in real time using the continuous integration and continuous delivery that we have available with OpenShift. Well, the great thing is, it's a live demo, right? So we're going to do it all live, all the time. All right, so you mentioned it'll tell me how I'm feeling, right? So if we look... so right there, it looks like they're pretty angry, probably because our cache hadn't been cleared before we started the demo. Maybe. Well, that would make me angry, but I should be happy
because, I mean, I have a lot of money. Well, it's more than I get today, for sure. But you know, again, I don't want to remain angry. So does Watson actually understand Southern? I know it speaks like eighty different languages, but... Well, you know, I'm from South Carolina, so it'll understand South Carolina Southern; I don't know about your North Carolina Southern. All right, well, let's give it a go here. "Y'all done a real..." no profanity now, this is live... "I've done a real, real nice job on this here fancy demo." All right. Hey, all right, it likes me now. All right, cool. And the key thing is, just a quick note: it's showing you've got a free trade. So we can integrate those business rules and then decide: do I give you one free trade, or, if you're angry, give you more? It's all brought together on one platform, all running on OpenShift. Yeah, and I can see the possibilities: we've not only deployed services, but we're getting that feedback from our customers to understand how well the services are being used, and whether people are really happy with what they have. Hey, listen, Michael, this was amazing; I appreciate you joining us today. I hope you guys enjoyed this demo as well. So, all of you know who this next company is. As I look out through the crowd, based on what I can actually see with the sun shining down on me right now, I can see their influence everywhere. You know, sports is in our everyday lives, and these guys are equally innovative in that space as they are with hybrid cloud computing, and they use that to help maintain and spread their message throughout the world. Of course, I'm talking about Nike. I think you'll enjoy this next video about Nike and their brand, and then we're going to hear directly from Mike Wittig about what they're doing with Red Hat technology. "New developments in the top story of the day: the world has stopped turning on its axis. Top scientists are currently racing to come up with a solution. Everybody, going this way... the wrong way." [Music] Please welcome Nike Vice President of Infrastructure Engineering, Mike Wittig. [Music] Hi, everybody. Over the last five years at Nike, we have transformed our technology landscape to allow us to connect more directly to our consumers: through our retail stores, through Nike.com, and through our mobile apps. The first step in doing that was redesigning our global network to allow us to have direct connectivity into both Azure and AWS, in Europe, in Asia, and in the Americas. Having that proximity to those cloud providers allows us to make decisions about application workload placement based on our strategy, instead of having to design around latency concerns. Now, some of those workloads are very elastic, things like our sneakers app, for example, which needs to burst out during certain hours of the week. There are certain moments of the year when we have our high-heat product launches, and for those types of workloads we write that code ourselves, and we use native cloud services. But being hybrid has allowed us to not have to write everything that goes into that app, but rather just the parts that are the consumer-facing experience. And there are other back-end systems, certain core functionalities like order management, warehouse management, finance, ERP, and those are workloads, third-party applications, that we host on RHEL. Over the last 18 months we have started to deploy certain elements of those core applications into both Azure and AWS, hosted on RHEL. At first we were pretty cautious, so we started with development environments, and what we realized after those first successful deployments is that the impact of those cloud migrations on our operating model was very small. And that's because the tools that we use for monitoring, for security, for performance tuning didn't change, even though we moved those core applications into Azure and AWS, because of RHEL under the covers. Getting to the point where we have that flexibility is a real enabler; as an infrastructure team, it allows us to just be in the "yes" business. It really doesn't matter where we want to deploy a given workload, with either cloud provider or on-prem, anywhere on the planet: it allows us to move much more quickly and stay much more directly connected to our consumers. And so having RHEL at the core of our strategy is a huge enabler for that flexibility, and for allowing us to operate in this hybrid model. Thanks very much. [Applause] What a great example. It's really nice to hear a Nike story of using RHEL as that foundation to enable their hybrid cloud, to enable their infrastructure. And that's the story: we've spent over ten years making it possible for RHEL to be that foundation, and we've learned a lot in that. But let's circle back for a minute to the software vendors, and to what kicked off the day today with IBM. IBM has one of the largest software portfolios on the planet. But we learned through our journey on RHEL that you need thousands of vendors to be able to support you across all of your different industries, to solve any challenge that you might have, and you need those vendors aligned with your technology direction. This is doubly important when the technology direction is changing, as with containers. We saw that: two years ago, Red Hat introduced our container certification program. Now, this program was focused on allowing you to identify vendors that had those shared technology goals. But identification by itself wasn't enough in this fast-paced world, so last year we introduced trusted content: we introduced our Container Health Index, publicly grading the Red Hat images that form the foundation for those vendor images. And that was great, because those of you who are familiar with containers know that you're taking software from vendors, you're combining that with software from companies like Red Hat, and you are putting those into a single container. And for you to run those in a mission-critical capacity, you have to know that we can both stand by and support those deployments. But even trusted content
wasn't enough. So this year, I'm excited that we are extending once again, to introduce trusted operations. Now, last week at KubeCon, the Kubernetes conference, we announced the Kubernetes Operator SDK. The goal of Kubernetes Operators is to allow any software provider on Kubernetes to encode how that software should run. This is a critical part of a container ecosystem: not just being able to find the vendors that you want to work with, not just knowing that you can trust what's inside the container, but knowing that you can efficiently run that software. Now, the exciting part is that because this is so closely aligned with the upstream technology, today we already have four partners that have functioning Operators: specifically Couchbase, Dynatrace, Crunchy, and Black Duck. So right out of the gate, you have security, monitoring, and data store options available to you. These partners are really leading the charge in terms of what it means to run their software on OpenShift. But behind these four we have many more; in fact, this morning we announced over 60 partners that are committed to building Operators. They're taking their domain expertise, and the software that they wrote and that they know, and extending that into how you are going to run it on containers in environments like OpenShift. This really brings together the power of being able to find the vendors, being able to trust what's inside, and knowing that you can run their software as efficiently as anyone else on the planet. But instead of just telling you about this, we actually want to show it to you in action, so why don't we bring back up the demo team to give you a little tour of what's possible. Guys? Thanks, Matt. So Matt talked about the concept of Operators, and when I think about Operators and what they do, it's taking OpenShift-based services and making them even smarter, giving you insight into how they do things. For example, had we had an Operator for the Node.js service that I was running earlier, it would have detected the problem
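That detect-and-repair behavior is the heart of the Operator pattern: a control loop that watches the actual state of a service, compares it to the desired state, and reconciles any difference. As a rough illustration only (the real Operator SDK is written in Go and works against the Kubernetes API; the function, names, and data shapes below are invented for this sketch):

```python
# Illustrative sketch of an Operator-style reconcile loop.
# In a real Operator this runs on every watch event from the
# Kubernetes API server; here "pods" is just a list of dicts.

def reconcile(desired_replicas, observed_pods):
    """Compare observed state to desired state and return the
    corrective actions a human operator would otherwise perform."""
    actions = []
    healthy = [p for p in observed_pods if p["healthy"]]
    # Remove failed members (like the Couchbase node killed in the demo)
    for pod in observed_pods:
        if not pod["healthy"]:
            actions.append(("delete", pod["name"]))
    # Recreate members until we are back at the desired replica count
    for i in range(desired_replicas - len(healthy)):
        actions.append(("create", f"member-{i}"))
    return actions

# A three-member cluster where one member has just been killed:
pods = [{"name": "cb-0", "healthy": True},
        {"name": "cb-1", "healthy": False},
        {"name": "cb-2", "healthy": True}]
print(reconcile(3, pods))  # [('delete', 'cb-1'), ('create', 'member-0')]
```

The point made later in the demo, that rebalancing a stateful service's data matters as much as restarting its pod, is exactly the domain knowledge a vendor encodes inside this loop, which plain Kubernetes cannot know on its own.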
and fixed itself. But when we look at what Operators really do, from an ecosystem perspective, for ISVs it's going to be a catalyst that allows them to make their services as manageable, as flexible, and as maintainable as any public cloud service, no matter where OpenShift is running. And to help demonstrate this, I've got my buddy Rob here. Rob, are we ready on the demo front? We're ready. Awesome. Now, I notice this screen looks really familiar to me, but I think we want to give folks here a dev preview of a couple of things. What we want to show you is the first substantial integration of the CoreOS Tectonic technology with OpenShift, and then the other thing is we're going to dive a little bit more into Operators and their usefulness. So, Rob? Yeah, so what we're looking at here is the Service Catalog that you know and love in OpenShift, and we've got a few new things in here. We've actually integrated Operators into the Service Catalog, and I'm going to take this filter and give you a look at some of the ones we have today. So you can see we've got a list of Operators exposed, and this is the same way that your developers are already used to integrating with products: they're right in your catalog. And so now these are actually smarter services. But how can we maybe look at that? I mentioned that there's maybe a new view; I'm used to seeing this as a developer, but I hear we've got some really cool stuff if I'm the administrator of the console. Yeah, so we've got a whole new side of the console for cluster administrators, to get a look at the infrastructure, versus this dev-focused view that we're looking at today. So let's go take a look at it. The first thing you see here is that we've got a really rich set of monitoring and health status, so we can see that we've got some alerts firing, our control plane is up, and we can even do capacity planning: anything that you need to do to maintain your cluster. Okay, so it's not only for the services in the cluster, doing things that I maybe normally, as a human operator, would have to do; this console view also gives me insight into the infrastructure itself, right? Like maybe the nodes, and maybe handling the security context. Is that true? Yes. So these are new capabilities that we're bringing to OpenShift: the ability to do node management, things like draining and unscheduling nodes to do day-to-day maintenance, as well as having security constraints and things like role bindings, for example. And the exciting thing about this is that this is a view you've never been able to see before: it's cross-cutting across namespaces. So here we've got a number of admin bindings, and we can see that they're connected to a number of namespaces, and these would represent our engineering teams, all the groups that are using the cluster. We've never had this view before; this is a perfect way to audit your security. You know, it actually is pretty exciting. I mean, I've been fortunate enough to be on the OpenShift team since day one, and I know that operations view is something that we've strived for, so it's really exciting to see that we can offer that now. But really, we want to get into what Operators do and what they can do for us, so maybe you can show us what the Operator console looks like. Yeah, so let's jump on over and see all the Operators that we have installed on the cluster. You can see that these mirror what we saw in the Service Catalog earlier. Now, what we care about, though, is this Couchbase Operator, and we're going to jump into the demo namespace; as I said, a number of different teams can share a cluster, so we're going to jump into this namespace. Okay, cool. So now, what we want to show you: when we think about Operators, we're going to have a scenario here where there are multiple replicas of a Couchbase service running in the cluster, and we're going to have a StatefulSet. And what's interesting is that those two things are not enough if I'm really trying to run this as a true service, where it's highly available and persistent; there are things that, as a DBA, I'm normally going to have to do if there's some sort of node failure. And so what we want to demonstrate to you is where Operators, combined with the power that was already within OpenShift, are now coming together to keep this particular database service highly available, and something that we can continue using. So, Rob, what have you got there? Yeah, so as you can see, we've got our Couchbase demo cluster running here, and we can see that it's up and running: we've got three members, and we've got an auth secret, which is what's controlling access to a UI that we're going to look at in a second. But what really shows the power of the Operator is looking at this view of the resources that it's managing. You can see that we've got a service that's doing load balancing into the cluster, and then, like you said, we've got our pods that are actually running the software itself. Okay, so that's cool. So maybe, for everyone's benefit, so we can show that this is happening live, could we bring up the Couchbase console, please, and keep up the OpenShift console, both side by side? There we go. So what we see on the right-hand side is obviously the same console Rob was working in; on the left-hand side, as you can see by the actual names of the pods that are there, are the Couchbase services that are available. And so, Rob, maybe... let's kill something; that's always fun to do on stage. Yeah, this is the power of the Operator: it's going to recover it. So let's browse on over here and kill node number two. We're going to forcefully kill this and kick off the recovery. And I see right away that, because of the integration that we have with Operators, the Couchbase console immediately picked up that something has changed in the environment. Now, why is that important? Normally a human being would have to get that alert, right? And so with Operators, we've now taken that capability and recognized that there has been a new event within the environment; this is not something that Kubernetes or OpenShift by itself would be able to understand. Now, I'm presuming we're going to end up doing something else; it's not just seeing that it failed. And sure enough, there we go. Remember, when you have a stateful application, rebalancing that data and making it available is just as important as ensuring that the disk is attached. So, Rob, thank you so much for driving this for us today and for being here. And not only Couchbase: as Matt mentioned, we also have Crunchy, Dynatrace, and Black Duck. I would encourage you all to go visit their booths out on the floor today and understand what they have available, which is all here as a dev preview, and then talk to the many other partners we have that are also looking at Operators. So again, Rob, thank you for joining us today. Matt, come on out. Okay, this is going to make for an exciting year of just what it means to consume container-based content. I think containers change how customers can get that content; I believe Operators are going to change how much they can trust running that content. Let's circle back to one more partner. This next partner has changed the landscape of computing, specifically with their work on hardware design and their work on core Linux itself. In fact, I think they've become so ubiquitous with computing that we often overlook the technological marvels they've been able to achieve. Now, for myself, I studied computer engineering, so in the late 90s I had the chance to study processor design. I actually got to build one of my own processors. Now, in my case it was the most trivial processor that you could imagine: an 8-bit subtractor, which means it can subtract two numbers 256 or smaller. But in that process I learned the sheer complexity that
goes into processor design: things like wire placements so close that electrons can jump through the insulation and short, and then doing those wire placements across three dimensions, in multiple layers, jamming in as many logic components as you possibly can. And again, in my case, this was to make a processor that could subtract two numbers. But once I was done with this, the second part of the course was studying the Pentium processor. I'll remember that moment forever, because looking at what the Pentium processor was able to accomplish, it was like looking at alien technology. And the incredible thing is that Intel, our next partner, has been able to keep up that alien-like pace of innovation twenty years later. So we're excited to have Doug Fisher here; let's hear a little bit more from Intel. "For business, wide open skies, an open mind. No matter the context, the idea of being open almost always suggests the potential of infinite possibilities. And that's exactly the power of open source, whether it's expanding what's possible in business, in science and technology, or for the greater good, which is why open source requires the involvement of a truly diverse community of contributors to scale and succeed, creating infinite possibilities for technology and, more importantly, what we do with it." [Music] You know, at Intel one of our core values is risk-taking, and I'm going to go just a bit off script for a second and say I was just backstage, and I saw a gentleman who looked a lot like Scott Guthrie, who runs all of Microsoft's cloud and enterprise efforts, wearing a red shirt, talking to Cormier. I'm just saying. I don't know, maybe I need some more sleep, but that's what I saw. As we approach Intel's 50th anniversary, these words spoken by our co-founder Robert Noyce are as relevant today as they were decades ago: don't be encumbered by history. This is about breaking boundaries in technology and then going off to do something wonderful; it's about innovation, and driving innovation in our industry. At Intel, we're constantly looking to break boundaries to advance our technology, and in the cloud and enterprise space that is no different. So I'm going to talk a bit about some of the boundaries we've been breaking and innovations we've been driving at Intel, starting with our Intel Xeon platform. Our Xeon Scalable platform, which we launched several months ago, was the biggest and most advanced movement in this technology in over a decade. We were able to drive critical performance capabilities, unmatched agility, and the necessary and sufficient security into that platform. I couldn't be happier with the work we do with Red Hat in ensuring that those hero features we drive into our platform are fully exposed to all of you, to drive that innovation, to go off and do something wonderful. Whether it's taking advantage of performance and agility features like our Advanced Vector Extensions, AVX-512, or Intel QuickAssist, those technologies are fully embraced by Red Hat Enterprise Linux; or whether it's security technologies like TXT, Trusted Execution Technology, they're fully incorporated. And we look forward to working with Red Hat on their next release, to ensure that our advancements continue to be exposed in their platform. All these workloads that are driving the need for us to break boundaries in our technology are also driving more and more need for flexibility in computing, and that's why we're excited about Intel's family of FPGAs, to help deliver that additional flexibility for you to build those capabilities into your environment. We have a broad set of FPGA capabilities, from our power-efficient MAX 10 product line all the way to our performance product line, the Stratix 10: a broad set of FPGAs. As I've been talking to customers, what's really exciting is to see the combination of our Intel Xeon Scalable platform used together with FPGAs, in addition to the acceleration development capabilities we've given to software developers, combining all that together to deliver better and better solutions, whether it's helping to accelerate data compression, pattern recognition, or data encryption and decryption. One of the things I saw in a data center recently was taking our Intel Xeon Scalable platform and utilizing the capabilities of an FPGA to do data encryption between servers behind the firewall; all the while, by using the FPGA to do that, they preserved those precious CPU cycles to ensure they delivered the SLA to the customer, yet provided more security for their data in the data center. One of the edges in cybersecurity is innovation, and the root of trust starts at the hardware. We recently renewed our commitment to security with our security-first pledge. There are really three elements to our security-first pledge. First is customer-first urgency: we have now completed the release of the microcode updates for protection on our Intel platforms going back nine-plus years since launch, to protect against things like the side-channel exploits. Second, transparent and timely communication: we are going to communicate timely and openly on our intel.com website, whether it's about our patches, performance, or other relevant information. And then, ongoing security assurance: we drive security into every one of our products. We redesigned a portion of our processor to add this partition capability, which adds additional walls between applications and user-level privileges, to further secure that environment from bad actors. I want to pause for a second and thank everyone in this room involved in helping us work through our security-first pledge. This isn't something we do on our own; it takes everyone in this room to help us do that. The partnership and collaboration was second to none; it's the most amazing thing I've seen since I've been in this industry, so thank you. We don't stop there; we continue to advance our security capabilities with cross-platform solutions. We recently had a discussion at RSA where we talked about Intel Security
Essentials where we deliver a framework of capabilities and the end that are in our silicon available for those to innovate our customers and the security ecosystem to innovate on a platform in a consistent way delivering that assurance that those capabilities will be on that platform we also talked about things like our security threat technology threat detection technology is something that we believe in and we launched that at RSA incorporates several elements one is ability to utilize our internal graphics to accelerate some of the memory scanning capabilities we call this an accelerated memory scanning it allows you to use the integrated graphics to scan memory again preserving those precious cycles on the core processor Microsoft adopted this and are now incorporated into their defender product and are shipping it today we also launched our threat SDK which allows partners like Cisco to utilize telemetry information to further secure their environments for cloud workloads so we'll continue to drive differential experiences into our platform for our ecosystem to innovate and deliver more and more capabilities one of the key aspects you have to protect is data by 2020 the projection is 44 zettabytes of data will be available 44 zettabytes of data by 2025 they project that will grow to a hundred and eighty s data bytes of data massive amount of data and what all you want to do is you want to drive value from that data drive and value from that data is absolutely critical and to do that you need to have that data closer and closer to your computation this is why we've been working Intel to break the boundaries in memory technology with our investment in 3d NAND we're reducing costs and driving up density in that form factor to ensure we get warm data closer to the computing we're also innovating on form factors we have here what we call our ruler form factor this ruler form factor is designed to drive as much dense as you can in a 1u rack we're going to continue 
to advance the capabilities to drive one petabyte of data at low power consumption into this ruler form factor SSD form factor so our innovation continues the biggest breakthrough and memory technology in the last 25 years in memory media technology was done by Intel we call this our 3d crosspoint technology and our 3d crosspoint technology is now going to be driven into SSDs as well as in a persistent memory form factor to be on the memory bus giving you the speed of memory characteristics of memory as well as the characteristics of storage given a new tier of memory for developers to take full advantage of and as you can see Red Hat is fully committed to integrating this capability into their platform to take full advantage of that new capability so I want to thank Paul and team for engaging with us to make sure that that's available for all of you to innovate on and so we're breaking boundaries and technology across a broad set of elements that we deliver that's what we're about we're going to continue to do that not be encumbered by the past your role is to go off and doing something wonderful with that technology all ecosystems are embracing this and driving it including open source technology open source is a hub of innovation it's been that way for many many years that innovation that's being driven an open source is starting to transform many many businesses it's driving business transformation we're seeing this coming to light in the transformation of 5g driving 5g into the networked environment is a transformational moment an open source is playing a pivotal role in that with OpenStack own out and opie NFV and other open source projects were contributing to and participating in are helping drive that transformation in 5g as you do software-defined networks on our barrier breaking technology we're also seeing this transformation rapidly occurring in the cloud enterprise cloud enterprise are growing rapidly and innovation continues our work with 
virtualization and KVM continues; we are aggressive in adopting technologies to advance and deliver more capabilities in virtualization. As we look at this with Red Hat, we're now working on KubeVirt to help move virtualized workloads onto these platforms so that they can be managed in an open platform environment, and KubeVirt provides that. So between Intel, Red Hat, and the community, we're investing resources to make certain that comes to product. As containers, a critical feature in Linux, become more and more prevalent across the industry, the growth of container deployments continues at a rapid pace. One of the things that we wanted to bring to that is the ability to provide isolation without impairing the flexibility, the speed, and the footprint of a container. With our Clear Containers effort, along with Hyper's runV, we were able to combine the two and create what we call Kata Containers. We launched this at the end of last year. Kata Containers is designed to keep that container element available while adding elements like isolation. Both of these efforts need an orchestration and management capability, and Red Hat's OpenShift provides that capability for these workloads, whether containerized or KubeVirt capabilities with virtual environments. Red Hat OpenShift is designed to take that commercial capability to market, and we've been working with Red Hat for several years now to develop what we call our Intel Select Solutions. Intel Select Solutions are Intel technology optimized for downstream workloads. As we see growth in a workload, we work with a partner to optimize a solution on Intel technology to deliver the best solution that can be deployed quickly. Our effort here is to accelerate the adoption of these types of workloads in the market, working with Red Hat. So now we're going to be deploying an Intel Select Solution designed and optimized around Red Hat OpenShift. We expect the industry to start deploying this capability very rapidly. I'm excited to announce
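[Editor's note: in current Kubernetes, the isolation model described here is typically wired in through a RuntimeClass. The following is a hedged sketch, not part of the talk; the handler and class names (`kata`) are assumptions and must match how the node's CRI runtime is actually configured.]

```yaml
# Hypothetical RuntimeClass exposing Kata Containers to a cluster,
# plus a pod that opts into VM-backed isolation for its containers.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata          # assumed name; any identifier works
handler: kata          # must match the handler configured in containerd/CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app
spec:
  runtimeClassName: kata   # run this pod inside a lightweight VM
  containers:
  - name: app
    image: nginx
```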
today that Lenovo is committed to being the first platform company to deliver this solution, the Intel Select Solution, to market. Now, I talked about what we're doing in industry and how we're transforming businesses. Our technology is also utilized for the greater good, and there's no better example of this than the work done by Dr. Stephen Hawking. It was a sad day on March 14th of this year when Dr. Stephen Hawking passed away, but not before Intel had a 20-year relationship with Dr. Hawking, driving breakthrough capabilities, innovating with him, and driving those robust capabilities to the rest of the world. One of our Intel engineers, an Intel Fellow, which is the highest technical achievement you can reach at Intel, got to spend 10 years with Dr. Hawking looking at innovative things they could do together with our technology and his breakthrough innovative thinking. So I thought it would be great to bring up our Intel Fellow, Lama Nachman, to talk about her work with Dr. Hawking and what she learned in that experience. Come on up, Lama. [Music] Great to see you. Thanks. So I was just talking about breaking boundaries with Intel technology; talk about how you used that in your work with Dr.
Hawking. Absolutely. So the most important part was to really make that technology contextually aware, because for people with disabilities every single interaction takes a long time. So whether it was adapting, for example, the language model of his word predictor to understand whether he's going to talk to people or whether he's writing a book on black holes, or even understanding what specific application he might be using, we had to make sure that we were surfacing only the actions that were relevant, to reduce that amount of interaction. The tricky part is really to make all of that contextual awareness happen without totally confusing the user, because it's constantly changing underneath him. So how has your work involved open source? So, you know, the problem with assistive technology in general is that it needs to be tailored to the specific disability, which really makes it very hard and very expensive, because it can't utilize economies of scale. So basically, with the system that we built, what we wanted to do is really enable unleashing innovation in the world, right? You could take that framework and tailor it to a specific sensor, for example a brain-computer interface or something like that, where you could actually then support a different set of users. So that makes open source a perfect fit, because you could actually build on it and tailor it. And when you spoke with Dr.
Hawking, what was his view of open source? Was it relevant to him? So yeah, Stephen was adamant from the beginning that he wanted a system that would benefit the world, not just himself. He spent a lot of time with us to actually build this system, and he was adamant from day one that he would only engage with us if we committed to actually open sourcing the technology. That's fantastic. And you had the privilege of working with him for 10 years; I know you have some amazing stories to share. So thank you so much for being here. Thank you so much. In order for us to scale, and that's what we're about at Intel, really scaling our capabilities, it takes this community. It takes this community of diverse capabilities; it takes diverse thought. The diverse thought of Dr. Hawking couldn't be more relevant, but we are also proud at Intel of leading efforts in diverse thought, like Women in Linux and Women in Big Data and other areas like that, where Intel feels that diversity of thinking and engagement is critical for our success. So as we look at Intel, not encumbered by the past but breaking boundaries to deliver the technology that you all will go off and do something wonderful with, we're going to remain committed to that, and I look forward to continuing to work with you. Thank you, and have a great conference. [Applause] Thank you. Now we have one more customer story for you today. When you think about customers' challenges in the technology landscape, it is hard to ignore the public cloud these days. Public cloud is introducing capabilities that are driving the fastest rate of innovation that we've ever seen in our industry, and our next customer had that same challenge. They wanted to tap into that innovation, but they were also making bets for the long term; they wanted flexibility in providers, and they had to integrate with the systems that they already have. And they have done a phenomenal job in executing on this, so please give a warm welcome to Kerry Pierce from Cathay
Pacific. Kerry, come on up. Thanks very much, Matt. Hi everyone, thank you for giving me the opportunity to share a little bit about our cloud journey. Let me start by telling you a little bit about Cathay Pacific. We're an international airline based in Hong Kong, and we serve a passenger and cargo network to over 200 destinations in 52 countries and territories. In the last seventy years we've made substantial investments to develop Hong Kong as one of the world's leading transportation hubs. We invest in what matters most to our customers, to you, focusing on our exemplary service and our great product, both on the ground and in the air. We're also investing in expanding our network, beyond our multiple frequencies to financial districts such as Tokyo, New York, and London, and we're connecting Asia and Hong Kong with key tech hubs like San Francisco, where we have multiple flights daily. We're also connecting Asia and Hong Kong to places like Tel Aviv and our upcoming destination of Dublin. In fact, 2018 is actually going to be one of our biggest years in terms of network expansion and capacity growth, and in September we will be launching our longest flight, from Hong Kong direct to Washington, DC, using a state-of-the-art Airbus A350-1000 aircraft. So that's a little bit about Cathay Pacific. Let me tell you about our journey to the cloud. I'm not going to go into technical details; there are far smarter people out in the audience who will be able to do that for you. I'll just focus a little bit on what we were trying to achieve, and on the people side of it that helped us get there. A couple of years ago we had, no doubt, the same issues that many of you do; I don't think we're unique. We had a traditional, on-premise, non-standardized, fragile infrastructure. It didn't meet our infrastructure needs and it didn't meet our development needs. It was costly to maintain, it was costly to grow, and it really inhibited innovation. Most importantly, it slowed
the delivery of value to our customers. At the same time, you had the hype of cloud over the last few years: cloud this, cloud that, cloud's going to fix the world. We were really keen on making sure we didn't get wound up in that, so we focused on what we needed. We started bottom-up with a strategy. We knew we wanted to be cloud agnostic. We wanted active-active on-premise data centers with a single network and fabric, and we wanted public clouds that were trusted and acted as an extension of that environment, not independently. We wanted to avoid single points of failure, and we wanted to reduce interdependencies by having loosely coupled designs. And finally, we wanted to be scalable; we wanted to be able to cater for sudden surges of demand. In a nutshell, we kind of just wanted to make everything easier. At a management level, we wanted to be a broker of services: not one size fits all, because that doesn't work, but also not one of everything. We wanted to standardize on a pragmatic range of services that met our development and support needs and worked in harmony with our public cloud, not against it. So we started on a journey with Red Hat. We implemented Red Hat CloudForms and Ansible to manage our hybrid cloud. We also implemented Red Hat Satellite to maintain and manage our environment. We built a Red Hat OpenStack on-premise environment to give us an alternative, and at the same time we migrated a number of customer applications to a production public cloud OpenShift environment. But it wasn't all Red Hat. You've heard today that Red Hat fits within an overall ecosystem. We looked at a number of third-party tools and services and looked at developing those into our core solution. I think at last count we had tried and tested somewhere past eighty different tools, and at the moment we still have around 62 in our environment that help us through that journey. But let me put the technical solution aside a little bit, because it doesn't matter how good your technical solution
is if you don't have the culture and the people to get it right. As a group we needed to be aligned for delivery, and we focused on three core behaviors: accountability, agility, and collaboration. Now, I was really lucky; we've got a pretty fantastic team for whom that was actually pretty easy. But again, don't underestimate the importance of getting the culture and the people right, because all the technology in the world doesn't matter if you don't have that. I asked the team what we did differently, because in our situation we didn't go out and hire a bunch of new people, and we didn't go out and hire a bunch of consultants. We had staff that had been with us for 10, 20, and in some cases 30 years. So what did we do differently? It was really simple: we just empowered and supported our staff. We knew they were the smart ones; they were the ones dealing with the legacy environment, and they had the passion to make the change. So as a team we encouraged suggestions and contributions from our overall IT community, from the bottom up. We started small, we proved the case, we told the story, and then we got buy-in, and only then did we implement wider. The benefits for our staff were a huge increase in staff satisfaction, a reduction in application and platform outage support incidents, risk-free and fail-safe application releases, work-life balance (no more midnight deployments), and our application and infrastructure people could really focus on delivering customer value, not on firefighting. And for our end customers, the people that travel with us, it was really simple: we could provide a stable service that allowed for faster releases, which meant we could deliver value faster. In terms of stats, we migrated 16 production B2C applications to a public cloud OpenShift environment in 12 months. We decreased provisioning time from weeks, or occasionally months when we were waiting for hardware, to minutes, and we had one hundred percent availability of our key
customer-facing systems. But most importantly, it was about people. We'd built a culture, a culture of innovation, built on a foundation of collaboration, agility, and accountability, and that permeated throughout the IT organization, not just those people involved in the project. Everyone within IT could see what good looked like, and could see what it looked like in terms of working together, and that was a key foundation for us. As for the future, you will have heard today that everything's changing, so we're going to continue to develop our open hybrid cloud, onboard more public cloud service providers, continue to build more modern applications, leverage emerging technology, integrate and automate everything we possibly can, and leverage more open source products with the great support of the open source community. So there you have it; that's our journey. I think we succeeded by not being overawed and by starting with the basics. The technology was key, obviously; it's a core component. But most importantly, it was the way we approached our transition. We had a clear strategy that was actually developed bottom-up by the people involved day to day, and we empowered those people to deliver, and that provided benefits to both our staff and our customers. So thank you for giving me the opportunity to share, and I hope you enjoy the rest of the Summit. [Applause] Thanks. What a great story, what a great customer story to close on. And we have one more partner to come up, and this is a partner that all of you know: Microsoft. Microsoft has gone through an amazing transformation, and we've built an incredibly meaningful partnership with them, all the way from our open source collaboration to what we do on the business side. We started with support for Red Hat Enterprise Linux on Hyper-V, and that was truly just the beginning. Today we're announcing one of the most exciting joint product offerings on the market today. Let's please give a
warm welcome to Paul Cormier and Scott Guthrie to tell us about it. Guys, come on out. You know, Scott, welcome, welcome to the Red Hat Summit. Thanks for coming; we really appreciate it. Great to be here. You know, it surprised a lot of people when we published the list of speakers and you were on it, and now you and I are on stage here. It's a really important and exciting new partnership. We've worked together a long time, from the hypervisor up to common support, and now around hybrid cloud. Maybe, from your perspective, a little bit of what led us here. Well, you know, I think the thing that's really led us here is customers. At Microsoft we've been on kind of a transformation journey the last several years, where we really try to put customers at the center of everything that we do. And as part of that, you quickly learn from customers, including everyone here, that you've got a hybrid estate, both in terms of what you run on premises, where there's a lot of Red Hat software and a lot of Microsoft software, and then, as they take the journey to the cloud, a hybrid estate in terms of how you run between on-premises and a public cloud provider. And so I think the thing that both of us recognized, and certainly our focus here at Microsoft, has been how do we really meet customers where they're at and where they want to go, and make them successful in that journey. And, you know, it's been fantastic working with Paul and the Red Hat team over the last two years in particular; we've spent a lot of time together, and we're really excited about the journey ahead. So maybe you can share a bit more about the announcement we're about to make today. Yeah, so it's a really exciting announcement, and really, I think, a first of its kind, in that we're delivering a Red Hat OpenShift on Azure service that we're jointly
developing and jointly managing together. So this is different from a traditional offering, where it's just running inside VMs and it's sort of two vendors working separately. This is really a jointly managed service that we're providing, with full enterprise support and a full SLA, where there's a single throat to choke, if you will, although it's collectively both our throats, in terms of making sure that it works well. And it's really uniquely designed around this hybrid world, in that it will support both Windows and Linux containers, and it's the same OpenShift that runs both in the public cloud on Azure and on-premises. It's something that we hear a lot about from customers; I know there are a lot of people here that have asked both of us for this, and we're super excited to be able to talk about it today. We're going to show off the first demo of it in just a bit. Okay, well, I'm going to ask you to elaborate a bit more about how this fits into the bigger Microsoft picture, and I'll get out of your way. So thanks again; thank you for coming. Here we go. Thanks, Paul. So I thought I'd spend just a few minutes talking about some of the work that we're doing with Microsoft Azure and the overall Microsoft cloud, then go deeper into the new offering that we're announcing today together with Red Hat and show a demo of it actually in action in a few minutes. At a high level, in terms of the work that we've been doing at Microsoft the last couple of years, it's really been around this journey to the cloud that we see every organization going on today. Specifically with Microsoft Azure, we've been providing a cloud platform that delivers the infrastructure, the application, and the core computing needs that organizations have as they look to take advantage of what the cloud has to offer. In terms of our focus with Azure, we deliver
lots and lots of different services and features, but we've focused in particular on four key themes, and we see these four key themes aligning very well with the journey Red Hat has been on; it's partly why we think the partnership between the two companies makes so much sense. First, the thing that we've been really focused on with Azure is how do we deliver a really productive cloud, meaning how do we enable you to take advantage of cutting-edge technology and accelerate its successful adoption, whether it's through the integration of the managed services that we provide, in the application space, the data space, and the analytics and AI space, or through the end-to-end management and development tools and how all those services work together, so that teams can adopt them and be super successful. Second, we deeply believe in hybrid, and believe that the world is going to be a multi-cloud and multi-distributed world, so how do we enable organizations to take the existing investments that they already have, easily integrate them with a public cloud environment, and get immediate ROI on day one, without having to rip and replace tons of solutions. Third, we're moving very aggressively in the AI space and are looking to provide a rich set of AI services, both finished AI models, things like speech detection, vision detection, object motion, and so on, that any developer, even a non-data scientist, can integrate to make applications smarter, and also a rich set of AI tooling that enables organizations to build custom models and integrate them into their applications with their data. And fourth, we invest very, very heavily in trust. Trust is at the core of Azure, and we now have more compliance certifications than any other cloud provider, we run in more countries than any other cloud provider, and we
really focus on unique promises around data residency, data sovereignty, and privacy that are really differentiated across the industry. In terms of where Azure runs today, we're in 50 regions around the world. A region for us is typically a cluster of multiple data centers grouped together, and you can see we're pretty much on every continent, with the exception of Antarctica, today. And the beauty is that you're going to be able to take the Red Hat OpenShift service and run it on Azure in each of these different locations, and really have a truly global footprint as you look to build and deploy solutions. We've seen this focus on productivity, hybrid, intelligence, and trust really resonate in the market, and about 90 percent of Fortune 500 companies today are deployed on Azure. You heard Nike talk a little bit earlier this afternoon about some of their journey as they've moved to the public cloud, and this is a small set of logos of just a few of the companies that are on Azure today. What I'll do, actually, even before we dive into the OpenShift demo, is show a quick video about one of those companies: Deutsche Bank. There are actually several people from that organization here today. They have been working with both Microsoft and Red Hat for many years, with Microsoft on the Azure side and with Red Hat both on the RHEL side and on the OpenShift side, and they are one of the customers that helped bring the two companies together to deliver this managed OpenShift service on Azure. So I'm just going to play a quick video of some of the folks at Deutsche Bank talking about their experiences and what they're trying to get out of it. If we could roll the video, that would be great. Technology is at the absolute heart of Deutsche Bank. We recognized that the cost of running our infrastructure was particularly high, and there was an enormous amount of underutilization. We needed a platform which was open, with a polyglot architecture supporting
any kind of application workload across the various business lines of the bank. We analyzed over 60 different vendor products, and we ended up with Red Hat OpenShift. I'm super excited that Microsoft is supporting Linux so strongly and adopting a hybrid approach. We chose Azure because Microsoft was the ideal partner to work with on constructs around security, compliance, and business continuity, and Azure is in all the places geographically that we need to be. We now have applications able to go from proof of concept to production in three weeks; that is already breaking records. OpenShift, with images and containers, allows us to apply the same sets of processes and automation across a wide range of our application landscape. On any given day we run between seven and twelve thousand containers across three regions. We're starting to see huge levels of cost reduction because of the level of multi-tenancy that we can achieve through containers. OpenShift gives us an abstraction layer which allows us to move our applications between providers without having to reconfigure or recode those applications. What's really exciting for me about this journey is the way that both Red Hat and Microsoft have embraced not just what we're doing but what each other are doing, and have worked together to build OpenShift as a first-class citizen with Microsoft. [Applause] What we're announcing today is a new fully managed OpenShift service on Azure, and it's really the first fully managed OpenShift service provided end-to-end across any of the cloud providers. It's jointly engineered, operated, and supported by both Microsoft and Red Hat, and that means, again, one service, one SLA, and both companies standing firmly behind it, really focusing on how we make customers successful. As part of that, we're providing enterprise-grade SLAs as well as support and integration testing, so you can take advantage of all your RHEL and Linux-based containers and all of
your Windows Server-based containers, and run them in a joint way with a common management stack, taking advantage of one service, getting maximum density, getting maximum code reuse, and taking advantage of a containerized world in a better way than ever before. This customer focus is very much at the center of what both companies are centered around. So what I thought would be fun, rather than just talking about OpenShift, is to actually show a little bit of the journey of what taking advantage of it looks like. So I'd like to invite Brendan and Chris on stage; they're going to show a live demo of OpenShift on Azure in action and walk through how to provision the service and how to start taking advantage of it using the full OpenShift ecosystem. Please welcome Brendan and Chris, who are going to join us on stage for a demo. Thanks, Scott. Thanks, man. It's been a good afternoon. So, what we want to get into right now: first, I'd like to thank Brendan Burns for joining us from Microsoft Build. It's a busy week for you; I'm sure you're on stage there a few times as well. You know what I like most about what we just announced? It's not only the business and technical aspects, but the operational aspect: the uniqueness, the expertise that Red Hat has for running OpenShift, combined with the expertise that Microsoft has within Azure. Customers are going to get this joint offering, if you will, with Red Hat OpenShift on Microsoft Azure. And so, with that, Brendan, I really appreciate you being here. Maybe talk to the folks about what we're going to show. Yeah, so we're going to take a look at what it looks like to deploy OpenShift onto Azure via the new OpenShift service, and the real selling point, the really great part of this, is the deep integration with the cloud-native Azure API. So the same tooling that you would use to create virtual machines, to create disks, to create
databases is now the tooling that you're going to use to create an OpenShift cluster. So, to show you this, first we're going to create a resource group. We're going to create that resource group in East US using the `az` tool, the Azure command-line tooling. A resource group is sort of a folder on Azure that holds all of your stuff. So that's going to come back in a second; I've created my resource group in East US, and now we're going to use that exact same tool, calling into Azure APIs, to provision an OpenShift cluster. So here we go: we have `az openshift`, that's our new command-line tool, putting it into that resource group in East US. All right, so it's going to take a little bit of time to deploy that OpenShift cluster. It's doing a bunch of work behind the scenes, provisioning all kinds of resources as well as credentials to access a bunch of different Azure APIs. So are we actually able to see this? Yeah, in just a second we can cut over to that resource group and reload. So, Brendan, while we're loading, the beauty of what the teams have been doing together already is the fact that OpenShift is now a first-class citizen, as it were. Yeah, absolutely, within Azure. So I presume not only can I do a deployment, but I can do things like scale and check my credentials, pretty much everything that I could do with any other service. That's exactly right; anything that you were used to doing via the... my computer has locked up... there we go, the demo gods are totally with me... oh no, I hit reload... yeah, that was just evil timing on the house. This is another use for Operators, as we talked about earlier today. That's right. My dashboard should be coming up. Do I dare click on something? That's awesome... it was there... there we go, good job. So what's really interesting about this: I've also heard that it deploys in as little as five to six minutes, which is really
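[Editor's note: the two CLI steps Brendan describes, creating a resource group and then provisioning the cluster into it, can be sketched as below. This is a hedged dry run, not the exact demo commands: the resource group and cluster names are made up, and `az openshift` refers to the 2018-era managed OpenShift CLI surface. The script echoes each command instead of executing it, so it can be read without an Azure subscription; drop the `echo` to run for real.]

```shell
#!/bin/sh
# Dry-run sketch of provisioning a managed OpenShift cluster on Azure.
# Names below are illustrative placeholders, not values from the demo.
RESOURCE_GROUP="demo-rg"
CLUSTER_NAME="demo-cluster"
LOCATION="eastus"

run() {
  # Print the command instead of executing it (dry run).
  echo "$@"
}

# Step 1: create a resource group, the "folder" that holds related resources.
run az group create --name "$RESOURCE_GROUP" --location "$LOCATION"

# Step 2: provision the OpenShift cluster into that resource group.
run az openshift create --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" --location "$LOCATION"
```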
good for customers who want to get up and running with it quickly. All right, there we go, there it is; we managed to make it. See, that shows that it's real, right? You can see the sweat coming off of me. But there you can see the various resources that are being created in order to build this OpenShift cluster: virtual machines, disks, all of the pieces, provisioned for you automatically via that one single command-line call. Now, of course, it takes a few minutes to create the cluster, so in order to show the other side of the integration, the integration between OpenShift and Azure, I'm going to cut over to an OpenShift cluster that I already have created. All right, so here you can see my OpenShift cluster running on Microsoft Azure. I'm going to log in over here, and the first sign of the integration you're going to see is that it's actually using my credentials, my login, going through Active Directory and any corporate policies that I may have around smart cards, two-factor auth, anything like that, to authenticate me to that OpenShift cluster. So I'll accept that it can access my account, and now we're going to load up the OpenShift web console. So now this looks familiar to me. Oh yeah, if anybody out there has used OpenShift, this is the exact same console. What we're going to show, though, is how this console, via the Open Service Broker and the Open Service Broker implementation for Azure, integrates natively with OpenShift. All right, so we can go down here, and we can actually see... I want to deploy a database. I'm going to deploy Mongo as the key-value store that I'm going to use. But, as we talk about management and having an OpenShift cluster that's managed for you, I don't really want to have to manage my database either, so I'm actually going to use Cosmos DB. It's a native Azure service, a multi-model database that offers me the ability to access my data in a variety of different formats, including MongoDB, fully managed, replicated around the world,
a pretty incredible service. So I'm going to go ahead and create that. So, Brendan, what's interesting, I think, is that we talked about the operational aspects, and clearly it's not you and I running the clusters, but you do need a way to interface with it. So when customers deploy this, all of this is out of the box; there's no additional setup? Right, this is what you get when you use that tool to create that OpenShift cluster; this is what you get with all of that integration. Okay, great. I'll step through here and go ahead; I don't have any IP ranges... there we go... all right, and we create that binding. All right, and so now, behind the scenes, OpenShift is integrated with the Azure APIs, with all of my credentials, to go ahead and create that distributed database. Once it's done provisioning, all of the credentials necessary to access the database are automatically populated into Kubernetes, available for me inside of OpenShift via service discovery, to access from my application without any further work. So I think that really shows not only the power of integrating OpenShift with the Azure APIs, but also the power of integrating the Azure APIs inside of OpenShift, to make a truly seamless experience for managing and deploying your containers across a variety of different platforms. Yeah, hey, Brendan, this is great. I know you've got a flight to catch, because I think you're back on stage in a few hours, but we really appreciate you joining us today. Absolutely; I look forward to seeing what else we do. Yeah, absolutely, thank you so much. Thanks, guys. Matt, you want to come back up? Thanks a lot, guys. If you have never had the opportunity to do a live demo in front of 8,000 people, it will give you a new appreciation for standing up there and doing it, and that was really good. You know, every time I get the chance to take a step back and think about the technology we have at our command today, I'm in awe. Just
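[Editor's note: the binding flow Brendan describes ends with connection credentials landing in the application's environment. The sketch below shows what consuming such a binding might look like from the application side; the variable names (`MONGO_HOST`, and so on) and default values are hypothetical, since the real keys depend on the broker's binding schema, and the defaults exist only so the sketch runs standalone.]

```shell
#!/bin/sh
# Hypothetical consumption of a service-broker binding: the broker has
# injected connection details as environment variables. Names and
# fallback values here are illustrative, not the real binding keys.
MONGO_HOST="${MONGO_HOST:-example.documents.azure.com}"
MONGO_PORT="${MONGO_PORT:-10255}"
MONGO_USER="${MONGO_USER:-demo-user}"
MONGO_PASSWORD="${MONGO_PASSWORD:-demo-pass}"

# Assemble a MongoDB-API connection string from the injected values,
# so the application needs no hard-coded credentials of its own.
MONGO_URI="mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_HOST}:${MONGO_PORT}/?ssl=true"
echo "$MONGO_URI"
```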
the progress over the last 10 or 20 years is incredible, and to think about what might come in the next 10 or 20 years really is unthinkable. Forget 10 years; think about what might come in the next five years, even the next two years. This can create a lot of uncertainty about what's to come, but I believe I am certain about one thing, and that is: if ever there was a time when any idea is achievable, it is now. Just think about what you've seen today, every aspect of open hybrid cloud. You have the world's infrastructure at your fingertips, and it's not stopping. You've heard about the innovation of open source, how fast that's evolving and improving this capability. You've heard this afternoon from an entire technology ecosystem that's ready to help you on this journey, and you've heard from customer after customer that's already started their journey and the successes that they've had. One of the neat parts about this week: later on, you will actually get to put your hands on all of this technology together in our live audience demo. You know, this is what Summit is all about for us. It's a chance to bring together the technology experts that you can work with to help formulate how to pull off those ideas. We have the chance to bring together technology experts, our customers, and our partners, and really create an environment where everyone can experience the power of open source, that same spark that I talked about when I was at IBM, where I understood the potential that open source had for enterprise customers. We want to create the environment where you can have your own spark, where you can have that same inspiration. In tomorrow's keynote, you will actually hear a story about how open source is changing medicine as we know it and literally saving lives. It is a great example of expanding the ideas that we came into this event with. So let's make this the best Summit ever. Thank you
very much for being here. Let's kick things off right: head down to the Welcome Reception in the expo hall, and please enjoy the Summit. Thank you all so much. [Music]
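As a footnote to the demo portion above: the pattern Brendan describes, where an Open Service Broker binding provisions a database and automatically populates its credentials into Kubernetes for the application to pick up via service discovery, commonly surfaces to the application as injected environment variables. The sketch below is a minimal, hypothetical illustration of how an app might collect such broker-injected settings; the `COSMOSDB_*` variable names are assumptions for illustration, not the actual names the Azure broker uses.

```python
import os

def load_binding_credentials(prefix="COSMOSDB"):
    """Collect connection settings that a service-broker binding has
    injected into the container environment, stripping the service
    prefix and lowercasing the remaining key names."""
    marker = prefix + "_"
    return {
        key[len(marker):].lower(): value
        for key, value in os.environ.items()
        if key.startswith(marker)
    }

# Simulate the variables a binding might inject (names are illustrative):
os.environ["COSMOSDB_HOST"] = "mydb.documents.azure.com"
os.environ["COSMOSDB_PRIMARY_KEY"] = "s3cret"

creds = load_binding_credentials()
print(sorted(creds))  # ['host', 'primary_key']
```

The point of the pattern is the one made in the demo: the application never handles provisioning or secrets distribution itself; it simply reads whatever the binding delivered.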
SUMMARY :
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Doug Fisher | PERSON | 0.99+ |
Stephen | PERSON | 0.99+ |
Brendan | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
Robert Noyce | PERSON | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Michael | PERSON | 0.99+ |
Arvind | PERSON | 0.99+ |
20-year | QUANTITY | 0.99+ |
March 14th | DATE | 0.99+ |
Matt | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Nike | ORGANIZATION | 0.99+ |
Paul | PERSON | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
Antarctica | LOCATION | 0.99+ |
Scott Guthrie | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
Asia | LOCATION | 0.99+ |
Washington DC | LOCATION | 0.99+ |
London | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
two minutes | QUANTITY | 0.99+ |
Arvin | PERSON | 0.99+ |
Tel Aviv | LOCATION | 0.99+ |
two numbers | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Paul correr | PERSON | 0.99+ |
September | DATE | 0.99+ |
Kerry Pierce | PERSON | 0.99+ |
30 years | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
8-bit | QUANTITY | 0.99+ |
Mike witig | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
2025 | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
dr. Hawking | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
Dublin | LOCATION | 0.99+ |
first partner | QUANTITY | 0.99+ |
Rob | PERSON | 0.99+ |
first platform | QUANTITY | 0.99+ |
Matt Hicks | PERSON | 0.99+ |
today | DATE | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
OpenShift | TITLE | 0.99+ |
last week | DATE | 0.99+ |
Jim Wu, Falcon Computing | Super Computing 2017
>> Announcer: From Denver, Colorado, it's theCUBE covering Super Computing '17. Brought to you by Intel. (upbeat techno music) Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at Super Computing 2017 in Denver, Colorado. It's our first trip to the show: 12,000 people, a lot of exciting stuff going on, big iron, big lifting, heavy-duty compute. We're excited to have our next guest on. He's Jim Wu, he's the Director of Customer Experience for Falcon Computing. Jim, welcome. Thank you. Good to see you. So, what does Falcon do, for people that aren't familiar with the company? Yeah, Falcon is an early-stage startup focused on FPGA-based acceleration development. Our vision is to allow software engineers to develop FPGA-based accelerators without FPGA expertise. Right, and you just said you closed your B round. So, congratulations on that. >> Jim: Thank you. Yeah, very exciting. So, it's a pretty interesting concept, to really bring the capability to traditional software engineers to program for hardware. That's kind of a new concept. What do you think? 'Cause it brings the power of a hardware system but the flexibility of a software system. Yeah, so today, developing FPGA accelerators is very challenging. Today, for acceleration, people use very low-level languages like Verilog and VHDL to develop FPGA accelerators, which is very time-consuming and very labor-intensive. So our goal is to liberate them with a C/C++-based design flow, to give them an environment that they are familiar with in C/C++. So now, not only can they improve their productivity; we also do a lot of automatic optimization under the hood, to give them the best accelerator results. Right, so that really opens up the ecosystem well beyond the relatively small ecosystem that knows how to program that hardware. Definitely, that's what we are hoping to see. We want to put the tool in the hands of all software programmers.
They can use it in the cloud. They can use it on premises. Okay. So what's the name of your product? And how does it fit within the stack? I know we've got the Intel microprocessor under the covers, we've got the accelerator, we've got the cards. There's a lot of pieces to the puzzle. >> Jim: Yeah. So where does Falcon fit? So our main product is a compiler, called the Merlin Compiler. >> Jeff: Okay. It's a pure C and C++ flow that enables software programmers to design FPGA-based accelerators without any knowledge of FPGAs. And it's highly integrated with Intel development tools, so users don't even need to learn anything about the Intel development environment. They can just use their C++ development environment. Then in the end, we give them the host code as well as FPGA binaries, so they can run on the FPGA to see accelerated applications. Okay, and how long has Merlin been GA? Actually, we'll be GA early next year. Early next year. So you're finishing, doing the final polish here and there. Yes. So in this quarter, we are investing heavily in a lot of ease-of-use features. Okay. We have most of the features we want in the tool, but we're still lacking a bit in terms of ease of use. >> Jeff: Okay. So we are enhancing our reporting capabilities, we are enhancing our profiling capabilities. We want it to really feel like a traditional C++-based development environment for software application engineers. Okay, that's fine. You want to get it done, right, before you ship it out the door? So you have some alpha programs going on? Some beta programs with some really early adopters? Yeah, exactly. So today we provide a 14-day free trial to any customers who are interested. You can set it up in your enterprise or you can set it up on the cloud. Okay. We provide it wherever you want your work done. Okay. And so you'll support all the cloud service providers, the big public clouds, all the private clouds. All the traditional data centers as well. Right.
So, we are already live on AWS as well as Alibaba Cloud, and we are working on bringing the tool to other public cloud providers as well. Right. So what is some of the early feedback you're getting from the people you're talking to? As to where this is going to make the biggest impact, what type of application space has just been waiting for this solution? So our Merlin Compiler is a productivity tool, so any space where FPGAs traditionally play well, that's where we want to be. So things like encryption, decryption, video codecs, compression, decompression. Those kinds of applications are very suitable for FPGAs. Traditionally, they could only be developed by hardware engineers. Now, with the Merlin Compiler, all of these software engineers can build all of these applications. Okay. And when is the GA getting out? I know it's coming. When is it coming, approximately? So probably first quarter of 2018. Okay, that's just right around the corner. Exactly. All right, super. And again, a little bit about the company: how many people are you? A little bit of the background on the founders. So we have about 30 employees at the moment. We have offices in Santa Clara, which is our headquarters. We also have an office in Los Angeles, as well as in Beijing, China. Okay, great. All right, well, Jim, thanks for taking a few minutes. We'll be looking for GA in a couple of months, and we wish you nothing but the best success. Okay, thank you so much, Jeff. All right, he's Jim Wu, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. (upbeat techno music)
SUMMARY :
Brought to you by Intel. Verilog and the VHDL to develop FPGA accelerators. called the Merlin Compiler. We have most of the features we want to be in the tool, We provide to where you want your work done.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Wu | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Santa Clara | LOCATION | 0.99+ |
Beijing | LOCATION | 0.99+ |
Los Angeles | LOCATION | 0.99+ |
14 day | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Falcon | ORGANIZATION | 0.99+ |
first quarter of 2018 | DATE | 0.99+ |
12,000 people | QUANTITY | 0.99+ |
Denver, Colorado | LOCATION | 0.99+ |
twice | QUANTITY | 0.99+ |
first trip | QUANTITY | 0.99+ |
C++ | TITLE | 0.99+ |
Early next year | DATE | 0.98+ |
Intel | ORGANIZATION | 0.98+ |
Super Computing '17 | EVENT | 0.98+ |
early next year | DATE | 0.98+ |
2017 | DATE | 0.98+ |
GA | LOCATION | 0.97+ |
Jim Lu | PERSON | 0.97+ |
Falcon Company | ORGANIZATION | 0.97+ |
about 30 employees | QUANTITY | 0.97+ |
Super Computing 2017 | EVENT | 0.97+ |
APGA | TITLE | 0.94+ |
this quarter | DATE | 0.94+ |
theCUBE | ORGANIZATION | 0.94+ |
C | TITLE | 0.92+ |
Aduplas | ORGANIZATION | 0.91+ |
C/C+ | TITLE | 0.9+ |
C+ | TITLE | 0.87+ |
Alibaba Cloud | ORGANIZATION | 0.84+ |
APGA | ORGANIZATION | 0.82+ |
Falcon Computing | ORGANIZATION | 0.81+ |
China | LOCATION | 0.76+ |
Merlin | TITLE | 0.71+ |
Merlin Compiler | TITLE | 0.65+ |
Merlin | ORGANIZATION | 0.64+ |
FPGA | ORGANIZATION | 0.62+ |
Super | EVENT | 0.61+ |
GA | ORGANIZATION | 0.61+ |
Verilog | TITLE | 0.54+ |
Stephane Monoboisset, Accelize | Super Computing 2017
>> Voiceover: From Denver, Colorado, it's theCUBE covering Super Computing '17, brought to you by Intel. Hey, welcome back, everybody. Jeff Frick, here, with theCUBE. We're in Denver, Colorado at Super Computing 2017. It's all things heavy lifting, big iron, 12,000 people. I think it's the 20th anniversary of the conference. A lot of academics, really talking about big iron, doing big computing. And we're excited to have our next guest, talking about speed. He's Stephane Monoboisset. Did I get that right? That's right. He's the director of marketing and partnerships for Accelize. Welcome. Thank you. So, for folks that aren't familiar with Accelize, give them kind of the quick overview. Okay, so Accelize is a French startup, actually a spinoff of a company called PLDA that has been around for 20 years doing PCI Express IP. A few years ago, we started an initiative to basically bring FPGA acceleration to the cloud industry. So what we say is, we basically enable FPGA acceleration as a service. So did that not exist in cloud service providers before, or what was the opportunity that you saw there? So, FPGAs have been used in data centers in many different ways, and they're starting to make their way into an as-a-service type of approach. But one of the buzzwords the industry is using is FPGA as a service, and the industry usually refers to it as the way to bring FPGAs to end users. But when you think about it, end users don't really want FPGA as a service. Most cloud end users are not FPGA experts, so they couldn't care less whether it's an FPGA or something else. What they really want is the acceleration benefits. Hence the term, FPGA acceleration as a service.
So, in order to do that, instead of just offering an FPGA platform and giving them the tools, even if those are easy to use for developing FPGAs, our objective is to provide a marketplace of accelerators that they can use as a service, without even thinking about the FPGA in the background. So that's a really interesting concept, because that also leverages an ecosystem. And one thing we know is important: if you have any kind of a platform play, you need an ecosystem that brings a much broader breadth of applications and solution suites, and there's a lot of talk about solutions. So that was pretty insightful, 'cause now you open it up to this much broader set of applications. Well, absolutely. The ecosystem is the essential part of the offering, because obviously, as a company, we cannot be expert in every single domain. And to a certain extent, even FPGA designers, of whom there are maybe 10 to 15,000 in the world, are not really experts in the end application. So one of the challenges we're trying to address is how we let application developers, the people who are already playing in the cloud, the ISVs, for example, who have the expertise in what the end user wants, develop something that is efficient for the end user on FPGAs. And this is why we've created a tool called QuickPlay, which basically enables what we call accelerator function developers, the guys who have the application expertise, to leverage an ecosystem of IP providers in the FPGA space that have built efficient building blocks, like encryption, compression, video transcoding. Right. These sorts of things. So what you have is an ecosystem of cloud service providers, an ecosystem of IP providers, and a growing ecosystem of accelerator developers that develop all these accelerators that are sold as a service. And that really opens up the number of people that are qualified to play in the space.
'Cause you're kind of hiding the complexity with the hardcore hardware engineers and really making it more of a traditional software application space. Is that right? Yeah, you're absolutely right. And we're doing that on the technical front, but we're also doing that on the business model front. Because one thing with FPGAs is that they have relied heavily over the years on the IP industry, and the IP industry for FPGAs, and it's the same for ASICs, has also relied on a business model based on very high up-front costs. So let me give you an example. Let's say I want to develop an accelerator for databases. What I need to do is get the stream of data coming in. It's most likely encrypted, so I need to decrypt this data; then I want to run some search algorithm on it to extract certain functions; I'm going to do some processing on it; and maybe the last thing I want to do is compress, because I want to store the result of that data. If I'm doing that with a traditional IP business model, I need to go and talk to every single one of those IP providers and ask them to sell me the IP. In the traditional IP business model, I'm looking at somewhere between $200,000 and $500,000 in up-front cost, and I want to sell this accelerator for maybe a couple of dollars on one of the marketplaces. That doesn't play out. So what we've done, also, is introduce a pay-per-use business model that allows us to track the IPs being used by the accelerators, so we can propagate the as-a-service business model throughout the industry's supply chain. Which is huge, right? 'Cause as much as cloud is about flexibility and extensibility, it's about the business model as well: paying for what you use when you use it, turning it on, turning it off. So that's a pretty critical success factor. Absolutely. I mean, you can imagine that there are, I don't know, millions of users in the cloud.
There are maybe hundreds of thousands of different ways they're processing their data. So we also need a very agile ecosystem that can develop very quickly, and we also need them to do it in a way that doesn't cost too much money, right? Think about the App Store when it was launched. When Apple launched the iPhone about 10 years ago, they didn't have many applications, and I don't think they quite knew exactly how it was going to be used. But what they did, which completely changed the industry, is they opened up the SDK, which they sold for a very small amount of money, and enabled a huge community to come up with a lot of applications. And now you go there and you can find an application that really meets your needs. That's the similar concept that we're trying to develop here. Right. So how's the uptake been? Where are you in the life cycle of this project? 'Cause it's a relatively new spinout of the larger company? Yes, it's relatively new. We did the spinout because we really want to give that product its own life. Right, right. Right? But we are still at the beginning. So we started developing partnerships with cloud service providers. The two that we've announced are Amazon Web Services and OVH, the cloud service provider in France. And we have recruited, I think, about a dozen IP partners, and now we're also working with accelerator function developers. Okay. So it's a work in progress, and our main goal right now is really to evangelize, and to show them how much money they can make and how they can serve this market of FPGA acceleration as a service. The cloud providers, or the application providers? Who do you really have to convince the most? So the ones we have to convince today are really the application developers. Okay, okay. Because without content, your marketplace doesn't mean much.
So this is the main thing we're focusing on right now. Okay, great. So, 2017 is coming to an end, which is hard to believe. As you look forward to 2018, of the things you just outlined, what are some of the top priorities? So, the top priority will be to strengthen our relationships with the key cloud service providers we work with. We have a couple of other discussions ongoing to try to offer a platform on more cloud service providers. We also want to strengthen our relationship with Intel. And we'll continue the evangelization to really onboard all the IP providers and the accelerator developers, so that the marketplace becomes filled with valuable accelerators that people can use. That's going to be a long process, but we are focusing right now on key application spaces that we know people can leverage. Exciting times. Oh yeah, it is. You know, it's 10 years since the App Store launched, I think, so I look at acceleration as a service in cloud service providers, and this sounds like a terrific opportunity. It is, it is a huge opportunity. Everybody's talking about it. We just need to materialize it now. All right, well, congratulations, and thanks for taking a couple minutes out of your day. Oh, thanks for your time. All right, he's Stephane, I'm Jeff Frick. You're watching theCUBE from Super Computing 2017. Thanks for watching. (upbeat music)
SUMMARY :
So one of the challenges that we're trying to address
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stephane Monoboisset | PERSON | 0.99+ |
Anthony | PERSON | 0.99+ |
Teresa | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Rebecca | PERSON | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
Jeff | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Teresa Tung | PERSON | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Mark | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
Jamie | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Jamie Sharath | PERSON | 0.99+ |
Rajeev | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Jeremy | PERSON | 0.99+ |
Ramin Sayar | PERSON | 0.99+ |
Holland | LOCATION | 0.99+ |
Abhiman Matlapudi | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Rajeem | PERSON | 0.99+ |
Jeff Rick | PERSON | 0.99+ |
Savannah | PERSON | 0.99+ |
Rajeev Krishnan | PERSON | 0.99+ |
three | QUANTITY | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
France | LOCATION | 0.99+ |
Sally Jenkins | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Stephane | PERSON | 0.99+ |
John Farer | PERSON | 0.99+ |
Jamaica | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Abhiman | PERSON | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
130% | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
2018 | DATE | 0.99+ |
30 days | QUANTITY | 0.99+ |
Cloudera | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
183% | QUANTITY | 0.99+ |
14 million | QUANTITY | 0.99+ |
Asia | LOCATION | 0.99+ |
38% | QUANTITY | 0.99+ |
Tom | PERSON | 0.99+ |
24 million | QUANTITY | 0.99+ |
Theresa | PERSON | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
Accelize | ORGANIZATION | 0.99+ |
32 million | QUANTITY | 0.99+ |
Bill Jenkins, Intel | Super Computing 2017
>> Narrator: From Denver, Colorado, it's theCUBE. Covering Super Computing 17. Brought to you by Intel. (techno music) Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at the Super Computing Conference 2017. About 12 thousand people, talking about the outer edges of computing. It's pretty amazing. The keynote was huge. The square kilometer array, a new vocabulary word I learned today. It's pretty exciting times, and we're excited to have our next guest. He's Bill Jenkins. He's a Product Line Manager for AI on FPGAs at Intel. Bill, welcome. Thank you very much for having me. Nice to meet you, and nice to talk to you today. So you're right in the middle of this machine-learning AI storm, which we keep hearing more and more about. Kind of the next generation of big data, if you will. That's right. It's the most dynamic industry I've seen since the telecom industry back in the 90s. It's evolving every day, every month. Intel's been making some announcements, using this combination of software programming and FPGAs on the acceleration stack to get more performance out of the data center. Did I get that right? Sure, yeah, yeah. Pretty exciting. The use of both hardware, as well as software on top of it, to open up the solution stack, open up the ecosystem. Which of those things are you working on specifically? I really build, first, the enabling technology that brings the FPGA into that Intel ecosystem, where Intel is trying to provide that solution from top to bottom to deliver AI products. >> Jeff: Right. Into that market. FPGAs are a key piece of that, because we provide a different way to accelerate those machine-learning and AI workloads. We can be an offload engine to a CPU, or we can be inline analytics to offload the system and get higher performance that way. We tie into that overall Intel ecosystem of tools and products. Right.
So that's a pretty interesting piece, because real-time streaming data is all the rage now, right? Not batch. You want it now. So how do you get it in? How do you get it written to the database? How do you get it into the microprocessor? That's a really, really important piece, and that's different than even two years ago; you didn't hear much about real-time then. I think, like I said, it's evolving quite a bit. Now, a lot of people deal with training. It's the science behind it. The data scientists work to figure out what topologies they want to deploy and how they want to deploy 'em. But now, people are building products around it. >> Jeff: Right. And once they start deploying these technologies into products, they realize that they don't want to compensate for limitations in hardware. They want to work around them. A lot of this evolution that we're building is to try to find ways to do that compute more efficiently. What we call inferencing, the actual deployed machine-learning scoring, as it were. >> Jeff: Right. In a product, it's all about how quickly I can get the data out. It's not about waiting two seconds to start the processing. You know, in an autonomously driven car, where someone's crossing the road, I'm not waiting two seconds to figure out it's a person. Right, right. I need it right away. So I need to be able to do that with video feeds, right off a disk drive, from the ethernet data coming in. I want to do that directly inline, so that my processor can do what it's good at, and we offload that processor to get better system performance. Right. And then on the machine-learning side specifically, 'cause that is all the rage. And it is learning, so there is a real-time aspect to it. You talked about autonomous vehicles, but there's also continuous learning over time that's not necessarily dependent on learning immediately. Right. But continuous improvement over time. What are some of the unique challenges in machine-learning?
And what are some of the ways that you guys are trying to address those? Once you've trained the network, people always have to go back and retrain. They say, okay, I've got good accuracy, but I want better performance. Then they start lowering the precision: today we're at 32-bit, maybe 16-bit, then they start looking into eight. But the problem is, their accuracy drops. So they retrain that network at eight bits, to get the performance benefit but with the higher accuracy. The flexibility of the FPGA actually allows people to take that network at 32-bit, with the 32-bit trained weights, but deploy it in lower precision. The hardware is so flexible that we can abstract that away; we can do what we call an 11-bit floating point, or even 8-bit floating point. Even here today at the show, we've got a binary and ternary demo, showcasing the flexibility that the FPGA can provide today, with that building-block piece of hardware that the FPGA can be, and really provide not only the topologies that people are trying to build today, but tomorrow's. >> Jeff: Right. Future-proofing their hardware, but also the precisions that they may want to use, so that they don't have to retrain. They can get less than 1% accuracy loss, but they can lower the precision to get all the performance benefits of that data scientist's work to come up with a new architecture. Right. But it's interesting, 'cause there are trade-offs, right? >> Bill: Sure. There's no optimum solution; it's optimal with respect to what you're trying to optimize for. >> Bill: Right. So really, the ability to change, to continue to work on those learning algorithms, to be able to change your priority, is pretty key. Yeah, a lot of times today, you want this. This has been the mantra of the FPGA for 30-plus years: you deploy it today, and it works fine. Maybe you build an ASIC out of it. But what you want tomorrow is going to be different.
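The precision trade-off Bill describes, taking weights trained at 32-bit and deploying them at a lower bit width, and watching accuracy (here, raw weight error) degrade as the precision drops, can be illustrated with a minimal Python sketch. This is a generic uniform-quantization toy, not Intel's toolchain or the FPGA floating-point formats he mentions; the random weights stand in for a trained model.

```python
import random

def quantize(weights, bits):
    """Map float weights onto a uniform signed grid with the given bit
    width, then back to floats, mimicking a reduced-precision deployment."""
    levels = 2 ** (bits - 1) - 1              # e.g. 127 representable steps at 8 bits
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) * scale for w in weights]

random.seed(0)
w32 = [random.gauss(0.0, 1.0) for _ in range(1000)]   # stand-in for trained fp32 weights

for bits in (16, 8, 4):
    wq = quantize(w32, bits)
    worst = max(abs(a - b) for a, b in zip(w32, wq))
    print(f"{bits}-bit worst-case weight error: {worst:.5f}")
```

The worst-case error grows as the grid gets coarser, which is the motivation for either retraining at low precision or, as in the FPGA approach described here, picking an intermediate precision that keeps the accuracy loss small without retraining.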
So maybe, if it's changing so rapidly, you build the ASIC only when there's runway for it. But if there isn't, you may just say, I have the FPGA; I can just reprogram it for the next architecture, the next methodology. Right. So it gives you that future-proofing, that capability to sustain different topologies, different architectures, different precisions, to keep people going with the same piece of hardware, without having to spin up a new ASIC every year. >> Jeff: Right, right. Which, even then, is so dynamic that it's probably faster than every year, the way things are going today. So the other thing you mentioned is topology, and it's not the same topology you mentioned, but this whole idea of edge. Sure. So moving more and more compute, and storage, and smarts to the edge. 'Cause there's just not going to be time, you mentioned autonomous vehicles, in a lot of applications to get everything back up into the cloud, back into the data center. You guys are pushing this technology not only in the data center, but progressively closer and closer to the edge. Absolutely. The data center has a need. It's always going to be there, but they're getting big. The amount of data that we're trying to process every day is growing. I always say that the telecom industry started the Information Age. Well, the Information Age has done a great job of collecting a lot of data. We have to process that. If you think about it, maybe I'll allude back to autonomous vehicles: you're talking about thousands of gigabytes per day of data generated. Smart factories: exabytes of data generated a day. What are you going to do with all that? It has to be processed. We need that compute in the data center, but we have to start pushing it out into the edge, where I start thinking, well, even at a show like this, I want security. I want to do real-time weapons detection, right? Security prevention. I want to do smart city applications.
Just monitoring how traffic moves through a mall, so that I can control lighting and heating. All of these things at the edge, in the camera deployed on the street, in the camera deployed in a mall. We want to make all of those smarter, so that we can do more compute and offload the amount of data that needs to be sent back to the data center. >> Jeff: Right. As much as possible, only the relevant data gets sent back. No shortage of demand for compute, storage, and networking, is there? No, no. It's really a heterogeneous world, right? We need all the different compute. We need all the different aspects of transmission of the data with 5G. We need disk space to store it. >> Jeff: Right. We need cooling to cool it. It's really becoming a heterogeneous world. All right, well, I'm going to give you the last word. I can't believe we're in November of 2017. Yeah. Which is bananas. What are you working on for 2018? What are some of your priorities? If we talk a year from now, what are we going to be talking about? Intel has acquired a lot of companies over the past couple of years in AI. You're seeing a lot of merging of the FPGA into that ecosystem. We've got Nervana, we've got the Movidius and Mobileye acquisitions, Saffron Technologies. The FPGA is a key piece of all of these things, because it gives you that flexibility of the hardware to extend those pieces. You're going to see a lot more stuff in the cloud, a lot more stuff with partners next year, and really enabling that edge-to-data-center compute, with things like binary neural networks and ternary neural networks, all the different next-generation topologies, to keep that leading-edge flexibility that the FPGA can provide for people's products tomorrow. >> Jeff: Exciting times. Yeah, great. All right, Bill Jenkins. There's a lot going on in computing. If you're not getting your computer science degree, kids, think about it again. He's Bill Jenkins. I'm Jeff Frick.
You're watching theCUBE from Super Computing 2017. Thanks for watching. Thank you. (techno music)
Bernhard Friebe, Intel Programmable Solutions Group | Super Computing 2017
>> Announcer: From Denver, Colorado, it's theCUBE. Covering Super Computing 2017 brought to you by Intel. (upbeat music) >> Hey, welcome back everybody. Jeffrey Frick here with theCube. We're in Denver, Colorado at Super Computing 17. I think it's the 20th year of the convention. 12,000 people. We've never been here before. It's pretty amazing. Amazing keynote, really talking about space, and really big, big, big computing projects, so, excited to be here, and we've got our first guest of the day. He's Bernhard Friebe, he is the Senior Director of FPGA, I'll get that good by the end of the day, Software Solutions for the Intel Programmable Solutions Group. First off, welcome, Bernhard. >> Thank you. I'm glad to be here. >> Absolutely. So, have you been to this conference before? >> Yeah, a couple of times before. It's always a big event. Always a big show for us, so I'm excited. >> Yeah, and it's different, too, cuz it's got a lot of academic influence, as well, as you walk around the outside. It's pretty hardcore. >> Yes, it's wonderful, and you see a lot of innovation going on, and we need to move fast. We need to move faster. That's what it is. And accelerate. >> And that's what you're all about, acceleration, so, Intel's making a lot of announcements, really, about acceleration with FPGAs. For acceleration and in data centers and in big data, and all these big applications. So, explain just a little bit how that space is evolving and what some of the recent announcements are all about. >> The world of computing must accelerate. I think we all agree on that. We all see that that's a key requirement. And FPGAs are truly versatile, multi-function accelerators. They accelerate so many workloads in the high-performance computing space, may it be financial, genomics, oil and gas, data analytics, and the list goes on. Machine learning is a very big one. The list goes on and on.
And, so, we're investing heavily in providing solutions that make it much easier for our users to develop and deploy FPGAs in a high-performance computing environment. >> You guys are taking a lot of steps to make the software programming of FPGAs a lot easier, so you don't have to be a hardcore hardware engineer, so you can open it up to a broader ecosystem and get a broader solution set. Is that right? >> That's right, and it's not just the hardware. How do you unlock the benefits of the FPGA as a versatile accelerator, so their parallelism, their ability to do real-time, low-latency acceleration of many different workloads, and how do you enable that in an environment which is truly dynamic and multi-function, like a data center? And so, the product we've recently announced is the acceleration stack for Xeon with FPGAs, which enables that usage model. >> So, what are the components of that stack? >> It starts with hardware. So, we are building a hardware accelerator card, it's a PCI Express plug-in card, it's called the programmable accelerator card. We have integrated solutions where you have everything on an FPGA in package, but what's common is a software framework solution stack, which sits on top of these different hardware implementations, which really makes it easy for a developer to develop an accelerator, for a user to then deploy that accelerator and run it in their environment, and it also enables a data center operator to basically enable the FPGA like any other compute resource by integrating it into their orchestration framework. So, multiple levels taking care of all those needs. >> It's interesting, because there's a lot of big trends that you guys are taking advantage of. Obviously, we're at Super Computing, but big data, streaming analytics, is all the rage now, so more data faster, reading it in real time, pumping it into the database in real time, and then, right around the corner, we have IoT, the internet of things, and all these connected devices.
So the demand for increased speed, to get that data in, get that data processed, get the analytics back out, is only growing exponentially. >> That's right, and FPGAs, due to their flexibility, have distinct advantages there. The traditional model is look-aside, or offload, where you have a processor, and then you offload your tasks to your accelerator. The FPGA, with its flexible I/Os and flexible core, can actually run directly in the data path, so that's what we call in-line processing. And what that allows people to do is, whatever the source is, may it be cameras, may it be storage, may it be the network, through Ethernet, it can stream directly into the FPGA and do your acceleration as the data comes in, in a streaming way. And FPGAs provide really unique advantages there versus other types of accelerators. Low latency, very high bandwidth, and they're flexible in the sense that our customers can build different interfaces, different connectivity around those FPGAs. So, it's really amazing how versatile the usage of FPGAs has become. >> It is pretty interesting, because you're using all the benefits that come from hardware-based solutions, where you just get a lot of benefits when things are hardwired, together with the software component, enabling a broader ecosystem to write ready-made solutions and integrations into the existing solutions that they already have. Great approach. >> The acceleration stack provides a consistent interface to the developer and the user of the FPGA. What that allows our ecosystem and our customers to do is to define these accelerators based on this framework, and then they can easily migrate those between different hardware platforms, so we're building in future improvements of the solution, and the consistent interfaces then allow our customers and partners to build their software stacks on top of it.
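Bernhard's contrast between look-aside offload and in-line processing can be sketched in software terms. This is only a toy analogy with invented function names (a real FPGA data path is hardware, not Python); the point is the data-flow difference: look-aside buffers a batch and round-trips it through the accelerator, while in-line transforms each record as it streams past.

```python
# Toy data-flow analogy for the two offload models described above.
# Function names are invented for illustration; this is not FPGA code.

def lookaside_offload(records, accelerate):
    """Look-aside: the host buffers a batch, ships it to the accelerator,
    and waits for the round trip before anything moves on."""
    batch = list(records)                 # host-side buffering step
    return [accelerate(r) for r in batch]

def inline_process(records, accelerate):
    """In-line: the accelerator sits in the data path and transforms each
    record as it arrives -- no host buffering, no round trip."""
    for r in records:
        yield accelerate(r)

double = lambda x: 2 * x
print(lookaside_offload([1, 2, 3], double))           # [2, 4, 6]
print(list(inline_process(iter([1, 2, 3]), double)))  # [2, 4, 6]
```

The results match; the difference Bernhard highlights is latency and buffering, which is why in-line suits streaming sources like cameras and Ethernet.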
So, their investment, once they make it (and we target our Arria 10 programmable accelerator card), can easily be leveraged and moved forward into the next-generation strategy, and beyond. We really enable and encourage a broad ecosystem to build solutions. You'll see that here at the show: many partners now have demos, and they show their solutions built on Intel FPGA hardware and the acceleration stack. >> OK, so I'm going to put you on the spot. So, these are announced, what's the current state of the general availability? >> We're sampling now on the cards, and the acceleration stack is available for delivery to customers. A lot of it is open source, by the way, so it can already be downloaded from GitHub. And the partners are developing the solutions they are demonstrating today. The product will go into volume production in the first half of next year. So, we're very close. >> All right, very good. Well, Bernhard, thanks for taking a few minutes to stop by. >> Oh, it's my pleasure. >> All right. He's Bernhard, I'm Jeff. You're watching theCUBE from Super Computing 17. Thanks for watching. (upbeat music)
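The portability claim behind the acceleration stack, that one consistent interface lets an accelerator developed today move to future cards, is the classic abstraction-layer pattern. A minimal sketch, with every class and method name invented for illustration (this is not the real acceleration stack API):

```python
# Hypothetical sketch of a consistent accelerator interface. User code
# targets the base class, so swapping the card generation needs no changes.
# All names here are invented; they do not reflect the actual stack.

class AcceleratorCard:
    name = "generic"

    def run(self, workload: str) -> str:
        return f"{workload} on {self.name}"

class Arria10Card(AcceleratorCard):
    name = "Arria 10 card"

class NextGenCard(AcceleratorCard):
    name = "next-gen card"

def deploy(card: AcceleratorCard, workload: str) -> str:
    # Written once against the interface, reused across hardware generations.
    return card.run(workload)

print(deploy(Arria10Card(), "genomics"))  # genomics on Arria 10 card
print(deploy(NextGenCard(), "genomics"))  # genomics on next-gen card
```

This is why partner software stacks built on the consistent interface carry forward as the hardware underneath changes.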
Lisa Spelman, Intel - Google Next 2017 - #GoogleNext17 - #theCUBE
(bright music) >> Narrator: Live from Silicon Valley. It's theCUBE, covering Google Cloud Next 17. >> Okay, welcome back, everyone. We're live in Palo Alto for theCUBE's special two-day coverage here in Palo Alto. We have reporters, we have analysts on the ground in San Francisco, analyzing what's going on with Google Next, we have all the great action. Of course, we also have reporters at Open Compute Summit, which is also happening in San Jose, and Intel's at both places, and we have an Intel senior executive on the line here, on the phone: Lisa Spelman, vice president and general manager of the Xeon product line, with product management responsibility as well as marketing across the data center. Lisa, welcome to theCUBE, and thanks for calling in and dissecting Google Next, as well as teasing out maybe a little bit of OCP around the Xeon processor, thanks for calling. >> Lisa: Well, thank you for having me, and it's hard to be in many places at once, so it's a busy week and we're all over, so that's that. You know, we'll do this on the phone, and next time we'll do it in person. >> I'd love to. Well, more big news is obviously Intel has a big presence at Google Next, and tomorrow there's going to be some activity with some of the big name executives at Google. Talking about your relationship with Google, aka Alphabet, what are some of the key things that you guys are doing with Google that people should know about, because this is a very turbulent time in the ecosystem of the tech business. You saw Mobile World Congress last week, we've seen the evolution of 5G, we have network transformation going on. Data centers are moving to a hybrid cloud, in some cases, cloud native's exploding. So a whole new kind of computing environment is taking shape. What is Intel doing here at Google Next that's a proof point to the trajectory of the business?
>> Lisa: Yeah, you know, I'd like to think it's not too much of a surprise that we're there, arm in arm with Google, given all of the work that we've done together over the last several years in that tight engineering and technical partnership that we have. One of the big things that we've been working with Google on is, as they move from delivering cloud services for their own usage and for their own applications that they provide out to others, but now as they transition into being a cloud service provider for enterprises and other IT shops as well, so they've recently launched their Google Cloud platform, just in the last week or so. Did a nice announcement about the partnership that we have together, and how the Google Cloud platform is now available and running and open for business on our latest next generation Intel Xeon product, and that's codenamed Skylake, but that's something that we've been working on with them since the inception of the design of the product, so it's really nice to have it out there and in the market, and available for customers, and we very much value partnerships, like the one we have with Google, where we have that deep technical engagement to really get to the heart of the workload that they need to provide, and then can design product and solution around that. So you don't just look at it as a one off project or a one time investment, it's an ongoing continuation and evolution of new product, new features, new capabilities to continue to improve their total cost of ownership and their customer experience. >> Well, Lisa, this is your baby, the Xeon, codename Skylake, which I love that name. Intel always has great codenames, by the way, we love that, but it's real technology. 
Can you share some specific features of what's different around these new workloads because, you know, we've been teasing out over the past day and we're going to be talking tomorrow as well about these new use cases, because you're looking at a plethora of use cases, from the IoT edge all the way down into cloud native applications. What specific things is Xeon doing that's next generation that you could highlight, that points to this new cloud operating system, the cloud service providers, whether it's managed services or full blown down and dirty cloud? >> Lisa: So it is my baby, I appreciate you saying that, and it's so exciting to see it out there and starting to get used and picked up and unleashed on the world. With this next generation of Xeon, it's always about the processor, but what we've done has gone so much beyond that, so we have a ton of what we call platform level innovation that is coming in; we really see this as one of our biggest kind of step function improvements in the last 10 years that we've offered. Some of the features that we've already talked about are things like AVX-512 instructions, which I know just sounds fun and rolls off the tongue, but really it's very specific workload acceleration for things like high performance computing workloads. And high performance computing is something that we see more and more getting used and accessed in cloud style infrastructure. So it's this perfect marrying of that workload specifically deriving benefit from the new platforms, and seeing really strong performance improvements. It also speaks to the flexibility across the Intel Xeon families, 'cause remember, with Xeon, we have Xeon Phi, you've got standard Xeon, you've got Xeon D. You can use these instructions across the families and have workloads that can move to the most optimized hardware for whatever you're trying to drive.
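To put a number on the AVX-512 point: a 512-bit register holds sixteen 32-bit floats, so one vector instruction can do the work of sixteen scalar ones. The sketch below merely simulates that lane accounting in plain Python; real AVX-512 execution happens in compiled code produced by a vectorizing compiler or an optimized library.

```python
# Pure-Python simulation of why wide SIMD helps HPC loops: one "instruction"
# covers many data lanes. This only counts lanes; it is not real AVX-512.

LANES = 16  # 512 bits / 32-bit floats = 16 lanes per vector instruction

def scalar_add(a, b):
    """One element per simulated instruction."""
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b, lanes=LANES):
    """Process `lanes` elements per simulated instruction; count them."""
    out, instructions = [], 0
    for i in range(0, len(a), lanes):
        out.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
        instructions += 1
    return out, instructions

a = list(range(64))
b = list(range(64))
result, n_instr = simd_add(a, b)
assert result == scalar_add(a, b)  # same answer either way
print(n_instr)  # 4 simulated 16-lane instructions instead of 64 scalar ones
```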
Some of the other things that we've talked about and announced are that we'll have our next generation of Intel Resource Director Technology, which really helps you manage and provide quality of service within your application, which is very important to cloud service providers, giving them control over hardware and software assets so that they can deliver the best experience to their customers based on the service level agreement they've signed up for. And then the other one is Intel Omni-Path architecture, so again, a fairly high performance computing focused product; Omni-Path is a fabric, and we're going to offer that in an integrated fashion with Skylake so that you can get an even higher level of performance and capability. So we're looking forward to a lot more that we have to come; the whole of the product line will continue to roll out in the middle of this year, but we're excited to be able to offer an early version to the cloud service providers, get them started, get it out in the market and then do that full scale enterprise validation over the next several months. >> So I got to ask you the question, because this is something that's coming up, we're seeing a transition, and the digital transformation's been talked about for a while. Network transformation, IoT's all around the corner, we've got autonomous vehicles, smart cities, on and on. But I got to ask you though, the cloud service providers seem to be coming out of this show as a key storyline in Google Next as the multi cloud architectures become very clear. It's become clear, not just at this show but in the buildup to it, that it's going to be a multi cloud world. As well, you're starting to see the providers talk about their SaaS offerings: Google talking about G Suite, Microsoft talks about Office 365, Oracle has their apps, IBM's got Watson, so you have this SaaSification. So this now creates a whole other category of what cloud is.
If you include SaaS, you're really talking about Salesforce, Adobe, you know, on and on down the list; everyone is potentially going to become a SaaS provider, whether they're a unique cloud or partnering with some other cloud. What does that mean for a cloud service provider, and what do they need in terms of application support requirements to be successful?
We have multi generations that we've provided even just in the last five years to continue to drive those step function improvements and really optimize our hardware and the code that runs on top of it to make sure that it does continue to deliver on those demanding workloads. The other thing that we see the providers focusing on is what's their differentiation. So you'll see cloud service providers that will look through the various silicon features that we offer and choose, they'll pick and choose based on whatever their key workload is or whatever their key market is, and really kind of hone in and optimize for those silicon features so that they can have a differentiated offering into the market about what capabilities and services they'll provide. So it's an area where we continue to really focus our efforts, understand the workload, drive the TCO down, and then focus in on the design point of what's going to give that differentiation and acceleration. >> It's interesting, the definition's also where I would agree with you, the cloud service provider is a huge market when you even look at the SaaS. 'Cause whether you're talking about Uber or Netflix, for instance, examples people know about in real life, you can't ignore these new diverse use cases coming out. For instance, I was just talking with Stu Miniman, one of our analysts here, Wikibon, and Riot Games could be considered a cloud, right, I mean, 'cause it's a SaaS platform, it's gaming. You're starting to see these new apps coming out of the woodwork. There seems to be a requirement for being agile as a cloud provider. How do you enable that, what specifically can you share, if I'm a cloud service provider, to be ready to support anything that's coming down the pike? 
>> Lisa: You know, we do do a lot of workload and market analysis inside of Intel and the data center group, and then if you have even seen over the past five years, again, I'll just stick with the new term, how much we've expanded and broadened our product portfolio. So again, it will still be built upon that foundation of Xeon and what we have there, but we've gone to offer a lot of varieties. So again, I mentioned Xeon Phi. Xeon Phi at the 72 cores, bootable Xeon but specific workload acceleration targeted at high performance computing and other analytics workloads. And then you have things at the other end. You've got Xeon D, which is really focused at more frontend web services and storage and network workloads, or Atom, which is even lower power and more focused on cold and warm storage workloads, and again, that network function. So you could then say we're not just sticking with one product line and saying this is the answer for everything, we're saying here's the core of what we offer, and the features people need, and finding options, whether they range from low power to high power high performance, and kind of mixed across that whole kind of workload spectrum, and then we've broadened around the CPU into a lot of other silicon innovation. So I don't know if you guys have had a chance to talk about some of the work that we're doing with FPGAs, with our FPGA group and driving and delivering cloud and network acceleration through FPGAs. We've also introduced new products in the last year like Silicon Photonics, so dealing with network traffic crossing through-- >> Well, is FPGA, that's the Altera stuff, we did talk with them, they're doing the programmable chips. 
>> Lisa: Exactly, so it requires a level of sophistication and understanding what you need the workload to accelerate, but once you have it, it is a very impressive and powerful performance gain for you, so the cloud service providers are a perfect market for that, as are the cloud service providers because they have very sophisticated IT and very technically astute engineering teams that are able to really, again, go back to the workload, understand what they need and figure out the right software solution to pair with it. So that's been a big focus of our targeting. And then, like I said, we've added all these different things, different new products to the platform that start to, over time, just work better and better together, so when you have things like Intel SSD there together with Intel CPUs and Intel Ethernet and Intel FPGA and Intel Silicon Photonics, you can start to see how the whole package, when it's designed together under one house, can offer a tremendous amount of workload acceleration. >> I got to ask you a question, Lisa, 'cause this comes up, while you're talking, I'm just in my mind visualizing a new kind of virtual computer server, the cloud is one big server, so it's a design challenge. And what was teased out at Mobile World Congress that was very clear was this new end to end architecture, you know, re-imagined, but if you have these processors that have unique capabilities, that have use case specific capabilities, in a way, you guys are now providing a portfolio of solutions so that it almost can be customized for a variety of cloud service providers. Am I getting that right, is that how you guys see this happening where you guys can just say, "Hey, just mix and match what you want and you're good." 
>> Lisa: Well, and we try to provide a little bit more guidance than as you wish, I mean, of course, people have their options to choose, so like, with the cloud service providers, that's what we have, really tight engineering engagement, so that we can, you know, again, understand what they need, what their design point is, what they're honing in on. You might work with one cloud service provider that is very facilities limited, and you might work with another one that is, they're face limited, the other one's power limited, and another one has performance is king, so you can, we can cut some SKUs to help meet each of those needs. Another good example is in the artificial intelligence space where we did another acquisition last year, a company called Nervana that's working on optimized silicon for a neural network. And so now we have put together this AI portfolio, so instead of saying, "Oh, here's one answer "for artificial intelligence," it's, "Here's a multitude of answers where you've got Xeon," so if you have, I'm going to utilize capacity, and are starting down your artificial intelligence journey, just use your Xeon capacity with an optimized framework and you'll get great results and you can start your journey. If you are monetizing and running your business based on what AI can do for you and you are leading the pack out there, you've got the best data scientists and algorithm writers and peak running experts in the world, then you're going to want to use something like the silicon that we acquired from the Nervana team, and that codename is Lake Crest, speaking of some lakes there. And you'll want to use something like Xeon with Lake Crest to get that ultimate workload acceleration. So we have the whole portfolio that goes from Xeon to Xeon Phi to Xeon with FPGAs or Xeon with Lake Crest. Depending on what you're doing, and again, what your design point is, we have a solution for you. 
And of course, when we say solution, we don't just mean hardware, we mean the optimized software frameworks and the libraries and all of that, that actually give you something that can perform. >> On the competitive side, we've seen the processor landscape heat up on the server and the cloud space. Obviously, whether it's from a competitor or homegrown foundry, whatever fabs are out there, I mean, so Intel's always had a great partnership with cloud service providers. Vis-a-vis the competition and context to that, what are you guys doing specifically and how you'd approach the marketplace in light of competition? >> Lisa: So we do operate in a highly competitive market, and we always take all competitors seriously. So far we've seen the press heat up, which is different than seeing all of the deployments, so what we look for is to continue to offer the highest performance and lowest total cost of ownership for all our customers, and in this case, the cloud service providers, of course. And what do we do is we kind of stick with our game plan of putting the best silicon in the world into the market on a regular beat rate and cadence, and so there's always news, there's always an interesting story, but when you look at having had eight new products and new generations in market since the last major competitive x86 product, that's kind of what we do, just keep delivering so that our customers know that they can bet on us to always be there and not have these massive gaps. And then I also talked to you about portfolio expansion, we don't bet on just one horse, we give our customers the choice to optimize for their workloads, so you can go up to 72 cores with Xeon Phi if that's important, you can go as low as two cores with Atom, if that's what works for you. Just an example of how we try to kind of address all of our customer segments with the right product at the right time. 
>> And IoT certainly brings a challenge too, when you hear about network edge, that's a huge, huge growth area, I mean, you can't deny that that's going to be amazing, you look at the cars are data centers these days, right? >> Lisa: A data center on wheels. >> Data center on wheels. >> Lisa: That's one of the fun things about my role, even in the last year, is that growing partnership, even inside of Intel with our IoT team, and just really going through all of the products that we have in development, and how many of them can be reused and driven towards IoT solution. The other thing is, if you look into the data center space, I genuinely believe we have the world's best ecosystem, you can't find an ISV that we haven't worked with to optimize their solution to run best on Intel architecture and get that workload acceleration. And now we have the chance to put that same playbook into play in the IoT space, so it's a growing, somewhat nascent but growing market with a ton of opportunity and a ton of standards to still be built, and a lot of full solution kits to be put together. And that's kind of what Intel does, you know, we don't just throw something out to the market and say, "Good luck," we actually put the ecosystem together around it so that it performs. But I think that's kind of what you see with, I don't know if you guys saw our Intel GO announcement, but it's really like the software development kit and the whole product offering for what you need for truly delivering automated vehicles. 
>> Well, Lisa, I got to say, so you guys have a great formula, why fix what's not broken, stay with Moore's law, keep that cadence going, but what's interesting is you are listening and adapting to the architectural shifts, which is smart, so congratulations and I think, as the cloud service provider world changes, and certainly in the data center, it's going to be a turbulent time, but a lot of opportunity, and so good to have that reliability and, if you can make the software go faster then they can write more software faster, so-- >> Lisa: Yup, and that's what we've seen every time we deliver a step function improvement in performance, we see a step function improvement in demand, and so the world is still hungry for more and more compute, and we see this across all of our customer bases. And every time you make that compute more affordable, they come up with new, innovative, different ways to do things, to get things done and new services to offer, and that fundamentally is what drives us, is that desire to continue to be the backbone of that industry innovation. >> If you could sum up in a bumper sticker what that step function is, what is that new step function? >> Lisa: Oh, when we say step functions of improvements, I mean, we're always looking at targeting over 20% performance improvement per generation, and then on top of that, we've added a bunch of other capabilities beyond it. So it might show up as, say, a security feature as well, so you're getting the massive performance improvement gen to gen, and then you're also getting new capabilities like security features added on top. So you'll see more and more of those types of announcements from us as well where we kind of highlight the, not just the performance but that and what else comes with it, so that you can continue to address, you know, again, the growing needs that are out there, so all we're trying to say is, day a step ahead. 
>> All right, Lisa Spelman, VP and GM of the Xeon product family as well as data center marketing. Thank you for spending the time and sharing your insights on Google Next, and giving us a peek at the portfolio of the Xeon next generation, really appreciate it, and again, keep on bringing that power, Moore's law, more flexibility. Thank you so much for sharing. We're going to have more live coverage here in Palo Alto after this short break. (bright music)
Chuck Tato, Intel - Mobile World Congress 2017 - #MWC17 - #theCUBE
>> Narrator: Live from Silicon Valley, it's theCUBE. Covering Mobile World Congress 2017. Brought to you by Intel. >> Okay, welcome back everyone, we're here live in Palo Alto for day two of two days of Mobile World Congress special coverage here in Palo Alto, where we're bringing all the folks in Silicon Valley here in the studio to analyze all the news and commentary, which we've been watching closely on the ground in Barcelona. We have reporters, we have analysts, and we have friends there, of course, Intel is there as well as SAP, and a variety of other companies we've been talking to on the phone and all those interviews are on YouTube.com/siliconANGLE. And we're here with Chuck Tato, who's the marketing director for data center and communications at Intel, working on FPGAs, the programmable chips, formerly with the Altera Group, now a part of Intel, welcome to theCUBE, and thanks for coming on. >> Thank you for having me. >> So, Mobile World Congress, all the rage, Intel making a big splash, and you guys have been, I mean, Intel has always been the bellwether. I was saying this earlier, Intel plays the long game. You have to in the chips game. You got to build the factories, build fabs. Most of all, Intel has been the heartbeat of the industry, but now it's doing more with chips, making them smaller, faster, cheaper, or less expensive, and just more power. The cloud does that. So you're in the cloud data center group. Take a second to talk about what you guys do within Intel, and why that's important for folks to understand. >> Sure. I'm part of the programmable solutions group. So the programmable solutions group primarily focuses on field programmable gate array technology that came to Intel through the Altera acquisition. So our focus in my particular group is around data center and comms infrastructure.
So there, what we're doing is we're taking the FPGAs and we're applying them to the data center as well as carrier infrastructure to accelerate things, make them faster, make them more repeatable, or more deterministic in nature. >> And so, that's how it works, as you were explaining beforehand, kind of, you can send a stream of bits at it and it changes the functionality of the chip. >> Yes. So essentially, an FPGA, think of it as a malleable set of resources. When I say that, you know, you can create, it's basically a fabric with many resources in an array. So through the use of a bit stream, you can actually program that fabric to interconnect the different elements of the chip to create any function that you would like, for the most part. So think of it as you can create a switch, you can create a classification engine, and things like that. >> And why would someone want that functionality versus just a purpose-built chip? >> Perfect question. So if you look at, there's two areas. So in the data center, as well as in carrier infrastructure, the workloads are changing constantly. And there's two problems. Number one, you could create infrastructure that becomes stranded. You know, you think you're going to have so much traffic of a certain type and you don't. So you end up buying a lot of purpose-built equipment that's just wrong for what you need going forward. So by building infrastructure that is common, so it's kind of COTS, you know, on servers, but adding FPGAs to the mix allows you to reconfigure the networking within the cloud, to allow you to address workloads that you care about at any given time. >> Adaptability seems to be the key thing. You know kind of trends based upon certain things, and certainly the first time you see things, you've got to figure it out. But this gives a lot of flexibility, it sounds like. >> Exactly. Adaptability is the key, as well as bandwidth, and determinism, right?
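The "bitstream programs the fabric" idea described here can be sketched with a toy software analogy. This is purely illustrative Python, not real FPGA tooling: the "fabric" is a set of simple processing elements, and the "bitstream" is just a config list that wires them into a pipeline. Reloading the config changes the function without swapping hardware, which is the flexibility being described.

```python
# Toy analogy of a reconfigurable fabric (illustrative only, not FPGA code):
# named processing elements that a "bitstream" (a config list) wires together.
ELEMENTS = {
    "parse":  lambda pkt: pkt.strip().split(","),        # split raw input into fields
    "filter": lambda fields: [f for f in fields if f],   # drop empty fields
    "count":  lambda fields: len(fields),                # simple aggregation
    "upper":  lambda fields: [f.upper() for f in fields] # normalize fields
}

def load_bitstream(bitstream):
    """Return a pipeline function wired according to the 'bitstream' (element names)."""
    stages = [ELEMENTS[name] for name in bitstream]
    def pipeline(data):
        for stage in stages:
            data = stage(data)
        return data
    return pipeline

# "Program" the fabric as a field counter...
counter = load_bitstream(["parse", "filter", "count"])
print(counter("a,b,,c"))      # prints 3 (non-empty fields)

# ...then reprogram the same fabric as a normalizer, no new "hardware" needed.
normalizer = load_bitstream(["parse", "filter", "upper"])
print(normalizer("a,b,,c"))   # prints ['A', 'B', 'C']
```

The point of the analogy: the same pool of resources serves two different functions depending on the configuration loaded into it, rather than buying one purpose-built device per function.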
So when you get a high bandwidth coming into the network, and you want to do something very rapidly and consistently to provide a certain service level agreement, you need to have circuits that are actually very, very deterministic in nature. >> Chuck, I want to get your thoughts on one of the key things. I talked with Sandra Reddy, Sandra Rivera, sorry, she was, I interviewed her this morning, as well as Dan Rodriguez, and Caroline Chan, Lyn Comp as well. Lot of different perspectives. I see 5G as big on one hand, with the devices out there announced on Sunday. But what was missing, and I think Fortune was really the only one I saw pick up on this besides SiliconANGLE, in terms of the coverage was, there's a real end-to-end discussion here around not just the 5G as the connectivity piece that the carriers care about, but there's the under-the-hood work that's changing in the Data Center. And the car's a data center now, right? >> Yeah. >> So you have all these new things happening, IOT, people with sensors on them, and devices, and then you've got the cloud-ready compute available, right? And we love what's happening with cloud. Infinite compute is there and makes data work much better. How does the end-to-end story with Intel, and the group that you're in, impact that and what are some of the use cases that seem to be popping up in that area. >> Okay, so that's a great question, and I guess some of the examples that I could give of where we're creating end-to-end solutions would be in wireless infrastructure, as you just mentioned. As you move on to 5G infrastructure, the goal is to increase the bandwidth by 100X and reduce the latency by orders of magnitude. It's a very, very significant challenge. To do that is quite difficult, to do it just in software. FPGA is a perfect complement to a software-based solution to achieve these goals. For example, virtual switching. It's a significant load on the processors.
By offloading virtual switching in an FPGA, you can create the virtual switch that you need for the particular workload that you need. Workloads change, depending on what type of services you're offering in a given area. So you can tailor it to exactly what you need. You may or may not need high levels of security, so things like IPsec, you know, at full line rate, are the kind of things that FPGAs allow you to add ad hoc. You can add them where you need them, when you need them, and change them as the services change. >> It sounds like, I'd never thought about that, but it sounds like this is a real architectural advantage, because I'd never thought about offloading the processor, and all of us who open up or build our PCs know that the heat sinks only get bigger and bigger, so that people want that horsepower for very processor-intensive things. >> Absolutely. So we do two things. One is we do create this flexible infrastructure, the second thing is we offload the processor for things that, you know, free up cores to do more value-added things. >> Like gaming for, my kids love to see that gaming. >> Yes. There's gaming, virtual reality, augmented virtual reality, all of those things are very CPU intensive, but there's also a compute-intensive aspect. >> Okay, so I've got to get your take on this. This is kind of a cool conversation because that's, the virtual reality and augmented reality really are relevant. That is a key part of Mobile World Congress, besides the IOT, which I think is the biggest story this year, is IOT, and all the security aspects of it around, and all that good stuff. And that's really where the meat is, but the real sex appeal is the virtual reality and augmented reality. That's an example of the new things that have popped out of the woodwork, so the question for you is, for all these new use cases that emerge, there will be new things that pop out of the woodwork.
"Oh, my God, I don't have to write software for that, there's an app for that now." So the new apps are going to start coming in, whether it's something new and cool on a car, something new and cool on a sensor, something new and cool in the data center. How adaptive are you guys and how do you guys kind of fit into that kind of preparing for this unknown future? >> Well, that's a great question, too. I like to think about new services coming forward as being a unique blend of storage, compute, and networking, and depending on the application and the moment in that application, you may have to change that mix in a very flexible way. So again, the FPGA provides you the ability to change all of those to match the application needs. I'm surprised as we dig into applications, you know, how many different sets of needs there are. So each time you do that, you can envision reprogramming your FPGA. So just like a processor, it's completely reprogrammable. You're not going to reprogram it in the same instantaneous way that you do in software, but you can reprogram it on the fly, whatever you would like. >> So, I'm kind of a neophyte here, so I want to ask some dumb questions, probably be dumb to you, but common to me, but would be like, okay, who writes bits? Is it the coders or is it someone on the firmware side, I'm trying to understand where the line is between that hardened top of kind of Intel goodness that goes on algorithmically or automatically, or what programmers do. So think full-stack developer, or a composer, a more artisan type who's maybe writing an app. Are there both access points to the coding, or is it, where's the coding come from? >> So there's multiple ways that this is happening. The traditional way of programming FPGA is the same way that you would design any ASIC in the industry, right? Somebody sits down and they write RTL, they're very specialized programmers. However, going forward, there's multiple ways you can access it.
For one, we're creating libraries of solutions that you can access through APIs that are built into DPDK, for example on Xeon. So you can very easily access accelerated applications and inline applications that are being developed by ourselves as well as third parties. So there's a rich ecosystem. >> So you guys are writing hooks that go beyond the ASIC-specialist type of programming. >> Absolutely. So this makes it very accessible to programmers. The acceleration is there from a library, purpose-built. >> Give me an example, if you can. >> Sure, virtual switch. So in our platform for NFV, we're building in a virtual switch solution, and you can program that just like, you know, totally in software through DPDK. >> One of the things that's coming up with NFV that's interesting, I don't know if this is your wheelhouse or not, but I want to throw it out there because it's come up in multiple interviews and in the industry. You're seeing very cool ideas and solutions roll out, and I'll give, you know, I'll make one up off the top of my head, OpenStack. OpenStack is a great, great vision, but there's a lot of fumbling in the execution of it and the cost of ownership goes through the roof because there's a lot of operation, I'm overgeneralizing a certain use case, not all OpenStack, but generally speaking, I do have the same problem with big data where, great solution-- >> Uh-huh. >> But when you lay out the architecture and then deploy it, there's a lot of cost of ownership overhead in terms of resources. So is this kind of an area that you guys can help simplify, 'cause that seems to be a sticking point for people who want to stand up some infrastructure and do dev ops and then get into this API-like framework. >> Yes, from a hardware perspective, we're actually creating a platform, which includes a lot of software to tie into OpenStack. So that's all preintegrated for you, if you will.
So at least from a hardware interface perspective, I can say that that part of the equation gets neutralized. In terms of the rest of the ownership part, I'm not really qualified to answer that question. >> That's good media training, right there. Chuck just came back from Intel media training, which is good. We got you fresh. Network transformation also points to some really cool, exciting areas that are going on that are really important. The network layer, you see NFV and SDN, for instance, those are really important areas that people are innovating on, and they're super important because, again, this is where the action is. You have virtualization, you have new capabilities, you've got some security things going down lower in the stack. What's the impact there from an Intel perspective, helping this end-to-end architecture be seamless? >> Sure. So what we are doing right now is creating a layer on top of our FPGA-based SmartNIC solutions, which ties together all of that into a single platform, and it cuts across multiple Intel products. We have, you know, Xeon processors integrated with FPGAs, we have discrete FPGAs built onto cards that we are in the process of developing. So from a SmartNIC through to a fully-integrated FPGA plus Xeon processor is one common framework. One common way of programming the FPGA, so IP can move from one to the other. So there's a lot of very neat end-to-end and seamless capabilities. >> So the final question is the customer environment. I would say you guys have a lot of customers out there. The edge computing is a huge thing right now. We're seeing that as a big part of this, kind of, the clarity coming out of Mobile World Congress, at least from the telco standpoints, it's kind of not new in the data center area. The edge now is redefined. Certainly with IOT-- >> Yes. >> And IoTP, which is what we're calling IoT plus people, for people having devices. What are the customer challenges right now, that you are addressing.
Specifically, what are the pain points and what's the current state-of-the-art relative to the customer's expectations now, that they're focused on, that you guys are solving. >> Yeah, that's a great question, too. We have a lot of customers now that are taking transmission equipment, for example, mobile backhaul types of equipment, and they want to add mobile edge computing and NFV-type capabilities to that equipment. The beauty of what we're doing is that the same solution that we have for the cloud works just as well in that same piece of equipment. FPGAs come in all different sizes, so you can fit within your power envelope, and processors come in all different sizes. So you can tailor your solution-- >> That's super important on the telco side. I mean, power is huge. >> Yes, yes, and FPGAs allow you to tailor the power equation as much as possible. >> So the question, I think the next question is, does this make it cloud-ready, because that's a term that we've been hearing a lot of. Cloud-ready. Cause that sounds like what you're offering is the ability to kind of tie into the same stuff that the cloud has, or the data center. >> Yes, exactly. In fact, you know, there's been very high profile press around the use of FPGAs in cloud infrastructure. So we're seeing a huge uptick there. So it is getting cloud-ready. I wouldn't say it's perfectly there, but we're getting very close. >> Well the thing that's exciting to me, I think, is the cloud native movement really talks about, again, you know, these abstractions with micro services, and you mentioned the APIs, really fits well into some of the agileness that needs to happen at the network layer, to be more dynamic. I mean, just think about the provisioning of IOT. >> Chuck: Yeah. >> I mean, I'm a telco, I got to provision a phone, it's got to get a phone number, connect on the network, and then have sessions go to the base station, and then back to the cloud.
Imagine having to provision up and down zillions of times those devices that may get provisioned once and go away in an hour. >> Right. >> That's still challenging, given the network fabric. >> Yes. It is going to be a challenge, but I think as common as we can make the physical infrastructure, the better and the easier that's going to be, and as we create more common-- >> Chuck, final question, what's your take from Mobile World Congress? What are you hearing, what's your analysis, commentary, any kind of input you've heard? Obviously, Intel's got a big presence there, your thoughts on what's happening at Mobile World Congress. >> Well, see I'm not at Mobile World Congress, I'm here in Silicon Valley right now, but-- >> John: What have you heard? >> Things are very exciting. I'm mostly focused on the NFV world myself, and there's been just lots and lots of-- >> It's been high profile. >> Yes, and there's been lots of activity, and you know, we've been doing demos and really cool stuff in that area. We haven't announced much of that on the FPGA side, but I think you'll be seeing more-- >> But you're involved, so what's the coolest thing in NFV that you're seeing, because it seems to be crunch time for NFV right now. This is a catalyst point where at least, from my covering NFV, and looking at it, the iterations of it, it's primetime right now for NFV, true? >> Yeah, it's perfect timing, and it's actually perfect timing for FPGA. I'm not trying to just give it a plug. When you look at it, trials have gone on, very significant, lots of learnings from those trials. What we've done is we've identified the bottlenecks, and my group has been working very hard to resolve those bottlenecks, so we can scale and roll out in the next couple of years, and be ready for 5G when it comes.
>> Software definer, Chuck Tato, here from Intel, inside theCUBE, breaking down the coverage from Mobile World Congress. As we wind down our day in California, the folks in Spain are just going out. It should be like 12:00 o'clock at night there, and they're going to bed, depending on how beat they are. Again, it's in Barcelona, Spain, it's where it's at. We're covering from here and also talking to folks in Barcelona. We'll have more commentary here in Silicon Valley on Mobile World Congress after this short break. (techno music)
Ziya Ma, Intel - Spark Summit East 2017 - #sparksummit - #theCUBE
>> [Narrator] Live from Boston Massachusetts. This is the Cube, covering Sparks Summit East 2017. Brought to you by Databricks. Now here are your hosts, Dave Alante and George Gilbert. >> Back to you Boston everybody. This is the Cube and we're here live at Spark Summit East, #SparkSummit. Ziya Ma is here. She's the Vice President of Big Data at Intel. Ziya, thanks for coming to the Cube. >> Thanks for having me. >> You're welcome. So software is our topic. Software at Intel. You know people don't necessarily associate Intel with always with software but what's the story there? >> So actually there are many things that we do for software. Since I manage the Big Data engineering organization so I'll just say a little bit more about what we do for Big Data. >> [Dave] Great. >> So you know Intel do all the processors, all the hardware. But when our customers are using the hardware, they like to get the best performance out of Intel hardware. So this is for the Big Data space. We optimize the Big Data solution stack, including Spark and Hadoop on top of Intel hardware. And make sure that we leverage the latest instructions set so that the customers get the most performance out of the newest released Intel hardware. And also we collaborated very extensively with the open source community for Big Data ecosystem advancement. For example we're a leading contributor to Apache Spark ecosystem. We're also a top contributor to Apache Hadoop ecosystem. And lately we're getting into the machine learning and deep learning and the AI space, especially integrating those capabilities into the Big Data eTcosystem. >> So I have to ask you a question to just sort of strategically, if we go back several years, you look at during the Unix days, you had a number of players developing hardware, microprocessors, there were risk-based systems, remember MIPS and of course IBM had one and Sun, et cetera, et cetera. Some of those live on but very, very small portion of the market. 
So Intel has dominated the general purpose market. So as Big Data became more mainstream, was there a discussion okay, we have to develop specialized processors, which I know Intel can do as well, or did you say, okay, we can actually optimize through software. Was that how you got here? Or am I understanding that? >> We believe definitely software optimization, optimizing through software is one thing that we do. That's why Intel actually have, you may not know this, Intel has one of the largest software divisions that focus on enabling and optimizing the solutions in Intel hardware. And of course we also have very aggressive product roadmap for advancing continuously our hardware products. And actually, you mentioned a general purpose computing. CPU today, in the Big Data market, still has more than 95% of the market. So that's still the biggest portion of the Big Data market. And will continue our advancement in that area. And obviously as the Ai and machine learning, deep learning use cases getting added into the Big Data domain and we are expanding our product portfolio into some other Silicon products. >> And of course that was kind of the big bet of, we want to bet on Intel. And I guess, I guess-- >> You should still do. >> And still do. And I guess, at the time, Seagate or other disk mounts. Now flash comes in. And of course now Spark with memory, it's really changing the game, isn't it? What does that mean for you and the software group? >> Right, so what do we... Actually, still we focus on the optimi-- Obviously at the hardware level, like Intel now, is not just offering the computing capability. We also offer very powerful network capability. We offer very good memory solutions, memory hardware. Like we keep talking about this non-volatile memory technologies. So for Big Data, we're trying to leverage all those newest hardware. 
And we're already working with many of our customers to help them, to improve their Big Data memory solution, the in-memory analytics type of capability on Intel hardware, give them the most optimum performance and most secure result using Intel hardware. So that's definitely one thing that we continue to do. That's still going to be our top priority. But we don't just limit our work to optimization. Because giving users the best experience, giving users the complete experience on the Intel platform is our ultimate goal. So we work with our customers from financial services companies. We work with folks from manufacturing. From transportation. And from other IOT, internet of things, segments. And to make sure that we give them the easiest Big Data analytics experience on Intel hardware. So when they are running those solutions they don't have to worry too much about how to make their application work with Intel hardware, and how to make it more performant with Intel hardware. Because that's the Intel software solution that's going to bridge the gap. We do that part of the job. And so that it will make our customers' experience easier and more complete. >> You serve as the accelerant to the marketplace. Go ahead George. >> [Ziya] That's right. >> So Intel's BigDL is the new product, as of the last month or so, an open source solution. Tell us how there are other deep learning frameworks that aren't as fully integrated with Spark yet and where BigDL fits in since we're at a Spark conference. How it backfills some functionality and how it really takes advantage of Intel hardware. >> George, just like you said, BigDL, we just open sourced a month ago. It's a deep learning framework that we organically built on top of Apache Spark. And it has quite some differences from the other mainstream deep learning frameworks like Caffe, TensorFlow, Torch and Theano, you name it.
The reason that we decided to work on this project was again, through our experience, working with our analytics, especially Big Data analytics customers, as they build their AI solutions or AI modules within their analytics application, it's funny, it's getting more and more difficult to build and integrate AI capability into their existing Big Data analytics ecosystem. They had to set up a different cluster and build a different set of AI capabilities using, let's say, one of the deep learning frameworks. And later they have to overcome a lot of challenges, for example, moving the model and data between the two different clusters and then make sure the AI result is getting integrated into the existing analytics platform or analytics application. So that was the primary driver. How do we make our customers' experience easier? Do they have to leave their existing infrastructure and build a separate AI module? And can we do something organic on top of the existing Big Data platform, let's say Apache Spark? Can we just do something like that? So that the user can just leverage the existing infrastructure and make it a naturally integral part of the overall analytics ecosystem that they already have. So this was the primary driver. And also the other benefit that we see by integrating this BigDL framework naturally with the Big Data platform, is that it enables efficient scale-out and fault tolerance and elasticity and dynamic resource management. And those are the benefits that are naturally brought by the Big Data platform. And today, actually, just with this short period of time, we have already tested that BigDL can scale easily to tens or hundreds of nodes. So the scalability is also quite good. And another benefit with a solution like BigDL, especially because it eliminates the need of setting up a separate cluster and moving the model between different hardware clusters, you save your total cost of ownership. You can just leverage your existing infrastructure.
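The scale-out training being described, where the model trains on the same cluster that already holds the data, can be sketched in plain Python. This is a hypothetical stand-in for the kind of synchronous data-parallel training a framework like BigDL runs over Spark partitions; it is not BigDL's actual API, and all names and numbers are illustrative.

```python
# Sketch of data-parallel training, assuming equal-size partitions:
# each "partition" (think: data already resident on a worker) computes a
# local gradient; the driver averages them and updates the shared weight.
partitions = [
    [(1.0, 2.0), (2.0, 4.0)],    # (x, y) samples drawn from y = 2x
    [(3.0, 6.0), (4.0, 8.0)],
    [(5.0, 10.0), (6.0, 12.0)],
]

def local_gradient(partition, w):
    """Gradient of mean squared error for the model y ~ w*x on one partition."""
    return sum(2 * (w * x - y) * x for x, y in partition) / len(partition)

w, lr = 0.0, 0.01
for _ in range(60):                                       # synchronous training rounds
    grads = [local_gradient(p, w) for p in partitions]    # "map" step on each partition
    w -= lr * sum(grads) / len(grads)                     # "reduce" step, then update

print(round(w, 4))   # converges to the true slope, ~2.0
```

The design point this illustrates is the one made above: the training loop lives inside the data platform's map/reduce pattern, so no second cluster is needed and no model or data has to move between environments.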
There is no need to buy an additional set of hardware and build another environment just for training the model. So that's another benefit that we see. And performance-wise, again we also tested BigDL with Caffe, Torch and TensorFlow. So the performance of BigDL on single node Xeon is orders of magnitude faster than out-of-box open source Caffe, TensorFlow or Torch. So it definitely is going to be very promising. >> Without the heavy lifting. >> And useful solution, yeah. >> Okay, can you talk about some of the use cases that you expect to see from your partners and your customers. >> Actually very good question. You know we already started a few engagements with some of the interested customers. The first customer is from the steel industry, where improving the accuracy of steel-surface defect recognition is very important to its quality control. So we worked with this customer in the last few months and built an end-to-end image recognition pipeline using BigDL and Spark. And the customer, just through phase one work, already improved its defect recognition accuracy to 90%. And they're seeing a very good yield improvement with steel production. >> And it used to be done by humans? >> It used to be done by humans, yes. >> And you said, what was the degree of improvement? >> 90, nine, zero. So now the accuracy is up to 90%. And another use case is in financial services actually, especially for fraud detection. So this customer, again, at the customer's request, they're very sensitive, the financial industry, they're very sensitive with releasing their name. So the customer was seeing its fraud risks increasing tremendously, with its wide range of products, services and customer interaction channels. So they implemented an end-to-end deep learning solution using BigDL and Spark. And again, through phase one work, they are seeing the fraud detection rate improved 40 times, four, zero times. Through phase one work.
We think there's more improvement that we can make, because this was just a collaboration over the last few months, and we'll continue this collaboration with this customer. And we expect more use cases from other business segments. But those are the two that already have BigDL running in production today. >> Well, the first one, that's amazing. Essentially replacing the human who had to interact, and being much more accurate. The fraud detection is interesting, because fraud detection has come a long way in the last 10 years, as you know. It used to take six months if they found fraud. And now it's minutes, seconds, but there's still a lot of false positives. So do you see this technology helping address that problem? >> Yeah, actually, continuously improving the prediction accuracy is one of the goals. This is another reason why we need to bring AI and Big Data together. Because you need to train your model, you need to train your AI capabilities, with more and more training data, so that you get much better accuracy. Actually, this is the biggest lever for improving your accuracy. So you need a huge infrastructure, a Big Data platform, so that you can host and manage your training data sets well, and so that they can feed into your deep learning solution or module for continuously improving accuracy. So yes. >> This is a really key point, it seems like. I would like to unpack that a little bit. So when we talk to customers and application vendors, it's that training feedback loop that gets the models smarter and smarter. So if you had one cluster for training that was with another framework, and then Spark was your... rest of your analytics, how would training with feedback data work when you had two separate environments? >> You know, that's one of the drivers why we're creating BigDL. Because we tried to port... we did not come to BigDL at the very beginning.
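The training feedback loop described here can be illustrated with a toy sketch in plain Python. None of this is BigDL's actual API; the "transactions", the threshold "model", and all names are hypothetical stand-ins, meant only to show newly labeled data flowing back into retraining, round after round:

```python
import random

random.seed(0)

def make_sample():
    # Hypothetical labeled transaction: one feature x in [0, 1);
    # roughly "fraud" when x > 0.5, with 10% label noise.
    x = random.random()
    flipped = random.random() < 0.1
    return x, (x > 0.5) != flipped

def train(data):
    # Toy "model": a decision threshold halfway between the class means.
    fraud = [x for x, y in data if y]
    legit = [x for x, y in data if not y]
    if not fraud or not legit:
        return 0.5
    return (sum(fraud) / len(fraud) + sum(legit) / len(legit)) / 2

def accuracy(threshold, data):
    return sum((x > threshold) == y for x, y in data) / len(data)

test_set = [make_sample() for _ in range(2000)]
data, accs = [], []
for round_no in range(3):
    # Feedback loop: each round, newly labeled examples flow back into training.
    data += [make_sample() for _ in range(50 * (round_no + 1))]
    model = train(data)
    accs.append(accuracy(model, test_set))
    print(f"round {round_no}: n={len(data)} accuracy={accs[-1]:.3f}")
```

In a real deployment each round would be a distributed training job over the accumulated data set, which is why hosting the growing training data and the training itself on one Big Data platform matters.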
We tried to port the existing deep learning frameworks like Caffe and TensorFlow onto Spark. And you probably also saw some research papers, folks; there are other teams out there also trying to port Caffe, TensorFlow and other deep learning frameworks onto Spark, because you have that need. You need to bring the two capabilities together. But the problem is that those systems were developed in a very traditional way, with Big Data not yet in consideration when those frameworks were created. But now the need for converging the two becomes more and more clear, and more necessary. And that's why, when we ported them over, we said, gosh, this is so difficult. First, it's very challenging to integrate the two. And secondly, the experience, after you've moved it over, is awkward. You're literally using Spark as a dispatcher. The integration is not coherent; it's like they're superficially integrated. So this is where we said, we've got to do something different. We cannot just superficially integrate two systems together. Can we do something organic on top of the Big Data platform, on top of Apache Spark, so that the integration between the training system, the feature engineering, and the data management can be more consistent, more integrated? So that's exactly the driver for this work. >> That's huge. Seamless integration is one of the most overused phrases in the technology business. Superficial integration is maybe a better description for a lot of those so-called seamless integrations. You're claiming here that it's truly seamless integration. We're out of time, but last word: Intel and Spark Summit. What do you guys have going here? What's the vibe like? >> So actually, tomorrow I have a keynote. I'm going to talk a little bit more about what we're doing with BigDL. Actually, this is one of the big things that we're doing.
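The "organic" integration described here, with feature engineering and training sharing one data representation rather than exporting data between clusters, can be sketched schematically. This is not Spark or BigDL code; every name and the toy logistic update are hypothetical stand-ins for the idea of one pipeline over one dataset:

```python
import math

# One shared in-memory dataset stands in for a Spark RDD/DataFrame.
# Feature engineering and training operate on it directly, instead of
# exporting files to a separate deep-learning cluster and back.
raw_records = [{"amount": a, "label": a > 100} for a in (5, 50, 150, 500)]

def engineer(record):
    # Feature engineering step (same platform, same data structures).
    # The constant 1.0 is a bias feature.
    return {"features": [record["amount"] / 1000.0, 1.0],
            "label": record["label"]}

def predict(model, example):
    z = sum(w * x for w, x in zip(model, example["features"]))
    return 1.0 / (1.0 + math.exp(-z))

def train_step(model, example, lr=0.5):
    # Toy logistic-regression update, standing in for a real training layer.
    err = float(example["label"]) - predict(model, example)
    return [w + lr * err * x for w, x in zip(model, example["features"])]

examples = [engineer(r) for r in raw_records]  # on Spark: rdd.map(engineer)
model = [0.0, 0.0]
for _ in range(200):
    for ex in examples:
        model = train_step(model, ex)
```

The point of the sketch is the shape: `engineer` and `train_step` consume the same in-memory records, the way a Spark-native framework lets feature engineering and training share one dataset instead of using Spark as a mere dispatcher in front of a second system.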
And of course, in order for a system like BigDL, or even other deep learning frameworks, to get optimum performance on Intel hardware, there's another item that we're highlighting: MKL, the Intel-optimized Math Kernel Library. It has a lot of common math routines that are optimized for Intel processors using the latest instruction sets. And that's already, today, integrated into the BigDL ecosystem. So that's another thing that we're highlighting. And another thing is that those are just software. At the hardware level, during Intel's AI Day in November, our executives, BK, Diane Bryant and Doug Fisher, also highlighted the Nervana product portfolio that's coming out. That will give you different hardware choices for AI. You can look at FPGAs, Xeon Phi, Xeon, and our new Nervana-based silicon like Lake Crest. Those are some good silicon products that you can expect in the future. >> Intel, taking us to Nirvana, touching every part of the ecosystem. Like you said, 95% share in all parts of the business. Yeah, thanks very much for coming on the Cube. >> Thank you, thank you for having me. >> You're welcome. All right, keep it right there. George and I will be back with our next guest. This is Spark Summit, #SparkSummit. We're the Cube. We'll be right back.