Programmable Quantum Simulators: Theory and Practice
>>Hello. My name is Isaac Chuang, and I am on the faculty at MIT in electrical engineering and computer science, and in physics. It is a pleasure for me to be presenting at today's NTT Research Symposium of 2020, to share a little bit with you about programmable quantum simulators, theory and practice. The simulation of physical systems, as described by their Hamiltonian, is a fundamental problem which Richard Feynman identified early on as one of the most promising applications of a hypothetical quantum computer. The real world around us, especially at the molecular level, is described by Hamiltonians which capture the interaction of electrons and nuclei. What we desire to understand from Hamiltonian simulation is properties of complex molecules, such as this iron molybdenum cofactor, an important catalyst. We desire their ground states, reaction rates, reaction dynamics, and other chemical properties, among many things. For a molecule of N atoms, a classical simulation must scale exponentially with N, but for a quantum simulation, there is a potential for this simulation to scale polynomially instead. >>And this would be a significant advantage if realizable. So where are we today in realizing such a quantum advantage? Today I would like to share with you a story about two things in this quest. First, a theoretically optimal quantum simulation algorithm, which achieves the best possible runtime for generic Hamiltonians. Second, let me share with you experimental results from a quantum simulation implemented using available quantum computing hardware today, with a hardware-efficient model that goes beyond what is utilized by today's algorithms. I will begin with the theoretically optimal quantum simulation algorithm. In principle, the goal of quantum simulation is to take a time-independent Hamiltonian H and solve Schrödinger's equation, as given here. This problem is as hard as the hardest quantum computation.
It is known as being BQP-complete. A simplification which is physically reasonable, and important in practice, is to assume that the Hamiltonian is a sum over terms which are local. >>For example, due to a lattice structure, these local terms typically do not commute, but their locality means that each term is reasonably small. Therefore, as was first shown by Seth Lloyd in 1996, one way to compute the time evolution, that is, the exponentiation of H with time, is to use the Lie product formula, which involves a successive approximation by repetitive small time steps. The cost of this Trotterization procedure is a number of elementary steps which scales quadratically with the time desired and inversely with the error desired for the simulation output. Here L is the number of local terms in the Hamiltonian, T is the desired simulation time, and epsilon is the desired simulation error. Today we know that for special systems, and higher-order expansions of this formula, a better result can be obtained, such as scaling as N squared but asymptotically linear in time. This, however, is for a special case, the lattice Hamiltonians, and it would be desirable to scale generally with time T, for an order-T-time simulation. >>So how could such an optimal quantum simulation be constructed? An important ingredient is to transform the quantum simulation into a quantum walk. This was done over 12 years ago by Andrew Childs, showing that for sparse Hamiltonians with around d non-zero entries per row, such as shown in this graphic here, one can do a quantum walk very much like a classical walk, but in a superposition of right and left, shown here in this quantum circuit, where the H stands for a Hadamard gate. In this particular circuit, the Hadamard turns the zero into a superposition of zero and one, which then activates the left and the right walk in superposition. The graph of the walk is defined by the Hamiltonian H.
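The quadratic cost of the Lie product formula can be illustrated with a small numerical sketch. This toy example is not from the talk: it uses two hypothetical non-commuting single-qubit Pauli terms as the local pieces of the Hamiltonian, and checks that the first-order formula's error shrinks as the number of time steps r grows.

```python
import numpy as np

# Two non-commuting local terms for a toy Hamiltonian H = X + Z
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def trotter(A, B, t, r):
    """First-order Lie product formula: (e^{-iAt/r} e^{-iBt/r})^r."""
    step = expm_herm(A, t / r) @ expm_herm(B, t / r)
    U = np.eye(A.shape[0], dtype=complex)
    for _ in range(r):
        U = U @ step
    return U

t = 1.0
exact = expm_herm(X + Z, t)
errors = [np.linalg.norm(trotter(X, Z, t, r) - exact) for r in (10, 100, 1000)]
# More (smaller) time steps give a better approximation of the exact evolution
assert errors[0] > errors[1] > errors[2]
```

For a first-order formula the error falls off roughly as 1/r, which is why the total step count scales quadratically with the simulated time.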
And in doing so, Childs and collaborators were able to show the walk produces a unitary transform which goes as e to the minus i arccosine of H, times time. >>So this comes close, but it still has this transcendental function of H, instead of just simply H. This can be fixed with some effort, which results in an algorithm which scales approximately as tau log one over epsilon, with tau proportional to the sparsity of the Hamiltonian and the simulation time. But again, the scaling here is a multiplicative product rather than an additive one. An interesting insight into the dynamics of a qubit, the simplest component of a quantum computer, provides a way to improve upon this. Single qubits evolve as rotations on a sphere. For example, here is shown a rotation operator which rotates around the axis phi in the X-Y plane, by angle theta. If one measures the result of this rotation as a projection along the Z axis, the result is a cosine-squared function that is well known as a Rabi oscillation. On the other hand, if a qubit is rotated around multiple angles in the X-Y plane, say around the phi equals zero, phi equals 1.5, and phi equals zero axes again, then the resulting response function looks like a flat top. >>And in fact, generalizing this to five or more pulses gives not just flat tops, but in fact arbitrary functions, such as the Chebyshev polynomial shown here, which gives transforms like Boolean OR and MAJORITY functions. Remarkably, if one does rotations by angle theta about d different angles in the X-Y plane, the result is a response function which is a polynomial of order d in cosine theta. Furthermore, as captured by this theorem, given a nearly arbitrary degree-d polynomial, there exist angles phi such that one can achieve the desired polynomial. This is a result that derives from the Remez exchange algorithm used in classical discrete-time signal processing. So how does this relate to quantum simulation?
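The single-pulse Rabi response and the multi-pulse behavior described above can be checked directly. A minimal sketch, not from the talk: the pulse angles 0, 1.5, 0 are just the illustrative values quoted above, and the single-pulse case reproduces the cosine-squared Rabi oscillation exactly.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    """Rotation by angle theta about axis phi in the X-Y plane."""
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def response(theta, phis):
    """|<0| R_{phi_d}(theta) ... R_{phi_1}(theta) |0>|^2."""
    U = np.eye(2, dtype=complex)
    for phi in phis:
        U = rot(theta, phi) @ U
    return abs(U[0, 0]) ** 2

thetas = np.linspace(0.0, np.pi, 9)
# One pulse: the familiar Rabi oscillation, cos^2(theta / 2)
rabi = [response(th, [0.0]) for th in thetas]
assert np.allclose(rabi, np.cos(thetas / 2) ** 2)
# Three pulses about phi = 0, 1.5, 0: a higher-degree polynomial response in cos(theta)
flat_top = [response(th, [0.0, 1.5, 0.0]) for th in thetas]
```

Each additional pulse raises the degree of the achievable response polynomial, which is the handle quantum signal processing uses to shape arbitrary functions.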
Well, recall that a quantum walk essentially embeds a Hamiltonian inside the unitary transform of a quantum circuit. This embedding, generalized, might be called qubitization, and it involves the use of a qubit acting as a projector to control the application of H. If we generalize the quantum walk to include a rotation about axis phi in the X-Y plane, it turns out that one obtains a polynomial transform of H itself. >>And this is the same as the polynomial in the quantum signal processing theorem. This is a remarkable result, known as the quantum singular value transformation theorem, from András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe, published last year. This provides a quantum simulation algorithm using quantum signal processing. For example, one can start with the quantum walk result, and then apply quantum signal processing to undo the arccosine transformation, and therefore obtain the ideal expected Hamiltonian evolution, e to the minus i H T. The resulting algorithm costs a number of elementary steps which scales as just the sum of the evolution time and the log of one over the error desired. This saturates the known lower bound, and thus is the optimal quantum simulation algorithm. This table, from a recent review article, summarizes a comparison of the query complexities of the known major quantum simulation algorithms, showing that the qubitization and quantum signal processing algorithm is indeed optimal. >>Of course, this optimality is a theoretical result. What does one do in practice? Let me now share with you the story of a hardware-efficient realization of a quantum simulation on actual hardware. The promise of quantum computation traditionally rests on a circuit model, such as the one we just used, with quantum circuits acting on qubits. In contrast, consider a real physical problem from quantum chemistry: finding the structure of a molecule. The starting point is the Born-Oppenheimer separation of the electronic and vibrational states.
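The arccosine relationship between the walk and the Hamiltonian, and the way polynomial transforms undo it, can be seen at the level of a single eigenvalue. In this simplified picture (a scalar stand-in for the full operator statement), n steps of the walk accumulate a phase n times arccos(lambda) for each eigenvalue lambda of H, and the cosine of that phase is exactly the degree-n Chebyshev polynomial of lambda:

```python
import numpy as np

# Each eigenvalue lam of H (with |lam| <= 1) becomes a rotation by
# arccos(lam) under the qubitized walk; n steps accumulate phase n*arccos(lam).
lam = np.linspace(-1.0, 1.0, 201)
for n in range(6):
    via_walk = np.cos(n * np.arccos(lam))                   # cosine of the walk phase
    cheb = np.polynomial.chebyshev.Chebyshev.basis(n)(lam)  # T_n(lam)
    assert np.allclose(via_walk, cheb)
```

Since Chebyshev polynomials span all polynomials, combining walk steps with the signal-processing rotations lets one build essentially arbitrary polynomial transforms of H, including an approximation to e to the minus i H T.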
For example, two connected nuclei share a vibrational mode; the potential energy of this nonlinear spring may be modeled as a harmonic oscillator, since the spring's energy is determined by the electronic structure. When the molecule becomes electronically excited, this vibrational mode changes: one obtains a different frequency and different equilibrium positions for the nuclei. This corresponds to a change in the spring constant, as well as a displacement of the nuclear positions. >>And we may write down a full Hamiltonian for this system. The interesting quantum chemistry question is known as the Franck-Condon problem: what is the probability of transition between the original ground state and a given vibrational state in the excited-state spectrum of the molecule? The Franck-Condon factor, which gives this transition probability, is foundational to quantum chemistry and a very hard and generic question to answer, which may be amenable to solution on a quantum computer. In particular, a natural quantum computer to use might be one which already has harmonic oscillators, rather than one which has just qubits. This is provided by bosonic quantum processors, such as the superconducting qubit system shown here. This processor has both qubits, as embodied by the Josephson junctions shown here, and a harmonic oscillator, as embodied by the resonant mode of the transmission cavity given here. Moreover, the output of this planar superconducting circuit can be connected to three-dimensional cavities. Instead of using qubit gates, >>one may perform direct transformations on the bosonic state using, for example, beam splitters, phase shifters, displacement, and squeezing operators, and the harmonic oscillator may be initialized and manipulated directly. The availability of the qubit allows photon-number-resolved counting, for simulating a triatomic, two-mode Franck-Condon factor problem.
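For the simplest version of the Franck-Condon problem — two oscillators of equal frequency whose equilibrium positions are displaced — the 0-to-n transition probabilities have a well-known closed form: a Poisson distribution in the Huang-Rhys factor S. A small sketch of that textbook case (the value S = 1.2 is an illustrative choice, not a parameter from the experiment described here):

```python
import math

def franck_condon_0n(S, nmax):
    """0 -> n Franck-Condon factors for two equal-frequency oscillators
    displaced relative to each other: |<0|n'>|^2 = e^{-S} * S^n / n!."""
    return [math.exp(-S) * S**n / math.factorial(n) for n in range(nmax + 1)]

fc = franck_condon_0n(1.2, 20)
assert abs(sum(fc) - 1.0) < 1e-9   # transition probabilities sum to one
assert max(fc) == fc[1]            # for S = 1.2, the 0 -> 1 line is brightest
```

The general case — different frequencies, mode mixing, several coupled modes, as in the triatomic molecules simulated here — has no such simple closed form, which is what makes it a natural target for a bosonic quantum processor.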
This superconducting qubit system with 3D cavities has two resonators: cavity A and cavity B represent the breathing and wiggling modes of a triatomic molecule, as depicted here. The coupling of these modes was mediated by a superconducting qubit, and readout was accomplished by two additional superconducting qubits, coupled to each one of the cavities. Due to the superconducting resonators used, each one of the cavities had a long coherence time, while resonator states could be prepared and measured using the strong coupling of qubits to the cavity. And bosonic quantum operations could be realized by modulating the coupling qubit in between the two cavities. The cavities are holes drilled into pure aluminum, kept superconducting by millikelvin-scale >>temperatures. Microfabricated chips with superconducting qubits are inserted into ports to couple via an antenna to the microwave cavities. Each of the cavities has a quality factor so high that the coherence times can reach milliseconds. A coupling qubit chip is inserted into the port in between the cavities, and the readout and preparation qubit chips are inserted into ports on the sides. For the sake of brevity, I will skip the experimental details and present just the results. Shown here is the vibronic spectrum obtained for a water molecule using the bosonic superconducting processor. This is a typical Franck-Condon spectrum, giving the intensity of lines versus frequency in wavenumber, where the solid line depicts the theoretically expected result, and the purple and red dots show two sets of experimental data, one taken quickly and another taken with exhaustive statistics. In both cases, the experimental results have good agreement with the theoretical expectations. >>The programmability of this system is demonstrated by showing how it can easily calculate the Franck-Condon spectrum for a wide variety of molecules. Here's another one, the ozone anion.
Again, we see that the experimental data, shown in points, agrees well with the theoretical expectation, shown as a solid line. Let me emphasize that this quantum simulation result was obtained not by using a quantum computer with qubits, but rather one with resonators, one resonator representing each one of the modes of vibration in this triatomic molecule. This approach represents a far more efficient utilization of hardware resources compared with the standard qubit model, because of the natural match of the resonators with the physical system being simulated. In comparison, if qubit gates had been utilized to perform the same simulation, on the order of a thousand qubit gates would have been required, compared with the order of 10 operations which were performed for this bosonic realization. >>Asymptotically, the qubit model would have required significantly more operations, because of the need to truncate each one of the harmonic oscillators into some maximum Hilbert space size. Compared with the optimal quantum simulation algorithms shown in the first half of this talk, we see that there is a significant gap between what available quantum computing hardware can perform and what optimal quantum simulations demand, in terms of the number of gates required for a simulation. Nevertheless, many of the techniques that are used for optimal quantum simulation algorithms may become useful, especially if they are adapted to available hardware. Moving forward, the future holds some interesting challenges for this field. Real physical systems are not qubits; rather, they are composed from bosons and fermions, and fermions need global antisymmetrization. This is a huge challenge for electronic structure calculation in molecules. Real physical systems also have symmetries, but current quantum simulation algorithms are largely governed by a theorem which says that the number of time steps required is proportional to the simulation time desired.
Finally, real physical systems are not purely quantum or purely classical, but rather have many messy quantum-classical boundaries. In fact, perhaps the most important systems to simulate are really open quantum systems, and these dynamics are described by a mixture of quantum and classical evolution, where the desired results are often thermal and statistical properties. >>I hope this presentation of the theory and practice of quantum simulation has been interesting and worthwhile. Thank you.
Indistinguishability Obfuscation from Well Founded Assumptions
>>Thank you so much, Tatsuaki, for inviting me to the NTT Research Summit. And I'm really excited to talk to all of you today. So I will be talking about achieving indistinguishability obfuscation from well-founded assumptions. And this is really the result of a wonderful two-year collaboration with my outstanding graduate student Aayush Jain, who will be graduating soon, and my outstanding co-author, Rachel Lin, from the University of Washington. So let me jump right into it. We all know that constructing indistinguishability obfuscation — constructing iO — has been perhaps the most consequential open problem in the foundations of cryptography. For several years now, we've seen over 100 papers written that show how to use iO to achieve a number of remarkable cryptographic goals, goals that really expand the scope of cryptography, in addition to doing just remarkable, really interesting new things. Unfortunately, however, until this work, the work I'm about to tell you about, all known constructions of iO required new hardness assumptions, hardness assumptions that were designed specifically to prove that iO is secure. And unfortunately, this has a tortured history, and many of the assumptions were actually broken, which led to just a lot of doubt and uncertainty about the status of iO, whether it really exists or doesn't exist. And the work I'm about to tell you about today changes that state of affairs in a fundamental way, in that we show how to build iO from the combination of four well-established cryptographic assumptions. Okay, let me jump right into it and tell you how we do it. So before this work that I'm about to tell you about, over the last two years with Rachel and Aayush, we actually constructed a whole sequence of works that have looked at this question. And what we showed was that if we could just build a certain special object, then that would be sufficient for constructing iO, assuming well-established assumptions like LWE, PRGs in NC0, and the SXDH assumption on bilinear maps. Okay, so what is this object? The object first starts with a PRG in >>
And what we showed was that if we could just build a certain special object, then that would be sufficient for constructing Io, assuming well established assumptions like L W E P R g s and M C zero and the 68 assumption of a violin. Your mouths. Okay, So what is this object? The object first starts with a P. R G and >>S zero. In other words, of trg with constant locality that stretches end bits of seed to M bits of output where am is ended one plus Epsilon for any constant Epsilon zero. Yes, but in addition to this prg, we also have these l w we like samples. So as usual, we have an elder Bluey Secret s which is random vector z b two k, where K is the dimension of the secret, which is much smaller than any way also have this public about vectors ai which are also going to be okay. And now what is given out is are the elderly samples where the error is this X I that is just brilliant value. Uh, where these excise air Also the input to our prg. Okay, unfortunately, we needed to assume that these two things together, this y and Z together is actually pseudo random. But if you think about it, there is some sort of kind of strange assumption that assumes some kind of special leakage resilience, property of elderly, we where elderly samples, even with this sort of bizarre leakage on the errors from all debris, is still surround or still have some surrounding properties. And unfortunately, we had no idea how to prove that. And we still don't have any idea how to prove this. Actually, So this is just a assumption and we didn't know it's a new assumption. So far, it hasn't been broken, but that's pretty much it. That's all we knew about it. Um and that was it. If we could. If this is true, then we could actually build. I'll now to actually use this object. We needed additional property. We needed a special property that the output of this prg here can actually be computed. Every single bit of the output could be computed by a polynomial over the public. 
LWE samples y, and an additional secret w, with the property that this additional secret w is actually quite small: it's only of size m to the one minus delta, for some constant delta greater than zero, polynomially smaller than the output of the PRG. And crucially, the degree of this polynomial is only two in this secret w — that's where the bilinear maps will come in. Okay. And in fact, this part we did prove: in this previous work, using various clever transformations, we were able to show that in fact we are able to construct this in a way that this polynomial exists, with only degree two in the short secret values w. So now I'm going to show you how, using our new ideas, we're actually going to build a special object just like this from standard assumptions, which is going to be sufficient for building iO — we're just going to have to modify it a little bit. Okay? One of the things that makes me so excited is that actually our ideas are extremely simple. I want to try to get that across today. So the first idea is: let's take these LWE samples that we have here and change them up a little bit. Before I get to that — in this talk, I want you to think of k, the dimension of the secret here, as something very small, something like n to the epsilon. That's only for this talk, not for the previous work. Okay. All right. So we have these LWE samples from the previous work, but I'm going to change them up. Instead of computing them this way, as shown on this slide, let's add some sparse error. So let's replace this error x_i with the error e_i plus x_i, where e is very sparse. Almost all of these e_i's are zero, but when e_i is not zero, it is just completely random in all of Z_p — it just completely destroys all information. Okay. So first, I just want to point out that the previous work that I already mentioned applies also to this case.
So if we only want to compute PRG of x plus e, then that can still be computed by a polynomial that's degree two in a short w — that's previous work, the Jain-Lin-Sahai line of work from 2019. I'm not going to recall it; we don't have time for me to tell you how you do it, but it's very simple. Okay, so why are we doing this? Why are we adding the sparse error? The key observation is that even though I have changed the input of the PRG to x plus e, because e is so sparse, PRG of x plus e is actually the same as PRG of x in almost every output location. It's only a tiny, tiny fraction of the outputs that are actually corrupted by the sparse error. Okay, so for a moment, let's just pretend that in fact we knew how to compute PRG of x with a degree-two polynomial over a short secret. We'll come back to this, I promise. But suppose for a moment we actually knew how to compute PRG of x, not just PRG of x plus e. In that case, we're essentially already done. And the reason is, there's the LPN over Z_p assumption, that has been around for many years, which says that if you look at these sort of LWE-like samples — a_i, and the inner product of a_i with s plus a sparse error e_i, where e_i is almost always zero, but when it's not zero it's completely random — then, in fact, these samples look pseudorandom. They're indistinguishable from a_i, r_i, with r_i just completely uniform over Z_p, okay? And this has a long history, which I won't go into because I don't have time, but it's just a really nice assumption. Okay, so let's see how we can use it. So again, suppose for the moment that we were able to compute not just PRG of x plus e, but PRG of x. Well, the first observation is that since we're adding the sparse error e_i, this part — the LPN part here — is actually completely random, by the LPN assumption. So by LPN over Z_p, we can actually replace this entire term with just r_i.
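The shape of these LPN-over-Z_p samples is easy to write down concretely. A toy sketch, with all parameter values illustrative and far too small for any security:

```python
import numpy as np

p, k, n = 97, 8, 200        # toy modulus, secret dimension, number of samples
sparsity = 0.05             # each error coordinate is nonzero with this probability
rng = np.random.default_rng(1)

s = rng.integers(0, p, size=k)            # the secret
A = rng.integers(0, p, size=(n, k))       # public vectors a_i
mask = rng.random(n) < sparsity           # sparse support of the error
e = np.where(mask, rng.integers(0, p, size=n), 0)
y = (A @ s + e) % p                       # published samples: <a_i, s> + e_i mod p

# Most samples agree exactly with the noiseless inner products;
# disagreement can only happen on the sparse support of e.
clean = (A @ s) % p
assert ((y != clean) <= mask).all()
```

The assumption referenced in the talk is that the pairs (a_i, y_i) generated this way are indistinguishable from (a_i, uniform); the code only illustrates the sampling process, not the hardness claim.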
And now, note, there is no more information about x present in the samples. The only place where x is being used is in the input to the PRG, and as a result, we can just apply the pseudorandomness of the PRG and say this whole thing is pseudorandom. And that's it. We've now proven that this object that I wanted to construct is actually pseudorandom, which is the main thing that was so bothering us in all this previous work. Now we get it, just like that, at the snap of our fingers, immediately from LPN. Okay, so the only thing that's missing, that I haven't told you yet, is: wait, how do we actually compute PRG of x? Right? Because we can compute PRG of x plus e, but there are still going to be a few outputs that are going to be wrong. So how can we correct those few corrupted output positions, to recover PRG of x? So, for the purposes of this talk, because I don't have enough time, I'm going to make sort of a crazy simplifying assumption. Let's just assume that in fact only one output position of PRG of x plus e was corrupted. So it's almost exactly PRG of x; there's only one position in PRG of x plus e which needs to be corrected to get us back to PRG of x. Okay, so how can we do that? The idea is, again, really, really simple. Okay, so the output of the PRG is an m-vector, an m-dimensional vector. But let's actually just rearrange that into a square-root-of-m by square-root-of-m matrix. And as I mentioned, there's only one position in this matrix that actually needs to be corrected. So let's make this correction matrix, which is almost everywhere zero; just in position (i, j) it contains a single correction factor y. Right? And if we can add this matrix to PRG of x plus e, then we'll get PRG of x. Okay, so now the only thing I need to do is to compute this extremely sparse matrix. And here the observation is almost trivial. I could take a square-root-of-m by one vector that just has y in position i, and I could take a one by square-root-of-m matrix.
It just has one in position j, and zero everywhere else. If I just take the tensor product — which is just the matrix product of this column vector and this row vector — then I will get exactly this correction matrix. Right? And note that these two vectors — let's call them u and v — are actually really, really small: they're only square-root-of-m dimensional, way smaller than m. Right? So if I want to correct PRG of x plus e, all I have to do is add u tensor v, and I can add the individual vectors u and v to my short secret w. It's still short; that's not going to make w significantly bigger. And u tensor v is only a degree-two computation. So in this way, using a degree-two computation, we can quickly correct our computation to recover PRG of x. And now, of course, this was an oversimplified situation. In general we're going to have many more errors; we're not just going to have one error, like I mentioned. But it turns out that that is also easy to deal with, essentially the same way. It's again just a very simple additional idea. Very, very briefly, the idea is that instead of just having one giant square-root-of-m by square-root-of-m matrix, you can split up this matrix into lots of little sub-matrices, and with suitable concentration bounds — simple balls-and-bins arguments — we can show that we can nevertheless use this idea, this u-tensor-v idea, to correct all of the remaining errors. Okay, that's it. You see, it's like three simple >>aha moments — that's kind of all that it took, that allowed >>us to achieve this result, to get iO from standard assumptions. And, um, of course, I'm presenting them to you in this very simple way. We just needed these three little ideas, which I've told you about. But of course, they were only made possible because of years of struggling with >>all the ways that didn't work; all that struggling, and mapping out all the ways that didn't work, >>was what allowed us to have these ideas.
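The u-tensor-v correction described above can be checked mechanically. A toy sketch, with an illustrative grid size and modulus, repairing a single corrupted entry of a matrix standing in for the PRG output:

```python
import numpy as np

p, root = 97, 4                     # toy modulus; output grid is root x root
rng = np.random.default_rng(0)
prg_x = rng.integers(0, p, size=(root, root))   # stand-in for PRG(x), reshaped

# Corrupt a single position (i, j), as one sparse error would
i, j = 2, 1
corrupted = prg_x.copy()
corrupted[i, j] = rng.integers(0, p)

# u carries the correction value at row i; v marks column j
u = np.zeros((root, 1), dtype=np.int64)
u[i, 0] = (prg_x[i, j] - corrupted[i, j]) % p
v = np.zeros((1, root), dtype=np.int64)
v[0, j] = 1

# Adding the rank-one matrix u @ v repairs exactly the corrupted entry
recovered = (corrupted + u @ v) % p
assert (recovered == prg_x).all()
```

The point of the construction is that u and v each have only root-of-m entries, so appending them to the short secret w keeps it short, and u @ v is a degree-two computation in that secret.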
Um, and again, it yields the first iO construction from well-established cryptographic assumptions: namely, the LPN assumption over Z_p; learning with errors; the existence of PRGs in NC0, that is, PRGs with constant-depth circuits; and the SXDH assumption over bilinear maps — all of which have been used for many years for a number of other applications, including such things as public-key encryption. Something as simple as public-key encryption — that's the context in which these assumptions have been used — so very far from the previous state of affairs, where we had assumptions that were introduced only for the purpose of constructing iO. And with that I will conclude. Uh, and, uh, thank you for your attention. Thanks so much.
Day 2 Livestream | Enabling Real AI with Dell
>>From the Cube Studios >>in Palo Alto and >>Boston, connecting with thought leaders all around the world, this is a Cube conversation. >>Hey, welcome back here. Ready? Jeff Frick here with the Cube. We're doing a special presentation today, really talking about AI, and making AI real, with two companies that are right in the heart of it: Dell EMC as well as Intel. So we're excited to have a couple of Cube alumni back on the program; haven't seen them in a little while. First off, from Intel, Lisa Spelman. She is the corporate VP and GM for the Xeon and Memory Group. Great to see you, Lisa. >>Good to see you again, too. >>And we've got Ravi Pendekanti. He is the SVP of server product management, also from Dell Technologies. Ravi, great to see you as well. >>Good to see you both. Of course, >>yes. So let's jump into it. So, yesterday, Ravi, you guys announced a bunch of new kind of AI-based solutions. Maybe you can take us through that. >>Absolutely. So one of the things we did, Jeff, was we said it's not good enough for us to have a point product, but we thought about the whole portfolio of products — more importantly, everything from our workstation side to the server to the storage elements, and things that we're doing with VMware, for example. Beyond that, we're also obviously pleased with everything we're doing on bringing the right set of validated configurations and reference architectures and ready solutions, so that the customer really doesn't have to go ahead and do the due diligence of figuring out how the various integration points come together. For us, in making a solution possible, obviously all this is based on the great partnership we have with Intel, on using not just their, you know, CPUs, but FPGAs as well.
So, Lisa, I wonder: I think a lot of people obviously know Intel for your CPUs, but I don't think they recognize all the other stuff that can wrap around the core CPU to add value around a particular solution set or problem set. I wonder if you could tell us a little bit more about the Xeon family and what you guys are doing in the data center with this new, interesting thing called AI and machine learning. >>Yeah. So thanks, Jeff and Ravi. It's amazing to see the way artificial intelligence applications are just growing in their pervasiveness. You see it taking off across all sorts of industries, and it's actually being built into just about every application that's coming down the pipe. And so if you think about needing to have your hardware foundation able to support that, that's where we're seeing a lot of the customer interest come in; and not just at first on Xeon but, like Ravi said, on the whole portfolio and how the system and solution configurations come together. So we're approaching it from a total view of being able to move all that data, store all of that data, and process all of that data, and providing options along that entire pipeline. And within that, on Xeon specifically, we've really set that as our cornerstone foundation for AI. If it's the most deployed data center CPU around the world, and every single application is going to have artificial intelligence in it, it makes sense that you would have artificial intelligence acceleration built into the actual hardware, so that customers get a better experience right out of the box, regardless of which industry they're in or which specialized function they might be focusing on. >>It's really wild, right? Because in processing, you always move to your next point of failure.
So, you know, having all these kinds of accelerants, and ways that you can carve off parts of the workload, parts of the intelligence that you can optimize better, is so important, as you said, Lisa. And also, Ravi, on the solution side: nobody wants general AI just for AI's sake. It's a nice word, an interesting science experiment, but it's really in the applied AI world that we're starting to see the value, in the application of this stuff. I wonder if you have a customer you want to highlight: Epsilon. Tell us a little bit about their journey and what you guys did with them. >>Great, sure. I mean, if you start looking at Epsilon, they're in the marketing business, and one of the crucial things for them is to ensure that they're able to provide the right data, based on the analysis they run, of what it is the customer is looking for. And they can't wait a long period of time; they need to be doing that on a near-real-time basis, and that's what Epsilon does. And what really blew my mind was the fact that they actually send out close to 100 billion messages; again, that's 100 billion messages a year. So you can imagine the amount of data they're analyzing, which is petabytes of data, and they need to do it in real time. And that's all possible because of the kind of analytics we have driven into the PowerEdge servers, you know, using the latest Intel Xeon processors coupled with some of the technologies from the FPGA side, which allow them to go back in, analyze this data, and serve their customers very rapidly. >>You know, it's funny.
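For a sense of scale, the quoted message volume implies a sustained per-second rate that a quick back-of-the-envelope calculation makes concrete (only the 100-billion-a-year figure comes from the interview; the per-second rate is derived):

```python
# Back-of-the-envelope rate implied by "close to 100 billion messages a year".
messages_per_year = 100_000_000_000
seconds_per_year = 365 * 24 * 60 * 60          # 31,536,000

sustained_rate = messages_per_year / seconds_per_year
print(f"{sustained_rate:,.0f} messages/second sustained")
# Roughly 3,171 messages every second, around the clock, which is why
# near-real-time analytics on a pipeline like this is as much a hardware
# problem as a software one.
```
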
I think MarTech is kind of an underappreciated world of AI, in machine-to-machine execution, right? Think of the number of transactions that go through when you load a webpage: the site actually IDs who you are, puts a marketplace together, sells time or a spot on that ad, and then lets people in. It's really sophisticated, as you said, with massive amounts of data going through it. If it's done right, it's magic; and if it's done not right, then people get pissed off. You've got to have the right tools. >>You got it. I mean, this is where, as I talked about, it can be garbage in, garbage out if you don't act on the right data. So that is where I think it becomes important. But also, if you don't do it in a timely fashion, if you don't serve up the right content at the right time, you miss the opportunity to go ahead and grab attention, >>right? Right. Lisa, kind of back to you. You know, there's all kinds of open source stuff happening in the AI and machine learning world, so we hear things about TensorFlow and all these different libraries. How are you guys embracing that world as you look at AI and its development? You've been at it for a while; you're involved in everything from autonomous vehicles to the MarTech we discussed. How are you making sure that these things are using all the available resources to optimize the solutions? >>Yeah, I think you and Ravi were just hitting on some of those examples of how many ways people have figured out how to apply AI now. So maybe at first it was really driven by just image recognition and image tagging.
But now you see so much work being driven in recommendation engines and in object detection for much more industrial use cases, not just consumer enjoyment. And also those things you mentioned and hit on, where personalization is a really fine line you walk: how you make an experience feel good-personalized versus creepy-personalized is a real challenge and opportunity across so many industries. And so open source, like you mentioned, is a great place for that foundation, because it gives people the tools to build upon. And I think our strategy is really a stack strategy that starts first with delivering the best hardware for artificial intelligence, with Xeon again as the foundation for that. But we also have specialized processing for out at the edge, and then we have, all the way through to the data center, very custom, specific accelerators. Then, on top of that, the optimized software, which is going into each of those frameworks and doing the work so that the framework recognizes the specific acceleration we built into the CPU, whether that's DL Boost, or recognizes the capabilities that sit in that accelerator silicon. And then, once we've done that software layer (and this is where we have the opportunity for a lot of partnership), there's the ecosystem and the solutions work that Ravi started off by talking about. So AI isn't easy for everyone. It has a lot of value, but it takes work to extract that value. And so partnerships within the ecosystem, making sure that ISVs are taking those optimizations, building them in, and fundamentally delivering reliable solutions to customers, are the last leg of that strategy. But it really is one of the most important, because without it you get a lot of really good benchmark results but not a lot of good, happy customers. >>Right. I'm just curious, Lisa, because you kind of sit in the catbird seat.
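Lisa's mention of DL Boost refers to Intel's instruction-set support for accelerating int8 inference. The arithmetic such hardware speeds up, quantizing float weights and activations to 8-bit integers and accumulating integer products, can be sketched in a few lines (a toy illustration of the idea, not Intel's implementation; real frameworks handle calibration and per-channel scales):

```python
# Sketch of the int8 quantization arithmetic that features like Intel
# DL Boost accelerate in hardware. Purely illustrative.

def quantize(values, num_bits=8):
    """Map float values onto signed integers of the given bit width."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) for v in values], scale

def int8_dot(a, b):
    """Dot product computed in integer arithmetic, then rescaled to float."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = sum(x * y for x, y in zip(qa, qb))    # integer multiply-accumulate
    return acc * sa * sb                        # back to the float scale

weights = [0.5, -1.0, 0.25, 0.75]
activations = [1.0, 0.5, -0.5, 2.0]
exact = sum(w * x for w, x in zip(weights, activations))   # 1.375
approx = int8_dot(weights, activations)
# approx is close to exact; the small difference is the quantization cost
```

The point of the sketch is why this wins on hardware: the inner loop is pure integer multiply-accumulate, which vector instructions can do far more densely than float math.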
You guys at the core, you know, kind of under all the layers, running data centers, run these workloads. How do you see the evolution of machine learning and AI from the early days, when it was science projects and really smart people on mahogany row, versus now, when people are talking about trying to get it to a citizen developer, really a citizen data scientist, and exposing the power of AI to business leaders or business analysts, so they can apply it to their day-to-day world in their day-to-day life? How do you see that evolving? Because you were not only in it early, but you also get to see some of the stuff coming down the road in design wins and reference architectures. How should people think about this evolution? >>It really is one of those things where, if you step back from the fundamentals of AI, they've actually been around for 50 or more years. It's just that the changes in the amount of computing capability that's available, the network capacity that's available, and the fundamental efficiency that IT and infrastructure managers can get out of their cloud architectures have allowed this pervasiveness to evolve. And I think that's been the big tipping point that pushed people over. Of course, AI went through the same thing that cloud did, where you had maybe every business leader or CEO saying, 'Hey, get me a cloud and I'll figure out what for later; give me some AI, we'll get a week in and make it work.' But we're through those initial use cases and starting to see business value derived from those deployments. And I think some of the most exciting areas are in the medical services field, especially if you think of the environment we're in right now.
The amount of efficiency, and in some cases the reduction in human contact that you require for diagnostics and patient tracking, the ability to follow an entire patient history, is really powerful, and represents the next wave in care and how we scale our limited resource of doctors, nurses, and technicians. And the point we're making about what's coming next is where you start to see even more mass personalization and recommendations, in a way that feels not spooky to people but actually comforting, so they take value from it because it allows them to immediately act. Ravi referenced the speed at which you have to utilize the data: when people can immediately act, they're generally happier with the service. So we see so much opportunity, and we're continuing to address it across, again, that hardware, software, and solution stack, so we can stay a step ahead of our customers. >>Right. That's great. Ravi, I want to give you the final word, because you guys have to put the solutions together and actually deliver them to the customer. So it's not only the hardware and the software, but any other ecosystem components that you have to bring together. I wonder if you can talk about that approach, because it's really the solution at the end of the day; not specs, not speeds and feeds. That's not really what people care about. It's really a good solution. >>Yeah, sure, Jeff, because at the end of the day, it's like this: most of us probably use an ATM to retrieve money, but we really don't know what sits behind the ATM. My point being that what you really care about, at that particular point in time, is being able to walk up to the machine and get your dollar bills out, for example. Likewise, when you start looking at what the customer really needs, what Lisa hit upon is actually right; I mean, it's what they're looking for. And as you said, it's on the whole solution side.
So our mantra is very simple: we want to make sure that we use the right basic building blocks, ensuring that we bring the right solutions using three things. First, the right products, which essentially means we need the right partners to get the right processors and GPUs in. Then we get to the next level by ensuring that we can provide either ready solutions or validated reference architectures, where the sausage-making process is one the customer no longer needs to go through. In a way, we have done the cooking and we provide a recipe book; you just go through the ingredients, pair them, and off you go to get your solution done. And finally, at the final stage, there might be help that customers still need in terms of services. That's something else Dell Technologies provides: the whole idea is that when customers want help deploying the solutions, we can do that with our services. So that's broadly the way we approach it: providing the building blocks using the right technologies from our partners, making sure that we have the right solutions our customers can look at, and, when they need deployment help, we can do that with our services. >>Well, Ravi, Lisa, thanks for taking a few minutes. That was a great tee-up, Ravi, because I think we're going to go to a couple of customer interviews, enjoying that nice meal that you prepared with that combination of hardware, software, services, and support. So thank you for your time, and great to catch up. All right, let's go and run the tape. >>Hi, Jeff. I wanted to talk about two examples of collaboration that we have with partners that have yielded real examples of throughput in HPC and AI activities. The first example I wanted to cover is with the Neuro team up in Canada.
We collaborated with Intel on tuning the algorithms and code to accelerate the mapping of the human brain. We have a cluster down here in Texas called Zenith, based on Xeon and Optane memory, and the three of us, our friends at Intel, the team in Canada, and the Dell HPC and AI Innovation engineering team, were able to help that customer accelerate the mapping of the human brain. So imagine patients playing video games or doing all sorts of activities that help us understand how the brain sends the signals that trigger responses in the nervous system. It's not only a good way to map the human brain; think about what you can do with that type of information to help cure Alzheimer's or dementia down the road. So this is really something I'm passionate about: using technology to help all of those who are suffering from those really tough diseases. >>I'm a project manager for the project, and the idea is actually to scan six participants really intensively, in both scanners, and see if we can use human brain data to get closer to something called generalized intelligence. What we have in the AI world are systems that are mathematically, computationally built; often they do one task really, really well, but they struggle with other tasks. A really good example of this is video games. Artificial neural nets can often outperform humans in video games, but they don't really play in a natural way. An artificial neural net playing Mario Brothers beats the system by gliding its way through as quickly as possible, and it doesn't, say, collect coins. If you played Mario Brothers as a child, you know that collecting those coins is part of the game. And so the idea is to get artificial neural nets to behave more like humans.
So, like, transfer of knowledge is just something that humans do really, really well and very naturally. It doesn't take 50,000 examples for a child to know the difference between a dog and a hot dog, which one you eat and which one you play with, but an artificial neural net can often take massive computational power and many examples before it understands that. >>Video games are awesome because, when you play a video game, you're doing a vision task instantly; you're also doing a lot of planning and strategic thinking; but you're also making decisions several times a second. And we record that, and we try to see: can we predict, from brain activity, what people were doing? We can reach almost 90% accuracy with this type of architecture. >>I was the lead postdoc on this collaboration with Dell and Intel. >>She was trying to work on a model called graph convolutional neural nets. >>We were involving, like, two computing systems, to compare how the performance was. >>The lab relies on both servers that we have internally here, so I have a GPU server, but what we really rely on is Compute Canada, and Compute Canada is just not powerful enough to run the models that she was trying to run. So it would take her days, weeks; it would crash; we would have to wait in line. Dell was visiting, and I was invited into the meeting very kindly, and they told us that they had started working with a new type of hardware to train our neural nets. >>Dell's using traditional CPUs, pairing them with a new type of memory developed by Intel. >>They're also using new CPU architectures that are really optimized to do deep learning. So all of that sounded great, because we had this problem: we kept running out of memory. >>The Innovation Lab, having access to experts who help answer questions immediately, that's not something to discount. >>We were able to train the network within 20 minutes.
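A single layer of the graph convolutional networks mentioned above can be sketched in plain Python. This is a toy illustration with simplified row normalization, not the exact model the lab used:

```python
# Minimal sketch of one graph convolutional layer: H' = ReLU(A_hat . H . W),
# where A_hat is the adjacency matrix with self-loops, row-normalized so each
# node averages over itself and its neighbours. Toy graph and random weights.
import random

def gcn_layer(adj, feats, weights):
    """One round of neighbourhood feature mixing followed by ReLU."""
    n = len(adj)
    # Add self-loops so each node keeps its own features.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # Row-normalize the adjacency.
    a_hat = [[v / sum(row) for v in row] for row in a_hat]
    # Aggregate neighbour features: A_hat . H
    n_feats = len(feats[0])
    agg = [[sum(a_hat[i][k] * feats[k][f] for k in range(n))
            for f in range(n_feats)] for i in range(n)]
    # Linear transform plus ReLU: ReLU(agg . W)
    out_dim = len(weights[0])
    return [[max(0.0, sum(agg[i][f] * weights[f][o] for f in range(n_feats)))
             for o in range(out_dim)] for i in range(n)]

# 4-node path graph, 3 input features per node, 2 output features per node.
adj = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
feats = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
h1 = gcn_layer(adj, feats, weights)   # one layer of neighbourhood mixing
```

Stacking a few such layers lets information propagate farther across the graph, which is what makes the architecture a fit for signals defined over brain-region connectivity.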
But before, to do the same thing on the GPU, we needed to wait almost three hours each time, whereas we
Yeah, we are a research computing cloud provider, but the emphasis is on the consulting on the processes around how to exploit that technology rather than the better results. Our value is in how we help businesses use advanced computing resources rather than the provision. Those results we see increasingly more and more data being produced across a wide range of verticals, life sciences, astronomy, manufacturing. So the data accelerators that was created as a component within our data center compute environment. Data processing is becoming more and more central element within research computing. We're getting very large data sets, traditional spinning disk file systems can't keep up and we find applications being slowed down due to a lack of data, So the data accelerator was born to take advantage of new solid state storage devices. I tried to work out how we can have a a staging mechanism for keeping your data on spinning disk when it's not required pre staging it on fast envy any stories? Devices so that can feed the applications at the rate quiet for maximum performance. So we have the highest AI capability available anywhere in the UK, where we match II compute performance Very high stories performance Because for AI, high performance storage is a key element to get the performance up. Currently, the data accelerated is the fastest HPC storage system in the world way are able to obtain 500 gigabytes a second read write with AI ops up in the 20 million range. We provide advanced computing technologies allow some of the brightest minds in the world really pushed scientific and medical research. We enable some of the greatest academics in the world to make tomorrow's discoveries. Yeah, yeah, yeah. >>Alright, Welcome back, Jeff Frick here and we're excited for this next segment. We're joined by Jeremy Raider. He is the GM digital transformation and scale solutions for Intel Corporation. Jeremy, great to see you. Hey, thanks for having me. 
I love I love the flowers in the backyard. I thought maybe you ran over to the Japanese, the Japanese garden or the Rose Garden, Right To very beautiful places to visit in Portland. >>Yeah. You know, you only get him for a couple. Ah, couple weeks here, so we get the timing just right. >>Excellent. All right, so let's jump into it. Really? And in this conversation really is all about making Ai Riel. Um, and you guys are working with Dell and you're working with not only Dell, right? There's the hardware and software, but a lot of these smaller a solution provider. So what is some of the key attributes that that needs to make ai riel for your customers out there? >>Yeah, so, you know, it's a it's a complex space. So when you can bring the best of the intel portfolio, which is which is expanding a lot, you know, it's not just the few anymore you're getting into Memory technologies, network technologies and kind of a little less known as how many resources we have focused on the software side of things optimizing frameworks and optimizing, and in these key ingredients and libraries that you can stitch into that portfolio to really get more performance in value, out of your machine learning and deep learning space. And so you know what we've really done here with Dell? It has started to bring a bunch of that portfolio together with Dell's capabilities, and then bring in that ai's V partner, that software vendor where we can really take and stitch and bring the most value out of that broad portfolio, ultimately using using the complexity of what it takes to deploy an AI capability. So a lot going on. They're bringing kind of the three legged stool of the software vendor hardware vendor dental into the mix, and you get a really strong outcome, >>right? So before we get to the solutions piece, let's stick a little bit into the Intel world. And I don't know if a lot of people are aware that obviously you guys make CPUs and you've been making great CPIs forever. 
But there's a whole lot more stuff that you've added, you know, kind of around the core CPU. If you will in terms of of actual libraries and ways to really optimize the seond processors to operate in an AI world. I wonder if you can kind of take us a little bit below the surface on how that works. What are some of the examples of things you can do to get more from your Gambira Intel processors for ai specific applications of workloads? >>Yeah, well, you know, there's a ton of software optimization that goes into this. You know that having the great CPU is definitely step one. But ultimately you want to get down into the libraries like tensor flow. We have data analytics, acceleration libraries. You know, that really allows you to get kind of again under the covers a little bit and look at it. How do we have to get the most out of the kinds of capabilities that are ultimately used in machine learning in deep learning capabilities, and then bring that forward and trying and enable that with our software vendors so that they can take advantage of those acceleration components and ultimately, you know, move from, you know, less training time or could be a the cost factor. But those are the kind of capabilities we want to expose to software vendors do these kinds of partnerships. >>Okay. Ah, and that's terrific. And I do think that's a big part of the story that a lot of people are probably not as aware of that. There are a lot of these optimization opportunities that you guys have been leveraging for a while. So shifting gears a little bit, right? AI and machine learning is all about the data. And in doing a little research for this, I found actually you on stage talking about some company that had, like, 350 of road off, 315 petabytes of data, 140,000 sources of those data. And I think probably not great quote of six months access time to get that's right and actually work with it. And the company you're referencing was intel. 
So you guys know a lot about debt data, managing data, everything from your manufacturing, and obviously supporting a global organization for I t and run and ah, a lot of complexity and secrets and good stuff. So you know what have you guys leveraged as intel in the way you work with data and getting a good data pipeline. That's enabling you to kind of put that into these other solutions that you're providing to the customers, >>right? Well, it is, You know, it's absolutely a journey, and it doesn't happen overnight, and that's what we've you know. We've seen it at Intel on We see it with many of our customers that are on the same journey that we've been on. And so you know, this idea of building that pipeline it really starts with what kind of problems that you're trying to solve. What are the big issues that are holding you back that company where you see that competitive advantage that you're trying to get to? And then ultimately, how do you build the structure to enable the right kind of pipeline of that data? Because that's that's what machine learning and deep learning is that data journey. So really a lot of focus around you know how we can understand those business challenges bring forward those kinds of capabilities along the way through to where we structure our entire company around those assets and then ultimately some of the partnerships that we're gonna be talking about these companies that are out there to help us really squeeze the most out of that data as quickly as possible because otherwise it goes stale real fast, sits on the shelf and you're not getting that value out of right. So, yeah, we've been on the journey. It's Ah, it's a long journey, but ultimately we could take a lot of those those kind of learnings and we can apply them to our silicon technology. The software optimization is that we're doing and ultimately, how we talk to our enterprise customers about how they can solve overcome some of the same challenges that we did. 
>>Well, let's talk about some of those challenges specifically because, you know, I think part of the the challenge is that kind of knocked big data, if you will in Hadoop, if you will kind of off the rails. Little bit was there's a whole lot that goes into it. Besides just doing the analysis, there's a lot of data practice data collection, data organization, a whole bunch of things that have to happen before. You can actually start to do the sexy stuff of AI. So you know, what are some of those challenges. How are you helping people get over kind of these baby steps before they can really get into the deep end of the pool? >>Yeah, well, you know, one is you have to have the resource is so you know, do you even have the resource is if you can acquire those Resource is can you keep them interested in the kind of work that you're doing? So that's a big challenge on and actually will talk about how that fits into some of the partnerships that we've been establishing in the ecosystem. It's also you get stuck in this poc do loop, right? You finally get those resource is and they start to get access to that data that we talked about. It start to play out some scenarios, a theorize a little bit. Maybe they show you some really interesting value, but it never seems to make its way into a full production mode. And I think that is a challenge that has faced so many enterprises that are stuck in that loop. And so that's where we look at who's out there in the ecosystem that can help more readily move through that whole process of the evaluation that proved the r a y, the POC and ultimately move that thing that capability into production mode as quickly as possible that you know that to me is one of those fundamental aspects of if you're stuck in the POC. Nothing's happening from this. This is not helping your company. We want to move things more quickly, >>right? Right. 
And let's just talk about some of these companies that you guys are working with that you've got some reference architectures is data robot a Grid dynamics H 20 just down the road in Antigua. So a lot of the companies we've worked with with Cube and I think you know another part that's interesting. It again we can learn from kind of old days of big data is kind of generalized. Ai versus solution specific. Ai and I think you know where there's a real opportunity is not AI for a sake, but really it's got to be applied to a specific solution, a specific problem so that you have, you know, better chatbots, better customer service experience, you know, better something. So when you were working with these folks and trying to design solutions or some of the opportunities that you saw to work with some of these folks to now have an applied a application slash solution versus just kind of AI for ai's sake. >>Yeah. I mean, that could be anything from fraud, detection and financial services, or even taking a step back and looking more horizontally like back to that data challenge. If if you're stuck at the AI built a fantastic Data lake, but I haven't been able to pull anything back out of it, who are some of the companies that are out there that can help overcome some of those big data challenges and ultimately get you to where you know, you don't have a data scientist spending 60% of their time on data acquisition pre processing? That's not where we want them, right? We want them on building out that next theory. We want them on looking at the next business challenge. We want them on selecting the right models, but ultimately they have to do that as quickly as possible so that they can move that that capability forward into the next phase. So, really, it's about that that connection of looking at those those problems or challenges in the whole pipeline. And these companies like data robot in H 20 quasi. Oh, they're all addressing specific challenges in the end to end. 
That's why they've kind of bubbled up as ones that we want to continue to collaborate with, because it can help enterprises overcome those issues more fast. You know more readily. >>Great. Well, Jeremy, thanks for taking a few minutes and giving us the Intel side of the story. Um, it's a great company has been around forever. I worked there many, many moons ago. That's Ah, that's a story for another time, but really appreciate it and I'll interview you will go there. Alright, so super. Thanks a lot. So he's Jeremy. I'm Jeff Frick. So now it's time to go ahead and jump into the crowd chat. It's crowdchat dot net slash make ai real. Um, we'll see you in the chat. And thanks for watching
Making Artificial Intelligence Real With Dell & VMware
>>Artificial intelligence. The words are full of possibility. Yet to many it may seem complex, expensive and hard to know where to get started. How do you make AI real for your business? At Dell Technologies, we see AI enhancing business, enriching lives and improving the world. Dell Technologies is dedicated to making AI easy, so more people can use it to make a real difference. You can adopt and run AI anywhere with your current skill sets, with AI solutions powered by PowerEdge servers and made portable across hybrid multi-clouds with VMware. Plus, solve I/O bottlenecks with breakthrough performance delivered by Dell EMC Ready Solutions for HPC Storage and Data Accelerator, and enjoy automated, effortless management with OpenManage systems management, so you can keep business insights flowing across a multi-cloud environment. With an AI portfolio that spans from workstations to supercomputers, Dell Technologies can help you get started with AI easily and grow seamlessly. AI has the potential to profoundly change our lives. With Dell Technologies, AI is easy to adopt, easy to manage and easy to scale. And there's nothing artificial about that. >>From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation. Hi, I'm Stu Miniman, and welcome to this special launch with our friends at Dell Technologies. We're going to be talking about AI and the reality of making artificial intelligence real. Happy to welcome to the program two of our Cube alumni: Ravi Pendekanti, senior vice president of server product management, and Thierry Pellegrino, vice president, data-centric workloads and solutions in high performance computing, both with Dell Technologies. Thank you both for joining. >>Thanks, Stu. >>So, you know, as we watch the industry:
AI has been this huge buzzword, but one of the things I've actually liked, one of the differences in what I see when I listen to the vendor community talking about AI versus what I saw too much in the big data world: it used to be, oh, there's the opportunity, and data is so important. That's all true, but it was a very wonky conversation, and the promise and the translation to the real world didn't necessarily always connect, and we saw many of the big data solutions fail over time. With AI, I've seen in meetings from Dell the talk about business outcomes, in general overall in IT, but also about how AI is helping make things real. So maybe we can start there before the product announcements and things we're going to get into. Ravi and Thierry, talk to us a little bit about the customers that you've been seeing and the impact that AI is having on their business. >>Sure, Stu, I'll take a jab at it, a couple of things. For example, if you start looking at the autonomous vehicles industry or the manufacturing industry, where people are building better tools for anything they need to do on their manufacturing floor. A good example is the autonomous vehicle makers: you've got Zenuity, which is now using our whole product suite, right from the hardware to the software, to do multiple iterations, ensuring that the software and the hardware come together pretty seamlessly and, more importantly, ingesting probably tens of petabytes of data to ensure that they've got the right training and algorithms in place. So that's a great example of how we are helping some of our customers today, ensuring that we can make AI real, in terms of moving away from just a modeling scenario into something that customers are able to use today.
>>Well, if I can add one more: Eni, one of our partners and more than just a customer, in Italy, in the energy sector, has been really driving innovation with us. We just deployed a pretty large 8,000-accelerator cluster with them, which is the largest commercial cluster in the world. What they're focusing on is digital transformation and the development of energy sources. And it's really important: the planet is not getting younger, and we have to be really careful about the types of energy that we utilize to do what we do every day, and they put a lot of innovation into that. We've helped set up the right solution for them, and we'll talk some more about what they've done with that cluster later during our chat, but it is one of the examples that is tangible, with a deployment that is being used to help there. >>Great. Well, we love starting with some of the customer stories, and really glad we're going to be able to actually hear from some of the customers a little bit later in this launch. But, Ravi, maybe give us a little bit as to what you're hearing from customers, the overall climate in AI. Obviously there are so many challenges facing people today, but specifically around AI, what are some of the hurdles that they might need to overcome to be able to make AI real? >>I think there are two important pieces, Stu. Number one: as much as we talk about AI and machine learning, one of the biggest challenges customers have today is ensuring that they have the right amount and the right quality of data to go out and do the analytics, because if you don't, it's garbage in, garbage out. So one of the biggest challenges our customers have today is ensuring that they have the most pristine data to go back on, and that takes quite a bit of effort. Number two: a lot of times, one of the challenges they also have is having the right skill set to go out and execute the AI part of the work. And I think those are the two big challenges we hear of, and that doesn't seem to be changing in the near term, given the fact that Forbes recently had an article saying that less than 15% of our customers are probably using AI and machine learning today. So that talks to the challenges and the opportunities ahead for us. >>All right. So, Ravi, give us the news. Tell us the updates from Dell Technologies, how you're helping customers with AI today. >>Going back to one of the challenges I mentioned, which is not having the right skill set: one of the things we are doing at Dell Technologies is making sure that we provide not just the products but also the Ready Solutions that we're working on, for example with Thierry and his team. We're also working on validated designs and things called reference architectures. The whole idea behind this is that we want to take the guesswork out for our customers and deliver things that we have already tested, to ensure that the integration is right and the sizing attributes are right, so they know exactly the kind of product they would pick up, and don't have to worry about the time and the resources needed to get to that particular solution. Those are probably the two biggest things we're doing to help our customers make the right decision and execute seamlessly and on time. >>Excellent. So, Thierry, maybe give us a little bit of a broader look as to Dell's participation in the overall ecosystem when it comes to what's happening in AI, and why this is a unique time for what's happening in the industry. >>Yeah, I mean, I think we all live it. I'm right here in my home, trying to ensure that the business continues to operate, and it's important to make sure that we're also there for our customers. The fight against COVID-19 is changing what's happening around the quarantines, et cetera. So Dell, as a participant not only in the AI world and in enabling AI, is also a participant in all of the communities. We've recently joined the COVID-19 High Performance Computing Consortium, and we've also made a lot of resources available to researchers and scientists leveraging AI in order to make progress toward a cure and potentially a vaccine against COVID-19. For example, we have our own supercomputers in the lab here in Austin, Texas, and we've given access to some of our partners; TGen is one example. At the beginning of our chat I mentioned Eni: not only did they deploy the cluster with us earlier this year, but when COVID-19 started hitting, they did the right thing for the community and humanity and made the resource available to scientists in Europe. And TACC, just down the road here, which has the largest academic supercomputer, which we deployed with them, is doing exactly the same thing. So these are real examples that are very timely; it's happening right now, we hadn't planned for it, and we're there with our customers. The other piece, and this is probably going to be a trend: healthcare is going through an explosion of data. You mentioned it in the beginning; we're talking about 2,300 exabytes, about 3,000 times the content of the Library of Congress. It's incredible, and that data is useless on its own. It's great that we can put it on our Isilon storage, but you can also see it as an opportunity to get business value out of it, and that's going to need a lot more resources with AI. So a lot happening here.
That's really, if I can get into more of the science of it: because it's healthcare, because of the industries we see, our family members at VMware, part of the Dell Technologies portfolio, are getting even more relevance in the discussion. The industry is based on virtualization, and VMware is the number one virtualization solution for the industry. So now we're trying to weave the reality of the IT environment together with the new needs of AI and data science and HPC. You will see that VMware just added a Kubernetes control plane to vSphere, and we're leveraging that to have a very flexible environment: on one side we can do some data science, and on the other side we can go back to running some enterprise-class software on top of it. So this is great, and we're capitalizing on it with validated solutions and validated designs. I think that's going to put a lot of power in the hands of our customers, and it's always based on their feedback. >>Yeah, if I may just build on that interesting comment that you made: we're actually looking at, and very shortly will be talking about, having the ability to, for example, preload vSphere on our servers. That essentially means that we're going to cut down the time our customers need to go ahead and deploy on their sites. >>Yeah, excellent. There's definitely been very strong feedback from the community; we did videos around some of the vSphere 7 launch. Thierry, we actually had done an interview with you a while back at your big lab; Jeff Frick got to see the supercomputers behind what you were doing. Maybe bring us inside a little bit as to some of the new pieces that help enable AI. It often gets lost on the industry; it's like, oh yeah, well, we've got the best hardware to accelerate or enable these kinds of workloads. So bring us inside: what are the engineering solution sets that are helping to make this a reality today? >>Yeah, and truly, Stu, you've been there. You've seen the engineers in the lab, and that's more than AI being real; that is doubly real, because we spend a lot of time analyzing workloads and customer needs. We have a lot of PhD engineers in there, and what we're working on right now is kind of the next wave of HPC enablement. As we all know, the consumption model, the way that we want to have access to resources, is evolving: from something that is directly in front of us, a one-to-one ratio, to a one-to-many ratio when virtualization became more prevalent. GPUs historically have been allocated on a per-user basis, or sometimes in a statically partitioned view to have more than one user per GPU. But with the addition of Bitfusion to the VMware portfolio, and Bitfusion now being part of vSphere, we're building up a GPU-as-a-service solution through a VMware Validated Design that we are launching, and that's going to give customers flexibility. And the key here is flexibility: with the VMware environment we have the ability, as you know, to also bring in some security and some flexibility through moving the workloads. And, let's be honest, it has some ties into cloud models, and we have our own set of partners; we all know the big players in the industry there too. But it's all about flexibility and giving our customers what they need and what they expect. >>Yeah. Ravi, I guess that brings us to one of the key pieces we need to look at here: how do we manage across all of these environments, and how does AI fit into this whole discussion between what Dell and VMware are doing, with things like vSphere pulling in new workloads? >>Stu, actually a couple of things. There's really nothing artificial about the real intelligence that comes through with all the artificial intelligence we're working on. One of the crucial things we need to talk about is that it's not just about a point product. We're looking at it from an end-to-end perspective: everything from ensuring that we have the right workstations, the right servers and the right storage, making sure that it is all well protected, all the way to working with an ecosystem of software vendors. So first and foremost there's the whole integration piece, making sure there is a reliable ecosystem. But more importantly, it's also ensuring that we help our customers by taking the guesswork out. Again, I can't emphasize enough that there are customers who are looking at different levels of entry. Somebody may be looking at an FPGA, somebody else at GPUs. FPGAs, as you know, are great because their price points and their power needs are a lot lower than those of GPUs. But on the flip side, there's a need for a set of folks who can actually program them; that is why they are called field-programmable gate arrays: they are programmable in the field. My point in all this is that it's important we provide the right end-to-end perspective, making sure that we're able to show the integration, show the value, and also provide the options, because it's really not a cookie-cutter approach where you can take a particular solution and think that it will fit the needs of every single customer; that doesn't even happen within the same industry, for that matter. So the flexibility that we provide, all the way to the services, is truly our attempt at Dell Technologies to give you the entire gamut of solutions, for the customer to go out and pick and choose what fits their needs best. >>Alright. Well, Ravi and Thierry, thank you so much for the update. We're going to turn it over now to actually hear from some of your customers and talk about the power of AI from their viewpoint, and how real these solutions are becoming. Loved the closing words there about enabling real artificial intelligence. Thanks so much for joining; after the customers, we're looking forward to the VMware discussion. >>We want to put robots into the world's dullest, deadliest and dirtiest jobs. We think that if we can have machines doing the work that puts people at risk, then we can allow people to do better work. Dell Technologies is the foundation for a lot of the work that we've done here. Every single piece of software that we develop is simulated dozens or hundreds of thousands of times, and having reliable compute infrastructure is critical for this. >>A lot of technology has matured to the point where it can actually do something really useful that can be used by non-experts. We try to predict when a system fails; we try to predict the business impact. >>At the end of the day, now we have machines that learn how to speak a language from zero. >>Everything we do at Epsilon is really centered around data and our ability to get the right message to the right person at the right time. We apply machine learning and artificial intelligence, so in real time you can adjust those campaigns to ensure that you're getting the most optimized message. >>Zenuity is a joint venture between Volvo Cars and Veoneer. Our focus is automated driving and advanced driver assistance systems, and it is really based on safety and how we can actually make lives better for you. Typically, drivers get tired or distracted in cars; if you can take those kinds of situations away, it will bring accidents down by about 70 to 80%. So what I appreciate about Dell Technologies is the overall solution that they have, being able to deliver the full package.
That has been a major differentiator compared to their competitors. >>Yeah, alright. Welcome back. To help us dig into this discussion, I'm happy to welcome to the program Krish Prasad, senior vice president and general manager of the vSphere business, and Josh Simons, chief technologist for the high performance computing group, both of them with VMware. Gentlemen, thanks so much for joining. >>Thank you for having us. >>All right, Krish. When VMware made the Bitfusion acquisition, everybody was looking at what this would do for vSphere, as GPUs are what we're talking about for things like AI and ML. So bring us up to speed on the news today and what VMware is doing with Bitfusion. >>Today we have a big announcement. I'm excited to announce that we're taking the next big step in our AI, ML and modern application strategy with the launch of Bitfusion, now fully integrated with vSphere, which we'll be releasing very shortly to the market. As you said, when we acquired Bitfusion a year ago, we showcased its capabilities as part of the VMworld event, and at that time we laid out a strategy with Bitfusion as the cornerstone of our capabilities in the AI and ML space. Since then, many customers have taken a look at the technology, and we have had feedback from them as well as from partners and analysts, and the feedback has been tremendous. >>Excellent. Well, Krish, what does this then mean for customers? What's the value proposition that Bitfusion brings to vSphere? >>If you look at our customers, they are in the midst of a big journey in digital transformation, and basically what that means is that customers are building a ton of applications, and most of those applications have some kind of data analytics or machine learning embedded in them. What this is doing in the hardware and infrastructure industry is driving a lot of innovation. So you see the advent of a lot of specialized hardware: there are custom ASICs, FPGAs, and of course GPUs being used to accelerate the special algorithms that these AI- and ML-type applications need. Unfortunately, in customer environments most of these specialized accelerators sit in a bare-metal kind of setup; they're not taking advantage of virtualization and everything that it brings. With the Bitfusion launch today, we are essentially doing for the accelerator space what we did for compute several years ago: essentially bringing virtualization to the accelerators. But we take it one step further, in that we give customers the ability to pool these accelerators and essentially decouple them from the server, so you can have a pool of these accelerators sitting on the network. Customers are then able to target their workloads, share the accelerators, get better utilization and a lot of cost improvements, and, in essence, have a smaller pool that they can use for a whole bunch of different applications across the enterprise. That is huge for our customers, and that's the tremendously positive feedback that we are getting, both from customers and partners. >>Excellent. Well, I'm glad we've got Josh here to dig into some of these pieces. But before we get to you, Josh: Krish, part of this announcement is the partnership of VMware and Dell, so tell us about the partnership and the solutions there. >>We have been working with Dell in the AI and ML space for a long time; we have a good partnership there. This just takes the partnership to the next level, and we will have a validated solution with support on some of the key AI/ML-targeted servers, like the C4140 and the R740. Those are the servers that we'd be partnering with them on to provide solutions. >>Excellent. So, Josh, we've watched for a long time as various technologies were said not to be a fit for a virtualized environment, and then VMware does what it does: makes sure the performance is there and makes sure all the options are there. Bring us inside a little bit: what does this solution mean for leveraging GPUs? >>So actually, before I answer that question, let me say that the Bitfusion acquisition and the Bitfusion technology fit into a larger strategy at VMware around AI and ML, one that I think matches pretty nicely with the overall Dell strategy as well, in the sense that we are really focused on delivering AI and ML capabilities, the ability for our customers to run their AI and ML workloads, from the edge to the cloud. And that means running them on CPUs, or running them on hardware accelerators like GPUs, whatever is really required by the customer. In this specific case, we're quite excited about the Bitfusion technology, as it really allows us, as Krish was describing, to extend our capabilities, especially in the deep learning space, where GPU accelerators are critically important. So what this technology really brings to the table is the ability, as Krish was outlining, to pool those hardware resources together and then allow organizations to drive up the utilization of those GPU resources through that pooling, and also to increase the degree of sharing that we can support for the customer. >>Okay, Josh, take us in a little bit further as to how the mechanics of Bitfusion work. >>Sure, that's a great question. Think of it this way: there is a client component and a server component. The server component is running on a machine that actually has the physical GPUs installed in it. The client machine, which is running the Bitfusion client software, is where the user, the data scientist, is actually running their machine learning application. But there's no GPU actually in that host. What is happening with the Bitfusion technology is that it is essentially intercepting the CUDA calls that are being made by that machine learning application and remoting them over to the Bitfusion server, then injecting them into the local GPU on the server. We call it API remoting, the ability to remote these calls, but it's actually much more sophisticated than that; there are a lot of underlying capabilities being deployed in terms of optimization, to take maximum advantage of the networking link that sits between the client machine and the server machine. But given all of that, once we've done it with Bitfusion, it's now possible for the data scientist to consume either multiple GPUs, a single GPU, or even fractional GPUs across that interconnect using the Bitfusion technology. >>Okay. Maybe it would help to illustrate some of these technologies if you've got a couple of customer examples. >>Sure. One example would be a retail customer I'm thinking of; actually, it's a grocery chain that is deploying a large number of video cameras into their stores in order to do things like watch for pilfering, identify when store shelves should be restocked, and even look for cases where, for example, a customer has fallen down in an aisle and someone needs to go and help. Those multiple video streams, and the multiple applications being run that are consuming the data from those video streams and doing analytics and ML on them, would be perfectly suited for this type of environment, where you would like to be able to have multiple independent applications running while having them efficiently share the hardware resources of the GPUs. Another example would be retailers who are deploying ML-powered checkout registers to help reduce fraud from customers who are buying things with fake barcodes, for example. In that case, you would not necessarily want to deploy a single dedicated GPU for every single checkout line. Instead, what you would prefer is a pool of resources that each inference operation occurring within each of those checkout lines can then consume collectively. Those would be two examples of the use of this kind of pooling technology. >>Okay, great. So, Josh, last question for you: is this technology only for NVIDIA GPUs, or can you give us a little bit of a look forward as to what we should be expecting from the Bitfusion technology? >>Yeah. Currently the target is specifically NVIDIA GPUs with CUDA. The team, actually even prior to the acquisition, had done some work on enablement of FPGAs and also on OpenCL, which is a more open standard for such devices. So what you will see over time is an expansion of the Bitfusion capabilities to embrace devices like FPGAs, and the domain-specific ASICs that Krish was referring to earlier will roll out over time. But we are starting with NVIDIA GPUs, which totally makes sense, since that is the primary hardware acceleration platform for deep learning currently. >>Excellent. Well, Josh and Krish, thank you so much for the updates. To the audience: if you're watching this live, please jump into the crowd chat and ask your questions. If you're watching this on demand, you can also go to crowdchat dot net slash make ai real to see the conversation that we had. Thanks so much for joining. >>Thank you very much. >>Thank you. >>Managing your data center requires around-the-clock attention. Dell EMC OpenManage Mobile enables IT administrators to monitor data center issues and respond rapidly to unexpected events, anytime, anywhere. OpenManage Mobile provides a wealth of features within a comprehensive user interface, including server configuration, push notifications, remote desktop, augmented reality and more. The latest release features an updated user interface, power and thermal policy review, emergency power reduction, and internal storage monitoring. Download OpenManage Mobile today.
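The client/server mechanism Josh describes, intercepting a client's GPU calls, remoting them to a server that owns the hardware, and letting several clients share full or fractional GPUs from a pool, can be sketched in a few lines. This is an illustrative toy, not VMware's implementation: the `GpuServer` and `RemoteGpu` classes and the first-fit memory policy are invented for the example, and the real product intercepts actual CUDA library calls and forwards them over an optimized network link.

```python
class GpuServer:
    """Stand-in for the server component: owns the physical GPUs and
    hands out (possibly fractional) shares of their memory to clients."""

    def __init__(self, num_gpus, mem_per_gpu_gb):
        # Free memory remaining on each physical GPU, in GB.
        self.free = [mem_per_gpu_gb] * num_gpus

    def allocate(self, mem_gb):
        # First-fit placement: use the first GPU with enough free memory,
        # so several clients can share one device (fractional GPUs).
        for gpu_id, free_gb in enumerate(self.free):
            if free_gb >= mem_gb:
                self.free[gpu_id] -= mem_gb
                return gpu_id
        raise RuntimeError("no GPU with enough free memory")

    def release(self, gpu_id, mem_gb):
        self.free[gpu_id] += mem_gb


class RemoteGpu:
    """Stand-in for the client component: the application sees a 'GPU',
    but every call is forwarded to the server that owns the hardware
    (the analogue of intercepting and remoting CUDA calls)."""

    def __init__(self, server, mem_gb):
        self.server, self.mem_gb = server, mem_gb
        self.gpu_id = server.allocate(mem_gb)

    def run_kernel(self, name):
        # A real implementation would serialize the call over the network;
        # here we only record where it would have executed.
        return f"{name} ran on server GPU {self.gpu_id}"

    def close(self):
        self.server.release(self.gpu_id, self.mem_gb)


# Two 8 GB clients share one 16 GB GPU; a third lands on the second GPU.
server = GpuServer(num_gpus=2, mem_per_gpu_gb=16)
a, b, c = RemoteGpu(server, 8), RemoteGpu(server, 8), RemoteGpu(server, 4)
print(a.run_kernel("matmul"))   # matmul ran on server GPU 0
print(b.gpu_id, c.gpu_id)       # 0 1
a.close()
```

The first two clients landing on the same device is the fractional-sharing case from the checkout-register example, and the third client spilling to the next device is the pooling case; closing a client returns its share to the pool for the next workload.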
SUMMARY :
the potential to profoundly change our lives with Dell Technologies. much in the big data world is you know, it used to be, you know Oh, there was the opportunity. product suite right from the hardware and software to do multiple iterations be really careful about the type of energies that we utilize to do what we do every day on You know, the overall climate in AI. is having the right skill set to go out and have the execution So, Ravi, give us the news. One of the things we are doing at Dell Technologies is making So teary, maybe give us a little bit of a broader look as to, you know, more of the science of it because it's healthcare, because it's the industry we see Yeah, I may ask you just to build on that interesting comment that you made on we're around some of the B sphere seven launch, you know, theory. We all know that the big players in the industry to But that's all about flexibility and so one of the crucial things I think we need to, you know, ensure that we talk about forward to the VM Ware discussion, we the foundation for a lot of the Every single piece of software that we developed is simulated dozens And having reliable compute infrastructure is critical for this. We try to predict one system fails. On the end of the day, now we have machines that learn how to speak a language from from So in real time you can adjust solution that they have to live in being able to deliver the full package. chief technologist for the High performance computing group, both of them with VM ware. As to you know, the news today And at that time we laid out a strategy that part of our institution as the cornerstone that diffusion brings the VC? and essentially going to be couple it from the server so you can have a pool So tell us about what the partnership is in the solutions for for this long. This just takes the partnership to the next the degree of sharing that we support that supported for the customer. 
to monitor data center issues and respond rapidly to unexpected events anytime, Power and Thermal Policy Review.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Chris | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Library of Congress | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Robbie | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
today | DATE | 0.99+ |
John | PERSON | 0.99+ |
Italy | LOCATION | 0.99+ |
Ravi | PERSON | 0.99+ |
Chris Facade | PERSON | 0.99+ |
Two | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
VM Ware | ORGANIZATION | 0.99+ |
Rob | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Krish | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
less than 15% | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
tens of petabytes | QUANTITY | 0.99+ |
90 | QUANTITY | 0.99+ |
Andi | PERSON | 0.99+ |
first | QUANTITY | 0.98+ |
19 examples | QUANTITY | 0.98+ |
Austin, Texas | LOCATION | 0.98+ |
Epsilon | ORGANIZATION | 0.98+ |
two important pieces | QUANTITY | 0.98+ |
two big challenges | QUANTITY | 0.98+ |
Forbes | ORGANIZATION | 0.98+ |
Simon | PERSON | 0.98+ |
one example | QUANTITY | 0.98+ |
about 3000 times | QUANTITY | 0.97+ |
M. Ware | ORGANIZATION | 0.97+ |
Cube Studios | ORGANIZATION | 0.97+ |
more than one user | QUANTITY | 0.97+ |
1 40 | OTHER | 0.97+ |
8000 accelerator | QUANTITY | 0.96+ |
several years ago | DATE | 0.96+ |
Advanced Driver Assistance Systems Centre | ORGANIZATION | 0.96+ |
VMware | ORGANIZATION | 0.95+ |
A year ago | DATE | 0.95+ |
six FPs | QUANTITY | 0.95+ |
Gayatri Sarkar, Hype Capital | Sports Tech Tokyo World Demo Day 2019
(rhythmic techno music) >> Hey welcome back everybody, Jeff Frick here with theCUBE. We're at Oracle Park on the shores of McCovey Cove. We're excited to be here, it's a pretty interesting event. Sports Tech Tokyo World Demo Day. It's kind of like an accelerator but not really, it's kind of like Y Combinator but not really, it's a little bit different. But it's a community of tech start-ups focusing on sports with a real angle on getting beyond sports. We're excited to have our next guest, who's an investor and also a mentor, really part of the program to learn more about it, and she is Gayatri Sarkar, the managing partner from Hype Capital. Welcome. >> Thank you. Thank you for inviting me here. >> Pretty nice, huh? (laughs) >> Oh, I just love the view. >> So you said before we turned on the cameras, well first off Hype Capital, what do you guys invest in? What's kind of your focus? >> So Hype Capital is part of one of the biggest ecosystems in sports, which is Hype Sports Innovation. We have 13 accelerators all around the world. We are just launching the world's first E-sports accelerator with Epsilon and SK Gaming, one of the biggest gaming companies. So we have been part of the ecosystem for a pretty long time. And now we have Hype Capital, a VC fund investing in Europe, Israel, and now in the U.S. >> So you mentioned that being a mentor is part of this organization. It's something special. I think you're the first person we've had on who's been a mentor. What does that mean, what does that mean for you? But also what does it mean for all the portfolio companies? >> Sure, I'm a mentor at multiple accelerators. But being a part of Sports Tech Tokyo, I saw the very inclusive community that is created by them and the opportunity to look at various portfolio companies, including our own portfolio companies as part of it. One of our portfolio companies where we had the lead investors, 'Fun with Balls,' they're part of this. >> What's it called, Fun with Balls? 
>> Fun with Balls, very interesting name. >> Good name. (laughs) >> Yeah, they're from Germany and they came all the way from Germany to here. So, yeah, I'm very excited, because as I said it's an inclusive community, and sports is big. So we are looking at opportunities with deep-techs, where it can be translated into various other verticals, but sports can also be one of the use cases, and that's our focus as investors. >> Right, you said your focus was really on AI, machine learning, you have a big data background, a tech background. So when you look at the application of AI in sports, what are some of the things that you get excited about? >> Yeah, so for me when I'm looking at investments, definitely the diversification of the sports portfolio. How can I build my portfolio from esports, gaming, behavioral science in sports to AI, ML, AR, opportunities in material science and various other cases. Coming back to your question, it's like how can I look into the market and see the opportunities that, okay, can I invest in this sector? Like what's the next big trend? And that's where I want to invest. Obviously, product/market fit, promise/market fit, because there's a fan engagement experience that you get in sports, not in any other market, the network effect is huge, and I think that's why VCs are very excited about sports and I think this is right now the best time to invest in sports. >> So promise/market fit, I've never heard that before, what does that mean when you say promise/market fit? >> Interesting question, so promise/market fit was coined by Union Square Ventures, the VC fund. And they think that where there's the network effect, your engagement with your consumers, with your clients, and with your partners can create a very loyal fan base, and I think that is very important. You may see that in other technology sectors, but no, it is completely unparalleled when it comes to sports. 
So, I request all the technologies that are actually trying to build their use cases, they should focus on sports, because the fan engagement, the loyal experience, the opportunities, you will not get anywhere else. >> Right. >> And I think this is the market that I, and other investors, are looking for, that if deep-tech investors and deep-tech technologies are coming into this market, we see the sports ecosystem not to be a trillion dollar but a multi-trillion dollar market. >> Right, but it's such a unique experience though, right? I mean some people will joke that fans don't necessarily root for the team, they root for the jersey, right? The players come and go, we're here at Oracle Park which was AT&T Park, which was SBC Park, which was, I can't even remember, Pac Bell I think as well. So you know, is it reasonable for a regular company that doesn't have this innate connection to a fanbase that a lot of sports organizations do, that's historical, and family-based, and has such deep roots that can survive maybe down years, can survive a crappy product, can survive kind of the dark days, and generally they'll be there when things turn back around. Is that reasonable for a regular company, to get that relationship with the customer? >> So, you asked me one of the most important questions in the investor's relationship, or investor's life, which is the cyclicality of the industry, and I feel like sports is one industry that has survived the cyclicality of that industry. Because, as you say, a crappy product will not survive, you have to focus on customer service, so you have to focus, that, okay, even if you have the best product in the world, how can I make my product sticky? These are the qualities that we are looking into when we are investing in entrepreneurs. But the idea is that if we are targeting startups and opportunities, our focus is that okay, you may have the world's best product, but the founders should have the ability to understand the market. 
Okay, there are opportunities, if you look at Facebook, if you look at various other companies, they started with a product that was maybe like, okay, friend site, dating site, and they pivoted, so you need to understand the economy, you need to understand the market, and I think that's what we are looking into in the entrepreneurs. And, to answer your question, the family offices, they are actually part of this whole startup ecosystem, they are saying if there is an opportunity, because they are big, they are giant, and they are working with legacy techs like Microsoft, Amazon. It's very difficult for the legacy techs to be agile and move fast, so it's very important for them if they can place themselves at a 45 degree angle with the startup ecosystem, and they can move faster. So that's the opportunity for them in the sports startup ecosystem. >> All right, well Gayatri thanks for taking a few minutes and hopefully you can find some new investments here. >> No, thank you so much, thank you so much for your time. >> Absolutely, she's Gayatri, I'm Jeff, you're watching theCUBE, we are at Oracle Park on the shores of historic McCovey Cove. I got to get together with Big John and practice this line. Thanks for watching, and we'll see you next time. (rhythmic techno music)
SUMMARY :
really part of the program to learn more about it Thank you for inviting me here. So Hype Capital is part of one of the biggest ecosystems So you mentioned that being a mentor and the opportunity to look at various portfolio companies (laughs) one of the use cases, and that's our focus as investors. So when you look at the application of AI in sports and I think this is right now the best time to the opportunities, you will not get anywhere else. And I think this is the market that I, and other investors root for the team, they root for the jersey, right? and they pivoted, so you need to understand the economy and hopefully you can find some new investments here. thank you so much for your time. I got to get together with Big John and practice this line
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Europe | LOCATION | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Hype Capital | ORGANIZATION | 0.99+ |
Germany | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
U.S. | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Gayatri | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Oracle Park | LOCATION | 0.99+ |
13 accelerators | QUANTITY | 0.99+ |
Gayatri Sarkar | PERSON | 0.99+ |
SK Gaming | ORGANIZATION | 0.99+ |
Israel | LOCATION | 0.99+ |
ORGANIZATION | 0.99+ | |
The Cube | TITLE | 0.99+ |
Epsilon | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Hype Sports Innovation | ORGANIZATION | 0.98+ |
multi-trillion dollar | QUANTITY | 0.97+ |
Sports Tech Tokyo | ORGANIZATION | 0.95+ |
first | QUANTITY | 0.94+ |
45 degree angle | QUANTITY | 0.93+ |
Sports Tech Tokyo World Demo Day 2019 | EVENT | 0.93+ |
McCovey Cove | LOCATION | 0.93+ |
Sports Tech Tokyo World Demo Day | EVENT | 0.91+ |
trillion dollar | QUANTITY | 0.9+ |
first person | QUANTITY | 0.84+ |
Union Square Venture VC | ORGANIZATION | 0.83+ |
theCube | ORGANIZATION | 0.82+ |
first E-sports accelerator | QUANTITY | 0.8+ |
McCovey | LOCATION | 0.8+ |
one industry | QUANTITY | 0.79+ |
Big | PERSON | 0.74+ |
&T park | ORGANIZATION | 0.73+ |
Y Combinator | ORGANIZATION | 0.67+ |
Park | ORGANIZATION | 0.65+ |
VVC | ORGANIZATION | 0.62+ |
SBC | LOCATION | 0.6+ |
AT | LOCATION | 0.6+ |
Cove | ORGANIZATION | 0.51+ |
John | LOCATION | 0.44+ |
theCUBE Insights from VMworld 2018
(upbeat techno music) >> Live from Las Vegas, it's theCUBE covering VMworld 2018, brought to you by VMware and its ecosystem partners. >> Welcome back to theCUBE, I am Lisa Martin with Dave Vellante, John Furrier, Stu Miniman at the end of day two of our continuing coverage, guys, of VMworld 2018, huge event, 25+ thousand people here, 100,000+ expected to be engaging with the on demand and the live experiences. Our biggest show, right? 94 interviews over the next three days, two of them down. Let's go, John, to you, some of the takeaways from today from the guests we've had on both sets, what are some of the things that stick out in your mind? Really interesting? >> Well we had Michael Dell on so that's always a great interview, he comes on every year and he's very candid and this year he added a little bit more color commentary. That was great, it was one of my highlights. I thought the keynote that Sanjay Poonen did, he had an amazing guest, Nobel Peace Prize winner, the youngest ever, and her story was so inspirational and I think that sets a tone for VMware putting a cultural stake in the ground around tech for good. We've done a lot of AI for good with Intel and there's always been these initiatives, but I think there's now a cultural validation that people generally want to work for and buy from companies that are mission driven, and mission driven is now part of it and people can be judged on that front, so it's good to see VMware get some leadership there and put the stake in the ground. I thought that was the big news today, at least from my standpoint. The rest were like point product announcements. Sanjay Poonen went into great detail on that. 
Pat Gelsinger also came on, another great highlight, and again we didn't have a lot of time, he was running a bit late, he had a tight schedule, but it shows how smart he is, he's really super technical and he actually understands at a root level what's going on, so he's actually a great CEO right now, the financial performance is there and he's also very technical, and I think it encapsulates all of it that Dell Technologies, under Michael Dell, he's making so much more money, he's going to be richer and richer. (laughing) He took an entrepreneurial bet, it wasn't hurting at the time but Dell was kind of boring, Dave. I wouldn't call it like an innovative company at the time when they were public using the 90 day shot clock. They had some things going on but they were a hardware company, a supplier to IT footprints-- >> Whoa, whoa, they were 60 billion dollars in revenue and a 20 billion dollar market cap, so something was broken. >> Well I mean it was working numbers wise but he seemed-- >> No that's opposite, a 20 billion dollar value on 60 billion of revenue is, you're sort of a failure, so anyway, at the time. >> Market conditions aside, right, at the time, he seemed like he wanted to do something entrepreneurial and the takeaway from my interview with him, our interview with him, was he took an entrepreneurial bet, put his own cash on the table, and it's paying off, that horse is coming in. He's going to make more money on this transaction and takes EMC out of the game, folds it into the operations, it really is going to be, I think, a financial success story if market conditions continue to be the way they are. This will go down as a great financial maneuver by Michael Dell and he'll be in the top epsilon of deals. >> The story people might forget is that Carl Icahn tried to take the company away from him. Michael Dell beat the great Carl Icahn, which doesn't happen often. Why did Carl Icahn want to take Dell private? 
Because he knew he could make a boatload of money off of it and Michael Dell said, "No way you're taking my company. "I'm going to do my thing and change the industry." >> He's going to have 90% voting control with Silver Lake Partners when the deal is all said and done, and taking a company private and pulling off the financial engineering plus the execution is really hard to do, look at Elon Musk in the news today. He's trying to take Tesla private, he got his butt handed to him. Now he's saying, "No, we're going to stay public." (laughing) >> Wait, guys, are you saying Michael, after he gets all this money from VMware that it will help them go public, he's not going to sell off VMware or get rid of that, right? >> Well that's a joke that he would sell VMware, I mean-- >> Unless the cash is going to be good? >> No, he won't do it. >> I don't think it'll happen. I mean, maybe some day he sells some portion of it, but you're not going to give up control of it, why would he? It's throwing off so much cash. He's got Silver Lake as a private equity company, they understand this inside and out. I mean this transaction goes down in history as one of the greatest trades ever. >> Yeah. >> Let me ask you guys a question, because I think it's one we brought up in the interview, because at that time, the pundits, we were actually right on this deal. We were very bullish on it, and we actually analyzed it. You guys did a good job at Wikibon and we on theCUBE pretty much laid out what happened. He executed it, we put the risks out there, but at the time people were saying, "This is a bad deal, EMC." The current state of IT at that time looked like it was dismal, but the market forces that changed were cloud, and so what were those sideways impact points that no one understood, that really helped him lift this up? What are your thoughts, Dave, on that? 
>> First of all the desktop business did way better than anybody thought it would, which is amazing, and actually EMC did pretty poorly for a while and so that was kind of a head fake. And then as we knew, VMware crushed it and crushed it even more than anybody expected, so that threw off so much cash they were able to deliver, they did Pivotal, they did a Pivotal IPO, sold some software assets. I mean basically Michael Dell and his team did everything they said they were going to do and it's worked out, as he said today, even better than they possibly thought. >> Well and the commentary I'd give here is when the acquisition of EMC by Dell happened, the big turn we had is the impact of cloud and we said, "Well, okay they've got VMware over there "and they've got Pivotal but Dell's "just going to be a boring infrastructure company "with server, network and storage." The message that we heard at Dell World, and maturing even more here, is this portfolio of families. Yes, VMware's a big piece of it, NSX and the networking, but Pivotal with PKS, all of those tie in to what Dell's selling. Every time they're selling VxRail, you know that has a big VMware piece. They do the networking piece that extends across multi clouds, so Dell has a much better multi cloud story than I expected them to have when they bought EMC. >> But now, VMware hides a lot of warts. >> Yeah. >> Right? >> Absolutely. >> Let's be honest about that. >> What are they? >> Okay. I still think the client business is exposed. I mean as great as it is, you got to gain share in that business if you want to keep winning, number one. Number two is, the big question I have is can the core of Dell EMC continue to innovate, or will it just make incremental improvements, have to do acquisitions to do innovation, inorganic acquisitions, and end up with more stovepipes? That's always been, Stu used to work there, that was always EMC's biggest challenge. 
Jeff Clark came in and said, "Okay, we're going to rationalize the portfolio." That has backlash as customer's say, "Well wait a minute, does that mean "you're not going to support my products?" No, no, we're going to support your products. So they've got to continue to innovate. As I say, VMware, because of how much cash it throws off, it's 50% of the company's profits, hides a lot of those exposures. >> And if VMware takes a turn, if market conditions change, the debt looming is exposed so again, the game's not over for Dell. He can see the finish line, but. (laughing) >> Buy low, sell high, guess who's selling right now? >> So a lot of financial impact, continued innovation but at the end of the day, guys, this is all about impacting customer's businesses. Not just from we've got to enable them to be successful in this multi cloud era, that's the norm today. They need to facilitate successful digital transformations, business outcomes, but they also have VMware, Dell EMC, Dell Technologies, great power to help customer's transform their cultures. I'd love to get perspective from you guys because I love the voice to the customer, what are some of your favorite Dell EMC, VMware, partner, customer stories that you've heard the last couple days that really articulate the value of this financial successful company that they're achieving? >> Well the first thing I'll say before we get to the customer stories is on your point about what VMware's doing, is they're a technology, Robin Matlock, the CMO was on theCUBE talking about they're a technology company, they have the hands on labs, they're a very geeky audience, which we love. But they have to get leadership on the product side, they got to maintain the R and D, they got to have best in class technical products that actually are relevant. You look at companies like Tintri that went bankrupt, great technology, cul-de-sac market. There's no market there, the world's going cloud. 
So to me VMware has to start pumping out really strong products and technologies that the customer's are going to buy, right? (laughing) >> In conjunction with the customer to help co-develop what the customer's need. >> So I was talking to a customer and he said, "Look, I'm 10 years behind where the cloud guys are "with Amazon so all I want is VMware "to make my life easier, continue to cut my costs. "I like the way I'm operating, "I just get constant pressure to cut cost, "so if they keep doing that, I'm going to stay with them "for a long, long time." Pete Townsend said it best, companies like VMware, Dell EMC, they move at the speed of the CIO and as long as they can move at the speed of the CIO, I've said this a million times, the rich get richer and it's why competent management that led by founders like Larry Ellison, like Michael Dell, continue to do well in this industry. >> And Andy Jassy technically, I would say, a found of AWS because he started it. >> Absolutely. >> A key, the other thing I would also say from a customer, we hear a lot of customer, I won't name names because a lot of our data's in hallway conversations and at night when we go out and get the real stories. On theCUBE it's mostly, oh we've been very successful at VM, we use virtualization, blah, blah, blah and it's an IT story, but the customers in the hallways that are off the record are saying essentially this, I'm paraphrasing, look it, we have an operation to run. I love this cloud stuff and I'd love to just blink my fingers and be in the cloud and just get rid of all this and operate at a level of cloud native, I just can't. I can't get there. They see Amazon's relationship with VMware as a bridge to the future and takes away a lot of cognitive dissonance around the feelings around VMware's lack of cloud, if you will. 
In this case, now that's satisfied with the AWS deal and they're focused on operations on premises and how to get their app more closed, like modernize so a lot of the blocking and tackling of the customer is I got virtualization and that's great but I don't want to miss out on the next lever of innovation. Okay, I'm looking at it going slow but no one's instantly migrating to the cloud. >> No way, no way. >> They're either born in the cloud or you're on migration schedules now, really evaluating the financial impact, economic impact, headcount impact of cloud. That's the reality of the cloud. >> You got to throw a flag on some of that messaging of how easy it is to migrate. I mean it's just not that easy. I've talked to customers that said, "Well we started it and we just kind of gave up. "There was no point in it. "The new stuff we're going to do in the cloud, "but we're not going to migrate all of our apps to the cloud, "it just makes no sense, there's no business case for it." >> This is where NSX and containers and Kubernetes bet is big, I think, I think if NSX can connect the clouds with some sort of interoperable layer for whatever workloads are going to move on either Amazon or the clouds, that's good. If they want to get the developers off virtualization, into a new drug, if you will, it's going to be services, micro services, Kubernetes because you can throw containers around those old workloads, modernize with the new stuff without killing the old and Stu and I heard this clear at the CNCF and the Lennox Foundation, that this has changed the mindset because you don't have to kill the old to bring in the new. You can bring in the new, containerize the old and manage on your speed of the CIO. >> And that's Amazon's bet isn't it? I mean, look, even Sanjay even said, if you go back five, six years, the original reinvent that was sweep the floor, bring it all into the cloud? I think that's in Amazon's DNA. I mean ultimately that's their vision. 
That's what they want to have happen and the way they get there is how you just described it, John. >> That's where this partnership between Amazon and VMware is so important because, right, Amazon has a lot of the developers but needs to be able to get deeper into the enterprise and VMware, starting to make some progress with the developers, they've got a code initiative, they've got all of these cool projects that they announced with everything from server less and Kubernetes and many others, Edge going to be a key use case there but you know, VMware is not, this is not the developer show. Most of the conversations that I had with customers, we're talking IT things, I mean customers doing some cool things but it's about simplifying in my environment, it's about helping operations. Most of the conversations are not about this cool new micro services building these things out. >> Cisco really is the only legacy, traditional enterprise company that's crushing developers. You give IBM some chops, too, but I wouldn't say they're crushing it. We saw that at Cisco Live, Cisco is doing a phenomenal job with developers. >> Well the thing about the cloud, one thing I've been pointing out, observation that I have is if you look at the future of the cloud and you can look for metaphors and/or real examples, I think Amazon Web Services, obviously we know them well but Google Cloud to me is a picture of the future. Not in the sense of what they have for the customer's today it's the way they've run their business from day one. They have developers and they have SREs, Site Reliability Engineers. This VMworld community is going down two paths. Developers are going to be rapidly iterating on real apps and operators who are going to be running systems. That's network storage, all integrated. That's like an SRE at Google. 
Google's running massive scale and they perfected it, hence Kubernetes, hence some of the tools coming into services like Istio and things that we're seeing in the Linux Foundation. To me that's the future model, it's an operator and a set of developers. Whoever can make that easy, completely seamless, is the winner of it all. >> And the linchpin, a linchpin, maybe not the linchpin, but a linchpin is still the database, right? We've seen that with Oracle. Why is Amazon going so hard after the database? I mean it's blatantly obvious what their strategy is. >> Database is the hill that everyone is trying to take down. Capture the hill, you get the high ground with the database. >> Come on Dave, when you used to do the financial models of how much money is spent by the enterprise, that database was a big chunk. We've seen the erosion of lots of licensing out there. When I talked to Microsoft, they're like, pushing a lot of open source, they're going to cloud. Microsoft licensing isn't as much. VMware licensing is something that customers would like to shrink over time, but database is even bigger. >> It's a strategic fulcrum, obviously Oracle has it. Microsoft clearly has it with SQL Server. IBM, a big part of IBM's success to this day, is DB2 running on mainframe. (laughing) So Amazon wants a piece of that action, they understand to be a major player in this business you have to have database infrastructure. >> I mean costs are going down, it's going to come down to economics. At the end of the day the operating models, as I said, some things about DB2 on mainframe, the bottom line's going to come down to the cost numbers to run it, at the value and cost expense involved in running the tech. That's going to be the ultimate way that things are either going to be cleared out or replaced or expanded, so the bottom line is it's going to be a cost equation at that level and then the upside's going to be revenue. 
>> And just a great thing for VMware, since they don't own the application, when they do things like RDS in their environment they are freeing up dollars that customers are then going to be more likely to want to spend with VMware. >> Great point. I want to make real quick, three things we've been watching this week. Is the Amazon VMware deal a one way trip to the cloud? I think it's clear not in the near term, anyway. And the second is what about the edge? The edge to me is all about data, it's like the wild, wild west. It's very unclear that there's a winner there, but there's a new type of cloud emerging. And three is the Dell structure. We asked Pat, we asked VMware's Ray O'Farrell, we asked Michael, if that 11 billion dollar special dividend was going to impact VMware's ability to fund its future? Consistent answer there, no. You know, we'll see, we'll see. >> I mean what are they going to say? Yeah, that really limits my ability to buy companies, on theCUBE? No, that's the messaging so of course, 11 billion dollars gone means they can't do M&A with the cash, that means, yeah it's going to be R and D, what does that mean? Investment, so I think the answer is yes, it does limit them a little bit. >> Has to. >> It's cash going out the door. >> But VMware just spent, it is rumored, around 500 million dollars for CloudHealth Technologies, Dave, a Boston based company with about 200 people. You know, hey, have a billion-- >> They're going to put back a dividend anyway and do stock buybacks, but I'm not sure 11 out of the 13 billion is what they would choose to do that for, so going forward, we'll see how it all plays out, obviously. I think, Floyer wrote about this, more has to go toward VMware, less toward-- >> I think it's the other way around. >> Well I think it's really good that we have one more day tomorrow. 
>> I think it's a one way trip to the cloud in a lot of instances, I think a lot of VMware customers are going to go off virtualization, not hypervisor and end up being in the cloud most of the business. It's going to be interesting, I think the size of customers that Amazon has now, versus VMware is what? Does VMware have more customers than Amazon right now? >> It's pretty close, right? VMware's 500,000? >> 500,000 for VMware. >> And Amazon's-- >> Over a million. >> Are they over a million, really? >> Yeah. >> A lot of smaller customers, but still. >> Yeah. >> Customer's a customer. >> But VMware might have bigger customers, see that's-- >> No question the ASP is higher, but-- >> It's not conflict, I'm just thinking like cloud is natural, right? Why wouldn't you want to use the cloud, right? I mean. >> So guys-- >> So the debate continues. >> Exactly. Good news is we have more time tomorrow to talk more about all this innovation as well as see more real world examples of how VMware is going to be enabling tech for good. Guys, thanks so much for your commentary and letting me be a part of the wrap. >> Thank you. >> Thanks, Lisa. >> Looking forward to day three tomorrow. For Dave, Stu and John, I'm Lisa Martin. You've been watching our coverage of day two VMworld 2018. We look forward to you joining us tomorrow, for day three. (upbeat techno music)
SUMMARY :
brought to by VMware and and the live experiences. and put the stake in the ground. and a 20 billion dollar market so anyway, at the time. and he'll be in the top epsilon of deals. and change the industry." Elon Musk in the news today. sells some of the portion of it but at the time people were saying, First of all the desktop business Well and the commentary I'd give here it's 50% of the company's profits, He can see the finish that really articulate the value that the customer's are going the customer's need. "I like the way I'm operating, I would say, a found of AWS and be in the cloud in the cloud or you're on all of our apps to the cloud, the old to bring in the new. and the way they get there is how you Amazon has a lot of the developers Cisco really is the only legacy, Not in the sense of what they a linchpin, maybe not the linchpin, Database is the hill that We've seen the erosion of success to this day, the bottom line's going to come down to are then going to be more And the second is what about the edge? No, that's the messaging so of course, out of the 13 billion is that we have one more day tomorrow. cloud most of the business. to use the cloud, right? and letting me be a part of the wrap. We look forward to you joining
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Michael | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Jeff Clark | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Sanjay Poonen | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Pete Townsend | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Michael Dell | PERSON | 0.99+ |
60 billion | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Robin Matlock | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
CloudHealth Technologies | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
Carl Icahn | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Lennox Foundation | ORGANIZATION | 0.99+ |
13 billion | QUANTITY | 0.99+ |
Sanjay | PERSON | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
Silver Lake Partners | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
90 day | QUANTITY | 0.99+ |
94 interviews | QUANTITY | 0.99+ |
Floyer | PERSON | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |