Platform for Photonic and Phononic Information Processing


 

>> Thank you for coming to this talk. My name is Amir Safavi-Naeini; I'm an Assistant Professor in Applied Physics at Stanford University, and today I'm going to talk about a platform that we've been developing here that allows for quantum and classical information processing using photons and phonons, or mechanical motion. First I'd like to start off with a picture of the people who did the work: these are graduate students and postdocs in my group. In addition, I want to say that a lot of the work, especially on poling of the lithium niobate, was done in collaboration with Martin Fejer's group, in particular Dr. Langrock, Jata Mishra, and Marc Jankowski.

Now, our goal is to realize a platform for quantum-coherent information processing that enables functionality which currently does not exist in other available platforms. In particular, we want a very low-loss nonlinearity that is strong and can be dispersion-engineered to be made broadband. We'd like to make circuits that are programmable and reconfigurable, and that necessitates having efficient modulation and switching. And we'd also really like a platform that can leverage some of the advances in superconducting circuits, to enable large-scale programmable dynamics between many different oscillators on a chip. So in the next few years, what we're really hoping to demonstrate are few-photon optical nonlinear effects, by pushing the strength of these nonlinearities and reducing the amount of loss, and also these coupled systems of a qubit and many oscillators.

Now, the material system that we think will enable a lot of these advances is based on lithium niobate. Lithium niobate is a ferroelectric crystal; it's used very widely in optical components, in acousto-optics, and in surface acoustic wave devices. Being ferroelectric, it has a built-in polarization, and that enables a lot of very useful effects, including the piezoelectric effect and electro-optic effects. It also has a very large χ⁽²⁾ optical nonlinearity, so it allows for three-wave mixing. It does have some effects that are not so great, for example pyroelectricity, but because it's a very established material system, there are a lot of tricks for dealing with the less attractive parts of this material. Now, most surface acoustic wave or optical devices that you would find are based on bulk lithium niobate crystals: they either use surface acoustic waves that propagate on a surface, or bulk waves propagating through a whole crystal, or they have a weakly guided, low-index-contrast waveguide that's patterned in the lithium niobate. This was the case until just a little over a decade ago, when work from ETH Zurich showed that thin-film lithium niobate can be bonded and patterned, and that photonic circuits, very similar to existing circuits made from III-Vs or silicon, can be implemented in this material system. And this really led to a lot of efforts from different labs. I would say the major breakthrough came just a few years ago from Marko Lončar's group, where they demonstrated that high quality factors are possible to realize in this platform: they showed resonators with quality factors in the tens of millions, corresponding to linewidths of tens of megahertz, or losses of just a few dB per meter.
And so that really changed the picture. A little bit after that, in collaboration with Martin Fejer's group at Stanford, they were able to demonstrate poling, and therefore very large, dispersion-engineered nonlinear effects in these types of waveguides. And that showed that very new types of circuits are possible on this platform.

Now, our approach is very similar. We have a thin film of lithium niobate, but this time it's on sapphire instead of oxide or some polymer. Sometimes we put oxide, silicon oxide, on top, and we can also put down electrodes; these electrodes can be made out of a superconductor like niobium or aluminum, or they can be gold, depending on what we're trying to do. The important thing here is that the large index contrast means that light is guided in a very highly confined waveguide, and it supports bends with small bending radii. That means we can have resonators that are very small: the mode volume of the photonic resonators can be very small, and as is well known, the interaction rate scales as one over the square root of the mode volume. So we're talking about an enhancement of around six orders of magnitude in the interaction strength over systems using bulk components, in a circuit that's sub-millimeter in size and made on this platform.

Now, interaction strength is important, but quality factor is also very important: when you make these things smaller, you don't want to make them much lossier. If you look, for example, at the second harmonic generation efficiency in these types of resonators, it scales essentially as Q to the power of three, so you win a lot by going to low-loss circuits. Loss and nonlinearity are material and waveguide properties that we can engineer, but careful design of these circuits is also very important. For example, because these are highly confined waves in dielectric waveguides, they can support several different orders of modes, especially if you're working with broadband light waves that span an octave. And when you try to couple light in and out of these structures, you have to be very careful that you're only picking up the polarizations that you care about, and that you're not inducing extra loss channels that effectively reduce the Q: even though there's no material loss, these parasitic couplings can lead to lower Q. So the design is very important. This plot shows the ratios of extrinsic to intrinsic coupling that are needed to achieve very high-efficiency SHG, and the same consideration applies to optical parametric oscillation. You have to work in a regime where the extrinsic couplings are much larger than the intrinsic couplings, and this is generally true for any type of quantum operation that you want to do. So low material loss by itself isn't enough; the design is also very important.
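As a rough guide to those two scalings (a textbook sketch of cavity-enhanced three-wave mixing, not formulas taken from the slides):

$$ g \;\propto\; \frac{\chi^{(2)}}{\sqrt{V}}, \qquad \eta_{\mathrm{SHG}} \;=\; \frac{P_{2\omega}}{P_{\omega}^{2}} \;\propto\; g^{2}\,Q_{\omega}^{2}\,Q_{2\omega} \;\sim\; \frac{Q^{3}}{V}, $$

so shrinking the mode volume and raising the quality factors compound: the large gain in interaction strength from a small mode volume is only useful if the Qs don't collapse along with the size.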
In terms of where we are on these three important aspects, getting large g, large Q, and large κ: we've been able to achieve high Q in these structures; this is a Q of a couple million. In a broad transmission spectrum through a grating coupler, you can see very evenly spaced modes, showing that we're only coupling to one mode family, and you can see that the depth of the modes is also very large, 90% or more, which means that the ratio of extrinsic to intrinsic coupling is also very large. So we've been able to engineer these devices to achieve this. In terms of the interaction, I won't go over it too much, but in collaboration with Martin Fejer's group we were able to pole both lithium niobate on insulator and lithium niobate on sapphire, and we see very efficient, high-slope-efficiency second harmonic generation, approaching 5000% per watt-centimeter-squared for 1560 nm to 780 nm conversion. So this is all work in progress.

For now, I'd like to talk a little bit about the integration of acoustic and mechanical components. First of all, why would we want to integrate mechanical components? Well, there are lots of cases where, for example, you want to have extremely high-extinction switching functionality. That's very difficult to do with electro-optics, because you need to control the phase with extreme precision; you would need very long resonators and/or large voltages, and it becomes very difficult to achieve 60 dB of switching. Mechanical systems, on the other hand, can have very small mode volumes and can give you 60 dB switching without too many complications. Of course, the drawback is that they're slower, but for a lot of applications that doesn't matter too much.

In terms of being able to integrate MEMS switching and tuning with this platform, here's a device that achieves that. Each of these beams is actuated through the piezoelectric effect in lithium niobate, via this pair of electrodes that we put a voltage across. These have been designed to leverage one of the off-diagonal terms in the piezoelectric tensor, which causes bending, and this bending generates a very large displacement at the center of the beam. This beam, you might notice, is composed of a grating, and the grating effectively forms a photonic crystal cavity: it generates a localized optical mode in the center which is very sensitive to these displacements. What we're able to see in this system is that 50 millivolts shifts the resonance frequency by much more than a linewidth; just a few millivolts is enough to shift by a linewidth, and so to achieve switching. We can also tune this resonance across the full telecom band, and these types of devices, whether in waveguide or resonator form, can be extremely useful for phase control in a large-scale system, where you might want many, many phase shifters on a chip to control phases with low loss; because these waveguides are shorter, you have lower loss propagating across them.

Now, these interactions are fairly low frequency. When we go to higher frequency, we can use the electro-optic effect. And the electro-optic effect, even though it's very widely used and well known, has interesting consequences and device opportunities on a photonic circuit, like these lithium niobate photonic circuits, that don't exist in bulk devices. So, for example, let's look at single-sideband modulation. This is what a standard electro-optic single-sideband modulator looks like: you take your light, you split it into two parts, and then you modulate each of these arms with RF tones that are out of phase with each other.
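To see what the recombination described next actually does (a small-modulation-depth sketch, not a calculation from the talk): drive the two arms in quadrature and recombine them with a 90° optical offset,

$$ E_{\mathrm{out}} \;\propto\; e^{i\omega t}\!\left[e^{\,i m\cos\Omega t} + i\,e^{\,i m\sin\Omega t}\right] \;\approx\; (1+i)\,e^{i\omega t} \;+\; i\,m\,e^{\,i(\omega+\Omega)t}, $$

to first order in the modulation depth m: the two (ω − Ω) sideband amplitudes cancel while the (ω + Ω) amplitudes add (biasing the interferometer appropriately can suppress the residual carrier as well). The canceled sideband's power is not rerouted anywhere useful; it is simply lost, which is the fundamental inefficiency discussed next.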
And so now you generate sidebands on both arms, and because they're modulated out of phase, when they are recombined on the output splitter of this Mach-Zehnder interferometer, you end up dropping one of the sidebands and the pump, and you end up with a single shifted sideband. So that's possible: you can do single-sideband modulation with an electro-optic device. But the caveat is that this is fundamentally lossy. You have generated the other sideband via modulation, and that sideband is simply being lost due to interference; it's getting scattered away, because there's no mode that it can be routed to. So this gives an efficiency below −3 dB, usually much worse than that. And that's fine if you just have one of these single-sideband modulators, because you can always amplify or send more power; but if you're talking about a system where you have many of these and you can't put amplifiers everywhere, or you're working with quantum information, where loss is particularly bad, this is not an option.

Now, when you use resonators, you have another option. Here's a device that demonstrates this. These are two resonators that are brought into each other's near field, so they're coupled where they approach each other, which causes a splitting. A DC voltage tunes one of these resonators, by changing its effective path length and therefore its frequency, and we see an anti-crossing between the two modes. At the center of this splitting (this is plotted versus voltage; the splitting occurs at around 15 volts here) we see two resonances, two dips, when we probe the light transmitted through the device. Now, if we send in the pump resonant with one of these modes, and we modulate at the difference frequency, we generate the red sideband, but we actually don't generate the blue sideband, because there's no optical density of states there; that other sideband is simply not generated. This system is now much more efficient. In fact, Marko Lončar's group has demonstrated that you can get a hundred percent conversion, and we've also demonstrated this in a similar experiment, showing that you can get very large sideband suppression: more than 30 dB suppression of the unwanted sidebands with respect to the sideband that you care about. It's also interesting that these interactions preserve quantum coherence, and this is one path to creating links between superconducting microwave systems and optical components, because the microwave signal that's scattered here preserves its coherence.

We've also been able to do acousto-optic interactions at these high frequencies. This is an acousto-optic modulator that operates at a few gigahertz. Basically, you generate an electric field here, which launches a propagating wave inside this transducer made out of lithium niobate; these are aluminum electrodes on top. The phonons are focused down into a small phononic waveguide that guides mechanical waves, and then they are brought into this crystal area where the sound and the light are both confined to wavelength-scale mode volumes and interact very strongly with each other. And the strong interaction leads to very efficient, effective electro-optic modulation.
So here we've been able to see, with just a few microwatts of power, many, many sidebands being generated. This acts effectively like an electro-optic modulator where the V_π is a few thousandths of a volt, instead of the several volts of an off-the-shelf electro-optic modulator. And importantly, we've been able to combine these photonic and phononic circuits in the same platform, the same lithium niobate on sapphire platform. This is an acoustic transducer that generates mechanical waves that propagate in this lithium niobate waveguide (you can see them here), and we can now make phononic circuits. This is a ring resonator, a ring resonator for phonons. We send sound waves through, and when the frequency hits the ring resonances, we see peaks; these are peaks in the drop port coming out. What's really nice about this platform is that, unlike many MEMS platforms, where you have release steps that are usually not compatible with other devices, here there are no release steps: the phonons are guided in that thin lithium niobate layer. The high Q of these mechanical modes shows that these mechanical resonances can be very coherent oscillators.

And so we've also worked towards integrating these with very nonlinear microwave circuits, to create strongly interacting phonons and phononic circuits. This is an example of an experiment we did over a year ago, where we have a superconducting qubit circuit with mechanical resonators made out of lithium niobate shunting the qubit capacitor to ground. Vibrations of the mechanical oscillator generate a voltage across these electrodes that couples to the qubit's voltage, so now you have an interaction between this qubit and the mechanical oscillator. We can see that in the spectrum of the qubit as we tune it across the frequency band, and we see splittings every time the qubit frequency approaches a mechanical resonance frequency. And in fact this coupling is so large that we were able to observe, for the first time, the phonon number spectrum. We can detune the qubit away from the mechanical resonance, and then there is a dispersive shift on the qubit which is proportional to the number of phonons; and because the phonon number is quantized, we can actually see the different phonon levels in the qubit spectrum.
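The detuned regime described here is the standard dispersive limit of circuit QED (a sketch of the textbook Hamiltonian, not an expression quoted in the talk):

$$ H/\hbar \;\approx\; \tfrac{1}{2}\,\omega_{q}\,\sigma_{z} \;+\; \omega_{m}\,a^{\dagger}a \;+\; \chi\,\sigma_{z}\,a^{\dagger}a, \qquad \chi \;\simeq\; \frac{g^{2}}{\Delta}, $$

valid for detuning Δ = ω_q − ω_m ≫ g. The qubit line associated with phonon Fock state |n⟩ is shifted by 2χn, so when χ exceeds the relevant linewidths, the individual phonon levels become resolvable in the qubit spectrum, which is what the experiment observed.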
Moving forward, we've been trying to understand what the sources of loss are in the system. We've been able to do this by fabricating very large arrays of these mechanical oscillators and looking at things like their quality factor versus frequency. This is an example of a measurement that shows a jump in the quality factor when we enter the frequency band where we expect the phononic band gap of this periodic material. In principle, if loss were only due to clamping, only due to acoustic waves leaking out of the ends, then the quality factor should go to essentially infinity; the losses should be exponentially suppressed with length. But it's not, and that means we're actually limited by other loss channels. And we've been able to determine that these are two-level systems in the lithium niobate, by looking at the temperature dependence of these losses and seeing that they fit very well the standard models that exist for the effects of two-level systems on microwave and mechanical resonances. We've also started experimenting with different materials. In fact, we've been able to see that, for example, going to lithium niobate doped with magnesium oxide significantly reduces the effect of the two-level systems. This is a really exciting direction of research that we're pursuing: understanding these materials. So with that, I'd like to thank the sponsors: NTT Research, and of course a lot of this work was funded by DARPA, ONR, ARO, DOE, very generous funding from the David and Lucile Packard Foundation, and others that are shown here. So thank you.

Published Date : Sep 24 2020


The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL


 

Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "The Shortest Path to Vertica – Best Practices for Data Warehouse Migration and ETL." I'm Jeff Healey, I lead Vertica marketing, and I'll be your host for this breakout session. Joining me today are Marco Gessner and Mauricio Felicia, Vertica product engineers joining us from the EMEA region. Before we begin, I encourage you to submit questions or comments during the virtual session; you don't have to wait, just type your question or comment in the question box below the slides and click Submit. As always, there will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session; our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to view on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Over to you, Marco.

>> Hello everybody, this is Marco speaking, a sales engineer from EMEA, and I'll just get going. This is the agenda: part one will be done by me, part two will be done by Mauricio. The agenda is, as you can see: big bang or piece by piece; the migration of the DDL; migration of the physical data model; migration of ETL and BI functionality; what to do with stored procedures; what to do with any possible existing user-defined functions; and migration of the data, which will be covered by Mauricio. Mauricio, do you want to say a few words?

>> Yeah, hello everybody, my name is Mauricio Felicia, and I'm a Vertica pre-sales engineer like Marco. I'm going to talk about how to optimize data warehouses using some specific Vertica techniques like table flattening and live aggregate projections. So let me start with a quick overview of the data warehouse migration process we are going to talk about today. Normally we suggest starting by migrating the current data warehouse, with its old DDLs, with limited or minimal changes in the overall architecture. Clearly we will have to port the DDL and redirect the data access tools to the new platform, but we should minimize the amount of changes in this initial phase, in order to go live as soon as possible. In the second phase we can start optimizing the data warehouse, again with no or minimal changes in the architecture as such. During this optimization phase we can create, for example, projections for some specific queries, optimize encodings, or change some of the resource pools; this is something that we normally do if and when needed. And finally, again if and when needed, we go through the architectural redesign of the operations, using the full set of Vertica techniques, in order to take advantage of all the features we have in Vertica. This is normally an iterative approach, so we go back to learn some of the specific features before moving back to the architecture and design. We'll go through this process in the next few slides.

>> OK. In order to encourage everyone to keep using their common sense when migrating to a new database management system (people are often afraid of it), it's useful to use the analogy of a house move.
In your old home, you might have developed solutions for your everyday life that make perfect sense there. For example, if your old Saint Bernard dog can't walk anymore, you might be using a forklift to heave him in through your window. Well, in the new home, consider the elevator, and don't complain that the window is too small to fit the dog through. It's very much the same when moving to Vertica. And to make the transition gentle (again, I love to remain in my analogy with the house move), picture your new house as your new holiday home: begin to install everything you miss and everything you like from your old home, and once you have everything you need in your new house, you can shut down the old one. So move piece by piece, and go for quick wins to make your audience happy. You do big bang only if they are going to retire the platform you are sitting on, when you're really on a sinking ship. Otherwise, again: identify quick wins, implement and publish them quickly in Vertica, reap the benefits, enjoy the applause, and use the gained reputation for further funding. And if you find that nobody's using the old platform anymore, you can shut it down. Go big bang in one go only if you absolutely have to; otherwise migrate by subject area, grouping all similar, clearly divided areas.

Having said that, you start off by migrating objects, objects in the database; that's one of the very first steps. It consists of migrating first the places where you can put the other objects into, that is, owners and locations, which is usually schemas. Then you extract the tables and views, convert the object definitions, and deploy them to Vertica. And keep in mind that you shouldn't do it manually: never type what you can generate, automate whatever you can. For users and roles, there is usually a system table in the old database that contains all the roles; you can export those to a file, reformat them, and then you have CREATE ROLE and CREATE USER scripts that you can apply to Vertica. If LDAP or Active Directory was used for authentication in the old database, Vertica supports anything within the LDAP standard.
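The kind of generated script Marco describes might look like this; a minimal sketch with made-up role and user names, not the output of any specific tool:

-- Roles reconstructed from the old system catalog:
CREATE ROLE analyst;
CREATE ROLE etl_writer;
-- Users, with roles granted and a default role set:
CREATE USER jsmith IDENTIFIED BY '********';
GRANT analyst TO jsmith;
ALTER USER jsmith DEFAULT ROLE analyst;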
Catalogs and schemas should be relatively straightforward, with maybe sometimes a difference: Vertica does not restrict you by defining a schema as a collection of all objects owned by a user, but it supports it; it emulates it for old times' sake. Vertica does not need the catalog, and if you absolutely need the catalog for the old tools that you use, it is always set to the name of the database in the case of Vertica. Having now the schemas, the catalogs, and the users and roles in place, move on to the data definition language of the tables themselves. If you are allowed to, it's best to use a tool that translates the data types in the DDL it generates. You might have seen a mention of the ODB tool several times in this presentation; we are very happy to have it. It can actually export the old database's table definitions because it works over ODBC: it gets what the old database's ODBC driver translates to ODBC, and then it has internal translation tables for several target schema flavors of several target DBMSs, the most important of which is obviously Vertica. If they force you to use something else, there are always tools like SQL*Plus in Oracle, the SHOW TABLE command in Teradata, etc.; each DBMS should have a set of tools to extract the object definitions to be deployed in another instance of the same DBMS.

If I talk about views: usually you'll find the view definition in the old database catalog as well. One thing that needs a bit of special care is synonyms; these have to get emulated in different ways depending on the specific needs, for example with an alias on the view or table to be referred to. Something that is really neat, but that other databases don't have, is the search path. It works very much like the PATH environment variable in Windows or Linux: you specify an object name without the schema name, and it is searched first in the first entry of the search path, then in the second, then in the third, which makes synonyms hugely, completely unneeded when you generate DDL.

We remain in the analogy of moving house: dust and clean your stuff before placing it in the new house. If you see a table like the one here at the bottom, this is usually the corpse of a bad migration in the past already: an ID is usually an integer, not a floating-point data type; a first name hardly ever has 256 characters; and if a column is called HIRE_DT, it's not necessarily needed to store the second when somebody was hired. So take good care, while you are moving, to dust off your stuff and use better data types. The same applies especially to strings. How many bytes does a string of four euro signs contain? It's not four; it's actually 12. In UTF-8, the way that Vertica encodes strings, ASCII characters take one byte, but the euro sign takes three. That means that when you have a single-byte character set at the source, you very often have to pay attention and oversize the strings first, because otherwise data gets rejected or truncated, and then you will have to very carefully check what the best sizing is. The most promising approach is to initially dimension strings in multiples of the original length; again, ODB, with the command you see there, will double the length of what would otherwise be single-byte characters, and multiply by four the length of characters that are wide characters in traditional databases. Then load a representative sample of your source data, profile it using the tools that we personally use to find the actual longest values, and then make the data types shorter where you can.

You might be talking about the issues of having too-long and too-big data types when it comes to projection design; we live and die with our projections. You might remember the rules on how default projections come to exist. The way we do it initially would be, just like for the profiling: load a representative sample of the data, collect a representative set of already-known queries, and run the Vertica Database Designer. And you don't have to decide immediately; you can always amend things. Otherwise, follow the laws of physics: avoid moving data back and forth across nodes, avoid heavy I/O, and if you can, design your projections initially by hand. Encoding matters: you know that the Database Designer is a very tight-fisted thing; it optimizes to use as little space as possible. You will have to think of the fact that if you compress very well, you might end up using more time reading. This is a test we ran once using several encoding types, and you can see that RLE (run-length encoding), if sorted, is not even visible, while the others are considerably slower. You can download the slides and look at them in detail; I won't go into detail here. You'll now hear about BI migrations.
Usually you can expect 80% of everything to be able to be lifted and shifted. You don't need most of the pre-aggregated tables, because we have live aggregate projections. Many BI tools have specialized query objects for the dimensions and the facts, and we have the possibility to use flattened tables, which are going to be talked about later; those you might have to rewrite by hand. You may be able to switch off caching, because Vertica speeds up everything, and with live aggregate projections, if you have worked with MOLAP cubes before, you very probably won't need them at all.

ETL tools: what you will have to do is, if you load row by row into the old database, consider changing everything to very big transactions; and if you use INSERT statements with parameter markers, consider writing to named pipes and using Vertica's COPY command instead of inserts. Yeah, the COPY command; that's what I have here.
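As a concrete sketch of that pattern (hypothetical table, file, and reject-path names; only the options shown are standard COPY syntax):

-- One big COPY instead of many single-row INSERTs:
COPY store.sales_fact
FROM '/data/exports/sales_fact.csv'
DELIMITER '|'
NULL ''
REJECTED DATA '/data/rejects/sales_fact.rej'
DIRECT;
-- The FROM path can also be a named pipe that the ETL tool writes into,
-- so the data streams into Vertica without an intermediate file.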
As to custom functionality: you can see on this slide that Vertica has the biggest number of functions in the database; we compare them regularly, and it is by far ahead of any other database. You might find that many of the functions you have written won't be needed in the new database, so look at the Vertica catalog instead of trying to migrate a function that you don't need. Stored procedures are very often used in the old database to overcome shortcomings that Vertica doesn't have. Very rarely will you have to actually write a procedure that involves a loop; in our experience, very, very rarely. Usually you can just switch to standard scripting. And this is basically repeating what Mauricio said, so in the interest of time I will skip this. But look at this one here: most of a data warehouse migration should be automatic. You can automate DDL migration using ODB, which is crucial. Data profiling is not crucial, but game-changing; the encoding is the same thing, and you can automate it using our Database Designer. The physical data model optimization in general is game-changing: you have the Database Designer, use it. For the provisioning, use the old platform's tools to generate the SQL. Having no objects without their owners is crucial. And as to functions and procedures, they are only crucial if they depict the company's intellectual property; otherwise you can almost always replace them with something else. That's it from me for now.

>> Thank you, Marco. So we will now continue our presentation, talking about some of the Vertica optimization techniques that we can implement in order to improve the general efficiency of the data warehouse. Let me start with a few simple messages. The first one is that you are supposed to optimize only if and when it is needed: in most cases, just a little lift-and-shift from the old data warehouse to Vertica will give you exactly the performance you were looking for, or even better, so in those cases it's probably not really needed to optimize anything. In case you want to optimize, or you need to, then keep in mind some of the Vertica peculiarities. For example: implement deletes and updates in the Vertica way; use live aggregate projections in order to avoid, or better, to limit, the GROUP BY executions at run time; use flattening in order to avoid or limit joins; and then you can also use some specific Vertica extensions, for example time series analysis or machine learning, on top of your data.

We will now start by reviewing the first of these bullets: optimize if and when needed. Well, if, when you migrate from the old data warehouse to Vertica without any optimization, the performance level is already okay, then probably all you need is the migration itself. But if this is not the case, one very easy optimization technique is to ask Vertica itself to optimize the physical data model, using the Vertica Database Designer. DBD, the Vertica Database Designer, has several interfaces; here I'm going to use what we call the DBD programmatic API, so basically SQL functions. With other databases you might need to hire experts to look at your data, your data warehouse, and your table definitions, creating indexes or whatever; in Vertica, all you need is to run something as simple as six single SQL statements to get a very well-optimized physical data model. You see that we start by creating a new design; then we add to the design the tables and the queries that we want to optimize; we set our target (in this case we are tuning the physical data model in order to maximize query performance; other possible goals would be to tune in order to reduce storage, or a mix between storage and queries); and finally we ask Vertica to produce and deploy the optimized design. In a matter of literally minutes, what you can get is a fully optimized physical data model. This is something very, very easy to implement.
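For reference, the six-statement flow can look roughly like this; a sketch based on Vertica's documented DESIGNER_* functions, with made-up design, schema, and file names (exact argument lists may vary by version):

SELECT DESIGNER_CREATE_DESIGN('dw_design');
SELECT DESIGNER_ADD_DESIGN_TABLES('dw_design', 'public.*');
SELECT DESIGNER_ADD_DESIGN_QUERIES('dw_design', '/home/dbadmin/queries.sql', TRUE);
SELECT DESIGNER_SET_OPTIMIZATION_OBJECTIVE('dw_design', 'QUERY');  -- or 'LOAD' / 'BALANCED'
SELECT DESIGNER_RUN_POPULATE_DESIGN_AND_DEPLOY('dw_design', '/tmp/dw_design.sql', '/tmp/dw_deploy.sql');
SELECT DESIGNER_DROP_DESIGN('dw_design');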
Keep in mind some of the Vertica peculiarities. Vertica is very well tuned for load and query operations, and it writes ROS containers to disk. A ROS container is a group of files, and we will never, ever change the content of these files. The fact that ROS container files are never modified is one of the Vertica peculiarities, and this approach allows Vertica to use minimal locks. We can have multiple load operations in parallel against the very same table, assuming we don't have a primary or unique key constraint on the target table, because they will end up in different ROS containers. A SELECT in read committed mode requires no locks at all and can run concurrently with an INSERT...SELECT, because the SELECT will work on a snapshot of the catalog taken when the transaction starts; this is what we call snapshot isolation. And recovery, because we never change our ROS files, is very simple and robust. So we get a huge amount of advantages from the fact that we never change the content of the ROS containers; but on the other side, deletes and updates require a little attention.

So, what about delete first? When you delete in Vertica, you basically create a new object called a delete vector, which appears a bit later in ROS or in memory, and this vector will point to the data being deleted, so that when a query is executed, Vertica will just ignore the rows listed in the delete vector. And it's not just about deletes: an update in Vertica consists of two operations, a delete and an insert, and a merge consists of either an insert or an update, which in turn is made of a delete plus an insert. So basically, if we tune how the delete works, we will also have tuned the update and the merge. What should we do in order to optimize deletes? Well, remember what we said: every time we delete, we actually create a new object, a delete vector. So avoid committing deletes and updates too often; this reduces the work for the mergeout and the other cleanup activities that are run afterwards. Be sure that all the interested projections contain the columns used in the delete predicate: this will let Vertica directly access the projection, without having to go through the super projection in order to create the delete vector, and the delete will be much, much faster. And finally, another very interesting optimization technique is trying to segregate the update and delete operations from the querying workload in order to reduce lock contention, and this can be done using partition operations. That is exactly what I want to talk about now.

Here you have a typical data warehouse architecture. We have data arriving in a landing zone, where the data is loaded from the data sources; then we have a transformation layer writing into a staging area, which in turn feeds the partitioned blocks of data in the green data structures we have at the end. Those green data structures at the end are the ones used by the data access tools when they run their queries. Sometimes we might need to change old data, for example because we have late records, or maybe because we want to fix some errors that originated in the feeds. What we do in this case is copy the partition we want to change or adjust from the green query area at the end back to the staging area; this is a very fast operation. Then we run our updates, or our adjustment procedure, or whatever we need in order to fix the errors in the data, in the staging area; and at the very same time, people continue to query the green data structures at the end, so we never have contention between the two operations. When the update in the staging area is completed, all we have to do is run a swap partition between the tables, in order to swap the data that we just finished adjusting in the staging zone into the query area, the green one at the end. This swap partition is very fast, it's an atomic operation, and basically what happens is just that we exchange the pointers to the data. This is a very, very effective technique, and a lot of customers use it.
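A sketch of that copy-fix-swap cycle, with hypothetical table names and a hypothetical month partition key (the two built-ins are real Vertica functions):

-- 1. Copy the partition to be corrected into the staging table:
SELECT COPY_PARTITIONS_TO_TABLE('sales_fact', '2020-02', '2020-02', 'sales_fact_staging');
-- 2. Run the updates against sales_fact_staging while queries keep hitting sales_fact.
-- 3. Atomically exchange the corrected partition back:
SELECT SWAP_PARTITIONS_BETWEEN_TABLES('sales_fact_staging', '2020-02', '2020-02', 'sales_fact');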
So, why flattened tables and live aggregate projections? Basically, we use flattened tables and live aggregate projections to minimize or avoid joins (this is what flattened tables are used for) or GROUP BYs (this is what live aggregate projections are used for). Now, compared to traditional data warehouses, Vertica can store, process, aggregate, and join orders of magnitude more data; it is a true columnar database, so joins and GROUP BYs normally are not a problem at all, and they run faster than in any traditional data warehouse. But there are still scenarios where the datasets are so big (we are talking about petabytes of data) and queried so intensively that we need something to boost GROUP BY and join performance. This is why you can use live aggregate projections, to perform aggregations at load time and limit the need for GROUP BYs at query time, and flattened tables, to combine information from different entities at load time and, again, avoid running joins at query time.

OK, so: live aggregate projections. At this point in time, we can build live aggregate projections using four built-in aggregate functions, which are SUM, MIN, MAX, and COUNT. Let's see how this works. Suppose that you have a normal table; in this case we have a table unit_sold with three columns, pid, dtime, and quantity, which has been segmented in a given way. On top of this base table (we call it the anchor table) we create a projection, using a SELECT that will aggregate the data: we take the pid, the date portion of the time, and the sum of quantity from the base table, grouping on the first two columns, so pid and the date portion of dtime. OK, what happens in this case when we load data into the base table? All we have to do is load data into the base table. When we do, we will of course fill the projections; assuming we are running with K-safety 1, we will have two projections, and we will load the data into those two projections with all the detailed data we are loading into the table, so pid, dtime, and quantity. But at the very same time, without having to do any particular operation or run any ETL procedure, we will also automatically get, in the live aggregate projection, the data pre-aggregated, with pid, the date portion of dtime, and the sum of quantity in the column total_quantity. You see, this is something that we get for free, without having to run any specific procedure, and this is very, very efficient. The key concept is that during the loading operation, which from the DML point of view is executed against the base table, we do not explicitly aggregate the data and we don't run any ETL procedure: the aggregation is automatic, and Vertica will bring the data into the live aggregate projection every time we load into the base table. You can see the two SELECTs we have on the left side of this slide: those two SELECTs will produce exactly the same result. Running SELECT pid, date, SUM(quantity) against the base table, or running SELECT * from the live aggregate projection, will result in exactly the same data. This is of course very useful, but what is much more useful (and we can observe this if we run an EXPLAIN) is that if we run the SELECT against the base table asking for the grouped data, what happens behind the scenes is that Vertica sees there is a live aggregate projection with the data that has already been aggregated during the loading phase, and rewrites your query to use the live aggregate projection. This happens automatically. You see, this is a query that ran a GROUP BY against unit_sold, and Vertica decided to rewrite the query as something to be executed against the live aggregate projection, because this saves a huge amount of time and effort at query time. And it is not just limited to the information you want to aggregate: for example, with another query like a SELECT COUNT, you might note that the count can be computed better too; basically, our GROUP BYs will also take advantage of the live aggregate projection, and again, this is something that happens automatically. You don't have to do anything to get this.
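The unit_sold example might be written like this; a sketch following the pattern described above (column names assumed, and details like segmentation and K-safety clauses omitted):

CREATE TABLE unit_sold (
    pid      INT,
    dtime    TIMESTAMP,
    quantity INT
);

-- Live aggregate projection: the SUM is maintained automatically at load time.
CREATE PROJECTION total_qty_per_day AS
SELECT pid,
       dtime::DATE AS sale_date,
       SUM(quantity) AS total_quantity
FROM unit_sold
GROUP BY pid, dtime::DATE;

-- These two queries return the same data; the first is rewritten to use the projection.
SELECT pid, dtime::DATE, SUM(quantity) FROM unit_sold GROUP BY pid, dtime::DATE;
SELECT * FROM total_qty_per_day;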
One thing that we have to keep very, very clear in mind is that what we store in the live aggregate projection is partially aggregated data. In this example we have two INSERTs: you see that the first insert enters four rows, and the second insert enters five rows. For each of these inserts we will have a partial aggregation: you never know, after the first insert, whether there will be a second one, so Vertica calculates the aggregation of the data every time you run an insert. This is a key concept, and it also means that you can maximize the effectiveness of this technique by inserting large chunks of data. If you insert data row by row, this technique, the live aggregate projection, is not very useful, because for every row that you insert you will have an aggregation, so the live aggregate projection will end up containing the same number of rows that you have in the base table. But if you insert a large chunk of data every time, the number of aggregations that you will have in the live aggregate structure is much smaller than the base data; this is a key concept. You can see how this works by counting the number of rows that you have in the live aggregate projection: if you run the SELECT COUNT(*) from the live aggregate projection (the query on the left side), you get four rows, but if you EXPLAIN this query, you will see that it was reading six rows. This is because each of those two inserts actually inserted a few rows, three rows each, into the live aggregate projection. So this is the key concept: live aggregate projections keep partially aggregated data, and the final aggregation will always happen at run time.

Another structure which is very similar to the live aggregate projection is what we call the top-K projection. We actually do not aggregate anything in the top-K projection: we just keep the last rows, or limit the amount of rows that we collect, using the LIMIT ... OVER (PARTITION BY ... ORDER BY ...) clause. In this case, on top of the base table, we create two top-K projections: one to keep the last quantity that has been sold, and the other one to keep the max quantity. In both cases it is just a matter of ordering the data, in the first case using the dtime column, in the second case using quantity, and in both cases we fill the projection with just the last row. Again, this is something that happens automatically when we insert data into the base table. If, after the insert, we run our SELECT against either the max quantity or the last quantity, we get just those very last values; you see that we have much fewer rows in the top-K projections.
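The two top-K projections described can be sketched like this (same assumed unit_sold columns):

-- Keep, per product, the most recent sale:
CREATE PROJECTION last_qty AS
SELECT pid, dtime, quantity
FROM unit_sold
LIMIT 1 OVER (PARTITION BY pid ORDER BY dtime DESC);

-- Keep, per product, the largest sale:
CREATE PROJECTION max_qty AS
SELECT pid, dtime, quantity
FROM unit_sold
LIMIT 1 OVER (PARTITION BY pid ORDER BY quantity DESC);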
We said at the beginning that we can basically use four built-in functions; you might remember MIN, MAX, SUM, and COUNT. What if I want to create my own specific aggregation on top of this? Our customers ask for this, because they have very specific needs in terms of live aggregate projections. Well, in this case you can code your own live aggregate projection user-defined functions: you can create a user-defined transform function to implement any sort of complex aggregation while loading data. After you have implemented these UDTFs, you can deploy them using either the pre-pass approach, which basically means the data is aggregated at loading time, during the data ingestion, or the batch approach, which means the aggregation runs afterwards on top of the loaded data. Things to remember on live aggregate projections: they are limited to the built-in functions (again SUM, MAX, MIN, and COUNT), but you can code your own UDTFs, so you can do whatever you want; they can reference only one table; and with Vertica versions before 9.3 it was impossible to update or delete on the anchor table. This limit has been removed in 9.3, so you now can update and delete data from the anchor table. A live aggregate projection will follow the segmentation of the GROUP BY expression, and in some cases the optimizer can decide to pick the live aggregate projection or not, depending on whether the aggregation is consistent or not. Remember that if we insert and commit every single row to the anchor table, then we will end up with a live aggregate projection that contains exactly the same number of rows; in that case, using the live aggregate projection or the base table would be the same.

OK, so this was the first of the two fantastic techniques that we can implement in Vertica: the live aggregate projection, which is basically there to avoid or limit GROUP BYs. The other one, which we are going to talk about now, is the flattened table, and we use it in order to avoid the need for joins. Remember that Vertica is very fast at running joins, but when we scale up to petabytes of data we need a boost, and this is what we have in order to get this problem fixed, regardless of the amount of data we are dealing with. So, what about flattened tables? Let me start with normalized schemas. Everybody knows what a normalized schema is, so there is no unrelated stuff in this slide: the main scope of a normalized schema is to reduce data redundancy, and the fact that we reduce data redundancy is a good thing, because we obtain fast and small writes; we only have to write small chunks of data into the right tables. The problem with these normalized schemas is that when you run your queries, you have to put together the information that arrives from the different tables, and that requires running joins. Again, Vertica normally is very good at running joins, but sometimes the amount of data makes it not easy to deal with them, and joins are sometimes not easy to tune. What happens in a normal, let's say traditional, data warehouse is that we denormalize the schemas, either manually or using an ETL. So basically we have on one side, on the left side of this slide, the normalized schemas, where we can get very fast writes, and on the other side the wide table, where we have run all the joins and pre-aggregations in order to prepare the data for the queries. So we will have fast writes on the left, fast reads on the right, and the problem is in the middle, because we push all the complexity into the middle, into the ETL that has to transform the normalized schema into the wide table. The way we normally implement this, either manually using procedures or using an ETL, is that we have to code an ETL layer that runs the INSERT...SELECT that reads from the normalized schema and writes into the wide table at the end, the one that is used by the data access tools to run the queries. This approach is costly, because of course someone has to code the ETL, and slow, because someone has to execute those batches, normally overnight after loading the data, and maybe someone has to check the following morning that everything went okay with the batch. It is resource-intensive, and it is also human-being-intensive, because of the people who have to code and check the results. It is error-prone, because it can fail, and it introduces a latency, because there is a gap on the time axis between the time t0, when you load the data into the normalized schema, and the time t1, when the data is finally ready to be queried.

What Vertica does to facilitate this process is let you create a flattened table. With the flattened table, first, you avoid data redundancy, because you don't need the wide table alongside the normalized schema on the left side; second, it is fully automatic: you just insert the data into the anchor table, and the ETL that you would have coded is transformed into an insert-select by Vertica automatically. You don't have to do anything.
It's also robust, and the latency is zero: as soon as you load the data into the anchor table, you get all the joins executed for you. So let's have a look at how it works. In this case we have the table we are going to flatten, and basically we have to focus on two different clauses. You see that there is one column here, dimension value 1, which can be defined with either DEFAULT followed by a SELECT, or with SET USING. The difference between DEFAULT and SET USING is when the data is populated: if we use DEFAULT, the data is populated as soon as we load the data into the base table; if we use SET USING, we will have to refresh. But everything is there: you don't need an ETL, you don't need to code any transformation, because everything is in the table definition itself, it comes for free, and of course with DEFAULT the latency is zero, because as soon as you load the other columns, you will have the dimension value as well.

OK, let's see an example. Suppose we have a dimension table, the customer dimension, on the left side, and we have a fact table on the right. You see that the fact table uses columns like o_name or o_city, which are basically the result of a SELECT on top of the customer dimension. This is where the join is executed: as soon as we load data into the fact table, directly into the fact table, without of course loading the data that arrives from the dimension, all the data from the dimension will be populated automatically. So let's have an example here. Suppose that we are running this INSERT: as you can see, we are running the insert directly into the fact table, and we are loading o_id, customer_id, and total. We are not loading name or city; those will be automatically populated by Vertica for you, because of the definition of the flattened table. You see, that's all you need in order to have your wide table, your flattened table, built for you, and this means that at run time you won't need any join between the base fact table and the customer dimension that we used in order to calculate name and city, because the data is already there. This was using DEFAULT; the other option is using SET USING. The concept is absolutely the same: you see that in this case, on the right side, we have basically replaced o_name DEFAULT with o_name SET USING, and the same is true for city. The concept is the same, but in this case, with SET USING, we have to refresh: you see that we have to run this SELECT REFRESH_COLUMNS with the name of the table (in this case all columns will be refreshed, or you can specify only certain columns), and this will bring in the values for name and city, reading from the customer dimension. So this technique is extremely useful. Just to summarize the most important difference between DEFAULT and SET USING: DEFAULT populates your target when you load, SET USING when you refresh. And in some cases you might need to use them both: in this example here, we define o_name using both DEFAULT and SET USING, and this means that we have the data populated either when we load the data into the base table or when we run the refresh.
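A sketch of the flattened fact table discussed above, with hypothetical names: o_name filled at load time via DEFAULT, o_city filled on refresh via SET USING:

CREATE TABLE customer_dim (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(64),
    city        VARCHAR(64)
);

CREATE TABLE fact_orders (
    o_id        INT,
    customer_id INT,
    total       NUMERIC(12,2),
    o_name VARCHAR(64) DEFAULT
        (SELECT name FROM customer_dim c WHERE c.customer_id = fact_orders.customer_id),
    o_city VARCHAR(64) SET USING
        (SELECT city FROM customer_dim c WHERE c.customer_id = fact_orders.customer_id)
);

-- o_name is populated by this insert; o_city stays NULL until the refresh:
INSERT INTO fact_orders (o_id, customer_id, total) VALUES (1, 42, 99.90);
SELECT REFRESH_COLUMNS('fact_orders', 'o_city', 'REBUILD');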
This is a summary of the techniques that we can implement in Vertica in order to make our data warehouses even more efficient. And well, basically, this is the end of our presentation. Thank you for listening, and now we are ready for the Q&A session.

Published Date : Mar 30 2020


Dan Havens, Acronis | Acronis Global Cyber Summit 2019



>> From Miami Beach, Florida, it's theCUBE, covering the Acronis Global Cyber Summit 2019, brought to you by Acronis. >> Okay, welcome back everyone, theCUBE's coverage, two days here in Miami Beach at the Fontainebleau hotel for the Acronis Global Cyber Summit 2019. It's the inaugural event around a new emerging category called cyber protection. Um, this is a wave that's going to be part of the modernization wave we've been calling cloud 2.0, or whatever you want to call it: a complete modernization of the IT technology stack and development environment that includes the core data center, the edge and beyond. Our next guest is Dan Havens, chief growth officer at Acronis. Dan, thanks for coming on. >> Appreciate it. And thank you for having me. >> Dan, so, uh, what does chief growth officer mean? You guys obviously are growing, so obviously we see some growth there. Yeah, the numbers are there. >> We have a couple of divisions in the company where we see we can really accelerate the business. >> So we came in and we wanted to make some large investments here. One of those areas was sports. You're seeing race cars out here on the floor, you're seeing all kinds of baseball teams, soccer teams, and we're talking to everybody. We have 40 teams now that are using our technology for competitive advantage on the field. Uh, the other area is OEM, so, uh, original equipment manufacturers: everybody from making a camera to a server somewhere, having Acronis embedded, that's a big angle for us, and we just didn't have a lot of focus there. So I came in to build those divisions. I've actually joined the CEO before, in a prior life at his last company, and did something similar for him back there, and we had wild success. So yeah, it's been a lot of fun. I've been here a year and a half and we're killing it. >> We've got triple-digit growth in the sporting category and similar in OEM. >> It's interesting, you know, I look at a lot of these growth companies and there's kind of a formula. You guys have a very efficient and strong product platform engineering group, a lot of developers, a lot of smart people in the company, and a strong customer-facing, for lack of a better word, field group, the group you're in, you're involved with, and you've got marketing supporting it in the middle. Yep. A nice, efficient organizational structure riding a massive wave. But cyber, because this isn't your grandfather's data protection, this is a platform. What's the pitch? >> So the key here for us is, and it's hard to simplify, but we're easy. In fact, we're cost-effective. Sometimes I'll even say I'm cheap and I'm easy, and that does not go out of style for an enterprise, right? So our ability to take good old-fashioned backup and these things that other people need and basically extend that across. Now I can have one window where I can control, keep 'em out if somebody gets in, from the inside, or a disaster happens; I, from this one place, can recover my data, I'm secure with my data, I have the ability to notarize my data. And by the way, one key simple interface. Customers love simple. One simple interface to be able to do that. Now, it takes a lot of engineering behind that. I have fancy engineering degrees and all that, but I try to forget that when I'm talking to a customer, because at the end of the day it's got to make sense. A mind that doesn't know says no. 
>> And I think we do a pretty good job of simplifying the message, but as they get under the covers and they roll it out, they recognize that, you know, we have more engineers per employee than any company with 1600 employees. >> Simple, easy to use, reducing the steps it takes to do something: that's a winning business model. You kind of come from that school. You mentioned, you know, cheap and easy; that's what's key. Yeah. But we're in a world where complexity is increasing and costs are increasing. Yep. These are two dynamics facing every enterprise, cyber, IT, everywhere. What's your story when you want to educate that person so they can get to that "yes, I want to work with you guys"? What does that process, that motion, look like? >> So the beautiful part is, we sell software. Right now, software can be purchased complex: you install it, you configure it, you do everything yourself. We also can sell that from a cloud standpoint, so now you consume it like a service, just like you consume Netflix at home, right? I can now consume this protection as a service. You have both ends of the spectrum covered. Most enterprises are somewhere in the middle; we call that hybrid. So the idea here is that there are going to be components where this data's not leaving these four walls (it might be a government agency, it might be some compliance factor), but the ability to say yes anywhere on that spectrum makes it very easy for an executive to say okay. And as you leverage the cloud, the on-ramp for this can be as simple as turning on the service and pointing it at a data source. >> I mean, you're a student of history, obviously, been in this business for a while, longer than you'd think. Data protection was kind of like that afterthought: backup, data recovery, all based upon, yeah, we might have an outage or a flood, or Hurricane Sandy, who knows what's going to happen, some force majeure out there might happen. But security is a constant disrupter of business continuity. Data is being hijacked in ransomware and malware attacks. This is a major disruption point for a world that was supposed to be a non-disruptive operational value proposition. >> Yeah, so the world has changed. >> It went from a niche, "well, we've got the architecture, throw in backup," to, you've got to think about it from day one, at the beginning. This seems to be your story for the company: you think about security from the beginning, with data protection. There's only one club in the bag, so to speak. Talk about that dynamic and how that's translating into your customer storytelling, your customer engagements. >> You used an interesting word at the beginning: disaster recovery. Years ago, I started in the tech industry in 1992, right? Disaster recovery was, we're going to have a flood or a hurricane, or the building's going to burn down. What we find with most of our customers is, that certainly happens, but that's not the driver. The driver now is somebody after my data, because the world has changed. Not only has the amount of data we're collecting changed, but the ability to illegally monetize somebody else's data has become reality, and you have social media that socializes it if you get breached and so forth. So there are a number of drivers. Number one, I don't want to be turned out of business. Number two, I don't want to be ransomed. Then number three, I certainly don't want to be on the cover of the Wall Street Journal tomorrow morning as the top executive who looked past data protection. We literally watch brands, I won't mention the brand now, but a very large Fortune 1000 was called out just yesterday. We see it every few days, and we watch the carnage of their brand get diluted because they weren't protected. So I think it's the perfect storm. I've got a ton of data, so it's coming in from all directions. Secondly, I'm concerned about, you know, my brand and being able to protect that data. And then, you know, what do I do? And the disaster in this case is not necessarily flood or fire; it's that somebody from the inside or outside got in. 
>> Pretend that I'm a decision maker. I'm like, my head's exploding, I've got all this carnage going on, I don't want to get fired. Yet I know I'm exposed; nothing's happened yet. Maybe I settled the ransomware thing, but I know I'm not in a good place. What's your pitch to me? What's in it for me? Tell me the posture. >> Well, we're halfway home if you say "I know I'm not in a good place," right? Because oftentimes somebody has to get bit first, or they have to see their neighbor get bit first, and then they say, hey, come in. One of my first plays would be, let's find out what place you really are in. I can do that very quickly with an assessment: we can gather your systems, we can get a sense for where your data is, where it's flowing from, what are you doing, what are you doing to protect it. We typically will come back, and there are going to be spots where there are blind spots. Sometimes they're fully naked, right? But the good news is, now we know the problem, so let's not waste any time. You can get on board in baby steps, or, you know, we can band-aid it, or we can really go into full surgery, however you want to move forward. But the idea is recognizing this has to be addressed, because it's a beast. Every single device that's out there on the floor, in any enterprise, any company, is a way in. >> And a POC is critical for your business model. You want to give them a little candy taste, show the value quickly. How does a POC get structured, an assessment? You come in on a narrow entry, nail something quick, get a win. What's the playbook? >> I love POCs, because we're so fast and easy. Meaning, oftentimes you do POCs because you're complex software and you're trying to prove your point and so forth. I love to push a POC because I can do it inside of days, and I get the customer to take the drive. It's just like on the car lot: if I get you to drive it down the block, you're not bringing it back, you're bringing it home to the neighbors. Right? That is the case with our software, and our hit rate is high, but again, it's because it's straightforward and it's easy. So though most sales cycles don't push for a pilot, I can't wait to get a pilot, and we don't need 30 days; we do it in a couple of days. They're going to recognize, I can do this too. >> You have a good track record with POCs? >> If I get, and this is going to be the most conceited statement on the planet, you might have to edit this out: if I get an audience, I will win. And if I get the audience, they will look, and this is why we use the sports teams. Sports teams are the cool kids using this. And if I get an executive to say, "What are you guys doing with the Red Sox?", if I can get him or her to look, it's game over. >> Hey, being badass and having some swagger, 
it's actually a good thing if you've got the goods to back it up. >> That's the fun piece here: the product works well, and it's not this massive mountain to hurdle. It is, we can get started today and take bites as we go. >> But you mentioned sports; let's get into that talk track. We have been covering sports data for six years now on theCUBE in San Francisco. We were briefly talking about it last night at the reception, but I think sports teams encapsulate probably the most acute use case of digital transformation, because they have multiple theaters that are exploding. They've got to run their business, they've got a team to manage, and they've got the fan experience and their consumers. So you've got consumerization of IT, you've got security of your customers, from either a physical venue, where a potential terrorist disaster could happen, to just using analytics for competitive advantage with the Moneyball model, to whatever. Sports really encapsulates what I call the poster child of bringing digital into a business model that works. You've been successful with sports. We interviewed Brian Shield yesterday. >> Yup. >> Red Sox, vice president of technology. He was very candid. He's like, look, we use analytics, it helps us get a competitive edge, not going to tell you the secrets, but we have other issues that people aren't thinking about: drone strikes while the game's going on, potential terrorist attacks, crowds gathering, you know, adding an esports stadium to Fenway Park. They have a digital business model integrating in real time with a very successful consumer product and business in sports. This has been a good market for you guys. What's been the secret to success? >> Explosive market. A couple of things. First off, you summarized it well: sports teams are looking for competitive advantage, so anything that can come in under that guise is going to get some attention. Plus data: fan data, system data, ticket data. Um, in baseball, they're studying every single pitch a pitcher has ever thrown. 
They have video on everything. This is heavy-lift data, right? So, a place to put it, save money, a place to protect it, a place to access it, so that all of my scouts that are out in the field with a mobile device have the ability to upload or evaluate a player while they're still out on the field somewhere, maybe in another country. And then add the added caveat, and our sexiest piece, and that's artificial intelligence. You mentioned Moneyball, right? Uh, the entire concept of statistics came out in the Moneyball concept, and you know, we all saw the movie and we all read the book, but at the end of the day, this is the next step to that, which is not just written-down statistics. Now we can analyze data with machine learning, and we have unique baseball examples where there's absolutely no doubt they have the data. It's the ability to say, how do I turn that into being more competitive? On the racing side, we're actually working with teams improving, changing the car on the track during the race, using our software. In fact, we always look forward to opportunities where somebody says, hey, come in and talk about that, because it's incredibly sexy to see. Um, but sports are fun because, first off, they're the cool kids. Secondly, they're early adopters, if it's going to give competitive advantage. And third, they hit all the vectors: tons of data, have to protect it. >> And their business model's digital too. So the digital transformation is in prime time. We cannot ignore the fact that people want wifi. They've got Instagram, Facebook, all of these; they're all conscious of social media. All of these sports clubs, they have to be hip, right? >> And being out front like that, think about the data they have coming in. And so, not just to be smart on the field; they have to be smart with their customer. They're competing for that customer with the other major sports, or whatever else. In our case, being fashionable, being hip, is cool for the product, but now you think about how they run their business. They've got suppliers, um, that have data, and trusting suppliers with data is a difficult protection formula. They've got national security issues they have to protect against. Protecting is a big part, but first off, there are these archives of data, from 20 races ago, or, this pitcher pitched three years ago and I have a thousand of his pitches and I'm looking for tells. That is, that's mission critical. But also, to boot, you have just plain business functions, where, I'm a team and I have a huge telco sponsor, and we are shifting back and forth, designing what their actual collateral is going to be in the stadium. They're actually using Acronis to be able to do that up in the cloud, where they can both collaborate on it: not only doing it, but being able to protect it that way. It's more efficient for them. >> It's interesting. I asked Brian Shield this question, I asked how does baseball flex into digital, the business model of digital, with the success of the physical product, their actual product, baseball. And he said an interesting thing. He's like, the ROI models just get whacked out, because what's the ROI of an investment in technology? It used to be total cost of ownership, 
the classic calculation under the iceberg: whatever model you had, you used that. Now it's, we don't use that; we think about other consequences, like a terrorist attack. >> That's right. >> So the business model, the ROI calculation, is shifting. Do you have those kinds of conversations with some of these big teams, these sports teams? Because, you know, they win the World Series, their brand franchise goes up; if they win the national championship, whatever their goal is, it has real franchise value, and there are numbers on that. But there's also the risk of, say, an attack or some sort of breach. >> Well, I won't mention the teams by name, but I have half a dozen teams right now, and two that are actually rolling out, that are doing facial recognition just for security as fans enter their stadium. So they are taking ownership of the safety of their fans to the level of doing visual or facial recognition coming into their stadium. Obviously the archive to measure against is important, and we can archive that, but they're also using artificial intelligence for that. So you're absolutely right: they owe their fans a safe experience, and not only a safe experience but a good experience and so forth. And we love to be associated, whenever we can, with wins and losses. But to your point, how do you show a TCO on a disaster nobody wants? And by the way, we've seen enough of that to know it's looming. >> And there's also the supply chain too. I can buy a hotdog and a beer from Aramark, which is the Red Sox's, let's say, supplier; that's not owned by the Red Sox, they have a relationship. But my data's in there. I'm a consumer of the Red Sox, I'm procuring, you know, some food or service from a vendor. >> Yeah, yeah. >> My data's out there. 
So who protects that? >> Well, these are unique questions that come up all the time. Again, that's a business decision for the customer. The idea is, with cloud collaboration, it's technically quite easy, but again, they have to decide where they're going to commingle their data, how they're going to share. But the idea here is, again, back to the spectrum: fully cloud and accessible, or locked down, airtight, the government scenario. The bottom line is you get to pick where you want to be on that spectrum, and there are going to be times, like my example of talking to the telco vendor, where we're actually going to share our data together, and it's going to make us faster, make a quicker return, and design this collateral for our stadium faster. Those are business decisions, but they're allowed, because Acronis can be as hybrid as you need it to be along the way. And again, that resonates with an executive. They never want to be wearing handcuffs, and they don't want to overpay for stuff and not use it. And if you decide to consume cloud, you just pay as you go. It's like your electricity bill. >> All right, so the Red Sox are a customer of you guys, they use your service. What other sports teams have you engaged with, who are you talking to? Give us a taste of some of the examples. >> So in Europe, we have a couple of Formula One teams: we have Racing Point, we have the Williams team. In Formula E we have Techeetah, the Dragon team, we have Venturi, we also have NIO. So we have a good presence in the racing clubs. We have a ton of World Rally cars, and motorcycle motocross and so forth. Then you step over into European football. So we started in cars and recognized, this is hot. So then we got our first European team, and we had Arsenal. As a matter of fact, we have one of the legends here signing with us today. And you know, I mean, they're rock stars, right? People follow them. Anyway, so we have Arsenal, and we did Man City, um, and we just landed Liverpool, just did that this quarter, two weeks ago. I literally, the ink is still drying. Um, and then you move into the United States, where I brought the circus to town on January 1, 2019. First one was the Boston Red Sox. We quickly followed that up; you'll see us on the home-run fence at the San Diego Padres. Both bought for different reasons, but both very sexy reasons. >> So what were the main drivers? >> In the case of the Boston Red Sox, it was a heavy lift on video, a lot on the protection side. Um, San Diego was file sync and share, so the example I was giving of being able to share with your largest telco vendor, or with your largest investor slash sponsor for your stadium; that was the driver. Now, what's funny about both is that as they get started, it's always expanding, right? So we have the baseball teams; we did land this quarter the Dallas Stars, so that's our first hockey club. My goal is to try to get a couple in each of the main four categories and then some of the subs, just because you get the cool kids, you get a tipping point; everybody then wants to know what's going on. I have a hundred in play, 
and so we typically try to qualify regionally where it makes sense. Um, we're, you know, very close with a team here in the region. >> So from the successes you've had, the implementations, what's been the feedback from the customers? >> So here's the final thing, and it sounds like I'm just dripping with sales-guy stuff, and I apologize, warning signs, okay: if they use it, we're home free. So when you get Brian or any one of these guys that are using it, all I have to do is make sure that a new customer hears this person, who has no reason to say anything else, and just expose them to it. Because it's this unknown, scary thing that we're trying to protect against, and being able to do that, and have the freedom of how aggressive, or, you know, whatever metaphor I'm going to cover that with. And then also, obviously, the economics work: you pay as you go. It's, you know, it's a good story. >> Well, Dan, congratulations on the success. Great to see you guys really digging in, getting those POCs and being successful. We're watching your growth. Final question for you: with all the data and the patterns that you see across all of your customers, what's the number one reason why Acronis is selected, and why you win? >> I think that's an interesting question, and I think it's a couple of reasons. Number one, we work, and we're easy. We have an enormous footprint, so there's a lot to reference from. Many people have already used us on the consumer side, so we're safe. That's one reason. I would also tell you, however, that we have a great ecosystem, because Acronis is different from most software companies. Most software companies have a huge outside sales force that sells direct to the customer. At Acronis, everybody here is a partner. We sell through a service provider, to a channel member, through an ISV, um, and then we have some direct enterprise. But the idea is there's a variety of solutions that can be baked on this foundation, and I think people like that variety. They like the freedom of, I'm not just trapped with this one thing; I can buy it and all options are available. And I will tell you, in IT nobody wants to be locked down. Everybody wants options, safety in numbers. They want their data protected with the whole cyber protection lens, and they know everything's changing; every six months something's different. And I don't want to be handcuffed to my desk; I want all options available. I think that's our best value proposition. >> All right, Dan, thanks for coming on. Dan Havens, chief growth officer at Acronis, here at the Acronis Global Cyber Summit. I'm John Furrier. Stay tuned for more CUBE coverage after this short break.

Published Date : Oct 15 2019

Pat Gelsinger | VMworld 2013



(upbeat music) >> Hey welcome back to VMWorld 2013. This is theCUBE, flagship program. We go out to the events to extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE. I'm joined with David Vilante, my co-host from Wikibon.org and we're kicking off today with an awesome interview. CEO of VMWare, Pat Gelsinger, CUBE Alumni. Been on the theCUBE with Dave and I multiple times. So many times. You are in like the leaderboards. So in terms of overall guest frequency, you've been up there, but also you're also the top dog at VMWare and great to see you again. How are you feeling? >> Thank you, thank you. Good morning, guys. >> Pleasure. >> Good to see you. >> So what's new? I mean obviously you're running the show here. You're running around. Last night you were at the NetApp event. You ran through CIO, R&D. You got to go out and touch all the bases out here. >> Yeah, yeah. >> What does that look like? What have you done and obviously, you did, the key note was awesome. What else is going on? >> You know, everything, you know, VMWorld is just, it's just overwhelming, right? I mean 23,000 people almost. I mean you know the amount of activities around that and it really has become the infrastructure event for the industry and you know, if you're anything related to infrastructure, right, what's going on, right in the enterprise side of IT, you got to be here, right? And there's parties everywhere. Every vendor has their events. Every you know, different particular technology area, a bunch of the things that we're doing, and of course to me, it's just delightful that I can go touch as many people and you know, they get excited to see the CEO. I have no idea why, but hey I get to show up. It's good. >> You've been in the industry for a long time. Obviously you've seen all the movies before and we've talked about the seas of change in the EMC world when you were there, but we had two guests on yesterday that were notable. Steve Herrod who's now a venture capitalist at Generalcatalyst and Jerry Chen who's a VC at Graylock, and we have a 10-year run here at VMWare which is esteemed by convention, but the first five years were a lot different than the last five years, and certainly, the last year you were at the helm. So what's changed in the past 24 months? A lot of stuff has certainly evolved, right? So the Nicira acquisition certainly changed up, changed everything, right? You saw software-defined data center now come into focus this year, but really, just about less than 24 months, a massive kind of change. What, how do you view all that? How do you talk to your employees and the customers about that change? >> Well you know, as we think about the software-defined data center vision, right, it is a broad comprehensive powerful vision for rearchitecting how the data center is operated, how customers take advantage of it. You know and the results and the agility and efficiency that comes from that. And obviously the Nicira acquisition is sort of the shot heard 'round the world as the really, "Okay, these guys are really serious "about making that happen." And it changes every aspect of the data center in that regard. You know and this year's VMWorld is really, I'll say, putting the beef on the bones, right? We talked about the vision, we talked about each of the four legs of it: compute, networking, storage and management of automation. So this year it's really putting the beef on the bones and the NSX announcement, putting substance behind it. 
The vSAN announcement, putting substance behind it. The continuing progress of management and automation. And I think everything that we've seen here in the customer conversations, the ecosystem of partner conversations are SDDC is real. Now get started. >> Can you, I think you've had some fundamental assumptions in that scenario, particularly around x86 in the service business. Essentially if I understand it, you've said that x86 will dominate that space. You're expecting status quo in the sense that it will continue to go in the cadence of you know, cores and Moore's Law curve even though we know that's changing. But that essentially will stay as is and it's the other parts, the networking and the storage piece that you're really, where you define conventions. Is that right? >> Yeah certainly we expect a continuing momentum by the x86 by Intel in that space, but as you go think about software-defined everything in the data center really is taking the power of that same core engine and applying it to these other areas because when we say software-defined networking, right, you need a very high packet flow capability and that's running a software on x86. We need to talk about data services running in software, right? You need high performance. It's snapshots, file systems, etc. running on software, no longer bound to you know physical array. So it really is taking that same power, that same formula right, and applying it to the rest of the elements of the data center and yeah, we're betting big right, that that engine will continue and that we'll be successful in being able to deliver that value in this software layer running on that core powerful Silicon engine. >> So Pat, so obviously when you came on board, the first thing you did was say, "Hey, the pricing. "I want to change some things." Hyper-Visor's always been kind of this debate. Everyone always debates about what to do with Hyper-Visor. But still, virtualization's still the enabling technology so you know, you kind of had this point where the ball's moving down the field and all of a sudden, in 2012, it changed significantly, and that was a lot in part with your vision with infrastructure. As infrastructure gets commoditized, what is going to change in the IT infrastructure and for service providers, and the value chains that's going to be disrupted? Obviously economics are changing. What specifically is virtualization going to do next with software defined that's going to be enabling that technology? >> Yeah, you know and I, you know, we're not out to commoditize. We're out to enable innovation. We're out to enable agility, right, and then the course of that, it changes what you expect and what the underlying hardware does. But you know, it's enabling that ecosystem of innovation is what we're about and customers to get value from that and as you go look at these new areas, "Hey, you know, we're changing how you do networking." Right, all of a sudden, we're going to create a virtual network overlay that has all of these services associated with it that are proficient just like VMs in seconds. We're creating a new layer of how storage is going to be enabled. You know, this policy-driven capability. Taking those capabilities that before were tightly bound to hardware, delivering it through the software layer, enabling this new magnificent level of automation and yesterday's demo with Carl. I mean Carl does a great CTO impersonation, doesn't he? And he's getting some celebrity action. He's like, "I got the bottle." >> Oh yeah. 
>> Steve Herrod gave him a thumbs up too. >> Yes, yeah Steve gave him a good job. But you know, so all of those pieces coming together, right, is you know, really, and you know, just the customer and the ecosystem response here at the show has been, "Oh, you know, right, "SDDC, it's not some crazy thing out there in the future. "This is something I can start realizing value for now." >> Well it's coming into focus. It's not 100% clear for a lot of the customers because they're still getting into the cloud and the hybrid cloud, I call it the halfway house to kind of a fully evolved IT environment, but you know. How do you define? >> No it is the endgame. Hyper cloud is not a halfway house. What are you talking about? What are you talking about? >> To to full all-utility computing. That is ultimately what we're saying. >> Halfway house? >> I don't mean it that way. (group laughs) >> Help me. >> Okay next question. >> (chuckles) When you're in a hole, stop digging, buddy. >> So how do you define the total adjusted mark at 50 billion that Carl talked about? >> Yeah you know, as we looked at that, we said across the three things, right that we said, software-defined data center, 28 billion dollars; hyper cloud, 14 billion; eight billion for the end-user computing; that's just 50 billion opportunity. But even there, I think that dramatically understates the market opportunity. IT overall is $1.7 trillion, right? The communications, the services, outsourcing, etc. And actually the piece that we're talking about is really the underpinnings for a much larger set of impact in the part of what applications are going to be developed, how services are delivered, how consumers and businesses are able to take advantage of IT. So yes, that's the $50 billion. We'll give you the math, we'll show you all the details of Gartner's and IDC's to support it. But to us, the vision and the impact that we're out for is far more dramatic than that would even imply. >> Well that's good news because we said to Carl, "It's good that your market cap is bigger than--" (Pat laughs) >> Oh yeah your TAM is bigger than your market cap. Well okay now we-- >> Yeah, that's nice, yeah. Yeah, we're out to fix the market cap. >> Yeah he said, "Now we got to get the 50 billion. So I'm glad to hear there's upside to the TAM. But I wanted to ask you about the ecosystem conversation. When you talk about getting things like you know, software defined network and software defined source, what's the discourse like in ecosystem? For guys like, let's take the storage side. EMC, NetApp last night, they say, "Hey you know, software defined storage. "We really like that, but we want to be in that business." so what, talk about that discussion. >> Yeah, clearly every piece of software defined, whether it's software defined storage, software defined data services, software defined security services or networking, every piece of that has ecosystem implications along the way. But if you go talk to a NetApp or a EMC, they'd say, "You're an appliance vendor." And they would quickly respond and say, "No, our value's in software, "and we happen to deliver it as an appliance." And we'd say, "Great, let's start delivering "the software value as a software appliance "through virtualization and through the software delivery "mechanisms that we're talking about for this new platform." 
Now each one of them has to adjust their product strategies, their, you know, business strategies to enable those software components, right, independent of their hardware elements for full execution and embodiment into the software-defined data center feature. But for the most part, every one of them is saying, "Yes, now how do we figure out how to get there, "and how do we decompose our value, embody it it in new ways "and how can we enable that in "this new software-defined data center vision?" >> And they've always done that with software companies. I mean certainly Microsoft and Oracle have always grabbed a piece of the storage stack and put it into their own, but it's been very narrow, within their own spaces, and of course, VMWare is running any application anywhere. So it's more of a general purpose platform. >> Absolutely. >> Is it a tricker fit for the ecosystem to figure out where that white space is? >> Absolutely. Every one of them has to figure out their strategy. If you're F5, you know, I was with John McAdam this morning. "Okay, how do I take my value?" And you would very quickly say, "Hey, our value's in software. "We deliver it as mostly as appliances, "but how do we shift, you know, your checkpoint?" Okay, you know, they're already, right, you know, our largest software value or Riverbed, you know, the various software vendors and security as well. Each one of them are having to rethink their strategies and the context of software define. Our customers are saying, "Wow, this is powerful. "The agility and the benefits that I get from it, "they're driving them to go there." >> So what's the key to giving them confidence? Is it transparency? You're sharing roadmaps during integration? >> Yes, yes, yes. >> Anything else? Am I missing anything there? >> You know, also how we work with them and go to market as well. You know, they're expecting from us that, okay, "you know, if this is one of our accounts, "come in and work with us on those accounts as well." So we do have to be transparent. We have to the APIs and enable them to do integration. We have to work with them in terms of enabling their innovation and the context of this platform that we're building. But as we work along the way, we're getting good responses to that. >> Pat, how do you look at the application market? Now with end-user computing, you guys are picking that up. You got Sanjay Poonen coming in and obviously mobile and cloud, we talked about this before on theCUBE, but core IT has always been enabling kind of the infrastructure and then you get what you get from what you have in IT. Now the shift is, application is coming from outside IT. Business units and outside from partners, whether they're resellers. How do you view that tsunami of apps coming in that need infrastructure on demand or horizontally scalable at will? >> Yeah so first point is, yes, right, we do see that, you know, as infrastructure becomes more agile and more self provisioned, right, more aligned to the requirements of applications, we do see that it becomes a tsunami of new applications. We're also working very hard to enable IT to be the friend of the line of business. No longer seen as a barrier, but really seen as a friend, partner enabler of what they're trying to do because many of the, you know, line of businesses have been finding way. You know, how do I get around the slow-moving IT? 
Well we want to make IT fast-moving and enabling to meet their security, governance, SLA requirements while they're also enabling these powerful new applications to emerge and that to us is what infrastructure is all about for the future is enabling, you know, businesses to move at the speed of business and not have infrastructure being a limiter and as we're doing things, you know, like the big data announcements that we did, enabling infrastructure that's more agility, you see us do more things in the AppDev area over time, and enabling the management tools to integrate more effectively to those environments. Self-service portals that are enabling that and obviously with guys like Sanjay in our mobile initiative, yeah that's a big step up. Don't you like Sanjay? He's a great addition to the team. >> Yeah Sanjay's awesome. He's been great and he has done a lot on the mobile side. Obviously that is something that the end users want. >> That's an interesting way that I put him into that business group first. (group chuckles) >> Well on the Flash side, so under the hood, right? So we look under the hood. You got big data on the dashboard. Everyone's driving this car to the new future of IT. Under the hood, you got Flash. That's changing storage a bit and certainly reconfiguring what a DaaS is and NaaS and SaaS and obviously you talked about vSAN in your key note. What is happening, in your vision, with compute? I mean obviously as you have more and more apps hitting IT, coming in outside core IT but having to be managed by core IT, does that change the computing paradigm? Does it make it more distributed, more software? I mean how do you look at that 'cause that's changing the configuration of say the compute architecture. >> Sure and I mean a couple of things, if you think about the show here that we've done, two of them in particular in this space, one is vSAN, right? A vSAN is creating converged infrastructure that includes storage. Why do you do that? Well now you have storage, you know, apps are about data, right? Apps need data to operate on so now we've created an integrated storage tier that essentially presents an integrated application environment in converged infrastructure. That changes the game. We talked about the Hadoop extension. It changes how you think about these big data applications. Also the Cloud Foundry announcement. Right on/off premise of PaaS layer to uniquely enable applications and as they've done that on the PaaS layer, boy, you don't have to think about the infrastructure requirements to deploy that on or off premise or increasingly as I forecast for the future, hybrid applications, born in the hybrid, not born in the cloud, but born in the hybrid cloud applications that truly put the stuff that belongs on premise on premise, puts the stuff that belongs on the cloud in the cloud, right and enables them to fundamentally work together in a secure operational manner. >> So the apps are dictating through the infrastructure basically on demand resources, and essentially combine all that. >> Absolutely. Right. The infrastructure says, "Here's the services "that I have already, right, in catalogs "that you can immediately take advantage of, "and if this, you fit inside "of these catalogs, you're done." It's self-provisions from that point on and we've automated the operations and everything to go against that. >> So that concept of "born in the hybrid" is a good one. So obviously that's your sweet spot. You're going from a position. 
>> Yeah and this stupid halfway house hybrid comment. I mean I've never heard something so idiotic before. >> One person, yeah. (group chuckles) >> I don't know, it was probably an Andreessen comment or something, I don't know. (group chuckles) >> He's done good for himself, Marc Andreessen. >> Google and Amazon are obviously going to have a harder time with that, you know, born in the hybrid. What about Microsoft? They got a good shot at born in the hybrid, don't they? >> Yeah, you know and I think I've said the four companies that I think have a real shot to be you know, very large significant players for public cloud infrastructure services. You know, clearly Amazon, you know Google, they have a large, substantive very creative company. Yeah Microsoft, they have a large position. Azure, what they've done with Hyper-V and ourselves, and I think that those, you know the two that sort of have the natural assets to participate in the hybrid space are us and Microsoft at that level, and obviously you know we think we have lots of advantages versus Microsoft. We think we're miles ahead of them and SDDC, right, we think the seamlessness and the compatibility that we're building with one software stack, not two. It's not Azure and Hyper-V. It is SDDC in the cloud and on premise that that gives us significant advantages and then we're going to build these value rate of services on top of it, you know, as we announced with Desktop as a Service, Cloud Foundry as a Service, DR as a service. We're going to quickly build that stack of capabilities. That just gives substantial value to enterprise customers. >> So I got to ask you, talk about hybrid since you brought it up again. So software defined data center software. So what happens to the data center, the actual physical data center? You mentioned about the museum. I mean what is it going to look like? I mean right now there's still power and cooling. You're going to have utility competing with cloud resources on demand. People are still going to run data centers. >> You're talking about the facility? >> Yeah, the actual facility. I'm still going to have servers. This will be an on premise. Do you see that, how do you see that phasing out to hybrid? What does that look like physically for someone to manage? Just to get power, facility management, all that stuff. >> Yeah and in many ways, I think here, the you know, the cloud guys, Googles and Amazons and Yahoos and Facebooks have actually led the way in doing some pretty creative work. These things become you know, highly standardized, highly modularized, highly scalable, you know, very few number of admins per server ratio. As we go forward, these become very automated factories, right, of cloud execution. Some of those will be on premise. Some of those will be off premise. But for the most part, they'll look the same, right, in how they operate and our vision for software defined data center is that software layer is taking away the complexity, right, of what operates underneath it. You know, they'll be standardized, they'll be modularized. You plug in power, you plug in cooling, you plug in network, right, and these things will operate. >> Basically efficient down to the bone. >> Yeah. >> Fully operated software. >> Yeah and you know, people will decide what they put in their private cloud, you know, based on business requirements. SLAs, you know, privacy requirements, data governance requirements, right? 
I mean in Europe, got to be on premise in these locations and then they'll say, "Put stuff in the public cloud "that allows me to burst effectively. "Maybe a DR because I don't do that real well. Or these applications that belongs in the cloud, right because it's distributive in nature, but keep the data on premise. You know, and really treat it as a menu of options to optimize the business requirements between capex to opex, regulatory requirements, scale requirements, expertise, mission critical and all of those things then are delivered by a sustainable position. Not some stupid hybrid halfway house. A sustainable position that optimizes against the business requirements that they have. >> Let me take one of those points, SLA. Everybody likes to attack Amazon and its SLAs, but in many regards. >> Yeah, I'm glad I got your attention. >> Yeah, that's good, we're going to come back to that John. (group chuckles) >> In my head right now. >> I don't think we're done with that talk track. (laughs) So it's easy to attack Amazon and SLAs, but in essence, the SLA is, to the degree of risk that you're willing to take and put on paper at scale. So how transparent will you be with your SLAs with the hybrid cloud and you know, will they exceed what Amazon and Google have been willing and HP for that matter have been willing to promise at scale? >> Oh yeah, absolutely. I mean we're going to be transparent. The SLAs will have real teeth associated with them, you know, real business consequences for lack of execution against them. You know, they will be highly transparent. You know, we're going to have true, we're going to measure these things and you know, provide uptime commitments, etc. against them. That's what an enterprise service is expected, right? At the end of the day, that's what enterprises demand, right? When you pick up the phone and need support, you get it, right. And in our, the VMWare support is legendary. I'm just delighted by the support services that we offer and the customer response to those is, "Hey you fixed my problem even when "it wasn't your problem and make it work." And that's what enterprise customers want because that's what they have to turn around and commit back to their businesses against all of the other things as well. You know, regulatory requirements, audit requirements, all of those types of things. That's what being an enterprise provider is all about. >> John wants to get that. Talk about public cloud. (Pat laughs) >> I want to talk about OpenStack because you guys are big behind OpenStack. You talk about it as a market expansion. Internally what are some of the development conversations and sales conversations with customers around OpenStack instead of status, what's it doing, how you guys are looking at that and getting involved? >> Yeah, you know, we've clearly said you know, that you have to think about OpenStack in the proper way. 
OpenStack is a framework for building clouds, and you know, for people who are wanting to build their own cloud as opposed to get the free package cloud, right, you know, this is our strategy to enable those APIs, to give our components to those customers to help them go build it, right and those customers, largely are service providers, internet providers who have unique scale, integration and other requirements and we're finding that it's a good market expansion opportunity for us to put our components in those areas, contribute to the open source projects where we truly have IP and can differentiate for it like at the Hyper-Visor level, like at the right networking layer and it's actually going pretty well. You know, in our Q2 earnings call, you might recall, you know, I talked about that our business with the public OpenStack customers was growing faster than the rest of our business. That's pretty significant, right, to say, "Wow, if it's growing faster, "that says the strategy is working." Right, and we are seeing a good response there and clearly we want to communicate. We're going to continue that strategy going forward. >> And the installed base of virtualization is obviously impressive and the question I want to ask you is how do you see the evolution of the IT worker? I mean they have the old model, DBA, system admins, and then now you have data science on the big data side so with software defined data center, the virtualization team seems to be the center point for that. What roles do you see changing with hybrid cloud and software defined data center and user computing? >> Well I think sort of the theme of our conference is defy convention. Right and why do we do that? Because we really see that the, you know, the virtual admin and the virtual infrastructure that they have really become the center of IT. Now we need the competence of networking, the security guys, the database guys, but that now has to happen in the context, right, of a virtualized environment. DBA doesn't get to control his unique infrastructure. The Hadoop guy doesn't get his own unique infrastructure. They're all just workloads that run on this virtualized infrastructure that is increasingly adept and adaptable, right, to these different workload areas and that's what we see going forward as we reach into these new areas and the virtual admin, he has to go make best buddies with the networking guy and say, "Let me talk to you about virtual networking "and how we're going to cross between the virtual overlay "domain and the physical domain and how these things "are going to stitch together for making your job better "right, and delivering a better solution "for our line of business and for our customers." >> One thing you did to defy convention is get on stage with Marc Andreessen. So I want to talk about that a little bit. You guys had I would call it, you know, slight disagreements and, into the future. >> Just a little. >> But I thought you were kind to him. And he said, you know, "No startup that I work with "is going to buy any servers." And I thought you were going to add, no never mind. I won't even go there. (group laughs) I won't even go there, I want to be friends. No so talk about that a little bit, that discussion that you had. Your view of the world and Marc's. How do you respond to that statement? Do they grow up into VMWare customers? Is that the obvious answer? >> I mean I have a lot of regard. 
You know, Marc and I have known each other for probably close to two decades now and you know, we partnered and sparred together for a long time and he's a smart, successful guy and I appreciate his opinions. You know, but he takes a very narrow view, right, of a venture seed fund, right, who is optimizing cashflow, and why would they spend capital on cashflow when they can go get it as a service? That's exactly the right thing for a very early stage startup company to do in most cases, right? Marc driving his customers to do that makes a lot of sense, but at the end of the day, right, if you want to reach into enterprise customers, you got to deliver enterprise services, right? You got to be able to scale these things. You got to be cost-effective at these things and then all the other aspects of governance, SLAs, etc. that we already talked about. So in that view, I think Marc's view is very perspective. >> Also Zynga and those guys, when they grew up on Amazon, they went right to bare metals as soon as they started scale. >> They had to bring it back in right 'cause they needed the SLAs, they needed the cost structures. They wanted to have the controls of some of those applications. >> And rental is more expensive at the end of the day. >> There you go. Somebody's got to pay the margins, right, you know, on top of that, to the providers so you know, I appreciate the perspective, but to me it is very narrow and periconchal to that point of view and I think the industry is much broader and things like policy and regulation are going to take decades, right? Not years, you know, multiple decades for these things to change and roll out to enable us a mostly public cloud world ever, right, and that's why I say I think the hybrid is not a waystation, right? It is the right balance point that gives customers flexibility to meet their business demands across the range of things and Marc and I obviously, we're quite in disagreement over that particular point. >> And John once again, Nick Carr missed the mark. We made a lot of money. >> I think Marc Andreessen wants to put a lot of money into that book. Everyone could be the next Facebook where you you know, you build your own and I think that's not a reality in enterprise. They kind of want to be like Facebook-like applications, but I wanted to ask you about automation. So we talked to a lot of customers here in theCUBE and we all asked them a question. Automation orchestration's at the top of the stack. They all want it, but they all say they have different processes and you really can't have a general purpose software approach. So Dave and I were commenting last night when we got back after the NetApp event was you know, you and Paul Murray were talking in 2010 around this hardened top when you introduced that stack and with infrastructure as a service, is there a hardened top where functionality is more important than which hardware you buy so you can enable some of those service catalogs, some of those agility features in automation because every customer will have a different process to be automated. >> Yeah. >> And how do you do that without human intervention? So where is that hardened top now? I mean is it platform as a service or is it still at the infrastructure as a service model? >> Yeah, I think clearly the line between infrastructure as a service and platform as a service will blur, right, and you know, it's not really clear where you can quite draw that line. 
Also as we make infrastructure more application aware, right, and have more application development services associated with it, that line will blur even more. So I think it's going to be hard to call, you know, "Here's that simple line associated with it." We'd also argue that in this world, customers have heterogeneous tools that they need to work with. Some will have bought in a big way into some of the legacy tools, and as much as we're going to try to help them move past some of those brittle environments, well, that takes a long time as well. I'd also say that you know, it's the age of APIs, not UIs, and for us it's very much about exposing our value through programmatic interfaces so customers truly can have the flexibility to integrate those and have more choice, even as we're trying to build a more deeply integrated and automated stack that meets a general set of needs for customers. >> So that begs the question: at the top of the stack, where end user computing's going to sit and you're going to advance that piece, what's the to-do item for you? What needs to happen there? On a scale of one to 10, 10 being fully baked out, where is it? What are the white spaces that need to be tweaked, either by partners or by VMWare? >> Yeah, and I think we're pretty quickly finishing the stack with regard to the traditional PC environments, and I think the amount of work to do for the mobile environment is still quite enormous as we go forward, and in that, you know, we're excited about Horizon getting some good uptake, a number of partner announcements this week, but there's a lot to be done in that space, because people want to be able to secure apps, provision apps, deprovision apps, have secure workspaces, social experiences, a rich range of integrations to the authentication devices associated with it, to be able to have applications that are developed in that environment that access this hybrid infrastructure effectively over time, be able to self-compose those applications, put them into enterprise, right, stores and operations, be able to access this big data infrastructure. There's a whole lot of work to be done in that space, and I think that'll keep us busy for quite a number of years. >> This is great. We're here with Pat Gelsinger inside theCUBE. We could keep rolling until we get to the hook, but a couple more final questions. The analogy for cloud has always been the grid, electricity. You kind of hinted at this earlier. I mean is that a fair comparison? The electricity's kind of clean and stable. We have an actual national grid. It doesn't have bad data and hackers coming through it, so is that a fair view of cloud, to talk about plugging IT into the wall like electricity? >> I think that is so trite, right? It came up in the panel we had with Andreessen, Bechtolsheim, Graeme, and myself, because you know, it's so standardized. 120 volts AC, right, and hey, you know, maybe it gets distributed as 440 three-phase, but you know, it is so standardized. It hasn't moved. Socket standards, right, you're done. Think how fast this cloud world is evolving. Right, the line between IaaS and PaaS, as we just touched upon, the services that are being offered on top of it. >> Security, security. >> Yeah, yeah, all these different things.
To me, it is such a trite, simple analogy that has become so used and abused in the process that I think it leads people to such wrong conclusions, right, about what we're doing and the innovation that's going on here and the potential that we're going to offer. So I hope that every one of our competitors takes that and says, "That's the right model." Because I think it leads them to exactly the wrong conclusion. >> I couldn't agree more. The big switch is a big myth. I wanted to get tactical for a minute. I listened to your conference calls. I can't wait to read the transcript. I just go, I've got to listen to the calls, but just observing those and the conversations around here, I just wanted to ask you. I always ask CEOs, "What keeps you up at night?" They always say execution, so let's focus on execution in the next 12 to 18 months. I came up with the following: "To maintain dominance in vSphere, get revenue beyond vSphere, broaden end user license agreements, increase end user computing adoption, and proof points around hybrid cloud." Are those the big ones? Did I miss anything? >> That's a good list. >> Yeah? >> That's a good list. >> So those are the things an observer should watch in, let's say, 12 to 18 months as indicators of success and of what you're doing and what you're driving. >> Yeah, and you know, clearly inside of that, with SDDC, obviously we think it's this environment for networking, right, and whether we've really, I'll say, delivered that. That would be one in particular inside of that category that we would call out. You know, with regard to our hybrid cloud strategy, it's clearly globalizing that platform. Right, we announced Savvis here, but we need to make this available on a global basis. You go to an enterprise customer and they're going to say, "I need services in Japan, I need services in Singapore. I need to be able to operate on a global basis." So clearly having a platform, building out the services on top of it, is another key aspect of building those hybrid use cases and more of the value on top of it, and then in the EUC space, we touched a bit on the mobile thing already. >> So we'll have Martin on later, but his PowerPoint demonstration. >> What a rockstar, what a rockstar. >> He is a rockstar and we've had him on before. He's fantastic, but his PowerPoint demonstration is very simple, made it seem so simple. It's not going to be that easy to virtualize the network. Can you talk about the headwinds there and the challenges that you have and the things that you have to do to actually make progress there and really move the needle? >> Yeah, it really sort of boils down to two aspects. One is, we are suggesting that there will be a software layer for networking that is far more scalable, agile and robust than you can do in a physical networking layer. That's a pretty tall order, right? I need to be able to scale to tens, hundreds, millions of VMs, right? I need to be able to scale to terabytes of cross-sectional packet flow through this. I need to be able to deliver services on top of this, right, that truly allow firewalls, load balancers, right, IDSes, all of those things to be agile, to scale. Yeah, it is ambitious. >> Ambitious. >> This is, right, the most radical architectural statement in networking in the last 20 or 30 years, and that's what gets Martin passionate.
So there's a lot of technical scale, and we really feel good about what we've done, right, but being able to prove that with robust scalability, right, for which, like the hypervisor, it is more reliable than hardware today, being able to make that same statement about NSX, that just like ESX, it is better than hardware, right, in terms of its reliability, its resilience. That's an important thing for us to accomplish technically in that space, but then the other piece is showing customer value, right? Getting those early customers, and what a powerful picture. GE, Citigroup and eBay, right? It's like, wow, right? These are massive customers, right, and being able to prove the value and the use cases in the customer settings, right, and if we do those two things, you know, we think that truly we will have accomplished something very, very special in the networking domain. >> Pat, talk about the innovation strategy. You've now got a year under your belt at VMWare, and you were obviously with EMC and Intel, and we mentioned on theCUBE many times that the cadence of Moore's Law was kind of the culture of Intel. Why don't you tell us about the innovation strategy of VMWare going forward, your vision, but also talk about the culture, and talk about the one thing that VMWare has from a culture standpoint that makes it unique. What is that unique feature of the VMWare culture? >> We spent time as a team talking about what it is that drives our innovation, that drives our passion, and clearly as we've talked about our values as a team, it is very much about this passion for technology and passion for customers, and how those two come together, right, with fundamental disruptive "wow" kinds of technologies, where people just say, like they did when they first used ESX, "Wow, I just didn't ever envision that you could possibly do that." And that's the experience that we want to deliver over and over again, right, so you know, hugely disruptive, powerful, software-driven virtualization technologies for these domains, but doing it in a way that customers just fall in love with our technologies, and you know, I got a note from Sanjay and I just asked him, "You know, what do you think of VMWorld?" And he said, right, "It is like a cult geek fest." Right, because there's just this deep passion around what people do with our technology, right, and at that point, they're not even customers, they're not partners. They are deeply aligned, passionate zealots around what we are doing to make their lives so much more powerful, so much more enabled, right, and ultimately, a lot more fun. >> People say it's like being a car buff. You know, you've got to know the engine, you want to know the speeds and feeds. It is a tech culture. >> Yeah, it is absolutely great. >> Pat, thanks for coming on theCUBE. We could spend a lot of time with you. I know we went a little over. I appreciate your time. Always great to see you. >> Great to see you too. >> Looking good. >> Thank you for that. >> Tech Athlete Pat Gelsinger touching all the bases here. We saw him last night at AT&T Park. Great event here, VMworld 2013. This is theCUBE. We'll be right back with our next guest after this short break. Pat Gelsinger, CEO on theCUBE.
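To make the overlay-versus-physical "stitching" discussed above concrete: network virtualization platforms of the NSX era commonly carried tenant traffic in an encapsulation such as VXLAN (RFC 7348), where each virtual-network Ethernet frame is wrapped with a small header and shipped across the physical network as an ordinary UDP datagram. The sketch below is a minimal illustration of that framing only; it is not VMware's implementation, and the function name and dummy frame are invented for this example.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN


def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet
    frame. The result is what a virtual switch would hand to the UDP/IP
    stack of the physical ("underlay") network.

    Header layout: 1 flags byte (0x08 = "VNI present"), 3 reserved bytes,
    a 24-bit VXLAN Network Identifier (VNI), and 1 more reserved byte.
    """
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    # Pack the flags byte, skip 3 reserved bytes, then place the VNI in
    # the top 24 bits of the final 32-bit word.
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame


# Tenant traffic on virtual network 5001 rides the underlay as plain UDP,
# so physical switches never need per-tenant MAC or VLAN state.
frame = b"\x00" * 60  # dummy inner L2 frame, for illustration only
packet = vxlan_encap(frame, vni=5001)
assert len(packet) == 68 and packet[0] == 0x08
```

Because the underlay sees only UDP flows between hypervisor endpoints, adding or moving a virtual network becomes a software operation rather than a physical reconfiguration, which is what makes claims like "more reliable than hardware" even plausible at the scale described here.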

Published Date : Aug 28 2013

